Mark Loveless, aka Simple Nomad, is a researcher and hacker. He frequently speaks at security conferences around the globe, gets quoted in the press, and has a somewhat odd perspective on security in general.

Contact Tracing Issues

Photo by Victor He on Unsplash

I thought I’d put together a few thoughts on the subject of contact tracing, what my concerns are, and a few pluses and minuses to the entire thing. The concept itself isn’t new, but with the advent of COVID-19 it is being discussed in earnest.

What Is Contact Tracing?

It is exactly what it says: a method of tracing who you have come in contact with. In the context of COVID-19 and modern electronics, contact tracing means using an app on your phone that generates unique data and broadcasts it to anyone in the immediate area running a copy of the same app. Other people do the same, broadcasting their own unique data. Each app gathers this broadcast data and keeps others' unique data as proof of contact. If you become infected, you can note this in the app, and all of the unique data you collected over the past couple of weeks is uploaded and used to notify those users of possible contact. To ensure privacy and integrity, the data needs to be cryptographically secure, devoid of personal identifiers, and protected both in storage and in transit.
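As a concrete (and very simplified) sketch of the idea, here is how broadcast identifiers derived from a daily secret could work. The key sizes, interval counts, and derivation function here are illustrative assumptions on my part, not the actual scheme any real app uses:

```python
import hmac
import hashlib
import secrets

def daily_key() -> bytes:
    # A random per-day secret; it never leaves the phone unless
    # the user later reports an infection.
    return secrets.token_bytes(16)

def rolling_id(day_key: bytes, interval: int) -> bytes:
    # Derive a short-lived broadcast identifier from the day key.
    # Observers only ever see these IDs, which can't be linked
    # back to a person without the day key itself.
    return hmac.new(day_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]

# Alice broadcasts; Bob's app stores whatever IDs it hears nearby.
alice_key = daily_key()
heard_by_bob = {rolling_id(alice_key, i) for i in range(0, 144, 7)}

# Alice tests positive and uploads her day key. Bob's app re-derives
# every possible ID for that day and checks for an intersection.
derived = {rolling_id(alice_key, i) for i in range(144)}
exposed = bool(heard_by_bob & derived)
print(exposed)  # True: Bob's phone heard Alice's phone that day
```

The point of the indirection is that the broadcast IDs are meaningless on their own; only an uploaded day key lets anyone connect them to an infection report.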

Are The Algorithms Safe?

The very first concern people have is the impact on their privacy. Are the apps safe? And just as important, are the algorithms doing all the crypto in the background working in a way that prevents tampering with the data while still protecting users' privacy?

As of this writing I have not seen any apps released here in the United States, so I can only speculate on their security. However, I have looked at the algorithms being discussed, and they look quite good on paper. If the algorithms are good, and the apps turn out to be good, that's awesome, right?

I had an entire write-up on the subject scheduled to publish next Tuesday, but then a friend of mine - Nick Steele - put out his own excellent blog post covering a decent chunk of the same material I was going to talk about. Figures; this kind of thing happens all the time in the security field. So instead I'm going to focus on my opinions about the entire process and not so much the algorithm. Nick covered what many of us were looking at: the Google and Apple joint effort. There is no app yet, and you can bet that when it is finally released I will immediately start playing with it. Until then, my opinions are just speculative, as the published details released so far could certainly change.

Bluetooth

The method for broadcasting your data - in the form of cryptographic keys - is via Bluetooth. Historically Bluetooth has had a checkered past as far as security goes, although recent versions of it are much better. I do trust Apple in this regard - in 2015 I took a deep dive into how the Apple Watch and the iPhone exchanged data using Bluetooth, and they used every single security feature Bluetooth had to offer, and it was rock solid. So I expect the usage of Bluetooth will be handled properly, with every applicable security feature enabled. My issues in this blog post are not with the security features of Bluetooth, though. They are a bit more basic than that.

Bluetooth is a wireless communications standard (composed of multiple protocols) that has a fairly small range, so at first glance it seems ideal. The max range is about 33 feet for your typical phone, and that is supposedly line-of-sight, so taking into account things like walls and other obstacles, you might not make Bluetooth contact between devices unless you are much closer. As someone who has experimented heavily with Bluetooth, I can attest that Bluetooth in the lab is one thing, and in the wild is another. There are a number of things that can throw off contact tracing just based on the limits of how well Bluetooth performs normally. So let's assume the app and its algorithms work flawlessly, and talk about how this might actually play out in the real world.

False Positives and Negatives

When you do risk assessment, security research, and general hackery for a living, you think about scenarios where things can go wrong. Let’s talk about a few scenarios:

  • Close contact but no exposure. I'm not just talking about two people wearing masks and gloves standing 10 feet apart while one of them is infected. Oh no, much more than that. I have managed to connect my phone via Bluetooth to Bluetooth devices in other cars while out driving. Okay, while a passenger. But I could certainly do it. There are other similar scenarios as well. I could be sitting in my parked car waiting for curbside delivery of my prepaid meal, and an infected person with the app on their phone could walk past my car. I could be leaning against the glass inside a Starbucks waiting for my order while an infected person with the app walks by on the sidewalk. Bingo, false positive.

  • Exposure but no phone communication. I walk up to a counter to purchase something with my phone in my pocket and the person behind the counter is infected. Their phone is also in their pocket, but they are sitting on it. Their chair and the counter itself are metal, and manage to create enough interference to impede Bluetooth communications. This creates a false negative.

  • I’m not holding my phone. I go to a shopping mall to pick up something, and I drop my phone without realizing it and walk away. An infected app user picks up my phone, and turns it into the mall’s lost and found, where I later recover it. Our phones were certainly close enough to make the Bluetooth connection, registering a false positive since I was never in close physical contact with the infected person. Just my phone was.

Malicious Behavior

Is there a way to use the app maliciously? Well, if there is a way to submit an "I am infected" alert from the app without actual proof that a test was taken, someone will probably do it just for the lulz. Let's outline a couple of ugly scenarios:

  • Multiple attacks against an individual. I’m evil and I hate Bob. I buy a bunch of burner phones, load the app up on all of them, and make sure the burner phones are carried one at a time near Bob. I swap to a new burner every week or so, maybe over the course of a couple of months. And every week or so, I trigger an “infection” and Bob is notified he was exposed. Thinking he’s been exposed every week or so could drive Bob crazy.

  • Single attack against a large number of individuals. This is also evil. A burner phone is powered on, maybe with a portable charger hooked up, and placed in a hidden location near a physical choke point where a lot of people pass by. Again, a false "infection" is triggered after I've retrieved the phone, and dozens if not a few hundred people may think they have been exposed.

Mitigation

How do we correct some of these problems? While not 100% reliable, tracking the signal strength of each Bluetooth encounter is a decent way to determine whether the phones were plausibly close enough. If the signal strength suggests contact within a range of 15-20 feet versus 3-4 feet, that could help you judge how serious the contact was. This is something Google and Apple are actually looking into.
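As a rough sketch of what that could look like, here is a toy distance estimate from received signal strength (RSSI) using a log-distance path-loss model. The reference power and path-loss exponent are assumptions and vary wildly between phone models and environments, which is exactly why this is a hint rather than a measurement (distances in meters; 1 meter is roughly 3 feet):

```python
import math

def estimate_distance(rssi_dbm: float,
                      tx_power_dbm: float = -59.0,
                      n: float = 2.0) -> float:
    # Rough distance in meters from a log-distance path-loss model.
    # tx_power_dbm is the assumed RSSI at 1 m; n is the environment
    # exponent (2.0 is free space; walls and bodies push it higher).
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))

# A strong signal suggests close contact; a weak one suggests
# distance, a wall, or a phone buried in a pocket.
close = estimate_distance(-60)   # on the order of 1 m
far = estimate_distance(-85)     # on the order of 20 m
print(round(close, 1), round(far, 1))
```

Even this crude model would let an app weight a -85 dBm encounter far below a -60 dBm one when deciding whether contact was serious.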

If a timestamp is included with the data, that is also a good way to help assess the contact. If you know that during the few minutes when the contact occurred you were alone driving in your car, or your phone was accidentally left sitting on a park bench before you went back and got it, that could save you a lot of worry.
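A hypothetical filter along those lines could drop matches that fall inside intervals the user knows they were alone; the data layout here is purely illustrative:

```python
from datetime import datetime

# Encounters recorded as (rolling_id, timestamp). The user marks
# intervals when exposure was impossible (alone in the car, phone
# left on a park bench, etc.).
encounters = [
    ("id-a", datetime(2020, 5, 1, 9, 15)),
    ("id-b", datetime(2020, 5, 1, 12, 40)),
]
alone = [(datetime(2020, 5, 1, 12, 0), datetime(2020, 5, 1, 13, 0))]

def plausible(ts: datetime, alone_intervals) -> bool:
    # Discard any match that falls inside an interval when the user
    # knows nobody could have actually been near them.
    return not any(start <= ts <= end for start, end in alone_intervals)

credible = [e for e in encounters if plausible(e[1], alone)]
print(credible)  # only the 9:15 encounter survives
```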

Confirmation of an actual infection is paramount. One could argue that a health care professional should "verify" and do the upload instead of the app user, or maybe both - if you are then notified you were exposed, the alert could say "this came from an individual" or "this came from a registered health care organization," and you could decide your next steps. Again, Google and Apple have already considered the "evil" scenario and are looking into the health care professional option.
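A tiny sketch of how an app could surface that distinction - the names here are mine, not any real API:

```python
from enum import Enum

class ReportSource(Enum):
    # Who vouched for the infection report behind an exposure alert.
    SELF_REPORTED = "this came from an individual"
    HEALTH_AUTHORITY = "this came from a registered health care organization"

def exposure_alert(source: ReportSource) -> str:
    # Surface the provenance so the user can weigh the alert themselves.
    return f"Possible exposure: {source.value}"

print(exposure_alert(ReportSource.HEALTH_AUTHORITY))
```

Even this small amount of labeling would blunt the burner-phone attacks above, since a flood of unverified self-reports would be visibly different from a clinician-confirmed one.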

If these mitigations seem somewhat weak, look over the framework Apple has published so far and you'll see what I mean. They are making an effort to get this right, although the main thing Google and Apple are doing is putting together standards and tools. Others besides Google and Apple could end up writing the apps, but the framework is there.

Summary

There are a lot of benefits to using an app like this, but if you do, you have to remember that the information may not be perfect. If you are prone to panic attacks, being told by an app that you've been exposed to a deadly virus can be a massively upsetting event. Malicious false positives could make this worse. I personally plan on getting the app when it is available, and if I use it I fully expect to get those false positives. Maybe I will just reverse the app like a sane little hacker and not use it. I'm still undecided on actual usage - what about you?
