Mark Loveless, aka Simple Nomad, is a researcher and hacker. He frequently speaks at security conferences around the globe, gets quoted in the press, and has a somewhat odd perspective on security in general.

Past Predictions

Instead of giving predictions about the future, I thought I should look back at some of my predictions from the past to see how I did. No, I'm not talking about the extremely serious predictions I made a couple of years ago, or even four years ago; I'm talking about long-term technological predictions. As I quickly found out, self-review can be a humbling experience.

The vast majority of my "visions of the future" involve technology and are tied to the security realm. Many were dismissed at the time as paranoid rantings, and many can in fact be interpreted as just that. Others were eerily accurate, yet overall I was usually only half right. Here are a few that I either got dead wrong or got nowhere near as right as I wanted to be:

Distributed denial of service (DDoS)

I originally predicted DDoS in about 1995, and no, I don't remember exactly what I called it; someone else eventually named it. Source code began to appear later on (I think as early as 1996), and in 1998 I revised my prediction (which I presented at some SANS event) to say that it could involve a botnet of literally dozens of computers. By 1999 the real botnets appeared, and I had vastly underestimated the numbers, as many botnets contained tens of thousands of compromised nodes.

Not only did I get botnet size wrong, I figured the bots would have just two functions: self-propagate, and launch a denial-of-service attack against a victim upon command. I did not anticipate the expansion of functionality. Botnets that sent email (spambots) never occurred to me. Adbots that drove up ad-impression traffic for monetary compensation also never occurred to me. Technically I was right, but I underestimated the scale and scope, so in my mind now I'd say I was half right at best.

Monetization of exploit information will be bad

With the advent of companies like iDefense (founded in the late 90s), which began paying for exploit information so they could write attack signatures for their IDS product, I argued that this was one of the worst moves ever, and I even spoke about it on a panel at a conference in 2003. Note that at this conference a person from iDefense was on the panel, as well as a representative from US CERT. The US CERT guy and I absolutely raked the iDefense dude over the coals. My argument was that monetizing exploit information incentivized bug hunters not only to avoid disclosing the bugs to the vendor, but to sell to the highest bidder, and that it was a danger to consumers, companies, and governments.

The results on this were a mixed bag. Yes, "bug brokers" became a thing (I even personally knew a couple); they would match up an exploit with a buyer and take a cut of the sale. The money was insane, and in the early days of bug brokers the US government became one of the most prolific customers, mainly to exclusively own bugs so that other nation states couldn't. Six- and eventually seven-figure payouts became a reality. Weaponized bugs (exploits that worked flawlessly, ready for attack against a target) became extremely valuable. My thinking was exploits + money = bad, bad, bad.

The thing I did not anticipate was the rise of bug bounty programs, which I am actually highly in favor of. A vendor asks bug hunters to submit bugs and pays out bounties based on the severity of each bug. The vendor patches, the bug hunter gets paid AND gets credit, and the vendor's customers get a safer product. Win-win all the way around. Logically it makes sense, but I could not see it at the time.

Government password escrow

I'd played around with this prediction off and on, but since France had already criminalized the use of encryption without disclosing the password to the government on demand, I figured the USA was headed that way as well. This prediction was further reinforced by the UK's RIPA in 2000 (and its revisions/enactments in 2007). See this Wikipedia reference for more information.

I was dead wrong on this one, at least from the USA perspective. Tech companies have doubled down and increased user privacy, with encryption being automatic for the vast majority of their offerings (see this example involving Apple and the FBI).

This doesn't mean the story is over. I really thought this would have happened well before now, and it hasn't, so I am definitely wrong. However, the subject keeps coming up. About a week before this blog post, an opinion piece appeared in the New York Times in which the author suggests Signal is somewhat evil and that the ability for law enforcement to get access to encrypted data should at least be discussed. This is yet another slippery slope toward password or encryption key escrow, where a government-accessible backdoor would have to be introduced into allowed encryption solutions. So far I am wrong: password escrow and encryption backdoors are not required, and let's hope I am always wrong on this one.

US Government Monitoring US Citizens

It was not just me saying this, but I had stated publicly numerous times that I thought the US government was illegally spying on its citizens, and that through agreements with the other Five Eyes nations they would monitor each other's citizens' traffic and provide detailed intelligence on specific targets. For example, the US could ask the UK to monitor Alice and Bob, two US citizens, and the UK would do it and report any odd things. The odd things would give enough "justification" for full-blown (illegal) citizen spying. Anyway, that was my basic theory, but I had also stated that they probably could spy on everything.

While I didn't know this specific "event" was going on, I was completely unsurprised when the AT&T/NSA story leaked out. It did prompt a somewhat hollow "victory" when skeptical friends who had made fun of me said, "wait, you were actually right on that paranoid shit you ranted about." I would have preferred to have been wrong.

AES “Backdoor”

When AES came out and was introduced to the world as the new standard supported by NIST, within certain circles there were suspicions that it was backdoored. I thought there could be cryptographic flaws that someone like the NSA had probably figured out, and that they were encouraging everyone to use it so they could spy on bad actors who did. I bolstered this with the fact that AES was approved for Secret but not Top Secret at the time (it is now, using AES-256 with large pre-shared keys). I included this tidbit in my 2006 keynote at Toorcon 8, entitled "The State of the Enemy of the State" (in that keynote I also discussed a lot of other weird paranoid stuff).

While I am not sure the NSA part of the prediction applied, the "flaw" part was at least partially correct. It turns out that in certain scenarios the use of AES in CBC mode allowed for a padding oracle attack, and this really came to light in 2009. It wasn't just AES, it was the way CBC mode was being used, but as a result (particularly after things like POODLE) there was a move to use GCM mode instead of CBC mode. Technically I was wondering about the last block of data, when the plaintext didn't fill the final block (as file sizes are by nature "random"), and whether there were clues in "leaked" bits there, but I was clearly looking at the wrong end. I made the mistake of assuming the world's leading cryptographers had thoroughly examined the front part.
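To make that class of flaw concrete, here is a minimal sketch of a CBC padding-oracle attack. Everything in it is illustrative, not a real exploit: a toy XOR "block cipher" stands in for AES so it runs with the standard library alone, but the CBC chaining, the PKCS#7 padding check, and the attacker's byte-guessing loop are the actual mechanics that bit real TLS deployments.

```python
import os

BLOCK = 16
_KEY = os.urandom(BLOCK)  # secret, known only to the "server"


def _ecb(block: bytes) -> bytes:
    # Toy XOR "block cipher" standing in for AES; XOR is its own
    # inverse, so this serves as both encrypt and decrypt.
    return bytes(a ^ b for a, b in zip(block, _KEY))


def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def pkcs7_pad(msg: bytes) -> bytes:
    n = BLOCK - len(msg) % BLOCK
    return msg + bytes([n]) * n


def cbc_encrypt(msg: bytes, iv: bytes) -> bytes:
    data, prev, out = pkcs7_pad(msg), iv, b""
    for i in range(0, len(data), BLOCK):
        prev = _ecb(_xor(data[i:i + BLOCK], prev))
        out += prev
    return out


def padding_oracle(iv: bytes, ct: bytes) -> bool:
    """Server side: decrypt, then report ONLY whether the PKCS#7
    padding is valid. That single bit is the whole vulnerability."""
    prev, pt = iv, b""
    for i in range(0, len(ct), BLOCK):
        pt += _xor(_ecb(ct[i:i + BLOCK]), prev)
        prev = ct[i:i + BLOCK]
    n = pt[-1]
    return 1 <= n <= BLOCK and pt.endswith(bytes([n]) * n)


def recover_last_byte(iv: bytes, ct: bytes) -> int:
    """Attacker: learn the last plaintext byte of a one-block message
    by tweaking the last IV byte until the oracle accepts the padding.
    (A full attack also tweaks earlier bytes to rule out longer
    paddings and walks backward to recover every byte.)"""
    for guess in range(256):
        # If guess == the real last plaintext byte, the decrypted
        # last byte becomes 0x01, i.e. valid one-byte padding.
        forged = iv[:-1] + bytes([iv[-1] ^ guess ^ 0x01])
        if padding_oracle(forged, ct):
            return guess
    raise RuntimeError("oracle never accepted a forgery")
```

The point is that the server never reveals any plaintext, only a valid/invalid padding verdict, yet that is enough to recover plaintext byte by byte; authenticated modes like GCM remove the oracle entirely, which is why the industry moved away from CBC.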

So technically there was a problem, and I could say “I was right” although quite frankly I don’t think I was close to being right by a long shot. All it confirmed to me is that the rumors might have been onto something, and maybe I should look at the other end of things (so to speak) when picking apart cryptography.

Finally, I'd say that it has long been rumored/suspected that the NSA was capable of quantum computing, far beyond the technological advances occurring in the private sector in recent years, and this also could be the reason for the rumors.

Illegality of Reverse Engineering

With the passage of the Digital Millennium Copyright Act (DMCA), the US implementation of the WIPO Copyright Treaty, I assumed that even though there were provisions allowing exceptions for security research, broad interpretation would eventually make reverse engineering illegal. The idea was that an organization could have a product that was closed-source or whose firmware was locked away, and if researchers tried to understand how the thing worked to allow for modifications, repairs, or even understanding the context in which various subsystems and components operated, they would be challenged and subject to penalty/prosecution/bad things. The sticking point for me was the prohibition on bypassing an access control. As a researcher I might want to bypass an access control to get to the code or a feature, or to activate some underlying administrative component to complete testing. I figured that after things like the Communications Decency Act (later ruled unconstitutional), the US government didn't have a good track record of correctly handling technology, especially emerging technology, and would fuck this up. In more recent years the idea of right-to-repair began to gain attention, and I expected a similar pushback from vendors against research.

While I haven't conceded my position on this entirely (there are continued court cases involving the DMCA, which is constantly getting exemptions/revisions/interpretations, and the whole right-to-repair issue is still a hot topic making positive gains), it looks like security research and reverse engineering are doing fine. Yes, there are a lot of organizations "fighting the good fight," such as the EFF and the ACLU, and they have to battle in the courts frequently, but they are slowly gaining ground and making things a better place.

Death of the Password

I cannot find the reference, but I know that a while back I made a statement that in a few short years we would eliminate the password. I probably can't find it because I've made numerous variations of that same prediction, or stated the password is going away, and basically just ranted about how awful it is. The argument: the only reason we have a second factor is that the first factor is so poor, so why not eliminate it?

I'm dead wrong on this. As I write this, everyone is still reeling from the LastPass breach and arguing about which password manager everyone should be using. I still stand by the idea that if we eliminate SF3AS (Shit Factor As An Authentication Solution), aka the password, and just use the second factor, we will be much better off (jesus, I'm ranting again), but I will apparently have to live with the reality that this prediction might not happen in my lifetime. Passwords are still here, and it looks like we are going to have them for a long time.

Why was I wrong?

A natural thing that's come up while going through these is a bit of self-reflection, which has led me to some realizations. Here are a few of them:

  • Biases lead to premature conclusions. I am by nature paranoid, because I'm living out here on the edges of digital society. As hackers we tend to explore new technologies and start finding both strengths and weaknesses in them before others do, and we've seen abuses. We are often the edge cases, and that digital fine edge is often sharpened against us, be it in the court of public opinion or a court of law. Sure, in some situations we are able to help correct things, so that by the time a software or hardware product is mainstream it is more secure and less prone to abuse, but this isn't always the case. This has led to situations where I simply figure the worst thing is going to happen: the worst interpretation will occur; those in power will abuse it to thwart, steal from, take advantage of, and/or hurt those not in power. I have to recognize when my own biases are jumping in, because as soon as I find evidence that supports my paranoid view, I call it "confirmed." Looking at additional information with the same lens means I find multiple "confirmed" facts that back up my viewpoint. I have to recognize the uniqueness of my viewpoint and realize there could be more than one angle from which to view the facts.

  • A lack of big-picture variations narrows the vision. I sometimes look at something without breaking it down into its components and understanding each component individually. This usually comes from rushing in and using my so-called "hacker instinct" or inflated "vast security knowledge" without considering anything else. While there were a few exceptions (and I knew a couple personally), for the most part bug brokers turned out to be a bad thing, but because I was so focused on one single element, I didn't take things apart at the big-picture level. In this case, that element was the idea that paying for bugs was bad. I had one scenario I was looking at, and the idea of a bug bounty program run by a company wanting to improve the security of its product never entered my head. That concept, paying a hacker for a discovered flaw, stands alone. Of course there is potential for abuse, but there are other scenarios, when the right organization is paying for the bug, where it is a huge positive.

  • Don’t work in an echo chamber. A number of my theories and predictions came about because I heard a rumor or came up with a paranoid thought and decided to ask experts about it, but managed to ask experts with at least some of the same general philosophies as my own. Yes, it is handy to know a lot of people from the world of spies, government agencies, and federal law enforcement. Yes, I have heard some wild stories, and some of my suspicions or dark, overheard rumors turned out to be true, but limiting my input to only those sources can cause some of the biases I mentioned earlier. Having two separate people who don't know each other confirm something could just mean two similar souls reached similar conclusions from the same data. In some cases my conclusion was correct, but in other cases it wasn't.

  • Run theories by others with completely different viewpoints and backgrounds. This could have saved my ass from embarrassment numerous times. If I come up with some shockingly wild scenario and a few select spook friends confirm it, I also need to run it by a non-techie. Run it by a non-believer. If it is a left-leaning theory or has a left-supporting impact, run it by a conservative. Don't dismiss their arguments; listen to them, and really look at how you might address their skepticism or their alternate interpretation of your presented "facts." Whenever I have done this, it has led me down pathways that have enriched my work or opened up completely new areas to explore, and while I didn't do it very often at the beginning of my career, I do it more now. I just need to make sure I never think I'm so "smart," and the facts so "obvious," that I don't need to run something by anyone else.

  • Understand imposter syndrome. There are times when I think I am right and I have solid information, but because of past revelations where I was wrong, I am completely worried that I have no fucking idea what I am talking about. Imposter syndrome. I have to recognize it, and I have to fight it. Yes, it is a great mechanism for helping me keep myself in check, a digital speed bump of sorts. I just don't need to constantly drive around it at break-neck speed, and I cannot let it cause so much fear that I can't leave the parking lot.

  • Correct but not correct. I made many of these more outlandish predictions pre-Shadow Brokers. If you're in infosec and have been living under a rock, check out this Darknet Diaries podcast for a bit more background. Let's say I had a prediction, and it was confirmed as correct by a few spooks. If the way to get to my conclusion involved steps A, B, and C, but I thought it was X, Y, and Z instead, it is often in the best interest of some of these spooks not to correct me. To me, this is why I was both "right" on the AES prediction above and yet still "wrong," since I had the details incorrect. This has the added advantage that some agency can publicly dismiss everything I say, because they could later state "and he thought it was X, Y, and Z, so he clearly cannot be trusted." Yeah, I'm getting into the weeds here a bit, aren't I? Well, you start playing in this world, and this is what happens to your brain.

Conclusion

I know the whole "expect these things in the next 12 months" lists are interesting, but I hardly take them seriously other than as a list of security-related things that are considered topical, and nothing more. If someone were seriously making predictions about tech, security, breakthroughs, and advances, they wouldn't be putting them in a blog or some tech news article; they'd be having a serious conversation with their broker, their accountant, or even investors.

Self-reflection is good. It can be productive. If you’re one of those people who makes predictions or poses new theories or explores radical interpretations of the various ones and zeroes, I encourage going back and seeing how correct you actually were. I’m glad I did this blog post, but it wasn’t the easiest thing to write.
