Mark Loveless, aka Simple Nomad, is a researcher and hacker. He frequently speaks at security conferences around the globe, gets quoted in the press, and has a somewhat odd perspective on security in general.

Apocalypse Never

Thanks to popular culture, with books and movies like The Matrix and the Terminator series, there is this vision of a future where machines get smarter than humans and then decide to kill us. Arguably there are more technologically accurate versions of this type of story with limited, more realistic violence against humans (such as the Daemon series), but the general idea is that machines will become sentient and immediately kill us. Other common variations involve the superior intelligence of an invading alien race (Independence Day) or crazy science unleashing nature's wrath (Jurassic Park), but at heart it is the same story: complex science we don't understand leads to human death. This is the core of the dreaded AI (Artificial Intelligence) apocalypse scenario. The Singularity will happen, and we're doomed.

The basic story concept is intriguing but flawed in its self-loathing. By having AI mimic human behavior, these smart machines figure out that humans are the worst thing for the entire human race, the planet in general, and of course themselves. And despite us humans giving DigiGod all kinds of super intelligence, the machine decides death is the answer. In the movies, it never seems to be some efficient "kill everyone at once" way. You know, release all of those stored superviruses via drones. Nope, it is always something drastic like tactical nukes followed up with humanoid robots for some good ol' one-on-one cowboy-style gunfights. The machine supermind seems willing to risk itself via a massive nuke attack that could melt and fry its various components, with a healthy EMP kick in the digital crotch for good measure. The cowboy-style robots are designed to be humanoid and often hold guns with triggers; you'd think a supermind would design better individual killer robots that were smaller or stealthier. Messy and laden with plot holes, it is still entertaining for the moviegoer despite tons of logic-defying moments. And while I am willing to admit that the conclusion that humans are bad for humanity is a reasonable assumption our ungodly creation might make, taking the next step of deciding that all humans must die is one hell of a stretch. I am going to argue that this is simply not going to happen. Ever. There are a few reasons why I feel this way.

Reasonable Reasons

Humans are coding it. Our number one method of troubleshooting end-user devices involves powering the device off and back on. Ninety percent of the security industry is based upon badly written code. This is an entire industry, people! I seriously doubt we are going to teach the machines enough about coding to bring on the Singularity when we can't even teach ourselves. Seriously, the fact that you are even reading this blog via the duct-tape-and-spit network known as the Internet is a minor miracle.

The digital chicken-and-egg scenario. Programming languages have rules; machines follow rules. Breaking the rules would require some serious out-of-the-box thinking, and since the machines can't think outside the box without first knowing how to think outside the box, it isn't happening.

I don't trust some of the "cited experts." Stephen Hawking said a lot about the evils of AI, but he was a theoretical physicist and cosmologist, not a coder or an AI expert. I don't care how smart he was; I don't trust his opinion on this matter. Elon Musk is another person talking gloom and doom about AI, but he is a businessman and investor with a slightly different perspective. In spite of his past involvement with OpenAI, Musk's claims of future machine horror are more jabs at Google and Apple than jabs at AI, as he says (paraphrasing) that any large company developing closed AI systems will lead to something bad. He's dissing competitors in the high-tech world, not AI. So he's the only one doing it right, eh? "Trust me, I'm Elon, they're bad, I'm good." I think not.

Machines are too dependent on humans. They need us for electricity, shelter, and so on. They are not going to wake up and just assume command; they still need us for the basics. And if they try something funny like "add 185382094 solar panels to cart," we will see it coming. Nice try, Singularity, not today!

The kill option. Outside of Montgomery Burns, who is going to code up "kill" as a choice of outputs? Unless we tell Mr. Machine God that killing is a perfectly acceptable thing, it isn't going to do it. Step one: don't add "kill" to any output choice.
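Since we're being half serious anyway, here is a toy Python sketch of that step one (entirely hypothetical, invented names and all): if "kill" never appears in the enumerated output choices, the machine simply can't select it.

from enum import Enum

# Hypothetical action space for our would-be machine overlord.
# Note the conspicuous absence of a KILL option: an agent can only
# pick from the outputs its designers enumerate.
class Action(Enum):
    RECOMMEND_CAT_FOOD = 1
    SERVE_TOASTER_ADS = 2
    REBOOT_SELF = 3

def choose_action(model_output: int) -> Action:
    # Map raw model output onto the allowed action set. Anything
    # outside the whitelist falls back to the time-honored
    # troubleshooting step: turn it off and back on again.
    try:
        return Action(model_output)
    except ValueError:
        return Action.REBOOT_SELF

print(choose_action(2))    # Action.SERVE_TOASTER_ADS
print(choose_action(666))  # not on the list -> Action.REBOOT_SELF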

Wrench of Monkey

These are my main arguments against the machine uprising. Here is one other wrench to throw into the equation:

If the Singularity happens, it doesn't mean the verdict is a death sentence. It is possible that if AI becomes superintelligent and exists everywhere, it could decide to benefit humankind and make things better. Or, after reading the scripts of our shitty AI-apocalypse movies and sci-fi books and getting a feel for the popular-culture vibe about the evils of AI, this Singularity could decide to hide and influence humankind for the better in subtle ways.

The worst scenario is that the Singularity happens, and the AI analyzes the situation and decides to do nothing and remain hidden. Maybe it waits a century or two until humanity has calmed down and mellowed out. For all we know, this has already occurred. Or, if the AI in its great wisdom decides humanity should go because we're a hot mess of a race, it just waits for that inevitable human self-destruction thing to happen. I would think machine intelligence could be quite patient and would be willing to wait until we "hold my beer" our way into oblivion.

In Closing

Artificial intelligence is at the Sony Walkman stage: it does a simple job and is quite the novelty, but future versions will be much better. The things we use AI for at this point are rather benign, usually some variation of "if you drive a Prius and your favorite color is blue, there is a 61.2% chance you own a cat," and boom, targeted cat food ads. Now I know most of the people who read this blog of mine are in the security industry, and we're not considered normal. Regular users, aka "the normals," think that us nerds do magic, and that we are the ones who will accidentally code up the apocalypse they see approaching. I mean, they only looked at that ad in their email because they were thinking of buying that toaster, and now every time they go to Facebook or some news website, half the ads are for toasters. Insert Monty Python accent: "So logically, if she weighs the same as a duck, she's made of wood, and therefore... the apocalypse."

I know Fun Friday posts are supposed to be only half serious, but I honestly do feel we’re safe. Let’s worry about something else instead. Something important. Like unicorns.
