The Uncomfortable Effects of AI
Photo by Luke Jones on Unsplash
This is going to be a rather uncomfortable and potentially irritating blog post about AI. It is not a pro-AI or anti-AI post by any means, because I am rapidly finding that debate to be rather pointless, and my feelings on the topic draw from both sides. What I believe to be the uncomfortable truth of AI is that the security community is going to have to “fix” it - whether we want to or not.
Relevant History
Tech advances over the years have led to fast deployments with lots of interesting side effects. Let me list just a few:
Online payments. There was a time when using a credit card online was not only considered risky - most security professionals considered it absolutely insane and said, “long term, this will never become a real thing.” However, entire business models and ventures were being launched that completely depended on it. Case in point: the online porn industry. Many of the online porn portals turned to various hackers and infosec people to improve and lock things down, and the security folk rose to the occasion. Yes, those security measures in place for online transactions were developed and honed in the porn industry.
Cloud adoption. Who in their right mind would move their precious data from servers under literal lock and key in isolated rooms in their offices, where they could reach them at Ethernet speeds, and place it on servers run by someone else in another city, state, or even country? Again, this was considered insane by many hardened pros, but slowly models emerged that showed the technology had a place. Yes, they had security problems, and the security professionals had to step up and make things right. And again, it was the online porn industry (based on what hacker friends who worked there have told me) that was among the earliest cloud adopters. At this point, having on-prem services is somewhat rare, with most businesses relying heavily on the cloud.
PHP. If you have a bit of gray in your hair, you likely remember the emergence of PHP. Before then, building a website required someone with a bit of skill and experience, and while PHP alone didn’t change that, it was around the time PHP became popular that less-experienced web developers emerged. Using extremely basic knowledge of HTML coupled with handy tech like PHP, websites for various businesses began populating the landscape rather quickly. And of course this new wave of the web was absolutely riddled with security holes. Who had to come forward and help fix this mess? You got it, the infosec crowd.
I think you can see the pattern. I could continue with many more recent examples, but you get the idea.
Current State of AI
Like the whole “dot com bubble” that happened a few decades ago - which had some rather interesting side effects when it “burst” - there is a lot of discussion about the AI bubble bursting. There is the pro-AI crowd that encourages everyone to rush forward as fast as possible, and there is the anti-AI crowd that encourages everyone to completely reject and boycott any company and product that uses AI. My thinking is oddly in neither camp.
As with any new bit of tech, I always try to get my hands on it and explore to find out whether it is useful, secure, and has potential - and whether there are ways I can make it do things the way I want, possibly in completely different directions than the developers of the new tech intended. AI is no exception. I feel I can say that of course it has potential and it definitely has its uses, but the “secure” part is quite lacking. Oh sure, with a bit of prompt manipulation I can get better and much more secure output, but then again I am a security person, so I know how to develop the prompt. And of course I can prompt inject and whatnot - plenty of people are doing that as well.
Guess Who Has To Fix This?
I am more than familiar with AI. My first exposure to the concept was a version of ELIZA on the Apple II in the very early 1980s. I ran into AI now and again after that, but I first truly encountered it on a project that heavily used machine learning and model building while working at MITRE a decade and a half ago. I can’t talk much about the details, as this was a government-funded project (and truthfully, by comparison to today’s tech it might be considered boring), but at the time it was quite eye-opening for me. My job on that project was data preparation for the model building, as well as verifying the model output based upon the data I helped cleanse and select.
Back then, this was a security project through and through. The intent was to detect bad actors, and my role included trying to think of ways to slip past it undetected. I strongly believe this is what is going to happen now: security professionals are going to be the ones who have to fix AI. There are plenty of naysayers out there saying the bubble will burst and AI will end up failing. However, I am seeing too much potential. My employer is leaning heavily into AI, and as we internally dogfood it, we are also discovering what works, what doesn’t, and how to bend it to our will. As first and foremost a hacker, I am trying to “hack” AI to get the output I want. As I pass on the things I learn to others (who are also doing their own AI “hacking”), things improve.
Does it need guardrails? Of course it does; these are early days. Will it always need guardrails? No idea. But my hacker mind is racing as a result - whether I’m in a security mood trying to fix it or a hacker mood trying to break it.
In Conclusion
I could compare AI adoption to the impacts of the Second Industrial Revolution - I think that is a much more apt comparison, given the divisions in many societal circles - but instead I will simply restate what I think is obvious. It is not going away, and it will only get better if the security community steps up and deals with it. I ask all of you early adopters to seek out infosec’s help, and I ask the infosec community (especially the AI skeptics) to get involved as soon as possible, because this is going to be one hell of a challenge.
