The Dark Side of Generative AI
When generative AI models first gained prominence around 2018-2019, a barrage of nefarious use cases captured the popular narrative around these tools. Deepfake tools like FakeApp fueled the creation of fake videos, including celebrity and revenge porn. Voice synthesis software was used to execute a $35 million bank heist and to spread fake audio clips of celebrities.
Three years later, generative AI models are significantly better, yet we hear less about their nefarious side. There’s the general concern of AI replacing artists and creators by training on and emulating the whole of human creative output. Beyond that, though, we haven’t heard much about evil brought about by generative AI lately.
That raises the question of what’s still to come from the evil side of generative AI. What schemes will these tools enable criminals to commit? Consider what’s already possible today:
- Profile picture generators (among other text-to-image AIs) can produce believable selfies of you from just a dozen or so uploaded photos.
- Synthetic voice tech is good enough to clone a voice with just one minute of audio.
- Text-to-image and text-to-text AI platforms let you generate content in any artist’s or writer’s style, even offering the ability to upload source material for the AI to emulate (see the sketch after this list).
- Soon, the same tech used today by synthetic video generators, like Synthesis, will be more readily available, allowing anyone to generate convincing talking-head videos of themselves and of others.
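
To make the style-emulation point concrete, here is a minimal sketch using the open-source Hugging Face diffusers library with a publicly available Stable Diffusion checkpoint. The model ID, prompt, and GPU assumption are mine for illustration, not any particular platform’s workflow; the point is that a single sentence of prompt text is the entire barrier to entry:

```python
# Minimal sketch: emulating an artistic style with an off-the-shelf
# text-to-image model. Assumes a CUDA GPU and the `torch` and `diffusers`
# packages; the checkpoint and prompt below are illustrative choices.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Naming a style in the prompt is all it takes; the model reproduces whatever
# it absorbed during training, with no consent step anywhere in the loop.
image = pipe("a city street at dusk, in the style of a famous illustrator").images[0]
image.save("style_mimicry.png")
```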
It doesn’t take a genius to see how this could lead to a massive problem for the Internet. A little imagination is all it takes to wreak havoc with these tools, especially when our defenses aren’t a major point of development.
Our primary safeguards against nefarious AI usage are the long waiting lists to access certain generative AI tools, an approval process (sometimes), and the platforms’ Terms & Conditions. But is this enough protection?
Sweeping Up Your Digital Dust
The question you should ask yourself is, “What’s scrapable?” What data have I shared on public platforms that could easily be scraped and used to fine-tune an AI model against me? What photos, videos, art, comments, and writing of mine are scattered across the web? And how much of it exists in the public domain?
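
As a rough illustration of how low that bar is, here is a hypothetical scraping sketch in Python. The profile URL is a placeholder, and real platforms add logins, rate limits, and markup quirks this ignores, but the core loop really is this short:

```python
# Hypothetical sketch: harvesting every publicly visible image from a profile
# page. `PROFILE_URL` is a placeholder; requires the `requests` and
# `beautifulsoup4` packages.
import os
import requests
from bs4 import BeautifulSoup

PROFILE_URL = "https://example.com/some-public-profile"  # placeholder target
OUT_DIR = "scraped_photos"
os.makedirs(OUT_DIR, exist_ok=True)

html = requests.get(PROFILE_URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# A dozen saved photos is already enough to fine-tune a believable
# selfie generator against the account's owner.
for i, tag in enumerate(soup.find_all("img", src=True)):
    url = tag["src"]
    if url.startswith("http"):
        with open(os.path.join(OUT_DIR, f"photo_{i}.jpg"), "wb") as f:
            f.write(requests.get(url, timeout=10).content)
```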
Not many of us are completely nonexistent on the web. Most of us leave quite a bit of digital dust behind in the rooms we frequent online – including search engines, social platforms, and personal websites. Of course, there’s a lot of variability in the amount of data each of us leaves behind.
Take Ryan and me for comparison. Ryan has never been an avid social media user. I’ll bet you can’t find more than 50 photos of him online. At best, you’ll find a few dozen videos of him and me talking tech on YouTube, and you’d have to know to dig through my profile to find him.
I, on the other hand, have thousands of photos, videos, articles, and comments floating around the web because I’ve been digitally active for over a decade and a half.
Who is the easier target of an AI-generated attack? Is it the person with more scrapable data, or the person with a less visible online presence? The better question is: who is the more lucrative target in the eyes of a cybercriminal?
Chances are you fall somewhere on the spectrum between Ryan and me. And chances are you don’t delete much of the data you share online, just like us.
Cleaning up one’s digital footprint has never been a common practice, and we’ve never had guidance on it. It’s not like the Internet comes with a user manual on how to control and clean up the data we share.
But it’s something we may all need to consider making a habit of in the era of generative AI.
Overall, it’s a great time to be thinking about how we can build defenses and services against AI-enabled crimes. For example, Source+ lets artists opt in or out of having their art used to train image generators like Midjourney, DALL-E, and Stable Diffusion.
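
Mechanically, a safeguard like that comes down to a filter in the dataset pipeline. The sketch below is purely hypothetical; the registry format (a flat file of SHA-256 content hashes) is invented for illustration and is not Source+’s actual API:

```python
# Hypothetical sketch: honoring an artist opt-out registry while assembling a
# training set. The hash-list registry format here is invented for
# illustration; a real service like Source+ may work differently.
import hashlib
from pathlib import Path

def content_hash(path: Path) -> str:
    # Identify each work by the SHA-256 of its bytes.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def load_registry(registry_file: Path) -> set[str]:
    # One hex digest per line, e.g. exported from an opt-out service.
    return set(registry_file.read_text().split())

def filter_training_set(image_dir: Path, opted_out: set[str]) -> list[Path]:
    # Keep only images whose creators have not opted out.
    return [p for p in sorted(image_dir.glob("*.jpg"))
            if content_hash(p) not in opted_out]

if __name__ == "__main__":
    opted_out = load_registry(Path("opted_out_hashes.txt"))
    allowed = filter_training_set(Path("raw_images"), opted_out)
    print(f"{len(allowed)} images cleared for training")
```

The real design question is enforcement: a hash check only works if trainers choose to run it, which is why normalizing such safeguards matters as much as building them.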
What other types of safeguards can we create and normalize? What will be effective in reducing AI-enabled cybercrime?