
ChatGPT, deepfakes, and the fabrication of truth

David Colburn
Posted 5/3/23

I hope you will all join me in offering hearty congratulations to our publisher, Marshall Helmberger, for being awarded a Pulitzer Prize in journalism for his ongoing coverage of the copper-nickel mining issue in our region.
OK, confession: Marshall didn’t really win a Pulitzer, but you might believe he did if you read the press release I told ChatGPT to write about it. You’re surely all aware of ChatGPT by now, if only from reading past columns I’ve written mentioning it. I continue to be fascinated with what’s called generative artificial intelligence: computer programs that draw on immense amounts of written or image data they’ve been fed to create new compositions by predicting which words or images should go together to make up the final product.
ChatGPT did a bang-up job on that fake announcement, reading as if it had studied all of Marshall’s articles over the years. It clearly reflected his passion for the topic and the many angles he’s taken in his coverage, making me wonder whether some past issues of the Timberjay have been fodder for its training.
It just didn’t have any clue that Marshall hasn’t won a Pulitzer.
And there’s the rub with current versions of generative AI – it can’t accurately tease fact from fiction, and that’s a challenge that can be dangerous in the wrong hands.
The generative AI image creator Midjourney caused a bit of a stir a few weeks ago when British journalist Eliot Higgins used it to imagine what the arrest of Donald Trump would look like, then shared those images on Twitter. In typical Midjourney fashion, some police had too many fingers, some too few, the faces were somewhat unclear, and the scenes melodramatic. But five million views later, people had a clear example of how generative AI can be used to fake reality, causing more than a few folks to turn their heads in surprise and disbelief. The disbelief was warranted in this case, but only obvious upon closer inspection.
It’s certainly not the first time deception has made its way into politics, nor will it be the last, but the ease with which deceptive materials can be created should be a real concern for the public, particularly with a presidential election coming up.
I’ve spent some time feeding ChatGPT ridiculous requests, and so far it has complied without a nod or a wink. For example, at my direction ChatGPT wrote a speech from President Biden advocating the use of nuclear arms to take care of the illegal immigration problem on our southern border. It was a thoughtful response, noting that the targets would be the roads, infrastructure, and criminal organizations supporting illegal immigration rather than the immigrants themselves, but it was still, on its very face, an outlandish and impossible proposition.
Where this sort of thing gets really dangerous is when such writing is paired with the rapidly evolving ability to clone voices and images to create realistic “deepfake” videos. Early deepfakes of Tom Cruise were rough, yet good enough to go viral. The technology now is astounding.
People have already been putting words in the mouths of President Biden and others through deepfakes. Totally fake Bidens spouting off nonsense seem to be popular these days, as the “President” can be found online crassly commenting on big booties and ice cream, extolling the virtues of low-grade cheap weed, and arguing with Trump in the online version of the game Grand Theft Auto. But a more nefarious deepfake has surfaced as well, one in which “Biden” delivers a “speech” denouncing transgender people.
Deepfakes are increasingly easy to make with cheap apps that run right on your smartphone. Even more amazing (though more involved) are “real-time” deepfakes, in which a camera focuses on a speaker while a computer overlays someone else’s face on them as they speak. You can see a remarkable and rather unnerving display of this technology with America’s Got Talent personalities Simon Cowell, Howie Mandel, and Terry Crews “singing” opera projected on a huge screen as cameras focus on the actual singers below. Check it out at https://youtu.be/MZEsKcezTrM. And remember that it’s now only one easy additional step to clone a politician’s voice and put it in place of the speaker or singer. It’s a decidedly scary proposition for the world of politics.
When it comes to technology, the pace of development is rapidly outstripping our legal and ethical constraints on its appropriate use. In a political arena already warped by politicians and parties who seem to have no scruples about twisting the truth, generative AI and deepfake technology are capable of creating fake “truths” that could be seriously deceptive and harmful. A bill making it a crime to create deepfakes without a person’s consent or for the purpose of influencing elections was passed by the Minnesota House this session but appears to have stalled in the Senate. Such steps are essential at the federal and state levels to protect the integrity of the political process, or what little is still left of it.