FOUNDATIONS OF MISTRUST

Be afraid, the deep-fakes are among us

The most immediate danger posed by generative artificial intelligence — an onslaught of misinformation — has officially arrived in South Africa, writes WESSEL VAN RENSBURG.

Image: ANGELA TUCK

DUDU Zuma-Sambudla's recent sharing of a crudely fabricated video purportedly showing former US president Donald Trump endorsing the MK Party heralds the arrival in South Africa of what many consider to be the most immediate danger posed by generative artificial intelligence: an onslaught of misinformation through so-called deep-fakes.

In a year punctuated by numerous elections around the globe, examples abound. Perhaps most notably, during the recent primary in New Hampshire, a fake robocall featuring President Joe Biden's voice urged voters to abstain from casting their ballots, underscoring the insidious potential of this technology to undermine democratic processes.

There was a time when the printing press and broadcasting infrastructure could sway public opinion and confer significant political power. However, with the advent of social media, exerting such control has become far more complex.

As sociologist Zeynep Tufekci astutely observes, the new weapons wielded by those seeking to censor are both less ambitious and more sophisticated. Unable to stem the flow of information entirely, they instead aim to manipulate our attention and erode societal trust. The pernicious nature of this new era of misinformation lies not only in its capacity to convince the public of falsehoods but also in its more easily attainable goal: to corrode the foundation of trust upon which our institutions rely.

When individuals are repeatedly bombarded with misleading or fabricated information, they begin to question the veracity of all sources, even those that are generally reliable. This erosion of trust strikes at the heart of democracy, which, to function effectively, depends on a certain level of confidence in the integrity of our institutions and the accuracy of the information they provide.

Yet, in other parts of the world, this same technology has been harnessed to circumvent censorship. In Pakistan, supporters of the imprisoned opposition leader Imran Khan have been disseminating his speeches from jail, reanimating them using AI-generated audio clips that mimic Khan's voice — a practice that appears to have his full backing.

In one such clip, Khan acknowledged that his party was barred from holding public rallies and called on his supporters to turn out in force for the general elections scheduled for February 8. His party later asserted that he had emerged victorious in those elections.

Could these examples of manipulation and influence become more sophisticated and personalised? What is transpiring in the world of financial fraud could give us some clues. A Darktrace white paper reports a staggering 135% increase in “social engineering” phishing attacks among its customer base from January to February 2023. Social engineering is a type of cybercrime that exploits human psychology to manipulate individuals into divulging sensitive information or performing actions that compromise their security.

This trend is ascribed to the swift integration of generative AI into mainstream technology. Further experiments demonstrate the risk: one researcher employed GPT-4 to develop a program that extracted information from the Wikipedia pages of all British MPs elected in 2019, data that enabled the creation of a detailed biography for each MP.

The model was first asked about the principles and techniques of crafting successful phishing emails, then instructed to use those principles to compose convincing personalised messages tailored to each MP's region, party affiliation and interests. All in a couple of days, and at low cost.
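As an illustration of how low that barrier is, the data-gathering step of such a pipeline amounts to a few lines of Python. The sketch below is hypothetical, not the researcher's actual code: it assumes only Wikipedia's public REST summary endpoint, and the MP names are placeholders.

    import requests

    # Wikipedia's public REST endpoint, which returns a JSON summary for a page title
    SUMMARY_URL = "https://en.wikipedia.org/api/rest_v1/page/summary/{title}"

    def fetch_summary(title: str) -> str:
        # Fetch the lead summary of a Wikipedia article by its page title
        resp = requests.get(SUMMARY_URL.format(title=title), timeout=10)
        resp.raise_for_status()
        return resp.json().get("extract", "")

    # Placeholder names; the experiment iterated over every MP elected in 2019
    mps = ["Keir Starmer", "Theresa May"]

    for name in mps:
        summary = fetch_summary(name.replace(" ", "_"))
        print(f"{name}: {summary[:120]}...")

From profiles like these, a model can be prompted to tailor a message to each recipient; the persuasive step requires no more code than the scraping step.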

This week also saw Google facing criticism for its “woke” AI, a model that clumsily and surreptitiously rewrote user prompts to generate more racially diverse output, an intervention that may be laudable in many contexts but not when depicting historical scenes. While this may seem unrelated to deep-fakes, the incident raises the spectre of a future in which a state could release a large language model akin to GPT-4, one that is more subtly misaligned to sow doubt while remaining otherwise useful — much like the Russia Today TV channel is used as a soft-power tool today.

Imagine querying such a model about a disease and receiving a mixture of accurate information and suggestions to consider alternative “truths”, or asking about the events that transpired at the insurrection in Washington on January 6, 2021, only to be presented with two contradictory narratives, both deemed plausible and worthy of consideration. An even more subtle battle over soft power and influence via AI models is unfolding right now. More on that in a subsequent column.

♦ VWB ♦

