Some Nigerian disinformation actors have deployed audio deepfake technology to fabricate sensitive conversations between political figures, stirring controversy.
In the realm of digital media and information manipulation, deepfake technology has gained popularity. This artificial intelligence tool enables the alteration of videos, photos and audio. It has been utilised for numerous purposes, notably political propaganda and disinformation campaigns.
The artificial intelligence behind deepfakes produces modified content that looks real. Using deep learning algorithms, it generates synthetic media that is nearly indistinguishable from authentic media. Although the technology is not new, its use has expanded dramatically in recent years, for objectives including pushing false political narratives.
A deepfake video impersonates the voice and face of a well-known person to distribute disinformation. Deepfakes are usually visual. However, there has been a recent influx of audio-based deepfakes, in which the voices of prominent figures are doctored to say whatever the creators want.
In 2019, the chief executive of a UK subsidiary of a German energy firm paid nearly £200,000 into a Hungarian bank account after being phoned by a fraudster who simulated the voice of the German parent company's CEO. The company's insurers believe the voice was a deepfake, but the evidence is unclear. Similar scams have reportedly used recorded WhatsApp voice messages.
In Nigeria, disinformation actors have turned this technology on political figures, faking sensitive conversations to deceive the public and further their political agendas.
On 25 February 2023, during the presidential election, an audio clip went viral in which Atiku Abubakar, the presidential candidate of the Peoples Democratic Party (PDP), appeared to be having a secret call with two other prominent party members about how to rig the election. Fact-checkers later revealed the clip was false and misleading.
Audio deepfakes can impact election outcomes
This new technology is disturbing, particularly in its effects on society, since the voices of prominent individuals can now be faked to mislead unsuspecting Nigerians. Its potential to sway public opinion means it could significantly affect the outcomes of elections.
It also has a detrimental effect on credibility and trust. The prevalence of deepfake audio can decrease public confidence in audio recordings and make it harder to confirm the authenticity of audio content. It may also become more difficult to use audio recordings as evidence in court, with consequences for law enforcement.
The more insidious impact of deepfakes, along with other synthetic media and fake news, is to create a zero-trust society where people cannot, or no longer bother to, distinguish truth from falsehood. And when trust is eroded, it is easier to raise doubts about specific events.
Audio deepfakes potentially pose an even greater threat because, without video, there are no visual cues to examine. However, there are ways to verify them.
Audio deepfake: Tips and tools
Audio deepfakes are created by first having a computer listen to audio recordings of a targeted speaker. Depending on the exact technique, the computer might need as little as 10 to 20 seconds of audio. From this audio, it extracts key information about the unique characteristics of the victim's voice.
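The idea of extracting a compact "fingerprint" of a voice from a short clip can be sketched in code. This is a toy illustration only, not a real voice-cloning or speaker-recognition system (those use trained neural networks): here the fingerprint is simply the average log-magnitude spectrum of short frames, computed for synthetic "voices" generated on the spot.

```python
# Toy sketch: a crude voice "fingerprint" from a few seconds of audio.
# Real cloning systems learn speaker embeddings with neural networks;
# this only illustrates that short clips carry identifying spectral cues.
import numpy as np

SR = 16000  # sample rate, Hz

def synth_voice(f0, seconds, seed):
    """Synthetic 'voice': a harmonic stack at fundamental f0 plus noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(SR * seconds)) / SR
    sig = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 6))
    return sig + 0.05 * rng.standard_normal(t.size)

def fingerprint(sig, frame=1024):
    """Average log-magnitude spectrum over non-overlapping frames."""
    n = sig.size // frame
    frames = sig[: n * frame].reshape(n, frame)
    spec = np.abs(np.fft.rfft(frames, axis=1))
    return np.log(spec + 1e-9).mean(axis=0)

def similarity(a, b):
    """Cosine similarity between two fingerprints."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two clips of the same "voice" vs. a clip of a different one.
same = similarity(fingerprint(synth_voice(120, 10, 0)),
                  fingerprint(synth_voice(120, 10, 1)))
diff = similarity(fingerprint(synth_voice(120, 10, 0)),
                  fingerprint(synth_voice(210, 10, 2)))
print(same, diff)
```

In this sketch, clips of the same synthetic voice produce more similar fingerprints than clips of a different one, which is the property cloning systems exploit at far greater sophistication.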
Researchers around the world have therefore released tools to help fact-checkers and citizens distinguish fake audio from real.
The Splitter app and Google reverse image search
One such tool is the Splitter app, which separates individual audio sources from a blended track. If a deepfake clip purports to capture a conversation between two people, this can help identify the distinct voices and determine whether they were recorded individually and then combined.
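The principle behind source-splitting tools can be illustrated with a toy example. Real separators use trained neural networks and work on overlapping voices; the sketch below only succeeds because the two synthetic "speakers" occupy non-overlapping frequency bands, which lets a simple FFT mask pull them apart. The signals and the 600 Hz cutoff are invented for the demonstration.

```python
# Toy source separation: split a blended track into two "speakers"
# that live in different frequency bands. Real splitter tools use
# trained models and handle far messier, overlapping audio.
import numpy as np

SR = 8000
t = np.arange(SR) / SR                # one second of audio
low = np.sin(2 * np.pi * 150 * t)     # "speaker" A: 150 Hz tone
high = np.sin(2 * np.pi * 1200 * t)   # "speaker" B: 1200 Hz tone
mix = low + high                      # the blended track

spec = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(mix.size, d=1 / SR)

# Mask the spectrum below/above 600 Hz and invert back to waveforms.
low_rec = np.fft.irfft(np.where(freqs < 600, spec, 0), n=mix.size)
high_rec = np.fft.irfft(np.where(freqs >= 600, spec, 0), n=mix.size)

err_low = np.max(np.abs(low_rec - low))
err_high = np.max(np.abs(high_rec - high))
print(err_low, err_high)  # both tiny: each source is recovered cleanly
```

Because the sources here are exactly band-limited, each recovered track matches its original almost perfectly; the value of tools like Splitter is doing something comparable on real, overlapping voices.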
It is important to remember that no tool can completely guarantee its ability to identify deepfake audio. To verify an audio recording, it is crucial to employ a variety of methods and technologies.
It is also crucial to approach any audio recording with a healthy dose of scepticism and to confirm its source and context before believing its contents.
There are also human checks that can help detect when audio has been faked. Below are a few of them.
Be aware of the context
The context in which an audio recording was purportedly made is among the most crucial factors to consider. A recording might be a deepfake if, for instance, it contains a conversation that seems too convenient or too good to be true. Another warning sign is a recording that lacks the background noise and other ambient sounds present in a real-life setting.
Analyse the frequency
The recording's quality is another crucial factor to consider. Because deepfake technology is still in its infancy, fakes can often be spotted by their inferior sound quality. For instance, a deepfake may contain glitches or abnormalities absent from authentic recordings, or the voices may sound robotic or off-key.
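One way the "robotic tone" cue can be quantified is spectral flatness: an overly clean, tonal signal scores near zero, while real-world recordings with broadband ambient noise score higher. This is a toy statistic on invented signals, not how production deepfake detectors work (those use trained models), but it illustrates the kind of measurable difference forensic tools look for.

```python
# Toy forensic cue: spectral flatness (geometric mean / arithmetic mean
# of the power spectrum). An unnaturally tonal, "robotic" signal scores
# near 0; audio with real ambient noise scores higher. Real detectors
# use trained models; this only sketches the idea.
import numpy as np

def spectral_flatness(sig):
    """Return spectral flatness in [0, 1]."""
    power = np.abs(np.fft.rfft(sig)) ** 2 + 1e-12
    return float(np.exp(np.mean(np.log(power))) / np.mean(power))

rng = np.random.default_rng(0)
t = np.arange(16000) / 16000

tonal = np.sin(2 * np.pi * 220 * t)                # clean, robotic tone
noisy = tonal + 0.3 * rng.standard_normal(t.size)  # with ambient noise

flat_tonal = spectral_flatness(tonal)
flat_noisy = spectral_flatness(noisy)
print(flat_tonal, flat_noisy)  # tonal is far lower than noisy
```

An unusually low flatness across a whole recording would be consistent with the missing-ambient-noise warning sign described above, though never proof on its own.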
Check the source
Before believing the content of an audio recording, confirm its source. Deepfake audio files are frequently shared on social media and other online platforms, but something is not necessarily accurate just because it is widely shared. Check the recording's origin and compare its claims with reliable sources.
Paying attention to nuance is perhaps the most crucial habit when listening to an audio recording. Pay close attention to the voices, background noise, and other sounds. Consider the conversation's setting, the recording's quality, and the source of the information. If something seems strange or unbelievable, it might be a deepfake.
Even in their infancy, deepfake video and audio undermine people’s confidence in these exchanges, effectively limiting their usefulness. If the digital world is to remain a critical resource for information in people’s lives, effective and secure techniques for determining the source of an audio sample are crucial.