
The Dangers of Voice Fraud: We Can’t Detect What We Can’t See




It’s hard to believe that deepfakes have been around so long that we no longer bat an eye at a new case of identity manipulation. But it won’t be long before we forget how dangerous they are.

In 2018, a deepfake video showing Barack Obama saying words he never said set the internet ablaze and alarmed US lawmakers, who warned of a future in which AI could disrupt elections or spread disinformation.

In 2019, a famously manipulated video of Nancy Pelosi spread like wildfire across social media. The footage had been subtly slowed to make her speech sound slurred and her movements sluggish, suggesting impairment or intoxication during an official speech.

In 2020, deepfake videos were used to increase political tensions between China and India.




And I won’t even get into the hundreds – if not thousands – of celebrity deepfakes that have circulated on the internet in recent years, from the Taylor Swift pornography scandal to Mark Zuckerberg’s sinister speech about the power of Facebook.

But despite these concerns, a subtler and potentially more deceptive threat looms: voice fraud. At the risk of sounding like a doomsday scenario, it could very well prove to be the final nail in the coffin.

The invisible problem

Unlike high-definition video, the typical transmission quality of audio, especially for telephone calls, is remarkably low.

By now we’re desensitized to low-fidelity audio – from poor signal, to background noise, to distortions – which makes it incredibly difficult to discern a true anomaly.

The inherent imperfections in audio provide a veil of anonymity to voice manipulations. A slightly robotic tone or a staticky voice message can easily be dismissed as a technical glitch rather than an attempt at fraud. This makes voice fraud not only effective, but also remarkably insidious.

Imagine receiving a call from a loved one’s number, telling you they’re in trouble and asking for help. The voice may sound a bit off, but you attribute that to wind or a bad line. The emotional urgency of the call may push you to act before you even think to verify its authenticity. Herein lies the danger: voice fraud takes advantage of our willingness to ignore the small audio discrepancies that are common in everyday phone use.

Video, on the other hand, provides visual cues. Small details like hairlines or facial expressions are obvious giveaways that even the most sophisticated fraudsters haven’t managed to slip past the human eye.

Those cues are absent during a voice call. That’s one reason most mobile operators, including T-Mobile, Verizon, and others, provide free services to block (or at least identify and flag) suspected scam calls.

The urgency to validate anything and everything

One consequence of all this is that people will default to scrutinizing the validity of the source or origin of information. That’s a great thing.

Society will regain confidence in verified institutions. Despite efforts to discredit traditional media, people will place even more trust in verified entities like C-SPAN. In contrast, they may grow increasingly skeptical of chatter on social media and on lesser-known channels or platforms that lack an established reputation.

On a personal level, people will be more wary of incoming calls from unknown or unexpected numbers. The old “I’m just borrowing a friend’s phone” excuse will carry much less weight, as the risk of voice fraud makes us wary of unverified claims. The same skepticism applies to caller ID, which can be spoofed. As a result, individuals may be more likely to use and trust services that provide secure, encrypted voice communications in which the identity of each party can be unambiguously confirmed.

Technology will also improve and, hopefully, help. Authentication technologies and practices will become significantly more sophisticated. Techniques such as multi-factor authentication (MFA) for voice calls and the use of blockchain to verify the origin of digital communications will become standard. Likewise, practices such as verbal passcodes or callback authentication may become routine, especially in scenarios involving sensitive information or transactions.
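To make the verbal-passcode idea concrete, here is a minimal sketch of how a callback-authentication flow might work. The function names and the out-of-band delivery step are assumptions for illustration, not a real product API or a production protocol:

```python
import hmac
import secrets

# Hypothetical flow: the institution generates a short one-time code,
# delivers it over a separate verified channel (e.g., its official app),
# then calls the customer back and asks them to read the code aloud
# before anything sensitive is discussed.

def generate_passcode(num_digits: int = 6) -> str:
    """Generate a random numeric passcode with a cryptographic RNG."""
    return "".join(str(secrets.randbelow(10)) for _ in range(num_digits))

def passcodes_match(expected: str, spoken: str) -> bool:
    """Compare in constant time so timing doesn't leak the code."""
    return hmac.compare_digest(expected.encode(), spoken.strip().encode())

# Usage: issue a code out of band, then verify what the caller reads back.
code = generate_passcode()
print(passcodes_match(code, code))  # True when the spoken code matches
```

The key design point is that the code travels over a channel the attacker doesn’t control, so a cloned voice alone is not enough to pass the check.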

MFA is not just technology

But MFA is not just about technology. Effectively combating voice fraud requires a combination of education, prudence, business practices, technology, and government regulation.

For individuals: take extra care. Understand that the voices of your loved ones may already have been captured and possibly cloned. Pay attention; ask questions; listen closely.

For organizations: your job is to create reliable methods for consumers to verify that they are communicating with legitimate representatives. As a matter of principle, you cannot pass the buck. In certain jurisdictions, a financial institution may even bear partial legal responsibility for fraud on customer accounts. This applies to any business or media platform people interact with.

For the government: continue to make it easier for technology companies to innovate. And continue to enact legislation to protect people’s right to internet safety.

It takes a village, but it is possible.

Rick Song is CEO of Persona.