Evil Twins and Digital Elves: How the Metaverse Will Usher in New Forms of Deception and Fraud

The metaverse is likely to usher in a new era of deception and fraud, fueled by increasingly sophisticated immersive technologies and artificial intelligence. "Evil" digital twins — accurate virtual replicas that look, sound, and act like you (or people you know and trust) but are actually controlled by fraudsters — could be one type of future fraud. Another possibility is the use of digital elves, which are virtual assistants that gently steer us toward sponsored content or propaganda.

We are obsessed with technologies that blur the lines between what is real and what is made up. Indeed, the metaverse and artificial intelligence are two of the hottest fields right now, defined by their ability to deceive us.

In the metaverse, the goal of VR and AR technology is to deceive the senses by making computer-generated content appear to be real-world experiences. In the field of artificial intelligence, Alan Turing famously proposed that the ultimate test of a human-level AI would be its ability to fool us into believing it was human.

Whether you like these technologies or not, their ability to deceive will soon transform society. Tens of billions of dollars are being invested in developing virtual worlds that achieve “suspension of disbelief,” while additional funds are being invested in populating those virtual worlds with AI-driven avatars that look, sound, and act so realistically that we won’t be able to tell the difference between actual people and virtual people.

According to a startling study recently published by researchers at Lancaster University and UC Berkeley, that future is already here. They created artificial human faces (i.e. photorealistic fakes) using a sophisticated form of AI known as a GAN (generative adversarial network) and presented those fakes, along with a mix of real faces, to hundreds of human subjects. They discovered that AI has advanced to the point where humans can no longer distinguish between real and virtual faces. And that wasn’t their most terrifying discovery.

The researchers also asked their test subjects to rate each face’s “trustworthiness,” and discovered that, on average, we humans rate AI-generated faces as significantly more trustworthy than real ones. This finding suggests that advertisers will increasingly use AI-generated people in place of human actors and models, particularly in the metaverse. Working with virtual people is less expensive and faster, and if they are perceived as more trustworthy, they will be more persuasive.

The Dangers of Handing Over Power to Corporations

What worries me most is the extreme power these metaverse technologies will hand to large corporations, not the technology itself. Companies in control of metaverse platforms will be able to monitor users at unprecedented levels, tracking every aspect of your virtual life: where you go, what you do, who you’re with, what you say, and what you look at. They will even track your facial expressions and vocal inflections to determine your emotional state as you react to your surroundings.

Don’t be fooled into thinking you’ll spend little time in a virtual world. You certainly will. That’s because augmented reality will shower us with virtual content. When Apple, Meta, Microsoft, Google, and others release their fashionable consumer-focused AR glasses, eyewear will become standard equipment in our digital lives. This transition will be as rapid as the transition from flip phones to smartphones. After all, you won’t be able to access the wealth of magical content that will soon fill our world if you don’t have AR glasses.

And, once we’re inside big tech’s metaverse platforms, they’ll use every tool at their disposal to maximize profits. This entails targeting us with AI-powered virtual people who engage us in promotional conversation. I know it sounds creepy, but these conversational agents will target us with extreme precision, monitoring our emotions in real time so they can optimize their promotional strategy (i.e. sales pitch). Yes, this will be a gold mine for predatory advertising, and that is the legitimate application of this technology.

What About The Illegitimate Use of Virtual People?

This brings us to the topic of identity theft in the metaverse. According to Executive Vice President Charlie Bell in a recent Microsoft blog post, fraud and phishing attacks in the metaverse could “come from a familiar face – literally – like an avatar that impersonates your coworker.” I wholeheartedly concur. My concern is that the ability to hijack or emulate avatars will destabilize our sense of identity, leaving us constantly unsure whether the people we meet are genuine or high-quality forgeries.

Furthermore, this type of digital impersonation doesn’t require advances in AI. That’s because an imposter avatar could simply be controlled by a human imposter hiding behind a virtual facade. The only AI needed is real-time voice conversion, allowing the criminal to imitate the voice of a friend, coworker, or other trusted figure. These technologies are already available and rapidly improving.

Digital Elves and Evil Twins

Accurately replicating a person’s appearance and sound in the metaverse is commonly referred to as creating a “digital twin.” Jensen Huang, the CEO of NVIDIA, gave a keynote address last month using a cartoonish digital twin of himself. He stated that fidelity will rapidly improve in the coming years, as will AI engines’ ability to autonomously control your twin so that you can be in multiple places at once. Yes, digital twins are on their way.

This is why we must prepare for what I refer to as “evil twins”: exact virtual replicas that look, sound, and act like you (or people you know and trust) and are used for fraudulent purposes. This type of identity theft will occur in the metaverse because it is a simple combination of current technologies developed for deepfakes, voice emulation, digital twinning, and AI-powered avatars.

This means that platform providers must develop equally powerful authentication technologies that will allow us to instantly determine whether we’re interacting with the person we expect (or their authorized digital twin) and NOT an evil twin that has been deployed fraudulently to deceive us. If platforms do not address this issue early on, the metaverse may be destroyed by an avalanche of deception.
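One building block such authentication could borrow from existing security practice is cryptographic challenge-response: the platform issues a fresh random challenge, and only the holder of a key registered to the real person (or their authorized digital twin) can compute the correct answer. The sketch below is a simplified illustration of that idea, not any platform's actual protocol; all function names are hypothetical, and a real system would layer this with key distribution and identity verification.

```python
import hmac
import hashlib
import secrets

def issue_challenge() -> bytes:
    """Platform side: generate a fresh, unpredictable nonce per session."""
    return secrets.token_bytes(32)

def respond(avatar_key: bytes, challenge: bytes) -> bytes:
    """Avatar side: prove possession of the registered key by keying an
    HMAC over the challenge."""
    return hmac.new(avatar_key, challenge, hashlib.sha256).digest()

def verify(registered_key: bytes, challenge: bytes, response: bytes) -> bool:
    """Platform side: recompute the expected response and compare in
    constant time to avoid timing leaks."""
    expected = hmac.new(registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A genuine avatar, holding the registered key, passes the check;
# an "evil twin" without the key cannot forge a valid response.
key = secrets.token_bytes(32)
challenge = issue_challenge()
assert verify(key, challenge, respond(key, challenge))
assert not verify(key, challenge, respond(secrets.token_bytes(32), challenge))
```

The important property is that looking and sounding like someone proves nothing here; only possession of the registered secret does, which is exactly the gap between appearance and identity that evil twins exploit.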

But, of all the technologies on the horizon, the ones we eagerly embrace will have the greatest impact, both positive and negative. This brings me to the electronic life facilitator, or ELF, which will be the most powerful form of coercion in the metaverse. These AI-powered avatars will be the natural evolution of digital assistants like Siri and Alexa as they transition from disembodied voices to personified digital beings of the future.

Big tech will position these AI agents as virtual life coaches who will accompany you throughout your day as you navigate the metaverse. And, because augmented reality will be our primary access point to virtual content, these digital elves will accompany you wherever you go, whether you’re shopping, working, walking down the street, or just hanging out. And if the platform providers succeed, you will come to regard these virtual beings as trusted figures in your life, a cross between a familiar friend, a helpful advisor, and a caring therapist.

Yes, this sounds creepy, which is why big tech will almost certainly make them cute and non-threatening, with innocent features and mannerisms that make them appear more like a magical character than a human-sized assistant following you around. This is why the term “elf” is appropriate, because they may appear to you as a cute fairy, sprite, or gremlin hovering over your shoulder — a small anthropomorphic character who can whisper in your ear or fly out in front of you to draw your attention to items in your virtual or augmented world.

If we don’t push for metaverse regulation, these life facilitators could become the most effective tools of persuasion ever created, subtly guiding us towards sponsored content ranging from products and services to political messaging and propaganda, all while smiling and laughing. I know it sounds far-fetched, but it’s not. Based on current technological advancements, digital elves and evil twins could be commonplace in our virtual lives by 2030.

Finally, VR, AR, and AI technologies have the potential to improve society. However, when these innovations are combined, they become especially dangerous because they all share one powerful feature: they blur the lines between the true and the false. This ability to deceive digitally necessitates a serious focus on security and regulation. Consumers in the metaverse will be extremely vulnerable to fraudsters, identity thieves, and predatory advertising tactics if such safeguards are not in place. The time to prepare has arrived.
