Metaverse creators grapple with the specter of sexual harassment



Paris (AFP) – Nina Jane Patel felt trapped and threatened when a group of male avatars closed in on her, intimidated her, touched her avatar against her will and photographed the incident.

The abuse took place in a virtual world, but it felt real to her, and stories like hers are a major headache for the architects of the Metaverse – the immersive 3D version of the internet being developed by companies such as Microsoft and Meta.

“I entered the shared space and almost immediately three or four male avatars came very close to me, so there was a sense of imprisonment,” Patel told AFP.

“They started verbally and sexually harassing me, making sexual innuendos,” said the London-based entrepreneur.

“They touched and fondled my avatar without my consent. And while they were doing that, another avatar took selfie photos.”

Patel, whose company develops kid-friendly Metaverse experiences, says it’s “nothing short of sexual assault.”

Her story and others like it have prompted deep soul-searching about the nature of harassment in virtual worlds and a search for an answer to the question: can an avatar suffer sexual assault?

Trick the brain

“VR (virtual reality) essentially relies on getting your brain to perceive the virtual world around you as real,” says Katherine Cross, a graduate student at the University of Washington who has researched online harassment.

“When it comes to harassment in virtual reality — say, a sexual assault — it can mean your body treats it as real for the first few moments before your conscious mind can catch up and confirm that it’s not happening physically.”

Her research suggests that, even though the space is virtual, such victimization causes real-world harm.

Patel emphasized this point, explaining that her ordeal briefly continued outside the constructed online space.

She said she eventually took off her VR headset after failing to get her attackers to stop, but she could still hear them through the speakers in her living room.

The male avatars taunted her with the words “don’t act like you don’t like it” and “that’s why you came here”.

The ordeal took place last November in Horizon Venues, a virtual world built by Meta, Facebook’s parent company.

The space hosts virtual events such as concerts, conferences, and basketball games.

The legal ramifications are still unclear, although Cross suggests that sexual harassment laws in some countries could be extended to these types of acts.

Protective bubbles

Meta and Microsoft — the two Silicon Valley giants dedicated to the Metaverse — have tried to quell the controversy by developing tools to keep unknown avatars out.

Microsoft has also removed dating spaces from its AltspaceVR metaverse.

“I think the harassment problem will largely solve itself, because people choose which platform they use,” says Louis Rosenberg, an engineer who developed the first augmented reality system for US Air Force research labs in 1992.

The entrepreneur, who has since founded a company specializing in artificial intelligence, told AFP he was more concerned about how companies will monetize the virtual space.

He says an advertising-based model will likely result in companies collecting all kinds of personal data, from users’ eye movements and heart rates to their real-time interactions.

“We need to change the business model,” he says, suggesting that security would be better protected if funding came from subscriptions.

However, tech companies have made themselves incredibly wealthy through a business model based on targeted advertising refined by huge streams of data.

And the industry is already trying to stay ahead of the curve by setting its own standards.

The Oasis Consortium, a think tank with ties to several tech companies and advertisers, has developed a set of safety standards it believes fit the Metaverse era.

“When platforms identify content that poses a real-world risk, it is important to notify law enforcement,” reads one of the standards.

But that leaves the main question unanswered: how do platforms define a “real-world risk”?
