The Apple Vision Pro is the newest tool for Australians to access three-dimensional, immersive online environments, also known as the metaverse. Released in Australia earlier this month, it allows users to take a (virtual) walk in the savanna from their living room, watch sports matches in an immersive environment, or even buy a house after completing a virtual inspection.
But these new online environments also have the potential to enable new crimes.
Metacrimes are crimes committed in the metaverse. They challenge our definitions of crime in the digital realm because they do not fit neatly into existing frameworks for reporting and investigating crime.
Our new study tackles this problem by shedding light on the key characteristics of metacrimes. And by understanding these crimes better, we will be better able to fight them.
Metaverse, metacrime and cybercrime
The metaverse is a loose term describing a kind of three-dimensional virtual world that users access via a virtual reality headset.
The 2018 movie Ready Player One provides a good visualization of what the metaverse might look like. In the movie, people put on special goggles and pick their avatar to enter a massive, interactive digital universe where they can do almost anything.
Our research found crimes committed in the metaverse share commonalities with conventional cybercrime. For example, both involve illegal activities happening online or in virtual spaces. As the technology advances, these crimes are also becoming more global and more anonymous, making it nearly impossible to catch the perpetrators.
But we also found a number of metacrime features that do not overlap with conventional cybercrime.
The unique features of metacrimes
One such feature is VR attacks, which are made to feel real through immersion and spatial presence.
Immersion is created through a number of sensory techniques in the headset, including visual, sound and haptic (touch). This creates a feeling of spatial presence that allows the user to perceive and experience the virtual space as real. This means negative experiences such as sexual violence and harassment also feel real.
Unless you are constantly recording your interactions in the metaverse via your headset, crucial evidence of such an attack will not be captured. Some companies have created user controls, such as a safety bubble that can be activated around your avatar. However, we do not yet have enough research to know whether these controls are effective.
Our study argues the impact of metacrimes will be exacerbated for vulnerable populations, especially children, who make up a large proportion of active metaverse users. The difficulty of verifying children’s ages online adds further concerns about grooming and child abuse.
These risks are not hypothetical.
In 2022, researchers from the Center for Countering Digital Hate recorded 11 hours and 30 minutes of user interactions on Meta’s Oculus headset in the popular app VRChat. They found that users, including children, encountered abusive behavior approximately every seven minutes.
Bullying and sexual harassment were also rife, and minors were often manipulated into using racist slurs and promoting extremist ideas.
In January 2024, police in the United Kingdom opened the first investigation into a rape in the metaverse, after a 16-year-old girl’s avatar was attacked. Police reported the victim suffered psychological and emotional trauma similar to that of an attack in the physical world.
The outcome of the case is still pending and is likely to set a legal precedent for the protection of minors in the metaverse. For now, metacrime presents new challenges in defining offenses, measuring harm and establishing liability for avatars’ actions that conventional cybercrime does not usually confront.
We also found other risks, including hacking and the recording of a person’s surroundings. Manipulation of VR technologies, such as haptic suits that let users physically engage with virtual spaces, also enables perpetrators to inflict direct physical harm on users.
This can include visual vertigo, motion sickness and neurological symptoms.
Where to from here?
Major tech companies such as Apple, Meta and Microsoft are investing heavily in the metaverse, developing both hardware and software to enhance their platforms. Research firm Gartner has predicted that by 2026, 25% of people will spend at least an hour each day in the metaverse for work, shopping, education, social media and entertainment.
This prediction may not be far from reality. A 2022 national online safety survey by Australia’s eSafety Commissioner found 49% of metaverse users said they had entered the metaverse at least once a month over the past year.
It is therefore urgent that governments and tech companies develop metaverse-specific legal and regulatory frameworks to safeguard immersive virtual environments. National and international legal frameworks will need to account for the new characteristics of metacrime we have identified. Law enforcement will need to upskill in metacrime reporting and investigations.
In the past, companies have talked about using new technologies responsibly, but they have not taken responsibility when their platforms were used for crime and harm. Instead, tech leaders deploy what researchers now call an “artful apology” (for example, “I’m sorry you experienced this on our platform”).
But this does nothing tangible to tackle the problem. Metaverse companies should establish clear regulatory frameworks for their virtual environments to make them safe for everyone to inhabit.
This article is republished from The Conversation under a Creative Commons license. Read the original article.