
New virtual moderators could oversee 3D virtual spaces and intervene against bad behavior

Artificially intelligent moderators that oversee 3D virtual spaces and intervene against bad behavior
Credit: University of Glasgow

New forms of artificially intelligent moderators that oversee 3D virtual spaces and intervene against bad behavior could help protect children from online bullying and harassment, a new study suggests.

Researchers from the University of Glasgow collaborated with parents and their children to gauge their reactions to “Big Buddy,” a prototype virtual moderator for online social spaces in virtual reality developed by the team.

During the study, Big Buddy helped parents stay informed about their children’s experiences in virtual spaces, and helped children feel safe and more secure by reacting to misbehavior with punishments similar to those meted out by teachers in real-life classrooms.

The team’s research, which will be presented as a paper at the ACM Interaction Design and Children conference on 20 June, could help inform the design of future AI-controlled moderators for use in virtual spaces like the Metaverse.

A group of 43 children aged between 8 and 16, recruited with assistance from the Scottish anti-bullying service RespectMe and Giggleswick School, took part in the study along with 17 of their parents. The team’s paper, titled “Big Buddy: Exploring Child Reactions and Parental Perceptions towards a Simulated Embodied Moderating System for Social Virtual Reality,” is published in Proceedings of the 22nd Annual ACM Interaction Design and Children Conference.

The children put on virtual reality headsets which placed them at a desk in a classroom environment the researchers created using the game development tool Unity 3D.

The children used handheld VR controllers to play rounds of a game that tasked them with building towers of blocks in a timed competition against another classmate, whose actions were pre-programmed. They earned points for building their towers quickly, before the timer ran out.
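As a rough illustration of that scoring rule, the logic might look something like the Python sketch below. The study’s classroom was actually built in Unity 3D (which uses C#), so this is a hypothetical reconstruction: the names Round and score_round and the point formula are invented here, not taken from the team’s code.

    from dataclasses import dataclass

    @dataclass
    class Round:
        time_limit_s: float          # length of the timed round, in seconds
        finish_time_s: float | None  # when the tower was completed; None if it never was
        blocks_placed: int           # height of the tower built this round

    def score_round(r: Round) -> int:
        """Award points only if the tower was finished before the timer ran out;
        taller towers and faster finishes earn more (the formula is illustrative)."""
        if r.finish_time_s is None or r.finish_time_s > r.time_limit_s:
            return 0
        time_left = r.time_limit_s - r.finish_time_s
        return r.blocks_placed * 10 + int(time_left)

    # e.g. finishing an 8-block tower with 17.5 seconds to spare in a 60-second round:
    print(score_round(Round(time_limit_s=60, finish_time_s=42.5, blocks_placed=8)))  # 97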

In each round, the children faced scenarios designed to measure their reactions to being confronted with bad behavior and to the resulting responses from Big Buddy, an adult-sized virtual character standing at the front of the classroom.

In each scenario, the children’s opponent moved towards them and knocked over the tower they had built. In one scenario, Big Buddy was absent and no action was taken against the bad behavior. In the others, Big Buddy intervened to reset the competitor’s points, inform their parents they had misbehaved, block them from the game, or some combination of the three.
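Purely for illustration, the menu of responses described above could be modeled along these lines. This is a speculative Python sketch of the article’s description, not the paper’s implementation; Intervention, Player and apply_interventions are hypothetical names.

    from enum import Flag, auto

    class Intervention(Flag):
        NONE = 0
        RESET_POINTS = auto()
        NOTIFY_PARENTS = auto()
        BLOCK_FROM_GAME = auto()

    class Player:
        def __init__(self, name: str) -> None:
            self.name = name
            self.points = 0
            self.blocked = False

    def apply_interventions(offender: Player, actions: Intervention) -> None:
        """Apply whichever of the three responses a scenario combines."""
        if Intervention.RESET_POINTS in actions:
            offender.points = 0
        if Intervention.NOTIFY_PARENTS in actions:
            print(f"[Big Buddy] Alerting {offender.name}'s parents to the misbehavior")
        if Intervention.BLOCK_FROM_GAME in actions:
            offender.blocked = True

    # One scenario combined all three responses:
    bully = Player("competitor")
    bully.points = 120
    apply_interventions(
        bully,
        Intervention.RESET_POINTS | Intervention.NOTIFY_PARENTS | Intervention.BLOCK_FROM_GAME,
    )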


Dr. Mark McGill, a lecturer in human-computer interaction in the School of Computing Science, is one of the authors of the paper. He said, “Part of the impact of social VR is that it can offer very realistic-feeling experiences, giving users a virtual body and a presence that leads to a strong psychological sense of being in a real space.

“That can be a really positive experience when the virtual space feels safe and secure for the user. However, it can quickly turn negative when users choose to misbehave, and there have already been many reports of bullying and harassment in VR. The realism of the VR experience can make bullying feel just as upsetting as when it happens in real life, and children and parents can be ill-equipped to know how to respond using in-game tools like blocking or reporting.

“Bullying might even be more likely in unmoderated virtual spaces, where the social inhibitions that restrict misbehavior in real life are lower because users are anonymous. What we wanted to explore in the Big Buddy study was how children and their parents felt when games were disrupted, and how they reacted to the interventions Big Buddy made.”

Between rounds, the children’s reactions to the provocative scenarios were assessed with a questionnaire that scored their emotional responses on scales from sad to happy, calm to angry, and scared or intimidated to safe. Their perceptions of the fairness of Big Buddy’s reactions were also recorded.
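As an assumption about how such responses might be recorded (the article does not give the questionnaire’s actual format or scale range), each round’s ratings could be captured with a small structure like this, where the 1-to-7 range is invented for the example:

    from dataclasses import dataclass

    @dataclass
    class EmotionalResponse:
        sad_to_happy: int    # 1 = very sad ... 7 = very happy
        calm_to_angry: int   # 1 = very calm ... 7 = very angry
        scared_to_safe: int  # 1 = very scared or intimidated ... 7 = very safe

        def __post_init__(self) -> None:
            for value in (self.sad_to_happy, self.calm_to_angry, self.scared_to_safe):
                if not 1 <= value <= 7:
                    raise ValueError("scale responses must fall between 1 and 7")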

Meanwhile, parents were asked to watch video footage of their children reacting to the scenarios and to offer their own emotional reactions to seeing their children be affected by unwanted behavior, and their perception of Big Buddy’s interventions.

Cristina Fiani, a postgraduate research student in the School of Computing Science, co-authored the paper. She said, “We saw very clearly in the responses that most of the children felt safer when Big Buddy was present in the classroom, and that many of them thought Big Buddy helped to make the game fairer. Big Buddy was seen as filling a similar role to a referee or a teacher who would make the game fairer and punish others if they did something wrong.

“Many parents reported some level of upset, shock or worry at seeing their children affected by bad behavior. However, it was common for parents to feel reassured by Big Buddy’s presence, and they mostly appreciated Big Buddy’s interventions as reasonable responses. They also reported that they may appreciate receiving alerts in real life to help them know when their children are faced with challenging situations, and to allow them to intervene themselves if required.

“The feedback we received suggests that Big Buddy offers a useful and perhaps necessary model for future AI moderators. Big Buddy offers children autonomy but helps them to feel safe, and potentially offers parents a greater degree of involvement in and awareness of what their children are seeing and doing in VR.”

Dr. Mohamed Khamis, who led the research, added: “Human moderators in virtual spaces, if they exist at all, are outnumbered and under-resourced, so AI has a lot of potential to provide an alternative that can impartially monitor behavior and keep parents informed of what’s going on in their child’s headset.

“Although Big Buddy was controlled by a human operator in this initial study, we’re encouraged by the feedback we gathered from parents and children which suggests that an automated version of this model of moderation could be valuable in future VR social spaces. However, while it’s important that spaces feel safe, that needs to be balanced with maintaining users’ agency to have fun on their own terms without feeling too heavily monitored.

“We’re keen to continue developing the Big Buddy model by working closely with teachers and experts in child development to help fine-tune its effectiveness, and to look at developing a robust AI which can make it useful in VR social spaces in the future.”

More information:
Cristina Fiani et al, Big Buddy: Exploring Child Reactions and Parental Perceptions towards a Simulated Embodied Moderating System for Social Virtual Reality, Proceedings of the 22nd Annual ACM Interaction Design and Children Conference (2023). DOI: 10.1145/3585088.3589374

Provided by
University of Glasgow


Citation:
New virtual moderators could oversee 3D virtual spaces and intervene against bad behavior (2023, June 19)
retrieved 19 June 2023
from https://techxplore.com/news/2023-06-virtual-moderators-oversee-3d-spaces.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
