Simulated eye movement helps shape the Metaverse

Computer engineers at Duke University have developed virtual eyes that simulate how humans look at the world. The virtual eyes are accurate enough to be used to train virtual reality and augmented reality programs, which should prove incredibly useful for developers looking to build applications in the metaverse.

The results will be presented May 4-6 at the International Conference on Information Processing in Sensor Networks (IPSN).

The new virtual eyes are called EyeSyn.

Training algorithms to act as eyes

Maria Gorlatova is the Nortel Networks Assistant Professor of Electrical and Computer Engineering at Duke University.

“If you are interested in finding out whether a person is reading a comic or advanced literature just by looking at their eyes, you can do that,” Gorlatova said.

“But training this kind of algorithm requires data from hundreds of people wearing headsets for hours at a time,” Gorlatova said. “We wanted to develop software that not only reduces the privacy concerns that come with collecting this type of data, but also allows smaller companies that don’t have those levels of resources to get into the metaverse.”

Eye movements reveal a great deal about us, such as whether we are bored or excited, where our attention is focused, or whether we are expert at a particular task.

“Where you prioritize your vision also says a lot about you as a person,” Gorlatova said. “It can inadvertently reveal gender and racial biases, interests we don’t want others to know about, and information we may not even know about ourselves.”

Eye movement data is extremely useful for companies that build platforms and software in the metaverse. It can allow developers to tailor content to a user’s engagement responses, for example, or to reduce the rendering resolution in their peripheral vision, which saves computing power.
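As a rough illustration of that peripheral-resolution idea (often called foveated rendering), here is a minimal Python sketch that keeps full detail near a gaze point and cheaply downsamples everything else. The `foveated_blur` function, the frame layout, the foveal radius, and the 4x downsampling factor are all illustrative assumptions, not part of the Duke work.

```python
import numpy as np

def foveated_blur(frame: np.ndarray, gaze_xy: tuple[int, int], fovea_radius: int = 80) -> np.ndarray:
    """Render full detail near the gaze point and low detail in the periphery.

    `frame` is an H x W x 3 image array; `gaze_xy` is the (x, y) pixel that the
    tracked (or simulated) gaze currently falls on.
    """
    h, w = frame.shape[:2]

    # Cheap "low resolution" periphery: downsample by 4x, then upsample back.
    low = frame[::4, ::4].repeat(4, axis=0).repeat(4, axis=1)[:h, :w]

    # Distance of every pixel from the gaze point.
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])

    # Keep full resolution inside the foveal radius, low resolution outside.
    mask = (dist <= fovea_radius)[..., None]
    return np.where(mask, frame, low)
```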

The team of computer scientists, which included former postdoctoral fellow Guohao Lan and current doctoral student Tim Scargill, set out to develop virtual eyes that mimic how an average human responds to a variety of visual stimuli. To do this, they drew on the cognitive science literature that explores how humans view the world and process visual information.

Lan is currently an Assistant Professor at Delft University of Technology in the Netherlands.

“If you give EyeSyn many different inputs and run it enough times, it will generate a synthetic eye movement dataset large enough to train a machine learning classifier for a new program,” Gorlatova said.
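To give a sense of what that workflow might look like, below is a hedged Python sketch: a toy stand-in for an EyeSyn-style simulator generates labeled gaze traces, simple summary features are extracted, and an off-the-shelf classifier is trained on purely synthetic data. The `simulate_gaze` and `gaze_features` functions, the “reading”/“watching” labels, and all of the numbers are illustrative assumptions, not EyeSyn’s actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def simulate_gaze(activity: str, n_samples: int = 300, rng=None) -> np.ndarray:
    """Toy stand-in for an EyeSyn-style simulator: emit a 2D gaze trace whose
    step statistics differ by activity (hypothetical labels)."""
    rng = rng or np.random.default_rng()
    if activity == "reading":            # small, mostly horizontal saccades
        steps = rng.normal([8.0, 0.5], [3.0, 1.0], size=(n_samples, 2))
    else:                                # "watching": larger, less structured moves
        steps = rng.normal([0.0, 0.0], [25.0, 20.0], size=(n_samples, 2))
    return np.cumsum(steps, axis=0)      # integrate steps into screen positions

def gaze_features(trace: np.ndarray) -> np.ndarray:
    """Summary features: mean and spread of saccade steps plus overall spread."""
    steps = np.diff(trace, axis=0)
    return np.concatenate([steps.mean(axis=0), steps.std(axis=0), trace.std(axis=0)])

# Build a purely synthetic training set and fit an off-the-shelf classifier.
rng = np.random.default_rng(0)
X, y = [], []
for label, activity in enumerate(["reading", "watching"]):
    for _ in range(200):
        X.append(gaze_features(simulate_gaze(activity, rng=rng)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy on synthetic data:", clf.score(X, y))
```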

System Test

The researchers tested the accuracy of the virtual eyes against publicly available data. First, the virtual eyes “watched” videos of Dr. Anthony Fauci addressing the media at press conferences, and the team compared the simulated eye movements to data from actual viewers. They also compared a synthetic dataset of the virtual eyes looking at art with real datasets collected from people browsing a virtual art museum. The results show that EyeSyn can closely match the distinct patterns of real gaze signals and simulate the different ways people’s eyes respond.
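One simple way to compare a synthetic gaze dataset with a real one, in the spirit of that evaluation, is to bin each gaze trace into a fixation heatmap and correlate the two maps. The sketch below is an assumption-laden illustration: the 1920x1080 screen size, the bin count, and the use of Pearson correlation are choices of this example, not the metric reported by the researchers.

```python
import numpy as np

def gaze_heatmap(trace: np.ndarray, bins: int = 32) -> np.ndarray:
    """Bin a gaze trace (N x 2 array of x, y points) into a normalized 2D histogram."""
    hist, _, _ = np.histogram2d(trace[:, 0], trace[:, 1], bins=bins,
                                range=[[0, 1920], [0, 1080]])
    return hist / hist.sum()

def heatmap_similarity(real: np.ndarray, synthetic: np.ndarray) -> float:
    """Pearson correlation between flattened heatmaps; 1.0 means identical attention maps."""
    return float(np.corrcoef(gaze_heatmap(real).ravel(),
                             gaze_heatmap(synthetic).ravel())[0, 1])
```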

Gorlatova says these findings suggest that the virtual eyes are realistic enough for companies to use them as a baseline for training new metaverse platforms and software.


“Synthetic data alone isn’t perfect, but it’s a good starting point,” Gorlatova said. “Small businesses can use it instead of spending the time and money trying to build their own real-world datasets (with human subjects). And because the algorithms can be personalized on local systems, people don’t have to worry about their eye movement data becoming part of a large database.”

