Game-playing automaton acts like an ‘irrational’ human

Credit: Unsplash/CC0 Public Domain

Humans make lots of irrational decisions in predictable ways, but what if we’re all just doing our best within the limits of our abilities?

Researchers simulated human behaviors using a probabilistic finite automaton, a well-known model of limited computational power. They programmed the automatons to compete against each other in a wildlife poaching game, as either a rhino poacher or a ranger trying to stop the poaching.

When the automatons could remember everything, they settled into an optimal game strategy. But when researchers limited their memories, they took some decision-making shortcuts—the same kinds as actual humans playing the game.

This new work supports the idea of bounded rationality, that “sometimes we do silly things or make systemic mistakes, not because we’re irrational but because we have limited resources,” said first author Xinming Liu ’20. “Oftentimes, we cannot remember everything that happened in the past or we don’t have enough time to make a fully rational decision.”

Liu presented the work, “Strategic Play By Resource-Bounded Agents in Security Games,” in May at the 2023 International Conference on Autonomous Agents and Multiagent Systems. The senior author is Joseph Halpern, professor of computer science in the Cornell Ann S. Bowers College of Computing and Information Science.

In the poaching game, there are a handful of sites, each with a different probability of containing a rhino. In each round, the poacher and ranger choose a site to visit, making their decisions based on data from previous rounds. The poacher gains points by catching a rhino; the ranger gains points by catching the poacher.
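As a rough illustration of the round structure described above, here is a minimal sketch in Python. The number of sites, the rhino probabilities, and the one-point payoffs are assumptions for illustration, not the parameters used in the study.

    import random

    # Illustrative sketch of one round of the poaching game.
    # Site probabilities and point values are assumed, not taken from the paper.
    SITE_RHINO_PROB = [0.7, 0.4, 0.1]  # chance each site contains a rhino

    def play_round(poacher_site, ranger_site):
        """Return (poacher_points, ranger_points) for one round."""
        rhino_present = random.random() < SITE_RHINO_PROB[poacher_site]
        if poacher_site == ranger_site:
            return (0, 1)   # ranger catches the poacher
        if rhino_present:
            return (1, 0)   # poacher catches a rhino unseen
        return (0, 0)       # nothing happens this round

    # Example: poacher visits site 0 while the ranger guards site 2.
    print(play_round(0, 2))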

If the poacher and ranger can recall every move in the game, they soon settle into a Nash equilibrium—a rational, unchanging pair of strategies. But if the automatons have more limited memory—so they can’t remember where they saw that rhino 10, 100 or 1,000 rounds back—they make seemingly irrational, human-like decisions.
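To picture what limited memory could look like, the sketch below keeps only a short sliding window of past visits and chooses the next site from those few remembered outcomes. The window size, site count, and scoring rule are illustrative assumptions; the paper formalizes limited memory as a probabilistic finite automaton with a bounded number of states.

    from collections import deque
    import random

    # A memory-limited poacher: it remembers only its last `memory_size`
    # visits, not the full game history. All parameters here are illustrative.
    class BoundedMemoryPoacher:
        def __init__(self, num_sites=3, memory_size=5):
            self.num_sites = num_sites
            self.memory = deque(maxlen=memory_size)  # old rounds fall out

        def choose_site(self):
            # With no memories yet, explore uniformly at random.
            if not self.memory:
                return random.randrange(self.num_sites)
            # Otherwise favor sites where remembered visits found rhinos.
            scores = [1.0] * self.num_sites
            for site, found_rhino in self.memory:
                if found_rhino:
                    scores[site] += 2.0
            return random.choices(range(self.num_sites), weights=scores)[0]

        def observe(self, site, found_rhino):
            self.memory.append((site, found_rhino))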

One human behavior the automatons emulated was probability matching. This occurs when a person guesses the results of tosses of a coin weighted to land heads three out of four times. Instead of always guessing heads, which would give a 75% success rate, many people guess heads three-quarters of the time, which lowers their success rate to about 63%.
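A quick check of those numbers, assuming the guesses are made independently of the tosses:

    # Expected accuracy on a coin that lands heads with probability p.
    p = 0.75
    always_heads = p                        # guess heads every time
    matching = p * p + (1 - p) * (1 - p)    # guess heads 75% of the time
    print(always_heads, matching)           # 0.75 0.625, i.e. about 63%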

In the game, this means the poacher made more visits to sites where they most often encountered rhinos in the past, and fewer visits to sites that rarely had a rhino. For the automatons, this strategy wasn’t ideal, but still yielded decent results.

Another irrational human behavior that led to good game performance was overweighting significant results—a phenomenon in which important or traumatic incidents loom especially large in the memory. For example, a person might drive slowly down a stretch of road where they received a speeding ticket many years ago.

When the researchers programmed the poachers to overweight previous encounters with the ranger, it paid off in the game. They ended up avoiding sites where the ranger was most likely to be.
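One rough way to picture that overweighting in code: when a site is scored from remembered visits, a round in which the poacher was caught counts far more heavily than an ordinary round. The history format and the weight of 5 are illustrative assumptions, not values from the paper.

    # Sketch of overweighting a salient event when scoring a site.
    CAUGHT_WEIGHT = 5

    def site_score(history):
        """history: list of (found_rhino, was_caught) outcomes at one site."""
        score = 0.0
        for found_rhino, was_caught in history:
            if was_caught:
                score -= CAUGHT_WEIGHT   # a capture looms large in memory
            elif found_rhino:
                score += 1               # an ordinary success counts once
        return score

    # A single capture outweighs several successful visits:
    print(site_score([(True, False)] * 3 + [(False, True)]))  # 3 - 5 = -2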

To see how these results match up to actual humans, Liu recruited approximately 100 people to play as the poacher on an online platform. While some humans chose the same site every time or picked randomly just to finish the game and receive payment, others chose sites purely based on probability matching. A third group assumed the ranger was probability matching, and visited sites accordingly to avoid the ranger.

The similarities in gameplay between the humans and the automatons show that the model can re-create at least two human behaviors which, far from being irrational, actually improved performance.

“Another way to interpret it is to say that you’re doing the best you can given your computational limitations,” Halpern said. “And that strikes me as pretty rational.”

More information:
Xinming Liu et al, Strategic Play By Resource-Bounded Agents in Security Games, International Conference on Autonomous Agents and Multiagent Systems ’23 (2023). DOI: 10.5555/3545946.3598973, dl.acm.org/doi/10.5555/3545946.3598973

Provided by Cornell University


Citation:
Game-playing automaton acts like an ‘irrational’ human (2023, July 10)
retrieved 10 July 2023
from https://techxplore.com/news/2023-07-game-playing-automaton-irrational-human.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
