
Training AI to see more like humans

At Brown University, an innovative new project is revealing that teaching artificial intelligence to perceive things more like people may begin with something as simple as a game. The project invites participants to play an online game called Click Me, which helps AI models learn how people see and interpret images. While the game is fun and accessible, its purpose is more ambitious: to understand the root causes of AI errors and to systematically improve how AI systems represent the visual world.

Over the past decade, AI systems have become more powerful and widely used, particularly in tasks like recognizing images. For example, these systems can identify animals and objects in images or diagnose medical conditions from them. However, they sometimes make mistakes that humans rarely do. For instance, an AI algorithm might confidently label a photo of a dog wearing sunglasses as a completely different animal, or fail to recognize a stop sign that is partially covered by graffiti. As these models become larger and more complex, these kinds of errors become more frequent, revealing a growing gap between how AI and humans perceive the world.

Recognizing this challenge, researchers funded in part by the U.S. National Science Foundation propose to combine insights from psychology and neuroscience with machine learning to create the next generation of human-aligned AI. Their goal is to understand how people process visual information and translate those patterns into algorithms that guide AI systems to act in similar ways.

The Click Me game plays a central role in this vision. In the game, participants click on the parts of an image they believe will be most informative for the AI to recognize it. Because the AI sees only the regions that have been clicked, players are encouraged to think strategically about which parts of the image carry the most information, rather than clicking at random, so that the AI learns as much as possible from each round.
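For readers curious about the mechanics, here is a minimal Python sketch of how player clicks might be turned into the partial view the AI receives. The function names, the Gaussian "bubble" radius and the image size are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def clicks_to_mask(clicks, height, width, radius=9.0):
    """Turn player clicks into a soft mask: each click reveals a small
    Gaussian 'bubble' of the image, and unclicked regions stay near zero.
    (Illustrative sketch; not the project's implementation.)"""
    yy, xx = np.mgrid[0:height, 0:width]
    mask = np.zeros((height, width), dtype=np.float32)
    for cx, cy in clicks:  # (column, row) coordinates of each click
        bubble = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * radius ** 2))
        mask = np.maximum(mask, bubble)
    return mask

def masked_view(image, clicks):
    """Return the partial image the model 'sees': clicked regions only."""
    mask = clicks_to_mask(clicks, image.shape[0], image.shape[1])
    return image * mask[..., None]  # broadcast the mask over color channels

# Example: a 224x224 RGB image with three clicks near the object of interest.
image = np.random.rand(224, 224, 3).astype(np.float32)
revealed = masked_view(image, clicks=[(112, 100), (120, 110), (105, 118)])
```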

The AI-human alignment occurs at a later stage, during which the AI is trained to categorize images. In this "neural harmonization" procedure, the researchers force the AI to focus on the same image features that humans had identified — those clicked during the game — to make sure its visual recognition strategy aligns with that of humans.
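One way such an alignment objective could look is sketched below in PyTorch: an ordinary classification loss is combined with a penalty on the mismatch between the model's gradient-based saliency map and the aggregated human click map. The published harmonization procedure differs in its details, and every name here (harmonized_loss, human_maps, align_weight) is a placeholder for illustration.

```python
import torch
import torch.nn.functional as F

def harmonized_loss(model, images, labels, human_maps, align_weight=1.0):
    """Illustrative training objective: classify the image and, at the same
    time, penalize any mismatch between the model's gradient-based saliency
    map and the aggregated human click map. (Sketch only.)"""
    images = images.requires_grad_(True)
    logits = model(images)                        # (B, num_classes)
    task_loss = F.cross_entropy(logits, labels)

    # Gradient-based saliency: which pixels most affect the true-class score.
    score = logits.gather(1, labels.unsqueeze(1)).sum()
    saliency = torch.autograd.grad(score, images, create_graph=True)[0]
    saliency = saliency.abs().sum(dim=1)          # (B, H, W), channels collapsed

    # Normalize both maps and penalize the difference.
    def normalize(m):
        flat = m.flatten(1)
        return flat / (flat.norm(dim=1, keepdim=True) + 1e-8)

    align_loss = F.mse_loss(normalize(saliency), normalize(human_maps))
    return task_loss + align_weight * align_loss
```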

What makes this project especially remarkable is how successfully it has engaged the public. NSF funding has allowed the team to attract thousands of people to participate in Click Me, helping it gain attention across platforms like Reddit and Instagram, and generating tens of millions of interactions with the website to help train the AI model. This type of large-scale public participation allows the research team to rapidly collect data on how people perceive and evaluate visual information.

At the same time, the team has developed a new computational framework for training AI models on this kind of behavioral data. By aligning AI response times and choices with those of humans, the researchers can build systems that match not only what humans decide but also how long they take to decide, leading to a more natural and interpretable decision-making process.
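As a hedged illustration of what such a behavioral-alignment objective might look like, the sketch below pairs a classification term that matches human choices with a regression term, from a hypothetical auxiliary head, that matches human response times. The tensor names and weighting are assumptions, not the team's actual framework.

```python
import torch.nn.functional as F

def behavioral_alignment_loss(choice_logits, predicted_rt,
                              human_choices, human_rts, rt_weight=0.5):
    """Hypothetical joint objective: a classification term that matches
    human choices plus a regression term that matches human response
    times, produced here by an assumed auxiliary 'response time' head."""
    choice_loss = F.cross_entropy(choice_logits, human_choices)
    rt_loss = F.mse_loss(predicted_rt.squeeze(-1), human_rts)
    return choice_loss + rt_weight * rt_loss
```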

The practical applications of this work are wide-ranging. In medicine, for instance, doctors need to understand and trust the AI tools that assist with diagnoses. If AI systems can explain their conclusions in ways that match human reasoning, they become more reliable and easier to integrate into care. Similarly, in self-driving cars, AI that better understands how humans make visual decisions can help predict driver behavior and prevent accidents. Beyond these examples, human-aligned AI could improve accessibility tools, educational software and decision support across many industries. Importantly, this work also sheds light on how the human brain works. By emulating human vision in AI systems, the researchers have been able to develop more accurate models of human visual perception than were previously available.

This initiative underscores why federal support for foundational research matters. Through NSF's investment, researchers are advancing the science of AI and its relevance to society. The research not only pushes the boundaries of knowledge but also delivers practical tools that can improve the safety and reliability of the technologies we use daily.
