A question for the AI age: do machines and humans learn the same way?

The dramatic surge of artificial intelligence (AI) has also made visible the machinery humming underneath that makes its applications possible.

From its origins in models that could do little more than sort data into groups, AI today excels at too many tasks to count. In 2024 alone, smartphones began shipping with AI models built in, while five of the seven men who won the year's science Nobel Prizes did so for work involving AI.

As it happens, the age of AI also promises to be a time in which scientists will learn a lot about the human brain as well. Existing AI models are inspired mostly by the brains of animals. Since these brains haven’t been easy to study, scientists have been looking to AI models as a proxy.

How do humans learn?

Machines excel at things that are nearly impossible for most humans, including rapidly analysing large datasets, predicting complex patterns, and learning to play chess like a grandmaster within a day. Yet neuroscientists say machines also struggle with tasks that human children find easy, like understanding motives.

“The paradox of today’s AI stems from the fact that the human brain has an evolutionary, biological origin and AI does not,” Celeste Kidd, associate professor of psychology at the University of California, Berkeley, said. “It is likely that [for] the type of intelligence that we have evolved for taking care of helpless offspring, we need to be able to read the intentions of a child that is running towards a cliff [or one] that’s not yet able to feed themselves and say that they are hungry.”

According to Arjun Ramakrishnan, assistant professor in the department of biological sciences and bioengineering at IIT-Kanpur, “at the heart of what drives learning in humans and animals” is a “dual focus on both meeting immediate biological needs and adapting to a constantly shifting environment.”

“The need to secure resources and maintain balance in the face of an ever-changing environment,” he added, “likely spurred the evolution of sophisticated neural mechanisms, driving not just simple responses to immediate needs but also complex learning and strategic decision-making abilities.”

Learning is thus not just a process of acquiring static information but an ongoing, dynamic interaction between an organism and its environment.

“The brain, shaped by evolutionary pressures, must adapt not only to predictable stimuli but also to the unpredictability of environmental fluctuations,” he added. “This complexity is reflected in the ability of humans and animals to sense and respond to rapid changes in the environment and social interactions, a key advantage for survival.”

Learning is thus long-duration, interactive, and includes feedback loops between the organism’s internal state and external challenges.

Humans’ upper hand

According to researchers at the Heidelberg Laureate Forum, a meeting held in September 2024 in Germany, machines are not curious. “Unlike AI systems, children are naturally curious, exploring the world on their own while simultaneously learning within a social and cultural context,” Kidd said at the forum. “Our curiosity is driven by knowing what we don’t know.”

According to Kidd, the information children discover when they seek it is of a different type than the data fed into AI systems.

“The single experience of a child with an apple is very different from Google Photos labeling an apple in an image. A child’s experience with an apple is sensory. They’re feeling the apple, they’re seeing the apple, it’s multi-dimensional. The data people are getting is much, much richer. And there are tons of correlations you can pick up on in order to leverage things like learning and generalisation.”

The human brain and the body have been ‘trained’ on such data over millennia.

Human learning thus requires much less data to solve a problem with the same level of proficiency, according to Ashesh Dhawale, the DBT Wellcome Trust India Alliance Intermediate Fellow at the Centre for Neuroscience, Indian Institute of Science, Bengaluru.

For example, although the AlphaZero model developed by Google subsidiary DeepMind is better at chess than any human player, it reached this level of proficiency only after playing around 40 million games during its training, Dhawale said. “In contrast, it is estimated that humans need some tens of thousands of training games to reach grandmaster proficiency.”

“One of the key advantages humans have over machines lies in the speed and efficiency of learning,” Ramakrishnan said. “We can absorb new information rapidly, building on past experiences and knowledge in a flexible, adaptive way.”

This ability to continuously improve on prior lessons without extensive reprogramming gives humans a significant edge in dynamic environments where new information and challenges emerge constantly.

Humans are also remarkably good at “transfer learning”. “We can apply knowledge and skills from one context to entirely different, unfamiliar scenarios with relative ease,” Ramakrishnan said. This ability to generalise is still a significant challenge for machines and artificial networks, which are typically confined to narrow domains and struggle to adapt to new or unforeseen contexts without retraining.

The communication between neurons in the human brain takes the form of biochemical processes that operate more slowly than the channels between neurons in artificial neural networks, according to Brigitte Röder, professor of biological psychology and neuropsychology at the University of Hamburg. Yet the human brain makes decisions stunningly fast using abstractions and generalisation whereas machines still struggle to do this.

Dhawale used the example of chess. “If you are proficient at chess, this ability will likely extend to other board games like checkers. This means humans can learn the structure underlying a task and generalise it to quickly solve new tasks — that is, they can learn to learn,” he said.

Researchers are now attempting to bring this paradigm to machine learning, an approach called meta-learning. It’s not unlikely that machines will catch up here as well.

Humans also excel at motor-skill learning. “Somehow humans and animals are very efficient at learning how to move,” according to Dhawale, “but we don’t know exactly why this is the case.”

Neural networks are great at navigating tasks involving discrete choices but they stumble with movement. One reason is that even a simple motion, such as reaching for a fruit on a table, requires a learning agent to optimise many independent, continuously varying parameters across many degrees of freedom.
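
As a rough illustration of that point, and not something from the article, here is a minimal Python sketch in which even a toy two-joint arm turns "reach for the fruit" into an optimisation over continuously varying joint angles. The link lengths and target position are made up, and real limbs add many more joints, muscles, and noise.

```python
# Hypothetical toy example: a planar two-joint arm reaching for a target.
# Link lengths and target are arbitrary; real motor learning involves far
# more degrees of freedom than the two angles optimised here.
import numpy as np

L1, L2 = 0.3, 0.25               # upper-arm and forearm lengths (metres)
target = np.array([0.4, 0.2])    # position of the "fruit" on the table

def hand_position(angles):
    """Forward kinematics: joint angles -> hand position."""
    a1, a2 = angles
    elbow = np.array([L1 * np.cos(a1), L1 * np.sin(a1)])
    return elbow + np.array([L2 * np.cos(a1 + a2), L2 * np.sin(a1 + a2)])

def cost(angles):
    """Squared distance between the hand and the target."""
    return np.sum((hand_position(angles) - target) ** 2)

# Gradient descent with numerical gradients over the continuous parameters.
angles = np.array([0.1, 0.1])
eps, lr = 1e-5, 0.5
for _ in range(2000):
    grad = np.array([(cost(angles + eps * np.eye(2)[i]) - cost(angles)) / eps
                     for i in range(2)])
    angles -= lr * grad

print("hand:", hand_position(angles), "target:", target)
```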

Then there’s energy efficiency. According to Ramakrishnan, the human brain’s low power consumption becomes readily apparent in tasks such as recognising patterns, making decisions, and conducting social interactions. Machines can operate very fast but their energy consumption is also much higher, especially when they process large datasets.

Where machines excel

However, machines are more reliable.

Unlike machines, which are built for repeatability and can perform the same task over and over with consistent precision, humans contend with fatigue, emotional decision-making, and distractions.

“While we are designed to operate in volatile, ever-changing environments and our ability to explore and adapt is one of our greatest strengths, this flexibility often comes at the cost of consistency,” Ramakrishnan said.

In contrast to the brain, neural network models are often trained to search exhaustively for solutions to complex tasks, Dhawale explained. This means they are more likely than humans to discover new, better solutions to problems. At games like chess and Go, AI models have been known to develop moves that surprise even expert players.

“One could argue that the strategies used by humans to learn may be more efficient but can’t discover the most optimal solutions because they are not designed to search exhaustively.”

From artificial to human

The differences between human and machine learning could elucidate where each kind of neural network, artificial or biological, falls short.

“Neurons are often treated simplistically as point processes that communicate via electrical impulses, essentially operating in an on/off mode,” Ramakrishnan said. “This reductionist approach has nonetheless allowed us to uncover fundamental principles that underlie complex cognitive behaviours.”

At the core of this approach is the idea that feedback loops drive learning. Researchers used it to develop reinforcement learning, a family of training algorithms that has also been remarkably successful at explaining how organisms update their knowledge and adapt based on their experiences, according to Ramakrishnan.
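
As a hypothetical illustration of that feedback-loop idea, and not the specific algorithms the researchers refer to, here is a minimal reinforcement-learning-style sketch in Python: an agent tries actions, observes rewards, and nudges its value estimates toward what it actually experienced. The reward values, learning rate, and exploration rate are arbitrary.

```python
# Toy value-learning loop: estimates are updated by a prediction error,
# the gap between the reward received and the reward expected.
import random

true_rewards = {"a": 0.2, "b": 0.8}   # hypothetical average payoff of two actions
values = {"a": 0.0, "b": 0.0}         # the agent's current estimates
alpha, epsilon = 0.1, 0.2             # learning rate and exploration rate

for _ in range(1000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < epsilon:
        action = random.choice(["a", "b"])
    else:
        action = max(values, key=values.get)
    # Noisy feedback from the environment.
    reward = true_rewards[action] + random.gauss(0, 0.1)
    # Feedback loop: move the estimate toward the observed outcome.
    values[action] += alpha * (reward - values[action])

print(values)   # estimates approach the true average payoffs
```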

The development of artificial neural networks has also expanded our understanding of how memories could be stored and accessed in the brain: as dynamic processes that can be activated and adjusted over time rather than as records preserved in particular areas.

Artificial neural networks with this ability can perform better. “The development of algorithms that handle short-term and long-term memory processes in artificial networks has provided us with a deeper understanding of how the brain may operate in these domains,” Ramakrishnan said.

More broadly, AI models’ successes in the real world have prompted neuroscientists and cognitive scientists to revisit ideas of how the human brain learns.

For some time starting in the mid-20th century, scientists assumed the brain represented information about the world in a symbolic manner and that its many abilities — perception, planning, reasoning, etc. — were achieved through symbolic operations. Many early attempts at building AI models thus used symbolic approaches. One well-known application was expert systems: models that carried out complex reasoning as a series of if-then rules.
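
Purely as an illustration of that symbolic style, and not any historical system, here is a toy expert-system sketch in Python: knowledge is written down as explicit if-then rules over symbols, and "reasoning" is just chaining those rules until no new conclusions fire. The facts and rules are invented for the example.

```python
# Toy forward-chaining rule engine in the expert-system style.
facts = {"has_fever", "has_cough"}          # hypothetical starting facts

rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),   # if-then rules over symbols
    ({"possible_flu"}, "recommend_rest"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)           # derive a new symbolic conclusion
            changed = True

print(facts)    # now also contains "possible_flu" and "recommend_rest"
```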

Contemporary neural networks, on the other hand, are connectionist models, named for the weighted connections between the nodes in a network. These models begin with a blank slate and use pattern-recognition techniques to achieve their primary goals: say, to accurately predict the next word in an unfinished sentence.
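
A correspondingly tiny connectionist sketch, again only illustrative and trained on a made-up scrap of text, shows the contrast: the "knowledge" lives in a matrix of weighted connections that is adjusted from examples of which word follows which, rather than being written down as rules.

```python
# Toy next-word predictor: one weight per (current word, next word) connection,
# adjusted by gradient steps on a softmax cross-entropy loss.
import numpy as np

text = "the cat sat on the mat the cat ate the fish".split()  # made-up corpus
vocab = sorted(set(text))
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

W = np.zeros((V, V))      # weighted connections: current word -> next word
lr = 0.1

for _ in range(200):
    for w, nxt in zip(text, text[1:]):
        logits = W[idx[w]]
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        target = np.zeros(V)
        target[idx[nxt]] = 1.0
        W[idx[w]] += lr * (target - probs)   # nudge weights toward the example

probs = np.exp(W[idx["the"]] - W[idx["the"]].max())
print(vocab[int(probs.argmax())])            # most likely word after "the": "cat"
```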

“The question, therefore, is what type of AI — symbolic or connectionist — is the better model for human learning,” Dhawale said. “Despite the success of neural network AI models, I still think they learn in a very different way from how humans learn.”

T.V. Padma is a science journalist in New Delhi.


