
Novel AI Method Shows More Human-Like Cognition

A new AI technique shows human-like language generalization ability, a step toward AGI.

Key points

  • Humans can flexibly apply concepts to new contexts, unlike AI.
  • For example, humans apply "balance" to different contexts: "balance on a tightrope" and "balance the budget."
  • However, a new AI technique, MLC, mimics the human ability to apply concepts to new contexts.

The human brain outperforms artificial intelligence (AI) machine learning in its ability to grasp new combinations of known components and to comprehend related uses of those components. Scientists are working to bridge that gap.

A study published in Nature unveils meta-learning for compositionality (MLC), a new AI technique that endows neural networks with more human-like language generalization ability, a milestone in the pursuit of artificial general intelligence (AGI).

“We showed how MLC enables a standard neural network optimized for its compositional skills to mimic or exceed human systematic generalization in a side-by-side comparison,” wrote study authors Brenden Lake, Ph.D., assistant professor of psychology and data science at New York University, and Marco Baroni, Ph.D., a research professor at the Catalan Institution for Research and Advanced Studies (ICREA) in Barcelona, Spain.

The human brain serves as the inspiration for the neural network architecture of AI machine learning. Connectionist models, also called parallel distributed processing (PDP) models, are a class of computational models often used to model human behavior, cognition, perception, memory storage and retrieval, and learning processes. Connectionist approaches use neural networks made up of many artificial neurons (nodes); processing happens through the propagation of activation from one node to another via the connections between them.
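To make that concrete, here is a minimal sketch of activation propagating through a toy connectionist network. The layer sizes, random weights, and sigmoid activation are illustrative choices only, not any particular model from the study.

```python
import numpy as np

def sigmoid(x):
    """Squash a node's summed input into an activation between 0 and 1."""
    return 1.0 / (1.0 + np.exp(-x))

# A toy network: 3 input nodes -> 4 hidden nodes -> 2 output nodes.
# Each weight matrix holds the connection strengths between two layers.
rng = np.random.default_rng(0)
w_in_hidden = rng.normal(size=(3, 4))    # connections: input -> hidden
w_hidden_out = rng.normal(size=(4, 2))   # connections: hidden -> output

def propagate(inputs):
    """Processing = activation flowing from node to node over the connections."""
    hidden = sigmoid(inputs @ w_in_hidden)    # hidden nodes activate
    return sigmoid(hidden @ w_hidden_out)     # output nodes activate

print(propagate(np.array([1.0, 0.0, 0.5])))
```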

It’s a long-held belief that artificial neural networks do not possess the human brain’s ability to comprehend and create new combinations from known components. For example, once the human brain grasps the concept of the word “balance,” it can flexibly and intelligently apply it in new combinations such as “balance on a tightrope,” “balance the budget,” or “work-life balance.” AI, in contrast, has not shown this flexibility in generalization.

The scientists wrote,

The power of human language and thought arises from systematic compositionality—the algebraic ability to understand and produce novel combinations from known components.

Can an artificial neural network have this human-like capability? The American philosopher Jerry Fodor (1935-2017) and Zenon Pylyshyn (1937-2022), the late professor emeritus of cognitive psychology at Rutgers University, argued 35 years earlier, in their 1988 paper Connectionism and Cognitive Architecture: A Critical Analysis, that connectionist networks lack the combinatorial structure of representations needed to explain the systematicity of thought, and that the architecture of the mind is therefore not connectionist at the cognitive level.

Lake and Baroni wrote,

Here we successfully address Fodor and Pylyshyn’s challenge by providing evidence that neural networks can achieve human-like systematicity when optimized for their compositional skills.

The researchers optimized a standard neural network architecture for its compositional skills. Learning proceeds over a series of episodes: in each episode, the model is given a new word and asked to apply it compositionally, continuously improving this skill as the episodes accumulate.
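As a rough illustration of that episodic setup, the toy loop below generates a fresh pseudo-word episode each time and scores a stand-in learner on the compositional query. The pseudo-words, the "twice" rule, and the ToyLearner class are hypothetical constructs for this sketch, not the authors' code, stimuli, or training procedure (their actual model is a neural network trained by gradient descent).

```python
import random

# Toy episode generator, loosely inspired by the study's few-shot tasks:
# made-up words map to outputs, and a function word like "twice" composes them.
# The words and meanings here are invented for illustration.
PRIMITIVES = {"dax": "RED", "wif": "BLUE", "lug": "GREEN"}

def make_episode(rng):
    """One meta-learning episode: a study example that defines a new word,
    plus a query that requires using that word compositionally."""
    word, meaning = rng.choice(list(PRIMITIVES.items()))
    study = [(word, meaning)]                 # the model sees the new word once
    query_input = f"{word} twice"             # ...and must compose it
    query_target = f"{meaning} {meaning}"
    return study, query_input, query_target

class ToyLearner:
    """Stand-in for the neural network so the loop runs end to end; it applies
    a hard-coded 'twice' rule instead of learning one by gradient descent."""
    def __init__(self):
        self.correct = 0
        self.seen = 0

    def predict(self, study, query_input):
        lexicon = dict(study)                 # read the episode's new word
        word = query_input.split()[0]
        return f"{lexicon[word]} {lexicon[word]}"

    def update(self, prediction, target):
        self.seen += 1
        self.correct += int(prediction == target)

rng = random.Random(0)
learner = ToyLearner()
for _ in range(1000):                         # many episodes, a new word each time
    study, q_in, q_target = make_episode(rng)
    learner.update(learner.predict(study, q_in), q_target)
print(f"episode accuracy: {learner.correct / learner.seen:.2f}")
```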

The meta-learning for compositionality method uses a standard sequence-to-sequence (seq2seq) transformer, consisting of an encoder that processes the input and a decoder that receives the encoder's output and generates the output sequence. Seq2seq is an AI machine-learning architecture originally developed for machine translation.
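As a sketch of that encoder-decoder shape, the PyTorch snippet below wires up a tiny seq2seq transformer. Every dimension, the vocabulary size, and the random token inputs are placeholders assumed for illustration; this is not the configuration used in the study.

```python
import torch
import torch.nn as nn

VOCAB, D_MODEL = 100, 32  # placeholder sizes, not the study's configuration

class TinySeq2Seq(nn.Module):
    """Minimal encoder-decoder transformer: the encoder reads the input
    sequence; the decoder attends to the encoder's output while producing
    the output sequence."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True)
        self.out = nn.Linear(D_MODEL, VOCAB)

    def forward(self, src_tokens, tgt_tokens):
        src = self.embed(src_tokens)      # encoder input
        tgt = self.embed(tgt_tokens)      # decoder input (shifted targets)
        hidden = self.transformer(src, tgt)
        return self.out(hidden)           # per-position vocabulary logits

model = TinySeq2Seq()
src = torch.randint(0, VOCAB, (1, 5))     # e.g., a 5-token instruction
tgt = torch.randint(0, VOCAB, (1, 7))     # e.g., a 7-token output prefix
print(model(src, tgt).shape)              # torch.Size([1, 7, 100])
```

In a real training setup, the decoder input would be the shifted target sequence and the per-position logits would feed a cross-entropy loss.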

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le introduced seq2seq in their 2014 Google paper Sequence to Sequence Learning with Neural Networks. The use of seq2seq models has since expanded to other natural language processing (NLP) tasks, such as image captioning, text summarization, and conversational models. NLP is an interdisciplinary field that combines computer science, linguistics, and artificial intelligence to enable computers to interpret, process, and generate human language.

“Our results show how a standard neural network architecture, optimized for its compositional skills, can mimic human systematic generalization in a head-to-head comparison,” the researchers concluded.

Copyright © 2023 Cami Rosso. All rights reserved.
