From AI Trends (https://ift.tt/3bMuShX), via Allison Proffitt (https://ift.tt/2RaFOwu)
By AI Trends Staff
Research at the intersection of AI, psychology, and neuroscience is attracting interest and investment. The study of the nervous system is called by some the “ultimate challenge” of the biological sciences.
The trend is exemplified in the experience of Irina Rish, now an Associate Professor in the Computer Science and Operations Research department at the Université de Montréal (UdeM), and a core member of Mila – the Quebec AI Institute.
Rish was 14 years old and going to high school in the central Asian city of Samarkand, Uzbekistan, when she first came across the notion of artificial intelligence. “I saw a book, translated from English into Russian, the cover was black with yellow letters, and the title was ‘Can Machines Think?’” Rish recalled in a recent article in Mirage.
Rish was intrigued. “The book was about AI, and I said to myself: ‘Gosh, that’s exactly what I was wondering: what algorithms can we design to solve difficult problems, and how can we boost our own ‘natural intelligence,’” she recalled.
That curiosity set her on a path to her life’s work. She graduated from universities in Moscow and California, then embarked on what became a 20-year career at IBM, including 10 years as a research scientist at the Watson Research Center. Last October, she moved to Canada to become an associate professor at Université de Montréal and a core faculty member at its affiliated AI institute, Mila.
This summer she was awarded a Canada Excellence Research Chair (CERC), which came with a $34 million grant over several years from the federal government and other sources, including industry players Samsung, IBM, Microsoft, and Element AI.
“It’s a wonderful opportunity for me and my team at Mila,” Rish said. “Over the coming years, this chair will allow us to explore the frontiers of AI research at the intersection of machine learning and neuroscience, and advance the field toward more autonomous, human-level AI by developing novel models and methods for broad and robust AI systems, as opposed to today’s narrow and brittle ones,” she said.
Rish holds 64 patents, has published over 90 research papers, written several book chapters, edited three books, and published a monograph on sparse modeling, an area of statistical machine learning particularly important for scientific data analysis in fields such as computational biology and neuroimaging.
So far at Mila, her projects have included working with scientific director Yoshua Bengio to help develop Covi, a contact-tracing app for Covid-19.
Her goal is “to develop continual, lifelong learning AI capabilities, similar to those of humans, as well as approaches to making AI more robust to changes in its environment and tasks it has to solve, and capable of better understanding and generalization, akin to human capabilities,” she stated.
She sees the work as “the intersection of artificial intelligence, neuroscience, and psychology, using computers to analyze brain data and find interesting patterns there related to human behavior, to mental states and their changes, and using what you learn to better understand how the brain works and to make computers work better and AI less artificial.”
Using AI to Decode How the Brain Sends Signals to Limbs
Researcher Chethan Pandarinath, a biomedical engineer at Emory University and the Georgia Institute of Technology, both in Atlanta, is working on enabling people with paralyzed limbs to reach out and grasp with a robotic arm as they would their own. He is collecting recordings of brain activity in people with paralysis, in the hopes of identifying patterns of electrical activity in neurons that correspond to moving an arm in a particular way, so that the instruction can be fed to an artificial limb. That is akin to reading minds.
“It turns out, that’s a really challenging problem,” Pandarinath stated in a recent account in Nature. “These signals from the brain—they’re really complicated.” He decided to feed his brain activity recordings into an artificial neural network, a software architecture inspired by the brain, to try to get it to reproduce the data.
The network uncovered patterns the researchers call latent factors, which govern the overall behavior of the recorded activity. The effort revealed the brain’s temporal dynamics: the way a pattern of neural activity changes from one moment to the next. This allowed a more fine-grained set of instructions for arm movements than previous methods could produce. “Now, we can very precisely say, on an almost millisecond-by-millisecond basis, right now the animal is trying to move at this precise angle,” Pandarinath stated. “That’s exactly what we need to know to control a robotic arm.”
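The idea of latent factors can be illustrated with a minimal sketch: simulate many neurons whose firing is driven by just two slowly varying hidden signals, then recover a low-dimensional description of the activity. Here PCA stands in as a simple linear substitute for the recurrent neural-network models used in the actual research; all names, sizes, and signals below are illustrative, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 2 slowly varying latent factors driving 50 recorded neurons.
T, n_neurons, n_factors = 500, 50, 2
t = np.linspace(0, 10, T)
latents = np.stack([np.sin(t), np.cos(2 * t)], axis=1)    # (T, 2) hidden drivers
mixing = rng.normal(size=(n_factors, n_neurons))           # per-neuron loadings
rates = latents @ mixing + 0.1 * rng.normal(size=(T, n_neurons))

# Recover a low-dimensional description via PCA (SVD of centered data),
# a linear stand-in for the autoencoder approach described above.
centered = rates - rates.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
recovered = centered @ Vt[:n_factors].T                    # (T, 2) latent estimate

# Because only 2 factors generated the activity, the top 2 components
# should explain nearly all of the variance.
explained = (S[:n_factors] ** 2).sum() / (S ** 2).sum()
print(f"variance explained by 2 components: {explained:.2f}")
```

The point of the sketch is the compression: 50 noisy channels collapse to two smooth time courses, which is the kind of moment-by-moment signal a prosthetic controller needs.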
In this way, AI is helping brain science and brain science is giving more insight to AI researchers. “The technology is coming full circle and being applied back to understand the brain,” he stated.
An artificial neural network is only a rough analogy of how the brain works, stated David Sussillo, a computational neuroscientist with the Google Brain Team in San Francisco, who collaborated with Pandarinath on his work on latent factors. For instance, it models synapses as numbers in a matrix, when in reality they are complex pieces of biological machinery that use both chemical and electrical activity to send or terminate signals, and that interact with their neighbors in dynamic patterns. “You couldn’t get further from the truth of what a synapse actually is than a single number in a matrix,” Sussillo stated.
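Sussillo’s point about the abstraction can be made concrete. In an artificial network, each “synapse” really is a single entry in a weight matrix, and signal transmission reduces to a multiply-and-sum. A minimal sketch (the layer sizes and values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# A layer of 4 "neurons" receiving input from 3 others. Each of the
# 12 synapses is reduced to one number: W[i, j] is the strength of
# the connection from input neuron j to output neuron i.
W = rng.normal(size=(4, 3))
x = np.array([0.5, -1.0, 2.0])   # firing rates of the input neurons

# The entire electrochemical machinery of a biological synapse
# collapses to this: a weighted sum passed through a fixed nonlinearity.
activation = np.tanh(W @ x)
print(activation)                 # one value per output neuron
```

Everything a real synapse does dynamically, with chemistry and timing, is frozen here into the static scalars of `W`, which is exactly the gap Sussillo describes.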
Still, artificial neural networks have proved useful for studying the brain. If such a system can produce a pattern of neural activity that resembles the pattern that is recorded from the brain, scientists can examine how the system generates its output and then make inferences about how the brain does the same thing. This approach can be applied to any cognitive task of interest to neuroscientists, including processing an image. “If you can train a neural network to do it,” stated Sussillo, “then perhaps you can understand how that network functions, and then use that to understand the biological data.”
Comparing How Machine Learning Works to How the Brain Works
A similar conclusion was reached by Gabriel A. Silva, a professor of Bioengineering and Neurosciences at the University of California, San Diego, whose work includes how study of the brain can have practical benefits for new AI systems.
“I and other researchers in the field, including a number of its leaders, have a growing sense that finding out more about how the brain processes information could help programmers translate the concepts of thinking from the wet and squishy world of biology into all-new forms of machine learning in the digital world,” Silva stated in an article in Neuroscience News.
How machine learning works and how the brain works are very different. To recognize an image of a cow, a machine learning system needs to be fed many, many images of cows in order to learn. By contrast, “The brain takes in a very small amount of input data—like a photograph of a cow and a drawing of a cow—very quickly. And after only a very small number of examples, even a toddler will grasp the idea of what a cow looks like and be able to identify one in new images, from different angles, and in different colors,” Silva wrote.
The brain and machine learning systems use fundamentally different algorithms. Because of this, “each excels in ways the other fails miserably,” Silva observes.
It is challenging to identify which brain processes might work well as machine learning algorithms. One approach is to focus on ideas that both improve machine learning and open up new areas of neuroscience at the same time.
“Lessons can go both ways, from brain science to artificial intelligence—and back, with AI research highlighting new questions for biological neuroscientists,” Silva suggests.