ScienceSpiegeloog 399: Attention

Ivory Tower: Neural networks

December 18, 2019

After the introduction of computers into scientific research in the late 1960s, a sizeable group of researchers thought intelligent robots were just around the corner. However, they had greatly overestimated the speed at which artificial intelligence (AI) would evolve. The story goes that in the 1960s one of AI’s pioneers, Marvin Minsky, ordered an intern to ‘program the visual system’ over the period of a summer holiday (!). When I entered the university in 1992, computers still were not able to recognize anything more complicated than a black square on a white background.

The hippest thing in AI in those days was connectionist models, whose core learning principle psychologist Donald Hebb had formulated back in 1949. As a student, I learned how to program these models, also known as ‘neural networks’, in a now-extinct programming language by the name of Turbo Pascal. Neural networks consisted of a number of interconnected virtual neurons. You could feed the network information, e.g., a string of zeroes and ones, and train it to recognize certain patterns. Basically, the input information triggered neurons to fire, which triggered other neurons to fire, until at some point the network returned a response. If that response was correct, the network was programmed to strengthen whatever connections it had used; otherwise, it weakened them. After training the network with a lot of stimuli, you could, for instance, teach it to distinguish a triangle from a square.
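For readers who want to see this in action: the training scheme described above can be sketched in a few lines. This is a minimal illustration, not the original Turbo Pascal code — the shape patterns are made up, and the learning rule is the classic perceptron update (adjust the connections only when the response was wrong), a close cousin of the strengthen-when-right, weaken-when-wrong rule described in the text.

```python
# A single-layer network of virtual neurons learns to tell a 'square'
# pattern from a 'triangle' pattern, each encoded as zeroes and ones.
SQUARE   = [1,1,1, 1,0,1, 1,1,1]   # hollow square on a 3x3 grid
TRIANGLE = [0,1,0, 0,1,0, 1,1,1]   # crude triangle on a 3x3 grid

def train(patterns, labels, epochs=50, lr=0.1):
    """Perceptron-style training: when the network answers wrongly,
    nudge every connection it used toward the correct answer."""
    w = [0.0] * len(patterns[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(patterns, labels):
            fired = sum(wi * xi for wi, xi in zip(w, x)) + bias > 0
            response = 1 if fired else 0
            error = target - response   # 0 when correct: no change
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            bias += lr * error
    return w, bias

def classify(w, bias, x):
    """Return the network's response to a stimulus: 1 or 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0

w, b = train([SQUARE, TRIANGLE], [0, 1])
print(classify(w, b, SQUARE))    # -> 0, i.e. 'square'
print(classify(w, b, TRIANGLE))  # -> 1, i.e. 'triangle'
```

After a handful of training passes the weights settle, and the network reliably separates the two patterns — which, as the column notes, was about the extent of what these early networks could do.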

But that was about all these networks could do. Neural networks were able to learn elementary patterns ‘by themselves’, and were useful models for studying how cognition could operate in the brain – but the performance of AI in the 1990s was distinctly unimpressive. Then fast computers, the internet, and the data revolution happened. Just when everybody thought AI was dead, neural networks went through a spectacular series of improvements, and before you knew it, they were recognizing faces at Schiphol Airport, beating grandmasters at chess and generating figure captions all by themselves. It is amazing, truly amazing, how good these models have become.

But there is a catch. Neural networks are very complex, and the best ones utilize a strategy – deep learning – that produces convoluted mathematical transformations between input and output. Due to this internal complexity, it is extremely hard to find out how the networks actually work. This has produced a bit of a conundrum. Scientists developed these models to understand how brains learn, but now that they actually work, we don’t understand them anymore.

Last week, I was at a fantastic symposium about deep learning, organized by Steven Scholte of our Brain and Cognition group, where I discovered that scientists have now found a method to confront this problem. Namely: scientific psychology. Researchers are submitting their neural networks to the exact same experimental tests that psychologists have developed to figure out how humans perceive and learn. By studying the networks’ responses, they investigate what information these models use and how they transform that information internally to arrive at a selected action. In other words, researchers are using psychological tests to figure out how machines think.
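The idea of treating a network like an experimental participant can itself be sketched in code. The following is a hypothetical illustration, not any specific study from the symposium: a stand-in model is shown progressively noisier versions of a stimulus, and its responses are tallied per noise level — the machine equivalent of a psychophysics experiment.

```python
import random

def model_response(stimulus):
    """Stand-in for any trained network under investigation: here, a
    simple template matcher that says 1 ('square') when the stimulus
    matches its template closely enough, else 0."""
    template = [1,1,1, 1,0,1, 1,1,1]
    overlap = sum(s == t for s, t in zip(stimulus, template))
    return 1 if overlap >= 7 else 0

def run_experiment(noise_levels, trials=1000, seed=0):
    """Present noisy squares to the model, trial after trial, and
    record its accuracy per noise level - as with a human subject."""
    rng = random.Random(seed)
    square = [1,1,1, 1,0,1, 1,1,1]
    results = {}
    for p in noise_levels:
        correct = 0
        for _ in range(trials):
            # flip each bit of the stimulus with probability p
            noisy = [1 - bit if rng.random() < p else bit for bit in square]
            correct += model_response(noisy) == 1
        results[p] = correct / trials
    return results

for p, acc in run_experiment([0.0, 0.1, 0.3, 0.5]).items():
    print(f"noise {p:.1f}: accuracy {acc:.2f}")
```

The resulting accuracy-versus-noise curve tells you something about what information the model relies on — exactly the kind of inference psychologists have long drawn about people from such experiments.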

Think about that for a second. Scientists wanted to use machines to understand psychology but ended up using psychology to understand machines. The wonders of our discipline are beyond compare.


Denny Borsboom
