Ask the Expert: Can AI Learn From One Shot?

June 6, 2022

Han van der Maas’ (Psychological Methods) question

Dear Claire,

You are a great expert on intelligent systems. In the discussion about the differences between learning in AI and human learning, what is the role of one-shot learning?

Han

Claire Stevenson’s (Psychological Methods) answer

Dear Han,

Interesting question! In machine learning language, one-shot learning refers to how many examples are needed to learn a task: in this case, just one. For example, learning to recognize the category “zebra” after being shown only one zebra, or solving the letter-string analogy “If a b c changes to a c c d d d, then what does p q r change to?”, where there is only one example to learn the rule from. This is something that humans excel at from an early age.
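To make the analogy concrete, here is a minimal Python sketch of one plausible reading of that rule; the interpretation (keep the first letter, replace each later letter with its alphabetic successor, repeated one extra time per position) and the function name are illustrative assumptions, not a formal definition.

```python
def apply_rule(letter_string: str) -> str:
    """Apply the rule implied by 'a b c' -> 'a c c d d d':
    keep the first letter, then replace every later letter with its
    alphabetic successor, repeated one extra time per position."""
    letters = letter_string.split()
    pieces = []
    for position, letter in enumerate(letters):
        if position == 0:
            pieces.append(letter)                        # first letter stays as-is
        else:
            successor = chr(ord(letter) + 1)             # next letter of the alphabet
            pieces.extend([successor] * (position + 1))  # b -> c c, c -> d d d, ...
    return " ".join(pieces)

print(apply_rule("a b c"))  # a c c d d d  (the worked example)
print(apply_rule("p q r"))  # p r r s s s  (the answer children infer in one shot)
```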

A toddler has a book with a picture of a zebra in it. There’s just one book with one zebra in it. But when she accompanies her father to the zoo and sees a zebra for the first time in real life, she recognizes it immediately. It doesn’t matter that the zebra is facing a different direction or is partly obscured by a tree. The toddler recognizes it because she can infer many characteristics of an object from a simple drawing, largely thanks to her prior knowledge of related objects, say horses, in both drawings and real life.

This type of inference or generalization is really hard for AI models to do. AI models have to be trained on millions of examples and counter-examples of zebras and other objects from various perspectives and contexts before they are able to correctly categorize them. Even the newest AI models that are so-called few-shot learners, i.e., those that can generalize after just a few examples, must first be pre-trained on millions of images (minus the zebra) to be able to learn a new category (like zebra) later within a few “shots”. The pre-training of AI models also seems fair – after all, aren’t humans pre-trained to recognize objects and symbols from early on?
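As an illustration of what learning a new category from a single example looks like on the machine side, here is a minimal sketch: it assumes a pre-trained encoder has already mapped images to embedding vectors (the toy vectors below are purely illustrative) and simply assigns the new image to whichever single labelled example it lies closest to in that space.

```python
import numpy as np

def one_shot_classify(query_vec, support):
    """Nearest-neighbour classification with one labelled example per class,
    measured by cosine similarity in a pre-trained embedding space."""
    def normalize(v):
        return v / np.linalg.norm(v)
    q = normalize(query_vec)
    similarities = {label: float(q @ normalize(vec)) for label, vec in support.items()}
    return max(similarities, key=similarities.get)

# Toy vectors standing in for the output of a pre-trained image encoder:
support = {
    "zebra": np.array([0.9, 0.1, 0.3]),   # embedding of the single zebra drawing
    "horse": np.array([0.7, 0.6, 0.1]),   # embedding of a horse photo
}
query = np.array([0.85, 0.15, 0.28])      # embedding of the photo taken at the zoo
print(one_shot_classify(query, support))  # -> "zebra"
```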

But then let’s look at the letter-string analogy above. What if we compare the performance of five- and six-year-olds who are just learning to recognize the letters of the alphabet to that of an advanced AI model capable of few-shot learning? Will the AI model outperform the children? GPT-3 is a large language model that you can have conversations with and that can answer pretty much any trivia question you ask. It has also written and published original newspaper articles, prose and poetry with “mind-boggling fluency” (New York Times Magazine headline, April 2022). GPT-3 is renowned as a few-shot learner and was pre-trained on pretty much all the text data available on the Internet to predict the next text fragment. Interestingly, the five- and six-year-olds I asked to solve these letter-string analogies, who notably all still confused their p’s and q’s or b’s and d’s, could solve all of the analogies I showed them, such as “a b c : a c c d d d :: p q r : ?”, with only one example. In contrast, GPT-3 couldn’t solve many of these last year, and this year (with the improved model, whose additional pre-training probably included texts on letter-string analogies) it required at least three examples to learn the rule needed to solve this question.
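For readers curious what “giving GPT-3 more examples” looks like in practice, here is a small sketch of how such prompts are typically built up; the wording and the extra solved analogies are assumptions for illustration, not the exact prompts used with the children or the model described above.

```python
# Solved letter-string transformations used as demonstrations ("shots").
# The first pair comes from the article; the others are made-up extras.
demonstrations = [
    ("a b c", "a c c d d d"),
    ("i j k", "i k k l l l"),
    ("d e f", "d f f g g g"),
]

def build_prompt(n_examples: int, query: str = "p q r") -> str:
    """Place n solved transformations before the unsolved test item,
    the standard way of prompting a large language model few-shot."""
    lines = [f"{source} changes to {target}" for source, target in demonstrations[:n_examples]]
    lines.append(f"{query} changes to")
    return "\n".join(lines)

print(build_prompt(1))  # one example: enough for the five- and six-year-olds
print(build_prompt(3))  # three examples: roughly what GPT-3 needed for this item
```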

So, these examples and numerous research studies show that whereas humans easily learn new things in one shot, AI has a difficult time of it. With the advent of more generalized AI models, pre-trained on hoards of online data, especially data from different modalities (images combined with text, sound, movement, etc.), AI will likely become more and more capable of learning in very few shots. I don’t think this means that AI will “understand” things the way humans do, but it will likely become more capable of learning like a human. One thing that worries me is how biased and discriminatory the output of these AI systems can be.

Claire

Claire Stevenson’s question is for David Amodio (Social Psychology)

Dear David,

You are a social neuroscientist who investigates the psychological and neural bases of prejudice. You also apply this knowledge to the field of AI: how do racial or gender biases in, for example, facial recognition software or generative language models like GPT-3 come to be? Of course, some of these biases are due to the data used to train these AI models. How do you think the shift in AI towards few-shot learners, i.e., models that try to generalize from only a handful of training examples, will affect AI models in terms of biased output?

Claire
