Spiegeloog 416: Language

Machines with Semantic Networks

By Esna Mualla Gunay, January 21, 2022

Language is an incredible but also a very complex human skill. It comes as no surprise that building it into machines is not the easiest task.


Photo by Unsplash

I find language possibly the most marvelous capability of the human brain. We can take a millisecond-long look at a bunch of arbitrary symbols, letters, and understand meaningful concepts. Cognitive neuroscience has theories explaining how meaning is derived from these symbols and represented in the brain. Natural Language Processing (NLP) goes beyond human brains and uses the knowledge from these theories to teach language to artificially intelligent machines. While these developments are being made, however, questions remain about where exactly the meaning in language originates. It is also quite a mystery whether AI actually understands, or can ever understand, the meaning of the symbols we are teaching it.

The neuroscientific explanation of how meaning is represented in the brain is not too complicated. Words start out as simple symbols, represented as nodes in the brain, with no meaning in themselves. They acquire their meaning when the mental lexicon connects them to each other in semantic memory, based on the associations (co-occurrences) between them (Rhodes & Donaldson, 2008). Meaning is therefore represented in the connections between nodes within a semantic network, created by the inhibitory and excitatory interactions of neurons. This computational account makes it easy to imagine that machines could do the same: an artificial neural network can be fed data recorded from the brain's neurons and simulate human actions based on those activity patterns. If a machine is given this kind of semantic network, then, it should also be able to communicate meaningfully, right? It actually isn't that simple, and this question takes us into stormy debate territory.
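To make the co-occurrence idea a little more concrete, here is a minimal sketch in Python. It is my own toy illustration, not a model from the article or from Rhodes and Donaldson: the tiny corpus and the simple counting scheme are invented purely for demonstration. It links words that appear in the same sentence and treats the resulting pattern of connections as a word's "meaning".

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus; in the brain, the "corpus" would be a lifetime of language exposure.
corpus = [
    "the apple is red and sweet",
    "the banana is yellow and sweet",
    "the car is red and fast",
]

# Each word is a node; edge weights count how often two words co-occur
# in the same sentence. Meaning lives in the pattern of connections,
# not in the nodes themselves.
network = defaultdict(lambda: defaultdict(int))

for sentence in corpus:
    words = set(sentence.split())
    for w1, w2 in combinations(sorted(words), 2):
        network[w1][w2] += 1
        network[w2][w1] += 1

# 'apple' ends up linked to 'sweet' and 'red': its "meaning" is just
# its position relative to the other symbols in the network.
print(dict(network["apple"]))
```

Even in this toy version, 'apple' is defined only by which other symbols it keeps company with, which is exactly the point the debate below turns on.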

Back in 1950, Alan Turing came up with the Turing test to measure the thinking abilities, the humanness, of machines. The test involves an interrogator, a machine, and a human. The interrogator sits in a room where they cannot see who they are talking to and tries to determine which of the two is the machine. If the interrogator fails to tell the machine and the human apart, the machine is declared to have (human) intelligence, or even consciousness. Although the Turing test received a lot of praise, there were, of course, objections as well. John Searle disagreed with the Turing test, arguing that the actions of a machine do not prove that it is aware of the meaning of those actions. Searle came up with the Chinese room thought experiment to support his argument. Imagine yourself in an isolated room with a computer that holds a Chinese dictionary. You know no Chinese, but with this dictionary you are able to reply to messages written in Chinese characters that are slipped under the door. You receive the input characters, look up the appropriate output characters in the dictionary, copy them onto paper, and slide the reply back outside. So, without any actual knowledge of Chinese yourself, you can hold a written conversation in it, and the people on the other side of the door mistakenly think that you understand what is going on. If we take this analogy seriously, the Turing test doesn't tell us anything about a machine's thinking capabilities, or whether it understands the language. It simply shows that the machine is doing a great job at matching input to the correct output.
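Searle's point can be caricatured in a few lines of code. This is only a sketch under my own assumptions, not Searle's formulation: the "dictionary" becomes a hypothetical lookup table with a couple of invented phrase pairs, and nothing in the program knows what any of the symbols mean.

```python
# A caricature of the Chinese room: the "dictionary" maps input symbols
# to output symbols. Nothing in this system knows what any symbol means.
rule_book = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def person_in_the_room(message: str) -> str:
    # Look up the input characters and copy out the matching output
    # characters, exactly what the person in the room does.
    return rule_book.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

# From outside the door, the replies look like understanding.
print(person_in_the_room("你好吗？"))
```

The program passes a very small "Turing test" for these two questions while understanding nothing, which is all Searle needs for his argument.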

This thought experiment is interesting for the aforementioned language theory as well. Neurons receive the input, like the messages slipped under the door. They then make connections and derive meaning within the network, just like using the Chinese dictionary. Lastly, those connections determine the output, like copying the answer onto paper and slipping it back out. The theory claims that meaning representation happens, at the level of the brain, as the manipulation of abstract neural activation patterns. In the Chinese room, however, there is no place left for meaning in such a system. We can't learn Chinese from a Chinese-to-Chinese dictionary, so how can the network give a word meaning if all the nodes in it are initially meaningless symbols (i.e., words)? The system turns out to be too simple and doesn't explain where meaning comes from. The rock we have hit here is called the symbol grounding problem (Harnad, 1990): it questions the origin of meaning in the semantic network. Several solutions to this problem have been proposed over the years, and one of the most interesting is the theory of embodied cognition.
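The circularity behind the symbol grounding problem can be shown directly. Below is another toy sketch of my own, with invented "definitions": a dictionary that defines every word only in terms of other words in the same dictionary never bottoms out in anything outside the symbol system.

```python
# A symbols-only "dictionary": every definition is made of more symbols
# from the same system, so lookups never reach anything outside it.
definitions = {
    "apple":  ["fruit", "red"],
    "fruit":  ["food", "plant"],
    "red":    ["colour"],
    "food":   ["fruit"],      # circular: food is defined via fruit again
    "plant":  ["food"],
    "colour": ["red"],
}

def unpack(word, depth=3):
    """Expand a word into its defining symbols a few levels deep."""
    if depth == 0:
        return [word]
    expansion = []
    for part in definitions.get(word, [word]):
        expansion.extend(unpack(part, depth - 1))
    return expansion

# However deep we go, we only ever get more ungrounded symbols back.
print(unpack("apple"))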

“Even if the machines are capable of translating thousands of words into different languages, in order for them to see meaning in those words, they need more than what they have now.”

According to embodied cognition, the symbols that connect with each other to form the meaning of a word aren't abstract; they are grounded in our experiences with the world. The symbols that make up the meaning of 'apple', for example, are attached to our experiences with apples, such as their taste, smell, color, or shape. This would also mean that thinking about the word 'apple' activates the brain areas related to our apple experiences. Various studies have looked into sensorimotor activation in the brain during language processing and found that words do indeed activate related sensory brain areas. For example, Goldberg and colleagues (2006) found that the orbitofrontal cortex, known to be involved in smell and taste, is active while participants make decisions about fruits; this activation is not seen when the decision is unrelated to food. Although embodied cognition struggles to explain how more abstract words (e.g., beauty) would be represented, it still offers a good (partial) answer to the symbol grounding problem. Most importantly, it shows that the semantic network receives input from multiple different brain areas.
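One way to picture what embodied cognition adds is to attach each word node to sensory features instead of (or on top of) other words. The sketch below is again my own illustration; the modalities and feature values are made-up placeholders, not data from Goldberg et al. (2006).

```python
# Ungrounded representation: 'apple' points only at other symbols.
symbolic_apple = ["fruit", "red", "sweet"]

# Grounded representation: the same node is also tied to (made-up)
# sensorimotor features, the kinds of experience that embodied
# cognition says the meaning is built from.
grounded_apple = {
    "taste":  {"sweet": 0.8, "sour": 0.3},   # gustatory experience
    "smell":  {"fresh": 0.7},                # olfactory experience
    "vision": {"red": 0.9, "round": 0.95},   # visual experience
    "action": {"grasp": 0.9, "bite": 0.8},   # motor experience
}

# Thinking of 'apple' would then reactivate these sensory channels,
# which is roughly what the fruit-related orbitofrontal activation
# reported by Goldberg et al. (2006) is taken to suggest.
for modality, features in grounded_apple.items():
    print(modality, features)
```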

It is, however, very difficult to adapt the embodied cognition theory to machines that have no body or human-like life experience. The first challenge is that the machines we have now are very limited in their interactions with the environment. A machine can learn to play chess entirely by itself, but it cannot learn the more everyday activities, such as doing the dishes, that give humans their experiences. For a machine, learning to do the dishes means breaking hundreds of plates, and probably itself, over and over again until it semi-randomly stumbles on a proper way, and then learning it all over again, through the same process, for every other piece of tableware. As you can imagine, this would take an incredible amount of time and money. Another challenge is our limited knowledge of the brain networks we are trying to simulate in machines. So far we know that the semantic network is spread across the brain, receiving input from multiple different areas, but it has not been fully localized yet. Therefore, even if machines are capable of translating thousands of words into different languages, in order for them to see meaning in those words, they need more than what they have now.
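The "breaking hundreds of plates" intuition is essentially a claim about how sample-hungry blind trial and error is. The toy simulation below uses entirely made-up numbers (it is not a real robotics result) just to show how many attempts a random strategy needs before it stumbles on one acceptable sequence of actions.

```python
import random

random.seed(42)

# Pretend that washing one dish needs 5 actions, each chosen correctly
# from 10 options; these numbers are invented for illustration.
ACTIONS_PER_DISH = 5
OPTIONS_PER_ACTION = 10
target = [3, 7, 1, 9, 4]  # the (arbitrary) "correct" sequence

attempts = 0
while True:
    attempts += 1
    guess = [random.randrange(OPTIONS_PER_ACTION) for _ in range(ACTIONS_PER_DISH)]
    if guess == target:
        break

# On average this takes about 10**5 attempts, and that is for a single,
# drastically simplified dish, with no cost counted for the broken plates.
print(f"Blind trial and error needed {attempts} attempts.")
```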

It is undeniable that AI is achieving amazing things with machines. These advances aren't just improving the way machines work; they also give feedback to psychology and neuroscience by pointing out what is missing in our theories. Once we tried to apply it to machines, the semantic network turned out to be larger and more detailed than initially thought. Searle's thought experiment pushed the boundaries of cognitive neuroscience and contributed to refining this picture. However, it seems we still need many improvements before we can free the machines from Searle's room and have meaningful conversations with them.

References

- Cole, D. (2020). The Chinese room argument. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2020 ed.). https://plato.stanford.edu/archives/win2020/entries/chinese-room
- Goldberg, R. F., Perfetti, C. A., & Schneider, W. (2006). Perceptual knowledge retrieval activates sensory brain regions. Journal of Neuroscience, 26(18), 4917-4921.
- Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3), 335-346.
- Oppy, G., & Dowe, D. (2021). The Turing test. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2021 ed.). https://plato.stanford.edu/archives/win2021/entries/turing-test


Author: Esna Mualla Gunay

Mualla (2000) graduated from the UvA in 2022 with a specialization in Brain and Cognition. She is interested in the intersection between cognitive and clinical psychology. She enjoys spending time in nature, reading, and journaling.
