As AI enters our homes through smart home devices and takes to our streets in self-driving cars, one need not be a Luddite to contemplate its potentially heavy implications for our daily lives and livelihoods. The key to understanding the ultimate limits of AI is to ask whether machines can really think. In this article, I present three tests, drawn from three different disciplines, that address that question:
the Turing Test, from theoretical computer science;
the Winograd Schema Challenge, from the modern machine learning literature; and
the Chinese room thought experiment, from philosophy and the behavioral sciences.
Turing Test
This test was proposed by Alan Turing in 1950 in his seminal paper “Computing Machinery and Intelligence”. Turing began by observing that asking whether machines can think is difficult without first agreeing on definitions of the words “machine” and “think”. To circumvent that, he suggested we ask instead whether machines can imitate humans (the Imitation Game). If a human interrogator converses with a machine hidden behind a wall and the machine can convince the interrogator that it is human, then the machine can be said to “imitate” a human perfectly.
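To make the setup concrete, here is a toy sketch of the conversational loop. It is purely illustrative, not Turing’s protocol verbatim; `machine_reply` is a hypothetical stand-in for whatever conversational program is being judged.

```python
def machine_reply(message: str) -> str:
    """Placeholder for the hidden machine's response logic (hypothetical)."""
    return "That's an interesting question. What do you think?"


def imitation_game(num_turns: int = 3) -> None:
    """The judge chats with a hidden respondent, then renders a verdict."""
    for _ in range(num_turns):
        question = input("Judge: ")
        print("Hidden respondent:", machine_reply(question))
    verdict = input("Is the respondent human or machine? ")
    print("Judge's verdict:", verdict)


if __name__ == "__main__":
    imitation_game()
```

The machine passes if, over many such sessions, judges cannot reliably tell it apart from a human respondent.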
Winograd Schema Challenge
The WSC was proposed by computer scientist Hector Levesque of the University of Toronto and named after another computer scientist, Terry Winograd of Stanford University. The WSC is a simple multiple-choice test that measures a computer’s ability to resolve pronouns in a statement (the Pronoun Disambiguation Problem). For example, take the statement below:
The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.
If the verb is “feared”, then “they” refers to the councilmen; if it is “advocated”, “they” refers to the demonstrators. For a human it is easy to decide whom the pronoun refers to based on the verb, but machines have yet to reach that level of sophistication.
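A minimal sketch of how such a schema might be represented and scored is below. The field names and the deliberately naive resolver are my own illustration, not the official WSC format; the point is that without commonsense knowledge, a guesser hovers at chance level.

```python
from dataclasses import dataclass


@dataclass
class WinogradSchema:
    sentence_template: str       # contains a "{verb}" slot and the pronoun
    verbs: tuple[str, str]       # the special word and its alternate
    candidates: tuple[str, str]  # the two possible referents
    answers: tuple[str, str]     # correct referent for each verb


schema = WinogradSchema(
    sentence_template=("The city councilmen refused the demonstrators "
                       "a permit because they {verb} violence."),
    verbs=("feared", "advocated"),
    candidates=("councilmen", "demonstrators"),
    answers=("councilmen", "demonstrators"),
)


def resolve(schema: WinogradSchema, verb: str) -> str:
    """Deliberately naive resolver: always guess the first candidate.
    A real system would need commonsense reasoning to beat chance."""
    return schema.candidates[0]


for verb, answer in zip(schema.verbs, schema.answers):
    guess = resolve(schema, verb)
    print(f"verb={verb!r}: guessed {guess!r}, correct answer {answer!r}")
```

Swapping the verb flips the correct answer while leaving the surface syntax identical, which is exactly what defeats simple statistical cues.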
Researchers entered their AIs in the very first WSC, held in 2016, and the results indicate that AI still lacks common sense for now.
Chinese room thought experiment
The Chinese room thought experiment is not an actual test. The argument was posed by the philosopher John Searle in his 1980 paper “Minds, Brains, and Programs”. Suppose AI research has succeeded in creating a computer that behaves as if it understands Chinese: it takes in Chinese characters as input and, after some processing, produces other Chinese characters as output. The question Searle wants to answer is: does the computer literally “understand” Chinese, or is it merely simulating the ability to do so?
Now replace the computer with a human who has access to the program’s instructions. By executing the algorithm manually, step by step, that human could answer questions in Chinese, yet would never be credited with knowing Chinese. Therefore, Searle argues, neither should the machine.
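A minimal sketch of Searle’s point, assuming a made-up rule table rather than any real dialogue system: the program (or a person following it) maps Chinese input to Chinese output by pure symbol lookup, with no understanding anywhere in the loop.

```python
# Hypothetical rulebook: input symbols -> output symbols, nothing more.
RULES = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}


def chinese_room(symbols: str) -> str:
    """Follow the rulebook mechanically; meaning is never consulted."""
    return RULES.get(symbols, "请再说一遍。")  # "Please say that again."


print(chinese_room("你好吗？"))  # prints "我很好，谢谢。"
```

Whoever executes this lookup produces plausible Chinese replies while understanding nothing, which is precisely the gap Searle is pointing at.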
In Searle’s terms, imitation can be achieved by “Weak AI”, but genuine understanding would require “Strong AI”. It remains unclear whether “Strong AI” is achievable.