
The Turing test and the imitation game

by Michele Aponte | Mar 8, 2021 | Blog

Alan Turing, considered the father of computer science, was among the first to theorize about a programmable machine and, consequently, to ask how far we could go with automation, laying the foundations for the concept of artificial intelligence. Rather than wondering whether a machine could be intelligent, a notion that is hard to formalize even for humans, he asked how convincingly a machine could appear human, with the same cognitive abilities. That question led to the creation of the now famous Turing Test in 1950.

The test is very simple and takes its cue from a parlour game called the “imitation game”, which is also the title chosen for a beautiful film about Turing’s contribution to breaking the Enigma machine, which the Germans used to encrypt their communications during the Second World War.

In its original formulation the test involves three participants: a man, a woman, and a third person who has no contact with the first two and who asks them a series of questions to establish which of the two is the man and which the woman. The man’s task is to deceive the interrogator and lead him down the wrong path, while the woman’s task is to help him guess correctly. To avoid direct contact, the interactions must be conveyed in some mediated form, such as typed messages.

The test has been reformulated over time to make it more reliable, given that in some circumstances even simple, obviously unintelligent applications pass it. This is the case of ELIZA, a chatbot written in 1966 by the German-born computer scientist Joseph Weizenbaum, which simulates the questions a psychotherapist asks at the start of a session by rephrasing the interlocutor’s own answers.
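To make the trick concrete, here is a minimal sketch of the pattern-matching idea behind ELIZA-style chatbots. The rules and the pronoun map below are illustrative assumptions for this post, not Weizenbaum’s original script: the program recognizes a phrase, swaps first- and second-person words, and turns the user’s own sentence back into a question.

```python
import re

# Illustrative rules in the spirit of ELIZA's DOCTOR script (not the originals):
# each pattern captures part of the user's sentence and echoes it as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

# Swap first- and second-person words so the echo reads naturally.
PRONOUNS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(PRONOUNS.get(word.lower(), word) for word in fragment.split())

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default prompt when nothing matches

print(respond("I feel ignored by my colleagues"))
# -> Why do you feel ignored by your colleagues?
```

There is no understanding here at all, only string manipulation, which is exactly why ELIZA passing informal versions of the test says more about the test than about the program.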

Anyway, besides being very interesting from a historical point of view, the Turing test is the key to understanding what we are trying to do with artificial intelligence today: solving optimization problems and supporting decision-making processes, not creating a thinking machine.

This clarification is important because when we talk about artificial intelligence we tend to imagine a thinking machine, one that is aware of itself and can make decisions. Some comments on my last article confirmed that this strange idea is still around.

Another idea I often hear is that there could be an omniscient system that gives answers in every application domain. It would certainly be nice, but today that is not the case, and perhaps we must scale back our expectations. By verticalizing applications, we can specialize them in the domain we care about, tailoring their usefulness to what we actually need.

At Ellycode, this is what we are aiming for with our virtual assistant. First, the user’s request is acquired, and immediately afterward the difficult part begins: we have to give the user an answer, which we accept may be inaccurate the first time, but from which we can learn how to respond better the next time. If you think about it, that’s also what happens with humans at work, right? You learn something new every day so you’re prepared the next time you need it.
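As a rough illustration of that answer-then-learn loop, here is a toy sketch. The LearningAssistant class and its methods are hypothetical names invented for this example, not our actual implementation: the first answer may be a guess, and a user’s correction becomes the stored answer for the next similar request.

```python
class LearningAssistant:
    """Toy sketch: answer from what we know, store corrections for next time."""

    def __init__(self):
        self.learned = {}  # request -> answer refined by user feedback

    def answer(self, request: str) -> str:
        # Prefer an answer we have already learned for this request.
        if request in self.learned:
            return self.learned[request]
        return "Best guess: ..."  # the first attempt may well be inaccurate

    def correct(self, request: str, better_answer: str) -> None:
        # The user's correction becomes the answer for next time.
        self.learned[request] = better_answer

assistant = LearningAssistant()
print(assistant.answer("monthly sales report"))   # -> Best guess: ...
assistant.correct("monthly sales report", "Sales grouped by region, last 30 days")
print(assistant.answer("monthly sales report"))   # -> the corrected answer
```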

Thanks to this approach, we can also achieve another interesting result: teaching the virtual assistant how I want certain information prepared. If, as a manager, I ask my assistant to extract some data into an Excel file, because I have to make a business decision and I don’t have the skills to do the extraction myself, what can happen is that the data comes back in a form I don’t need.

I can explain to my assistant how to repeat the operation, giving it more precise information about what I want, so that it knows how to prepare the data the next time I make a similar request. However, if someone else asks it for something similar, it may still get it wrong by following my directions, simply because different people have different needs.

So, instead of aiming for a system that pulls out information in a generic way, if I could teach it how to do it for ME, wouldn’t that be far more useful than a generic system that probably doesn’t fit my needs? The interesting thing is that this is easier to build than an omniscient system.
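A minimal sketch of that per-user idea, again with hypothetical names rather than real product code: the learned preference is keyed by user as well as by request, so my explanation of the format I want does not overwrite what the system has learned for somebody else.

```python
class PersonalizedAssistant:
    """Toy sketch: the same request can map to different formats per user."""

    def __init__(self):
        # (user, request) -> preferred output format, learned from explanations
        self.preferences = {}

    def teach(self, user: str, request: str, output_format: str) -> None:
        self.preferences[(user, request)] = output_format

    def extract(self, user: str, request: str) -> str:
        fmt = self.preferences.get((user, request), "generic table")
        return f"Extracting '{request}' as: {fmt}"

assistant = PersonalizedAssistant()
assistant.teach("manager", "sales data", "Excel, one sheet per region, totals row")
print(assistant.extract("manager", "sales data"))  # uses the manager's format
print(assistant.extract("analyst", "sales data"))  # falls back to the generic one
```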

Even in this case, however, the system will not be intelligent or thinking, but it will seem to have the intelligence necessary to help me do my job. If you think about it, it’s like saying that fallibility makes you smart, which, apart from being very heartening for all of us, is probably the right way to create useful tools.

So that we never forget this aspect, we have posted one of (in my opinion) Alan Turing’s most beautiful quotes:

“If a machine is expected to be infallible, it cannot also be intelligent.”

I would say that his pragmatism still resonates even after many years.

Stay tuned!

Written by

Michele Aponte