
Artificial Intelligence: let’s open the box

by Michele Aponte | Mar 29, 2021 | Blog

We can say, without fear of being proven wrong, that artificial intelligence is entering every aspect of our lives. Often we don’t even realize it: we treat the devices we use and the applications we interact with as black boxes, without caring how they work.

The problem with this approach is that it leaves the internal workings to our imagination, something we usually only notice, or reflect on, when something goes wrong. It is a bit like a car: at driving school they explain the operating principles of an internal combustion engine, but until the car breaks down all we need to know is how to drive it. When it breaks, though, we try to understand what may have happened, and that is when we start imagining how it works inside in order to find a solution. In simple cases we may even manage on our own: if a tire is punctured, for example, we quickly work out that we have to replace it with the spare. But if we hear a metallic noise, a warning light comes on or the car does not start, we rely either on our past experience or on that of a friend “who understands cars”. In the worst cases, we turn to a specialist who knows exactly how it works and can solve the problem for us.

Now, no one says that to own a car you have to be a mechanic, but knowing a minimum about how it works helps us avoid being fooled by an unscrupulous “insider”, and perhaps solve the simplest problems on our own. So let’s try to understand the operating principles of artificial intelligence applications at the highest possible level, with the aim, at the very least, of setting aside the fear of being wiped out by some sinister artificial consciousness, and of not going wide-eyed when we hear words like Deep Learning and Machine Learning, just as we might when the mechanic talks to us about the lambda probe, the connecting rod or the manifold.

What’s in the box

Despite being a technical term, the word algorithm has become part of everyday speech: a finite sequence of steps to follow to carry out a task. A bit like a cooking recipe, where I follow the instructions step by step to make my dish. A key feature of algorithms is that they accept data as input and return an exact result.
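To make the idea concrete, here is a minimal sketch in Python (the function name and the rules are made up for illustration): a classic algorithm is a fixed sequence of checks that always gives the same exact answer for the same input.

```python
def is_valid_upload(filename: str, size_in_bytes: int) -> bool:
    """A classic algorithm: a fixed sequence of checks with an exact yes/no answer."""
    allowed_extensions = (".jpg", ".jpeg", ".png")
    max_size = 5 * 1024 * 1024  # 5 MB

    # Step 1: check the file extension.
    if not filename.lower().endswith(allowed_extensions):
        return False
    # Step 2: check the file size.
    if size_in_bytes > max_size:
        return False
    # Every step is explicit: the same input always gives the same result.
    return True


print(is_valid_upload("cat.jpg", 120_000))   # True
print(is_valid_upload("cat.bmp", 120_000))   # False
```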

Artificial intelligence, on the other hand, is used when we cannot define these steps precisely, and we accept a result accompanied by a value indicating how much we can trust it. In other words, we accept that the machine tells us there is a cat in a photo with 80% confidence, whereas a classic algorithm would tell us, with certainty, that the cat either is or is not in the photo.

Although we would always like certain answers, in many applications this information is enough: it lets us make a decision automatically, or request human intervention when the confidence value is not high enough for an automatic one. Suppose you want to let the users of your site upload photos, but you want to make sure no violent or offensive image gets through: with artificial intelligence you can discard a photo immediately, or save it as pending and ask an administrator to review it when the detection confidence is, for example, between 50% and 80%.
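Here is a minimal sketch of that decision logic in Python. The detect_offensive_content function is a made-up placeholder standing in for a real image-classification model, and the thresholds are simply the ones from the example above.

```python
import random


def detect_offensive_content(photo_bytes: bytes) -> float:
    """Placeholder for a real model: returns a confidence between 0.0 and 1.0."""
    return random.random()  # stand-in value, just to make the sketch runnable


def moderate_photo(photo_bytes: bytes) -> str:
    """Decide what to do with an uploaded photo based on the model's confidence."""
    confidence = detect_offensive_content(photo_bytes)

    if confidence >= 0.8:
        return "reject"          # confident enough to discard automatically
    if confidence >= 0.5:
        return "pending_review"  # save the photo and ask an administrator
    return "accept"              # low confidence of offensive content: publish it


print(moderate_photo(b"...image bytes..."))
```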

How do we get something like this? We have two possibilities: one is to teach the machine to recognize a cat in a photo by giving it lots of photos along with an indication of whether the cat is present or not; the other is to rely on similarities in the data, looking for patterns. In the first case we talk about supervised learning, in the second about unsupervised learning.
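A toy contrast between the two approaches, assuming the scikit-learn library is installed; real applications would work on images, but plain numbers keep the idea visible.

```python
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Supervised learning: every example comes with its label (1 = cat, 0 = no cat).
features = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
labels = [0, 1, 0, 1]
classifier = LogisticRegression().fit(features, labels)
print(classifier.predict_proba([[0.85, 0.75]]))  # confidence for each class

# Unsupervised learning: no labels at all, the algorithm groups similar data by itself.
clusterer = KMeans(n_clusters=2, n_init=10).fit(features)
print(clusterer.labels_)  # which group each example was assigned to
```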

There is actually a third approach, called reinforcement learning, in which the system is taught to respond correctly by receiving positive feedback every time it gets the answer right. This approach is widely used, for example, in games such as chess (if you haven’t already done so, read Salvatore’s article on chess and artificial intelligence).
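As a rough idea of that feedback loop (nothing like a real chess engine), here is a toy sketch in which an “agent” learns, from positive feedback alone, which of two made-up moves pays off more often.

```python
import random

# A toy reinforcement-learning loop: the agent learns which action earns more
# positive feedback. Real systems are far richer, but the core idea is the same:
# reward good guesses, adjust the estimates, repeat.
action_values = {"move_a": 0.0, "move_b": 0.0}  # the agent's current estimates
learning_rate = 0.1


def play(action: str) -> float:
    """Stand-in environment: 'move_b' is secretly the better move."""
    return 1.0 if action == "move_b" and random.random() < 0.8 else 0.0


for _ in range(200):
    # Mostly pick the action that looks best so far, sometimes explore at random.
    if random.random() < 0.1:
        action = random.choice(list(action_values))
    else:
        action = max(action_values, key=action_values.get)
    reward = play(action)
    # Nudge the estimate toward the feedback just received.
    action_values[action] += learning_rate * (reward - action_values[action])

print(action_values)  # 'move_b' typically ends up with the higher estimated value
```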

So we can distinguish two important moments in building an artificial intelligence application: the learning (or training) phase, in which a model is created from the available data, and the prediction phase, in which we use that model to evaluate new data. As Salvatore has already explained, the main cloud providers have trained their services for you and let you make your predictions by calling a simple endpoint.
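As a hedged sketch of what the prediction phase can look like in practice: the URL, the API key and the response format below are invented for illustration, and it assumes the Python requests library; every provider has its own real endpoints and SDKs.

```python
import requests

# A hypothetical prediction call to a pre-trained cloud service.
with open("photo.jpg", "rb") as f:
    response = requests.post(
        "https://vision.example.com/v1/detect-cat",      # hypothetical endpoint
        headers={"Authorization": "Bearer MY_API_KEY"},  # placeholder credentials
        files={"image": f},
    )

result = response.json()
print(result)  # e.g. {"cat": true, "confidence": 0.8}, an invented example format
```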

The main applications of artificial intelligence we use every day are based on supervised learning, where we have both the input data (the famous photos with and without cats) and the expected result (whether the cat is there or not). The internal mechanism is based on a system of weights, whose values are adjusted with every piece of data we provide during learning. If you think about it, this is what we do with children when we teach them to recognize animals: we show them different pictures and say the animal’s name aloud, and after a while the child learns to recognize the animals even in images they have never seen before.
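To give a feel for “learning by adjusting weights”, here is a toy sketch based on the classic perceptron rule, with made-up numbers standing in for photo features; real models work on the same principle, just with millions of weights and far richer inputs.

```python
# Each training example nudges the weights so the prediction gets closer
# to the known answer (1 = cat, 0 = no cat).
training_data = [
    ([0.9, 0.8], 1),  # features of a "cat" photo (made-up numbers)
    ([0.1, 0.2], 0),  # features of a "no cat" photo
    ([0.8, 0.9], 1),
    ([0.2, 0.1], 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):                      # go through the data several times
    for features, label in training_data:
        prediction = sum(w * x for w, x in zip(weights, features)) + bias
        output = 1 if prediction > 0 else 0
        error = label - output           # how wrong we were on this example
        # Nudge every weight in the direction that reduces the error.
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

print(weights, bias)  # the weights have "learned" to separate the two groups
```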

AI, Machine Learning, Deep Learning, Neural Networks: what’s the difference?

If you are interested in this sector you will surely have heard the terms AI, Machine Learning, Neural Networks, and Deep Learning, often used incorrectly as synonyms.

Let’s start with the term Artificial Intelligence, often abbreviated to AI, which is the most generic of all and encompasses every technique and application of this interdisciplinary science. Inside this large container we have Machine Learning techniques, that is, the techniques with which we teach a machine to predict values starting from training on a dataset (supervised, unsupervised, or by reinforcement).

Within Machine Learning it is possible to use Neural Networks, a model inspired by (but quite different from) the way our neurons work. Imagine a layered network where, in the simplest case, you have an input layer, an inner layer and an output layer. To obtain good results such a small network is usually not enough, so we increase the number of input elements and of internal layers. The network then becomes deeper, taking us into what is called Deep Learning.
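A minimal sketch of such a layered network, assuming the PyTorch library is installed; the layer sizes are arbitrary and the networks are untrained, this only shows what “adding layers” means.

```python
import torch
from torch import nn

# A minimal layered network: an input layer of 4 values,
# one inner (hidden) layer of 8 neurons, and a single output.
shallow_net = nn.Sequential(
    nn.Linear(4, 8),   # input layer -> inner layer
    nn.ReLU(),
    nn.Linear(8, 1),   # inner layer -> output layer
    nn.Sigmoid(),      # squash the output into a 0-1 "confidence"
)

# "Going deep" simply means stacking more inner layers (and usually more inputs).
deep_net = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

example_input = torch.rand(1, 4)   # one sample with 4 made-up features
print(shallow_net(example_input))  # an untrained prediction between 0 and 1
```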

Unfortunately it is impossible to go into more detail without introducing a bit of mathematics, so for the moment I leave you with a one-minute video that can serve as a first approach.

Conclusions

For simplicity, we won’t launch into a more detailed description of neural networks, something we may return to in a future article; I just wanted to give you the fundamental concepts so you can talk to the mechanic on duty and understand, at a high level, what they are saying. I also hope it is now clear that there is nothing intelligent here in the human sense of the term: we are simply exploiting the enormous computing power available today to apply mathematical and statistical models from the 1950s to very large amounts of data.

Disappointed? Don’t worry, because even with these tools alone we can do very interesting things: keep following us to find out which ones!

Written by

Michele Aponte