A more ethical artificial intelligence

Feb 15, 2021 | Blog

Ever since artificial intelligence started going through a second period of rapid growth, there has been concern about the problems that this could create for humanity. The fear that machines could kill us off or reduce us to slavery is strangely still very present in our collective imagination. This is almost certainly thanks to Hollywood.

If you don’t know the film in the video, you are either too young or you don’t like this genre: in either case it is a gap that must absolutely be filled!

As you may have guessed from Salvatore’s article and from the huge amount of material on the net, this is certainly not the aspect that should worry us, but rather the serious possibility of developing and using this technology without having ethical principles that act as a guide.

What do we mean by “ethical”?

I think science fiction fans like myself automatically associate the phrase “ethics of artificial intelligence” with Isaac Asimov’s three laws of robotics, the first in particular:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The funny thing is that, if we broaden the meaning of “not injure”, Asimov’s rule can actually summarize the content of the “Ethics Guidelines for Trustworthy AI”, published by the European Union in 2019 as the result of the work of a group of 52 experts drawn from civil society, research, and institutions.

Four principles emerge from this document:

  • Respect for human autonomy: AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans.
  • Prevention of harm: AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings; this entails the protection of human dignity as well as mental and physical integrity.
  • Fairness: The development, deployment and use of AI systems must be fair. The use of AI systems should never lead to people being deceived or unjustifiably curtailed in their freedom of choice.
  • Explicability: The processes need to be transparent, and the capabilities and purpose of AI systems openly communicated, and decisions, to the extent possible, explicable to those directly and indirectly affected.

The complete document is very interesting and easy to read. It even provides guidelines on how to implement the principles it talks about. Give it a read.

What are the dangers?

You will surely remember the success of the Netflix documentary “The Social Dilemma”, which sparked much debate about the artificial intelligence algorithms designed to optimize advertising revenue, and which have had a considerable impact by controlling what we pay attention to while using social media.

But this isn’t the only danger we face. Just think of facial recognition and how it could be used for less than noble purposes. Even when the harm is unintentional, these systems often inherit biases from the data with which the algorithms are trained, data that can be shaped by human prejudice.
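To make that mechanism concrete, here is a minimal sketch, using entirely hypothetical data, of how a naive model trained on prejudiced historical decisions simply reproduces that prejudice instead of correcting it:

```python
from collections import Counter

# Hypothetical historical decisions as (group, approved?) pairs.
# Group "b" was historically approved far less often than group "a".
history = [("a", True)] * 80 + [("a", False)] * 20 + \
          [("b", True)] * 20 + [("b", False)] * 80

def approval_rate(group):
    """Naive 'model': predict using the historical approval frequency."""
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes)

# The model mirrors the prejudice embedded in its training data:
print(approval_rate("a"))  # 0.8
print(approval_rate("b"))  # 0.2
```

Real systems are far more complex, but the principle is the same: a model optimized to fit historical data will faithfully learn whatever bias that data contains.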

Another important aspect is privacy, especially with virtual assistants: a user’s voice should be used only to respond to the requests made, not to build user profiles for targeted advertising. Unfortunately, on various occasions there has been suspicion that recordings were used not to improve the service, a use to which the user can give or deny consent, but to deliver advertising messages.

The big companies, however, have taken steps to adopt their own codes of ethics for artificial intelligence. Microsoft, for example, outlines its principles in six points: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Google is no exception, focusing on benefits for society, safety, and privacy. These are just a few examples: AWS, Apple, and Facebook have moved in the same direction, certainly driven by public opinion and state institutions.

What can we do?

As in all ethical issues, a fundamental step is to talk about it. We can all do this and it can be our first contribution. A further step for companies working in the sector is to always keep these principles in mind, collaborating only with companies and services that do the same.

At Ellycode, we are building a product focused precisely on improving the interaction between artificial intelligence and human beings, so these principles really are a powerful guide for us, and we can’t wait to show you how!

Keep following us.

Written by

Michele Aponte