In this TED Talk, the technologist Kriti Sharma raises a serious issue. Artificial Intelligence algorithms are routinely used to make important decisions that affect our lives: not only our consumer preferences, but also, for example, how much we pay for car insurance or whether or not we get a job interview.

And these decisions are all filtered through a series of assumptions about our identity, our race, our gender, our age and our likely behavior.

One might assume that these assessments are objective, because a machine performs them. In reality, Artificial Intelligence learns from the data it is given to train its machine-learning model. If this data is influenced or distorted by our biases, the algorithms on which Artificial Intelligence is based are therefore likely to produce distorted results. In addition, it is very difficult to keep every aspect of Artificial Intelligence's self-learning (so-called "deep learning") under control, as it is based on highly complex computing architectures that also include phases where the machine decides independently, without human intervention.
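The mechanism is easy to demonstrate. The sketch below is a deliberately naive, hypothetical "hiring" model (all data and names are invented for illustration, not how any real system works): it learns approval rates per group from historical decisions, and because the historical data is skewed, the model faithfully reproduces that skew.

```python
# Toy illustration with invented data: a naive model that learns
# approval rates per group from past (biased) hiring decisions.
from collections import defaultdict

def train(history):
    """Learn the historical approval rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in history:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def predict(model, group, threshold=0.5):
    """Approve a candidate if their group's historical rate clears the threshold."""
    return model.get(group, 0.0) >= threshold

# Historical data reflecting a biased process: group "A" was approved
# far more often than group "B", regardless of individual merit.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 30 + [("B", False)] * 70
)

model = train(history)
print(predict(model, "A"))  # True  -- the model inherits the bias
print(predict(model, "B"))  # False -- and penalizes the other group
```

The model is "objective" in the sense that it applies the same rule to everyone, yet its decisions simply encode the bias already present in the training data; nothing in the code corrects for it.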

The lack of transparency in how these algorithms reach their decisions has led several parties to call for the regulation of Artificial Intelligence. However, this is not an easy path to go down, as it swings between issuing ethical guidelines and applying stricter, more rigorous regulatory and legal measures. Moreover, the technology community, governments and members of civil society do not share the same opinion in this regard.

As for Kriti Sharma, she proposes three simple actions to be taken:

“There are three things we can do. One, we can be aware of our own biases and the bias in machines around us. Two, we can make sure that diverse teams are building this technology. And three, we have to give Artificial Intelligence diverse experiences to learn from”.

It is interesting to note that, beyond the undeniably complex technical and operational questions, these suggestions aim to introduce a set of good practices very similar to those we have been trying to apply to human education for years. But they also require new kinds of skills and abilities:

“We need people who can write and tell stories to help us create ‘personalities’ of Artificial Intelligence. We need people who can solve problems. We need people who face different challenges and we need people who can tell us what are the real issues that need fixing and help us find ways that technology can actually fix it. Because, when people from diverse backgrounds come together, when we build things in the right way, the possibilities are limitless”.