Artificial intelligence, human heart
“Modern technology brings with it potential social risks that we must address,” adds Kjetil Kalager, Head of Amesto’s AI lab, which works on artificial intelligence, machine learning and analysis.
He believes we must be conscious that machine learning, in many ways, reflects the people who train it. This means that AI solutions will often carry the same shortcomings, perceptions or opinions as those who developed them.
The algorithms must be trained on data that reflects both women and men, as well as different ethnicities, religious beliefs and other factors. We have to create good datasets to feed the algorithms, and the underlying data must represent multiple perspectives and different types of people.
A major challenge is that the data used to create AI is generally not representative enough. The data does not capture the differences between men, women or specific ethnic groups in the way we do. The first facial recognition solutions are a good example of this: the data they were built on was drawn largely from the people who had developed them, primarily white men.
If you create AI solutions without critical thinking and diversity considerations, you risk outcomes such as young black men being jailed while young white men receive lighter sentences, something that actually happens in the US legal system. The issue is often that problems from reality are manifested in the data, and the AI copies and repeats those errors.
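One simple way to surface this kind of bias is to compare a model's decision rates across demographic groups. The sketch below uses entirely hypothetical data and group labels; it is only an illustration of the measurement idea, not any particular production system.

```python
# Minimal sketch with hypothetical data: compare a model's approval
# rate per demographic group to surface skewed outcomes.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs.
    Returns the approval rate for each group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical decisions from a model trained on skewed data
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                              # per-group approval rates
print(f"demographic-parity gap: {gap:.2f}")  # 0.50 here: a large disparity
```

A gap near zero would mean the groups are treated similarly on this metric; a large gap, as in this toy data, is a signal to inspect the training data and the model.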
So how can we manage to create objectivity and neutrality in AI solutions?
“We have to increase interdisciplinary work and build teams consisting of people with different experiences, opinions, beliefs, etc. in order to create a representative reality. We need to be able to think more than one thought at a time and to view expertise, data and diversity in relation to one another,” says Kristine.
“We possess the truth. And with the truth comes great social responsibility. An algorithm will, for example, always reinforce itself and our job is to ensure that it reinforces the correct perspective. We need to code algorithms that take diversity into account.”