Learned prejudices in artificial intelligence.
How can we ensure that artificial intelligence does not adopt our prejudices and continue, or in the worst case reinforce, humans’ unconscious discrimination and biases? “Prejudice is not necessarily deliberate, but a lot takes place in our unconscious that can entrench the diversity patterns we have long been trying to change,” explains Kristine Hofer Næss from Amesto. So how do we counteract this?
Kristine Hofer Næss, CEO at Amesto TechHouse.
UNCONSCIOUS PREJUDICES ARE TRANSFERRED TO THE ALGORITHMS
“Our unconscious prejudices are transferred to the algorithms,” explains Kristine, who is Chief Customer Officer at Amesto and Head of ODA, the largest network for women in technology in the Nordic region. She notes that most technology is developed by a relatively homogeneous group, the vast majority of whom are men, and that diversity challenges related to, for example, gender and ethnicity have surfaced in a number of cases.
The examples referenced by Kristine are only a Google search away. Amazon scrapped its AI-based recruitment tool when it turned out that the AI solution “didn’t like women”. A while back, if you typed in ‘Johanne’ on LinkedIn, LinkedIn would ask whether you meant ‘Johan’.
“And have you ever considered the temperature settings at the office?” Kristine asks. The temperature at which men are most productive is actually a few degrees lower than it is for women. Do you know how the standard office temperature was determined?
AUTOMATION AND TECHNOLOGY ARE BOTH GOOD
But we need to have a reflective relationship with them. We must consider not only how to train AI, but who trains it. The challenge lies not only in the fact that the algorithms have often been developed by men, but also in the fact that the datasets underlying the algorithms are themselves prejudiced and are not adequately corrected. Our job is therefore to ensure that the algorithms do not reinforce any undesirable tendencies.
The human brain is wired to be as efficient as possible, and much of our thinking takes place in the unconscious mind. Creating neutral and non-discriminatory AI solutions poses many different challenges, because we are not yet good enough at recognising our own prejudices. Diversity among those who create the technology is therefore of particular importance.
“It is time to put AI expertise and diversity higher up on the agenda. Surely we cannot have spent this many years balancing out the inequalities in society, only to lose it all in a jungle of algorithms?”
ARTIFICIAL INTELLIGENCE, HUMAN HEART
“Modern technology brings with it potential social risks that we must address,” adds Kjetil Kalager, Head of Amesto’s AI lab, which works on artificial intelligence, machine learning and analytics.
He believes we must be conscious of the fact that machine learning, in many ways, reflects the people who train it. This means that AI solutions will often have the same shortcomings, perceptions and opinions as those who developed them.
The algorithms must be trained on data that reflects women and men alike, as well as different ethnicities, religious beliefs and other factors. We have to create good datasets to feed the algorithms, and those datasets must represent multiple perspectives and different types of people.
A major challenge is that the data used to create AI is generally not representative enough. The training data does not capture the differences between men, women and specific ethnic groups the way we do. The first facial recognition solutions are a good example of this: the training data was based on the people who had developed the solution, primarily white men.
If you create AI solutions without critical thinking and diversity considerations, you run the risk of young black men being jailed while young white men receive lighter sentences, something that has actually happened in the US legal system. The problem is often that flaws in reality are embedded in the data, and the AI will copy and repeat those errors.
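The mechanism described above can be made concrete with a minimal sketch. All data, groups and scores below are hypothetical: a simple threshold classifier is “trained” on data dominated by group A, and the error rate is then measured per group. Because group B is barely present in the training data, and its qualified candidates score differently on the (itself biased) feature, the model copies and repeats that skew.

```python
# Minimal sketch (entirely hypothetical data) of how an unrepresentative
# training set skews a model: a threshold classifier is fitted on data
# dominated by group A, then evaluated separately per group.

# Synthetic training rows: (group, score, label); label 1 means "qualified".
train = [
    ("A", s, 1) for s in (0.70, 0.75, 0.80, 0.85, 0.90)
] + [
    ("A", s, 0) for s in (0.10, 0.20, 0.30, 0.40, 0.50)
] + [
    ("B", 0.50, 1),  # group B is barely present in the training data
]

# "Training": put the threshold midway between the mean positive and mean
# negative score -- a choice dominated by group A's score distribution.
pos = [s for _, s, y in train if y == 1]
neg = [s for _, s, y in train if y == 0]
threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def predict(score):
    return 1 if score >= threshold else 0

# Evaluation data: group B's qualified candidates simply score lower on
# this feature, e.g. because the feature itself reflects historical bias.
test = [("A", 0.80, 1), ("A", 0.30, 0), ("B", 0.45, 1), ("B", 0.50, 1)]

error_rate = {}
for group in ("A", "B"):
    rows = [(s, y) for g, s, y in test if g == group]
    error_rate[group] = sum(predict(s) != y for s, y in rows) / len(rows)

print(error_rate)  # → {'A': 0.0, 'B': 1.0}: every group B candidate is rejected
```

The point of the sketch is that nothing in the code is malicious: the bias comes entirely from who is represented in the data the threshold was fitted on.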
So how can we manage to create objectivity and neutrality in AI solutions?
“We have to increase interdisciplinary work and build teams consisting of people with different experiences, opinions, beliefs and so on in order to represent reality. We need to be able to hold more than one thought at a time and to view expertise, data and diversity in relation to one another,” says Kristine.
In a sense, we define the truth. And with that comes great social responsibility. An algorithm will, for example, always reinforce itself, and our job is to ensure that it reinforces the right perspective. We need to code algorithms that take diversity into account.
MORE WOMEN IN TECH
Today, women make up 29% of the IT industry in Norway, and even fewer work in AI. This means that the vast majority of machine learning is performed by men.
“In order to break with established patterns, we need to think differently and modernise ourselves,” says Kristine. We need to employ different types of people, while also ensuring that we consider diversity in everything we do. Not least, we need to raise expertise levels in Norway, both to ensure our competitiveness and to create the solutions society will need in the future.
TECH IS NOT ONLY CREATED IN SILICON VALLEY!
Kristine thinks that a lot comes down to having good role models and involvement from senior management and highlights Lene Diesen, who heads up the AI solution Semine, as a good example. She believes that Diesen is a pioneer who can show young girls that it is both possible and exciting to work with AI.
“I also think it is crucial that we attract diversity by having attractive values in the company,” Kjetil adds. Even though we are fundamentally a commercial technology company, we work in accordance with what we refer to as the triple bottom line. This means that we are not only driven by profit, but also care about the people around us and the planet we live on.
He considers it essential for senior management to genuinely care about this.
“We have to ensure that the AI solutions of the future do not inhibit but rather promote diversity, and we cannot risk discrimination in the form of prejudiced algorithms. Ultimately, the decisions are made by the people behind the solutions. Do they make good choices?”