Over the past decades, social scientists have published studies comparing professionals' predictions of risk with predictions made by statistical models. The comparisons span fields such as medicine (for example, cancer prognosis), finance, and criminal justice; according to a 2000 paper by a team of psychologists at the University of Minnesota, statistical actuarial tools were about 10% more accurate than experts at predicting human behavior.
However, not everything is perfect in the algorithm world.
In 2016, ProPublica claimed that COMPAS, an algorithm used in US courts to inform bail and sentencing decisions, was biased against Black defendants. Northpointe (today Equivant), the company that created COMPAS, responded with a report questioning ProPublica's claim. ProPublica rebutted, academic researchers entered the dispute, and even the Wisconsin Supreme Court cited the controversy in its ruling that upheld the use of COMPAS in sentencing.
Furthermore, it is now public knowledge how Cambridge Analytica was involved in the US elections.
As we can see, algorithms are involved in our everyday lives, and because of all these problems we keep creating new rules to protect us:
In the USA, the FTC will supervise companies using biased algorithms.
In Europe, the Council of Europe published the expert study "Algorithms and human rights – Study on the human rights dimensions of automated data processing techniques and possible regulatory implications" in March 2018. It analyzes the consequences of using algorithms where human rights are involved, with the aim of upholding human rights standards, including rule-of-law principles and judicial processes.
Systems are increasing in complexity and interact with each other's outputs in ways that become progressively impenetrable to the non-mathematical mind.
As such, several actions have been taken:
- The Council of Europe's report does not yet address all aspects of algorithms and human rights, but it highlights concerns and points to possible regulatory options that Europe may consider to minimize adverse effects and to promote good practices.
- The European strategy for data sets out how data should be managed.
- The White Paper on AI outlines the problems AI poses.
- The creation of the AI council establishes a point of contact.
- The AI recommendation covers the use of AI, and the Ethics Guidelines give a set of instructions for AI. There is also a proposed regulation that will soon set the rules of use, as well as a tweet from Ursula von der Leyen explaining all of this: https://twitter.com/i/status/1384815026066894849.
All of the above efforts seem to move in the direction of informing us how we are affected by an algorithm, providing the right to know why an algorithm has made a decision that shapes our lives.
We need regulation of algorithms because they will be, or already are, present in the jobs of new technological fields. Those with the knowledge to understand both algorithms and these regulations will recognize that at the heart of this topic lies an important ethical question: what does it mean for an algorithm to be fair?
Luckily, the generations studying algorithms today will know that there is a mathematical limit to how fair any algorithm can be, and they will be able to discuss openly questions like:
- How do you define 'fair'? Are the scores used to measure fairness themselves fair?
- Who is responsible when human rights are infringed based on algorithmically prepared decisions? The person who programmed the algorithm, the operator of the algorithm, or the human being who implemented the decision?
- What should a judge say when the person being questioned was influenced by an algorithm to commit a harmful act?
- Are algorithms guilty of learning from our biased outputs?
- Is it fair to use algorithms against other algorithms, for example to defeat face recognition?
- Could our preferred stores target us with offers for needs they predict we will develop?
- Are we ready to be moved from our diversity toward the average in order to increase efficiency and reduce risk?
- How much do we want to learn about who we are from our actions?
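The "mathematical limit" mentioned above can be made concrete. A known impossibility result in the fairness literature (discussed by Chouldechova and by Kleinberg et al. in the wake of the COMPAS controversy) shows that when two groups have different base rates of the predicted outcome, a risk score cannot simultaneously equalize positive predictive value and error rates across the groups. The Python sketch below illustrates this with hypothetical numbers chosen purely for illustration:

```python
# Illustration of a fairness impossibility result: if two groups have
# different base rates (prevalence) of the predicted outcome, a score
# cannot have both equal positive predictive value (PPV) and equal
# error rates (FPR, FNR) for both groups.
#
# From the definitions of PPV, FPR and FNR one can derive:
#     FPR = (p / (1 - p)) * ((1 - PPV) / PPV) * (1 - FNR)
# where p is the group's base rate.

def implied_fpr(prevalence: float, ppv: float, fnr: float) -> float:
    """False positive rate forced by a given prevalence, PPV and FNR."""
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Two hypothetical groups with different base rates but the *same*
# PPV (0.8) and the *same* FNR (0.3): their FPRs cannot match.
fpr_group_a = implied_fpr(prevalence=0.5, ppv=0.8, fnr=0.3)  # 0.175
fpr_group_b = implied_fpr(prevalence=0.2, ppv=0.8, fnr=0.3)  # 0.04375

print(f"Group A FPR: {fpr_group_a:.5f}")
print(f"Group B FPR: {fpr_group_b:.5f}")
```

Whichever error rate the algorithm designer chooses to equalize, some other widely accepted fairness measure must then differ between the groups; the choice of which definition of 'fair' to satisfy is an ethical decision, not a mathematical one.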