A tragic scandal at the UK Post Office highlights the need for legal change, especially as organizations embrace artificial intelligence to enhance decision-making.
There are always small hardware quirks to account for, but when we’re talking about machine learning, which isn’t directly programmed, it’s much less straightforward to blame the developers.
The issue is that computer systems are now used to whitewash mistakes or biases with a veneer of objective impartiality. Even an accounting system’s results are taken as fact.
Consider that an AI trained on data from the history of policing and criminal cases might make racist decisions, because the dataset includes plenty of racist bias, but it’s very easy for the people using it to say “welp, the machine said it so it must be true”. The responsibility for mistakes is also abstracted away, because the user and even the software provider can each say they had nothing to do with it.
In the example you gave I would actually put the blame on the software provider. It wouldn’t be ridiculously difficult to anonymize the data: get rid of name, race, and gender, and leave only the information about the crime committed, the evidence, any extenuating circumstances, and the judgment.
It’s more difficult than simply throwing in all the data, but it can and should be done. The result could still contain some bias, based on things like the location of the crime, but the bias would already be greatly reduced.
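The kind of anonymization described above is mostly column-dropping. Here’s a minimal sketch; the field names are made up for illustration, not from any real dataset, and note that location and time are deliberately kept, which is exactly where the residual bias mentioned above can hide:

```python
# Hypothetical sketch: strip personally identifying fields from a case
# record before training, keeping only offence-related information.
# All field names here are illustrative.

KEEP = {"offence", "evidence", "extenuating_circumstances",
        "judgment", "location", "time"}

def anonymize(record: dict) -> dict:
    """Return a copy of the record with only the whitelisted fields."""
    return {k: v for k, v in record.items() if k in KEEP}

case = {
    "name": "J. Doe", "race": "redacted", "gender": "redacted",
    "offence": "theft", "evidence": "CCTV footage",
    "extenuating_circumstances": "none", "judgment": "6 months",
    "location": "district 4", "time": "2019-03-01",
}
print(anonymize(case))  # name, race, and gender are gone
```

A whitelist (keep only known-safe fields) is the safer design here; a blacklist silently leaks any identifying field you forgot to list.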
I don’t think you can completely anonymize data and still end up with useful results, because the AI will be faced with human inconsistency and biases regardless. Take away personally identifiable information and it might mysteriously start behaving more harshly toward certain locations, like, you know, districts where mostly black and poor people live.
We’d need a reckoning with our societal injustices before we can determine what data can be used for many purposes. Unfortunately, many of the people responsible for these injustices are still in place, and they will be the ones deciding whether the AI’s output serves their purposes or not.
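The proxy effect described above can be shown with a toy example (all data synthetic and hypothetical): race is removed from the records, but because the historical judgments were harsher in one district, any model fit to those judgments reproduces the disparity through location alone.

```python
from statistics import mean

# Synthetic, anonymized sentencing records: no name, race, or gender.
# Historically, the same offence drew longer sentences in district_4.
records = [
    {"district": "district_4", "offence": "theft", "sentence_months": 12},
    {"district": "district_4", "offence": "theft", "sentence_months": 14},
    {"district": "district_1", "offence": "theft", "sentence_months": 6},
    {"district": "district_1", "offence": "theft", "sentence_months": 5},
]

def recommend(district: str) -> float:
    """Naive 'model': recommend the historical average for the district."""
    return mean(r["sentence_months"] for r in records
                if r["district"] == district)

print(recommend("district_4"))  # 13.0
print(recommend("district_1"))  # 5.5  -- same offence, half the sentence
```

The model never sees race, yet it faithfully learns the biased historical pattern, because district stands in for it.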
The “AI” that I think is being referenced is one that instructs officers to patrol certain areas more heavily based on crime statistics. Because racist officers often patrol black neighbourhoods more heavily, the crime statistics there are higher (more crimes are caught and reported because more eyes are on the area).
This leads to a feedback loop: the AI looks at the crime stats for certain areas, picks out the black-populated ones, then further increases patrols there.
In the above case, no details about the people are needed, only location, time, and the severity of the crime. The AI is still being racist despite race not being in the dataset.
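The feedback loop above can be simulated with made-up numbers. In this hypothetical sketch, two districts have identical true crime rates, reports scale with the number of patrols present (more eyes, more reports), and each round a patrol moves from the lower-reporting district to the higher one:

```python
# Toy simulation of the patrol feedback loop; all numbers are invented.
TRUE_CRIME_RATE = 100            # identical underlying crime in A and B
patrols = {"A": 10, "B": 11}     # district B starts with one extra patrol

for _ in range(8):
    # Reports are proportional to eyes on the street, not to actual crime.
    reported = {d: TRUE_CRIME_RATE * patrols[d] for d in patrols}
    hot = max(reported, key=reported.get)
    cold = min(reported, key=reported.get)
    # The "AI" reallocates a patrol toward the higher-reporting district.
    if patrols[cold] > 1:
        patrols[cold] -= 1
        patrols[hot] += 1

print(patrols)  # the initial one-patrol gap has widened dramatically
```

Even though both districts commit exactly the same amount of crime, the tiny initial imbalance compounds every round, which is the mechanism the comment describes, with race entering only through which district started with the surplus.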