A tragic scandal at the UK Post Office highlights the need for legal change, especially as organizations embrace artificial intelligence to enhance decision-making.
However, if you are at the receiving end of a mistake made by either a classic algorithm or a machine learning algorithm, then you probably won’t care whether it was the computer or the programmer making the mistake. In the end, the result is the same.
“Computers make mistakes” is just a way of saying that you shouldn’t blindly trust whatever output the computer spits out.
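To make that distinction concrete, here is a minimal, purely illustrative sketch (the function names, thresholds, and error rate are all invented, not taken from any real system): a classic algorithm fails the same way on every run and the faulty line can be pointed at, while a statistical model is wrong by design in some fraction of cases with no single line to blame. The person being flagged gets the same outcome either way.

```python
import random

# Classic algorithm: a hand-written rule. If it wrongly flags people, that is a
# programmer's mistake, reproduced identically on every run and traceable to
# this exact line. (The overly strict 0.01 tolerance is the deliberate "bug".)
def classic_flags_shortfall(declared: float, recorded: float) -> bool:
    return abs(declared - recorded) > 0.01  # intended tolerance was 1.00

# Statistical stand-in: right most of the time, wrong by design in some
# fraction of cases, with no single faulty line of code to point at.
def model_flags_shortfall(declared: float, recorded: float,
                          error_rate: float = 0.05) -> bool:
    correct_answer = abs(declared - recorded) > 1.00
    return (not correct_answer) if random.random() < error_rate else correct_answer

if __name__ == "__main__":
    print(classic_flags_shortfall(100.00, 99.50))  # True on every single run
    print(model_flags_shortfall(100.00, 99.50))    # usually False, True in ~5% of runs
```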
“if you are at the receiving end of a mistake made by either a classic algorithm or a machine learning algorithm, then you probably won’t care whether it was the computer or the programmer making the mistake”
I’m absolutely expecting corporations to get away with the argument that “they cannot be blamed for the outcome of a system that they neither control nor understand, and that is shown to work in X% of cases”. Or at least to spend billions trying to.
And in case you think traceability doesn’t matter anyway, think again.
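As a rough illustration of what traceability can mean in practice (everything here, from the field names to the version string, is hypothetical), one option is to write every automated decision to an append-only audit log together with its inputs, the rule that fired, and the code version, so a disputed outcome can later be reconstructed and attributed:

```python
import json
import time

# Hypothetical sketch: persist enough context with each automated decision that
# a disputed outcome can be traced back to specific inputs, a specific rule,
# and a specific version of the code that produced it.
def log_decision(case_id: str, inputs: dict, rule_fired: str, outcome: bool,
                 code_version: str = "v1.2.3") -> str:
    record = {
        "timestamp": time.time(),
        "case_id": case_id,
        "inputs": inputs,
        "rule_fired": rule_fired,
        "code_version": code_version,
        "outcome": outcome,
    }
    line = json.dumps(record, sort_keys=True)
    print(line)  # a real system would append this to write-once audit storage
    return line

log_decision("case-0042", {"declared": 100.00, "recorded": 99.50},
             rule_fired="shortfall_tolerance_0.01", outcome=True)
```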
IMHO it’s crucial we defend the “computers don’t make mistakes” fact for two reasons:
1. Computers are defined as working through the flawless execution of rational logic. And somehow, I don’t see a “broader” definition working in favor of the public (i.e. less waste, more fault-tolerant systems), but strictly in favor of mega-corporations.
2. If we let public opinion mix up “computers” with the LLMs running on them, we will get even more restrictive, ultra-broad legislation against the general public. Think “3D printer ownership heavily restricted because some people printed guns with them”, but on an unprecedented scale. All we will have left are smartphones, because we are not their owners.
I mostly agree with this distinction.
You’ll care whether it was the computer or the programmer if you’re trying to sue someone and you want to win.