Why Don’t We Trust Machines When We Obviously Should?
And why must humans stay in the automation loop? How can we create a better future for the human-machine relationship?
‘Humans are inscrutable. Infinitely unpredictable. This is what makes them dangerous.’ — Daniel H. Wilson
Previously, I talked about how AI could change the human-machine relationship. I posed the following questions:
Who should be making the decisions in an autonomous car? Should humans always be able to overrule robot decisions? What if you only have a split second to react? And will the answer change if your loved ones are in the car?
Or consider this scenario: if you were going to court tomorrow, would you choose an algorithm that lacks empathy, or a human judge prone to bias and error, to decide your sentence?
‘Even knowing that the human judge might make more errors, the offenders still prefer a human to an algorithm. They want that human touch,’ said Mandeep Dhami, a Professor of Decision Psychology.
It seems that we don’t trust machines. There are other examples too: while research indicates that autonomous vehicles are safer, nearly half of Americans say they would prefer not to ride in a self-driving car.
We are actively looking for ways to address AI’s transparency and fairness issues. Yet we seem far less concerned that the human brain is also a black box we know little about. Why is that?