Seema Chokshi

Here's why we need to demand greater explainability from algorithms than from human decision makers

The only explanation we expect of human decision makers is one that gives the practical reasons behind their decisions. These reasons rest on the beliefs people hold, and as long as a decision is consistent with those beliefs, we are usually satisfied with the reasoning. We do not stop to question the physical workings and design of the brain itself, which remain well beyond complete understanding even for brain experts.


In the case of algorithms, practical reasons are often not enough. The design of an algorithm (developed and controlled by the people who built it) has a direct bearing on the final outcome, and a faulty design can produce inaccurate results that the system falsely “believes” to be correct. In philosophical terms, the “intentional stance” is not enough to fully understand the outcomes of an algorithm, especially because, unlike the human brain, the physical design of the machine and the algorithm is fully under our control. The underlying assumption is that, ideally, we want a full understanding of any decision-making system whenever the cost of obtaining that understanding is not very high. By this logic we should expect greater explainability from algorithms than from humans, since achieving it is feasible: we are, after all, the creators of these algorithms.
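To make this concrete, here is a minimal sketch (Python with scikit-learn, entirely hypothetical synthetic data, and a deliberately introduced label bug) of how a design fault, rather than the system’s own “reasoning”, can drive a confidently wrong answer:

```python
# A minimal sketch of a design fault the system cannot "see": labels are
# accidentally inverted somewhere in the pipeline, so the model confidently
# reports the wrong class while remaining internally consistent with its
# (faulty) training signal. Data and the bug are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y_true = (X[:, 0] + X[:, 1] > 0).astype(int)

y_buggy = 1 - y_true                       # the design fault: flipped labels

model = LogisticRegression().fit(X, y_buggy)
probe = np.array([[2.0, 2.0]])             # clearly a class-1 point
print("predicted class:", model.predict(probe)[0])            # -> 0 (wrong)
print("model confidence:", model.predict_proba(probe).max())  # yet high

# The model's "practical reasons" (its learned weights) are perfectly
# consistent; the outcome is wrong because of the design, not the reasoning.
```

The point of the sketch is that no amount of querying the model’s stated “beliefs” would surface the bug; only access to the design itself would.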


For proper black-box algorithms such as deep neural networks, we cannot assign valid reasons to the final outcome. In such algorithms, the basic input features are combined into higher-level representations that play a decisive role in assigning a value to the final output. These higher-level combinations are abstract and cannot be understood or explained as intuitive, practical reasons for the outcome.
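As a minimal illustration (plain NumPy, with hypothetical random weights and made-up feature names), the sketch below shows how each hidden unit of a small network blends every input feature, so that no single activation maps onto a practical reason a human could state:

```python
# A toy feedforward net: 4 named inputs -> 3 hidden units -> 1 output.
# Weights are random stand-ins for trained values; the structure is the point.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden (hypothetical trained weights)
W2 = rng.normal(size=(3, 1))   # hidden -> output

def relu(z):
    return np.maximum(0.0, z)

# Input features a human could reason about (illustrative only):
# income, debt, tenure, age
x = np.array([0.8, 0.2, 0.5, 0.9])

hidden = relu(x @ W1)          # the abstract "higher-level combinations"
output = hidden @ W2           # final score

print("hidden activations:", hidden)
print("output score:", output.item())
# Each hidden activation is a nonlinear mix of all four inputs; none of them
# corresponds to a practical reason like "income was high".
```

Interpretability methods can approximate which inputs mattered, but the hidden representations themselves have no direct translation into the belief-based reasons we accept from humans.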

It can be argued, then, that for proper black-box algorithms we need to understand the design features of the algorithm, even though we never question the design of thinking in the human brain. By this logic the double standard makes sense. In other words, the need for explainability should be much greater for machine learning algorithms than for human decision makers.



References:

·  Jakob Schoeffer, Niklas Kuehl, Yvette Machowski. “There Is Not Enough Information”: On the Effects of Explanations on Perceptions of Informational Fairness and Trustworthiness in Automated Decision-Making.

·  Samuli Laato, Miika Tiainen, A.K.M. Najmul Islam, Matti Mäntymäki. How to explain AI systems to end users: a systematic literature review and research agenda.

·  Mario Günther, Atoosa Kasirzadeh. Algorithmic and human decision making: for a double standard of transparency.
