Ethics of explainable AI: What characterises good explanations?
Opacity
The more opaque, data-based services enter various sectors of society, the more urgent the question becomes whether we understand them sufficiently: Do we know what they do? What they should and shouldn't do? In order to find errors, ensure smooth operation or take responsibility for decisions made, it often seems necessary to be able to understand a system's behaviour. Depending on a person's 'digital literacy', software and robots can be more or less opaque to them; systems can also be deliberately kept opaque from third parties as a trade secret, or remain opaque owing to their genuinely technical properties (Burrell).
Asymmetries of knowledge and power
This lack of transparency becomes problematic when it is accompanied by an inability to assess whether an existing unequal treatment of people by or with an AI system is justified, e.g. in decisions on the granting of loans, medical treatments, the categorisation of insurance classes or the recruitment of staff. After all, we usually consider it justified to treat different people differently for good reasons, such as prohibiting the sale of alcohol to minors. However, if we cannot assess why someone is being treated differently, it is difficult, perhaps impossible, to argue for or against alleged discrimination.
When are explanations sensible, necessary, appropriate?
At present, xAI research is orientated towards the guiding principle of 'human-centric AI' and thus focuses on potential users. From an ethical point of view, further aspects should be taken into account for the good design of these systems. Firstly, explaining is not an end in itself: explanations could conceivably be used to manipulate users or to create acceptance for a technology that is ethically or legally unacceptable. It is therefore important to consider carefully whether, and which, explanations are appropriate in a given situation. Because social situations can only ever be described in idealised form, constellations can arise that no one has anticipated and for which the systems are not sufficiently 'prepared'. From an ethical point of view, meta-strategies should be integrated that enable those involved to deal with such situations purposefully. Secondly, a consistent distinction should be made between ethical requirements and user wishes: how well one can control one's own household robot need not be ethically relevant. If explanations strengthen their users' ability to act, this can also be positive in an ethical sense (autonomy). The two can, however, also come into conflict: a high degree of transparency may enable some users to 'outsmart' the system to their 'personal advantage', while the same transparency could overwhelm others and thus run counter to the idea of equal opportunities.
The specialist group "Applied Ethics, Technology Ethics" is researching these questions in collaboration with colleagues from TRR 318 "Constructing Explainability" and the research network "SustAInable Life-cycle of Intelligent Socio-Technical Systems" (SAIL).