"Expertise, opacity, and trust in AI systems"
Boisseau, É. "Expertise, opacity, and trust in AI systems", Synthese 207, 104 (2026). https://doi.org/10.1007/s11229-026-05484-2
Publication in Synthese of an article, whose abstract follows:
Abstract:
This paper critically examines a family of arguments by analogy which suggest that the trust granted to an AI system should mirror that usually granted by a layperson to a human expert. I particularly dispute the idea that, on the grounds that both share some degree of opacity, human experts and AI systems can be considered epistemic ‘black boxes’ and can both be subject to the same blind trust on the part of non-experts. To uncover the problematic nature of this rather widespread analogy, I proceed by identifying a form of ambivalence in the notion of opacity it mobilises, as well as a number of highly charged presuppositions concerning expertise (notably relating to a kind of obsession with what is sometimes dubbed its ‘veritistic’ character). I suggest that such a reductionist, monomaniacal conception of expertise is flawed, negligent, or inadequate. The other, forgotten facets of expertise are not merely cosmetic but constitutive of it. I show that artificial systems cannot instantiate them, and that we cannot expect them ever to do so.
