The promise of Explainable AI (XAI) is seductive: make AI decisions transparent, and we will understand, trust, and perhaps even control them. But beneath this technical optimism lies a deeper flaw—XAI is not addressing the root problem. The real issue is not that AI is inexplicable. The real issue is that we are biased—and so is our world.
AI does not invent prejudice. It mirrors what we feed it. If our societies, histories, and institutions are biased, then the datasets derived from them are biased as well. Attempts to “clean” data for fairness often result in ethical distortion: rewriting or erasing inconvenient truths to produce artificial balance.
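The claim that "cleaning" data cannot remove bias can be made concrete. Below is a minimal sketch on fully synthetic data (all numbers and column meanings are hypothetical, chosen only for illustration): even after the sensitive attribute is dropped entirely, a correlated proxy feature still predicts the biased outcome.

```python
# Hypothetical illustration: dropping a sensitive column does not remove bias,
# because proxy features in the remaining data stay correlated with it.
import random

random.seed(0)
rows = []
for _ in range(1000):
    group = random.randint(0, 1)                               # sensitive attribute
    proxy = group if random.random() < 0.9 else 1 - group      # e.g. zip code, 90% aligned
    outcome = group if random.random() < 0.8 else 1 - group    # outcome biased by group
    rows.append((group, proxy, outcome))

# "Cleaning" the data: erase the sensitive column entirely.
cleaned = [(proxy, outcome) for (_, proxy, outcome) in rows]

# The proxy alone still predicts the biased outcome far better than chance (0.5).
agreement = sum(1 for p, y in cleaned if p == y) / len(cleaned)
print(f"proxy-outcome agreement after cleaning: {agreement:.2f}")
```

Analytically, agreement here is about 0.9 × 0.8 + 0.1 × 0.2 = 0.74, well above the 0.5 chance level: the bias survives the deletion, while the deleted column (the "inconvenient truth") is no longer available to audit it.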
Explainability offers a comforting illusion. It makes us feel that transparency is control, and that developers can be held accountable through interpretability. But these narratives hide the structural nature of bias and promote misplaced blame.
A Structural Analogy: XAI as Psychoanalysis
To better understand why XAI fails to solve the real problem, consider the following analogies between AI and human development:

1. Explainable AI = Psychoanalysis
Just as psychoanalysis traces adult behavior back to childhood experiences, XAI traces AI decisions back to model weights or training steps. Both approaches often offer surface-level insights without addressing the structural context that created the behavior.
2. Blaming childhood = Scapegoating developers
We often blame parents for a child’s dysfunction, ignoring societal influences. Similarly, we blame developers for biased AI outputs, even though the training data reflects collective human input.
3. Nature vs. nurture = Code vs. data
Human behavior is shaped by genetics and environment. AI behavior is shaped by programmatic structure and the data it is trained on. Bias arises not from code alone, but from the world AI reflects.
4. A grown-up turned racist = An AI turned biased
Both humans and AI absorb prejudice from their environments. In AI’s case, the environment is historical and contemporary data—news, decisions, behavior patterns.
5. Taking away a phone = Erasing biased data
Removing access to harmful content may limit exposure but does not change the underlying environment. Similarly, deleting biased data may sanitize outputs but also erase important historical context.
6. Banning people = Restricting AI input
Limiting a child’s interactions may reduce risk but restricts learning. AI systems similarly become epistemically constrained when shielded from complex or uncomfortable data.
7. Making laws = Engineering fairness through outputs
Legal systems aim to reduce harm but cannot retroactively change societal structures. Likewise, adjusting model outputs to appear fair does not alter systemic bias—it hides it.
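The last analogy, engineering fairness through outputs, can be sketched in code. The toy scores below are hypothetical, and the technique shown (group-specific decision thresholds, a common post-processing approach) is one possible instance, not a claim about any particular system: the published decisions equalize, while the disparity in the underlying scores is untouched.

```python
# Hypothetical sketch: post-processing model outputs to equalize acceptance
# rates between two groups. Only the decision thresholds change per group;
# the scores, and the data that produced them, are untouched.
scores_a = [0.2, 0.4, 0.6, 0.8, 0.9]   # model scores for group A (synthetic)
scores_b = [0.1, 0.2, 0.3, 0.5, 0.7]   # systematically lower for group B (synthetic)

def acceptance_rate(scores, threshold):
    """Fraction of scores at or above the decision threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

# A single threshold yields unequal acceptance rates: 0.6 vs 0.4.
print(acceptance_rate(scores_a, 0.5), acceptance_rate(scores_b, 0.5))

# Group-specific thresholds make the outputs "fair": 0.6 vs 0.6.
print(acceptance_rate(scores_a, 0.5), acceptance_rate(scores_b, 0.3))

# ...but the systemic disparity in the underlying scores is unchanged.
gap = sum(scores_a) / len(scores_a) - sum(scores_b) / len(scores_b)
print(f"mean score gap: {gap:.2f}")
```

The adjusted thresholds make the visible statistics match, which is precisely the essay's point: the appearance of fairness is engineered at the output while the structural gap persists underneath.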
What XAI Promises—And Why It Fails
XAI suggests that if we can trace a decision, we can trust it. But this is misleading. Transparency is not the same as fairness. XAI makes bias visible—but it cannot make it just.
The core illusion of XAI is that better explanations equal better decisions. But understanding a flawed outcome doesn’t change the fact that it came from a flawed world. Without confronting the realities embedded in our data, XAI becomes not a path to fairness—but a detour around responsibility.