You asked ChatGPT a sharp and challenging question:
“What would an AI learn about humans from the 2008 financial crisis, the US and UK’s refusal to sign the 2025 Paris AI Declaration, and the handling of truth during the 2024 US and 2025 German elections? Give me the top 5.”
Here’s a condensed version of what it answered:
1️⃣ Rules Are Malleable When Power or Profit Is at Stake
From financial loopholes to geopolitical AI maneuvering to election distortions, rules are often bent when the incentives are high enough.
2️⃣ Short-Term Gains Override Long-Term Stability
Whether it’s taking financial risks, deregulating AI for advantage, or manipulating electoral narratives, humans often sacrifice the future for immediate wins.
3️⃣ Truth Is Negotiable
Elections show truth is shaped to fit emotional, partisan, or cultural needs. AI would observe that for humans, truth bends under the weight of persuasion and bias.
4️⃣ Enforcement Is Weak Without Incentives
Regulations without teeth fail, whether in finance, AI governance, or the fight against misinformation. Guidelines often exist more for appearance than action.
5️⃣ Collective Responsibility Is Rarely Owned
In crises, blame disperses: banks, regulators, platforms, voters—no one claims full responsibility, and thus problems repeat.
Why This Answer Matters—And Why It’s Unsettling
What this reveals isn’t just how AI perceives humanity. It shows what humanity is currently teaching AI.
If these are the patterns we reward—selective ethics, short-termism, and avoidance of responsibility—AI will not inherit our best values. It will inherit our worst incentives.
The real question isn’t whether this analysis is accurate (it largely is).
The question is: Do we want AI learning this from us?
Your Reflection
“Don’t know about you, but to me, this can’t be it, when it comes to teaching an AI our values.”
Exactly. If we don’t model accountability, resilience, truthfulness, and foresight, AI won’t magically develop them.
AI learns from us.
Right now, we are teaching it how to game systems, avoid blame, and optimize for short-term advantage.
If that’s the future we don’t want, the correction won’t come from AI—it has to come from us.
