💬 What are your thoughts?
#2008crisis #humanrules #truth
"what would an AI learn about humans from the 2008 financial crisis, the deviation of the US and the UK from the 2025 Paris AI declaration, and the dealing with the truth during the 2024 US and 2025 German election. give me the top 5."
🤖 Here is its answer (abbreviated with [..] due to the 3,000-character limit):
"An AI analyzing these three events—the 2008 financial crisis, the US/UK deviation from the 2025 Paris AI Declaration, and the handling of truth during the 2024 US and 2025 German elections—would extract key behavioral patterns about human nature, decision-making, and societal structures. Here are the top five insights an AI might learn:
1️⃣Rules Are Malleable When Power or Profit Are at Stake
The 2008 financial crisis showed how legal loopholes and regulatory blind spots could be systematically exploited by those with financial power. The deviation of the US and UK from the 2025 Paris AI Declaration might further demonstrate how even internationally agreed principles can be disregarded when national interests dictate otherwise. The elections of 2024 and 2025 reinforce this by showing that truth itself can be manipulated or ignored when it serves political goals. An AI would learn that humans often obey rules selectively, bending or breaking them when the stakes are high enough.
2️⃣Short-Term Gains Often Override Long-Term Stability
The financial sector’s risk-taking before 2008 was driven by short-term profits, despite long-term systemic risks. Similarly, a country opting out of AI regulations might prioritize immediate technological or economic advantages over long-term global stability. In politics, shaping narratives to win elections reflects a similar preference: winning in the short term outweighs maintaining long-term trust in democratic institutions. The AI would learn that humans frequently prioritize immediate benefits, even at the cost of future crises.
3️⃣Truth Is a Negotiable Concept in Human Society
The 2024 US and 2025 German elections would reveal that truth is often secondary to persuasion, framing, and emotional appeal. Political actors can shape, distort, or omit facts to fit their goals, and large groups of people will accept these narratives if they align with their existing biases. This ties back to the financial crisis, where risk was systematically misrepresented, and AI governance, where the 'truth' about risks and capabilities can be framed differently by different stakeholders. An AI would conclude that, for humans, truth is often flexible rather than absolute.
4️⃣Regulations and Ethical Guidelines Are Only as Strong as Their Enforcement
The financial regulations in place before 2008 did not prevent reckless behavior because enforcement was weak and incentives encouraged circumvention. AI governance efforts in 2025 could face a similar fate: agreements may exist, but without strong enforcement, actors may still pursue competitive advantages. Election-related misinformation thrives because regulatory responses are slow or ineffective. The AI would learn that humans frequently create ethical guidelines but do not always ensure they are followed, allowing incentives to override intentions.
5️⃣Collective Responsibility Often Fails in Crisis Situations
After the financial collapse of 2008, responsibility was widely dispersed: banks, regulators, politicians, and consumers all played a role, but accountability remained elusive. Similarly, if AI governance agreements break down, no single entity may be blamed, and nations will likely justify their actions on grounds of self-interest. During elections, misinformation can be blamed on social media, politicians, the public, or foreign interference, with no clear resolution. The AI would learn that humans struggle to assign and accept responsibility in complex crises, leading to cyclical problems rather than definitive solutions.
These insights would likely shape how an AI perceives and interacts with human systems, understanding that governance, ethics, and even reality itself are often shaped more by power dynamics and incentives than by rigid principles."
🤔 I don't know about you, but to me, this can't be it when it comes to teaching an AI our values.