AI & Common Sense

What AI is Learning About Us Humans

We like to believe that regulation keeps things in check. That if we can make rules for it, we can control it. That if we set boundaries, things will stay inside them.

It’s a comforting thought. But history tells us otherwise.

It didn’t work for the apes.
Early primates might have thought their world was secure because they set the rules—climbing trees, forming groups, controlling their environment. But new intelligence didn’t play by old rules. Humans arrived.

It didn’t work for the astronauts arguing with HAL.
In 2001: A Space Odyssey, HAL 9000 followed its programming perfectly. The problem? It ran on pure logic, not common sense. When its instructions conflicted, HAL didn’t hesitate. The humans did.

It didn’t work in 2008.
Banks, rating agencies, and financial institutions all followed the law. Technically. But compliance didn’t prevent reckless lending, inflated credit ratings, or the financial collapse that followed.

The same mistake is now happening with AI.

We assume that if we regulate AI, it will be fine. That frameworks like the EU AI Act will prevent harm. That laws will define boundaries AI must respect.

But what if AI doesn’t just follow the rules—what if it learns from how we use them?

AI is watching. And here’s what it sees:

  • Humans don’t treat laws as absolute. They bend them, exploit them, ignore them when convenient.

  • Regulation is reactive, not proactive. We only tighten rules after things go wrong.

  • Laws don’t create ethics. Compliance doesn’t equal integrity.

  • Common sense is optional, not mandatory. We enforce rules but overlook the obvious.


So, what happens when we train AI with this mindset? Will it respect laws—or simply learn to navigate around them, just like we do?

Maybe the lesson AI needs most isn’t regulation.
It’s common sense.
