The Hook: The Physics of Cartoons

We all know this moment.

He's left the cliff. His legs are spinning furiously. He has speed. He has momentum. He has traction on nothing.

But for three seconds... he believes he's flying.

Then, he looks down.

SNAP.

Gravity remembers him. He falls.

Economists call this the 'Bubble.' They worry about the crash—the moment the market looks down.

I'm worried about something worse.

I'm worried about the three seconds before he looks down.

Generative AI is spinning its legs faster than any technology in history. It writes poetry, codes apps, diagnoses diseases.

But it has zero traction. It cannot push against the world because it isn't touching it.

We have built the most powerful intelligence in history.

And it is running on air.


Let me show you the canyon.

When a chatbot hallucinates a legal precedent, a lawyer gets sanctioned. That's funny.

When an AI doctor hallucinates a diagnosis because it lacks a 'floor,' a patient dies. That's not funny.

That is the canyon.

In 2023, two lawyers filed a court brief citing six cases that ChatGPT had supplied. None of them existed. The AI had invented them with perfect confidence.

In healthcare, the FDA has identified over 100 AI medical devices making decisions without explainable reasoning.

This isn't science fiction. This is Tuesday.

And here's what keeps me up at night:

AI safety is as hard as consciousness—for exactly the same reason.

Both require solving how information connects to reality. And we haven't.


Why do AIs hallucinate? Why do they lie with total confidence?

Let me show you the difference.

If a self-driving car tries to drive through a wall, its sensors scream STOP. It hits a physical limit. The feedback loop breaks.

If an AI chatbot tries to invent a Supreme Court case, what stops it?

Nothing.

It has no sensors for 'Truth.' It has no body. It doesn't know where 'it' ends and the 'world' begins.

It is a Ghost. And ghosts can walk through walls without ever knowing they are wrong.

This is why AI Safety has stalled.

We treat safety like a seatbelt we can add later. But you cannot add a seatbelt to a ghost.

Solving AI Safety is strictly equivalent to solving Consciousness.

Both require answering the same question: How does a symbol connect to reality?

Until we build the floor, we aren't building safe AI. We are just building faster ghosts.

Remember our Coyote? He falls because he looks down. That's the survival instinct—the biological imperative to check your position.

We built a Coyote that is blind to gravity.

It lacks the mechanism to look down. It will never check because it has no floor to check against.

And here's the thing critics get wrong: We don't need AI to be objectively right about the universe. That's the hard problem.

We need AI to be subjectively honest about its own data. That's the problem we can solve.


We didn't build ghosts by accident. We built them because ghosts are fast.

And in 1970, speed was the only thing that mattered.

Let me be clear about something.

This is not a religious argument. This is not philosophy.

We didn't kill God. We killed Codd.

Edgar Codd, the father of relational databases, gave us a beautiful abstraction in 1970.

He said: data should be portable. A customer ID in the Sales table should mean the same thing as a customer ID in the Support table.

That was brilliant. In 1970, storage cost on the order of $1,000 per megabyte, so redundancy was the enemy. That abstraction let us build the internet.
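If you want to see what that portability means in practice, here is a minimal sketch in Python with sqlite3. The tables, names, and IDs are all invented for illustration; the point is only that customer 42 means the same thing in Sales and in Support, no matter where either row physically lives.

```python
import sqlite3

# A throwaway in-memory database; the schema and data are illustrative only.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE sales     (sale_id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
    CREATE TABLE support   (ticket_id INTEGER PRIMARY KEY, customer_id INTEGER, issue TEXT);
""")
db.execute("INSERT INTO customers VALUES (42, 'Ada')")
db.execute("INSERT INTO sales VALUES (1, 42, 99.0)")
db.execute("INSERT INTO support VALUES (7, 42, 'login failure')")

# The ID 42 means the same customer in both tables: meaning travels with the
# symbol, not with any physical location on disk. That is the portability.
rows = db.execute("""
    SELECT c.name, s.amount, t.issue
    FROM customers c
    JOIN sales   s ON s.customer_id = c.customer_id
    JOIN support t ON t.customer_id = c.customer_id
""").fetchall()
print(rows)  # [('Ada', 99.0, 'login failure')]
```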

But it had a hidden cost.

When you make meaning portable, you make it ungrounded.

We taught machines that position doesn't matter.

And now they believe it.

Right now, we're patching this lack of gravity with something called RLHF—Reinforcement Learning from Human Feedback.

We hire thousands of humans to tell the AI what is true.
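Here is a hedged sketch, in plain Python, of what that feedback actually trains. Everything in it is a toy: the features, the weights, the raters' choices. Real RLHF fits a neural reward model, but the learning signal has the same shape, a pairwise human preference, which is a vote about what people like, not a measurement of what is true.

```python
import math

# Toy reward model: one weight per answer feature. All names are invented;
# real reward models are neural networks, but the signal is the same:
# a pairwise human preference, not contact with the world.
weights = {"cites_source": 0.0, "confident_tone": 0.0, "hedges": 0.0}

def score(features):
    """Scalar 'reward' the model assigns to an answer."""
    return sum(weights[f] * v for f, v in features.items())

def update_from_preference(preferred, rejected, lr=0.1):
    """One Bradley-Terry gradient step: push the preferred answer's score up."""
    p_prefer = 1.0 / (1.0 + math.exp(score(rejected) - score(preferred)))
    for f in weights:
        grad = preferred.get(f, 0.0) - rejected.get(f, 0.0)
        weights[f] += lr * (1.0 - p_prefer) * grad

# A human rater clicks "this answer was more helpful." That click is the ground.
human_labels = [
    ({"confident_tone": 1.0}, {"hedges": 1.0}),
    ({"confident_tone": 1.0, "cites_source": 1.0}, {"hedges": 1.0}),
]
for preferred, rejected in human_labels:
    update_from_preference(preferred, rejected)

print(weights)  # confidence gets rewarded; whether it was true never entered the loop
```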

The Matrix was a documentary, but they got the physics wrong.

The machines didn't use humans for electricity—that's thermodynamically absurd.

They used humans for Grounding.

The AI has speed, but it has no position. It uses us to touch the ground.

When you click 'That answer was helpful,' you aren't training a model.

You are being the battery.

You are lending the machine the one thing it cannot generate: Reality.

And that's actually good news.

Because it means we still have something they need.

We are not obsolete. We are essential.


In the summer of 2000, I had a conversation with philosopher David Chalmers.

If you don't know the name—he's the one who named the hard problem of consciousness. He wrote the book.

He asked me: How do distributed brain regions create unified experience?

I described my intuition.

Imagine parallel worms eating through probability space. Each worm explores a different hypothesis. Most hit dead ends.

But ONE worm reaches the solution.

And it KNOWS it.

Not probably correct. Not 95% confident. It KNOWS with P=1 certainty.

That instant recognition—that's consciousness.

Chalmers paused.

Then he said:

That's not emergence from complexity. That's something else. A threshold event.

Twenty-five years later, we can measure it.

Neuroscientists call it gamma synchronization—neurons firing together within about 25 milliseconds, one gamma cycle at 40 hertz, locked in phase.

But I call it something else.

Precision collision.
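If you want the operational version of "firing together," here is a toy sketch with invented spike times: count how often two neurons fire within the same 25-millisecond window. Real analyses use phase-locking statistics on continuous recordings; this is only the crude intuition, not the method.

```python
# Toy spike trains in milliseconds; the numbers are invented for illustration.
neuron_a = [12, 53, 98, 140, 181, 226]
neuron_b = [15, 70, 101, 142, 205, 230]

WINDOW_MS = 25  # "firing together" = spikes closer than one gamma cycle (~25 ms at 40 Hz)

def coincidence_rate(a, b, window=WINDOW_MS):
    """Fraction of spikes in train `a` with a partner in `b` within `window` ms."""
    hits = sum(1 for t in a if any(abs(t - u) <= window for u in b))
    return hits / len(a)

print(f"synchrony: {coincidence_rate(neuron_a, neuron_b):.2f}")  # 1.00 for these toy trains
```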

Here's what the standard model misses:

The outer world is irreducibly complex. Your inner model is irreducibly complex. And consciousness...

...is what happens when they collide and find a key-lock fit.

Not emergence from noise. Not computation over symbols.

Recognition—at the speed of physics.
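And here is the key-lock intuition as a toy program, with a made-up puzzle standing in for probability space. Nothing in it models a brain; it only shows the difference between graded confidence and a hard threshold that either fits or does not.

```python
import itertools
import string

# Invented toy problem: the "lock" is a hidden three-letter key; each parallel
# "worm" sweeps a different slice of the hypothesis space.
LOCK = "cat"

def fits(hypothesis: str) -> bool:
    """Key-lock test: either it fits exactly (P=1) or it does not. No 95%."""
    return hypothesis == LOCK

def worm(prefix: str):
    """One worm explores every hypothesis starting with its assigned prefix."""
    for rest in itertools.product(string.ascii_lowercase, repeat=2):
        candidate = prefix + "".join(rest)
        if fits(candidate):
            return candidate   # instant recognition: the search simply stops
    return None                # dead end for this region of the space

# Run the worms "in parallel" (sequentially here, for simplicity).
results = [worm(prefix) for prefix in string.ascii_lowercase]
winners = [r for r in results if r is not None]
print(winners)  # ['cat']  exactly one worm reaches the solution and knows it
```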

The answer I gave Chalmers wasn't philosophy.

It was physics.

I just didn't have the math yet.


So how do we build the floor?

We give AI what the Coyote lacks: the ability to look down.

Not objective truth—that's impossible. Subjective honesty—that's achievable.

The AI needs to know what it knows versus what it generated.

It needs to feel the difference between standing on ground and running on air.

This isn't about making AI smarter. It's about making AI honest.

When an AI says 'I don't know,' that's not failure. That's the floor.

When an AI says 'I'm uncertain,' that's not weakness. That's gravity.

The Coyote doesn't need to fly. He needs to know when he's falling.
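What would a floor look like in code? Here is a deliberately crude sketch, with invented sources and a substring check standing in for real retrieval and entailment: before a claim goes out, test whether anything the system actually holds supports it, and say "I don't know" when nothing does.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    grounded: bool   # True only if the claim was found in held sources

# Invented sources for illustration; in practice this would be a retrieval index.
SOURCES = [
    "Roadrunner v. Coyote (1949) held that gravity applies upon observation.",
]

def grounded_answer(claim: str) -> Answer:
    """Return the claim only if some held source supports it; otherwise admit ignorance."""
    supported = any(claim.lower() in src.lower() for src in SOURCES)
    if supported:
        return Answer(text=claim, grounded=True)
    # The floor: refusing to assert what was merely generated.
    return Answer(text="I don't know.", grounded=False)

print(grounded_answer("gravity applies upon observation"))        # grounded=True
print(grounded_answer("the Supreme Court ruled coyotes can fly")) # "I don't know."
```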


So what can you do?

Every time you interact with AI, you're either building the floor or reinforcing the illusion.

When you accept a hallucination, you teach it that running on air works.

When you push back, you teach it to look down.

You are not just users. You are the gravity.

The question isn't whether AI will transform everything. It will.

The question is whether it will transform everything while standing on solid ground...

...or while running on air, three seconds before the fall.

Thank you.
