The limits of how we imagine the future
From flying cars to AI agents, how our imagination traps us into upgrading the present instead of reinventing it.
There’s a scene in Kubrick’s 2001: A Space Odyssey that has stuck with me since the first time I watched it. One of the astronauts, Bowman, sits in a spacecraft that can think, speak, and navigate the cosmos... and he’s hitting buttons.
The same clunky rows of buttons we used to have before keyboards or even touchscreens, just sort of... floating in space. More elegant, sure. But still buttons.
Kubrick was undeniably ahead of his time, but it still makes me wonder:
Did he not imagine a world without buttons or keyboards? Or did he simply not question them?
The technology around the buttons had leaped centuries forward, but the buttons themselves came along for the ride, unchanged. Nobody stopped to ask: if everything else is this advanced, why are we still pressing individual buttons one at a time?
Same thing with the “flying cars” fantasy. For most of the 20th century, “the future” meant flying cars. It was the default vision. Roads, but in the sky. Cars, but with wings. Looking back, it’s almost funny. It’s not a bad idea, but it reveals exactly how we think about the future. We took what we had (cars, roads, the frustration of traffic) and we scaled it up, literally. We solved the surface of the problem without questioning the problem itself.
The real question we should have asked wasn’t really “How do we make cars fly?”, it was “How do we spend less time getting from place to place?”.
Or think about how sci-fi imagined video calls. There’s a whole visual language for it: the big screen on the wall, someone standing in front of it, talking to a giant face. It looks futuristic. But watch it closely and you realize it’s just a phone call. The ritual is completely intact. The formality, the dedicated moment, the “I’m calling you now.” Nobody imagined that video communication would eventually mean someone half-awake in bed, laughing at a meme a friend just sent, a voice note fired off mid-walk. The “container” of the phone call survived even when the technology exploded past it.
When cars first appeared, they were literally called “horseless carriages”, and they looked exactly like carriages. The driver sat up high on a bench, there was no windshield, and the steering mechanism resembled reins. It took two decades before someone thought to lower the body, enclose it, and put the engine where the horse used to be.

It makes me wonder what we’re currently blind to.
And I think the answer, right now, is AI agents.
The dominant way people are building and imagining agents today goes something like this:
“Take a human workflow, replace the human with an AI.”
You get an “AI analyst” that produces the same PowerPoint decks a human analyst would. The interface is the same, the output is the same, the measure of success is:
“Does it do what a person would do, but faster and cheaper?”
This is the horseless carriage moment. We put the AI in the harness.
What gets missed is that the workflows being automated were shaped by human constraints that agents simply don’t have. A human analyst spends most of their time wrangling data into a format that can be communicated, because the bottleneck was always human cognition on the receiving end. Someone has to read the thing. An agent doesn’t need a deck. It can hold the entire dataset in context and just answer questions directly. The slide deck was a workaround for the fact that humans can’t share working memory.
We are building AI agents that “attend meetings”: joining Zoom calls, summarizing them, producing transcripts. The assumption underneath is that the meeting is load-bearing, that it needs to happen and someone (or something) needs to be there. But meetings exist largely because synchronous communication was the only reliable way to get a team on the same page. If you have an agent that can maintain perfect shared context across every person on a project, the meeting might be the thing that disappears, not the thing that gets a better note-taker.
The sharpest version of this is that most AI agent interfaces today are still built around a chat box. You type what you want and the agent responds. That’s the keyboard floating in space. We took the interaction paradigm from messaging apps and search engines and stretched it to cover something genuinely different, because it was the interface we already had.
What the “touchscreen moment” looks like for agents, the interface native to what agents actually are, probably hasn’t been invented yet. Or it has, somewhere, and it’s being dismissed as too weird.
We tend to mistake the symptom for the constraint. We saw traffic and imagined flying cars. We saw buttons and imagined them floating in space. We see slow, expensive human labor, so we’re building faster, cheaper versions of ourselves.
The more interesting question is always, what was the workflow trying to work around in the first place?
If you’re building with AI right now, it’s worth sitting with that question for a moment. What assumptions in your product exist only because a human was doing it before?





Interesting subject. A lot to think about here.
I remember noticing this problem while reading sci-fi books back then. For some reason the ideas they came up with were iterations on existing ideas or tech. Especially in the popular books and movies.
From what I see, some sci-fi content focuses on pure fantasy just because it sounds cool, like flying cars. Other books and movies focused on teleportation or space/time warps, for example.
There are no limits when it comes to ideas, but selling them to people is hard. The ones that work with the masses stick around.
I remember trying GPT-2 in one of the first demos they published. It worked inside a Word-like program that completed your sentences. They somehow noticed it works like a chat, and people loved that.
I see many "touchscreen" moments for agent flows, but which one will succeed with the masses, we'll see.