When I first started working with Claude, it felt like I had unlocked an entirely new way of working. Suddenly, tasks that would normally take multiple designers, strategists, and thinkers weeks to complete were being done in minutes. The speed was almost uncomfortable. The depth of knowledge, the structure, the quality of the outputs: it genuinely felt like I had access to an incredibly capable teammate who could think, write, analyze, and execute at scale.
And then came skills.
That’s when things really changed. The idea that you could build repeatable workflows, essentially creating specialized agents with deep contextual understanding, meant you could run multiple streams of work in parallel. While one task was generating, you could move on to another. It felt like maximizing output in a way that simply wasn’t possible before.
But very quickly, the reality set in.
The power is real, but so is the oversight.
Claude is insanely powerful. There’s no debating that.
What would traditionally take weeks of coordination, thinking, iteration, and execution can now happen in minutes. That alone changes how we approach work.
But power without control is dangerous. Every output needed to be reviewed. Every insight needed to be validated. Every assumption needed to be questioned.
Claude, on its own, could not be trusted blindly.
Prompting is everything (and it’s not what you think).
The biggest learning for me was prompting. Not in the surface-level sense of "write better prompts", but in understanding that Claude does not think like a human.
Each generation had value. Each output had merit. But very rarely was it exactly what I needed.
Even with structured templates or highly detailed instructions, I realized something important:
the same prompt will not behave the same way across different chats.
That forced me to become extremely explicit. Painfully explicit.
- Define exactly what is required
- Define what is not allowed
- Specify validation methods
- Break down how outputs should be structured
And even then, it still needed iteration.
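The four parts above can be sketched as a small prompt-assembly helper. Everything here is hypothetical and purely illustrative: the `build_prompt` function, the section names, and the example items are mine, not part of any real library or of Claude's API.

```python
# Hypothetical sketch: assembling one explicit prompt from the four parts above.
# None of these names come from a real library; they only illustrate the structure.

def build_prompt(required, not_allowed, validation, output_structure):
    """Combine explicit instructions into a single prompt string."""
    sections = [
        ("Required", required),
        ("Not allowed", not_allowed),
        ("Validation", validation),
        ("Output structure", output_structure),
    ]
    parts = []
    for title, items in sections:
        parts.append(f"## {title}")
        parts.extend(f"- {item}" for item in items)
    return "\n".join(parts)

prompt = build_prompt(
    required=["A competitor analysis for product X"],
    not_allowed=["Unsourced claims", "Invented statistics"],
    validation=["Every claim must cite a URL"],
    output_structure=["Summary first, then a table of findings"],
)
```

The point is not the code itself but the discipline it encodes: every prompt states what must happen, what must not, how to check the result, and what shape the output takes.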
The unexpected trade-off: Time.
Ironically, what started as a tool to save time ended up doing the opposite.
I found myself:
- Running prompts
- Waiting for generations
- Reviewing outputs
- Adjusting prompts
- Running them again
Over and over again.
Instead of working less, I was working longer. Not because Claude wasn’t capable, but because I had to constantly guide, correct, and refine its outputs.
At some point, it genuinely felt like I was babysitting my laptop.
Memory, context, and the problem of drift.
Another major challenge was memory and context retention. Over time, chats would:
- Lose the original brief
- Introduce inconsistencies
- Drift away from the intended direction
The best workaround was to segment tasks into separate chats, each with a clear and focused objective. It wasn’t elegant, but it was necessary.
The danger of ambition.
Because Claude is so capable, it naturally pushes you to aim higher. You start thinking: "What else can I get it to do?"
And before you know it, your scope becomes extremely ambitious. But here’s the problem: If you don’t fully understand what you’re asking it to generate, you’re setting yourself up for failure.
Claude can produce convincing outputs. Very convincing. But without your own knowledge and expertise, you won’t be able to:
- Spot inconsistencies
- Challenge flawed logic
- Validate accuracy
Our skills as designers and thinkers aren’t replaced; they’re more important than ever.
Assumptions: the silent problem.
One of the biggest issues I encountered was assumptions. Claude will:
- Infer
- Generalize
- Fill in gaps
Based on patterns, data, and likely scenarios. But in my work, we cannot afford assumptions. Every claim must be:
- Verified
- Backed by evidence
- Linked to actual sources
- Supported with screenshots and URLs
To get Claude to operate this way, I had to explicitly tell it "make zero assumptions".
Every single time.
And then go even further:
- Define how validation should happen
- Specify what qualifies as proof
- Force it into a structured verification process
It became repetitive. Monotonous. Necessary.
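That verification process can be sketched as a trivial gate over the generated claims. The `unverified_claims` function and the claim shape are invented for illustration; the real work (checking the linked evidence) still falls to a human.

```python
# Hypothetical sketch of the "zero assumptions" gate: every claim must carry
# a linked source before it is accepted. The claim dictionaries are invented.

def unverified_claims(claims):
    """Return the claims that lack a linked source and therefore need rework."""
    return [c for c in claims if not c.get("source_url")]

claims = [
    {"text": "Competitor A raised prices in 2023",
     "source_url": "https://example.com/pricing"},
    {"text": "Users prefer dark mode",
     "source_url": None},  # an assumption with no evidence attached
]

flagged = unverified_claims(claims)  # the dark-mode claim gets sent back
```

A check this simple only catches missing sources, not bad ones; it narrows the human review to the claims that definitely fail the bar.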
Endless generations, no natural finish line.
Another unexpected realization: there is no natural stopping point. Claude will keep generating. Refining. Expanding. Improving. Forever.
Unless you define:
- What “done” looks like
- What level of quality is acceptable
- When to stop
Without that, you can easily fall into an endless loop of "Just one more iteration".
Learning to set limits became critical.
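Those limits can be made explicit as a stopping rule: a quality bar plus a hard iteration cap, so "just one more iteration" has a defined end. This is a minimal sketch under my own assumptions; `generate` and `score` stand in for whatever regeneration and review process you actually use.

```python
# Hypothetical sketch of an explicit stopping rule: a fixed quality bar plus a
# hard iteration cap. The generate/score callables are placeholders.

def refine_until_done(generate, score, quality_bar=0.9, max_iterations=5):
    """Regenerate until the output clears the bar or the cap is hit."""
    best_output, best_score = None, float("-inf")
    for _ in range(max_iterations):
        output = generate()
        s = score(output)
        if s > best_score:
            best_output, best_score = output, s
        if s >= quality_bar:
            break  # "done" is defined, so we stop here
    return best_output, best_score

# Toy demo: drafts improve each round; the loop stops at the first passing one.
drafts = iter([0.5, 0.7, 0.95, 0.99])
out, s = refine_until_done(lambda: next(drafts), score=lambda x: x)
# → (0.95, 0.95): the bar is met on the third draft, so the fourth never runs
```

The cap matters as much as the bar: even if nothing ever clears the quality threshold, the loop still terminates with the best attempt so far.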
Claude is not a tool; it’s a coworker.
The best way I can describe Claude is this: it’s not a tool. It’s a coworker.
A very knowledgeable one. A very fast one. But still… a new employee that needs:
- Clear instructions
- Context
- Supervision
- Feedback
It will make mistakes. It will misunderstand things. It will require guidance. And over time, with the right direction, it improves.
But it is not self-sufficient.
Where we stand today.
A year ago, none of this would have been possible. Even a few months ago, not at this level. And it’s only going to get better. Many of the pain points I’ve experienced will likely disappear as models evolve:
- Better memory
- Fewer assumptions
- More reliable outputs
But today, in this moment, one thing is clear: none of this works without human input.
The bigger picture.
We are not replaceable by AI. Not yet. And maybe not ever in the way people fear. The real concern is something else. Future generations may never develop the depth of understanding we were forced to build:
- Breaking down problems manually
- Thinking through systems
- Validating every step
If AI starts doing all the thinking, that layer of understanding disappears. And that is a much bigger risk.
