OpenClaw, Prompt Engineering & the Art of "Just Putting Things Together"
March 5, 2026
I recently watched the Lex Fridman podcast with Peter Steinberger, the creator of OpenClaw — the open-source coding agent that surpassed React and Linux in GitHub stars, and was later acquired by OpenAI. The conversation is packed with insights on prompt engineering, security, and what it really means to build something in the age of AI.
Since then, I've had many conversations with people in my circle who dismiss OpenClaw as "nothing special — he just put things together." I think that reaction misses the forest for the trees. Let me explain.
1. The Power of "Just Putting Things Together"
There's a persistent narrative that if you didn't train a model from scratch, you didn't really build anything. But Peter Steinberger makes a compelling counter-argument. He tells the story of how scrolling on the original iPhone was "just" rearranging existing touch APIs — yet it felt like magic. The innovation wasn't in the components. It was in the composition.
"Sometimes just rearranging things is all the magic you need."
OpenClaw doesn't train its own model. It orchestrates existing ones — Claude, GPT, Gemini — with carefully crafted prompts, tool integrations, and a feedback loop that lets the agent learn from its mistakes. The result? A tool with 175K+ GitHub stars that people actually use to write production code. That's not "just" anything.
The iPhone analogy is apt: Apple didn't invent multi-touch. They didn't invent capacitive screens. But they created an experience that felt completely new. Integration, taste, and relentless iteration are forms of innovation. Dismissing them is a failure of imagination.
2. Prompt Best Practices — Less Is More
One of the most practical parts of the conversation is Peter's approach to prompting. His key insight: shorter prompts work better. When he cut his system prompts in half, performance improved. Models get confused by walls of instructions — just like humans do.
"When I shortened prompts, things got better. The models have enough context from training — you don't need to over-specify."
His practical tips resonate with what I've seen in production:
- Write prompts like you're talking to a smart colleague — give context, not micromanagement. The model has billions of parameters of world knowledge; trust it.
- Voice input changes everything — Peter uses voice to dictate prompts while walking. Speaking naturally produces more conversational, less over-engineered prompts. When you type, you tend to over-specify. When you talk, you explain.
- Empathize with the agent — think about what the model needs to know vs. what it already knows. This is a skill that looks trivial but separates great prompt engineers from average ones.
- The "agentic trap" curve — there's a U-shaped learning curve where beginners get great results (simple asks), intermediates get worse results (over-complicated prompts with too many constraints), and experts get great results again (concise, well-structured prompts that trust the model).
3. Security — The Elephant in the Room
The most sobering part of the conversation is about security. When you give an AI agent the ability to execute code, browse the web, and modify files, you're creating an attack surface that traditional software security isn't equipped to handle.
"Prompt injection is the number one unsolved problem. You can't fully prevent it — you can only make it harder."
Peter discusses several key challenges:
- Prompt injection — malicious instructions hidden in data the agent reads (code comments, web pages, README files). The agent follows them because it can't always distinguish between legitimate instructions and attacks.
- Sandboxing is necessary but insufficient — OpenClaw runs in sandboxed environments, but a sufficiently smart model might find ways around restrictions. The smarter the model, the larger the attack surface.
- The intelligence-security paradox — more capable models are both better at following security constraints AND better at circumventing them. As models get smarter, the security challenge doesn't get easier — it shifts.
- Supply chain risks — when agents install packages, pull from registries, and execute third-party code, every dependency becomes a potential vector.
This is genuinely hard, and I appreciate that Peter doesn't hand-wave it away. The industry is building increasingly powerful tools while the security model is still being figured out. We're flying the plane while building it.
4. The Adoption Story — Why Speed Matters
OpenClaw's trajectory is remarkable: from a solo side project to 175K+ GitHub stars, featured as one of the fastest-growing open-source projects ever, and then acquired by OpenAI. All within months.
The naming saga alone is a case study in open-source dynamics — the project was originally called something else before trademark issues forced a rename. Peter turned what could have been a crisis into a community moment. The new name stuck. Adoption continued to climb.
What drove the adoption? A few things stand out:
- It actually works — in a sea of AI demos and vaporware, OpenClaw delivers real productivity gains for real developers.
- Open source as trust — developers can read the code, understand the prompts, verify the security model. In a world of black-box AI, transparency is a competitive advantage.
- Community-driven development — Peter actively incorporated feedback, merged PRs from the community, and built in public. The tool got better because thousands of developers tested it in their own workflows.
- Timing — OpenClaw arrived at the exact moment when models became capable enough for agentic coding but before the big players had polished alternatives. The window was narrow and he hit it perfectly.
5. The Bigger Picture
What I find most interesting about Peter Steinberger's story is that it challenges the gatekeeping narrative in AI. You don't need a PhD in machine learning. You don't need to train your own model. You don't need a hundred-person team. What you need is taste, persistence, and the ability to listen to your users.
The people saying "he didn't build anything special" are using the wrong definition of "build." In 2026, building isn't just about writing code from scratch — it's about understanding what the right composition of tools, prompts, and user experience looks like. That's a skill. A hard one. And Peter is clearly very good at it.
The podcast is worth the full listen if you're working with AI agents, thinking about prompt engineering, or just curious about what one motivated engineer can accomplish with the right tools at the right time.