Introduction
A new label has entered the AI-assisted programming vocabulary: vibe engineering. Coined as a deliberately cheeky counterpoint to “vibe coding,” the term describes the more disciplined, accountable practice of accelerating professional software work with LLMs and coding agents while retaining clear ownership of design, quality and maintainability. The idea surfaced in a 7 October 2025 post by Simon Willison and ties into recent shifts in how developers use agentic tools.
What vibe engineering means
Vibe engineering sits at the opposite end of the spectrum from fast, prompt-driven experiments that prioritize quick results over traceability and robustness. It treats LLMs and coding agents as powerful collaborators that can be orchestrated, audited and guided to produce production-quality software. The workflow emphasizes planning, specification, testing and review, rather than simply accepting whatever code an LLM returns.
Why this is different from vibe coding
Where vibe coding delegates small, low-stakes tasks to an LLM and accepts the output if it looks plausible, vibe engineering expects the human participant to remain accountable. The role shifts from pure code authoring to:
- researching approaches and selecting the right architecture,
- writing clear specifications and success criteria,
- designing agentic loops and QA plans,
- performing extensive code review and manual testing,
- and managing multiple simultaneous agents when appropriate.
Recent tooling has pushed this forward: coding agents such as Claude Code (February 2025), OpenAI’s Codex CLI (April 2025), and Gemini CLI (June 2025) can all iteratively test and modify code until it meets a goal, increasing the utility of LLMs for real-world problems.
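The core of such a coding agent is a verify-and-retry loop: run the tests, show the failure to the model, apply its proposed change, and check again. A minimal sketch of that idea (the names `propose_patch`, `apply_patch` and `run_tests` are hypothetical stand-ins, not any particular tool's API):

```python
from typing import Callable, Tuple

def agentic_loop(propose_patch: Callable[[str], str],
                 apply_patch: Callable[[str], None],
                 run_tests: Callable[[], Tuple[bool, str]],
                 max_iterations: int = 5) -> bool:
    """Drive an LLM agent until the test suite passes or the budget runs out.

    propose_patch: asks the model for a change, given the last failure output.
    apply_patch:   applies the proposed change to the working tree.
    run_tests:     returns (passed, output) from the project's test suite.
    """
    passed, output = run_tests()
    for _ in range(max_iterations):
        if passed:
            return True
        patch = propose_patch(output)   # the model sees the failing output
        apply_patch(patch)              # in practice, a human reviews before merge
        passed, output = run_tests()    # verify; never trust the model's own claim
    return passed
```

The loop only ever declares success on the strength of the test suite, which is why (as the next section argues) the quality of that suite caps the usefulness of the agent.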
Practices that LLMs actively reward
LLMs and agentic workflows amplify the value of established engineering practices. The most relevant are:
- Automated testing. A robust test suite lets agents iterate quickly and safely; without one, an agent may claim success that nothing has actually verified.
- Planning in advance. High-level design and specification reduce wasted iterations and make handoffs to agents more effective.
- Comprehensive documentation. Documentation lets models use APIs and modules without loading full source files into context.
- Good version control habits. Clear history and the ability to undo changes are crucial when agents modify repositories; agents are also adept at using git tools such as git bisect.
- Effective automation. CI, formatting, linting, and preview deployments make agent-produced changes repeatable and reviewable.
- A culture of code review. Rapid, disciplined review practice is essential when LLMs generate large volumes of changes.
- Practical management skills. Orchestrating agents requires concise instructions, appropriate context provision, and actionable feedback—skills similar to managing human collaborators.
- Strong manual QA. Beyond automated tests, careful manual exploration of edge cases remains important.
- Research skills. Selecting the best approach among many alternatives still requires human judgement.
- Preview environments. Shipping to preview environments enables safe verification before production deployment.
- Judgement about outsourcing. Deciding which tasks are safe to hand off to an agent and which need manual attention is a continuously evolving skill.
- Updated estimation sense. Estimating timelines changes as agents accelerate some tasks while introducing new uncertainties.
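To make the version-control point concrete: `git bisect run` turns regression hunting into a fully scriptable binary search, exactly the kind of mechanical loop that can be delegated to a tool or an agent. A self-contained sketch that builds a throwaway repository with a planted regression and lets git locate it (the file names and commit messages are illustrative):

```python
import pathlib
import subprocess
import tempfile

def git(*args, repo):
    """Run a git command inside the demo repository, capturing its output."""
    return subprocess.run(["git", *args], cwd=repo, capture_output=True, text=True)

repo = tempfile.mkdtemp()
git("init", repo=repo)
git("config", "user.email", "demo@example.com", repo=repo)
git("config", "user.name", "Demo", repo=repo)

app = pathlib.Path(repo, "app.txt")

def commit(line, message):
    """Append a line to the tracked file and commit it."""
    with app.open("a") as f:
        f.write(line + "\n")
    git("add", "app.txt", repo=repo)
    git("commit", "-m", message, repo=repo)

commit("ok", "initial: working")
git("tag", "known-good", repo=repo)
commit("still ok", "feature A")
commit("BUG", "feature B (introduces regression)")
commit("more", "feature C")

# The check script git runs at each bisect step: exit 0 = good, non-zero = bad.
check = pathlib.Path(repo, "check.sh")
check.write_text("#!/bin/sh\n! grep -q BUG app.txt\n")
check.chmod(0o755)

git("bisect", "start", repo=repo)
git("bisect", "bad", "HEAD", repo=repo)
git("bisect", "good", "known-good", repo=repo)
result = git("bisect", "run", "./check.sh", repo=repo)
print(result.stdout)  # bisect's report names the first bad commit
git("bisect", "reset", repo=repo)
```

Once the good and bad endpoints are marked, no human judgement is needed at any step of the search, which is what makes this workflow such a natural fit for agents.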
Working at the top of the game
Extracting real value from LLMs is not a shortcut around solid engineering. Instead, AI tools amplify existing expertise: the more mature the engineering practices and judgement, the greater the speed and quality of results. Some engineers are already running multiple agents in parallel to tackle different problems simultaneously, expanding what a small team can ship—albeit with increased cognitive overhead for coordination and review. For teams, that implies a shift toward roles that combine specification, orchestration and deep review, rather than roles defined by writing every line of new code by hand.
A deliberately mischievous name
The contrast between “vibes” and “engineering” is intentionally self-contradictory. The label is partly a reclaiming of the vibe vocabulary and partly a nudge that serious, accountable work with LLMs is a distinct discipline from casual prompt-driven coding. The name is meant to be provocative rather than definitive, inviting discussion about how to codify best practices for AI-assisted software engineering.
Original post: https://simonwillison.net/2025/Oct/7/vibe-engineering/