From MCP Loadouts to Skills: Why Agents Prefer Lightweight Summaries

Armin Ronacher shifts from embedding MCP tool specs into LLM prompts to lightweight 'skills': short guides that teach agents how to use tools. Skills cut token bloat, ease maintenance, and allow quick updates.

TL;DR

  • Move from MCP loadouts to skills: Armin shifted from materialized MCP tool definitions to lightweight skills that teach agents how to use existing tools.
  • Deferred tool loading: Anthropic’s method declares tools up front and injects their full definitions later via regex discovery; this reduces immediate context bloat but leaves definitions fixed for the session and keeps the cost, cache, and protocol tradeoffs.
  • Skills are concise summaries pointing to a manual or file instead of embedding full tool specs; they teach use of existing tools and yield lighter contexts with better practical outcomes.
  • MCPs via CLI (mcporter) expose commands but still require maintained summaries; API changes or truncated descriptions can break those summaries (Sentry’s switch to a natural‑language query syntax is cited as an example).
  • Agent-written, maintainable skills: local control over the tool wrapper, quick on‑demand updates, and avoidance of heavy token costs; greater MCP protocol stability is needed before dynamic loadouts can match the simplicity of skills.
  • Original source: https://lucumr.pocoo.org/2025/12/13/skills-vs-mcp/

Why Armin moved from MCP loadouts to skills

Armin Ronacher revisits a long-running experiment with MCPs and agent tooling and explains a practical shift: moving away from materialized MCP tool definitions toward lightweight skills — short summaries that teach an agent how to use existing tools. The write-up contrasts Anthropic’s recent experiments with deferred tool loading with the simpler skill approach, and why skills have become the preferred path for many of his workflows. Read the original post for full examples and anecdotes: https://lucumr.pocoo.org/2025/12/13/skills-vs-mcp/

What deferred tool loading does — and why it isn’t a silver bullet

Anthropic’s deferred loading declares tools up front but injects their full definitions later, discovering references to them by regex; the definitions then stay static for the rest of the conversation. That reduces immediate context bloat compared with eager MCP injection, but it still requires significant LLM-API engineering and leaves tool definitions fixed for the session. The model’s reinforcement-learning–driven tool calling remains intact, but the tradeoffs around cost, cache behavior, and protocol constraints persist.
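The mechanism can be sketched roughly as follows. This is an illustrative toy, not Anthropic’s actual implementation; the tool names, signatures, and regex are invented:

```python
import re

# Full tool definitions, kept out of the context until actually needed.
# In a real system these would be complete JSON schemas, not one-liners.
TOOL_DEFS = {
    "search_issues": "search_issues(query: str) -> list[dict]",
    "get_trace": "get_trace(trace_id: str) -> dict",
}

# Declared up front: initially the model only sees the tool names.
DECLARED = list(TOOL_DEFS)

def inject_deferred(model_output: str, context: list) -> list:
    """Inject a tool's full definition the first time the model mentions it."""
    for name in DECLARED:
        if re.search(rf"\b{name}\b", model_output) and TOOL_DEFS[name] not in context:
            context.append(TOOL_DEFS[name])
    return context

context = ["Tools available on demand: " + ", ".join(DECLARED)]
context = inject_deferred("I'll call search_issues to find recent errors.", context)
```

Note that once a definition is injected it stays as-is for the rest of the session, which is why the tradeoffs around stale definitions remain.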

How skills change the game

Skills are concise summaries that point to a manual or file rather than embedding a full tool spec into the prompt. They don’t introduce new tool primitives; instead, they teach the agent how to use the existing tools (bash, etc.) more effectively. In Armin’s experience, this leads to lighter contexts and better practical outcomes than trying to cram long MCP definitions into the system message.
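As a concrete sketch of what such a skill might look like (everything here is invented for illustration, not taken from Armin’s setup), a skill can be a short markdown file with a name, a one-line description, and pointers to the real documentation rather than an embedded spec:

```markdown
---
name: repo-search
description: How to search this codebase efficiently with ripgrep.
---

Prefer `rg` over `grep`:

- `rg -n "pattern" src/` - search with line numbers
- `rg -l "pattern"` - list matching files only

Full options: run `rg --help` or see the ripgrep README.
```

The agent reads a few dozen tokens and then uses the tools it already has (bash, the filesystem), instead of carrying a full tool schema in every prompt.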

MCPs via CLI (mcporter) and where that falls short

Wrapping MCPs behind a CLI (example: Peter Steinberger’s mcporter) exposes MCPs as callable commands, but the LLM needs information about what commands exist and how to use them. That information often must be maintained manually as skills. Frequent changes in MCP server APIs or truncated tool descriptions (to save tokens) can break these summaries — Armin cites Sentry switching to a natural-language query syntax as an instance where eager descriptions became misleading.
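To make the maintenance burden concrete, here is a hypothetical skill summary for a CLI-wrapped MCP. The command names are invented, and the final note is exactly the kind of detail that silently breaks when the upstream API changes, as in the Sentry example:

```markdown
---
name: sentry-cli-tools
description: Query Sentry through the CLI wrapper instead of loading the MCP tool spec.
---

Issues are searched via a wrapped command (hypothetical):

- `sentry-tool search "<query>"` - find matching issues

Note: queries are now plain English; the old `field:value` syntax no longer works.
```

Someone, or some agent, has to notice the upstream change and update that last line; otherwise the summary actively misleads the model.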

Going forward: agent-written, maintainable skills

Armin has settled on having agents produce and maintain their own small skills. The benefits: local control over the tool wrapper, quick on-demand updates when behavior shifts, and avoiding the heavy token cost of embedding large MCP definitions. He expects protocol evolution and potentially more dynamic tooling, but also highlights a need for greater MCP protocol stability before dynamic loadouts can match the simplicity of skill-driven workflows.

