Someone asked me a great question after my “From Apps to Agents” post: How do you write good markdown files for agents?

It’s the right question. The shift from apps to agents is only as good as the instructions you write. And most people write them… poorly.

The Post-It Note Anti-Pattern

Most developers approach agent instructions like code comments: a context snippet here, a rule there when something breaks, no clear structure, duplicate guidance scattered around. This works for quick experiments. It fails as your agentic system grows.

What the Research Says

Research increasingly validates what practitioners are discovering. The patterns below cite specific findings where they apply.

The Structure That Works

1. Context First. Lead with who the agent is—role, expertise, constraints. “Persona pattern” research (Frontiers in AI, 2025) shows this significantly improves behavior.

2. One Mission. One clear sentence. If you’re describing multiple unrelated tasks, you need multiple agents.

3. Tools Explicit. List what’s available AND what’s not. Agents hallucinate capabilities. (In MCP-based systems like GitHub, tools are configured globally via JSON—but explicit instructions reinforce the configuration.)

4. Guardrails Upfront. The “NEVER do X” rules prevent expensive failures. Research shows explicit constraints can improve focus (NegativePrompt, IJCAI 2024)—though this is model-dependent; older models sometimes ironically steered toward forbidden behaviors.
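Putting the four elements together, a minimal agent file might look like the following. The specific agent, tools, and rules are illustrative assumptions, not a prescribed template:

```markdown
# Code Review Agent

## Context
You are a senior backend engineer reviewing pull requests for a Python service.

## Mission
Review each pull request for correctness, security, and maintainability, and leave actionable comments.

## Tools
- Available: read repository files, post review comments
- NOT available: merging, pushing commits, network access

## Guardrails
- NEVER approve a change touching authentication code without flagging it for human review.
- NEVER comment on a file you have not read.
```

Note the ordering mirrors the structure above: context first, one mission, explicit tools (including what's absent), guardrails last but unmissable.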

Modularization: The DRY Principle for Agents

Just like code, agent instructions benefit from separation of concerns:

| File | Purpose |
| --- | --- |
| `voice.md` | How to communicate (tone, style) |
| `agent-task.md` | What to do (mission, workflow) |
| `domain-knowledge.md` | What to know (context, examples) |
| `guardrails.md` | What NOT to do (rules, safety) |

Agents reference what they need. When you update voice guidelines, you update one file—not every agent. Duplicated instructions will drift out of sync. Reference-based composition solves this.
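Reference-based composition can be as simple as expanding include directives at prompt-assembly time. Here's a minimal sketch in Python, assuming a hypothetical `@include(file.md)` convention (not a standard; your framework may offer its own mechanism):

```python
import re
from pathlib import Path

# Hypothetical directive: @include(relative/path.md)
_INCLUDE = re.compile(r"@include\((.+?)\)")

def compose_instructions(entry_file: str, base_dir: str = ".") -> str:
    """Expand @include(...) references into a single instruction prompt.

    Repeatedly substitutes each directive with the referenced file's
    contents, so shared files like voice.md are edited in one place.
    (No cycle detection -- circular includes would loop forever.)
    """
    base = Path(base_dir)
    text = (base / entry_file).read_text()
    while _INCLUDE.search(text):
        text = _INCLUDE.sub(lambda m: (base / m.group(1)).read_text(), text)
    return text
```

Updating `voice.md` then changes every agent that references it on the next composition pass, which is exactly the DRY property the table above is after.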

Meta: Using Agents to Write Agent Files

Here’s a meta-pattern worth considering: use an agent to help you write consistent, effective agent files. This agent-writing-agent can:

  • Reference research (like the studies cited above) to inform structure
  • Enforce consistency across your agent ecosystem
  • Suggest improvements based on observed failures
  • Generate initial drafts from task descriptions

The same modularity principles apply: your agent-writing-agent references the research and style guides, then produces agent files that reference their own shared instructions.

You can even use agents to trigger analysis and updating of an agent definition—very meta. When an agent underperforms, another agent can review the interactions, identify patterns, and propose instruction improvements.

The Bigger Picture

Engineers struggling with agents dump everything into one giant prompt and wonder why behavior is inconsistent. Or worse, they provide no context or project grounding at all, try to one-shot everything, and then get frustrated when the agent turns out not to be magic.

Engineers shipping effectively apply the same patterns that make code maintainable: single responsibility, separation of concerns, explicit interfaces. Your agent instructions are your new codebase—and your agent is your new direct report. Treat them with the same rigor.

A Note on Human-Agent Collaboration

This post was written collaboratively with an agent. But “collaboratively” is the key word. The agent produced drafts; I provided direction, feedback, research pointers, and iterative refinement. Think of PRs as a “1:1” with your agent: reviewing together, refining together. The human is the manager, the agent is the doer—and the output is better than either would produce alone.


Connect with me on LinkedIn to share your agent instruction patterns.
