Most engineers see a workflow problem and immediately think: “What app should I build?”
After 18 years of client-side and OS development, that instinct was deeply wired into my brain. But when I first tried to build a LinkedIn-posting agent several months ago, I failed three times before realizing I was asking the wrong question entirely.
The Three Failed Attempts
Attempt 1: Claude Desktop Project
My first instinct was to create a project in Claude Desktop with carefully tuned system instructions. I fed it my past LinkedIn posts, attached engagement metrics as frontmatter, and iterated on the prompts.
It worked okay. The AI produced first drafts, leaving me to fill in the details. But I noticed two problems:
- No portability. My entire context was trapped in one tool. If I wanted to switch tools or work from my phone, I’d have to start over.
- Sensationalism creep. Some of that is "the name of the game" for blogs and LinkedIn content nowadays, but… it was too much.
Attempt 2: The Python App
I asked Claude: “How can I build an agent that looks at trending AI topics on Reddit and Hacker News, then produces a LinkedIn post?”
Claude happily generated a Python app. The instructions were something like: “Run this script, then paste the output into claude.ai.”
That’s when I realized I was thinking about this all wrong.
That’s not an agent. That’s automation with extra steps. I was still in “build an app” mode. Also, I had no intention of reading, maintaining, or debugging Python code. 🐍
Attempt 3: Claude Code CLI
My next thought: “Let me use Claude Code CLI to automate the GenAI portion.”
Still wrong. I was still trying to build infrastructure around a problem that didn’t need infrastructure.
The Breakthrough 💡
But we’re in the agent era now, and agents don’t work that way.
An agent is a markdown file (context and instructions) + the ability to take actions (MCP servers).
This seems obvious now (December 2025), but 6-9 months ago when I was figuring this out? Not so much.
No app. No deployment. No infrastructure. Just instructions that any AI tool can read. The runtime is the agent and its model. ✨
When I finally fired up Claude Code and used /agent to create an agent, it generated… a markdown file. That’s it.
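For readers who haven't tried this: a Claude Code agent file is just markdown with a small YAML frontmatter block (a name and a description), followed by the instructions. Here's a minimal sketch - the name, instructions, and the referenced style-guide.md file are illustrative, not my actual agent:

```markdown
---
name: linkedin-draft-writer
description: Drafts LinkedIn posts about trending AI topics in my voice
---

You write first drafts of LinkedIn posts.

- Pull candidate topics from the configured RSS feeds.
- Match the tone and structure of the examples in style-guide.md.
- Avoid sensationalism: no clickbait hooks, no manufactured urgency.
- Output one draft plus two alternative opening lines.
```

That's the whole "app." Everything else - the model, the tool calls, the loop - is the agent runtime's job.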
Why This Matters
Context Portability
The markdown file with my agent instructions works everywhere:
- Claude Desktop
- VS Code with GitHub Copilot
- Any future AI tool that can read markdown
My context isn’t locked into any vendor. If I find a better tool tomorrow, I take my instructions with me. Zero migration cost.
The beauty of plain text is that it’s not a proprietary format - it’s yours to keep and move wherever you want.
The MCP Connection
As someone working on enabling agentic workflows on Windows, I live and breathe MCP (Model Context Protocol) these days. When I needed my agent to pull from RSS feeds, I didn’t build anything - I just connected an existing MCP server.
Reddit has RSS feeds. Hacker News has RSS feeds. Someone had already written an MCP server for RSS. I just… used it. 🎉
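In practice, wiring one up is a few lines of client config. Here's a sketch of what that looks like in a Claude Desktop-style config file - the server package name and the FEEDS variable are placeholders, assuming whichever RSS MCP server you pick accepts a feed list; the feed URLs themselves (Hacker News via hnrss.org, subreddit .rss endpoints) are real:

```json
{
  "mcpServers": {
    "rss": {
      "command": "npx",
      "args": ["-y", "some-rss-mcp-server"],
      "env": {
        "FEEDS": "https://hnrss.org/frontpage,https://www.reddit.com/r/MachineLearning/.rss"
      }
    }
  }
}
```

No code written, no service deployed - just pointing the agent at a capability someone else already built.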
Where I Am Today
My approach has evolved quite a bit since those early experiments. Here’s what it looks like now:
- A GitHub repo with GitHub Copilot, leaning especially on its agent tasks
- Multiple agents that reference separate instructions for different concerns - post guidelines, style guide, research protocols
- MCP servers configured for GitHub's coding agent: RSS readers (which, to be frank, I rarely use), the GitHub MCP server with some additional permissions, and sequential-thinking as my core servers
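The coding agent takes a similar JSON block in the repository's Copilot settings. A hedged sketch of the shape mine takes - the sequential-thinking entry uses the published reference server package, but check GitHub's current docs, as the exact settings schema may differ:

```json
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    }
  }
}
```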
When I want to write a post, I launch an agent task and get a draft. I can iterate on it, and my blog automatically deploys a preview to a private URL where I can review the result and iterate quickly.
No Python scripts. No deployed services. No infrastructure to maintain (other than this website).
The Mindset Shift
| Old Thinking | New Thinking |
|---|---|
| Build an app | Write instructions |
| Deploy infrastructure | Connect MCP servers |
| Maintain code | Iterate on markdown |
| Vendor lock-in | Context portability |
| Build, deploy, maintain | Write, connect, iterate |
Key Takeaways
The app habit runs deep. It took multiple failed attempts to break it. You have to catch yourself mid-assumption and redirect.
Context beats features. I could have built a more “feature-rich” Python app. But my markdown files work across tools and will keep working as the AI landscape evolves.
MCP is the multiplier. The real power isn’t the agent instructions alone - it’s connecting to MCP servers that extend what the agent can do.
Bet on plain text for as long as frontier models work in language. Markdown files work everywhere, are trivial to version control (if they're in git), and survive tool changes.
The Bigger Picture
We’re at an inflection point similar to the mobile era. Remember when every company needed an app for everything?
Now we’re realizing: most problems don’t need apps. They need well-structured context that AI agents can work with.
The engineers who figure this out first will ship faster than those still thinking in apps. 🚀
What habits are holding you back from thinking in agents? Connect with me on LinkedIn to share your experience.