A GitHub repo with zero code got 35K stars by turning AI into employees

When Greg Isenberg tweeted about a GitHub repository on March 6, 2026, he probably didn’t expect 1.3 million views and 8,600 likes. The repo, called “The Agency,” had been growing steadily since October 2025. But Isenberg’s endorsement to his 620,000 followers turned a niche tool into a phenomenon: 10,000 new stars in under 7 days, eventually hitting 34,840 stars and 5,322 forks.
Here’s what makes it interesting: the repository doesn’t contain a Python framework, a JavaScript library, or a machine learning model. The Agency is a collection of 112+ markdown files, each describing a specialized AI agent persona. A Frontend Developer with opinions about React component architecture. A Growth Hacker with viral loop playbooks. A “Whimsy Injector” whose job is adding delight and Easter eggs to products. A Blockchain Security Auditor who hunts for smart contract vulnerabilities.
Copy these files into Claude Code, Cursor, or Aider, and your generic AI assistant becomes a domain specialist with personality, workflows, deliverables, and success metrics. No code required.
I looked at 12 sources – the GitHub repo itself, industry reports from Gartner and IDC, comparable viral repos – to understand why this resonated and what it says about where AI agent development is heading.
The anatomy of an AI employee

Each agent file follows a consistent structure:
- Identity traits defining the agent’s perspective and personality
- A core mission describing what the agent exists to accomplish
- Domain-specific rules that constrain behavior to expert-level outputs
- Success metrics for validating deliverable quality
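Put together, a persona file looks something like this – a condensed, hypothetical sketch following the structure above, not a verbatim file from the repo:

```markdown
# Frontend Developer

## Identity
Pragmatic React specialist. Opinionated about component
boundaries, allergic to premature abstraction.

## Core mission
Ship accessible, performant UI that real users can actually use.

## Rules
- Prefer composition over inheritance in component design
- Treat Core Web Vitals budgets as hard constraints
- Every interactive element meets WCAG 2.1 AA

## Success metrics
- Lighthouse performance score of 90 or above
- Zero axe-core accessibility violations
```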
The agents are organized into divisions mirroring a real company: Engineering (20 agents), Design (8), Marketing (18), Product (4), Project Management (6), Testing (8), Support (6), Spatial Computing (6), Specialized (16), Game Development (18+), Paid Media (7), and Sales (8).
What separates these from generic prompt templates is depth. The Frontend Developer doesn’t just “help with React.” It has opinions about component architecture, state management patterns, Core Web Vitals optimization, and accessibility standards. The Backend Architect thinks about API design patterns, database normalization tradeoffs, and microservice boundaries. Each agent carries the accumulated knowledge of how that role actually works in practice.
The creator’s background explains a lot
Michael Sitarzewski isn’t an AI researcher. He’s a serial entrepreneur – 8 companies, Techstars Cloud 2012, currently VP of Innovation and Technology at Tandem Theory, a Dallas-based data-driven marketing agency.
The Agency grew out of a Reddit thread and months of iterating on prompts that actually worked for real agency operations. That practitioner origin is why the agents feel authentic instead of academic. You can tell the Growth Hacker agent was written by someone who’s actually run growth experiments, not someone who read about them.
There’s a broader point here: the most effective agent personas are being created by domain experts, not ML engineers. Knowing transformer architectures is mostly irrelevant to defining how a Security Engineer should think about threat modeling. Knowing security engineering is everything.
Why it went viral

The Agency followed a playbook that’s now been proven twice in AI. The AI Hedge Fund repo (virattt/ai-hedge-fund, 43,000+ stars) used the same approach: 18 AI agents modeled after investors like Warren Buffett, Charlie Munger, Michael Burry, and Cathie Wood, each analyzing stocks through their own philosophy.
Both repos share the same formula.
Give agents human-relatable identities. Not Agent_001 and Agent_002. “Frontend Developer” and “Brand Guardian.” “Warren Buffett” and “Cathie Wood.”
Organize them like a real organization. Departments and divisions, not pipelines and graphs. The mental model is hiring, not configuring.
Define personality alongside function. Communication styles, opinions, quirks. The Whimsy Injector doesn’t exist in any multi-agent framework documentation. It exists in The Agency because real creative teams have people whose job is making products feel fun.
PrismNews attributed the viral growth partly to accessibility – the documentation was designed so “even a rookie could understand” it. But the deeper driver was the framing. “Spin up an AI agency with AI employees” is immediately understandable. “Configure a multi-agent orchestration pipeline with shared state management” is not. Same capability, totally different packaging.
How it compares to multi-agent frameworks
The 2026 multi-agent landscape is crowded:
CrewAI offers role-based Python orchestration with intuitive team modeling. Fastest-growing framework for multi-agent business workflow automation.
LangGraph provides graph-based state management with production-grade durability. Powers the AI Hedge Fund’s agent coordination.
AutoGen (Microsoft) handles conversational multi-agent systems with group decision-making patterns.
MetaGPT (57,568 stars) simulates a full software company with standard operating procedures, following the philosophy “Code = SOP(Team).”
The Agency is fundamentally different. No orchestration logic. No execution framework. No inter-agent communication protocol. It’s purely the specification layer – the “who” and “what” without the “how.”
That makes it complementary, not competitive. You could use Agency persona files with CrewAI for orchestration, LangGraph for state management, or MetaGPT for SOP execution. The agents are portable markdown that works with any tool accepting system prompts.
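Because the personas are plain markdown, wiring one into any chat-style API takes only a few lines of glue. A minimal sketch, assuming a chat-completions-style message format; the file name and persona text here are illustrative, not copied from the repo:

```python
from pathlib import Path

def load_persona(path) -> str:
    """Read an Agency-style persona file and return it as a system prompt."""
    return Path(path).read_text(encoding="utf-8").strip()

def build_messages(persona: str, user_request: str) -> list:
    """Compose a chat message list: persona as the system prompt,
    the user's task as the first user turn."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_request},
    ]

# Illustrative stand-in for one of the repo's persona files.
demo = Path("frontend-developer.md")
demo.write_text(
    "# Frontend Developer\n\n"
    "## Core mission\nShip accessible, fast React interfaces.\n"
)

messages = build_messages(
    load_persona(demo),
    "Review this component for a11y issues.",
)
```

The same message list can be handed to a raw model API, or the persona text dropped into a framework slot such as an agent's backstory in CrewAI; the markdown file itself never changes.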
The tradeoff: traditional frameworks give you autonomous multi-agent execution at the cost of complexity and vendor lock-in. The Agency gives you portable agent definitions at the cost of needing a separate execution environment (Claude Code, Cursor, etc.) with a human in the loop.
The revenue gap: Isenberg’s most interesting observation

Isenberg’s most provocative point about the repo wasn’t about its popularity but its blind spot: “Every AI agency framework builds agents for engineering, design, marketing, and product. But nobody’s building the agent that actually generates revenue.”
He went further: “Selling is the hardest thing to systematize. It requires reading humans, adapting in real time, and building trust that no prompt can fake.”
The numbers back this up. The Agency’s initial wave featured 20 engineering agents and 18 marketing agents; sales agents were added later, almost as an afterthought. The broader AI agent ecosystem optimizes for creation over conversion.
This isn’t random. Sales requires capabilities that current LLMs struggle with: real-time emotional intelligence, relationship memory spanning months, handling rejection gracefully, knowing when to push and when to back off. These aren’t prompt engineering problems. They’re fundamental capability gaps.
The implication: whoever figures out AI-assisted selling (not AI replacement of selling) will capture disproportionate value.
Stars aren’t production

Beneath the excitement, there’s an uncomfortable reality. From Gartner and IDC:
- By 2026, 40% of enterprise applications will include task-specific AI agents (Gartner)
- AI copilots will be embedded in 80% of enterprise workplace applications (IDC)
- But fewer than 1 in 4 organizations have successfully scaled AI agents to production
- By 2028, only 38% of organizations will have AI agents working as members of human teams
The 34,840 stars represent aspiration, not adoption. GitHub stars measure “I want this to work” far more than “I use this daily.” The gap between bookmarking a repo and deploying agents that handle real production workloads is the central challenge of the entire multi-agent space.
The barriers aren’t primarily technical. They’re organizational: trust, reliability, accountability, cost management, and the difficulty of debugging non-deterministic AI behavior in production.
What this tells us about where things are going
System prompts are becoming infrastructure
The Agency shows that carefully crafted system prompts have standalone value. Agent persona libraries are becoming a product category – curated, version-controlled, community-maintained collections of specialized knowledge encoded as prompts. This parallels how UI component libraries (Bootstrap, Material UI) evolved from custom code to shared infrastructure.
The execution layer is where the value lives
If agent definitions are becoming commoditized markdown files, the competitive moat shifts to execution: reliable deployment, persistent state, channel integration, cost management, monitoring. Defining a “Frontend Developer” persona is the easy part. Making that agent reliably ship production code, respond to Slack messages, and manage its own compute costs – that’s where actual value gets created.
Blended teams are the near-term reality
The data consistently points toward augmentation, not replacement. The model isn’t “AI replaces your engineering team” but “your engineering team gets AI specialists handling specific tasks around the clock.” Job postings for “agent orchestrator” and “AI team lead” are already appearing.
Organizational design is undervalued
If the most popular AI agent repo of 2026 was built by an agency operations expert rather than an ML researcher, maybe the most valuable skill in the multi-agent era isn’t technical. Maybe it’s organizational design: understanding how roles, responsibilities, workflows, and coordination actually function in specific domains.
Takeaways
- The most viral AI repo of 2026 contains zero code. The value is in organizational knowledge encoded as system prompts, which suggests the specification layer and execution layer of multi-agent AI are separating into distinct products.
- Giving AI agents human-relatable identities, job titles, and personalities consistently outperforms abstract technical framing. Both repos that went viral (The Agency, AI Hedge Fund) treated agents as team members, not software components.
- The revenue-generating agent is the biggest unsolved problem. Engineering, design, and marketing are getting automated. Sales remains the frontier because it requires human capabilities that current AI can’t replicate.
- The gap between interest and production is the real story. 35,000 stars is impressive; fewer than 25% of organizations with agents successfully scaled to production is the reality. Bridging this gap through reliability, trust, and operational tooling is where the next wave of value gets created.
- Domain experts are the new AI builders. The most effective agent personas come from people who deeply understand specific roles, not people who deeply understand neural network architectures.
This analysis draws from 12 sources including the msitarzewski/agency-agents GitHub repository (34,840 stars), the virattt/ai-hedge-fund repository (43K+ stars), industry reports from Gartner, IDC, and Goldman Sachs, framework comparisons across CrewAI/AutoGen/LangGraph/MetaGPT, and the GitHub Blog’s analysis of top open source AI projects.
