Task Orchestrator
AI-native task orchestrator for hierarchical project management with dependencies and templates.
Stop losing context. Start building faster.
An orchestration framework for AI coding assistants that solves context pollution and token exhaustion - enabling your AI to work on complex projects without running out of memory.
AI assistants suffer from context pollution - a well-documented challenge where model accuracy degrades as token count increases. This "context rot" stems from transformer architecture's quadratic attention mechanism, where each token must maintain pairwise relationships with all others.
The Impact: As your AI works on complex features, it accumulates conversation history, tool outputs, and code examples. By task 10-15, the context window fills with 200k+ tokens. The model loses focus, forgets earlier decisions, and eventually fails. You're forced to restart sessions and spend 30-60 minutes rebuilding context just to continue.
Industry Validation: Anthropic's research on context management confirms production AI agents "exhaust their effective context windows" on long-running tasks, requiring active intervention to prevent failure.
Traditional approaches treat context windows like unlimited memory. Task Orchestrator recognizes they're a finite resource that must be managed proactively.
Task Orchestrator implements industry-recommended patterns from Anthropic's context engineering research: persistent external memory, summary-based context passing, and sub-agent architectures with clean contexts.
How it works: completed work is summarized into persistent external memory, and each sub-agent starts with a clean context containing only the summaries it needs.
Result: Scale to 50+ tasks without hitting context limits. Up to 90% token reduction (comparable to Anthropic's 84% benchmark). Zero time wasted rebuilding context.
📖 Deep dive: See Agent Architecture Guide for token efficiency comparison and Developer Architecture for technical details.
Easiest way - Install everything (MCP server, skills, subagents, hooks) in one step:
Clone this repository:
git clone https://github.com/jpicklyk/task-orchestrator.git
cd task-orchestrator
Add the local marketplace:
/plugin marketplace add ./
Install the plugin:
/plugin install task-orchestrator@task-orchestrator-marketplace
Restart Claude Code
Initialize your project:
setup_project
Note: Once this repository is published on GitHub, you'll be able to use:
/plugin marketplace add jpicklyk/task-orchestrator
/plugin install task-orchestrator
See Plugin Installation Guide for detailed instructions and troubleshooting.
For other MCP clients or custom setup:
Install via Docker:
docker pull ghcr.io/jpicklyk/task-orchestrator:latest
Configure your AI platform:
Claude Code:
claude mcp add-json task-orchestrator '{"type":"stdio","command":"docker","args":["run","--rm","-i","-v","mcp-task-data:/app/data","-v",".:/project","-e","AGENT_CONFIG_DIR=/project","ghcr.io/jpicklyk/task-orchestrator:latest"]}'
This single command works across all platforms (macOS, Linux, Windows).
Other MCP clients: Task Orchestrator's core MCP protocol (persistent memory, task management) works with any MCP client, but advanced features (skills, subagents, hooks) are Claude Code-specific. See Installation Guide for configuration.
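For clients that read a JSON configuration file rather than a CLI command (Claude Desktop's claude_desktop_config.json uses this mcpServers shape), the same Docker invocation from above might look like this; adjust to your client's config schema:

```json
{
  "mcpServers": {
    "task-orchestrator": {
      "command": "docker",
      "args": [
        "run", "--rm", "-i",
        "-v", "mcp-task-data:/app/data",
        "-v", ".:/project",
        "-e", "AGENT_CONFIG_DIR=/project",
        "ghcr.io/jpicklyk/task-orchestrator:latest"
      ]
    }
  }
}
```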
First time setup - Initialize your AI with Task Orchestrator patterns:
"Run the initialize_task_orchestrator workflow"
This writes Task Orchestrator patterns to your AI's permanent memory (CLAUDE.md, .cursorrules, etc.)
Project setup - Initialize your project with configuration:
"Run setup_project to initialize Task Orchestrator"
Quick reference - View essential patterns anytime:
"Show me the getting_started guide"
That's it! Your AI can now create and manage tasks with persistent memory.
🚀 Complete setup: Quick Start Guide - Includes sub-agent setup, templates, and first feature walkthrough.
Your AI remembers project state, completed work, and technical decisions - even after restarting. No more re-explaining your codebase every morning.
Build features with 10+ tasks without hitting context limits. Traditional approaches fail at 12-15 tasks. Task Orchestrator scales to 50+ tasks effortlessly.
Database → Backend → Frontend → Testing workflows with automatic context passing. Each specialist sees only what they need, not everything.
Multiple AI agents work in parallel without conflicts. Built-in concurrency protection and dependency management.
Capture bugs and improvements as you find them. Organize work without losing track of what needs fixing.
1. Hierarchical Task Management
Project: E-Commerce Platform
└── Feature: User Authentication
├── Task: Database schema [COMPLETED]
├── Task: Login API [IN-PROGRESS]
├── Task: Password reset [PENDING]
└── Task: API docs [PENDING] [BLOCKED BY: Login API]
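The "what's next?" logic over a hierarchy like this reduces to filtering pending tasks whose blockers are complete. A minimal sketch (field names and this tiny in-memory model are illustrative, not the orchestrator's actual API):

```python
# Sketch of dependency-aware task selection over the hierarchy above.
# Field names are illustrative, not the real Task Orchestrator schema.
tasks = {
    "Database schema": {"status": "completed",   "blocked_by": []},
    "Login API":       {"status": "in-progress", "blocked_by": []},
    "Password reset":  {"status": "pending",     "blocked_by": []},
    "API docs":        {"status": "pending",     "blocked_by": ["Login API"]},
}

def next_tasks(tasks):
    """Return pending tasks whose blockers are all completed."""
    return [name for name, t in tasks.items()
            if t["status"] == "pending"
            and all(tasks[b]["status"] == "completed" for b in t["blocked_by"])]

print(next_tasks(tasks))  # ['Password reset'] ('API docs' is still blocked)
```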
2. Summary-Based Context Passing
Instead of passing 5,000 tokens of full task details, specialists create 300-500 token summaries:
### Completed
Created Users table with authentication fields (id, email, password_hash). Added indexes for email lookup.

### Files Changed
- db/migration/V5__create_users.sql
- src/model/User.kt

### Next Steps
API endpoints can use this schema for authentication.
Result: Up to 92% token reduction per dependency. This implements Anthropic's "compaction" pattern - preserving critical information while discarding redundant details.
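The arithmetic behind that figure, assuming the 5,000-token full task details and a 400-token summary (the mid-range of the 300-500 token band above):

```python
full_context = 5000  # tokens in full task details
summary = 400        # tokens in a specialist's summary
reduction = (full_context - summary) / full_context
print(f"{reduction:.0%} token reduction per dependency")  # 92%
```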
3. Event-Driven Workflows
Tasks progress automatically based on workflow events:
work_started → Task moves to in-progress
implementation_complete → Task moves to testing
tests_passed → Task completes
all_tasks_complete → Feature moves to testing

All status transitions are validated by your config in .taskorchestrator/config.yaml.
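A sketch of how those event-to-transition mappings could be expressed in .taskorchestrator/config.yaml; the keys and layout here are illustrative assumptions, so consult the Status Progression Guide for the actual schema:

```yaml
# Illustrative only -- see the Status Progression Guide for the real keys.
status_transitions:
  task:
    work_started:            pending -> in-progress
    implementation_complete: in-progress -> testing
    tests_passed:            testing -> completed
  feature:
    all_tasks_complete:      in-progress -> testing
```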
📘 Learn more: Status Progression Guide and Workflow Prompts
Task Orchestrator follows a Plan → Orchestrate → Execute pattern that prevents context pollution:
Start with either:
Example:
# User Authentication Feature

Build complete authentication system with login, signup, and password reset.

Requirements:
- JWT-based authentication
- Password hashing with bcrypt
- Email verification
- Rate limiting on login attempts
Use the coordinate_feature_development workflow (Claude Code):
"Run coordinate_feature_development with my plan file"
What happens: the workflow reads your plan, selects appropriate templates, and breaks the work into tasks with dependencies.
Result: a feature with 5-15 tasks, proper templates, clear dependencies, and appropriate specialist tags.
The AI automatically routes each task to the appropriate specialist:
Default Specialists:
Custom Specialists (optional via .taskorchestrator/agent-mapping.yaml):
Your role: Just say "What's next?" and the AI handles routing, dependencies, and coordination.
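As a hedged sketch of the optional mapping file, assuming a simple specialist-to-agent key/value layout (all names here are hypothetical; check the documentation for the real format):

```yaml
# .taskorchestrator/agent-mapping.yaml -- hypothetical example
specialists:
  database: my-database-agent
  backend:  my-backend-agent
  frontend: my-frontend-agent
```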
💡 Pro Tip: When installed via the plugin marketplace, the Task Orchestrator communication style plugin is automatically active in Claude Code, giving clearer coordination through phase labels, status indicators (✅⚠️❌🔄), and concise progress updates.
Task Orchestrator uses event-driven status progression mapped to your workflow:
Configuration: .taskorchestrator/config.yaml defines:
📘 Deep dive: Status Progression Guide for complete configuration reference and workflow examples.
| Feature | Claude Code | Other MCP Clients |
|---|---|---|
| Persistent Memory | ✅ Tested & Supported | ✅ MCP Protocol Support |
| Template System | ✅ Tested & Supported | ✅ MCP Protocol Support |
| Task Management | ✅ Tested & Supported | ✅ MCP Protocol Support |
| Sub-Agent Orchestration | ✅ Tested & Supported | ❌ Claude Code-specific |
| Skills (Lightweight Coordination) | ✅ Tested & Supported | ❌ Claude Code-specific |
| Hooks (Workflow Automation) | ✅ Tested & Supported | ❌ Claude Code-specific |
| Status Event System | ✅ Tested & Supported | ✅ MCP Protocol Support |
Primary Platform: Claude Code is the primary tested and supported platform with full feature access including skills, subagents, and hooks.
Other MCP Clients: The core MCP protocol (persistent memory, task management, templates, status events) works with any MCP client, but we cannot verify functionality on untested platforms. Advanced orchestration features (skills, subagents, hooks) require Claude Code's .claude/ directory structure.
Claude Code (Full Orchestration):
You: "I have a plan for user authentication in plan.md"
AI: "Loading Feature Orchestration Skill..."
"Launching Feature Architect (Opus) with plan file..."
→ Feature created with 8 tasks
"Launching Planning Specialist (Sonnet)..."
→ Tasks broken down with dependencies
You: "What's next?"
AI: "Task 1: Database schema [PENDING]. No blockers."
Launches Implementation Specialist → Implements schema → Creates 400-token summary
You: "What's next?"
AI: "Task 2: Authentication API [PENDING]. Dependencies satisfied."
Reads 400-token summary (not 5k full context)
Launches Implementation Specialist → Implements API → Creates summary
You: "What's next?"
AI: "Task 3: Login UI [PENDING]. Backend ready."
Launches Implementation Specialist → Implements UI → Feature progresses
[Next morning - new session]
You: "What's next?"
AI: "Task 4: Integration tests [PENDING]. 3 tasks completed yesterday."
No context rebuilding - AI remembers everything from persistent memory
Key Benefits:
- coordinate_feature_development handles specialist selection

Quick Fixes:
- Verify Docker is installed and running: docker version
- Enable debug logging by setting MCP_DEBUG=true in your Docker config

Get Help:
Built with modern, reliable technologies:
Architecture Validation: Task Orchestrator implements patterns recommended in Anthropic's context engineering research: sub-agent architectures, compaction through summarization, just-in-time context loading, and persistent external memory. Our approach prevents context accumulation rather than managing it after the fact.
🏗️ Architecture details: See Developer Guides
We welcome contributions! Task Orchestrator follows Clean Architecture with 4 distinct layers (Domain → Application → Infrastructure → Interface).
To contribute:
Create a feature branch (git checkout -b feature/amazing-feature)

See Contributing Guidelines for detailed development setup.
Version format: {major}.{minor}.{patch}.{git-commit-count}-{qualifier}
Current versioning defined in build.gradle.kts.
MIT License - Free for personal and commercial use
AI coding tools, AI pair programming, Model Context Protocol, MCP server, Claude Code, Claude Desktop, AI task management, context persistence, AI memory, token optimization, RAG, AI workflow automation, persistent AI assistant, context pollution solution, AI orchestration, sub-agent coordination
Ready to build complex features without context limits?
docker pull ghcr.io/jpicklyk/task-orchestrator:latest
Then follow the Quick Start Guide to configure your AI platform. 🚀