Sub-Agent System
OpenSentinel decomposes complex tasks across five specialist agents that work in parallel with shared context and coordination.
When you send a complex request through `chatWithTools()`, the task coordinator automatically decomposes it and assigns subtasks to the right agents.
Overview
Unlike single-agent systems that funnel every request through one prompt, OpenSentinel routes complex tasks to specialized sub-agents. Each agent has its own system prompt, tool permissions, token budget, and execution context.
The Task Coordinator receives a request, breaks the work into parallel streams, and assigns each stream to the appropriate agent. Agents communicate via a SharedContext key-value store and an AgentMessenger pub/sub system, so each agent can build on the others' outputs without blocking.
This architecture delivers faster results on multi-step tasks (research + analysis + writing, for example) because independent subtasks run concurrently, up to 5 at a time, while a dependency graph ensures ordering when one subtask depends on another.
The Five Agents
Each agent type has a dedicated system prompt, a restricted set of tool permissions, and a defined process for completing its objective.
Research Agent
The Research Agent investigates topics and produces well-sourced information. It breaks research questions into sub-questions, searches multiple sources, cross-references findings, and synthesizes results into a coherent report.
| Attribute | Details |
|---|---|
| Capabilities | Web search, information gathering, source synthesis, fact verification |
| Allowed Tools | web_search, browse_url, read_file, list_directory, search_files |
| Process | Decompose question → Search sources → Cross-reference → Synthesize report |
| Best For | Competitive analysis, topic deep-dives, market research, fact-checking |
Coding Agent
The Coding Agent implements, debugs, and improves code. It reads existing source files, plans an implementation approach, writes clean and documented code, and verifies the solution.
| Attribute | Details |
|---|---|
| Capabilities | Code generation, debugging, code review, refactoring, test writing |
| Allowed Tools | read_file, write_file, list_directory, search_files, execute_command |
| Process | Understand requirements → Explore code → Plan → Implement → Test |
| Best For | Feature implementation, bug fixes, PR reviews, test scaffolding |
Writing Agent
The Writing Agent creates high-quality written content. It analyzes the purpose and audience, outlines the piece, drafts the content, and refines it for tone and clarity.
| Attribute | Details |
|---|---|
| Capabilities | Drafts, editing, content creation, tone adjustment, summarization |
| Allowed Tools | read_file, write_file, web_search, browse_url |
| Process | Understand audience → Research → Outline → Draft → Refine |
| Best For | Reports, blog posts, documentation, emails, proposals |
Analysis Agent
The Analysis Agent processes data and information to produce actionable insights. It gathers and organizes data, applies analytical methods, identifies patterns, and delivers findings with recommendations.
| Attribute | Details |
|---|---|
| Capabilities | Data processing, pattern detection, insights generation, statistical analysis |
| Allowed Tools | read_file, web_search, browse_url, list_directory, search_files |
| Process | Define objective → Gather data → Analyze → Identify patterns → Recommend |
| Best For | Data analysis, competitor comparisons, trend detection, code quality audits |
OSINT Agent
The OSINT Agent investigates entities, traces financial flows, maps organizational relationships, and builds comprehensive intelligence profiles using public records and open data sources.
| Attribute | Details |
|---|---|
| Capabilities | Entity investigation, financial flow tracing, relationship mapping, public records search |
| Allowed Tools | web_search, browse_url, read_file, osint_search, osint_graph, osint_enrich, osint_analyze |
| Process | Identify targets → Gather public records → Map relationships → Analyze patterns → Compile profile |
| Best For | Entity research, financial investigations, corporate relationship mapping, public records analysis |
Agentic RAG Pipeline
Before messages even reach the LLM, OpenSentinel runs a full agentic pipeline that pre-classifies tools, auto-searches memory, and pre-executes preparatory steps. This is the intelligence layer between your message and Claude's response.
```
User Message
     |
     v
Agentic Orchestrator (parallel)
  |—— Memory Search ——— auto-search relevant memories
  |—— Tool Classifier —— Haiku pre-classifies into 18 categories
     |
     v
Pre-Execution ——> run classified tools before main LLM
     |
     v
Brain (LLM) ——> enriched context + filtered tools
     |
     v
Memory Middleware ——> fire-and-forget fact extraction
```
| Component | File | Purpose |
|---|---|---|
| Tool Classifier | src/core/brain/tool-classifier.ts | Uses Haiku to pre-classify queries into 18 tool categories |
| Memory Middleware | src/core/memory/memory-middleware.ts | Auto-searches memories before AI calls; auto-extracts facts after |
| Agentic Orchestrator | src/core/brain/agentic-orchestrator.ts | Runs memory + classification in parallel, then pre-execution |
| Agent Processor | src/core/agents/agent-processor.ts | BullMQ worker consuming sentinel-agents queue |
| Brain Telemetry | src/core/observability/brain-telemetry.ts | Real-time event emitter for pipeline observability |
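The parallel fan-out in the orchestrator can be sketched as below. This is an illustrative shape only; the function names and types are assumptions, not the actual exports of agentic-orchestrator.ts:

```typescript
// Hypothetical sketch of the orchestrator's two phases: memory search and
// tool classification run concurrently, then pre-execution runs before the
// main LLM call. Types and names are illustrative.
type Classification = { categories: string[] };
type Memory = { text: string };

async function orchestrate(
  message: string,
  searchMemories: (q: string) => Promise<Memory[]>,
  classifyTools: (q: string) => Promise<Classification>,
  preExecute: (c: Classification) => Promise<Record<string, unknown>>,
) {
  // Phase 1: independent lookups run in parallel
  const [memories, classification] = await Promise.all([
    searchMemories(message),
    classifyTools(message),
  ]);
  // Phase 2: pre-execute the classified tools before the main LLM call
  const toolResults = await preExecute(classification);
  return { memories, classification, toolResults };
}
```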
You can observe the pipeline in real time at /brain. Watch pipeline stages light up, see memory searches complete, and track tool execution with latency metrics.
How It Works
The agent system follows a structured pipeline from request to result. Here is the architecture at a high level:
A complex user message arrives via any channel (Telegram, API, web). The coordinator evaluates whether the task requires multiple agents or can be handled by a single agent.
The coordinator breaks the objective into discrete subtasks. Each subtask gets an id, an objective, a requiredAgentType, a priority (1-5), and optionally a list of dependencies (other task IDs that must finish first).
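A subtask of this shape can be sketched in TypeScript. The interface below is illustrative and may not match OpenSentinel's internal types exactly:

```typescript
// Sketch of a subtask record as described above; field names beyond those
// listed in the text are assumptions.
type AgentType = 'research' | 'coding' | 'writing' | 'analysis' | 'osint';

interface Subtask {
  id: string;
  objective: string;
  requiredAgentType: AgentType;
  priority: 1 | 2 | 3 | 4 | 5;
  dependencies?: string[]; // other task IDs that must finish first
}

const subtasks: Subtask[] = [
  { id: 'research', objective: 'Gather competitor data', requiredAgentType: 'research', priority: 1 },
  { id: 'analyze', objective: 'Compare pricing', requiredAgentType: 'analysis', priority: 2, dependencies: ['research'] },
];
```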
The coordinator spawns agents via spawnAgent(). Each agent is queued into a BullMQ job on the sentinel-agents queue with its own token budget (default 50,000) and time budget (default 1 hour). Up to 3 agent workers run concurrently.
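Applying the default budgets at spawn time might look like the sketch below. This is not the library's actual implementation; the payload shape and helper name are assumptions:

```typescript
// Illustrative sketch of how spawnAgent() might build a job payload for the
// sentinel-agents queue, filling in the default budgets when none are given.
interface AgentJob {
  userId: string;
  type: string;
  objective: string;
  tokenBudget: number;   // tokens the agent may consume
  timeBudgetMs: number;  // wall-clock limit
}

function buildAgentJob(opts: {
  userId: string;
  type: string;
  objective: string;
  tokenBudget?: number;
  timeBudgetMs?: number;
}): AgentJob {
  return {
    userId: opts.userId,
    type: opts.type,
    objective: opts.objective,
    tokenBudget: opts.tokenBudget ?? 50_000,      // default token budget
    timeBudgetMs: opts.timeBudgetMs ?? 3_600_000, // default: 1 hour
  };
}
```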
Each workflow creates a SharedContext backed by Redis. Agents store artifacts, findings, decisions, and references as they work. Any agent in the workflow can read entries from any other agent.
Agents communicate in real time via Redis pub/sub channels. Message types include request, response, broadcast, handoff, status_update, and notification. Agents can request assistance from a specific agent type or hand off tasks entirely.
When all subtasks complete (or fail), the coordinator aggregates outputs from the taskResults map, stores the final result in the shared context, and persists important findings/decisions to long-term vector memory.
```
User Request
     |
     v
Task Coordinator ——> Decompose into subtasks
     |
     |—— Research Agent —— web_search, browse_url
     |—— Coding Agent ———— write_file, execute_command
     |—— Writing Agent ——— read_file, write_file
     |—— Analysis Agent —— read_file, search_files
     |—— OSINT Agent ————— osint_search, osint_graph
     |
     v
SharedContext (Redis key-value store)
AgentMessenger (Redis pub/sub)
     |
     v
Merged Result ——> Long-term Memory
```
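The final aggregation step can be sketched as a merge over the per-task results. The field names below are illustrative, not the coordinator's actual types:

```typescript
// Sketch of aggregating the taskResults map into one final result.
// Shapes are assumptions for illustration.
interface TaskResult {
  taskId: string;
  summary: string;
  success: boolean;
}

function aggregateResults(taskResults: Map<string, TaskResult>) {
  const results = [...taskResults.values()];
  return {
    // The workflow succeeds only if every subtask succeeded
    success: results.every(r => r.success),
    summaries: results.map(r => `${r.taskId}: ${r.summary}`),
  };
}
```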
Agent Collaboration
Three components enable agents to work together effectively:
Shared Context
A Redis-backed key-value store accessible by all agents in a workflow. Each entry has a type (artifact, finding, decision, reference, state, error, metadata), tags for filtering, and the ID of the agent that created it.
```typescript
// Store a finding from the Research Agent
await sharedContext.recordFinding('competitor-pricing', {
  summary: 'Competitor A charges $49/mo for the base tier',
  confidence: 'high',
  sources: ['https://competitor-a.com/pricing'],
}, { agentId, agentType: 'research' });

// Analysis Agent reads it later
const findings = await sharedContext.query({
  type: 'finding',
  tags: ['competitor-pricing'],
});
```
Agent Messenger
A pub/sub system built on Redis channels. Each agent subscribes to three channels: its own direct channel, its agent-type channel, and the global broadcast channel. Message types include:
| Message Type | Description |
|---|---|
| request | Request assistance or information from another agent |
| response | Response to a request (paired via correlation ID) |
| broadcast | Broadcast to all agents in the workflow |
| handoff | Hand off a task to another agent entirely |
| status_update | Progress notification (non-blocking) |
| notification | Non-blocking informational message |
| heartbeat | Agent health check (expires after 10 seconds) |
```typescript
// Request help from the Coding Agent
const response = await messenger.request(codingAgentId, 'review_code', {
  file: 'src/core/brain.ts',
  focus: 'performance',
}, { timeout: 30000 });

// Broadcast a status update to all agents
await messenger.sendStatusUpdate({
  state: 'working',
  progress: 65,
  currentTask: 'Analyzing pricing data',
});
```
Task Coordinator
The orchestration layer that manages the full lifecycle. Key features:
- Dependency tracking: Tasks with `dependencies` are marked as `blocked` until those dependencies complete. Tasks without dependencies start immediately.
- Parallel execution: Up to 5 tasks run concurrently (`MAX_CONCURRENT_TASKS`). The coordinator fills available slots with the highest-priority ready tasks.
- Retry logic: Failed tasks are retried up to 2 times (`DEFAULT_MAX_RETRIES`) before being marked as failed.
- Deadlock detection: If no tasks are running or ready but some remain pending, the coordinator identifies failed dependencies and circular references.
- Three execution strategies: `sequential` (one at a time), `parallel` (all independent tasks at once), and `adaptive` (dynamically adjusted based on results).
Configuration
The sub-agent system requires the full platform (PostgreSQL + Redis) because agents are queued via BullMQ and share state via Redis.
Environment Variables
| Variable | Description | Default |
|---|---|---|
| CLAUDE_API_KEY | Required for agent Claude API calls | — |
| REDIS_URL | Redis connection for BullMQ queue, shared context, and messaging | redis://localhost:6379 |
| DATABASE_URL | PostgreSQL connection for agent records and progress tracking | — |
Programmatic Configuration
When using OpenSentinel as a library, configure the agent system via configure():
```typescript
import { configure } from 'opensentinel';

configure({
  CLAUDE_API_KEY: process.env.CLAUDE_API_KEY,
  DATABASE_URL: 'postgresql://user:pass@localhost:5445/opensentinel',
  REDIS_URL: 'redis://localhost:6379',
});
```
Agent Defaults
| Setting | Default | Description |
|---|---|---|
| Token budget per agent | 50,000 | Maximum tokens an agent can consume before stopping |
| Time budget per agent | 1 hour | Maximum wall-clock time before an agent is cancelled |
| Max steps per agent | 20 | Maximum Claude API call iterations per agent run |
| Worker concurrency | 3 | Number of agents that can run simultaneously |
| Max concurrent tasks | 5 | Maximum tasks the coordinator runs in parallel |
| Task timeout | 30 minutes | Default timeout for a single coordinated task |
| Max retries | 2 | Number of times a failed task is retried |
Using Agents via Library
Complex tasks are automatically routed to sub-agents when you use chatWithTools(). You can also spawn agents directly for fine-grained control.
Automatic Routing
```typescript
import { chatWithTools } from 'opensentinel';

// Complex task automatically routed to sub-agents
const result = await chatWithTools([
  { role: 'user', content: 'Research our competitors, analyze their pricing, and draft a report' }
]);

console.log(result.text);

// The coordinator splits this into:
// 1. Research Agent -> gather competitor information
// 2. Analysis Agent -> analyze pricing data (depends on #1)
// 3. Writing Agent -> draft the report (depends on #2)
```
Spawning Agents Directly
```typescript
import { spawnAgent, getAgent } from 'opensentinel/agents';

// Spawn a research agent with a custom budget
const agentId = await spawnAgent({
  userId: 'user_123',
  type: 'research',
  objective: 'Find the top 5 competitors in the AI assistant market',
  tokenBudget: 30000,
  timeBudgetMs: 600000, // 10 minutes
});

// Check progress
const agent = await getAgent(agentId);
console.log(agent.status);   // 'pending' | 'running' | 'completed' | 'failed'
console.log(agent.progress); // Array of step descriptions
console.log(agent.result);   // Final result when completed
```
Running Workflows
Use the workflow API for multi-agent pipelines with dependency management:
```typescript
import { createWorkflow, runWorkflow } from 'opensentinel/agents';

const workflow = createWorkflow(
  'Competitor Report',
  'Research competitors and produce a pricing analysis report',
  [
    {
      id: 'research',
      name: 'Competitor Research',
      description: 'Gather competitor data',
      objective: 'Research top 5 competitors and their pricing models',
      agentType: 'research',
      priority: 1,
    },
    {
      id: 'analyze',
      name: 'Pricing Analysis',
      description: 'Analyze pricing data',
      objective: 'Compare pricing tiers and identify market positioning',
      agentType: 'analysis',
      dependencies: ['research'],
      priority: 2,
    },
    {
      id: 'write',
      name: 'Draft Report',
      description: 'Write the final report',
      objective: 'Produce a comprehensive pricing analysis report',
      agentType: 'writing',
      dependencies: ['analyze'],
      priority: 3,
    },
  ],
  { strategy: 'parallel' }
);

const coordinator = await runWorkflow(workflow, 'user_123');

// Monitor progress
coordinator.on('task:completed', ({ taskId, result }) => {
  console.log(`Task ${taskId} done:`, result.summary);
});

coordinator.on('workflow:completed', ({ results }) => {
  console.log('All done!', results);
});
```
Convenience helpers are also available: `researchReport(topic)`, `codeReview(description)`, and `parallelResearch(topics)`. Import them from `opensentinel/agents`.
Domain Experts
In addition to the five agent types, OpenSentinel supports 15 domain expert personalities. When activated, the domain expert modifies the system prompt, response style, preferred tools, and terminology for a specific field. You can activate a domain expert for any conversation or agent.
| Expert | Domain | Response Style |
|---|---|---|
| Coding | Software development, architecture, debugging | Technical, detailed, with code examples |
| Legal | Contract review, compliance, regulations | Professional, precise, with citations |
| Medical | Health information, clinical analysis | Academic, comprehensive, evidence-based |
| Finance | Investment, accounting, financial analysis | Professional, data-driven, with tables |
| Writing | Content creation, editing, copywriting | Moderate formality, creative, with examples |
| Research | Academic research, literature review | Academic, comprehensive, with citations |
| Marketing | Campaigns, brand strategy, analytics | Professional, concise, data-informed |
| Design | UI/UX, visual design, prototyping | Technical, moderate detail, with diagrams |
| Data Science | ML, statistics, data pipelines | Technical, detailed, with code and tables |
| Security | Cybersecurity, auditing, compliance | Technical, precise, risk-focused |
| DevOps | CI/CD, infrastructure, monitoring | Technical, concise, with code |
| Product | Product strategy, roadmaps, prioritization | Professional, moderate, with bullet points |
| HR | Recruiting, employee relations, policy | Professional, empathetic, structured |
| Education | Teaching, curriculum, learning design | Moderate, detailed, with examples |
| General | Default all-purpose assistant | Casual, moderate detail, flexible format |
```typescript
import { chat } from 'opensentinel';

// Activate the Finance domain expert for this conversation
const result = await chat([
  { role: 'user', content: 'Analyze Q3 revenue trends for SaaS companies' }
], {
  domainExpert: 'finance',
});
```