# Getting Started
Get OpenSentinel running and send your first AI-powered message. Choose the path that fits your use case: NPM library, full platform, or ready-made template.
**Fastest start:** `npm install opensentinel`, add your Claude API key, and call `chat()`. No database, no Docker. Jump to library setup →

## Prerequisites
| Requirement | Version | Required For |
|---|---|---|
| Bun or Node.js | Bun 1.0+ / Node 20+ | Runtime |
| Docker (optional) | 20+ | Alternative deployment method |
| Claude API Key | — | Always required |
Check your versions:
```bash
$ bun --version
# Should output 1.x.x or higher
```
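If you prefer to check programmatically (for example in a setup script), a small sketch in TypeScript; the minimum versions mirror the table above:

```typescript
// Returns true when a dotted version string meets a minimum major version.
// Works for strings like "1.1.42" (bun --version) or process.versions.node.
function meetsMinimum(version: string, minMajor: number): boolean {
  const major = Number.parseInt(version.split(".")[0], 10);
  return Number.isFinite(major) && major >= minMajor;
}

// Node 20+ is required when not using Bun.
console.log(meetsMinimum(process.versions.node, 20));
```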
## Option A: Use as an NPM Library
The simplest way to use OpenSentinel. Add AI capabilities to any existing project.
```bash
$ npm install opensentinel
```
Create a file (for example, `app.ts`) and add your API key:

```ts
import { configure, chat } from 'opensentinel';

configure({
  CLAUDE_API_KEY: process.env.CLAUDE_API_KEY,
});

const response = await chat([
  { role: 'user', content: 'Hello! What can you do?' }
]);

console.log(response.text);
```
```bash
$ CLAUDE_API_KEY=sk-ant-... npx tsx app.ts
```
Use `chatWithTools()` instead of `chat()` to enable tool use (shell, browser, files, search, etc.).

## Option B: Deploy the Full Platform
Run OpenSentinel with all services: advanced RAG memory (HyDE, re-ranking, caching), task scheduler, web dashboard, and all messaging channels.
The one-line installer sets up Bun, PostgreSQL, Redis, and OpenSentinel, then runs the interactive setup wizard:

```bash
$ curl -fsSL https://opensentinel.ai/install.sh | bash
```

The wizard prompts for your API keys, installs PostgreSQL 16 + pgvector and Redis, runs database migrations, and sets up a system service.
Alternatively, install globally and run the wizard yourself:

```bash
$ bun install -g opensentinel
$ opensentinel setup   # interactive wizard
```
`CLAUDE_API_KEY` is required. Add `TELEGRAM_BOT_TOKEN`, `DISCORD_BOT_TOKEN`, etc. to enable specific channels. OpenSentinel gracefully skips any unconfigured service.

To run from source instead:

```bash
$ git clone https://github.com/dsiemon2/OpenSentinel.git
$ cd OpenSentinel
$ bun install
$ cp .env.example .env   # add your API keys
$ bun run db:migrate     # set up database
$ bun run start          # launch OpenSentinel
```
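A minimal `.env` for the full platform might look like the fragment below. All values are placeholders, and the database name is illustrative; see the Configuration section for the complete variable list:

```bash
# Required
CLAUDE_API_KEY=sk-ant-...

# Full platform services (set up by the install script)
DATABASE_URL=postgresql://user:pass@localhost:5432/opensentinel
REDIS_URL=redis://localhost:6379

# Optional channels — anything left unset is skipped gracefully
TELEGRAM_BOT_TOKEN=...
```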
## Option C: Start from a Template
Pick a use case, clone the template, and customize. Each template is a standalone project under 200 lines.
```bash
$ git clone https://github.com/dsiemon2/OpenSentinel.git
$ cd OpenSentinel/templates/ai-sales-agent   # pick any template
$ bun install
$ CLAUDE_API_KEY=sk-ant-... bun run start
```
### Available Templates (20)
| Template | Use Case |
|---|---|
| `ai-web-monitor` | Monitor web pages for changes with intelligent alerts |
| `ai-sales-agent` | Lead research, scoring, and outreach drafting |
| `ai-recruiter` | Candidate screening, ranking, and interview prep |
| `ai-devops-agent` | Server monitoring, log analysis, incident response |
| `ai-trading-researcher` | Market research, sentiment scoring, trend detection |
| `ai-customer-support` | Ticket triage, response drafting, escalation |
| `ai-content-creator` | Multi-platform content from a single brief |
| `ai-security-monitor` | Auth log analysis, network audit, file integrity |
| `ai-code-reviewer` | PR review, bug detection, test coverage |
| `ai-data-analyst` | Dataset profiling, insights, anomaly detection |
| `ai-email-assistant` | Inbox triage, reply drafting, action extraction |
| `ai-meeting-assistant` | Transcript summaries, action items, weekly digests |
| `ai-competitor-tracker` | Product monitoring, pricing changes, hiring signals |
| `ai-seo-optimizer` | Page scoring, meta optimization, content outlines |
| `ai-legal-reviewer` | Contract review, risk flagging, amendments |
| `ai-social-listener` | Brand monitoring, sentiment, trend detection |
| `ai-documentation-writer` | Auto-generate API refs, guides, changelogs |
| `ai-onboarding-agent` | Personalized onboarding plans, Q&A, tracking |
| `ai-inventory-manager` | Stock tracking, demand forecasting, PO generation |
| `ai-real-estate-analyst` | Property analysis, market research, ROI estimation |
## Configuration

### Environment Variables
OpenSentinel uses environment variables for configuration. Only CLAUDE_API_KEY is always required — everything else enables optional features.
| Variable | Description | Required |
|---|---|---|
| `CLAUDE_API_KEY` | Anthropic API key | Always |
| `DATABASE_URL` | PostgreSQL connection string | Full platform |
| `REDIS_URL` | Redis connection string | Full platform |
| `OPENAI_API_KEY` | Embeddings, STT, image generation | For memory/voice |
| `TELEGRAM_BOT_TOKEN` | Telegram bot token | For Telegram |
| `DISCORD_BOT_TOKEN` | Discord bot token | For Discord |
| `SLACK_BOT_TOKEN` | Slack bot token | For Slack |
| `ELEVENLABS_API_KEY` | Text-to-speech | For voice output |
| `TWILIO_ACCOUNT_SID` | SMS and phone calls | For Twilio |
| `LLM_PROVIDER` | Default LLM provider (`anthropic`, `gemini`, `openrouter`, `groq`, `mistral`, `ollama`) | No (default: `anthropic`) |
| `GEMINI_API_KEY` | Google Gemini API key (1M context, vision, tool use) | For Gemini |
| `OPENROUTER_API_KEY` | OpenRouter API key for multi-model access | For OpenRouter |
| `GROQ_API_KEY` | Groq API key for fast inference | For Groq |
| `OLLAMA_ENABLED` | Enable local Ollama models | For local models |
| `TUNNEL_ENABLED` | Auto-start tunnel for public URL | For tunnels |
| `AUTONOMY_LEVEL` | Agent autonomy (`readonly`/`supervised`/`autonomous`) | No (default: `autonomous`) |
| `PROMETHEUS_ENABLED` | Enable Prometheus metrics at `/metrics` | For monitoring |
| `MATRIX_ENABLED` | Enable Matrix messaging channel | For Matrix |
| `PAIRING_ENABLED` | Enable device pairing auth | For pairing |
| `GATEWAY_TOKEN` | Shared secret for web UI/API auth (disabled = open access) | For remote access |
| `FIELD_ENCRYPTION_KEY` | AES-256 key for encrypting sensitive fields at rest | For SOC 2 |
| `HYDE_ENABLED` | HyDE: generate an ideal answer before searching memory | For better recall |
| `RERANK_ENABLED` | LLM re-ranking: score results 0-10 for true relevance | For precision |
| `CONTEXTUAL_QUERY_ENABLED` | Rewrite queries using conversation context | For follow-ups |
| `MULTISTEP_RAG_ENABLED` | Multi-step retrieval: detect and fill context gaps | For thoroughness |
| `RETRIEVAL_CACHE_ENABLED` | Redis cache for retrieval results (1h TTL) | For speed |
Enable `HYDE_ENABLED` + `RERANK_ENABLED` for best memory recall quality, or enable all five RAG flags for the full pipeline. See the RAG configuration guide for details.

### Library Configuration (Programmatic)
When using OpenSentinel as a library, use configure() instead of environment variables:
```ts
import { configure } from 'opensentinel';

// Minimal
configure({ CLAUDE_API_KEY: 'sk-ant-...' });

// With persistent memory
configure({
  CLAUDE_API_KEY: 'sk-ant-...',
  OPENAI_API_KEY: 'sk-proj-...',
  DATABASE_URL: 'postgresql://user:pass@localhost:5432/db',
});
```
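When wiring `configure()` into a larger app, it helps to fail fast on missing keys rather than discover them at the first API call. A generic helper sketch (`requireEnv` is not part of the OpenSentinel API, just an illustrative pattern):

```typescript
// Reads a required environment variable, throwing a clear error when unset.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: configure({ CLAUDE_API_KEY: requireEnv('CLAUDE_API_KEY') });
```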
### Advanced RAG Configuration
OpenSentinel's memory system includes five retrieval enhancements that dramatically improve recall quality. As of v3.0.0, the full Advanced RAG pipeline is enabled by default (HyDE, Re-ranking, Contextual Query Rewriting, Multi-Step RAG, and Retrieval Caching). You can disable individual stages if needed.
```bash
# Rewrite queries using conversation context (resolves "he", "that", etc.)
CONTEXTUAL_QUERY_ENABLED=true

# Generate a hypothetical answer before searching (better semantic matching)
HYDE_ENABLED=true

# LLM scores each result 0-10 for true relevance (eliminates false positives)
RERANK_ENABLED=true
RERANK_MIN_SCORE=3

# Detect gaps in context and run follow-up searches
MULTISTEP_RAG_ENABLED=true
MULTISTEP_MAX_STEPS=2

# Cache retrieval results in Redis (1h TTL, ~60% faster on repeated queries)
RETRIEVAL_CACHE_ENABLED=true
```
#### Recommended Setups
| Goal | Enable | Tradeoff |
|---|---|---|
| Best quality | `HYDE_ENABLED` + `RERANK_ENABLED` | +1-2s latency (LLM calls) |
| Conversational | `CONTEXTUAL_QUERY_ENABLED` | Minimal overhead |
| Speed | `RETRIEVAL_CACHE_ENABLED` | Requires Redis |
| Full pipeline | All five flags | Max quality + caching |
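Conceptually, the flags above compose into a single retrieval call: rewrite the query with context, expand it with a hypothetical answer, search, then re-rank and filter. The sketch below illustrates that shape with stubbed stages; it is not OpenSentinel's actual implementation, and all function names are hypothetical:

```typescript
type Scored = { text: string; score: number };

// Stubbed stages; in the real pipeline these involve LLM calls and Redis.
const rewriteWithContext = (q: string, history: string[]) =>       // CONTEXTUAL_QUERY_ENABLED
  history.length ? `${history[history.length - 1]} ${q}` : q;
const hyde = (q: string) => `hypothetical answer to: ${q}`;        // HYDE_ENABLED
const vectorSearch = (_q: string): Scored[] => [
  { text: "relevant memory", score: 0 },
  { text: "noise", score: 0 },
];
const rerank = (results: Scored[]): Scored[] =>                    // RERANK_ENABLED
  results.map(r => ({ ...r, score: r.text.includes("relevant") ? 9 : 1 }));

const RERANK_MIN_SCORE = 3;

function retrieve(query: string, history: string[]): Scored[] {
  const contextual = rewriteWithContext(query, history);   // resolve "he", "that", etc.
  const searchText = hyde(contextual);                     // search on the ideal answer
  const candidates = vectorSearch(searchText);
  // Keep only results scoring at or above RERANK_MIN_SCORE.
  return rerank(candidates).filter(r => r.score >= RERANK_MIN_SCORE);
}
```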
## Send Your First Message

### Web Dashboard
Open http://localhost:8030 in your browser. Type a message and press Enter.
### REST API
```bash
$ curl -X POST http://localhost:8030/api/ask \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello! What can you do?"}'
```
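The same call from TypeScript with `fetch`. The helper below only builds the request; the `/api/ask` path, port, and JSON body shape are taken from the curl example above:

```typescript
// Builds the POST request for the /api/ask endpoint.
function buildAskRequest(message: string, base = "http://localhost:8030") {
  return {
    url: `${base}/api/ask`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message }),
    },
  };
}

// With a running instance:
// const { url, init } = buildAskRequest("Hello! What can you do?");
// const res = await fetch(url, init);
// console.log(await res.json());
```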
### Telegram
Open Telegram, find your bot, and send /start.
### Code (Library)
```ts
import { configure, chat, chatWithTools } from 'opensentinel';

configure({ CLAUDE_API_KEY: process.env.CLAUDE_API_KEY });

// Simple conversation
const res = await chat([
  { role: 'user', content: 'Summarize the top 3 HN stories' }
]);
console.log(res.text);

// With tools (browser, shell, files, search, etc.)
const result = await chatWithTools([
  { role: 'user', content: 'Research my competitors and create a PDF report' }
]);
console.log(result.text);
```