Getting Started

Get OpenSentinel running and send your first AI-powered message. Choose the path that fits your use case: NPM library, full platform, or ready-made template.

Fastest path to a chat: Run npm install opensentinel, add your Claude API key, and call chat(). No database, no Docker. Jump to library setup →

Prerequisites

| Requirement | Version | Required For |
| --- | --- | --- |
| Bun or Node.js | Bun 1.0+ / Node 20+ | Runtime |
| Docker (optional) | 20+ | Alternative deployment method |
| Claude API Key | — | Always required |
💡 Library mode needs only a Claude API key. PostgreSQL and Redis are needed when running the full platform with persistent memory and task scheduling. Start simple and add infrastructure as needed.

Check your versions:

$ bun --version
# Should output 1.x.x or higher
$ node --version
# Should output v20.x.x or higher

Option A: Use as an NPM Library

The simplest way to use OpenSentinel. Add AI capabilities to any existing project.

Install the package
$ npm install opensentinel
Configure and chat

Create a file and add your API key:

app.ts
import { configure, chat } from 'opensentinel';

configure({
  CLAUDE_API_KEY: process.env.CLAUDE_API_KEY,
});

const response = await chat([
  { role: 'user', content: 'Hello! What can you do?' }
]);

console.log(response.text);
Run it
Terminal
$ CLAUDE_API_KEY=sk-ant-... npx tsx app.ts
That's it. You now have a working AI assistant with access to Claude's full capabilities. Add chatWithTools() to enable tool use (shell, browser, files, search, etc.).

Option B: Deploy the Full Platform

Run OpenSentinel with all services: advanced RAG memory (HyDE, re-ranking, caching), task scheduler, web dashboard, and all messaging channels.

One-liner install (recommended)

This installs Bun, PostgreSQL, Redis, and OpenSentinel, then runs the interactive setup wizard:

Terminal
$ curl -fsSL https://opensentinel.ai/install.sh | bash

The wizard will prompt you for your API keys, install PostgreSQL 16 + pgvector and Redis, run database migrations, and set up a system service.

Or install manually
Terminal
$ bun install -g opensentinel
$ opensentinel setup     # interactive wizard
💡 Only CLAUDE_API_KEY is required. Add TELEGRAM_BOT_TOKEN, DISCORD_BOT_TOKEN, etc. to enable specific channels. OpenSentinel gracefully skips any unconfigured service.
Or from source
Terminal
$ git clone https://github.com/dsiemon2/OpenSentinel.git
$ cd OpenSentinel
$ bun install
$ cp .env.example .env   # add your API keys
$ bun run db:migrate     # set up database
$ bun run start          # launch OpenSentinel
OpenSentinel is running! Open http://localhost:8030 to access the web dashboard. All configured channels (Telegram, Discord, Slack, etc.) are now listening.

Option C: Start from a Template

Pick a use case, clone the template, and customize. Each template is a standalone project under 200 lines.

Terminal
$ git clone https://github.com/dsiemon2/OpenSentinel.git
$ cd OpenSentinel/templates/ai-sales-agent  # pick any template
$ bun install
$ CLAUDE_API_KEY=sk-ant-... bun run start

Available Templates (20)

| Template | Use Case |
| --- | --- |
| ai-web-monitor | Monitor web pages for changes with intelligent alerts |
| ai-sales-agent | Lead research, scoring, and outreach drafting |
| ai-recruiter | Candidate screening, ranking, and interview prep |
| ai-devops-agent | Server monitoring, log analysis, incident response |
| ai-trading-researcher | Market research, sentiment scoring, trend detection |
| ai-customer-support | Ticket triage, response drafting, escalation |
| ai-content-creator | Multi-platform content from a single brief |
| ai-security-monitor | Auth log analysis, network audit, file integrity |
| ai-code-reviewer | PR review, bug detection, test coverage |
| ai-data-analyst | Dataset profiling, insights, anomaly detection |
| ai-email-assistant | Inbox triage, reply drafting, action extraction |
| ai-meeting-assistant | Transcript summaries, action items, weekly digests |
| ai-competitor-tracker | Product monitoring, pricing changes, hiring signals |
| ai-seo-optimizer | Page scoring, meta optimization, content outlines |
| ai-legal-reviewer | Contract review, risk flagging, amendments |
| ai-social-listener | Brand monitoring, sentiment, trend detection |
| ai-documentation-writer | Auto-generate API refs, guides, changelogs |
| ai-onboarding-agent | Personalized onboarding plans, Q&A, tracking |
| ai-inventory-manager | Stock tracking, demand forecasting, PO generation |
| ai-real-estate-analyst | Property analysis, market research, ROI estimation |

Configuration

Environment Variables

OpenSentinel uses environment variables for configuration. Only CLAUDE_API_KEY is always required — everything else enables optional features.

| Variable | Description | Required |
| --- | --- | --- |
| CLAUDE_API_KEY | Anthropic API key | Always |
| DATABASE_URL | PostgreSQL connection string | Full platform |
| REDIS_URL | Redis connection string | Full platform |
| OPENAI_API_KEY | For embeddings, STT, image generation | For memory/voice |
| TELEGRAM_BOT_TOKEN | Telegram bot token | For Telegram |
| DISCORD_BOT_TOKEN | Discord bot token | For Discord |
| SLACK_BOT_TOKEN | Slack bot token | For Slack |
| ELEVENLABS_API_KEY | Text-to-speech | For voice output |
| TWILIO_ACCOUNT_SID | SMS and phone calls | For Twilio |
| LLM_PROVIDER | Default LLM provider (anthropic, gemini, openrouter, groq, mistral, ollama) | No (default: anthropic) |
| GEMINI_API_KEY | Google Gemini API key (1M context, vision, tool use) | For Gemini |
| OPENROUTER_API_KEY | OpenRouter API key for multi-model access | For OpenRouter |
| GROQ_API_KEY | Groq API key for fast inference | For Groq |
| OLLAMA_ENABLED | Enable local Ollama models | For local models |
| TUNNEL_ENABLED | Auto-start tunnel for public URL | For tunnels |
| AUTONOMY_LEVEL | Agent autonomy (readonly/supervised/autonomous) | No (default: autonomous) |
| PROMETHEUS_ENABLED | Enable Prometheus metrics at /metrics | For monitoring |
| MATRIX_ENABLED | Enable Matrix messaging channel | For Matrix |
| PAIRING_ENABLED | Enable device pairing auth | For pairing |
| GATEWAY_TOKEN | Shared secret for web UI/API auth (disabled = open access) | For remote access |
| FIELD_ENCRYPTION_KEY | AES-256 key for encrypting sensitive fields at rest | For SOC 2 |
| HYDE_ENABLED | HyDE: generate ideal answer before searching memory | For better recall |
| RERANK_ENABLED | LLM re-ranking: score results 0-10 for true relevance | For precision |
| CONTEXTUAL_QUERY_ENABLED | Rewrite queries using conversation context | For follow-ups |
| MULTISTEP_RAG_ENABLED | Multi-step retrieval: detect and fill context gaps | For thoroughness |
| RETRIEVAL_CACHE_ENABLED | Redis cache for retrieval results (1h TTL) | For speed |
💡 Advanced RAG: Enable HYDE_ENABLED + RERANK_ENABLED for best memory recall quality, or enable all five RAG flags for the full pipeline. See the RAG configuration guide for details.
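Putting the always-required and full-platform rows together, a minimal .env for a platform deployment might look like the sketch below. The connection strings, database name, and token values are placeholders for illustration, not working values:

```
# Required in every mode
CLAUDE_API_KEY=sk-ant-...

# Required for the full platform
DATABASE_URL=postgresql://user:pass@localhost:5432/opensentinel
REDIS_URL=redis://localhost:6379

# Optional: enable a messaging channel
TELEGRAM_BOT_TOKEN=123456:ABC...
```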

Library Configuration (Programmatic)

When using OpenSentinel as a library, use configure() instead of environment variables:

TypeScript
import { configure } from 'opensentinel';

// Minimal
configure({ CLAUDE_API_KEY: 'sk-ant-...' });

// With persistent memory
configure({
  CLAUDE_API_KEY: 'sk-ant-...',
  OPENAI_API_KEY: 'sk-proj-...',
  DATABASE_URL: 'postgresql://user:pass@localhost:5432/db',
});
ℹ️ Lazy initialization. OpenSentinel doesn't connect to any service at import time. Database, Redis, and API connections are created only when you first use a feature that needs them.
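To make the lazy pattern concrete, here is a small self-contained sketch of the idea. It is illustrative only, not OpenSentinel's actual internals; the `Config` shape and `getDb` helper are invented for this example:

```typescript
// Sketch of lazy initialization: configure() only stores settings,
// and the "connection" is created on first use, not at import time.
type Config = { DATABASE_URL?: string };

let config: Config = {};
let dbConnection: { url: string } | null = null;

function configure(c: Config): void {
  config = c; // just stores settings; opens nothing
}

function getDb(): { url: string } {
  if (!dbConnection) {
    if (!config.DATABASE_URL) throw new Error('DATABASE_URL not configured');
    dbConnection = { url: config.DATABASE_URL }; // "connect" on first use
  }
  return dbConnection;
}

configure({ DATABASE_URL: 'postgresql://localhost:5432/db' });
console.log(dbConnection === null); // → true: configure() opened no connection
getDb();
console.log(dbConnection !== null); // → true: first use created the connection
```

The practical upshot is that importing and configuring the library in a serverless function or CLI costs nothing until you actually call a feature that needs the database.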

Advanced RAG Configuration

OpenSentinel's memory system includes five retrieval enhancements that dramatically improve recall quality. As of v3.0.0, the full Advanced RAG pipeline is enabled by default (HyDE, Re-ranking, Contextual Query Rewriting, Multi-Step RAG, and Retrieval Caching). You can disable individual stages if needed.

.env
# Rewrite queries using conversation context (resolves "he", "that", etc.)
CONTEXTUAL_QUERY_ENABLED=true

# Generate a hypothetical answer before searching (better semantic matching)
HYDE_ENABLED=true

# LLM scores each result 0-10 for true relevance (eliminates false positives)
RERANK_ENABLED=true
RERANK_MIN_SCORE=3

# Detect gaps in context and run follow-up searches
MULTISTEP_RAG_ENABLED=true
MULTISTEP_MAX_STEPS=2

# Cache retrieval results in Redis (1h TTL, ~60% faster on repeated queries)
RETRIEVAL_CACHE_ENABLED=true

Recommended Setups

| Goal | Enable | Tradeoff |
| --- | --- | --- |
| Best quality | HYDE_ENABLED + RERANK_ENABLED | +1-2s latency (LLM calls) |
| Conversational | CONTEXTUAL_QUERY_ENABLED | Minimal overhead |
| Speed | RETRIEVAL_CACHE_ENABLED | Requires Redis |
| Full pipeline | All five flags | Max quality + caching |
ℹ️ Pipeline flow: Query → Contextual Rewrite → HyDE → Cache Check → Hybrid Search → Re-rank → Cache Store → Multi-Step gap fill. Disabled stages are skipped automatically.
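The skip-disabled-stages behavior can be sketched in a few lines of TypeScript. The stage names and flag mapping mirror the pipeline-flow note, but the code is an illustration, not OpenSentinel's implementation:

```typescript
// Each pipeline stage is gated by a feature flag; disabled stages are
// skipped automatically while the order of enabled stages is preserved.
type Stage = { name: string; enabled: boolean };

function activeStages(stages: Stage[]): string[] {
  return stages.filter((s) => s.enabled).map((s) => s.name);
}

const pipeline: Stage[] = [
  { name: 'contextual-rewrite', enabled: true },  // CONTEXTUAL_QUERY_ENABLED
  { name: 'hyde', enabled: true },                // HYDE_ENABLED
  { name: 'cache-check', enabled: false },        // RETRIEVAL_CACHE_ENABLED off
  { name: 'hybrid-search', enabled: true },       // always on
  { name: 'rerank', enabled: true },              // RERANK_ENABLED
  { name: 'cache-store', enabled: false },        // RETRIEVAL_CACHE_ENABLED off
  { name: 'multistep-gap-fill', enabled: true },  // MULTISTEP_RAG_ENABLED
];

console.log(activeStages(pipeline));
// → [ 'contextual-rewrite', 'hyde', 'hybrid-search', 'rerank', 'multistep-gap-fill' ]
```

With caching disabled, both cache stages drop out of the run while every other stage executes in its usual position.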

Send Your First Message

Web Dashboard

Open http://localhost:8030 in your browser. Type a message and press Enter.

REST API

Terminal
$ curl -X POST http://localhost:8030/api/ask \
    -H "Content-Type: application/json" \
    -d '{"message": "Hello! What can you do?"}'

Telegram

Open Telegram, find your bot, and send /start.

Code (Library)

TypeScript
import { configure, chat, chatWithTools } from 'opensentinel';

configure({ CLAUDE_API_KEY: process.env.CLAUDE_API_KEY });

// Simple conversation
const res = await chat([
  { role: 'user', content: 'Summarize the top 3 HN stories' }
]);
console.log(res.text);

// With tools (browser, shell, files, search, etc.)
const result = await chatWithTools([
  { role: 'user', content: 'Research my competitors and create a PDF report' }
]);
console.log(result.text);

What's Next