Cognition Documentation

Complete reference for the zero-knowledge encrypted AI platform. Cognition combines AES-256-GCM client-side encryption with a beautiful chat interface, multi-provider AI support, and agent workflows.

01 Quick Start

Get Cognition running in under a minute. Choose your preferred method:

Docker

# Pull and run
docker pull cognitionai/cognition
docker compose up -d

# Open http://localhost:3000
# Create your vault with a strong password
# Add AI providers in Settings

From Source

# Clone the repository, then install dependencies
cd cognition-ai
npm install

# Development mode
npm run dev
# Open http://localhost:5173

# Production build
npm run build
node build

Hosted

# No install needed — use the live instance
https://app.cognitionai.tech

# 1. Create a vault (set a strong password)
# 2. Go to Settings → Add Provider
# 3. Enter your OpenAI/Anthropic API key
# 4. Start chatting — everything encrypted
First time? When you create a vault, your password derives a 256-bit encryption key via PBKDF2 (600,000 iterations). This takes 1-3 seconds — that's intentional. The key derivation makes brute-force attacks computationally expensive.

02 Docker Setup

Cognition ships as a single Docker container with zero external dependencies. SQLite stores all data in a persistent volume.

# docker-compose.yml
services:
  cognition:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - cognition-data:/app/data
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - PORT=3000
      - NODE_ENV=production
      - DATABASE_PATH=/app/data/cognition.db

volumes:
  cognition-data:

The extra_hosts mapping enables Ollama access from inside the container — if you run Ollama locally, Cognition auto-detects it at host.docker.internal:11434.

Environment Variables

| Variable | Default | Description |
|---|---|---|
| PORT | 3000 | Server port |
| NODE_ENV | development | Set to production for optimized builds |
| DATABASE_PATH | ./data/cognition.db | SQLite database file path |
| BODY_SIZE_LIMIT | 10485760 | Max request body size (10 MB) |
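A minimal sketch of how a Node server might read these variables with the documented defaults (the config object and its field names are assumptions, not Cognition's actual code):

```javascript
// Read the documented environment variables, falling back to their defaults.
// The `config` object shape is illustrative.
const config = {
  port: Number(process.env.PORT ?? 3000),
  nodeEnv: process.env.NODE_ENV ?? "development",
  databasePath: process.env.DATABASE_PATH ?? "./data/cognition.db",
  bodySizeLimit: Number(process.env.BODY_SIZE_LIMIT ?? 10_485_760), // 10 MB
};

console.log(config.databasePath);
```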

03 Architecture

Cognition follows a strict client-side encryption model. The server is architecturally a "dumb data store" — it stores and retrieves ciphertext blobs. It never has access to encryption keys, plaintext conversations, or decrypted API keys.

Cognition architecture diagram:

- Browser: Svelte 5 UI, Web Crypto API, PBKDF2 key derivation, AES-256-GCM encrypt/decrypt
- Server: stores and retrieves ciphertext, SSE proxy, SQLite + Drizzle
- AI backends: OpenAI API, Anthropic API, Ollama (local), any OpenAI-compatible endpoint

Data Flow

  1. User types a message in the browser
  2. Browser encrypts the message with AES-256-GCM using the in-memory master key
  3. Ciphertext is sent to the server via API and stored in SQLite
  4. For AI chat: browser sends decrypted API key + plaintext message to server
  5. Server proxies the request to the AI provider via SSE, streams response back
  6. Browser receives AI response, encrypts it, stores ciphertext on server
  7. API key exists in server memory only during the request, then is garbage collected
Important: During AI chat requests, the decrypted API key and message content exist transiently in server memory for the duration of the HTTP request. This is necessary to proxy the request to the AI provider. The plaintext is never written to disk or database.

04 Encryption

Cognition uses AES-256-GCM (Galois/Counter Mode) — the same authenticated encryption used in TLS 1.3, military communications, and hardware security modules. Combined with PBKDF2 key derivation, it provides both confidentiality and integrity.


Key Derivation

| Parameter | Value | Purpose |
|---|---|---|
| Algorithm | PBKDF2 | Password-based key derivation |
| Iterations | 600,000 | Brute-force resistance (~1-3s on modern hardware) |
| Hash | SHA-256 | Digest function |
| Salt | 32 bytes (random) | Prevents rainbow table attacks |
| Output | 256-bit CryptoKey | AES-256-GCM master key |
| Extractable | false | Raw key bytes cannot be read from JS |

Encryption Process

// Simplified encryption flow
1. Generate random 12-byte IV (crypto.getRandomValues)
2. Encrypt: AES-256-GCM(masterKey, IV, plaintext) → ciphertext
3. Encode: base64(IV) + ":" + base64(ciphertext)
4. Store encoded string on server

// Decryption (reverse)
1. Split on ":" → base64(IV), base64(ciphertext)
2. Decode both from base64
3. Decrypt: AES-256-GCM(masterKey, IV, ciphertext) → plaintext
Ciphertext format: Every encrypted value is stored as base64(12-byte-IV):base64(ciphertext). The IV is randomly generated per encryption operation, ensuring identical plaintext produces different ciphertext each time.

05 Key Management

Key Lifecycle

  1. Derivation: User enters password → PBKDF2(password, salt, 600000, SHA-256) → 256-bit CryptoKey
  2. Storage: CryptoKey held in Svelte store (JavaScript memory only). Never in localStorage, cookies, or IndexedDB
  3. Usage: Used for encrypt/decrypt operations via Web Crypto API
  4. Lock: CryptoKey reference set to null → garbage collected. All decrypted data cleared from memory
  5. Auto-lock: Configurable timeout (default 15 minutes). Every encrypt/decrypt operation resets the timer
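The lock and auto-lock steps can be sketched as follows (variable and function names are illustrative, not Cognition's actual store code):

```javascript
// The master key lives only in a variable; every crypto operation resets
// a countdown that drops the reference when it fires.
let masterKey = { /* CryptoKey placeholder */ };
let lockTimer = null;

function resetAutoLock(timeoutMs = 15 * 60 * 1000) { // 15-minute default
  clearTimeout(lockTimer);
  lockTimer = setTimeout(lock, timeoutMs);
}

function lock() {
  clearTimeout(lockTimer);
  masterKey = null; // drop the only reference, making it eligible for GC
}

resetAutoLock(); // called after every encrypt/decrypt (step 5)
lock();          // explicit lock (step 4) clears the key immediately
console.log(masterKey); // null
```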

Verification

On vault creation, the string "cognition-vault-test" is encrypted and stored. On subsequent unlocks, Cognition tries to decrypt this test string. Success = correct password. Failure = wrong password. The server never validates the password — the client does by attempting decryption.

06 AI Providers

Cognition supports multiple AI backends. Add as many as you need — each provider's API key is encrypted with your vault key before storage.

| Provider | Type | Config | Privacy |
|---|---|---|---|
| Ollama | ollama | Auto-detected at localhost:11434 | Local |
| OpenAI | openai | API key required | Cloud |
| Anthropic | anthropic | API key required | Cloud |
| Custom | openai-compatible | API key + base URL | Cloud |

Adding a Provider

  1. Open Settings (gear icon in sidebar)
  2. Click "Add Provider" and select the type
  3. Enter your API key (for cloud providers)
  4. The key is encrypted with your vault key before being sent to the server
  5. Available models will load automatically

API Key Flow (Zero-Knowledge)

// How API keys are handled
1. User enters API key in Settings
2. Client: encrypt(vaultKey, apiKey) → ciphertext
3. Server stores ciphertext in providers table
4. On vault unlock: client fetches providers → decrypts API keys → holds in memory
5. For model listing/chat: client sends decrypted key per-request
6. Server uses key transiently, never persists plaintext

07 Agents

Create custom AI agents with specialized behaviors, system prompts, and model selection. Agent configurations are encrypted like everything else.


Built-in Templates

| Template | Role | Specialization |
|---|---|---|
| Researcher | Research Assistant | Analyze topics, find patterns, cite sources, synthesize information |
| Coder | Software Engineer | Write clean, efficient code with explanations and best practices |
| Writer | Content Writer | Create clear, engaging content adapted to audience and format |
| Analyst | Data Analyst | Interpret data, identify patterns, provide actionable insights |

Custom Agent Configuration

// Agent config (stored encrypted)
{
  name: "Legal Advisor",
  role: "Legal Research",
  systemPrompt: "You are a legal research assistant...",
  modelId: "gpt-4o",
  providerId: "provider-uuid",
  color: "#7c5cfc",
  icon: "⚖️"
}

08 Workflows

Chain multiple agents into sequential pipelines. The output of each agent becomes the input to the next, enabling complex multi-step AI processes.


How Workflows Execute

  1. User provides initial input text
  2. First agent processes the input with its system prompt + step instruction
  3. Output streams in real-time to the UI
  4. Completed output becomes input to the next agent
  5. Process repeats for each step in the pipeline
  6. All intermediate and final outputs are encrypted before storage
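The execution loop above reduces to a small sketch (the agents here are stand-ins that transform text; a real step would stream an AI completion, and the agent.run interface is an assumption):

```javascript
// Each step's completed output becomes the next step's input.
// `agent.run` is a hypothetical interface standing in for an AI call.
async function runWorkflow(steps, input) {
  let text = input;
  for (const { agent, instruction } of steps) {
    text = await agent.run(instruction, text);
  }
  return text; // final output (encrypted before storage in the real app)
}

// Toy agents so the sketch is runnable without an AI backend.
const upper = { run: async (_instr, input) => input.toUpperCase() };
const exclaim = { run: async (_instr, input) => input + "!" };

runWorkflow(
  [
    { agent: upper, instruction: "shout it" },
    { agent: exclaim, instruction: "emphasize it" },
  ],
  "hello"
).then(console.log); // HELLO!
```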

Example: Research Pipeline

// Research → Analysis → Summary workflow
Step 1: Researcher agent → "Research this topic thoroughly"
Step 2: Analyst agent → "Extract key findings and data points"
Step 3: Writer agent → "Synthesize into a concise executive summary"

Input: "Impact of quantum computing on current encryption standards"
→ Researcher produces detailed research
→ Analyst identifies key findings
→ Writer creates executive summary

09 API Reference

Cognition exposes a RESTful API for all operations. All conversation content is stored as ciphertext — the API neither encrypts nor decrypts.

| Method | Endpoint | Description |
|---|---|---|
| POST | /api/auth/register | Create vault (single user). Stores password hash, salt, encrypted test. |
| GET | /api/auth/user | Check if vault exists. Returns salt + encrypted test for key verification. |
| GET | /api/conversations | List conversations by userId. Returns encrypted titles. |
| POST | /api/conversations | Create conversation with encrypted title. |
| PATCH | /api/conversations/:id | Update title, model, or provider. |
| DELETE | /api/conversations/:id | Delete conversation + cascade messages. |
| GET | /api/messages | List messages by conversationId. Returns encrypted content. |
| POST | /api/messages | Create message with encrypted content. |
| POST | /api/models | List models. Accepts decrypted API keys per-provider. |
| POST | /api/chat | SSE streaming proxy. Client sends API key per-request. |
| GET/POST | /api/providers | CRUD for AI providers. Config stored encrypted. |
| GET/POST | /api/agents | CRUD for agents. Config stored encrypted. |
| GET/POST | /api/workflows | CRUD for workflows. Config stored encrypted. |

SSE Stream Format

// POST /api/chat response (text/event-stream)
data: {"token":"Hello"}
data: {"token":" world"}
data: {"token":"!"}
data: [DONE]

// On error:
data: {"error":"Provider returned 401 Unauthorized"}
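A client consuming this stream can collect tokens by filtering on the data: prefix; a minimal parser sketch (function name illustrative):

```javascript
// Parse the documented SSE frames into a token list.
// Throws on an error frame; stops at the [DONE] sentinel.
function parseSSE(raw) {
  const tokens = [];
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data: ")) continue; // skip blank separator lines
    const payload = line.slice("data: ".length);
    if (payload === "[DONE]") break;
    const msg = JSON.parse(payload);
    if (msg.error) throw new Error(msg.error);
    tokens.push(msg.token);
  }
  return tokens;
}

const stream = 'data: {"token":"Hello"}\n\ndata: {"token":" world"}\n\ndata: [DONE]\n';
console.log(parseSSE(stream).join("")); // Hello world
```

In the real app the stream arrives incrementally, so a production parser would buffer partial lines between chunks rather than receive the whole body at once.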

10 Deployment

Docker (Recommended)

# Production deployment
docker pull cognitionai/cognition
docker compose up -d

# Custom port
PORT=8080 docker compose up -d

# View logs
docker compose logs -f cognition

Railway

Cognition auto-deploys from the main branch on Railway. The live instance runs at cognition-production.up.railway.app. Data persists on a Railway volume mounted at /app/data.

Manual Build

# Build for production
npm install
npm run build

# Run production server
NODE_ENV=production PORT=3000 node build

# Or use the Dockerfile directly
docker build -t cognition .
docker run -p 3000:3000 -v cognition-data:/app/data cognition

Reverse Proxy (nginx)

# nginx config for custom domain
server {
    listen 443 ssl http2;
    server_name your-domain.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_buffering off;  # Required for SSE
    }
}

11 Security Model


What's Encrypted

- Conversation titles and message content
- AI provider API keys
- Agent, workflow, and provider configurations

What's NOT Encrypted

- The PBKDF2 salt (random, not secret — required to re-derive the key before unlock)
- Record metadata: user and conversation IDs, timestamps, and model/provider references

Threat Model

| Threat | Protected? | Notes |
|---|---|---|
| Server database breach | Yes | All content is AES-256-GCM ciphertext |
| Server admin reading data | Yes | Admin sees only ciphertext blobs |
| Network interception (HTTPS) | Yes | Double encrypted: TLS + AES-256-GCM |
| Weak password brute-force | Mitigated | PBKDF2 600K iterations, random salt |
| AI provider data collection | Partial | Prompts sent to AI in plaintext (necessary for inference) |
| Server memory inspection | Partial | API keys transiently in memory during requests |
| Browser extension/malware | No | Malware with DOM access can read decrypted content |
| Physical device access | Yes | Vault locks automatically, key not persisted |

Guarantees

Zero-knowledge guarantee: Even if you hand over the entire server (database, disk, memory dump after restart), an attacker cannot read your conversations without your password. The server never stores, logs, or transmits the encryption key. The PBKDF2 salt prevents precomputed attacks, and the non-extractable CryptoKey flag prevents key exfiltration via JavaScript.

For maximum privacy, use Cognition with Ollama (local models). This ensures your prompts never leave your network — combined with client-side encryption, you get true end-to-end privacy with zero data exposure.