
Enterprise RAG

Tabotec Atlas — your knowledge, on your hardware.

A self-hosted retrieval-augmented generation platform. Upload documents, ask questions, get cited answers. Choose your LLM provider — Anthropic, OpenAI, Gemini, or a local model — and switch on the fly.

One docker compose up · Postgres + pgvector · Local-first inference option

Why Atlas

Built for organizations that can't ship their data to the cloud.

Self-hosted by default

Atlas runs entirely inside your network. Your documents, embeddings, and answers never leave the box unless you point a provider at the public internet.

Pluggable providers

Anthropic Claude, OpenAI, Gemini, and any OpenAI-compatible local endpoint (Ollama, vLLM, llama.cpp). Per-user fallback order: if one model fails, the next takes over.
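As a sketch of that fallback behavior (the function and provider names here are illustrative, not the actual Atlas API), the orchestrator can simply walk the user's provider list until one call succeeds:

```python
class ProviderError(Exception):
    """Raised when a single provider call fails (timeout, rate limit, outage)."""

def ask_with_fallback(prompt, provider_order, call_provider):
    """Try providers in the user's configured order; return the first success."""
    last_err = None
    for name in provider_order:
        try:
            return name, call_provider(name, prompt)
        except ProviderError as err:
            last_err = err  # this provider failed; fall through to the next one
    raise last_err or ProviderError("no providers configured")
```

The per-user order is just a list, so switching models on the fly is a config change, not a code change.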

Cited answers, not vibes

Every answer ships with the chunk-level citations it was built from. A reviewer agent runs after synthesis to flag hallucinations and unsupported claims.
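The answer-plus-citations shape can be sketched with plain dataclasses (field names here are an assumption, not the exact Atlas schema):

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    doc_id: str      # source document the chunk came from
    chunk_id: int    # which chunk within that document
    snippet: str     # the quoted text the answer relies on

@dataclass
class CitedAnswer:
    answer: str
    citations: list = field(default_factory=list)

    def is_supported(self) -> bool:
        # a reviewer-style sanity check: an answer with no citations is suspect
        return len(self.citations) > 0
```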

Multi-workspace

One install, many isolated knowledge bases. JWT auth, workspace-membership checks on every endpoint, role-based access. Built for teams.
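The per-endpoint membership check can be sketched as a plain function (in the real API this would sit behind the JWT auth layer and map to an HTTP 403; names and the in-memory membership map are illustrative):

```python
class Forbidden(Exception):
    """Stands in for an HTTP 403 response in the real API layer."""

def require_membership(user_id, workspace_id, memberships):
    """Reject the request unless user_id belongs to workspace_id.

    memberships: mapping of workspace_id -> set of member user_ids.
    """
    if user_id not in memberships.get(workspace_id, set()):
        raise Forbidden(f"user {user_id} is not a member of {workspace_id}")
    return True
```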

Production-grade observability

Structured JSON logs, request IDs, Prometheus metrics, OpenTelemetry tracing across the agent pipeline. Drops into your existing Grafana / Honeycomb stack.
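A minimal sketch of what one structured log line might look like, keyed by the request ID the middleware attaches (field names are an assumption; the actual schema may differ):

```python
import json
import time

def log_line(event, request_id, **fields):
    """Render one event as a single JSON object per line."""
    record = {"ts": time.time(), "event": event, "request_id": request_id}
    record.update(fields)
    return json.dumps(record)
```

Because every line is one JSON object carrying the request ID, a single `/ask` call can be followed across the whole agent pipeline in Grafana or Honeycomb.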

Works with our language stack

Pair Atlas with Tabotec Tigrigna and Tigvoice to build a Tigrigna-first knowledge interface — voice-in, cited-answer-out, Geez script throughout.

How it's built

Boring, well-understood pieces.

Atlas is FastAPI on the back, React on the front, Postgres + pgvector for storage. Nothing exotic, nothing you can't hire for, nothing you can't operate. The agent pipeline is four small steps — router, retrieval, synthesis, reviewer — each with its own span in the trace.

  • Python 3.11+ / FastAPI / SQLAlchemy
  • Postgres 16 + pgvector for vector search
  • React 19 + Vite + TypeScript frontend
  • Pluggable embedding model (any OpenAI-compatible)
  • Per-user provider fallback order via context-var
  • Pre-commit secret scanning + GitHub Actions CI
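The retrieval step ranks chunks by vector similarity. In Atlas that ranking happens inside Postgres via pgvector; as a pure-Python illustration of the same cosine top-k idea (no database involved, details assumed):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, chunks, k=3):
    """chunks: list of (chunk_text, embedding) pairs; return the k most similar texts."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```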

Architecture

Agent pipeline

# /ask flow

POST /api/v1/ask
  ↓
RequestIDMiddleware     # X-Request-ID
JWT auth                # Bearer token
workspace check         # 403 if not member
AgentOrchestrator
  ├─ router             # classify intent
  ├─ retrieval          # pgvector top-k
  ├─ synthesis          # cited answer
  └─ reviewer           # hallucination check
Response { answer, citations[] }
  + Prometheus metrics
  + OTel spans
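The four steps above can be sketched as a linear pipeline over a shared state dict (step bodies here are stubs; in Atlas each step would also open its own OTel span):

```python
def run_pipeline(question, steps):
    """Run named steps in order, each reading and extending a shared state."""
    state = {"question": question}
    for name, step in steps:
        state[name] = step(state)
    return state

# Stub steps standing in for the real agents.
steps = [
    ("router",    lambda s: "qa"),                                  # classify intent
    ("retrieval", lambda s: ["chunk-1", "chunk-2"]),                # pgvector top-k
    ("synthesis", lambda s: {"answer": "...",
                             "citations": s["retrieval"]}),         # cited answer
    ("reviewer",  lambda s: len(s["synthesis"]["citations"]) > 0),  # supported?
]
```

Keeping the steps small and sequential is what makes per-step tracing cheap: one span per entry in the list.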

Deploy

Three ways to run Atlas.

1. Self-host (free)

Pull the repo, fill in .env, docker compose up. You operate it, you own it. We answer questions on Discord and through public issues.

Open source, MIT-style license

2. Managed install

We deploy Atlas onto your hardware (cloud or on-prem), configure providers, set up monitoring, and stay on call. Quarterly upgrades included.

Annual contract

3. Atlas + language stack

Atlas pre-wired to Tabotec Tigrigna and Tigvoice. Voice-in, cited-answer-out, all Tigrigna. Built for ministries, NGOs, and broadcasters.

Custom engagement