Language model
Tabotec Tigrigna — an LLM that thinks in Tigrigna.
An instruction-tuned model that reads, writes, and follows instructions in Tigrigna with the fluency you'd expect, plus graceful fallback to English and Amharic when needed.
Why a dedicated Tigrigna model
Generic frontier models do Tigrigna badly.
Ask GPT-4 or Claude a question in Tigrigna and you'll get a response that hovers between Amharic transliteration, halting Tigrigna, and a polite redirect to English. The Ge'ez script is in the tokenizer; the language is not.
Tabotec Tigrigna is a continued-pretrain plus instruction-tune on Tigrigna text, broadcast transcripts, and a curated assistant dataset built specifically for the language. The result is a model that follows Tigrigna instructions, generates Tigrigna prose, and reasons about Tigrigna content without code-switching to English unprompted.
It's also small enough to run on a single workstation GPU — by design. We don't believe Horn-of-Africa AI should require renting Nvidia H100s in Virginia.
Read our approach
What it's good at
- Tigrigna ↔ Amharic ↔ English summarization
- Long-form Tigrigna writing assistance
- Question-answering over Tigrigna documents
- Customer-support style assistant flows
- Reading-level adaptation (formal ↔ casual)
- Educational content generation in Tigrigna
What it's not yet
A general "chat with everything" model. We optimize for Tigrigna fluency and Horn-of-Africa context — not coding benchmarks or arcane trivia. If you need both, pair it with a frontier model behind an Atlas-style router.
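One simple way to wire that pairing is to route on script: Tigrigna is written in the Ethiopic (Ge'ez) Unicode block, U+1200–U+137F, so a router can send mostly-Ge'ez prompts to Tabotec Tigrigna and everything else to a frontier model. A minimal sketch — the threshold and the frontier fallback name are illustrative, not part of the product:

```python
def geez_ratio(text: str) -> float:
    """Fraction of non-whitespace characters in the Ethiopic (Ge'ez) Unicode block."""
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return 0.0
    geez = sum(1 for c in chars if "\u1200" <= c <= "\u137f")
    return geez / len(chars)


def pick_model(prompt: str, threshold: float = 0.3) -> str:
    """Route mostly-Ge'ez prompts to the Tigrigna model, the rest elsewhere."""
    if geez_ratio(prompt) >= threshold:
        return "tabotec-tigrigna-8b"
    return "frontier-model"  # hypothetical fallback behind the router


print(pick_model("ሰላም! ከመይ ኣለኻ?"))        # tabotec-tigrigna-8b
print(pick_model("Write a Python function"))  # frontier-model
```

A production router would also handle mixed-script and Amharic inputs (Amharic shares the Ethiopic block), but script detection is a cheap, dependency-free first cut.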
Specs
What ships, and how to run it.
Model sizes
tabotec-tigrigna-8b — Llama 3.1 8B base, the everyday workhorse. Runs on a single 24 GB GPU (RTX 3090 / 4090 / L4).
tabotec-tigrigna-mini — 3–4B class, runs on consumer hardware including DGX Spark or Apple Silicon.
Formats
HuggingFace safetensors for fine-tuning. GGUF (Q4_K_M, Q5_K_M, Q8_0) for llama.cpp, Ollama, and LM Studio. Runs anywhere those run.
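For the Ollama path, a GGUF can be wrapped in a Modelfile. A sketch, assuming a locally downloaded quant — the filename and parameter value here are illustrative:

```
FROM ./tabotec-tigrigna-8b.Q4_K_M.gguf
PARAMETER temperature 0.7
```

Then `ollama create tabotec-tigrigna -f Modelfile` registers it and `ollama run tabotec-tigrigna` starts a local chat.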
License
Open weights under the base model's license (Llama Community / Qwen). Commercial use permitted within those terms. Custom commercial agreements available on request.
Use it from anywhere
HTTP API or local inference.
Use our hosted API for the fastest path to integration. Or pull the GGUF and run it on your own hardware — Ollama, llama.cpp, vLLM, LM Studio. The API is OpenAI-compatible, so you can swap your existing client over by changing the base URL.
- OpenAI-compatible
- /v1/chat/completions endpoint
- Streaming and non-streaming
- JSON mode and tool-use compatible
- Drop-in for any existing OpenAI SDK
Example
Tigrigna chat completion
import os

from openai import OpenAI

# Any OpenAI SDK works against the Tabotec endpoint: just swap the base URL.
client = OpenAI(
    api_key=os.environ["TABOTEC_KEY"],
    base_url="https://api.tabotec.ai/v1",
)

resp = client.chat.completions.create(
    model="tabotec-tigrigna-8b",
    messages=[
        # "Answer me properly in Tigrigna."
        {"role": "system",
         "content": "ኣብ ትግርኛ ብኣግባቡ መልሲ ሃበኒ።"},
        # "Tell me about one of Asmara's important places."
        {"role": "user",
         "content": "ብዛዕባ ሓደ ካብ ኣገደስቲ ቦታታት ኣስመራ ኣተንብሃኒ።"},
    ],
)
print(resp.choices[0].message.content)
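The JSON-mode support mentioned above follows the same OpenAI-compatible schema, via the response_format parameter. A sketch of the request body, built by hand here only to show its shape — in practice you would pass response_format straight to the SDK's create call; the prompts are illustrative:

```python
import json

# Request body for a JSON-mode call to the OpenAI-compatible endpoint.
# Keys mirror OpenAI's Chat Completions schema; the model name is from
# this page, the prompts are examples only.
body = {
    "model": "tabotec-tigrigna-8b",
    "response_format": {"type": "json_object"},  # ask for a JSON object back
    "messages": [
        {"role": "system",
         "content": 'Reply with a JSON object like {"name": ..., "city": ...}.'},
        # "Tell me about one of Asmara's important places."
        {"role": "user",
         "content": "ብዛዕባ ሓደ ካብ ኣገደስቲ ቦታታት ኣስመራ ኣተንብሃኒ።"},
    ],
}

payload = json.dumps(body, ensure_ascii=False)
print(payload[:40])
```

With the SDK, the same thing is `client.chat.completions.create(..., response_format={"type": "json_object"})`.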
Pilot program
Request access to Tabotec Tigrigna.
We're working with a small group of pilot users — researchers, NGOs, broadcasters, translation teams. Tell us what you'd build.
