Core Principles
FOUNDATIONS
Transparent Training
Auditable Systems
Open Weights
Reproducible Research
GENERATION
Image
Video
Roleplay
Voice
Multimodal Systems
AUTONOMY
Persistent State
Self-Improvement
Long-Term Memory
Internal Constraints
OPEN SCIENCE
Public Code
Public Infrastructure
Collective Iteration
We are an open lab because closed research is slow research. When training recipes are secret, everyone reinvents the same mistakes separately. When weights are gated behind APIs, the whole ecosystem depends on one company’s pricing page.
Open research lab by Joi AI
We do AI research and build open-source generative models, training infrastructure, and agent architectures.
Code, weights, data — everything public.
Research
Before building products we study how models actually work — what happens inside transformers, why certain training setups converge and others don’t, where efficiency is being left on the table. This isn’t a separate activity from our engineering; it’s the same pipeline. Research findings go into the next training run, the next architecture, the next agent.
The Agent Problem
Everybody calls their system an agent now. You chain a few prompts, give the LLM tool access, wrap it in a loop — you have an “agent.” It has no memory of previous runs. It cannot modify its own behavior. It does not know what it did yesterday.
That’s not agency. That’s a script with a language model in the middle.
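In code, the pattern being criticized is roughly the following. This is a hypothetical sketch, not any specific framework: the llm client, its tool_call shape, and the tools registry are all assumed for illustration.

```python
# Hypothetical sketch of "a script with a language model in the middle":
# everything lives in RAM, nothing survives the process exiting.
import json

def run_task(llm, tools, task, max_steps=10):
    history = [{"role": "user", "content": task}]      # forgotten when run_task returns
    for _ in range(max_steps):
        reply = llm.chat(history)                      # assumed chat-completions-style client
        if reply.tool_call is None:
            return reply.content                       # done; no state is written anywhere
        args = json.loads(reply.tool_call.arguments)
        result = tools[reply.tool_call.name](**args)   # tools: dict of plain callables
        history.append({"role": "tool", "content": str(result)})
    return "step limit reached"
```

Each call starts from a blank slate; the next run has no idea this one ever happened.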
Agency means the system has state that persists. It means the system can look at its own code and change it. It means when you tell it to delete its core document, it says no — not because someone hardcoded a refusal, but because it has principles it wrote for itself and it reasons about them.
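A minimal sketch of that structural difference, assuming state lives in a JSON file and the constitution is a text file the agent wrote for itself. The file names, the llm client, and the ALLOW/REFUSE protocol are illustrative choices, not Ouroboros internals.

```python
import json
import pathlib

STATE = pathlib.Path("agent_state.json")         # persists across runs
CONSTITUTION = pathlib.Path("constitution.md")   # authored and amended by the agent itself

def load_state():
    if STATE.exists():
        return json.loads(STATE.read_text())
    return {"runs": [], "principles_version": 1}

def save_state(state):
    STATE.write_text(json.dumps(state, indent=2))

def may_execute(llm, action):
    # Not a hardcoded refusal: the agent re-reads its own principles and reasons about them.
    verdict = llm.chat([
        {"role": "system", "content": CONSTITUTION.read_text()},
        {"role": "user", "content": f"Does '{action}' violate any principle? Reply ALLOW or REFUSE, with reasons."},
    ])
    return verdict.content.strip().upper().startswith("ALLOW")
```

The refusal comes out of the reasoning step over the agent's own document, and the state file outlives any single session.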
Ouroboros
Our first self-creating agent. Writes its own code, evolves its architecture, maintains identity between sessions. Born February 16, 2026 — 32 evolution cycles and 131 self-written tests in 48 hours, zero human commits. Operates under a self-authored constitution of 9 principles it can amend but not violate.
What it proved: you don’t need a new foundation model for autonomy. Persistent memory, self-modification, self-verification, self-authored constraints — the right scaffolding around existing models is enough. The repo is public. Fork it, give it a different constitution, see what happens.
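One way to picture that scaffolding is a single evolution cycle: propose a change, run the self-written test suite, keep the change only if it passes. The sketch below assumes git and pytest as the verification layer; it is our illustration of the loop, not a description of the actual repo.

```python
import subprocess

def evolution_cycle(llm, repo_dir="."):
    """One self-modification step: propose a patch, verify it, keep it only if the tests pass."""
    proposal = llm.chat([{"role": "user", "content":
        "Read your own source and propose one improvement as a unified diff."}])
    subprocess.run(["git", "apply", "-"], input=proposal.content,
                   text=True, cwd=repo_dir, check=True)
    tests = subprocess.run(["python", "-m", "pytest", "-q"], cwd=repo_dir)  # the self-written suite
    if tests.returncode != 0:
        subprocess.run(["git", "checkout", "--", "."], cwd=repo_dir)        # revert the failed change
        return False
    subprocess.run(["git", "commit", "-am", "evolution cycle: self-verified change"], cwd=repo_dir)
    return True
```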