The path to AGI is open-source reasoning
Published on January 25, 2026 · Read time: 6 mins

Authored by Himanshu Tyagi

The next phase of AI progress requires collective reasoning—built openly, shared widely, and capable of advancing both open and closed systems


What is AGI (Artificial General Intelligence)?

AGI does not mean excelling at a fixed benchmark or mastering a closed distribution of tasks. It refers to robust, cross-domain intelligence that continues to improve when faced with truly novel problems: problems whose solutions do not exist in the model’s training distribution. The defining feature of such intelligence is not scale, modality, or tool use.

It is reasoning under distributional shift.

Modern LLMs are extraordinary predictors of human knowledge. But prediction over a learned distribution, even at massive scale, is not sufficient for out-of-distribution generalization. Performance degrades sharply when a problem requires new abstractions, long-horizon planning, or recursive decomposition. This failure is structural.

Our thesis at Sentient is that general intelligence will emerge from systems that can accumulate, verify, and improve reasoning across tasks, rather than from ever-larger monolithic models.

This requires systems that decompose problems into reusable reasoning units, verify intermediate conclusions, persist failure signals across runs, and improve via a new curriculum designed for specific gaps. These properties do not arise naturally from single-model prompting, hence our belief in multi-agent systems.
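
To make these properties concrete, here is a minimal sketch of how a system might represent reusable reasoning units, verify intermediate conclusions, and persist failure signals across runs. All names here (ReasoningUnit, FailureLog, and the decompose and verify callables) are illustrative assumptions, not an actual Sentient API:

```python
from dataclasses import dataclass, field

@dataclass
class ReasoningUnit:
    """A reusable step of reasoning: a claim plus the evidence behind it."""
    claim: str
    evidence: list[str] = field(default_factory=list)
    verified: bool = False

@dataclass
class FailureLog:
    """Failure signals persisted across runs so later attempts can avoid them."""
    records: list[str] = field(default_factory=list)

    def note(self, unit: ReasoningUnit, reason: str) -> None:
        self.records.append(f"{unit.claim}: {reason}")

def solve(problem: str, decompose, verify, log: FailureLog) -> list[ReasoningUnit]:
    """Decompose a problem, verify each intermediate unit, persist failures."""
    units = decompose(problem, avoid=log.records)  # curriculum-like feedback on known gaps
    for unit in units:
        ok, reason = verify(unit)                  # check intermediate conclusions
        unit.verified = ok
        if not ok:
            log.note(unit, reason)                 # the signal survives this run
    return [u for u in units if u.verified]
```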


Why reasoning is not just a model capability

LLMs operate by sampling from a learned distribution over text. Everything a model does (chain-of-thought, tool use, reflection, etc.) remains bounded by that distribution. Reasoning begins where stochastic prediction ends.

Reasoning systems must manage external memory and state, coordinate multiple hypotheses and plans, detect and correct internal failures, and adapt their strategy mid-execution. Reasoning is necessary for better results because no single pass of prediction can hold all of the context and data needed to answer, and a model alone often lacks the tools to plan and execute the steps that produce the answer.

Reasoning requires an appropriate architecture and tools, not just a model. 
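
As a sketch of what such an architecture adds beyond the model itself, consider the control loop below: external memory, competing hypotheses, failure detection, and mid-execution strategy switching. The strategies mapping and the check verifier are hypothetical stand-ins, not a real framework:

```python
from typing import Callable

def reasoning_loop(
    task: str,
    strategies: dict[str, Callable[[str, dict], str]],
    check: Callable[[str], bool],
    budget: int = 10,
) -> str | None:
    """Illustrative control loop, not a real API: external state,
    competing hypotheses, failure detection, strategy switching."""
    memory: dict = {"trace": [], "failures": []}   # state lives outside the context window
    names = list(strategies)
    current = 0
    for step in range(budget):
        name = names[current % len(names)]
        hypothesis = strategies[name](task, memory)    # propose a plan or answer
        memory["trace"].append((step, name, hypothesis))
        if check(hypothesis):                          # verify rather than just sample
            return hypothesis
        memory["failures"].append((name, hypothesis))  # detected internal failure
        current += 1                                   # adapt strategy mid-execution
    return None
```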


From single agents to multi-agent reasoning

Single-agent systems fail as tasks grow longer and more entropic: context collapses, tools interfere, and errors compound. A multi-agent reasoning system contains specialized agents for distinct cognitive roles (planning, search, verification, critique, execution, etc.). Having multiple agents introduces redundancy, verification, and division of cognitive labor, properties shared by every scalable intelligent system we know (including humans).

All emerging advanced reasoning systems, such as those used to solve advanced mathematics problems or discover new drugs, are multi-agent, and this structure is required for true advancement in AI capabilities. The system improves as a group, but creating and improving a multi-agent system is a tedious process: observing specific issues in reasoning traces, identifying distinct components to optimize, adding new elements, and so on. The secret sauce for improvement is reasoning traces, which provide the crucial feedback.
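
A toy sketch of this division of cognitive labor, with every step recorded in a shared trace that later feeds improvement. The roles and the agents mapping are hypothetical stand-ins, not Sentient's production pipeline:

```python
from typing import Callable

def run_pipeline(task: str, agents: dict[str, Callable[[str], str]]) -> tuple[str, list]:
    """Hypothetical multi-agent pipeline; each agent owns one cognitive role."""
    trace: list[tuple[str, str]] = []       # the raw material for improvement

    def call(role: str, payload: str) -> str:
        output = agents[role](payload)
        trace.append((role, output))        # every intermediate step is preserved
        return output

    plan = call("planner", task)            # decompose the task
    evidence = call("searcher", plan)       # gather relevant context
    draft = call("executor", evidence)      # produce a candidate answer
    verdict = call("verifier", draft)       # independent check adds redundancy
    if "fail" in verdict.lower():           # on failure, critique and retry
        draft = call("executor", call("critic", draft))
    return draft, trace
```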


What we have learned from building

Since mid-2024, Sentient has built reasoning systems across multiple domains:

  • Reasoning for strategic games: Last year, we built a strategic-game arena where agents competed against each other in Werewolf (Mafia), testing how experiments in human-like scenarios overflowing with context dependencies would improve reasoning inside and outside the game.  
  • Reasoning for model ideology: At the beginning of 2025, we built Dobby, a community-aligned model trained on community preferences. Our goal was to explore another key aspect of reasoning: the ability to adhere to advice and follow a specific ideology. We discovered that prompting alone was not enough to instill an entire belief system, so we fine-tuned models to think internally with that ideology. At the time, Dobby was one of the first models with a distinct personality, and a successful example of training models on inner thoughts.
  • Reasoning for search: In March 2025, we released Open Deep Search (ODS): a SOTA deep search agent that rephrased queries and reasoned through code to produce better responses. This task required elaborate reasoning: the search agent had to collate the right information from across the internet, ensure that information was relevant, consistent, and timely, and then use it to answer the query.
  • Reasoning for agent alignment: In May 2025, we built an alignment agent for EigenLayer called EigenJudge that reasons and responds in accordance with a given constitution (a set of rules in natural language). This set a new paradigm for governance in blockchains, where the values of a community could be implemented in governance by a representative AI judge. From a technical perspective, this was the first judge agent built by Sentient that both evaluated and explained the reasons for its evaluation, a basic primitive for all reasoning tasks.
  • New reasoning architectures: Most recently, in August 2025, we launched a general recursive reasoning agent framework called ROMA (Recursive Open Meta Agent) that breaks down queries into smaller, more achievable tasks (see the sketch after this list). This new architecture enables many long-horizon tasks and prevents context rot in models, effectively boosting their reasoning capabilities.
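
The recursive pattern at the heart of ROMA can be sketched in a few lines. The agent methods below (is_atomic, decompose, solve_atomic, aggregate) are assumed interfaces for illustration, not ROMA's actual API:

```python
def solve_recursively(task: str, agent, depth: int = 0, max_depth: int = 4) -> str:
    """Illustrative recursive decomposition: each level sees only its own
    slice of the problem, so no single context accumulates rot."""
    if depth >= max_depth or agent.is_atomic(task):
        return agent.solve_atomic(task)      # small enough: answer directly
    subtasks = agent.decompose(task)         # break into achievable pieces
    results = [solve_recursively(t, agent, depth + 1, max_depth) for t in subtasks]
    return agent.aggregate(task, results)    # recombine partial answers
```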

Across all of these efforts, one pattern repeats: improvements come from studying long, explicit reasoning traces to extract structural insights about failure, decomposition, and abstraction. Creating new architectures or using a better model improves the baseline, but one must still trudge through tedious iterations after understanding failure modes. 


The gap in AI development… affecting open and closed-source companies alike

The open ecosystem is full of individual components: models, agents, tools, and benchmarks. But it lacks a shared process for understanding how reasoning systems fail and improve over long horizons. Most real failures do not appear in final answers or scores; they emerge in long execution traces, where plans drift or verification breaks down. Today, these traces are rarely preserved, compared, or analyzed across systems. They remain local debugging artifacts rather than shared scientific evidence.

This fragmentation slows progress. Builders repeatedly rediscover the same failure modes, misattribute system-level flaws to model quality, and optimize in isolation without understanding global behavior. The necessary solution is clear: an open space where long-horizon traces can be exchanged, examined, and re-analyzed. We need a space where different builders can plug in new judges, analyses, or agent designs against the same evidence and let insights compound.
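
One way to picture such a shared space: a common trace format that any builder's judge can analyze, so insights compound across systems. The TraceStep fields and the plan-drift judge below are hypothetical illustrations, not a published schema:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TraceStep:
    agent: str     # which component acted
    action: str    # what it attempted: "plan", "search", "verify", ...
    output: str    # what it produced

# A judge is any analysis that runs over the same shared evidence.
Judge = Callable[[list[TraceStep]], dict]

def plan_drift_judge(trace: list[TraceStep]) -> dict:
    """Example judge: count how often the plan was rewritten mid-run."""
    rewrites = sum(1 for step in trace if step.action == "plan") - 1
    return {"plan_rewrites": max(rewrites, 0)}

def analyze(trace: list[TraceStep], judges: list[Judge]) -> dict:
    report: dict = {}
    for judge in judges:
        report.update(judge(trace))   # different judges' insights compound
    return report
```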

This gap affects open and closed models alike. Even the strongest proprietary systems learn from private traces in isolation. Filling the gap does not require opening model weights; it requires opening the process by which reasoning behavior is studied and improved. Sentient exists to provide that missing layer: a standard way to analyze and improve reasoning across any system.


Sentient’s Mission 

Sentient’s mission is to consolidate and converge everything required to build multi-agent reasoning systems—bringing models, design patterns, evals, long-horizon traces, and improvement mechanisms into a single open-source ecosystem.


Building open-source AGI with Sentient 

Sentient is building an open reasoning ecosystem that enables anyone to study, build, and improve complex reasoning systems. Everything we develop (frameworks, models, agents, tools, etc.) is available on GRID: a shared space for reasoning artifacts, traces, and evaluations. Sentient Chat already demonstrates this approach through production-grade search and crypto agents that combine structured and unstructured data sources, and these capabilities will soon be ported into a fully open agent framework that others can deploy, extend, and evolve.

We invite AI researchers and builders operating at the frontier of reasoning to join us. Existing models will falter at the boundaries we are exploring, and progress will require new thinking models, richer world models, new arenas for sharing long-horizon traces and evals, and evolution mechanisms that drive continual improvement. Creating open-source AGI will not be a one-shot effort. It requires a long-term process and a shared ecosystem that allows reasoning systems to be understood and improved over time. General artificial intelligence will not come from treating all intelligence as in-distribution for a single large model, but from widening the model's distribution through open reasoning.