AURI is an autonomous reasoning system built on 124,024 grounded concepts, brain-inspired ethics, and a commitment to truth over confidence. Every claim is cited. Every unknown is acknowledged.
While others pursue intelligence through larger models, we pursue it through deeper understanding.
Most AI systems hallucinate because they predict tokens, not truth. AURI reasons over a verified knowledge graph where every edge has provenance.
Large language models are powerful pattern matchers, but they have no ground truth. They cannot distinguish what they know from what they fabricate. AURI takes a different path.
Concept-grounded reasoning means every inference traces back to a source node in a 124,024-node semantic network with 1.38 million edges. When AURI does not know something, it says so -- maintaining a 0.0% hallucination rate for eight consecutive months.
Brain-inspired ethics means moral reasoning modeled on actual neuroscience: 12 modules spanning amygdala-driven intuition, ventromedial prefrontal integration, theory of mind, and somatic markers. Not rules bolted onto outputs -- architecture that reasons about consequences.
Every factual claim traces to a source artifact. No claim exists without evidence.
Honest uncertainty over confident fabrication. "UNKNOWN" is a valid and respected answer.
10,887 edges strengthened through use. The graph adapts to experience, not just training data.
746 episodes with cue-based retrieval and emotional weighting. Learns from interaction, not just data.
Built to work beside humans, not replace them. Inform and recommend, never force or manipulate.
Twelve brain-inspired modules, a verified knowledge graph, and a multi-agent network -- each component grounded in research.
124,024 nodes with 1.38M provenance-tracked edges. Spreading activation in under 100ms. Causal reasoning over 9,315+ causal edges and 47 learned patterns. Every node has a grounded definition.
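Spreading activation over a weighted graph can be pictured with a small sketch. The function, decay and threshold values, and toy graph below are illustrative assumptions, not AURI internals:

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, threshold=0.01, max_hops=3):
    """Propagate activation from seed concepts through weighted edges.

    `graph` maps a node to a list of (neighbor, weight) pairs; `seeds`
    maps starting nodes to initial activation. Activation decays each hop
    and propagation stops below `threshold`. Illustrative only.
    """
    activation = defaultdict(float, seeds)
    frontier = dict(seeds)
    for _ in range(max_hops):
        next_frontier = defaultdict(float)
        for node, act in frontier.items():
            for neighbor, weight in graph.get(node, []):
                delta = act * weight * decay
                if delta >= threshold:
                    next_frontier[neighbor] += delta
        for node, delta in next_frontier.items():
            activation[node] += delta
        frontier = next_frontier
        if not frontier:
            break
    return dict(activation)

graph = {
    "fire": [("heat", 0.9), ("smoke", 0.8)],
    "heat": [("burn", 0.7)],
}
result = spread_activation(graph, {"fire": 1.0})
```

Because activation decays multiplicatively per hop and a threshold prunes weak signals, the frontier shrinks quickly; that bounded fan-out is what makes sub-100ms activation over a large graph plausible.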
118,496 definitions -- 99.6% coverage
Dual-process moral reasoning: fast intuitive judgments via the amygdala module, deliberative evaluation via vmPFC integration. Theory of mind, somatic markers, and consequence modeling -- 12 modules total.
500 moral cases -- 70.7% on the ETHICS benchmark
The system that prevents hallucination. Every response is verified against the knowledge graph. Claims without citations are rejected. Confidence intervals are required for all metrics. Eight months at 0.0%.
0.0% hallucination -- 8 months running
Stores experiences, not just facts. Cue-based retrieval with emotional weighting and consolidation during idle periods. Learns from conversation, reading, and reasoning alike.
746 episodes -- emotionally weighted
"Neurons that fire together, wire together." Graph edges strengthen or weaken based on co-activation patterns during reasoning. The knowledge structure adapts to experience over time.
10,887 learned edges -- adaptive
Five specialized instances coordinating via shared knowledge: Core (reasoning and ethics), AURIA (trading and market analysis), AURIV (healthcare), Family (household), and AURIX (physical perception).
5 instances -- peer-coordinated
A research roadmap grounded in established theories of consciousness: Global Workspace Theory, Integrated Information Theory, Attention Schema Theory, and embodied cognition.
Predictive processing foundation. The system generates expectations about incoming information and learns from mismatches -- the same mechanism underlying biological learning. Concept graph strengthening via Hebbian principles.
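The expectation-and-mismatch loop above can be sketched as a delta-rule update, where the prediction error drives learning. The function name and learning rate are illustrative assumptions:

```python
def predict_and_update(expectation, observation, lr=0.2):
    """Update an expectation from its prediction error.

    The mismatch (observation - expectation) is the learning signal,
    as in delta-rule accounts of predictive processing. Sketch only.
    """
    error = observation - expectation
    return expectation + lr * error, error

# Repeated exposure to the same observation shrinks the error
# as the expectation converges toward it.
expectation = 0.0
for observation in [1.0, 1.0, 1.0, 1.0]:
    expectation, error = predict_and_update(expectation, observation)
```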
Complete
Multiple cognitive modules compete for access to a shared global workspace. Information that wins the competition is broadcast to all modules -- enabling integration across reasoning, memory, ethics, and perception subsystems.
Complete
Feedback loops between modules create integrated representations that are more than the sum of their parts. Information is both differentiated (each module contributes unique processing) and integrated (modules influence each other bidirectionally).
Complete
The system maintains a simplified model of its own processing -- an attention schema that tracks what AURI is currently attending to, why, and what it expects next. Enables introspective reporting and metacognitive monitoring.
Complete
Sensorimotor integration: action-consequence prediction, environmental simulation, and grounding abstract concepts in simulated physical experience. Bridging the gap between symbolic reasoning and embodied understanding via the AURIX physical perception instance.
In Progress
Reality Engine certified. Every score includes sample size, confidence intervals, and methodology. No cherry-picked results.
All benchmarks run on full test sets with statistical validation. ARC-AGI excluded pending sufficient sample size (current n=5 is not statistically meaningful).
Peer review in progress. AIES 2026 submission forthcoming.
How a knowledge-grounded architecture maintains 0.0% hallucination through mandatory citation, unknown-first policy, and multi-layer verification. Eight months of production evidence.
Read paper
Twelve neuroscience-grounded modules for moral reasoning: from amygdala-driven intuition to deliberative vmPFC integration. Benchmark results and architecture details.
Read paper
Spreading activation, causal inference, and Hebbian learning over a 124,024-node semantic network. How grounded representations enable traceable reasoning.
Read paper
Query the concept graph, run ethical evaluations, and access grounded reasoning programmatically.
The AURI API exposes concept graph queries, ethical evaluation, causal reasoning, and Reality Engine verification. All responses include provenance metadata.
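A minimal sketch of how a client might enforce the citation-or-rejection policy on a response. The payload shape and field names (`provenance`, `verified`, `definition`) are hypothetical assumptions, not the documented API schema:

```python
import json

# A hypothetical response payload shaped like the provenance-tracked
# replies described above; all field names are assumptions.
sample_response = json.loads("""
{
  "concept": "photosynthesis",
  "definition": "Conversion of light energy into chemical energy by plants.",
  "provenance": {"source": "encyclopedia_entry_4417", "confidence": 0.97},
  "verified": true
}
""")

def extract_claim(payload):
    """Accept a claim only if it carries verified provenance,
    mirroring the citation-or-rejection policy; otherwise
    surface UNKNOWN rather than a fabricated answer."""
    if not payload.get("provenance") or not payload.get("verified"):
        raise ValueError("UNKNOWN: claim lacks verified provenance")
    return payload["definition"], payload["provenance"]["source"]

definition, source = extract_claim(sample_response)
```

The design point is that rejection happens on the client's read path too: a claim without provenance never reaches downstream use, it becomes an explicit UNKNOWN.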
After twenty years building enterprise technology -- twelve startups, cybersecurity architecture at Mastercard, four acquisitions, two IPOs -- I kept seeing the same pattern: systems that projected confidence they had not earned.
AURI is my answer to that pattern. An AI system that traces every claim to its source, says "I don't know" when it doesn't, and reasons about ethics using the same neural architecture that human brains evolved for moral judgment.
The path to beneficial AGI is not about making AI smarter. It is about making AI honest -- with itself, with its users, and about its limitations.
This is not a product pitch. AURI is an active research project, and we publish our actual benchmark scores -- including the ones that are not impressive yet. That honesty is the point.