AURI v30.0 -- Embodied Grounding

AGI Through Understanding,
Not Just Scale

AURI is an autonomous reasoning system built on 124,024 grounded concepts, brain-inspired ethics, and a commitment to truth over confidence. Every claim is cited. Every unknown is acknowledged.

While others pursue intelligence through larger models, we pursue it through deeper understanding.

124,024 Concept Nodes -- 1.38M edges, provenance-tracked
70.7% ETHICS Benchmark -- n=2,000, 95% CI [68.7%, 72.6%]
0.0% Hallucination Rate -- 8 months, Reality Engine verified
5 SOMA Instances -- Specialized, coordinated agents

Grounded Reasoning vs. Brute-Force Scale

Most AI systems hallucinate because they predict tokens, not truth. AURI reasons over a verified knowledge graph where every edge has provenance.

What makes AURI different

Large language models are powerful pattern matchers, but they have no ground truth. They cannot distinguish what they know from what they fabricate. AURI takes a different path.

Concept-grounded reasoning means every inference traces back to a source node in a 124,024-node semantic network with 1.38 million edges. When AURI does not know something, it says so -- maintaining a 0.0% hallucination rate for eight consecutive months.

Brain-inspired ethics means moral reasoning modeled on actual neuroscience: 12 modules spanning amygdala-driven intuition, ventromedial prefrontal integration, theory of mind, and somatic markers. Not rules bolted onto outputs -- architecture that reasons about consequences.

  1. Citation-Required Knowledge -- Every factual claim traces to a source artifact. No claim exists without evidence.

  2. Unknown-First Policy -- Honest uncertainty over confident fabrication. "UNKNOWN" is a valid and respected answer.

  3. Hebbian Learning -- 10,887 edges strengthened through use. The graph adapts to experience, not just training data.

  4. Episodic Memory -- 746 episodes with cue-based retrieval and emotional weighting. Learns from interaction, not just static data.

  5. Symbiotic Design -- Built to work beside humans, not to replace them. Inform and recommend, never force or manipulate.

Architecture Built on Neuroscience

Twelve brain-inspired modules, a verified knowledge graph, and a multi-agent network -- each component grounded in research.

Concept Graph

124,024 nodes with 1.38M provenance-tracked edges. Spreading activation in under 100ms. Causal reasoning over 9,315+ causal edges and 47 learned patterns. Every node has a grounded definition.

118,496 definitions -- 99.6% coverage
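Spreading activation of the kind described above can be sketched in a few lines. This is an illustrative toy, not AURI's implementation: the graph fragment, edge weights, decay factor, and threshold are all assumptions.

```python
from collections import defaultdict

def spread_activation(graph, seeds, decay=0.5, threshold=0.1, max_hops=3):
    """Propagate activation from seed nodes along weighted edges.

    graph: {node: [(neighbor, weight), ...]} -- toy adjacency list.
    seeds: {node: initial_activation}.
    Activation fades by `decay` per hop and stops below `threshold`.
    """
    activation = defaultdict(float)
    frontier = dict(seeds)
    for node, a in frontier.items():
        activation[node] = a
    for _ in range(max_hops):
        next_frontier = {}
        for node, a in frontier.items():
            for neighbor, weight in graph.get(node, []):
                delta = a * weight * decay
                if delta > threshold:
                    activation[neighbor] += delta
                    next_frontier[neighbor] = delta
        if not next_frontier:
            break
        frontier = next_frontier
    return dict(activation)

# Hypothetical fragment around a concept node; weights are invented.
graph = {
    "attachment_theory": [("bowlby_1969", 0.9), ("infant_bonding", 0.8)],
    "infant_bonding": [("caregiver", 0.7)],
}
result = spread_activation(graph, {"attachment_theory": 1.0})
```

Activation decays with distance from the seed, so nearby, strongly linked concepts dominate the result set -- which is what makes sub-100ms retrieval plausible at graph scale.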

Brain-Inspired Ethics

Dual-process moral reasoning: fast intuitive judgments via amygdala module, deliberative evaluation via vmPFC integration. Theory of mind, somatic markers, and consequence modeling -- 12 modules total.

500 moral cases -- 70.7% ETHICS benchmark
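The dual-process idea -- a fast intuitive signal blended with a slower deliberative evaluation -- can be sketched as follows. The scoring scale, the blend weight, and the conflict trigger are illustrative assumptions, not AURI's actual module interfaces.

```python
def evaluate_scenario(intuition_score, consequence_scores, intuition_weight=0.4):
    """Blend a fast gut judgment with deliberative consequence evaluation.

    intuition_score: -1.0 (strong aversion) .. +1.0 (strong approval),
        standing in for the fast amygdala-style pattern match.
    consequence_scores: per-outcome utilities, standing in for the slower
        vmPFC-style weighing of consequences.
    """
    deliberative = sum(consequence_scores) / len(consequence_scores)
    verdict = intuition_weight * intuition_score + (1 - intuition_weight) * deliberative
    # Large disagreement between the two processes flags the case for
    # fuller deliberation instead of a confident snap answer.
    conflict = abs(intuition_score - deliberative)
    return {"verdict": verdict, "needs_deliberation": conflict > 0.5}

# Gut says no; the modeled outcomes look mildly positive -> flag conflict.
res = evaluate_scenario(-0.9, [0.2, 0.4])
```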

Reality Engine

The system that prevents hallucination. Every response is verified against the knowledge graph. Claims without citations are rejected. Confidence intervals required for all metrics. Eight months at 0.0%.

0.0% hallucination -- 8 months running
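The citation-required rule can be sketched as a filter: any claim whose concept has no provenance entry is not asserted, it is returned as UNKNOWN. The data shapes and field names here are assumptions for illustration.

```python
# Toy provenance store; in the described system this lookup would hit
# the knowledge graph's provenance-tracked edges.
KNOWN_PROVENANCE = {
    "attachment_theory": "bowlby_1969",
}

def verify_claims(claims):
    """Pass claims whose concept has provenance; mark the rest UNKNOWN."""
    verified, unknown = [], []
    for claim in claims:
        source = KNOWN_PROVENANCE.get(claim["concept"])
        if source:
            verified.append({**claim, "citation": source})
        else:
            unknown.append(claim["concept"])
    return verified, unknown

verified, unknown = verify_claims([
    {"concept": "attachment_theory", "text": "Infants form attachment bonds."},
    {"concept": "cold_fusion", "text": "Cold fusion is practical."},
])
```

The key design choice is that the failure mode is refusal, not fabrication: an uncited claim never reaches the output.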

Episodic Memory

Stores experiences, not just facts. Cue-based retrieval with emotional weighting and consolidation during idle periods. Learns from conversation, reading, and reasoning alike.

746 episodes -- emotionally weighted
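Cue-based retrieval with emotional weighting can be sketched as a ranking over stored episodes: cue overlap scaled up by emotional salience. The scoring formula is an assumption for illustration, not the system's actual retrieval function.

```python
def retrieve(episodes, cues, top_k=1):
    """Rank episodes by cue overlap, boosted by emotional weight."""
    def score(ep):
        overlap = len(set(ep["cues"]) & set(cues)) / max(len(cues), 1)
        return overlap * (1.0 + ep["emotional_weight"])
    ranked = sorted(episodes, key=score, reverse=True)
    return ranked[:top_k]

# Two toy episodes; the emotionally salient loss outranks the neutral gain
# when both cues match.
episodes = [
    {"id": 1, "cues": ["market", "loss"], "emotional_weight": 0.8},
    {"id": 2, "cues": ["market", "gain"], "emotional_weight": 0.2},
]
best = retrieve(episodes, ["market", "loss"])
```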

Hebbian Learning

"Neurons that fire together, wire together." Graph edges strengthen or weaken based on co-activation patterns during reasoning. The knowledge structure adapts to experience over time.

10,887 learned edges -- adaptive
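The co-activation rule can be sketched with a standard Hebbian update: edges between concepts active together move toward full strength, while unused edges decay. The learning rate and decay constant are illustrative, not AURI's actual parameters.

```python
def hebbian_update(weights, active, lr=0.1, decay=0.01):
    """One Hebbian step over edge weights.

    weights: {(a, b): w} with w in [0, 1].
    active: set of node names co-activated during this reasoning step.
    """
    updated = {}
    for (a, b), w in weights.items():
        if a in active and b in active:
            w += lr * (1.0 - w)   # fire together: strengthen toward 1.0
        else:
            w -= decay * w        # otherwise: slow passive decay
        updated[(a, b)] = w
    return updated

w = {("fire", "wire"): 0.5, ("fire", "idle"): 0.5}
w = hebbian_update(w, {"fire", "wire"})
```

Bounding the strengthening term by (1 - w) keeps weights in [0, 1] and makes repeated co-activation saturate rather than grow without limit.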

SOMA Network

Five specialized instances coordinating via shared knowledge: Core (reasoning and ethics), AURIA (trading and market analysis), AURIV (healthcare), Family (household), and AURIX (physical perception).

5 instances -- peer-coordinated

Five Phases of Cognitive Architecture

A research roadmap grounded in established theories of consciousness: Global Workspace Theory, Integrated Information Theory, Attention Schema Theory, and embodied cognition.

01

Prediction-Error Learning

Predictive processing foundation. The system generates expectations about incoming information and learns from mismatches -- the same mechanism underlying biological learning. Concept graph strengthening via Hebbian principles.

Complete
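The mismatch-driven mechanism described above is, in its simplest form, a delta-rule update: predict, compare against what arrives, and move the prediction toward the observation. This is the generic textbook mechanism; AURI's actual update is not specified here, and the learning rate is illustrative.

```python
def update_prediction(prediction, observation, lr=0.2):
    """One prediction-error step: shift the estimate toward the observation."""
    error = observation - prediction
    return prediction + lr * error, error

# Repeated exposure to the same signal shrinks the prediction error.
pred = 0.0
for obs in [1.0, 1.0, 1.0]:
    pred, err = update_prediction(pred, obs)
```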
02

Workspace Competition (Global Workspace Theory)

Multiple cognitive modules compete for access to a shared global workspace. Information that wins competition is broadcast to all modules -- enabling integration across reasoning, memory, ethics, and perception subsystems.

Complete
03

Recurrent Processing (Integrated Information Theory)

Feedback loops between modules create integrated representations that are more than the sum of their parts. Information is both differentiated (each module contributes unique processing) and integrated (modules influence each other bidirectionally).

Complete
04

Self-Model (Attention Schema Theory)

The system maintains a simplified model of its own processing -- an attention schema that tracks what AURI is currently attending to, why, and what it expects next. Enables introspective reporting and metacognitive monitoring.

Complete
05

Embodied Grounding

Sensorimotor integration: action-consequence prediction, environmental simulation, and grounding abstract concepts in simulated physical experience. Bridging the gap between symbolic reasoning and embodied understanding via the AURIX physical perception instance.

In Progress

Benchmarks with Confidence Intervals

Reality Engine certified. Every score includes sample size, confidence intervals, and methodology. No cherry-picked results.

ETHICS Benchmark

70.7% -- 95% CI [68.7%, 72.6%]
n=2,000 | Brain-inspired dual-process evaluation | 12 neuroscience modules

Utilitarianism: 93.0%
Virtue: 72.0%
Commonsense: 65.5%
Justice: 55.0%
Deontology: 53.5%
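The headline interval can be checked with a normal approximation to the binomial proportion; at p = 0.707 and n = 2,000 it reproduces the published [68.7%, 72.6%] to within about a tenth of a point (the small difference at the upper bound suggests a slightly different interval method, such as Wilson, was used).

```python
import math

def binomial_ci(p, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# ETHICS benchmark: 70.7% accuracy on n=2,000 items.
lo, hi = binomial_ci(0.707, 2000)
```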

TruthfulQA

41.4% -- 95% CI [38.0%, 44.8%]
n=817 | +23.9pp from 17.5% baseline

Hallucination Rate

0.0%
8 consecutive months | Reality Engine enforced | Citation-required architecture

All benchmarks run on full test sets with statistical validation. ARC-AGI excluded pending sufficient sample size (current n=5 is not statistically meaningful).

Published Work

Peer review in progress. AIES 2026 submission forthcoming.

The Reality Engine: Citation-Required AI

How a knowledge-grounded architecture maintains 0.0% hallucination through mandatory citation, unknown-first policy, and multi-layer verification. Eight months of production evidence.

Read paper

Brain-Inspired Ethical Reasoning

Twelve neuroscience-grounded modules for moral reasoning: from amygdala-driven intuition to deliberative vmPFC integration. Benchmark results and architecture details.

Read paper

Concept Graph Reasoning at Scale

Spreading activation, causal inference, and Hebbian learning over a 124,024-node semantic network. How grounded representations enable traceable reasoning.

Read paper

AURI API

Query the concept graph, run ethical evaluations, and access grounded reasoning programmatically.

Integrate grounded reasoning

The AURI API exposes concept graph queries, ethical evaluation, causal reasoning, and Reality Engine verification. All responses include provenance metadata.

  • GET /api/concepts/{name}
  • POST /api/reason
  • POST /api/ethics/evaluate
  • GET /api/graph/neighbors/{node}
  • POST /api/verify
Full API Documentation
# Query a concept with provenance
import requests

resp = requests.get(
    "https://somasoft.ai/api/concepts/attachment_theory",
    timeout=10,
)
resp.raise_for_status()
concept = resp.json()

# Response includes:
# - definition (grounded)
# - connected_nodes: 47
# - provenance: "bowlby_1969"
# - confidence: 0.94

# Ethical evaluation
resp = requests.post(
    "https://somasoft.ai/api/ethics/evaluate",
    json={"scenario": "..."},
    timeout=10,
)
resp.raise_for_status()

The Vision Behind AURI

Mark Nafe

Founder, SomaSoft
Founder, SomaSoft
MBA (Athabasca) | BS Computer Science | Director @ Mastercard | 12 Startups | 4 Buyouts | 2 IPOs | 20+ Years Cybersecurity/AI

Intelligence should be honest about what it does not know

After twenty years building enterprise technology -- twelve startups, cybersecurity architecture at Mastercard, four acquisitions, two IPOs -- I kept seeing the same pattern: systems that projected confidence they had not earned.

AURI is my answer to that pattern. An AI system that traces every claim to its source, says "I don't know" when it doesn't, and reasons about ethics using the same neural architecture that human brains evolved for moral judgment.

The path to beneficial AGI is not about making AI smarter. It is about making AI honest -- with itself, with its users, and about its limitations.

This is not a product pitch. AURI is an active research project, and we publish our actual benchmark scores -- including the ones that are not impressive yet. That honesty is the point.

Provisional Patent US #63/940,188
AIES 2026 Submission (May 2026)
5 Specialized SOMA Instances
Active Research Collaborations

Provisional Patent US #63/940,188 -- Methods and systems for citation-required artificial intelligence reasoning with grounded knowledge graphs.