The Fundamental Problem
Large Language Models are powerful storytellers, but they’re fragile reasoners. Naive single-prompt simulations collapse beyond ~10 entities or ~20 interactions due to:
- Inconsistency drift: Entities forget facts, relationships change arbitrarily
- Token explosion: Full-context approaches hit limits fast
- Causal incoherence: No tracking of who knows what, when they learned it
- Hallucinated knowledge: Entities magically know things they couldn’t
What is SNAG?
SNAG (Social Network Augmented Generation) is to social simulation what RAG (Retrieval Augmented Generation) is to document search.

- RAG: Retrieves documents to ground generation in factual knowledge
- SNAG: Synthesizes social graphs to ground generation in causal structure
SNAG vs RAG: Side-by-Side
| Dimension | RAG | SNAG (Timepoint Pro) |
|---|---|---|
| Grounds LLMs in | Retrieved documents | Synthesized social graphs |
| Maintains | Document relevance | Causal provenance + temporal consistency |
| Scales to | Millions of documents | Dozens of entities, hundreds of timepoints |
| Output | Grounded answers | Auditable causal simulations + training data |
| Core Structure | Document embeddings + retrieval | Entity tensors + exposure events + causal chains |
| Validation | Relevance scoring | Multi-constraint validation (information conservation, energy budgets, network flow) |
How SNAG Grounds LLMs
1. Structured Social Graph
Every entity exists in a typed graph of nodes and relationships.

2. Causal Provenance
Knowledge doesn’t appear magically—it has exposure events.

Example: Constitutional Convention Knowledge Flow
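The knowledge flow below can be checked mechanically against an exposure-event log. This is a minimal sketch, not the product API: the `ExposureEvent`, `expose`, and `can_reference` names are hypothetical, while the `type`, `source`, and `entity` fields mirror the events in the example.

```python
from dataclasses import dataclass

@dataclass
class ExposureEvent:
    entity: str   # who gained the knowledge
    fact: str     # what they learned
    type: str     # "created", "told", ...
    source: str   # where it came from
    day: int      # when the exposure happened

log: list[ExposureEvent] = []

def expose(entity: str, fact: str, type: str, source: str, day: int) -> None:
    log.append(ExposureEvent(entity, fact, type, source, day))

def can_reference(entity: str, fact: str, day: int) -> bool:
    """An entity may reference a fact only if an exposure event precedes the query."""
    return any(e.entity == entity and e.fact == fact and e.day <= day for e in log)

# Day 1: Madison creates the Virginia Plan
expose("madison", "virginia_plan", "created", "self", 1)
# Day 3: Madison shares it with Washington
expose("washington", "virginia_plan", "told", "madison", 3)

assert can_reference("washington", "virginia_plan", 5)     # valid: exposed on Day 3
assert not can_reference("jefferson", "virginia_plan", 5)  # invalid: no exposure event
```

Validation reduces to a lookup over the log, which is what makes the INVALID case below detectable rather than a silent hallucination.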
- Day 1: Madison creates Virginia Plan → exposure event (type=created, source=self)
- Day 3: Madison shares with Washington → exposure event (type=told, source=madison, entity=washington)
- Day 5: Washington references plan in debate ✅ Valid (has exposure from Day 3)
- Day 5: Jefferson references plan ❌ INVALID (no exposure event, Jefferson not present)

3. Temporal Consistency
Timepoints form explicit causal chains.

4. Variable-Depth Fidelity
Not all entities deserve equal computational attention:
Fidelity is query-driven: entities start compressed and elevate on-demand.
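The elevate-on-demand behavior can be sketched as a sparse map over (entity, timepoint) cells that defaults to the compressed floor. All names here (`Fidelity`, `surface`, `elevate`) are illustrative, not the product API; the tier names follow the resolution levels mentioned elsewhere in this doc.

```python
from enum import IntEnum

class Fidelity(IntEnum):
    # Ordered from cheapest to most expensive resolution
    TENSOR_ONLY = 0
    SCENE_GRAPH = 1
    DIALOG = 2
    TRAINED = 3

# Sparse 2D surface over (entity, timepoint); missing cells stay compressed.
surface: dict[tuple[str, int], Fidelity] = {}

def fidelity_at(entity: str, timepoint: int) -> Fidelity:
    return surface.get((entity, timepoint), Fidelity.TENSOR_ONLY)

def elevate(entity: str, timepoint: int, needed: Fidelity) -> Fidelity:
    """Raise resolution on demand; never downgrade an already-elevated cell."""
    if needed > fidelity_at(entity, timepoint):
        surface[(entity, timepoint)] = needed
    return fidelity_at(entity, timepoint)

# A dialog query against Madison at timepoint 5 elevates just that one cell
elevate("madison", 5, Fidelity.DIALOG)
assert fidelity_at("madison", 5) is Fidelity.DIALOG
assert fidelity_at("jefferson", 5) is Fidelity.TENSOR_ONLY  # untouched: stays compressed
```

Because only queried cells are elevated, cost scales with what you ask about, not with the full entity × timepoint grid.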
Why This Matters
Transform LLMs from Storytellers to Reasoners
SNAG’s structured propagation, variable-depth fidelity, and composable mechanisms let you scale to dozens of entities across hundreds of timepoints while keeping costs low and causality auditable.

Exponential Value at Scale
The larger and more intricate the social system, the more emergent behaviors surface that intuition misses.

Superior Training Data
SNAG outputs include:
- Full causal ancestry: Every fact has provenance
- Counterfactual branches: “What if” scenarios with controlled interventions
- Quantitative states: Emotion tensors, energy budgets, confidence levels
- Dialog with context: Every conversation includes relationship state, knowledge access, emotional tone
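A single training sample combining these outputs might look like the following sketch. Every field name is illustrative, not the actual export schema; the values echo the Constitutional Convention example above.

```python
import json

# One hypothetical training sample; field names are illustrative only.
sample = {
    "dialog": {
        "speaker": "washington",
        "turn": "Madison's plan gives us a starting framework.",
        "relationship_state": {"with": "madison", "trust": 0.8},
        "emotional_tone": "measured",
    },
    "knowledge_access": ["virginia_plan"],  # facts the speaker was exposed to
    "provenance": [                         # full causal ancestry of each fact
        {"fact": "virginia_plan", "type": "told", "source": "madison", "day": 3},
    ],
    "quantitative_state": {"energy": 0.7, "confidence": 0.9},
    "branch": {"counterfactual": False, "intervention": None},
}

print(json.dumps(sample, indent=2))
```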
The 95% Cost Reduction
Traditional approach: uniform high fidelity for all entities
- 100 entities × 10 timepoints × 50k tokens = 50M tokens

SNAG approach: tiered fidelity
- ~10% TRAINED (5M tokens)
- ~20% DIALOG (2M tokens)
- ~30% SCENE-GRAPH (1.5M tokens)
- ~40% TENSOR_ONLY (80k tokens)
- Total: ~2.5M tokens (95% reduction)
Actual costs: around $1.00 per run depending on complexity and temporal mode.
Architecture Insight
SNAG treats fidelity as a query-driven 2D surface over (entity, timepoint) space.

Key Mechanisms
- M1: Heterogeneous Fidelity (power-law resolution distribution)
- M3: Exposure Events (tracked knowledge acquisition)
- M7: Causal Chains (explicit temporal ancestry)
- M6: TTM Compression (97% compression with structure)
- M11: Dialog Synthesis (per-character turn generation)
- M17: Temporal Modes (5 causality regimes)
Next Steps
- Temporal Modes: Learn how time itself can have different causal rules
- Fidelity Management: Deep dive into resolution levels and TTM tensors
- Knowledge Provenance: How exposure events prevent anachronisms
- All 19 Mechanisms: Complete technical architecture

