Training Modes Overview
- train - Single-timepoint training with rich historical context
- temporal_train - Multi-timepoint training with causal propagation
- historical_training - Context-aware training from predefined scenarios
Basic Training Mode
Train entities at a single timepoint with historical context.

Configuration
- Number of entities to generate
- Resolution level for entities: TENSOR_ONLY, SCENE, DIALOG, or FULL_CONTEXT
- Random seed for reproducibility
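The configuration options above can be sketched as a small dataclass (field names are hypothetical; the project's actual Hydra schema may differ):

```python
from dataclasses import dataclass
from enum import Enum

class ResolutionLevel(Enum):
    TENSOR_ONLY = "tensor_only"
    SCENE = "scene"
    DIALOG = "dialog"
    FULL_CONTEXT = "full_context"

@dataclass
class TrainConfig:
    num_entities: int = 10                                    # entities to generate
    resolution: ResolutionLevel = ResolutionLevel.FULL_CONTEXT
    seed: int = 42                                            # for reproducibility

cfg = TrainConfig(num_entities=5, resolution=ResolutionLevel.SCENE)
```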
What It Does
- Creates Test Graph - Generates a social network of entities
- Runs Training Workflow - Uses a LangGraph workflow to populate entities
- Populates Knowledge - The LLM generates:
  - Knowledge states (what entities know)
  - Energy budgets (resource constraints)
  - Personality traits
  - Temporal awareness
  - Confidence scores
- Saves to Database - Persists entities with metadata
- Computes Metrics - Calculates graph centrality and identifies top entities
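The metrics step can be illustrated with a plain-Python degree-centrality computation over an adjacency mapping (the real code uses NetworkX; this is a minimal stdlib sketch):

```python
def degree_centrality(adj):
    """Fraction of the other nodes each node is connected to (assumes > 1 node)."""
    n = len(adj)
    return {node: len(neighbors) / (n - 1) for node, neighbors in adj.items()}

def top_entities(adj, k=2):
    """Entities ranked by centrality, highest first."""
    scores = degree_centrality(adj)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Tiny social network: A is connected to everyone.
graph = {"A": {"B", "C", "D"}, "B": {"A"}, "C": {"A"}, "D": {"A"}}
```

`top_entities(graph, 1)` identifies `"A"` as the most central entity.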
Example Output
Historical Training Mode
Train entities using predefined historical contexts.

Available Contexts

Historical scenarios are defined in entity_templates.py:
- founding_fathers_1789 - Constitutional Inauguration, April 30, 1789

Entities:
- George Washington (President-elect, age 57)
- John Adams (Vice President-elect, age 53)
- Thomas Jefferson (Secretary of State, age 45)
- Alexander Hamilton (Treasury Secretary, age 34)
- James Madison (Congressman, age 38)
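A hedged sketch of how such a scenario might be laid out in entity_templates.py (the field names are illustrative, not the file's actual schema):

```python
# Illustrative template structure; real schema in entity_templates.py may differ.
FOUNDING_FATHERS_1789 = {
    "name": "founding_fathers_1789",
    "event": "Constitutional Inauguration",
    "date": "1789-04-30",
    "entities": [
        {"name": "George Washington",  "role": "President-elect",      "age": 57},
        {"name": "John Adams",         "role": "Vice President-elect", "age": 53},
        {"name": "Thomas Jefferson",   "role": "Secretary of State",   "age": 45},
        {"name": "Alexander Hamilton", "role": "Treasury Secretary",   "age": 34},
        {"name": "James Madison",      "role": "Congressman",          "age": 38},
    ],
}
```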
Configuration
Historical context template name
What It Does
- Loads Historical Context - Entity roles, ages, locations, relationships
- Creates Relationship Graph - NetworkX graph with historical connections
- Enhanced LLM Prompts - Context-aware entity population:
  - Historical role and responsibilities
  - Age and location at the time
  - Major event being witnessed
  - Relationships to other entities
- Records Exposure Events - Tracks what each entity learned and when
- Saves Entities - Persists to database with full metadata
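The exposure-event step (tracking what each entity learned and when) can be sketched as an append-only log (class and field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class ExposureEvent:
    entity: str
    knowledge_item: str
    timestamp: str          # ISO date of exposure
    source: str             # where the information came from

@dataclass
class ExposureLog:
    events: list = field(default_factory=list)

    def record(self, entity, item, timestamp, source):
        self.events.append(ExposureEvent(entity, item, timestamp, source))

    def learned_by(self, entity):
        """Everything a given entity has been exposed to, in order."""
        return [e.knowledge_item for e in self.events if e.entity == entity]

exposure_log = ExposureLog()
exposure_log.record("George Washington", "oath of office text",
                    "1789-04-30", "inauguration")
```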
Example Output
Temporal Training Mode
Train entities across multiple timepoints with causal evolution.

Configuration
Historical context template
Number of timepoints in the temporal chain
What It Does
- Builds Temporal Chain - Creates a sequence of causally-linked timepoints
- Saves Timepoints - Persists timepoint data with:
  - Event descriptions
  - Timestamps
  - Resolution levels
  - Causal parent links
  - Entities present
- Processes Each Timepoint - Sequential training:
  - Retrieves previous knowledge state (causal propagation)
  - Generates new knowledge based on the current event
  - Updates entity metadata
  - Records exposure events for new information
- Validates Changes - Runs validators on each update:
  - Temporal causality checks
  - Network flow validation
  - Behavioral inertia
- Compresses Tensors - Applies tensor compression for storage efficiency
Temporal Chain Structure
Timepoints are causally linked. Each timepoint tracks:
- causal_parent - Previous timepoint ID
- event_description - What happened
- timestamp - When it occurred
- entities_present - Who was there
- resolution_level - Detail level
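The fields above suggest a structure like the following (a hedged sketch, not the project's actual class):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Timepoint:
    id: str
    event_description: str
    timestamp: str
    resolution_level: str = "SCENE"
    causal_parent: Optional[str] = None    # previous timepoint ID; None for the root
    entities_present: list = field(default_factory=list)

# A three-step causal chain: t1 -> t2 -> t3
t1 = Timepoint("t1", "Inauguration", "1789-04-30", entities_present=["Washington"])
t2 = Timepoint("t2", "First cabinet meeting", "1789-05-10", causal_parent="t1")
t3 = Timepoint("t3", "Tariff debate", "1789-06-01", causal_parent="t2")
```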
Knowledge Propagation
Knowledge flows forward through the chain. At each timepoint, the entity learns new information from the event:

Previous knowledge + New exposure = Updated knowledge state
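The propagation rule above can be sketched as a set union applied along the chain (a minimal illustration, not the project's actual propagation code):

```python
def propagate(previous_knowledge, new_exposures):
    """Causal propagation: carry prior knowledge forward, add what the event exposed."""
    return set(previous_knowledge) | set(new_exposures)

k1 = {"constitution ratified"}
k2 = propagate(k1, {"washington inaugurated"})   # timepoint 2
k3 = propagate(k2, {"cabinet formed"})           # timepoint 3
```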
Example Output
Validation During Training
All training modes enforce validators (Mechanism 1.2):

Temporal Causality
Ensures knowledge can only come from past events, not future ones.

Validates:
- Knowledge items have valid causal paths
- No anachronistic information
- Proper timepoint sequencing
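A minimal sketch of such a check, assuming each knowledge item carries the timestamp of its source event (names and data layout are hypothetical):

```python
def check_temporal_causality(knowledge_items, current_timestamp):
    """Flag knowledge whose source event lies in the future (anachronistic).

    ISO-8601 date strings compare correctly as plain strings."""
    return [
        item for item, source_ts in knowledge_items.items()
        if source_ts > current_timestamp
    ]

items = {
    "inauguration details": "1789-04-30",
    "whiskey rebellion": "1794-08-01",   # hasn't happened yet in 1789
}
```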
Network Flow

Verifies that information spreads through the relationship graph.

Validates:
- Knowledge flows along edges
- No spontaneous knowledge generation
- Social network constraints
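A hedged sketch of an edge-based check, assuming per-entity knowledge sets and an adjacency mapping (names hypothetical; the real validator may also account for direct witnessing):

```python
def check_network_flow(entity, item, knowledge_by_entity, edges):
    """An item reaching an entity must be held by at least one neighbor;
    otherwise it counts as spontaneous knowledge generation."""
    neighbors = edges.get(entity, set())
    return any(item in knowledge_by_entity.get(n, set()) for n in neighbors)

edges = {"Adams": {"Washington"}, "Washington": {"Adams"}}
knowledge = {"Washington": {"cabinet plan"}, "Adams": set()}
```

Here Adams can legitimately learn "cabinet plan" from Washington, but "secret treaty" would be flagged as spontaneous.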
Behavioral Inertia

Checks personality consistency over time.

Validates:
- Personality traits remain stable
- Character consistency
- No sudden personality shifts
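A minimal drift check, assuming traits are numeric scores in [0, 1] (the threshold and names are hypothetical):

```python
def check_behavioral_inertia(traits_before, traits_after, max_drift=0.2):
    """Return traits that shifted more than max_drift between updates."""
    return [
        trait for trait in traits_before
        if abs(traits_after.get(trait, 0.0) - traits_before[trait]) > max_drift
    ]

before = {"caution": 0.8, "ambition": 0.6}
after  = {"caution": 0.75, "ambition": 0.1}   # ambition dropped sharply
```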
Violation Handling
When violations occur:
- WARNING - Logged but training continues
- ERROR - Logged; could block update (currently logs only)
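This severity handling can be sketched with the standard logging module (a hedged sketch; the handler name and the block-on-error switch are assumptions, reflecting that errors currently only log):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("training.validators")

def handle_violation(severity, message, block_on_error=False):
    """Return True if the update should proceed."""
    if severity == "ERROR":
        log.error(message)
        return not block_on_error   # today: errors are logged, update still proceeds
    log.warning(message)            # warnings never block
    return True

proceed = handle_violation("WARNING", "knowledge item predates exposure window")
```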
Tensor Compression
Temporal training applies tensor compression (Mechanism 1.1) for storage efficiency. Behavior depends on the resolution level (TENSOR_ONLY vs. SCENE and above):

TENSOR_ONLY - Maximum Compression
- Stores ONLY compressed representations
- Removes full tensor data
- Compression methods: PCA, SVD
- Use for: Background entities, low-priority characters
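The TENSOR_ONLY path can be illustrated with truncated SVD over a 2-D entity tensor (a NumPy sketch; the project's actual compression code may differ):

```python
import numpy as np

def compress(tensor, rank=2):
    """Keep only the top-`rank` singular components; the full tensor is dropped."""
    u, s, vt = np.linalg.svd(tensor, full_matrices=False)
    return u[:, :rank], s[:rank], vt[:rank, :]

def decompress(u, s, vt):
    """Low-rank reconstruction from the stored factors."""
    return u @ np.diag(s) @ vt

rng = np.random.default_rng(0)
tensor = rng.normal(size=(8, 6))       # toy entity tensor
factors = compress(tensor, rank=2)     # 8*2 + 2 + 2*6 = 30 numbers vs. 48
approx = decompress(*factors)
```

For rank 2, the stored factors hold 30 numbers instead of the original 48; savings grow with tensor size.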
Hydra Configuration
All training modes use Hydra for configuration management; base settings live in conf/config.yaml.
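A hedged sketch of what conf/config.yaml might contain (keys are illustrative; check the repository's actual file):

```yaml
# Illustrative sketch of conf/config.yaml, not the real file
training:
  mode: train            # train | temporal_train | historical_training
  num_entities: 10
  resolution: FULL_CONTEXT
  seed: 42
historical:
  context: founding_fathers_1789
temporal:
  num_timepoints: 5
```

With Hydra, any key can be overridden on the command line via `key=value` syntax, e.g. `training.num_entities=20 training.seed=7` appended to the launch command.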
Command-Line Overrides
Cost Tracking
All training modes track costs:
- Basic training (10 entities): $0.10-0.20
- Historical training (5 entities): $0.02-0.05
- Temporal training (5 timepoints): $0.10-0.25
Next Steps
- Evaluate - Run evaluation metrics on trained entities
- Interactive Queries - Query your trained entities
- Run Command - Execute full simulations
- CLI Overview - Back to the CLI overview

