
System Requirements

Minimum Requirements:
  • Python 3.10, 3.11, 3.12, or 3.13
  • 4GB RAM (8GB recommended for comprehensive simulations)
  • 500MB disk space for code + dependencies
  • Internet connection for API calls
Timepoint Pro is a standalone simulation engine. It has no runtime dependencies on other Timepoint Suite services. All LLM calls go directly to OpenRouter. All data stays in local SQLite + flat files.
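The supported-version window can be checked programmatically; a minimal sketch (the `supported` helper is illustrative, not part of Timepoint Pro):

```python
import sys

def supported(version: tuple[int, int]) -> bool:
    """True when (major, minor) falls inside the supported 3.10-3.13 window."""
    return (3, 10) <= version <= (3, 13)

if __name__ == "__main__":
    mm = sys.version_info[:2]
    print(f"Python {mm[0]}.{mm[1]}:", "supported" if supported(mm) else "NOT supported")
```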

Installation Methods

Environment Configuration

API Keys

Timepoint Pro requires an OpenRouter API key. Optional: Oxen.ai key for dataset uploads.
1. Get an OpenRouter API key

  1. Visit openrouter.ai/keys
  2. Sign up (free tier available)
  3. Create a new API key
  4. Copy the key (format: sk-or-v1-...)
2. Create .env file

In the project root, create a .env file:
.env
# Required: OpenRouter API for LLM calls
OPENROUTER_API_KEY=sk-or-v1-your_key_here

# Optional: Oxen.ai for dataset versioning and uploads
OXEN_API_KEY=your_oxen_key_here

# Optional: Timepoint API (for cloud execution)
TIMEPOINT_API_KEY=your_timepoint_key_here
TIMEPOINT_API_URL=http://localhost:8080
Critical: Ensure your API key is on a single line with no line breaks. Multi-line keys cause “Illegal header value” errors.
3. Load environment variables

Before running any simulation:
# Export all variables from .env (grep strips the comment lines, which xargs would choke on)
export $(grep -v '^#' .env | xargs)

# Or source the file directly (set -a marks every variable it defines for export)
set -a; source .env; set +a

# Or set directly (must be ONE line, no breaks)
export OPENROUTER_API_KEY="your_key_here"
You must load environment variables in every new terminal session before running simulations.
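For Python scripts, the same variables can be loaded without shell tricks. Below is a minimal stdlib-only sketch (the third-party python-dotenv package does this more robustly); `load_env` is an illustrative helper, not a Timepoint Pro API:

```python
import os

def load_env(path: str = ".env") -> dict[str, str]:
    """Minimal .env loader: skip blanks and comments, strip whitespace,
    and push each KEY=VALUE pair into os.environ."""
    loaded: dict[str, str] = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            loaded[key.strip()] = value.strip().strip('"')
    os.environ.update(loaded)
    return loaded
```

Note the line-by-line parse: a key accidentally wrapped across two lines would load truncated, which is exactly why the single-line rule above matters.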
4. Verify environment

./run.sh doctor
This validates:
  • Python 3.10+ installation
  • .env file exists and API keys are set
  • Database paths are accessible
  • Key dependencies installed
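In spirit, those checks reduce to a handful of Python probes. The sketch below mimics the doctor output format; it is an illustration, not the actual run.sh implementation:

```python
import importlib.util
import os
import sys

def doctor_report() -> list[str]:
    """Rough equivalent of the `./run.sh doctor` checks (illustrative sketch)."""
    checks = [
        ("Python 3.10+ detected", sys.version_info >= (3, 10)),
        (".env file found", os.path.isfile(".env")),
        ("OPENROUTER_API_KEY set", bool(os.environ.get("OPENROUTER_API_KEY"))),
        ("Database paths accessible",
         os.path.isdir("metadata") and os.access("metadata", os.W_OK)),
        ("Key dependencies installed",
         importlib.util.find_spec("sqlite3") is not None),
    ]
    return [f"[{'OK' if passed else 'FAIL'}] {name}" for name, passed in checks]

if __name__ == "__main__":
    print("\n".join(doctor_report()))
```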

Directory Structure

After installation, your project structure looks like:
timepoint-pro/
├── .env                          # Your API keys (NEVER commit)
├── requirements.txt              # pip dependencies
├── pyproject.toml                # Poetry config + dependencies
├── run.sh                        # Main CLI entry point
├── run_all_mechanism_tests.py    # Python runner
├── metadata/
│   ├── runs.db                   # SQLite database for run metadata
│   └── tensors.db                # Entity tensor storage
├── output/
│   └── simulations/              # Generated simulation artifacts
├── templates/                    # JSON scenario templates (if present)
├── generation/
│   └── templates/
│       └── loader.py             # TemplateLoader class
├── workflows/                    # Dialog, portal, branching strategies
└── tests/                        # Pytest test suites

Verify Installation

1. Run environment check

./run.sh doctor
Expected output:
[OK] Python 3.10+ detected
[OK] .env file found
[OK] OPENROUTER_API_KEY set
[OK] Database paths accessible
[OK] Key dependencies installed
2. List available templates

./run.sh list
You should see 21 templates organized by category:
  • Showcase (13 templates): Production-ready scenarios
  • Persona (5 templates): Domain evaluator testing
  • Convergence (3 templates): Consistency evaluation
3. Run a quick test simulation (optional)

./run.sh run --free convergence_simple
This runs a lightweight template with free models ($0 cost) to verify the full pipeline.

Model Configuration

Default Models

Timepoint Pro uses OpenRouter to access multiple model providers:
Task             | Default Model  | Context | Cost
Graph generation | Llama 4 Scout  | 128K    | $0.40/M tokens
Dialog synthesis | Llama 4 Scout  | 128K    | $0.40/M tokens
PORTAL judging   | Llama 3.1 405B | 128K    | $3.00/M tokens
Summaries        | Llama 4 Scout  | 128K    | $0.40/M tokens
Updated February 2026: Costs are ~10x lower than previous estimates due to efficient Llama 4 Scout pricing.
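Those per-million rates make back-of-envelope run costs easy to estimate. The token counts in the example below are invented for illustration, not measured usage:

```python
# Per-million-token rates from the table above (dollars).
RATES = {
    "llama-4-scout": 0.40,
    "llama-3.1-405b": 3.00,
}

def run_cost(tokens_by_model: dict[str, int]) -> float:
    """Total dollar cost for a run, given token usage per model."""
    return sum(tokens * RATES[model] / 1_000_000
               for model, tokens in tokens_by_model.items())

# Hypothetical run: 2M Scout tokens (graphs + dialog), 100K 405B tokens (judging).
print(f"${run_cost({'llama-4-scout': 2_000_000, 'llama-3.1-405b': 100_000}):.2f}")  # $1.10
```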

Free Models

For testing without cost:
# Best quality free model (Qwen 235B, Llama 70B, etc.)
./run.sh run --free board_meeting

# Fastest free model (Gemini Flash, smaller Llama)
./run.sh run --free-fast board_meeting

# List currently available free models
python run_all_mechanism_tests.py --list-free-models
Free models have more restrictive rate limits and availability may rotate.

Model Override

Override the default model for all LLM calls:
# Use DeepSeek Chat (MIT license, training-safe)
./run.sh run --model deepseek/deepseek-chat board_meeting

# Use Gemini 3 Flash (1M context, fast inference)
./run.sh run --gemini-flash board_meeting

# Use Groq for ultra-fast inference (~300 tok/s)
./run.sh run --groq board_meeting
Model Licensing for Training Data: If you plan to use simulation outputs as training data for fine-tuning, you must use models whose licenses permit it:
License    | Models                               | Training Data Status
MIT        | DeepSeek Chat, DeepSeek R1           | ✓ Fully unrestricted
Apache 2.0 | Mistral 7B, Mixtral 8x7B/8x22B       | ✓ Fully unrestricted
Llama      | Llama 3.1 8B/70B/405B, Llama 4 Scout | ✗ Restricted - prohibits training non-Llama models
Qwen       | Qwen 2.5 7B/72B, QwQ 32B             | ✓ Permissive for most uses
Default behavior: The model selector (M18) automatically filters to training-safe models (MIT/Apache-2.0) when for_training_data=True.
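That filtering behavior amounts to a license allowlist. The sketch below restates the table and approximates the M18 selector rather than reproducing its actual code; the OpenRouter model IDs are plausible spellings (check openrouter.ai for exact slugs):

```python
# License per model ID, restating the table above (IDs are illustrative).
MODEL_LICENSES = {
    "deepseek/deepseek-chat": "MIT",
    "deepseek/deepseek-r1": "MIT",
    "mistralai/mixtral-8x22b-instruct": "Apache-2.0",
    "meta-llama/llama-4-scout": "Llama",
    "qwen/qwen-2.5-72b-instruct": "Qwen",
}

TRAINING_SAFE_LICENSES = {"MIT", "Apache-2.0"}

def candidate_models(for_training_data: bool) -> list[str]:
    """Restrict to MIT/Apache-2.0 models when outputs will become training data."""
    if not for_training_data:
        return sorted(MODEL_LICENSES)
    return sorted(m for m, lic in MODEL_LICENSES.items()
                  if lic in TRAINING_SAFE_LICENSES)
```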

Database Setup

Timepoint Pro uses SQLite for persistence. No manual database setup required.

Automatic Initialization

Databases are created automatically on first run:
  • metadata/runs.db - Stores run metadata, convergence sets, usage tracking
  • metadata/tensors.db - Stores entity tensors and embeddings
  • output/simulations/sim_TIMESTAMP.db - Per-run simulation database
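Because these are ordinary SQLite files, the stdlib sqlite3 module can inspect any of them. `list_tables` is an illustrative helper, not a Timepoint Pro API (note that sqlite3.connect creates an empty file if the path does not exist yet):

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Return the table names in a SQLite file, e.g. metadata/runs.db."""
    conn = sqlite3.connect(db_path)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]
```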

Manual Initialization (Optional)

If you need to manually initialize:
# Create metadata directories
mkdir -p metadata output/simulations

# Run database migrations (if applicable)
python -c "from storage import GraphStore; GraphStore('sqlite:///metadata/runs.db')"

Optional: Oxen.ai Integration

For dataset versioning and collaborative data management:
1. Get an Oxen.ai API key

  1. Visit oxen.ai
  2. Create an account
  3. Generate an API key
2. Add to .env

.env
OXEN_API_KEY=your_oxen_key_here
3. Auto-upload training data

When OXEN_API_KEY is set, training data is automatically uploaded to Oxen.ai after simulation runs complete.
./run.sh run board_meeting
# Training data auto-uploaded to Oxen.ai with full versioning
Oxen.ai provides Git-like versioning for datasets. When enabled, all simulation outputs (JSONL, SQLite, TDF) are automatically tracked and versioned.

Testing Your Installation

Unit Tests

Run the test suite to verify everything works:
# All unit tests (fast, no LLM calls)
./run.sh test unit

# SynthasAIzer tests (142 ADPRS waveform tests)
./run.sh test synth

# All mechanism tests (M1-M19)
./run.sh test mechanisms

# Specific mechanism (e.g., M7 causal chains)
./run.sh test m7

# With coverage report
./run.sh test unit --coverage

Integration Tests

# Waveform pipeline integration
pytest tests/integration/test_adprs_phase2_integration.py \
  tests/integration/test_waveform_sufficiency.py -v

Development Setup

Code Quality Tools

Timepoint Pro uses:
  • ruff - Fast Python linter (replaces flake8, isort)
  • black - Code formatter
  • mypy - Type checking
  • pytest - Testing framework
# Format with black
black . --line-length 100

# Lint with ruff
ruff check .

# Auto-fix linting issues
ruff check --fix .

Security

Static analysis is integrated:
# Run Bandit security scan
bandit -r . -ll

# Run Semgrep security scan
semgrep --config=auto .
All HIGH security findings have been resolved as of February 2026. Embedding index uses numpy .npz + JSON sidecar (safe serialization). All DB queries are parameterized. No hardcoded secrets.

Troubleshooting

Problem: requires-python >=3.10,<3.14
Solution: Use Python 3.10, 3.11, 3.12, or 3.13. Check your version:
python --version
Install Python 3.10+ via:
  • macOS: brew install python@3.10
  • Ubuntu: sudo apt install python3.10
  • Windows: Download from python.org
Problem: grpcio fails to build from source on macOS M1/M2
Solution: Use pre-built wheels by pinning to version ≥1.68.1 (quote the specifier so the shell doesn't treat >= as a redirect):
pip install "grpcio>=1.68.1"
Or install with conda:
conda install grpcio
Problem: ImportError: numpy.core.multiarray failed to import
Solution: Requirements pin NumPy to 1.x for matplotlib/numba compatibility:
pip install "numpy>=1.26,<2.0"
Problem: ModuleNotFoundError after poetry install
Solution: Install with dev dependencies:
poetry install --with dev
Or install specific missing packages:
poetry add msgspec
Problem: OperationalError: database is locked
Solution: Another process is using the database. Kill it:
# Find Python processes
ps aux | grep python

# Kill the process
kill -9 <PID>
Or use a different database:
./run.sh run board_meeting --db metadata/runs_alt.db
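When the lock is transient (another process mid-write), raising SQLite's busy timeout in your own scripts usually avoids the error entirely. `timeout` is a standard sqlite3.connect parameter; the helper itself is an illustrative sketch:

```python
import sqlite3

def open_with_timeout(db_path: str, seconds: float = 30.0) -> sqlite3.Connection:
    """Open a SQLite DB, waiting up to `seconds` for a competing writer
    instead of raising "database is locked" immediately."""
    conn = sqlite3.connect(db_path, timeout=seconds)
    conn.execute(f"PRAGMA busy_timeout = {int(seconds * 1000)}")  # same setting, in ms
    return conn
```

For example, `open_with_timeout("metadata/runs.db")` tolerates a run that is still flushing its final write.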

Next Steps

First Simulation

Run your first template and understand the output

Templates

Explore 21 production templates

Temporal Modes

Forward, Portal, Branching, Cyclical, Directorial

API Reference

Programmatic simulation submission