AI for event forecasting
Forecast real-world events and get better over time.
AI can already generate plausible answers. XRTM is built for the harder job: forecasting real-world events, keeping score, and learning whether later changes actually help. Start with one provider-free demo, then expand into benchmarking, monitoring, history, and advanced local-model paths when you want a real candidate change.
Local-LLM mode is supported, but it is intentionally optional and secondary. The default first run stays provider-free so you can prove the event-forecasting loop before adding model-serving complexity.
python3.11 -m venv .venv
. .venv/bin/activate
pip install xrtm==0.3.1
xrtm start
xrtm runs show latest --runs-dir runs
xrtm artifacts inspect --latest --runs-dir runs
xrtm report html --latest --runs-dir runs
xrtm web --runs-dir runs
This path proves the core product with released features only: one provider-free demo, scored artifacts, and a browser or terminal view over the same saved evidence. It establishes the honest baseline before you move into benchmarking, monitoring, and advanced local-model paths.
Why XRTM
A forecasting system, not just a prompt.
The point is not to admire one answer. The point is to run forecasts, keep the evidence, measure the result, and learn from it. Philosophy and package internals remain available, but they no longer replace the product story.
Know what happened, not just what was said
Every run writes canonical artifacts to disk so you can inspect the forecast payloads, scores, events, reports, and logs after the model finishes.
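Because the artifacts are plain files, you can post-process them with nothing but the standard library. A minimal sketch, assuming the run directory contains JSON artifacts (the exact layout and filenames are an assumption here, not XRTM's documented schema):

```python
import json
from pathlib import Path

def load_run_artifacts(run_dir):
    """Collect every JSON artifact under a run directory into one dict,
    keyed by the file's path relative to the run root.

    The *.json layout is assumed for illustration; adjust the glob to
    whatever your runs directory actually contains.
    """
    run_dir = Path(run_dir)
    artifacts = {}
    for path in sorted(run_dir.rglob("*.json")):
        with path.open() as f:
            artifacts[str(path.relative_to(run_dir))] = json.load(f)
    return artifacts
```

From there, any scoring or diffing you do happens against saved evidence rather than a transcript you have to trust from memory.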
Start with proof, not setup
The guided first run uses the built-in mock provider so you can prove the forecasting loop end to end before adding API keys, model downloads, or a hosted control plane.
Improve forecasting systems, not chat demos
XRTM focuses on probabilistic workflows: scored runs, calibration-aware evaluation, historical replay, and repeatable operator paths that help you learn which changes genuinely improve the system.
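Calibration-aware evaluation rests on proper scoring rules such as the Brier score. As a quick standalone illustration (this is the textbook formula, not XRTM's internal implementation):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and binary
    outcomes (0 = did not happen, 1 = happened). Lower is better; a
    forecaster that always says 0.5 scores 0.25 on any outcome mix.
    """
    if len(forecasts) != len(outcomes):
        raise ValueError("forecasts and outcomes must align")
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
```

A confident, correct forecaster (`brier_score([0.9, 0.1], [1, 0])` ≈ 0.01) beats a hedging one (`brier_score([0.5, 0.5], [1, 0])` = 0.25), which is exactly the property that makes scored runs worth keeping over time.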
What you can do today
Prove the event-forecasting loop with shipped workflows.
Provider-free first success
Use the release-gated provider-free demo to prove the first event-forecasting loop, inspect explicit run artifacts, and open the same run in WebUI or TUI.
xrtm demo --provider mock --limit 1 --runs-dir runs
xrtm runs list --runs-dir runs
xrtm runs show <run-id> --runs-dir runs
xrtm artifacts inspect runs/<run-id>
xrtm report html runs/<run-id>
xrtm web --runs-dir runs
Benchmark and performance workflow
Generate deterministic benchmark evidence first, treat it as the stable control, then use the released compare/export surface to judge later changes honestly.
xrtm perf run --scenario provider-free-smoke --iterations 3 --limit 1 --runs-dir runs-perf --output performance.json
xrtm web --runs-dir runs --smoke
Monitoring, history, and report workflow
Move from one-off runs into repeatable profiles, monitoring, compare/export review, and explicit keep-or-revert decisions.
xrtm profile create my-local --provider mock --limit 2 --runs-dir runs
xrtm run profile my-local
xrtm monitor start --provider mock --limit 2 --runs-dir runs
xrtm runs export <run-id> --runs-dir runs --output export.json
Local-LLM advanced workflow
Once the provider-free workflows are healthy, verify your local endpoint and then run the bounded local-LLM demo.
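Before pointing XRTM at a local endpoint, a quick pre-flight probe can save a failed run. A minimal sketch, assuming the server speaks an OpenAI-compatible API (suggested by the `/v1` base URL below; `models_url` and `probe_local_llm` are illustrative helpers, not part of XRTM):

```python
import json
import os
import urllib.request

def models_url(base_url):
    """Join the base URL with the standard /models listing path."""
    return base_url.rstrip("/") + "/models"

def probe_local_llm(base_url=None, timeout=5):
    """Return the model ids the endpoint advertises, or raise on failure.
    Assumes an OpenAI-compatible server; adjust for your actual stack.
    """
    base_url = base_url or os.environ.get(
        "XRTM_LOCAL_LLM_BASE_URL", "http://localhost:8080/v1"
    )
    with urllib.request.urlopen(models_url(base_url), timeout=timeout) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]
```

If the probe raises or returns an empty list, fix the serving side first; there is no point debugging forecast quality against an endpoint that is not answering.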
export XRTM_LOCAL_LLM_BASE_URL=http://localhost:8080/v1
xrtm local-llm status
xrtm demo --provider local-llm --limit 1 --max-tokens 768 --runs-dir runs-local
Audience paths
Choose the path that matches your job.
Researcher / model-eval
Run repeatable passes, compare outputs, review Brier and calibration signals, and keep the evidence on disk.
Open path →
Operator
Create profiles, manage run directories, inspect artifacts, use monitoring, and bring up the WebUI or TUI for daily operations.
Open path →
Team
Use shared conventions, exports, and run history honestly today while keeping built-in multi-user features clearly out of scope.
Open path →
Developer / integrator
Start from the shipped CLI path, then drop into package APIs, providers, and example scripts when you need custom integration.
Open path →
Packages and architecture
Use the product shell first; learn the package stack when you need depth.
Newcomers should not need package taxonomy to reach first success, but the package boundaries are real and documented once you are ready to go deeper.
xrtm
Product shell
Provider-free demo path, canonical artifacts, HTML reports, WebUI, TUI, profiles, compare/export, and monitoring for the first event-forecasting loop.
xrtm-forecast
Runtime package
Forecast runtime, orchestration, inference providers, and reasoning workflows for event-forecasting systems.
xrtm-eval
Evaluation package
Brier scoring, calibration-focused evaluation, and verification utilities.
xrtm-data
Data package
Schemas and temporal snapshot foundations for zero-leakage evaluation.
xrtm-train
Training package
Backtesting, replay, calibration demos, and optimization loops built on the rest of the stack.
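The zero-leakage idea behind xrtm-data comes down to a hard temporal cutoff: when replaying a forecast question, only evidence observed before the question was asked may be visible. A minimal sketch (field names like `observed_at` are illustrative, not the real xrtm-data schema):

```python
from datetime import datetime, timezone

def snapshot_before(records, cutoff):
    """Return only records observed strictly before the cutoff, so a
    backtest never sees evidence from after the forecast was asked.
    Field names here are illustrative, not a real xrtm-data schema.
    """
    return [r for r in records if r["observed_at"] < cutoff]

cutoff = datetime(2024, 6, 1, tzinfo=timezone.utc)
records = [
    {"observed_at": datetime(2024, 5, 20, tzinfo=timezone.utc), "fact": "pre-cutoff"},
    {"observed_at": datetime(2024, 6, 2, tzinfo=timezone.utc), "fact": "leaked"},
]
visible = snapshot_before(records, cutoff)  # only the pre-cutoff record survives
```

Getting this filter wrong inflates backtest scores silently, which is why the snapshot foundation sits in its own package rather than inside each workflow.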
Keep philosophy and roadmap in context
The philosophy, standard, and roadmap still matter, but they now live behind the product path instead of replacing it. Start with `xrtm demo --provider mock --limit 1 --runs-dir runs` and the docs overview, then go deeper once you understand the shipped workflow.