
AI for event forecasting

Forecast real-world events and get better over time.

AI can already generate plausible answers. XRTM is built for the harder job: forecasting real-world events, keeping score, and learning whether later changes actually help. Start with one provider-free demo, then expand into benchmarking, monitoring, history, and advanced local-model paths when you are ready to test a real candidate change.

Local-LLM mode is supported, but it is intentionally optional and secondary. The default first run stays provider-free so you can prove the event-forecasting loop before adding model-serving complexity.

First forecasting loop: forecast → score → inspect → improve
```bash
# Create an isolated environment and install the released package
python3.11 -m venv .venv
. .venv/bin/activate
pip install xrtm==0.3.1

# Run the guided, provider-free first forecast (uses the built-in mock provider)
xrtm start

# Inspect what the run actually wrote to disk
xrtm runs show latest --runs-dir runs
xrtm artifacts inspect --latest --runs-dir runs

# View the same saved evidence as an HTML report or in the browser
xrtm report html --latest --runs-dir runs
xrtm web --runs-dir runs
```

This path proves the core product with released features only: one provider-free demo, scored artifacts, and a browser or terminal view over the same saved evidence. It establishes the honest baseline before you move into benchmarking, monitoring, and advanced local-model paths.

- Provider-free first success
- Benchmark and performance workflow
- Monitoring/history/report workflow
- Local-LLM advanced workflow

Why XRTM

A forecasting system, not just a prompt.

The point is not to admire one answer. The point is to run forecasts, keep the evidence, measure the result, and learn from it. Philosophy and package internals remain available, but they no longer replace the product story.

Know what happened, not just what was said

Every run writes canonical artifacts to disk so you can inspect the forecast payloads, scores, events, reports, and logs after the model finishes.
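Because the artifacts are ordinary files on disk, auditing a run needs nothing beyond the standard library. A minimal sketch, assuming a `runs/` directory with one subdirectory per run; the layout and file names here are assumptions, not XRTM's documented schema:

```python
from pathlib import Path

# Illustrative only: assumes runs/ contains one subdirectory per run.
# The layout is an assumption, not XRTM's documented schema.
def list_run_artifacts(runs_dir: str = "runs") -> None:
    root = Path(runs_dir)
    if not root.is_dir():
        print(f"no runs directory at {root.resolve()}")
        return
    for run_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        print(f"run: {run_dir.name}")
        for artifact in sorted(run_dir.rglob("*")):
            if artifact.is_file():
                print(f"  {artifact.relative_to(run_dir)}")

list_run_artifacts()
```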

Start with proof, not setup

The guided first run uses the built-in mock provider so you can prove the forecasting loop end to end before adding API keys, model downloads, or a hosted control plane.

Improve forecasting systems, not chat demos

XRTM focuses on probabilistic workflows: scored runs, calibration-aware evaluation, historical replay, and repeatable operator paths that help you learn which changes genuinely improve the system.
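Calibration-aware evaluation rests on simple primitives. The sketch below is plain Python, not the xrtm-eval API: it computes the Brier score (mean squared error between forecast probabilities and binary outcomes, lower is better) and bins forecasts to compare stated confidence against observed frequency.

```python
# Plain-Python sketch of Brier scoring and a coarse calibration check.
# Illustrates the metrics XRTM's evaluation is built around; not the xrtm-eval API.
def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_bins(forecasts: list[float], outcomes: list[int], bins: int = 10) -> None:
    """Per probability bin, compare mean forecast to observed frequency."""
    buckets: dict[int, list[tuple[float, int]]] = {}
    for p, o in zip(forecasts, outcomes):
        buckets.setdefault(min(int(p * bins), bins - 1), []).append((p, o))
    for b in sorted(buckets):
        pairs = buckets[b]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        print(f"bin {b}: mean forecast {mean_p:.2f}, observed {freq:.2f}, n={len(pairs)}")

forecasts = [0.9, 0.8, 0.7, 0.3, 0.2, 0.6]
outcomes  = [1,   1,   0,   0,   0,   1]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
calibration_bins(forecasts, outcomes, bins=5)
```

A well-calibrated forecaster's bins converge: events assigned 70% probability should resolve true roughly 70% of the time.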

What you can do today

Prove the event-forecasting loop with shipped workflows.

Audience paths

Choose the path that matches your job.

Packages and architecture

Use the product shell first; learn the package stack when you need depth.

Newcomers should not need package taxonomy to reach first success, but the package boundaries are real and documented once you are ready to go deeper.

xrtm

Product shell

Provider-free demo path, canonical artifacts, HTML reports, WebUI, TUI, profiles, compare/export, and monitoring for the first event-forecasting loop.

xrtm-forecast

Runtime package

Forecast runtime, orchestration, inference providers, and reasoning workflows for event-forecasting systems.

xrtm-eval

Evaluation package

Brier scoring, calibration-focused evaluation, and verification utilities.

xrtm-data

Data package

Schemas and temporal snapshot foundations for zero-leakage evaluation.
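Zero-leakage evaluation means a backtested forecast may only see data recorded before that question's cutoff. A minimal sketch of the idea in plain Python; the record shape and field names are assumptions, not xrtm-data's schema:

```python
from datetime import datetime, timezone

# Illustrative snapshot filter: keep only records observed strictly before
# the forecast cutoff, so a backtest cannot "see" the future.
# Field names are assumptions, not xrtm-data's schema.
def snapshot(records: list[dict], cutoff: datetime) -> list[dict]:
    return [r for r in records if r["observed_at"] < cutoff]

records = [
    {"observed_at": datetime(2024, 1, 10, tzinfo=timezone.utc), "fact": "poll A"},
    {"observed_at": datetime(2024, 3, 2, tzinfo=timezone.utc), "fact": "poll B"},
    {"observed_at": datetime(2024, 6, 15, tzinfo=timezone.utc), "fact": "result"},
]
cutoff = datetime(2024, 4, 1, tzinfo=timezone.utc)
visible = snapshot(records, cutoff)  # "result" is excluded: it post-dates the cutoff
print([r["fact"] for r in visible])
```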

xrtm-train

Training package

Backtesting, replay, calibration demos, and optimization loops built on the rest of the stack.
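Backtesting ties the pieces together: replay each resolved question against a point-in-time view of the data, forecast from that view alone, and average the Brier error. A toy sketch with stand-in names, not the xrtm-train API:

```python
from datetime import datetime, timezone

# Toy backtest loop: replay resolved questions against point-in-time views
# of the record stream and average the Brier error. All names here are
# stand-ins for illustration, not the xrtm-train API.
def forecast(evidence_count: int) -> float:
    # Hypothetical forecaster: a base rate nudged by available evidence.
    return min(0.5 + 0.1 * evidence_count, 0.95)

def backtest(questions: list[dict], records: list[dict]) -> float:
    errors = []
    for q in questions:
        visible = [r for r in records if r["observed_at"] < q["cutoff"]]
        p = forecast(len(visible))              # forecast sees only pre-cutoff data
        errors.append((p - q["outcome"]) ** 2)  # Brier contribution
    return sum(errors) / len(errors)

records = [{"observed_at": datetime(2024, m, 1, tzinfo=timezone.utc)} for m in (1, 2, 5)]
questions = [
    {"cutoff": datetime(2024, 3, 1, tzinfo=timezone.utc), "outcome": 1},
    {"cutoff": datetime(2024, 6, 1, tzinfo=timezone.utc), "outcome": 0},
]
print(f"mean Brier over replay: {backtest(questions, records):.3f}")
```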

Keep philosophy and roadmap in context

The philosophy, standard, and roadmap still matter, but they now live behind the product path instead of replacing it. Start with `xrtm demo --provider mock --limit 1 --runs-dir runs` and the docs overview, then go deeper once you understand the shipped workflow.