# Core Module
The foundational classes for building and running Monte Carlo simulations with parallel execution, reproducible seeding, and result aggregation.
## Quick Start
```python
from mcframework.core import MonteCarloSimulation
import numpy as np

class DiceSim(MonteCarloSimulation):
    def single_simulation(self, _rng=None, n_dice=2):
        rng = self._rng(_rng, self.rng)
        return float(rng.integers(1, 7, size=n_dice).sum())

sim = DiceSim(name="2d6")
sim.set_seed(42)
result = sim.run(50_000, backend="thread")
print(f"Mean: {result.mean:.2f}")            # ~7.0
print(f"95% CI: {result.stats['ci_mean']}")
```
## Classes
### SimulationResult
Container for the outcome of a Monte Carlo run: the raw simulation outputs plus computed statistics. See SimulationResult for full attribute documentation.
Quick Reference:

| Attribute | Description |
|---|---|
| `results` | Raw NumPy array of simulation values (length = number of draws) |
| … | Number of simulation draws performed |
| … | Wall-clock time in seconds |
| `mean` | Sample mean \(\bar{X}\) |
| `std` | Sample standard deviation (ddof=1) |
| `percentiles` | Dict mapping percentile keys to values, e.g., `percentiles[50]` for the median |
| `stats` | Additional statistics from the stats engine (e.g., `ci_mean`) |
| … | Freeform dict with run metadata |
Usage:

```python
result = sim.run(10_000)

# Access raw data
raw = result.results  # np.ndarray

# Access computed stats
print(result.mean, result.std)
print(result.percentiles[50])    # Median
print(result.stats['ci_mean'])   # {'low': ..., 'high': ..., ...}

# Pretty print
print(result.result_to_string())
```
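For intuition, `ci_mean` is a confidence interval for the sample mean. A minimal sketch of what such an interval typically contains, using the normal approximation (hypothetical `normal_ci_mean`; the stats engine's actual method may differ):

```python
from statistics import NormalDist
import numpy as np

def normal_ci_mean(values, level=0.95):
    """Normal-approximation confidence interval for the sample mean."""
    values = np.asarray(values, dtype=float)
    mean = values.mean()
    se = values.std(ddof=1) / np.sqrt(values.size)   # standard error
    z = NormalDist().inv_cdf(0.5 + level / 2.0)      # ≈ 1.96 for 95%
    return {"low": mean - z * se, "high": mean + z * se}

ci = normal_ci_mean([1.0, 2.0, 3.0, 4.0, 5.0])
# ci["low"] ≈ 1.614, ci["high"] ≈ 4.386
```

The interval narrows as \(1/\sqrt{n}\), which is why large `n_simulations` values pay off.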
### MonteCarloSimulation
Abstract base class for defining Monte Carlo simulations. Subclass it and implement `single_simulation()`.
Key Attributes:

- …: Whether this simulation supports batch GPU execution.
Key Methods:

| Method | Description |
|---|---|
| `single_simulation()` | Abstract. Implement to define your simulation logic. Return a single numeric value. |
| `run()` | Execute the simulation many times and return a `SimulationResult`. |
| `set_seed()` | Initialize the RNG with a seed for reproducibility. |
| `_rng()` | Helper to select the thread-local RNG inside `single_simulation()`. |
| … | Optional. Vectorized Torch implementation for GPU acceleration. |
| … | Class attribute. Set to indicate batch GPU support. |
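The `_rng` helper row above can be illustrated with a minimal sketch (hypothetical `resolve_rng`, showing the fallback pattern rather than the framework's actual code): each worker passes its own generator via `_rng`, and the simulation falls back to its default generator when none is supplied.

```python
import numpy as np

def resolve_rng(worker_rng, default_rng):
    # Prefer the per-worker generator handed in by the parallel
    # executor; fall back to the simulation's own generator.
    return worker_rng if worker_rng is not None else default_rng

default = np.random.default_rng(0)
worker = np.random.default_rng(1)
assert resolve_rng(None, default) is default
assert resolve_rng(worker, default) is worker
```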
Example Implementation:

```python
from mcframework.core import MonteCarloSimulation

class PiEstimator(MonteCarloSimulation):
    """Estimate π using random points in a unit square."""

    def single_simulation(self, _rng=None, n_points=10_000):
        rng = self._rng(_rng, self.rng)
        x = rng.random(n_points)
        y = rng.random(n_points)
        inside = (x * x + y * y) <= 1.0
        return 4.0 * inside.mean()

sim = PiEstimator(name="Pi")
sim.set_seed(42)
result = sim.run(1000, backend="thread")
print(f"π ≈ {result.mean:.6f}")
```
### MonteCarloFramework
Registry and runner for managing multiple named simulations: register them once, then run and compare their results.
Usage:

```python
from mcframework.core import MonteCarloFramework

framework = MonteCarloFramework()
framework.register_simulation(sim1)
framework.register_simulation(sim2)

# Run simulations
res1 = framework.run_simulation("Sim1", 10_000, backend="thread")
res2 = framework.run_simulation("Sim2", 10_000)

# Compare results
comparison = framework.compare_results(["Sim1", "Sim2"], metric="mean")
print(comparison)  # {'Sim1': 3.14, 'Sim2': 3.15}
```
## Execution Backends
The framework supports multiple execution backends via the `backend` parameter:
| Backend | Description | Best For |
|---|---|---|
| `sequential` | Single-threaded execution | Debugging, small jobs (< 20K) |
| `thread` | Thread-pool execution | NumPy-heavy code (releases GIL) |
| `process` | Process-pool execution | Python-bound code, Windows |
| `torch` | GPU-accelerated batch execution | Large jobs (100K+), GPU available |
Auto Selection (`backend="auto"`):

- Small jobs (< 20K): sequential execution
- POSIX (macOS/Linux): defaults to threads (NumPy releases the GIL)
- Windows: defaults to processes (threads serialize under the GIL)
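These rules can be sketched as a small dispatch function (hypothetical `pick_backend`; the framework's real selection logic may differ in detail, though the 20K cutoff matches the rule above):

```python
import sys

def pick_backend(n_sims, backend="auto", small_job_cutoff=20_000):
    """Re-statement of the auto-selection rules (illustrative only)."""
    if backend != "auto":
        return backend                # explicit choice wins
    if n_sims < small_job_cutoff:
        return "sequential"           # small jobs: skip parallel overhead
    if sys.platform.startswith("win"):
        return "process"              # Windows: threads serialize under the GIL
    return "thread"                   # POSIX: NumPy releases the GIL
```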
```python
# Explicit backend selection
result = sim.run(100_000, backend="sequential")  # Single-threaded
result = sim.run(100_000, backend="thread", n_workers=8)
result = sim.run(100_000, backend="process", n_workers=4)
result = sim.run(100_000, backend="auto")        # Platform default

# GPU backends (requires: pip install mcframework[gpu])
result = sim.run(1_000_000, backend="torch", torch_device="cpu")
result = sim.run(1_000_000, backend="torch", torch_device="mps")   # Apple Silicon
result = sim.run(1_000_000, backend="torch", torch_device="cuda")  # NVIDIA GPU
```
## Reproducibility
Results are reproducible via NumPy's `SeedSequence`:
```python
sim.set_seed(42)
result1 = sim.run(10_000, backend="thread")

sim.set_seed(42)
result2 = sim.run(10_000, backend="thread")

assert np.allclose(result1.results, result2.results)  # Identical!
```
Each parallel worker receives an independent child sequence via `SeedSequence.spawn()`, ensuring deterministic streams regardless of scheduling order.
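The spawn-based seeding scheme can be demonstrated directly with plain `numpy.random` APIs (no mcframework code involved):

```python
import numpy as np

root = np.random.SeedSequence(42)
children = root.spawn(4)                        # one child per worker
rngs = [np.random.default_rng(c) for c in children]
streams = [rng.random(3) for rng in rngs]

# Re-spawning from the same root seed reproduces the same streams,
# independent of worker scheduling order.
rngs2 = [np.random.default_rng(c) for c in np.random.SeedSequence(42).spawn(4)]
streams2 = [rng.random(3) for rng in rngs2]
assert all(np.allclose(a, b) for a, b in zip(streams, streams2))
```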
For GPU backends, explicit `Generator` objects are seeded from the same `SeedSequence`, preserving reproducibility across CPU and GPU execution.
## Functions
`make_blocks()`: Partition an integer range \([0, n)\) into half-open blocks \((i, j)\).
Chunking Helper:

```python
from mcframework.core import make_blocks

blocks = make_blocks(100_000, block_size=10_000)
# [(0, 10000), (10000, 20000), ..., (90000, 100000)]
```
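The semantics described above can be reproduced in a few lines (a hypothetical `make_blocks_sketch`, shown only to pin down the half-open block behavior, including the truncated final block):

```python
def make_blocks_sketch(n, block_size):
    # Partition [0, n) into half-open (start, stop) pairs;
    # the final block is truncated at n if block_size does not divide n.
    return [(i, min(i + block_size, n)) for i in range(0, n, block_size)]

make_blocks_sketch(25, 10)  # [(0, 10), (10, 20), (20, 25)]
```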
## See Also
- Backends Module — Execution backends (sequential, parallel, GPU)
- Stats Engine — Statistical metrics and confidence intervals
- Simulations Module — Built-in simulation implementations (Pi, Portfolio, Black-Scholes)
- Utilities Module — Critical value utilities