mcframework.sims.BlackScholesPathSimulation#
- class mcframework.sims.BlackScholesPathSimulation[source]#
Bases: MonteCarloSimulation
Simulate stock price paths under Black-Scholes dynamics.
Methods
- cupy_batch(): Optional vectorized cuRAND implementation using CuPy.
- run(): Run the Monte Carlo simulation.
- set_seed(): Set the random seed for reproducible experiments.
- simulate_paths(): Generate \(n_{\text{paths}}\) independent GBM paths.
- single_simulation(): Draw a GBM path and return the terminal value \(S_T\).
- torch_batch(): Optional vectorized Torch implementation.
- cupy_batch(n: int, *, device: torch.device, rng: cupy.random.RandomState) cupy.ndarray#
Optional vectorized cuRAND implementation using CuPy.
Override this method in subclasses to enable GPU-accelerated batch execution. When implemented alongside supports_batch = True, the framework will use this method instead of repeated single_simulation calls.
- Parameters:
- n : int
Number of simulation draws.
- device : torch.device
Device to use for the simulation ("cuda").
- rng : cupy.random.RandomState
cuRAND generator for reproducible random sampling.
- Returns:
cupy.ndarray
A 1D array of length n containing simulation results.
- run(n_simulations: int, *, backend: str = 'auto', torch_device: str = 'cpu', cuda_device_id: int = 0, cuda_use_curand: bool = False, cuda_batch_size: int | None = None, cuda_use_streams: bool = True, parallel: bool | None = None, n_workers: int | None = None, progress_callback: Callable[[int, int], None] | None = None, percentiles: Iterable[int] | None = None, compute_stats: bool = True, stats_engine: StatsEngine | None = None, confidence: float = 0.95, ci_method: str = 'auto', extra_context: Mapping[str, Any] | None = None, **simulation_kwargs: Any) SimulationResult#
Run the Monte Carlo simulation.
- Parameters:
- n_simulations : int
Number of simulation draws.
- backend : {"auto", "sequential", "thread", "process", "torch"}, default "auto"
Execution backend to use:
- "auto": Sequential for small jobs, parallel (thread/process) for large jobs
- "sequential": Single-threaded execution
- "thread": Thread-based parallelism (best when NumPy releases the GIL)
- "process": Process-based parallelism (required on Windows for true parallelism)
- "torch": Torch batch execution (requires supports_batch = True)
- torch_device : {"cpu", "mps", "cuda"}, default "cpu"
Torch device for backend="torch". Ignored for other backends.
- "cpu": Safe default, works everywhere
- "mps": Apple Metal Performance Shaders (M1/M2/M3 Macs)
- "cuda": NVIDIA GPU acceleration
- cuda_device_id : int, default 0
CUDA device index for multi-GPU systems. Only used when backend="torch" and torch_device="cuda".
- cuda_use_curand : bool, default False
Use cuRAND (via CuPy) instead of torch.Generator for maximum GPU performance. Requires CuPy and a cupy_batch() implementation.
- cuda_batch_size : int or None, default None
Fixed batch size for CUDA execution. If None, automatically estimates the optimal batch size based on available GPU memory.
- cuda_use_streams : bool, default True
Use CUDA streams for overlapped execution. Recommended for performance.
- parallel : bool, optional
Deprecated. Use backend instead. If provided, parallel=True maps to backend="auto" with a parallel preference; parallel=False maps to backend="sequential".
- n_workers : int, optional
Worker count for parallel backends. Defaults to the CPU count.
- progress_callback : callable, optional
A function f(completed: int, total: int) called periodically.
- percentiles : iterable of int, optional
Percentiles to compute from raw results. If None and compute_stats=True, the stats engine's defaults (_PCTS) are used; if compute_stats=False, no percentiles are computed unless explicitly provided.
- compute_stats : bool, default True
Compute additional metrics via a StatsEngine.
- stats_engine : StatsEngine, optional
Custom engine (defaults to mcframework.stats_engine.DEFAULT_ENGINE).
- confidence : float, default 0.95
Confidence level for CI-related metrics.
- ci_method : {"auto", "z", "t"}, default "auto"
Which critical values the stats engine should use.
- extra_context : mapping, optional
Extra context forwarded to the stats engine.
- **simulation_kwargs : Any
Keyword arguments forwarded to single_simulation().
- Returns:
SimulationResult
See also
run_simulation(): Run a registered simulation by name.
Notes
MPS determinism caveat. When using torch_device="mps", the framework preserves RNG stream structure but does not guarantee bitwise reproducibility, due to Metal backend scheduling and float32 arithmetic. Statistical properties (mean, variance, CI coverage) remain correct.
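For intuition, the mean and confidence interval governed by the confidence and ci_method parameters can be sketched in plain NumPy. This is a minimal illustration of a z-based interval (ci_method="z"), not the framework's StatsEngine; the function name and z-value table are assumptions for the sketch:

```python
import numpy as np

def mean_with_ci(samples: np.ndarray, confidence: float = 0.95):
    """Sample mean with a two-sided z-based confidence interval (illustrative only)."""
    # Two-sided z critical values for common confidence levels.
    z_table = {0.90: 1.6449, 0.95: 1.9600, 0.99: 2.5758}
    z = z_table[confidence]
    mean = samples.mean()
    # Standard error uses the unbiased sample standard deviation.
    half_width = z * samples.std(ddof=1) / np.sqrt(samples.size)
    return mean, mean - half_width, mean + half_width

rng = np.random.default_rng(42)
draws = rng.normal(loc=10.0, scale=2.0, size=100_000)
mean, lo, hi = mean_with_ci(draws, confidence=0.95)
```

A t-based interval (ci_method="t") would substitute Student-t critical values, which matters mainly for small n_simulations.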
- set_seed(seed: int | None) None#
Set the random seed for reproducible experiments.
- Parameters:
- seed : int or None
Seed for numpy.random.SeedSequence. None chooses entropy from the OS.
Notes
The framework spawns independent child sequences per worker/chunk via numpy.random.SeedSequence.spawn(), ensuring deterministic parallel streams given the same seed and block layout.
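The spawning behavior described above can be sketched directly with NumPy. This is an illustration of SeedSequence.spawn() semantics; the per-worker layout here is an assumption, not the framework's actual chunking:

```python
import numpy as np

def worker_draws(seed: int, n_workers: int, n_draws: int) -> list:
    """Give each worker an independent, deterministic child stream."""
    root = np.random.SeedSequence(seed)
    children = root.spawn(n_workers)  # independent child sequences
    return [np.random.default_rng(c).standard_normal(n_draws) for c in children]

# Same seed and same layout -> identical per-worker streams on every run.
a = worker_draws(seed=123, n_workers=4, n_draws=5)
b = worker_draws(seed=123, n_workers=4, n_draws=5)
```

Because each child stream is statistically independent, results are reproducible regardless of how workers are scheduled.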
- simulate_paths(n_paths: int, S0: float = 100.0, r: float = 0.05, sigma: float = 0.2, T: float = 1.0, n_steps: int = 252) ndarray[source]#
Generate \(n_{\text{paths}}\) independent GBM paths.
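The path construction can be sketched in NumPy using the exact log-Euler step for geometric Brownian motion. This is an illustration of the GBM dynamics with the documented default parameters, not the class's actual implementation:

```python
import numpy as np

def gbm_paths(n_paths, S0=100.0, r=0.05, sigma=0.2, T=1.0, n_steps=252, seed=0):
    """Simulate GBM paths: S_{t+dt} = S_t * exp((r - sigma^2/2) dt + sigma sqrt(dt) Z)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.cumsum(log_increments, axis=1)
    # Prepend a zero column so every path starts at S0.
    paths = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))
    return paths  # shape (n_paths, n_steps + 1)

paths = gbm_paths(n_paths=10_000)
```

Under the risk-neutral drift r, the terminal values satisfy E[S_T] = S0 * exp(r T), a useful sanity check on any implementation.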
- single_simulation(*, S0: float = 100.0, r: float = 0.05, sigma: float = 0.2, T: float = 1.0, n_steps: int = 252, _rng: Generator | None = None, **kwargs) float[source]#
Draw a GBM path and return the terminal value \(S_T\).
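Since only the terminal value is returned, it is worth noting that \(S_T\) has the closed form S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z) with Z standard normal. A minimal NumPy sketch of one such draw (an assumed shortcut for illustration; the method itself steps through a full path):

```python
import numpy as np

def terminal_value(S0=100.0, r=0.05, sigma=0.2, T=1.0, rng=None) -> float:
    """One exact draw of the GBM terminal value S_T (no intermediate steps needed)."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.standard_normal()
    return S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

rng = np.random.default_rng(7)
draws = np.array([terminal_value(rng=rng) for _ in range(20_000)])
```

The path-based and closed-form draws agree in distribution because the GBM log-increments are exact at any step size.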
- torch_batch(n: int, *, device: torch.device, generator: torch.Generator) torch.Tensor#
Optional vectorized Torch implementation.
Override this method in subclasses to enable GPU-accelerated batch execution. When implemented alongside supports_batch = True, the framework will use this method instead of repeated single_simulation calls.
- Parameters:
- n : int
Number of simulation draws.
- device : torch.device
Device to use for the simulation ("cpu", "mps", or "cuda").
- generator : torch.Generator
Explicit Torch generator for reproducible random sampling. This generator is seeded from numpy.random.SeedSequence to maintain the same spawning semantics as the NumPy backend.
- Returns:
torch.Tensor
A 1D tensor of length n containing simulation results. Use float32 for MPS compatibility; the framework promotes to float64 after moving to CPU.
- Raises:
NotImplementedError
If the subclass does not implement this method.
Notes
RNG discipline. All random sampling must use the provided generator explicitly. Never use the global Torch RNG (torch.manual_seed).
Dtype policy (device-specific):
- MPS (Apple Silicon): must return float32 (Metal does not support float64). The framework promotes to float64 on CPU.
- CUDA (NVIDIA): can return float32 or float64. Float64 is preferred for zero conversion overhead and full precision.
- CPU: can return float32 or float64. Float64 is preferred for consistency with framework precision.
This method is optional; subclasses that support the Torch backend must implement it. If it is not implemented, the framework falls back to the NumPy backend.
Examples
>>> class PiSim(MonteCarloSimulation):
...     supports_batch = True
...     def torch_batch(self, n, *, device, generator):
...         import torch
...         x = torch.rand(n, device=device, generator=generator)
...         y = torch.rand(n, device=device, generator=generator)
...         inside = (x * x + y * y) <= 1.0
...         return 4.0 * inside.float()  # float32 for MPS compatibility