mcframework.sims.BlackScholesPathSimulation#

class mcframework.sims.BlackScholesPathSimulation[source]#

Bases: MonteCarloSimulation

Simulate stock price paths under Black-Scholes dynamics.

Methods

cupy_batch

Optional vectorized cuRAND implementation using CuPy.

run

Run the Monte Carlo simulation.

set_seed

Set the random seed for reproducible experiments.

simulate_paths

Generate \(n_{\text{paths}}\) independent GBM paths.

single_simulation

Draw a GBM path and return the terminal value \(S_T\).

torch_batch

Optional vectorized Torch implementation.

cupy_batch(n: int, *, device: torch.device, rng: cupy.random.RandomState) → cupy.ndarray#

Optional vectorized cuRAND implementation using CuPy.

Override this method in subclasses to enable GPU-accelerated batch execution. When implemented alongside supports_batch = True, the framework will use this method instead of repeated single_simulation calls.

Parameters:
n : int

Number of simulation draws.

device : torch.device

Device to use for the simulation ("cuda").

rng : cupy.random.RandomState

cuRAND generator for reproducible random sampling.

Returns:
cupy.ndarray

A 1D array of length n containing simulation results.
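As a concrete illustration, a cupy_batch override for this simulation could draw terminal GBM values directly on the GPU. The sketch below is hypothetical: a standalone function rather than the actual method, with the CuPy import guarded so it degrades gracefully on machines without a CUDA GPU. The closed-form terminal draw mirrors what single_simulation computes.

```python
import math

try:
    import cupy as cp
except ImportError:  # CuPy requires an NVIDIA GPU and the CUDA toolkit
    cp = None

def gbm_terminal_cupy(n, rng, S0=100.0, r=0.05, sigma=0.2, T=1.0):
    """Hypothetical vectorized draw of n GBM terminal values on the GPU.

    Uses the exact solution S_T = S0 * exp((r - sigma^2/2) * T + sigma * sqrt(T) * Z)
    with Z ~ N(0, 1) sampled from the provided cuRAND generator.
    """
    z = rng.standard_normal(n)  # cuRAND normals, allocated on-device
    drift = (r - 0.5 * sigma ** 2) * T
    return S0 * cp.exp(drift + sigma * math.sqrt(T) * z)
```

In a real subclass this body would live in cupy_batch(self, n, *, device, rng), with supports_batch = True set on the class.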

run(n_simulations: int, *, backend: str = 'auto', torch_device: str = 'cpu', cuda_device_id: int = 0, cuda_use_curand: bool = False, cuda_batch_size: int | None = None, cuda_use_streams: bool = True, parallel: bool | None = None, n_workers: int | None = None, progress_callback: Callable[[int, int], None] | None = None, percentiles: Iterable[int] | None = None, compute_stats: bool = True, stats_engine: StatsEngine | None = None, confidence: float = 0.95, ci_method: str = 'auto', extra_context: Mapping[str, Any] | None = None, **simulation_kwargs: Any) → SimulationResult#

Run the Monte Carlo simulation.

Parameters:
n_simulations : int

Number of simulation draws.

backend : {"auto", "sequential", "thread", "process", "torch"}, default "auto"

Execution backend to use:

  • "auto" — Sequential for small jobs, parallel (thread/process) for large jobs

  • "sequential" — Single-threaded execution

  • "thread" — Thread-based parallelism (best when NumPy releases GIL)

  • "process" — Process-based parallelism (required on Windows for true parallelism)

  • "torch" — Torch batch execution (requires supports_batch = True)

torch_device : {"cpu", "mps", "cuda"}, default "cpu"

Torch device for backend="torch". Ignored for other backends.

  • "cpu" — Safe default, works everywhere

  • "mps" — Apple Metal Performance Shaders (M1/M2/M3 Macs)

  • "cuda" — NVIDIA GPU acceleration

cuda_device_id : int, default 0

CUDA device index for multi-GPU systems. Only used when backend="torch" and torch_device="cuda".

cuda_use_curand : bool, default False

Use cuRAND (via CuPy) instead of torch.Generator for maximum GPU performance. Requires CuPy and a cupy_batch() implementation.

cuda_batch_size : int or None, default None

Fixed batch size for CUDA execution. If None, automatically estimates optimal batch size based on available GPU memory.

cuda_use_streams : bool, default True

Use CUDA streams for overlapped execution. Recommended for performance.

parallel : bool, optional

Deprecated. Use backend instead. If provided, parallel=True maps to backend="auto" with parallel preference, parallel=False maps to backend="sequential".

n_workers : int, optional

Worker count for parallel backends. Defaults to CPU count.

progress_callback : callable, optional

A function f(completed: int, total: int) called periodically.

percentiles : iterable of int, optional

Percentiles to compute from raw results. If None and compute_stats=True, the stats engine’s defaults (_PCTS) are used; if compute_stats=False, no percentiles are computed unless explicitly provided.

compute_stats : bool, default True

Compute additional metrics via a StatsEngine.

stats_engine : StatsEngine, optional

Custom engine (defaults to mcframework.stats_engine.DEFAULT_ENGINE).

confidence : float, default 0.95

Confidence level for CI-related metrics.

ci_method : {"auto", "z", "t"}, default "auto"

Which critical values the stats engine should use.

extra_context : mapping, optional

Extra context forwarded to the stats engine.

**simulation_kwargs : Any

Keyword arguments forwarded to single_simulation().

Returns:
SimulationResult

See SimulationResult.

See also

run_simulation()

Run a registered simulation by name.

Notes

MPS determinism caveat. When using torch_device="mps", the framework preserves RNG stream structure but does not guarantee bitwise reproducibility due to Metal backend scheduling and float32 arithmetic. Statistical properties (mean, variance, CI coverage) remain correct.

set_seed(seed: int | None) → None#

Set the random seed for reproducible experiments.

Parameters:
seed : int or None

Seed for numpy.random.SeedSequence. If None, fresh entropy is drawn from the OS.

Notes

The framework spawns independent child sequences per worker/chunk via numpy.random.SeedSequence.spawn(), ensuring deterministic parallel streams given the same seed and block layout.
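The spawning scheme can be demonstrated with plain NumPy, independent of the framework. This is a minimal sketch of the determinism guarantee, not framework code:

```python
import numpy as np

# Parent sequence from a fixed seed (what set_seed(42) conceptually stores).
parent = np.random.SeedSequence(42)

# One independent child sequence per worker/chunk.
streams = [np.random.default_rng(c) for c in parent.spawn(4)]
draws = [rng.standard_normal(3) for rng in streams]

# Respawning from the same parent seed and block layout replays the
# exact same per-worker streams.
replay = [np.random.default_rng(c) for c in np.random.SeedSequence(42).spawn(4)]
assert all(np.array_equal(r.standard_normal(3), d)
           for r, d in zip(replay, draws))
```

Each child stream is statistically independent of its siblings, so results are reproducible regardless of how many workers run concurrently.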

simulate_paths(n_paths: int, S0: float = 100.0, r: float = 0.05, sigma: float = 0.2, T: float = 1.0, n_steps: int = 252) → ndarray[source]#

Generate \(n_{\text{paths}}\) independent GBM paths.
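The per-step recursion behind such a generator can be sketched in plain NumPy. This is an illustrative implementation of the exact per-step lognormal scheme, not the framework's own code; only the parameter names are borrowed from the signature above.

```python
import numpy as np

def gbm_paths(n_paths, S0=100.0, r=0.05, sigma=0.2, T=1.0, n_steps=252,
              rng=None):
    """Generate n_paths GBM paths, returned as shape (n_paths, n_steps + 1)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    dt = T / n_steps
    # Exact per-step log-return: (r - sigma^2/2) dt + sigma sqrt(dt) Z
    z = rng.standard_normal((n_paths, n_steps))
    log_returns = (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z
    paths = np.empty((n_paths, n_steps + 1))
    paths[:, 0] = S0
    paths[:, 1:] = S0 * np.exp(np.cumsum(log_returns, axis=1))
    return paths

paths = gbm_paths(1000)
```

Under the risk-neutral drift r, the sample mean of the terminal column converges to S0 * exp(r * T).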

single_simulation(*, S0: float = 100.0, r: float = 0.05, sigma: float = 0.2, T: float = 1.0, n_steps: int = 252, _rng: Generator | None = None, **kwargs) → float[source]#

Draw a GBM path and return the terminal value \(S_T\).

torch_batch(n: int, *, device: torch.device, generator: torch.Generator) → torch.Tensor#

Optional vectorized Torch implementation.

Override this method in subclasses to enable GPU-accelerated batch execution. When implemented alongside supports_batch = True, the framework will use this method instead of repeated single_simulation calls.

Parameters:
n : int

Number of simulation draws.

device : torch.device

Device to use for the simulation ("cpu", "mps", or "cuda").

generator : torch.Generator

Explicit Torch generator for reproducible random sampling. This generator is seeded from numpy.random.SeedSequence to maintain the same spawning semantics as the NumPy backend.

Returns:
torch.Tensor

A 1D tensor of length n containing simulation results. Use float32 for MPS compatibility; the framework promotes to float64 after moving to CPU.

Raises:
NotImplementedError

If the subclass does not implement this method.

Notes

RNG discipline. All random sampling must use the provided generator explicitly. Never use global Torch RNG (torch.manual_seed).

Dtype policy (device-specific):

  • MPS (Apple Silicon): Must return float32 (Metal doesn’t support float64). Framework promotes to float64 on CPU.

  • CUDA (NVIDIA): Can return float32 or float64. Float64 preferred for zero conversion overhead and full precision.

  • CPU: Can return float32 or float64. Float64 preferred for consistency with framework precision.

This method is optional; only subclasses that support the Torch backend need to implement it. If not implemented, the framework falls back to the NumPy backend.

Examples

>>> class PiSim(MonteCarloSimulation):
...     supports_batch = True
...     def torch_batch(self, n, *, device, generator):
...         import torch
...         x = torch.rand(n, device=device, generator=generator)
...         y = torch.rand(n, device=device, generator=generator)
...         inside = (x * x + y * y) <= 1.0
...         return 4.0 * inside.float()  # float32 for MPS compatibility

__init__(name: str = 'Black-Scholes Path Simulation')[source]#

classmethod __new__(*args, **kwargs)#