mcframework.sims.PortfolioSimulation#

class mcframework.sims.PortfolioSimulation[source]#

Bases: MonteCarloSimulation

Compound an initial wealth under log-normal or arithmetic return models.

Let \(V_0\) be the initial value. Under GBM dynamics the terminal value after \(T\) years with \(n = 252T\) daily steps is

\[V_T = V_0 \exp\left(\sum_{k=1}^n \Big[(\mu - \tfrac{1}{2}\sigma^2)\Delta t + \sigma \sqrt{\Delta t}\,Z_k\Big]\right),\]

where \(Z_k \sim \mathcal{N}(0, 1)\) i.i.d. The alternative branch instead draws simple daily returns \(R_k\) and accumulates them multiplicatively via \(\sum_{k=1}^n \log(1 + R_k)\).
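As a concrete illustration, both branches of the formula above can be sketched in NumPy. This is a minimal re-derivation, not the library's implementation; the function name and signature are assumptions made for the example:

```python
import numpy as np

def simulate_terminal_value(v0=10_000.0, mu=0.07, sigma=0.2, years=10,
                            use_gbm=True, rng=None):
    """Sketch of the terminal-value formula above (not the library source)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    steps, dt = 252 * years, 1.0 / 252.0
    z = rng.standard_normal(steps)
    if use_gbm:
        # GBM log-increments: (mu - sigma^2/2) dt + sigma sqrt(dt) Z_k
        log_growth = np.sum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    else:
        # Arithmetic branch: simple daily returns composed via log(1 + R_k)
        r = mu * dt + sigma * np.sqrt(dt) * z
        log_growth = np.sum(np.log1p(r))
    return v0 * np.exp(log_growth)
```

Averaging many draws of `simulate_terminal_value` recovers the closed-form expectation \(\mathbb{E}[V_T] = V_0 e^{\mu T}\), which is a quick sanity check on either branch.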

Attributes:
name : str

Default registry label "Portfolio Simulation".

Methods

cupy_batch

Optional vectorized cuRAND implementation using CuPy.

run

Run the Monte Carlo simulation.

set_seed

Set the random seed for reproducible experiments.

single_simulation

Simulate the terminal portfolio value under discrete compounding.

torch_batch

Optional vectorized Torch implementation.

cupy_batch(n: int, *, device: torch.device, rng: cupy.random.RandomState) cupy.ndarray#

Optional vectorized cuRAND implementation using CuPy.

Override this method in subclasses to enable GPU-accelerated batch execution. When implemented alongside supports_batch = True, the framework will use this method instead of repeated single_simulation calls.

Parameters:
n : int

Number of simulation draws.

device : torch.device

Device to use for the simulation ("cuda").

rng : cupy.random.RandomState

cuRAND generator for reproducible random sampling.

Returns:
cupy.ndarray

A 1D array of length n containing simulation results.
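Because CuPy deliberately mirrors NumPy's `RandomState` API, the shape of a `cupy_batch` override can be sketched with NumPy as a stand-in. The class name and parameter defaults below are hypothetical; a real override would receive a `cupy.random.RandomState` and return a `cupy.ndarray` resident on the GPU:

```python
import numpy as np  # stand-in for `import cupy as cp`; the RandomState APIs mirror each other

class PortfolioCupySketch:
    """Hypothetical cupy_batch override, shown with NumPy for illustration."""
    supports_batch = True

    def cupy_batch(self, n, *, device=None, rng=None):
        # With CuPy this would be cp.random.RandomState and the result
        # a cupy.ndarray on the GPU.
        rng = rng if rng is not None else np.random.RandomState(42)
        v0, mu, sigma, years = 10_000.0, 0.07, 0.2, 10
        steps, dt = 252 * years, 1.0 / 252.0
        # One (n, steps) block of standard normals, summed along the time axis.
        z = rng.standard_normal((n, steps))
        log_growth = ((mu - 0.5 * sigma**2) * dt
                      + sigma * np.sqrt(dt) * z).sum(axis=1)
        return v0 * np.exp(log_growth)
```

The key structural point is that the whole batch is generated as one `(n, steps)` array and reduced along the time axis, rather than looping over `n` scalar simulations.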

run(n_simulations: int, *, backend: str = 'auto', torch_device: str = 'cpu', cuda_device_id: int = 0, cuda_use_curand: bool = False, cuda_batch_size: int | None = None, cuda_use_streams: bool = True, parallel: bool | None = None, n_workers: int | None = None, progress_callback: Callable[[int, int], None] | None = None, percentiles: Iterable[int] | None = None, compute_stats: bool = True, stats_engine: StatsEngine | None = None, confidence: float = 0.95, ci_method: str = 'auto', extra_context: Mapping[str, Any] | None = None, **simulation_kwargs: Any) SimulationResult#

Run the Monte Carlo simulation.

Parameters:
n_simulations : int

Number of simulation draws.

backend : {“auto”, “sequential”, “thread”, “process”, “torch”}, default "auto"

Execution backend to use:

  • "auto" — Sequential for small jobs, parallel (thread/process) for large jobs

  • "sequential" — Single-threaded execution

  • "thread" — Thread-based parallelism (best when NumPy releases GIL)

  • "process" — Process-based parallelism (required on Windows for true parallelism)

  • "torch" — Torch batch execution (requires supports_batch = True)

torch_device : {“cpu”, “mps”, “cuda”}, default "cpu"

Torch device for backend="torch". Ignored for other backends.

  • "cpu" — Safe default, works everywhere

  • "mps" — Apple Metal Performance Shaders (M1/M2/M3 Macs)

  • "cuda" — NVIDIA GPU acceleration

cuda_device_id : int, default 0

CUDA device index for multi-GPU systems. Only used when backend="torch" and torch_device="cuda".

cuda_use_curand : bool, default False

Use cuRAND (via CuPy) instead of torch.Generator for maximum GPU performance. Requires CuPy and a cupy_batch() implementation.

cuda_batch_size : int or None, default None

Fixed batch size for CUDA execution. If None, automatically estimates optimal batch size based on available GPU memory.

cuda_use_streams : bool, default True

Use CUDA streams for overlapped execution. Recommended for performance.

parallel : bool, optional

Deprecated. Use backend instead. If provided, parallel=True maps to backend="auto" with parallel preference, parallel=False maps to backend="sequential".

n_workers : int, optional

Worker count for parallel backends. Defaults to CPU count.

progress_callback : callable, optional

A function f(completed: int, total: int) called periodically.

percentiles : iterable of int, optional

Percentiles to compute from raw results. If None and compute_stats=True, the stats engine’s defaults (_PCTS) are used; if compute_stats=False, no percentiles are computed unless explicitly provided.

compute_stats : bool, default True

Compute additional metrics via a StatsEngine.

stats_engine : StatsEngine, optional

Custom engine (defaults to mcframework.stats_engine.DEFAULT_ENGINE).

confidence : float, default 0.95

Confidence level for CI-related metrics.

ci_method : {“auto”, “z”, “t”}, default "auto"

Which critical values the stats engine should use.

extra_context : mapping, optional

Extra context forwarded to the stats engine.

**simulation_kwargs : Any

Keyword arguments forwarded to single_simulation().

Returns:
SimulationResult

See SimulationResult.

See also

run_simulation()

Run a registered simulation by name.

Notes

MPS determinism caveat. When using torch_device="mps", the framework preserves RNG stream structure but does not guarantee bitwise reproducibility due to Metal backend scheduling and float32 arithmetic. Statistical properties (mean, variance, CI coverage) remain correct.

set_seed(seed: int | None) None#

Set the random seed for reproducible experiments.

Parameters:
seed : int or None

Seed for numpy.random.SeedSequence. None chooses entropy from the OS.

Notes

The framework spawns independent child sequences per worker/chunk via numpy.random.SeedSequence.spawn(), ensuring deterministic parallel streams given the same seed and block layout.
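A minimal sketch of this spawning scheme in plain NumPy (illustrative, not the framework source; the helper name is an assumption):

```python
import numpy as np

def spawn_worker_streams(seed, n_workers):
    """Sketch of per-worker stream spawning (not the framework source)."""
    ss = np.random.SeedSequence(seed)
    # Each spawned child yields an independent, reproducible Generator.
    return [np.random.default_rng(child) for child in ss.spawn(n_workers)]

# Same seed and block layout -> identical per-worker draws on every run.
run_a = [g.standard_normal(3) for g in spawn_worker_streams(123, 4)]
run_b = [g.standard_normal(3) for g in spawn_worker_streams(123, 4)]
```

Re-spawning from the same seed reproduces every worker's stream exactly, while distinct children remain statistically independent of one another.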

single_simulation(*, initial_value: float = 10000.0, annual_return: float = 0.07, volatility: float = 0.2, years: int = 10, use_gbm: bool = True, _rng: Generator | None = None, **kwargs) float[source]#

Simulate the terminal portfolio value under discrete compounding.

Parameters:
initial_value : float, default 10_000

Starting wealth \(V_0\) expressed in currency units.

annual_return : float, default 0.07

Drift \(\mu\) expressed as an annualized continuously compounded rate.

volatility : float, default 0.20

Annualized diffusion coefficient \(\sigma\).

years : int, default 10

Investment horizon \(T\) in years. The simulation uses daily steps \(\Delta t = 1/252\).

use_gbm : bool, default True

If True evolve log returns via GBM; otherwise simulate simple returns and compose them multiplicatively.

**kwargs : Any

Ignored. Reserved for framework compatibility.

Returns:
float

Terminal value \(V_T\). Under GBM the logarithm follows \(\log V_T \sim \mathcal{N}\big(\log V_0 + (\mu - \tfrac{1}{2}\sigma^2)T,\;\sigma^2 T\big)\).
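This closed form can be checked empirically with a short NumPy sketch, assuming the defaults documented above (V0 = 10_000, mu = 0.07, sigma = 0.2, T = 10, dt = 1/252); this is illustrative, not the library's code:

```python
import numpy as np

# Empirical check of the stated law of log V_T under GBM.
rng = np.random.default_rng(7)
v0, mu, sigma, years = 10_000.0, 0.07, 0.2, 10
steps, dt = 252 * years, 1.0 / 252.0

# 5_000 simulated paths, each the sum of 252*T daily log-increments.
z = rng.standard_normal((5_000, steps))
log_vt = np.log(v0) + ((mu - 0.5 * sigma**2) * dt
                       + sigma * np.sqrt(dt) * z).sum(axis=1)

mean_theory = np.log(v0) + (mu - 0.5 * sigma**2) * years  # log V0 + (mu - sigma^2/2) T
var_theory = sigma**2 * years                             # sigma^2 T
```

The sample mean and variance of `log_vt` should match `mean_theory` and `var_theory` to within Monte Carlo error, confirming the normal law stated for \(\log V_T\).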

torch_batch(n: int, *, device: torch.device, generator: torch.Generator) torch.Tensor#

Optional vectorized Torch implementation.

Override this method in subclasses to enable GPU-accelerated batch execution. When implemented alongside supports_batch = True, the framework will use this method instead of repeated single_simulation calls.

Parameters:
n : int

Number of simulation draws.

device : torch.device

Device to use for the simulation ("cpu", "mps", or "cuda").

generator : torch.Generator

Explicit Torch generator for reproducible random sampling. This generator is seeded from numpy.random.SeedSequence to maintain the same spawning semantics as the NumPy backend.

Returns:
torch.Tensor

A 1D tensor of length n containing simulation results. Use float32 for MPS compatibility; the framework promotes to float64 after moving to CPU.

Raises:
NotImplementedError

If the subclass does not implement this method.

Notes

RNG discipline. All random sampling must use the provided generator explicitly. Never use global Torch RNG (torch.manual_seed).

Dtype policy (device-specific):

  • MPS (Apple Silicon): Must return float32 (Metal doesn’t support float64). Framework promotes to float64 on CPU.

  • CUDA (NVIDIA): Can return float32 or float64. Float64 preferred for zero conversion overhead and full precision.

  • CPU: Can return float32 or float64. Float64 preferred for consistency with framework precision.

This method is optional and must be implemented by subclasses that support the Torch backend. If not implemented, the framework will fall back to the NumPy backend.

Examples

>>> class PiSim(MonteCarloSimulation):
...     supports_batch = True
...     def torch_batch(self, n, *, device, generator):
...         import torch
...         x = torch.rand(n, device=device, generator=generator)
...         y = torch.rand(n, device=device, generator=generator)
...         inside = (x * x + y * y) <= 1.0
...         return 4.0 * inside.float()  # float32 for MPS compatibility

__init__()[source]#

classmethod __new__(*args, **kwargs)#