mcframework.sims.PortfolioSimulation#
- class mcframework.sims.PortfolioSimulation[source]#
Bases: MonteCarloSimulation
Compound an initial wealth under log-normal or arithmetic return models.
Let \(V_0\) be the initial value. Under GBM dynamics the terminal value after \(T\) years with \(n = 252T\) daily steps is
\[V_T = V_0 \exp\left(\sum_{k=1}^n \Big[(\mu - \tfrac{1}{2}\sigma^2)\Delta t + \sigma \sqrt{\Delta t}\,Z_k\Big]\right),\]
where \(Z_k \sim \mathcal{N}(0, 1)\) i.i.d. The alternative branch integrates arithmetic returns via \(\log(1 + R_k)\).
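The GBM recursion above can be sketched in plain NumPy (a minimal standalone illustration, not the framework's implementation; the function name is hypothetical):

```python
import numpy as np

def gbm_terminal_value(v0, mu, sigma, years, rng):
    """One draw of V_T via the daily GBM recursion (dt = 1/252)."""
    n = 252 * years
    dt = 1.0 / 252.0
    z = rng.standard_normal(n)
    # Sum of log increments: (mu - sigma^2/2) dt + sigma sqrt(dt) Z_k
    log_growth = np.sum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
    return v0 * np.exp(log_growth)

rng = np.random.default_rng(42)
draws = [gbm_terminal_value(10_000.0, 0.07, 0.20, 10, rng) for _ in range(5_000)]
# For GBM, E[V_T] = V_0 exp(mu T); with the defaults the sample mean
# should land near 10_000 * exp(0.7).
print(sum(draws) / len(draws))
```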
- Attributes:
- name
str Default registry label "Portfolio Simulation".
Methods
- cupy_batch: Optional vectorized cuRAND implementation using CuPy.
- run: Run the Monte Carlo simulation.
- set_seed: Set the random seed for reproducible experiments.
- single_simulation: Simulate the terminal portfolio value under discrete compounding.
- torch_batch: Optional vectorized Torch implementation.
- cupy_batch(n: int, *, device: torch.device, rng: cupy.random.RandomState) cupy.ndarray#
Optional vectorized cuRAND implementation using CuPy.
Override this method in subclasses to enable GPU-accelerated batch execution. When implemented alongside supports_batch = True, the framework will use this method instead of repeated single_simulation calls.
- Parameters:
- n
int Number of simulation draws.
- device
torch.device Device to use for the simulation ("cuda").
- rng
cupy.random.RandomState cuRAND generator for reproducible random sampling.
- Returns:
cupy.ndarray A 1D array of length n containing simulation results.
- run(n_simulations: int, *, backend: str = 'auto', torch_device: str = 'cpu', cuda_device_id: int = 0, cuda_use_curand: bool = False, cuda_batch_size: int | None = None, cuda_use_streams: bool = True, parallel: bool | None = None, n_workers: int | None = None, progress_callback: Callable[[int, int], None] | None = None, percentiles: Iterable[int] | None = None, compute_stats: bool = True, stats_engine: StatsEngine | None = None, confidence: float = 0.95, ci_method: str = 'auto', extra_context: Mapping[str, Any] | None = None, **simulation_kwargs: Any) SimulationResult#
Run the Monte Carlo simulation.
- Parameters:
- n_simulations
int Number of simulation draws.
- backend
{"auto", "sequential", "thread", "process", "torch"}, default "auto" Execution backend to use:
"auto" — Sequential for small jobs, parallel (thread/process) for large jobs
"sequential" — Single-threaded execution
"thread" — Thread-based parallelism (best when NumPy releases the GIL)
"process" — Process-based parallelism (required on Windows for true parallelism)
"torch" — Torch batch execution (requires supports_batch = True)
- torch_device
{"cpu", "mps", "cuda"}, default "cpu" Torch device for backend="torch". Ignored for other backends.
"cpu" — Safe default, works everywhere
"mps" — Apple Metal Performance Shaders (M1/M2/M3 Macs)
"cuda" — NVIDIA GPU acceleration
- cuda_device_id
int, default 0 CUDA device index for multi-GPU systems. Only used when backend="torch" and torch_device="cuda".
- cuda_use_curand
bool, default False Use cuRAND (via CuPy) instead of torch.Generator for maximum GPU performance. Requires CuPy and a cupy_batch() implementation.
- cuda_batch_size
int or None, default None Fixed batch size for CUDA execution. If None, automatically estimates the optimal batch size based on available GPU memory.
- cuda_use_streams
bool, default True Use CUDA streams for overlapped execution. Recommended for performance.
- parallel
bool, optional Deprecated. Use backend instead. If provided, parallel=True maps to backend="auto" with parallel preference; parallel=False maps to backend="sequential".
- n_workers
int, optional Worker count for parallel backends. Defaults to CPU count.
- progress_callback
callable, optional A function f(completed: int, total: int) called periodically.
- percentiles
iterable of int, optional Percentiles to compute from raw results. If None and compute_stats=True, the stats engine's defaults (_PCTS) are used; if compute_stats=False, no percentiles are computed unless explicitly provided.
- compute_stats
bool, default True Compute additional metrics via a StatsEngine.
- stats_engine
StatsEngine, optional Custom engine (defaults to mcframework.stats_engine.DEFAULT_ENGINE).
- confidence
float, default 0.95 Confidence level for CI-related metrics.
- ci_method
{"auto", "z", "t"}, default "auto" Which critical values the stats engine should use.
- extra_context
mapping, optional Extra context forwarded to the stats engine.
- **simulation_kwargs
Any Keyword arguments forwarded to single_simulation().
- Returns:
SimulationResult
See also
run_simulation(): Run a registered simulation by name.
Notes
MPS determinism caveat. When using torch_device="mps", the framework preserves RNG stream structure but does not guarantee bitwise reproducibility due to Metal backend scheduling and float32 arithmetic. Statistical properties (mean, variance, CI coverage) remain correct.
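To illustrate the kind of summary metrics run() reports (a hedged sketch only; the actual StatsEngine internals are not shown here, and the function name is hypothetical), percentiles and a normal-approximation ("z") confidence interval can be computed from raw draws like this:

```python
import numpy as np
from statistics import NormalDist

def summarize(results, percentiles=(5, 25, 50, 75, 95), confidence=0.95):
    """Sketch of percentile/CI metrics; not the actual StatsEngine code."""
    results = np.asarray(results, dtype=float)
    mean = results.mean()
    sem = results.std(ddof=1) / np.sqrt(results.size)   # standard error
    z = NormalDist().inv_cdf(0.5 + confidence / 2)       # z critical value
    return {
        "mean": mean,
        "ci": (mean - z * sem, mean + z * sem),
        "percentiles": dict(zip(percentiles, np.percentile(results, list(percentiles)))),
    }

stats = summarize(np.random.default_rng(7).normal(100.0, 10.0, 10_000))
```

A t-based interval (ci_method="t") would swap the z critical value for Student's t with n - 1 degrees of freedom; for large n the two are nearly identical.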
- set_seed(seed: int | None) None#
Set the random seed for reproducible experiments.
- Parameters:
- seed
int or None Seed for numpy.random.SeedSequence. None chooses entropy from the OS.
Notes
The framework spawns independent child sequences per worker/chunk via numpy.random.SeedSequence.spawn(), ensuring deterministic parallel streams given the same seed and block layout.
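The spawning discipline described above can be demonstrated with NumPy alone (a minimal sketch, independent of the framework; the helper name is hypothetical):

```python
import numpy as np

def spawn_streams(seed, n_workers):
    """One independent, reproducible Generator per worker/chunk."""
    root = np.random.SeedSequence(seed)
    return [np.random.default_rng(child) for child in root.spawn(n_workers)]

# The same seed and block layout always yields the same per-worker draws,
# while different workers receive statistically independent streams.
a = [rng.standard_normal(3) for rng in spawn_streams(123, 4)]
b = [rng.standard_normal(3) for rng in spawn_streams(123, 4)]
```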
- single_simulation(*, initial_value: float = 10000.0, annual_return: float = 0.07, volatility: float = 0.2, years: int = 10, use_gbm: bool = True, _rng: Generator | None = None, **kwargs) float[source]#
Simulate the terminal portfolio value under discrete compounding.
- Parameters:
- initial_value
float, default 10_000 Starting wealth \(V_0\) expressed in currency units.
- annual_return
float, default 0.07 Drift \(\mu\) expressed as an annualized continuously compounded rate.
- volatility
float, default 0.20 Annualized diffusion coefficient \(\sigma\).
- years
int, default 10 Investment horizon \(T\) in years. The simulation uses daily steps \(\Delta t = 1/252\).
- use_gbm
bool, default True If True, evolve log returns via GBM; otherwise simulate simple returns and compose them multiplicatively.
- **kwargs
Any Ignored. Reserved for framework compatibility.
- Returns:
float Terminal value \(V_T\). Under GBM the logarithm follows \(\log V_T \sim \mathcal{N}\big(\log V_0 + (\mu - \tfrac{1}{2}\sigma^2)T,\;\sigma^2 T\big)\).
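Both branches can be sketched in NumPy (a hypothetical standalone version for illustration, not the framework's code): the GBM branch sums exact log increments, while the arithmetic branch composes simple returns multiplicatively via \(\log(1 + R_k)\).

```python
import numpy as np

def terminal_value(initial_value=10_000.0, annual_return=0.07,
                   volatility=0.20, years=10, use_gbm=True, rng=None):
    """One draw of the terminal value V_T under daily steps (dt = 1/252)."""
    rng = rng if rng is not None else np.random.default_rng()
    n, dt = 252 * years, 1.0 / 252.0
    z = rng.standard_normal(n)
    if use_gbm:
        # Exact log-normal increments: (mu - sigma^2/2) dt + sigma sqrt(dt) Z_k
        log_ret = ((annual_return - 0.5 * volatility**2) * dt
                   + volatility * np.sqrt(dt) * z)
    else:
        # Simple returns R_k, composed multiplicatively via log(1 + R_k)
        r = annual_return * dt + volatility * np.sqrt(dt) * z
        log_ret = np.log1p(r)
    return initial_value * np.exp(log_ret.sum())
```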
- torch_batch(n: int, *, device: torch.device, generator: torch.Generator) torch.Tensor#
Optional vectorized Torch implementation.
Override this method in subclasses to enable GPU-accelerated batch execution. When implemented alongside supports_batch = True, the framework will use this method instead of repeated single_simulation calls.
- Parameters:
- n
int Number of simulation draws.
- device
torch.device Device to use for the simulation ("cpu", "mps", or "cuda").
- generator
torch.Generator Explicit Torch generator for reproducible random sampling. This generator is seeded from numpy.random.SeedSequence to maintain the same spawning semantics as the NumPy backend.
- Returns:
torch.Tensor A 1D tensor of length n containing simulation results. Use float32 for MPS compatibility; the framework promotes to float64 after moving to CPU.
- Raises:
NotImplementedError If the subclass does not implement this method.
Notes
RNG discipline. All random sampling must use the provided generator explicitly. Never use the global Torch RNG (torch.manual_seed).
Dtype policy (device-specific):
- MPS (Apple Silicon): Must return float32 (Metal doesn't support float64). Framework promotes to float64 on CPU.
- CUDA (NVIDIA): Can return float32 or float64. Float64 preferred for zero conversion overhead and full precision.
- CPU: Can return float32 or float64. Float64 preferred for consistency with framework precision.
This method is optional and must be implemented by subclasses that support the Torch backend. If not implemented, the framework will fall back to the NumPy backend.
Examples
>>> class PiSim(MonteCarloSimulation):
...     supports_batch = True
...     def torch_batch(self, n, *, device, generator):
...         import torch
...         x = torch.rand(n, device=device, generator=generator)
...         y = torch.rand(n, device=device, generator=generator)
...         inside = (x * x + y * y) <= 1.0
...         return 4.0 * inside.float()  # float32 for MPS compatibility
- classmethod __new__(*args, **kwargs)#