mcframework.backends.TorchMPSBackend#
- class mcframework.backends.TorchMPSBackend[source]#
Bases: object
Torch MPS batch execution backend for Apple Silicon GPUs.
Uses PyTorch with the MPS (Metal Performance Shaders) backend for GPU-accelerated execution on Apple Silicon Macs, leveraging their unified memory architecture. Requires simulations to implement torch_batch() and set supports_batch to True to enable Metal Performance Shaders GPU-accelerated batch execution.
See also
is_mps_available()
Check MPS availability before instantiation.
TorchCPUBackend
Fallback for non-Apple systems.
Notes
RNG architecture. Uses explicit Generator objects seeded from SeedSequence via spawn(). This preserves:
Deterministic parallel streams (best-effort on MPS)
Counter-based RNG (Philox) semantics
Correct statistical structure
Never uses manual_seed() (global state).
Dtype policy. MPS performs best with float32:
Sampling uses float32 on device.
Results are moved to CPU and promoted to float64.
The framework converts the results to numpy.ndarray of numpy.double (float64) for stats engine compatibility.
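The seeding scheme can be sketched with NumPy alone; spawn_generators is a hypothetical helper for illustration, not part of the backend's API:

```python
import numpy as np

def spawn_generators(seed_seq: np.random.SeedSequence, n_streams: int):
    # spawn() derives statistically independent child SeedSequences;
    # each child seeds its own Philox counter-based generator, giving
    # deterministic, reproducible parallel streams.
    return [np.random.Generator(np.random.Philox(child))
            for child in seed_seq.spawn(n_streams)]

gens = spawn_generators(np.random.SeedSequence(42), 4)
draws = [g.standard_normal() for g in gens]
```

Re-running with the same root SeedSequence reproduces the same streams, which global manual_seed()-style state cannot guarantee under parallel execution.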
MPS determinism caveat. Torch MPS preserves RNG stream structure but does not guarantee bitwise reproducibility due to:
Metal backend scheduling variations
float32 arithmetic rounding
GPU kernel execution order
Statistical properties (mean, variance, CI coverage) remain correct despite potential bitwise differences between runs (see TestMPSDeterminism in tests/test_torch_backend.py for the corresponding tests).
Examples
>>> if is_mps_available():
...     backend = TorchMPSBackend()
...     results = backend.run(sim, n_simulations=1_000_000, seed_seq=seed_seq)
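In the spirit of the determinism caveat above, a CPU-only sketch (NumPy stand-ins, not the backend itself) of how results can differ bitwise yet agree statistically:

```python
import numpy as np

def simulate(seed_seq: np.random.SeedSequence, n: int = 200_000) -> np.ndarray:
    gen = np.random.Generator(np.random.Philox(seed_seq))
    # float32 mimics the device dtype; the cast mimics the CPU promotion.
    return gen.standard_normal(n, dtype=np.float32).astype(np.float64)

root = np.random.SeedSequence(123)
a, b = (simulate(s) for s in root.spawn(2))
# The two streams are not bitwise equal, but the statistical
# properties the framework relies on agree closely.
assert not np.array_equal(a, b)
assert abs(a.mean() - b.mean()) < 0.02
assert abs(a.std() - b.std()) < 0.02
```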
Methods
Run simulations using Torch MPS batch execution.
- run(sim: MonteCarloSimulation, n_simulations: int, seed_seq: np.random.SeedSequence | None, progress_callback: Callable[[int, int], None] | None = None, **_simulation_kwargs: Any) → np.ndarray[source]#
Run simulations using Torch MPS batch execution.
- Parameters:
- sim
MonteCarloSimulation
The simulation instance to run. Must have supports_batch=True and implement torch_batch().
- n_simulations
int
Number of simulation draws to perform.
- seed_seq
SeedSequence or None
Seed sequence for reproducible random streams.
- progress_callback
callable or None
Optional callback f(completed, total) for progress reporting.
- **_simulation_kwargs
Any
Ignored for the Torch backend (the batch method handles all parameters).
- Returns:
np.ndarray
Array of simulation results with shape (n_simulations,). Results are float64 despite MPS using float32 internally.
- Raises:
ValueError
If the simulation does not support batch execution.
NotImplementedError
If the simulation does not implement torch_batch().
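The two failure modes can be mirrored in a small validation sketch; validate_sim is hypothetical, not the backend's actual code path:

```python
def validate_sim(sim) -> None:
    # Mirrors the documented Raises section: batch support is checked
    # first, then the presence of a torch_batch() implementation.
    if not getattr(sim, "supports_batch", False):
        raise ValueError("simulation does not support batch execution")
    if not callable(getattr(sim, "torch_batch", None)):
        raise NotImplementedError("simulation does not implement torch_batch()")

class _BatchSim:
    supports_batch = True
    def torch_batch(self, n, seed_seq):
        ...

validate_sim(_BatchSim())  # passes silently
```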
Notes
The dtype conversion flow is:
torch_batch() returns float32 tensors on the MPS device.
The results are moved to CPU and promoted to float64 (numpy.ndarray).
This ensures stats engine precision while maximizing MPS performance.
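The flow can be sketched without a GPU; fake_torch_batch stands in for a simulation's torch_batch(), with NumPy float32 arrays in place of device tensors:

```python
import numpy as np

def fake_torch_batch(n: int, gen: np.random.Generator) -> np.ndarray:
    # Stand-in for torch_batch(): device-side sampling happens in float32.
    return gen.standard_normal(n, dtype=np.float32)

def run_flow(n: int, seed_seq: np.random.SeedSequence) -> np.ndarray:
    gen = np.random.Generator(np.random.Philox(seed_seq))
    batch = fake_torch_batch(n, gen)   # float32, "on device"
    return batch.astype(np.float64)    # moved to host and promoted

results = run_flow(1_000, np.random.SeedSequence(0))
```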
- __init__()[source]#
Initialize Torch MPS backend.
- Raises:
ImportError
If PyTorch is not installed.
RuntimeError
If MPS is not available on this system.
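A guard in the spirit of these exceptions can be written as follows; detect_mps is a hypothetical helper, and the package's own is_mps_available() should be preferred where available:

```python
def detect_mps() -> bool:
    # Returns False instead of raising when PyTorch is missing or the
    # installed torch build predates MPS support; mirrors the
    # ImportError / RuntimeError conditions documented above.
    try:
        import torch
    except ImportError:
        return False
    mps = getattr(torch.backends, "mps", None)
    return bool(mps is not None and mps.is_available())
```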
- classmethod __new__(*args, **kwargs)#