mcframework.backends.TorchMPSBackend#

class mcframework.backends.TorchMPSBackend[source]#

Bases: object

Torch MPS batch execution backend for Apple Silicon GPUs.

Uses PyTorch with the MPS (Metal Performance Shaders) backend for GPU-accelerated execution on Apple Silicon Macs, leveraging the unified memory architecture. Simulations must implement torch_batch() and set supports_batch to True to enable GPU-accelerated batch execution.

See also

is_mps_available()

Check MPS availability before instantiation.

TorchCPUBackend

Fallback for non-Apple systems.

Notes

RNG architecture. Uses explicit Generator objects seeded from SeedSequence via spawn(). This preserves:

  • Deterministic parallel streams (best-effort on MPS)

  • Counter-based RNG (Philox) semantics

  • Correct statistical structure

Never uses manual_seed() (global state).
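The seeding pattern above can be sketched with NumPy alone; the torch.Generator wiring shown in the comment is an assumption about the backend's internals, not the actual implementation:

```python
import numpy as np

# Spawn independent, reproducible child streams from one root SeedSequence.
root = np.random.SeedSequence(42)
children = root.spawn(3)
seeds = [int(child.generate_state(1)[0]) for child in children]

# Hypothetical backend wiring (assumed):
# for seed in seeds:
#     gen = torch.Generator(device="mps")
#     gen.manual_seed(seed)  # per-Generator seeding, not global torch.manual_seed()

# Re-spawning from the same root reproduces the same seeds.
seeds_again = [int(c.generate_state(1)[0]) for c in np.random.SeedSequence(42).spawn(3)]
assert seeds == seeds_again
```

This is why the backend can offer deterministic parallel streams: the seeds themselves are fully reproducible even when downstream GPU execution is not bitwise stable.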

Dtype policy. MPS performs best with float32, so computation runs in float32 on the GPU; results are promoted to float64 on the CPU for stats engine compatibility.
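The precision trade-off can be illustrated without torch: a value computed in float32 keeps its float32 rounding even after promotion to float64, so the promotion exists for dtype consistency, not to recover precision:

```python
import numpy as np

x32 = np.float32(0.1)   # float32 carries ~7 decimal digits
x64 = np.float64(x32)   # promotion preserves the float32 rounding
exact = np.float64(0.1)

assert x64 != exact                # rounding from float32 remains
assert abs(x64 - exact) < 1e-7     # but the error is tiny
assert x64.dtype == np.float64
```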

MPS determinism caveat. Torch MPS preserves RNG stream structure but does not guarantee bitwise reproducibility due to:

  • Metal backend scheduling variations

  • float32 arithmetic rounding

  • GPU kernel execution order

Statistical properties (mean, variance, CI coverage) remain correct despite potential bitwise differences between runs. See TestMPSDeterminism in tests/test_torch_backend.py for the corresponding tests.
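Given this caveat, runs should be compared with statistical tolerances rather than bitwise equality. A minimal sketch, with NumPy standing in for two MPS runs whose draws drift slightly:

```python
import numpy as np

# Two runs whose individual draws differ slightly (as MPS runs may),
# yet whose statistical properties agree.
rng = np.random.default_rng(7)
run_a = rng.normal(0.0, 1.0, 100_000)
run_b = run_a + rng.normal(0.0, 1e-7, 100_000)  # simulated bitwise drift

assert not np.array_equal(run_a, run_b)            # bitwise: different
assert abs(run_a.mean() - run_b.mean()) < 1e-4     # statistics: agree
assert abs(run_a.std() - run_b.std()) < 1e-4
```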

Examples

>>> if is_mps_available():
...     backend = TorchMPSBackend()
...     results = backend.run(sim, n_simulations=1_000_000, seed_seq=seed_seq)
...

Methods

run

Run simulations using Torch MPS batch execution.

run(sim: MonteCarloSimulation, n_simulations: int, seed_seq: np.random.SeedSequence | None, progress_callback: Callable[[int, int], None] | None = None, **_simulation_kwargs: Any) np.ndarray[source]#

Run simulations using Torch MPS batch execution.

Parameters:
simMonteCarloSimulation

The simulation instance to run. Must have supports_batch = True and implement torch_batch().

n_simulationsint

Number of simulation draws to perform.

seed_seqSeedSequence or None

Seed sequence for reproducible random streams.

progress_callbackcallable() or None

Optional callback f(completed, total) for progress reporting.

**_simulation_kwargsAny

Ignored for Torch backend (batch method handles all parameters).

Returns:
np.ndarray

Array of simulation results with shape (n_simulations,). Results are float64 despite MPS using float32 internally.

Raises:
ValueError

If the simulation does not support batch execution.

NotImplementedError

If the simulation does not implement torch_batch().

Notes

The dtype conversion flow is:

  1. torch_batch() returns a float32 tensor on the MPS device.

  2. The tensor is moved to the CPU via detach() and cpu().

  3. It is promoted to float64 via to().

  4. It is converted to a float64 ndarray via numpy().

This ensures stats engine precision while maximizing MPS performance.
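The conversion chain can be sketched as follows; a CPU float32 tensor stands in for the MPS batch result, and the code falls back to NumPy when PyTorch is not installed:

```python
import numpy as np

try:
    import torch
    # A float32 tensor stands in for an MPS batch result.
    batch = torch.rand(4, dtype=torch.float32)
    # detach -> cpu -> promote to float64 -> ndarray
    results = batch.detach().cpu().to(torch.float64).numpy()
except ImportError:
    # NumPy fallback mirroring the same float32 -> float64 promotion.
    results = np.random.default_rng(0).random(4, dtype=np.float32).astype(np.float64)

assert results.dtype == np.float64
assert results.shape == (4,)
```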

__init__()[source]#

Initialize Torch MPS backend.

Raises:
ImportError

If PyTorch is not installed.

RuntimeError

If MPS is not available on this system.
