mcframework.backends.TorchBackend#

class mcframework.backends.TorchBackend[source]#

Bases: object

Factory class that creates and wraps the appropriate device-specific backend (TorchCPUBackend, TorchMPSBackend, or TorchCUDABackend) based on the device parameter.

Parameters:
device{“cpu”, “mps”, “cuda”}, default "cpu"

Torch device for computation.

See also

TorchCPUBackend

Direct CPU backend access.

TorchMPSBackend

Direct MPS backend access.

TorchCUDABackend

Direct CUDA backend access.

Notes

Delegation model. This class delegates all execution to the device-specific backend. It exists to provide a unified interface and for backward compatibility.

Device selection. The backend is selected at construction time based on the device parameter. Device availability is validated during construction.
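The delegation model described above can be sketched in plain Python. The classes below are hypothetical stand-ins for illustration only, not mcframework's actual implementations:

```python
# Illustrative sketch of the factory/delegation pattern: a wrapper class
# selects a device-specific backend at construction time and forwards all
# execution to it. Backend classes here are hypothetical stand-ins.

class _CPUBackend:
    device = "cpu"

    def run(self, n_simulations):
        return f"ran {n_simulations} sims on cpu"


class _MPSBackend:
    device = "mps"

    def run(self, n_simulations):
        return f"ran {n_simulations} sims on mps"


class FactoryBackend:
    """Selects a device-specific backend at construction and delegates to it."""

    _registry = {"cpu": _CPUBackend, "mps": _MPSBackend}

    def __init__(self, device="cpu"):
        if device not in self._registry:
            # Mirrors the ValueError raised for an unrecognized device type.
            raise ValueError(f"unrecognized device: {device!r}")
        # Device availability would also be validated here in the real backend.
        self._impl = self._registry[device]()

    def run(self, n_simulations):
        # All execution is delegated to the device-specific backend.
        return self._impl.run(n_simulations)
```

Because the wrapper holds a single unified interface, callers never need to know which concrete backend class was instantiated.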

Examples

>>> # CPU execution
>>> backend = TorchBackend(device="cpu")
>>> results = backend.run(sim, n_simulations=100000, seed_seq=seed_seq)
>>> # Apple Silicon GPU
>>> backend = TorchBackend(device="mps")
>>> results = backend.run(sim, n_simulations=1000000, seed_seq=seed_seq)
>>> # NVIDIA GPU (CUDA 12.x with CuPy for cuRAND)
>>> backend = TorchBackend(device="cuda")

Methods

run

Run simulations using the device-specific Torch backend.

run(sim: MonteCarloSimulation, n_simulations: int, seed_seq: np.random.SeedSequence | None, progress_callback: Callable[[int, int], None] | None = None, **simulation_kwargs: Any) → np.ndarray[source]#

Run simulations using the device-specific Torch backend.

Parameters:
simMonteCarloSimulation

The simulation instance to run. Must have supports_batch = True and implement torch_batch().

n_simulationsint

Number of simulation draws to perform.

seed_seqSeedSequence or None

Seed sequence for reproducible random streams.

progress_callbackcallable or None

Optional callback f(completed, total) for progress reporting.

**simulation_kwargsAny

Ignored for Torch backend (batch method handles all parameters).

Returns:
np.ndarray

Array of simulation results with shape (n_simulations, ...).

Raises:
ValueError

If the simulation does not support batch execution.

NotImplementedError

If the simulation does not implement torch_batch().
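A progress callback is any plain callable matching the f(completed, total) signature described above. A minimal sketch (the callback name and the recording list are illustrative; the real backend decides when and how often it invokes the callback):

```python
# Hypothetical progress callback with the f(completed, total) signature.
# The `updates` list records each reported percentage for illustration;
# a real callback might print or update a progress bar instead.

updates = []

def report_progress(completed, total):
    """Record the percentage of simulation draws completed so far."""
    pct = 100.0 * completed / total
    updates.append(pct)
    # print(f"{completed}/{total} draws ({pct:.0f}%)")

# The backend would call this during run(); here we simulate three
# batches completing out of a total of 300 draws.
for done in (100, 200, 300):
    report_progress(done, 300)
```

The callback would then be passed as `backend.run(sim, n_simulations=300, seed_seq=seed_seq, progress_callback=report_progress)`.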

__init__(device: str = 'cpu', **device_kwargs: Any)[source]#

Initialize Torch backend with specified device.

Parameters:
device{“cpu”, “mps”, “cuda”}, default "cpu"

Torch device for computation.

**device_kwargsAny

Device-specific configuration options:

CUDA options (ignored for cpu/mps):

  • device_id : int, default 0 — CUDA device index

  • use_curand : bool, default False — Use cuRAND via CuPy

  • batch_size : int or None — Fixed batch size (None = adaptive)

  • use_streams : bool, default True — Enable CUDA streams

Raises:
ImportError

If PyTorch is not installed.

ValueError

If the device type is not recognized.

RuntimeError

If the requested device is not available.

Examples

>>> # CPU (no kwargs needed)
>>> backend = TorchBackend(device="cpu")
>>> # MPS (no kwargs needed)
>>> backend = TorchBackend(device="mps")
>>> # CUDA with default settings
>>> backend = TorchBackend(device="cuda")
>>> # CUDA with custom settings
>>> backend = TorchBackend(
...     device="cuda",
...     device_id=0,
...     use_curand=True,
...     batch_size=100_000,
...     use_streams=True,
... )