mcframework.backends.TorchBackend

- class mcframework.backends.TorchBackend
Bases: object

Factory class that creates and wraps the appropriate device-specific backend (TorchCPUBackend, TorchMPSBackend, or TorchCUDABackend) based on the device parameter.

- Parameters:
  - device : {"cpu", "mps", "cuda"}, default "cpu"
    Torch device for computation:
    - "cpu" — uses TorchCPUBackend
    - "mps" — uses TorchMPSBackend (Apple Silicon)
    - "cuda" — uses TorchCUDABackend (NVIDIA, stub)
See also

TorchCPUBackend : Direct CPU backend access.
TorchMPSBackend : Direct MPS backend access.
TorchCUDABackend : Direct CUDA backend access.
Notes

Delegation model. This class delegates all execution to the device-specific backend. It exists to provide a unified interface and for backward compatibility.

Device selection. The backend is selected at construction time based on the device parameter. Device availability is validated during construction.

Examples
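The delegation model can be pictured with a minimal sketch. All class and method names below are hypothetical stand-ins for illustration, not the actual mcframework internals:

```python
# Illustrative sketch of a construction-time factory/delegation pattern.
# All names here are hypothetical stand-ins, not mcframework internals.

class _CPUBackend:
    def run(self, n_simulations):
        return f"cpu: {n_simulations} draws"

class _MPSBackend:
    def run(self, n_simulations):
        return f"mps: {n_simulations} draws"

class Backend:
    """Unified wrapper: picks a device backend once, at construction."""
    _REGISTRY = {"cpu": _CPUBackend, "mps": _MPSBackend}

    def __init__(self, device="cpu"):
        try:
            # The device name (and, in the real class, device availability)
            # is validated here, so errors surface early.
            self._impl = self._REGISTRY[device]()
        except KeyError:
            raise ValueError(f"unknown device: {device!r}") from None

    def run(self, n_simulations):
        # Every call is forwarded to the device-specific backend.
        return self._impl.run(n_simulations)

print(Backend("cpu").run(100))
```

Selecting the backend once at construction, rather than per call, keeps the hot `run()` path free of device dispatch.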
>>> # CPU execution
>>> backend = TorchBackend(device="cpu")
>>> results = backend.run(sim, n_simulations=100000, seed_seq=seed_seq)

>>> # Apple Silicon GPU
>>> backend = TorchBackend(device="mps")
>>> results = backend.run(sim, n_simulations=1000000, seed_seq=seed_seq)

>>> # NVIDIA GPU (CUDA 12.x with CuPy for CuRAND)
>>> backend = TorchBackend(device="cuda")
Methods

run(sim, n_simulations, seed_seq[, ...])
    Run simulations using the device-specific Torch backend.
- run(sim: MonteCarloSimulation, n_simulations: int, seed_seq: np.random.SeedSequence | None, progress_callback: Callable[[int, int], None] | None = None, **simulation_kwargs: Any) → np.ndarray

  Run simulations using the device-specific Torch backend.
- Parameters:
  - sim : MonteCarloSimulation
    The simulation instance to run. Must have supports_batch = True and implement torch_batch().
  - n_simulations : int
    Number of simulation draws to perform.
  - seed_seq : SeedSequence or None
    Seed sequence for reproducible random streams.
  - progress_callback : callable or None
    Optional callback f(completed, total) for progress reporting.
  - **simulation_kwargs : Any
    Ignored for the Torch backend (the batch method handles all parameters).
- Returns:
  np.ndarray
    Array of simulation results with shape (n_simulations, ...).
- Raises:
  ValueError
    If the simulation does not support batch execution.
  NotImplementedError
    If the simulation does not implement torch_batch().
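These two error cases imply a precondition check along the following lines; a hedged sketch using the attribute and method names from the parameter description (supports_batch, torch_batch), with the helper itself being hypothetical:

```python
# Sketch of the precondition checks implied by the Raises section above.
# Attribute/method names follow the docs; check_batch_support is hypothetical.

def check_batch_support(sim):
    if not getattr(sim, "supports_batch", False):
        raise ValueError("simulation does not support batch execution")
    if not callable(getattr(sim, "torch_batch", None)):
        raise NotImplementedError("simulation does not implement torch_batch()")

class NoBatchSim:
    supports_batch = False  # triggers ValueError

class BatchSim:
    supports_batch = True
    def torch_batch(self, n_simulations):
        return [0.0] * n_simulations  # placeholder batch result

check_batch_support(BatchSim())  # passes silently
```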
- __init__(device: str = 'cpu', **device_kwargs: Any)

  Initialize Torch backend with specified device.
- Parameters:
  - device : {"cpu", "mps", "cuda"}, default "cpu"
    Torch device for computation.
  - **device_kwargs : Any
    Device-specific configuration options. CUDA options (ignored for cpu/mps):
    - device_id : int, default 0 — CUDA device index
    - use_curand : bool, default False — Use cuRAND via CuPy
    - batch_size : int or None — Fixed batch size (None = adaptive)
    - use_streams : bool, default True — Enable CUDA streams
- Raises:
  ImportError
    If PyTorch is not installed.
  ValueError
    If the device type is not recognized.
  RuntimeError
    If the requested device is not available.
Examples

>>> # CPU (no kwargs needed)
>>> backend = TorchBackend(device="cpu")

>>> # MPS (no kwargs needed)
>>> backend = TorchBackend(device="mps")

>>> # CUDA with default settings
>>> backend = TorchBackend(device="cuda")

>>> # CUDA with custom settings
>>> backend = TorchBackend(
...     device="cuda",
...     device_id=0,
...     use_curand=True,
...     batch_size=100_000,
...     use_streams=True,
... )
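Because construction can raise ImportError, ValueError, or RuntimeError, callers may want to try devices in order of preference. A sketch of that fallback pattern; the make_backend helper is hypothetical, with the backend factory injected so the sketch works with any backend class:

```python
# Hypothetical fallback helper: try devices in order of preference and
# return the first backend that constructs successfully.

def make_backend(factory, preferred=("cuda", "mps", "cpu")):
    for device in preferred:
        try:
            return factory(device=device)
        except (ImportError, ValueError, RuntimeError):
            continue  # device unavailable or unsupported; try the next one
    raise RuntimeError(f"no usable device among {preferred}")

# Stub factory standing in for TorchBackend, for demonstration only:
# it pretends only the CPU is available on this machine.
def stub_factory(device="cpu"):
    if device != "cpu":
        raise RuntimeError(f"{device} not available on this machine")
    return f"backend[{device}]"

print(make_backend(stub_factory))  # falls through cuda and mps to cpu
```

In real use the factory argument would be TorchBackend itself, e.g. `make_backend(TorchBackend)`.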
- classmethod __new__(*args, **kwargs)