mcframework.stats_engine.StatsEngine#

class mcframework.stats_engine.StatsEngine[source]#

Bases: object

Orchestrator that evaluates a set of metrics over an input array.

Given a collection of metric callables \(\{\phi_j\}_{j=1}^m\) and an array \(x \in \mathbb{R}^n\), the engine returns the dictionary

\[\{\phi_j(x, \texttt{ctx}) : j = 1,\dots,m\},\]

while recording any skipped/failed evaluations for downstream inspection.

Parameters:
metrics : iterable of Metric

Callables with a name and signature metric(x, ctx).

Notes

All metrics receive the same StatsContext. Prefer field names that read well across multiple metrics and avoid collisions.
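The shared-context loop can be sketched as follows. This is a minimal illustration of the pattern, not the library's implementation; the stand-in `StatsContext` and the `(name, fn)` pairs are assumptions modeled on the signatures above.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Tuple
import numpy as np

@dataclass
class StatsContext:
    # Minimal stand-in for the real StatsContext; only 'n' is modeled here.
    n: int
    extra: Dict[str, Any] = field(default_factory=dict)

def evaluate_all(metrics: List[Tuple[str, Callable]], x: np.ndarray,
                 ctx: StatsContext) -> Dict[str, Any]:
    # Every metric callable receives the SAME ctx instance -- no copies,
    # so a field name must make sense to all registered metrics.
    out: Dict[str, Any] = {}
    for name, fn in metrics:
        out[name] = fn(x, ctx)
    return out

x = np.array([1.0, 2.0, 3.0])
ctx = StatsContext(n=len(x))
values = evaluate_all(
    [("mean", lambda a, c: float(np.mean(a))),
     ("max", lambda a, c: float(np.max(a)))],
    x, ctx)
```

Because the context is shared rather than copied, a metric that mutates `ctx` can affect later metrics; keeping metrics read-only with respect to the context avoids ordering surprises.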

Examples

>>> eng = StatsEngine([FnMetric("mean", mean), FnMetric("std", std)])
>>> x = np.array([1., 2., 3.])
>>> result = eng.compute(x, StatsContext(n=len(x)))
>>> result.metrics['mean']
2.0
>>> result.metrics['std']
1.0

Methods

compute

Evaluate all registered metrics on x.

compute(x: ndarray, ctx: StatsContext | None = None, select: Sequence[str] | None = None, **kwargs: Any) → ComputeResult[source]#

Evaluate all registered metrics on x.

Parameters:
x : ndarray

Sample values.

ctx : StatsContext, optional

Context parameters. If None, one is built from **kwargs.

select : sequence of str, optional

If given, compute only the metrics with these names.

**kwargs : Any

Used to build a StatsContext if ctx is None. Required: 'n' (int). Optional: 'confidence', 'ci_method', 'percentiles', etc.

Returns:
ComputeResult

Result object containing:

  • metrics: Successfully computed metric values.

  • skipped: List of (metric_name, reason) for skipped metrics.

  • errors: List of (metric_name, error_message) for failed metrics.
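The three-way partition of the result can be sketched as below. This is an illustrative sketch only: how the real engine decides what counts as "skipped" (here, names excluded by `select`) and the exact reason strings are assumptions, not documented behavior.

```python
from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple
import numpy as np

def compute_sketch(
    metrics: Dict[str, Callable],
    x: np.ndarray,
    select: Optional[Sequence[str]] = None,
) -> Tuple[Dict[str, Any], List[Tuple[str, str]], List[Tuple[str, str]]]:
    # A failing metric is recorded in `errors` instead of aborting the
    # whole run; unselected metrics are recorded in `skipped`.
    values: Dict[str, Any] = {}
    skipped: List[Tuple[str, str]] = []
    errors: List[Tuple[str, str]] = []
    for name, fn in metrics.items():
        if select is not None and name not in select:
            skipped.append((name, "not selected"))
            continue
        try:
            values[name] = fn(x)
        except Exception as exc:
            errors.append((name, str(exc)))
    return values, skipped, errors

x = np.array([1.0, 2.0, 3.0])
metrics = {
    "mean": lambda a: float(np.mean(a)),
    "bad": lambda a: 1 / 0,  # deliberately failing metric
}
values, skipped, errors = compute_sketch(metrics, x)
```

Collecting failures per metric, rather than raising on the first one, is what makes the `errors` list useful for downstream inspection: one broken metric does not cost you the values of the others.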

__init__(metrics: Iterable[Metric])[source]#