mcframework.stats_engine.StatsEngine#
- class mcframework.stats_engine.StatsEngine[source]#
Bases: object

Orchestrator that evaluates a set of metrics over an input array.

Given a collection of metric callables \(\{\phi_j\}_{j=1}^m\) and an array \(x \in \mathbb{R}^n\), the engine returns the dictionary
\[\{\phi_j(x, \texttt{ctx}) : j = 1,\dots,m\},\]
while recording any skipped or failed evaluations for downstream inspection.
- Parameters:
  - metrics : iterable of Metric
    Callables with a name and signature metric(x, ctx).
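A metric object therefore only needs a name and a `metric(x, ctx)` call contract. As a sketch of what the FnMetric wrapper used in the Examples below might look like (the real mcframework class may differ in details):

```python
from dataclasses import dataclass
from typing import Any, Callable

# Hypothetical minimal wrapper satisfying the "name + metric(x, ctx)"
# contract described above; illustration only, not the library source.
@dataclass
class FnMetric:
    name: str
    fn: Callable[..., Any]

    def __call__(self, x, ctx):
        # Delegate to the wrapped function with the shared context.
        return self.fn(x, ctx)
```

Any callable with a `name` attribute and this signature would be accepted on the same terms.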
Notes

All metrics receive the same StatsContext. Prefer field names that read well across multiple metrics and avoid collisions.

Examples
>>> eng = StatsEngine([FnMetric("mean", mean), FnMetric("std", std)])
>>> x = np.array([1., 2., 3.])
>>> result = eng.compute(x, StatsContext(n=len(x)))
>>> result.metrics['mean']
2.0
>>> result.metrics['std']
1.0
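The mean and std helpers in the example above are not defined by the engine itself; any functions with the `metric(x, ctx)` signature work. A stdlib-only sketch consistent with the doctest output (note the sample standard deviation, ddof=1, which gives 1.0 for [1, 2, 3]):

```python
import statistics

# Hypothetical metric functions matching the metric(x, ctx) signature;
# the names mirror the doctest but are assumptions for illustration.
def mean(x, ctx):
    return statistics.fmean(x)

def std(x, ctx):
    # Sample standard deviation (divides by n - 1), matching the 1.0
    # shown in the doctest for [1., 2., 3.].
    return statistics.stdev(x)
```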
Methods

compute(x[, ctx, select, **kwargs]) — Evaluate all registered metrics on x.

- compute(x: ndarray, ctx: StatsContext | None = None, select: Sequence[str] | None = None, **kwargs: Any) → ComputeResult[source]#
  Evaluate all registered metrics on x.
- Parameters:
  - x : ndarray
    Sample values.
  - ctx : StatsContext, optional
    Context parameters. If None, one is built from **kwargs.
  - select : sequence of str, optional
    If given, compute only the metrics with these names.
  - **kwargs : Any
    Used to build a StatsContext if ctx is None. Required: 'n' (int). Optional: 'confidence', 'ci_method', 'percentiles', etc.
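The ctx-or-kwargs fallback described above can be sketched as follows. The StatsContext field names here are taken from the parameter description ('n', 'confidence', 'ci_method', 'percentiles'); the defaults are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Any, Optional, Tuple

# Hypothetical StatsContext shape; field defaults are illustrative only.
@dataclass
class StatsContext:
    n: int
    confidence: float = 0.95
    ci_method: str = "normal"
    percentiles: Tuple[float, ...] = (2.5, 97.5)

def resolve_ctx(ctx: Optional[StatsContext], **kwargs: Any) -> StatsContext:
    """Return ctx unchanged, or build one from kwargs ('n' is required)."""
    if ctx is not None:
        return ctx
    if "n" not in kwargs:
        raise ValueError("'n' is required to build a StatsContext")
    return StatsContext(**kwargs)
```

Passing an explicit ctx always wins; kwargs are only consulted when ctx is None.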
- Returns:
  ComputeResult
    Result object containing:
    - metrics: Successfully computed metric values.
    - skipped: List of (metric_name, reason) pairs for skipped metrics.
    - errors: List of (metric_name, error_message) pairs for failed metrics.
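Because compute records skips and failures instead of raising, one misbehaving metric does not abort the run. The bookkeeping behind the three ComputeResult fields might look like this sketch (assumed behavior, not the actual mcframework implementation):

```python
from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple

def run_metrics(
    metric_fns: Dict[str, Callable],
    x: Any,
    ctx: Any,
    select: Optional[Sequence[str]] = None,
) -> Tuple[Dict[str, Any], List[Tuple[str, str]], List[Tuple[str, str]]]:
    """Evaluate each metric independently, recording skips and errors."""
    metrics: Dict[str, Any] = {}
    skipped: List[Tuple[str, str]] = []
    errors: List[Tuple[str, str]] = []
    for name, fn in metric_fns.items():
        if select is not None and name not in select:
            skipped.append((name, "not selected"))
            continue
        try:
            metrics[name] = fn(x, ctx)
        except Exception as exc:
            # Failures are captured per metric rather than raised.
            errors.append((name, str(exc)))
    return metrics, skipped, errors
```

Callers can then inspect the skipped and errors lists after the fact rather than wrapping compute in try/except.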