mcframework.stats_engine.StatsContext

- class mcframework.stats_engine.StatsContext [source]

Bases: object

Shared, explicit configuration for statistics and CI computations.
The context keeps track of three recurring quantities:

- Confidence level \(\gamma = \texttt{confidence}\) with tail mass \(\alpha = 1 - \gamma\).
- Effective sample size \(n_\text{eff}\), obtained from eff_n() and used by every finite-sample adjustment.
- Requested quantiles \(\mathcal{P} = \{p_i\}\) that drive percentile metrics.
Throughout the module we repeatedly use the identities
\[\alpha = 1 - \gamma, \qquad q_\text{low} = 100 \frac{\alpha}{2}, \qquad q_\text{high} = 100 \left(1 - \frac{\alpha}{2}\right),\]

which are provided via the alpha() and q_bound() helpers.

Attributes:
- n : int
  Declared sample size (fallback when NaNs are not omitted).
- confidence : float, default 0.95
  Confidence level in \((0, 1)\).
- ci_method : {"auto", "z", "t", "bootstrap"}, default "auto"
  Strategy for ci_mean(). If "auto", use Student-t when \(n_\text{eff} < 30\), else the normal z interval.
- percentiles : tuple of int, default (5, 25, 50, 75, 95)
  Percentiles to compute in percentiles().
- nan_policy : {"propagate", "omit"}, default "propagate"
  If "omit", drop non-finite values before all computations.
- target : float, optional
  Optional target value (e.g., true mean) for bias/MSE/Markov metrics.
- eps : float, optional
  Tolerance used by Chebyshev sizing and Markov bounds, when required.
- ddof : int, default 1
  Degrees of freedom for numpy.std() (1 => Bessel correction).
- ess : int, optional
  Effective sample size override (e.g., from MCMC diagnostics).
- rng : int or numpy.random.Generator, optional
  Seed or Generator used by bootstrap methods for reproducibility.
- n_bootstrap : int, default 10000
  Number of bootstrap resamples for ci_mean_bootstrap().
- bootstrap : {"percentile", "bca"}, default "percentile"
  Bootstrap flavor for ci_mean_bootstrap().
- block_size : int, optional
  Reserved for future block bootstrap support.
Notes
The context is immutable by convention at runtime; prefer with_overrides() to construct a modified copy with a small set of changed fields.

Examples
>>> ctx = StatsContext(n=5000, confidence=0.95, ci_method=CIMethod.auto, nan_policy=NanPolicy.omit)
>>> round(ctx.alpha, 2)
0.05
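The confidence/tail-mass identities above can be checked in a few lines of plain Python. This is a self-contained sketch of the documented formulas only; `tail_quantities` is an illustrative name, not part of mcframework's alpha()/q_bound() API:

```python
# Sketch of the documented identities:
#   alpha = 1 - gamma, q_low = 100*alpha/2, q_high = 100*(1 - alpha/2)

def tail_quantities(confidence: float) -> tuple[float, float, float]:
    alpha = 1.0 - confidence          # tail mass
    q_low = 100.0 * alpha / 2.0       # lower percentile bound
    q_high = 100.0 * (1.0 - alpha / 2.0)  # upper percentile bound
    return alpha, q_low, q_high

alpha, q_low, q_high = tail_quantities(0.95)
print(round(alpha, 2), round(q_low, 1), round(q_high, 1))  # 0.05 2.5 97.5
```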
Methods

- eff_n(): Effective sample size \(n_\text{eff}\) used by CI calculations.
- Return a NumPy Generator initialized from rng.
- q_bound(): Percentile bounds corresponding to the current confidence.
- with_overrides(): Return a shallow copy with selected fields replaced.
- eff_n(observed_len: int, finite_count: int | None = None) -> int [source]

Effective sample size \(n_\text{eff}\) used by CI calculations.

Priority is: 1) explicit ess; 2) count of finite values if nan_policy="omit"; 3) declared n (fallback); else observed_len. In symbols,
\[\begin{split}n_\text{eff} = \begin{cases} \texttt{ess}, & \text{if provided},\\[4pt] \#\{i : x_i \text{ finite}\}, & \text{if nan policy = ``omit''},\\[4pt] \texttt{n}, & \text{otherwise}. \end{cases}\end{split}\]
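The priority cascade above can be sketched as a standalone function. Purely for illustration, ess, nan_policy, and n are passed as arguments here (in the library they are fields of the context), and `eff_n_sketch` is a hypothetical name, not library code:

```python
def eff_n_sketch(observed_len, finite_count=None, *,
                 ess=None, nan_policy="propagate", n=None):
    """Illustrative re-implementation of the documented priority cascade."""
    if ess is not None:
        return ess                  # 1) explicit ESS override wins
    if nan_policy == "omit" and finite_count is not None:
        return finite_count         # 2) count of finite values
    if n is not None:
        return n                    # 3) declared sample size (fallback)
    return observed_len             # else: length of what was observed

print(eff_n_sketch(1000, ess=250))                              # 250
print(eff_n_sketch(1000, finite_count=980, nan_policy="omit"))  # 980
print(eff_n_sketch(1000, n=5000))                               # 5000
print(eff_n_sketch(1000))                                       # 1000
```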
- q_bound() -> tuple[float, float] [source]
Percentile bounds corresponding to the current confidence.
For \(\alpha = 1 - \text{confidence}\), returns \((100\alpha/2,\; 100(1-\alpha/2))\).
- with_overrides(**changes) -> StatsContext [source]
Return a shallow copy with selected fields replaced.
- Parameters:
  - ``**changes``
    Field overrides passed to dataclasses.replace().
- Returns:
  - StatsContext
    Modified copy.
Examples
>>> ctx = StatsContext(n=1000)
>>> ctx2 = ctx.with_overrides(confidence=0.9, n_bootstrap=2000)
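The copy-with-changes pattern behind with_overrides() is ordinary dataclasses.replace() on a frozen dataclass. A minimal sketch under that assumption, using a toy MiniCtx stand-in (not part of mcframework):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MiniCtx:
    # Toy stand-in for StatsContext; fields mirror the example above.
    n: int
    confidence: float = 0.95
    n_bootstrap: int = 10000

    def with_overrides(self, **changes) -> "MiniCtx":
        # Shallow copy with selected fields replaced, via dataclasses.replace().
        return replace(self, **changes)

ctx = MiniCtx(n=1000)
ctx2 = ctx.with_overrides(confidence=0.9, n_bootstrap=2000)
print(ctx2.confidence, ctx2.n_bootstrap, ctx.confidence)  # 0.9 2000 0.95
```

Because the dataclass is frozen, the original context is untouched and the override returns a distinct object, which matches the "immutable by convention" note above.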
- classmethod __new__(*args, **kwargs)
- __init__(n: int, confidence: float = 0.95, ci_method: CIMethod = CIMethod.auto, percentiles: tuple[int, ...] = (5, 25, 50, 75, 95), nan_policy: NanPolicy = NanPolicy.propagate, target: float | None = None, eps: float | None = None, ddof: int = 1, ess: int | None = None, rng: int | Generator | None = None, n_bootstrap: int = 10000, bootstrap: BootstrapMethod = BootstrapMethod.percentile) -> None