invode.error_functions
Error Functions Module (erf)
This module provides a collection of common error/loss functions for ODE parameter optimization. These functions are designed to work with the invode optimization framework and provide standardized metrics for model fitting.
All error functions follow the signature: error_func(y_pred) -> float where y_pred is the model prediction and the function returns a scalar error value.
- class invode.error_functions.ChiSquaredMSE(data: ndarray, sigma: ndarray | float, normalize: bool = False)[source]
Bases: ErrorFunction
Chi-squared weighted Mean Squared Error.
Computes: χ² = Σ((y_pred - y_data)² / σ²)
This error function weights residuals by their expected variance (σ²), making it appropriate when different data points have different uncertainties.
- Parameters:
data (np.ndarray) – Reference/observed data.
sigma (np.ndarray or float) – Standard deviation/uncertainty for each data point. If float, assumes constant uncertainty across all points.
normalize (bool, optional) – If True, normalize by number of data points (default False).
Examples
>>> data = np.array([1.0, 2.0, 3.0, 4.0])
>>> sigma = np.array([0.1, 0.2, 0.1, 0.3])  # Different uncertainties
>>> chi2_func = ChiSquaredMSE(data, sigma=sigma)
>>> prediction = np.array([1.1, 2.2, 2.8, 4.1])
>>> error = chi2_func(prediction)
>>> print(f"Chi-squared: {error:.4f}")

>>> # Constant uncertainty
>>> chi2_func_const = ChiSquaredMSE(data, sigma=0.2)
- class invode.error_functions.ErrorFunction(data: ndarray, **kwargs)[source]
Bases: object
Base class for error functions with data storage and validation.
This class handles common functionality like data storage, validation, and provides a consistent interface for all error functions.
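As an illustration of this pattern, the sketch below shows how a base class can own data storage and validation while subclasses supply only the metric. The names `SketchErrorFunction` and `SketchMSE` are hypothetical; invode's actual base class may differ in detail:

```python
import numpy as np

# Minimal sketch of the ErrorFunction pattern described above
# (illustrative only, not invode's implementation).
class SketchErrorFunction:
    """Stores reference data and validates it on construction."""

    def __init__(self, data):
        data = np.asarray(data, dtype=float)
        if data.size == 0:
            raise ValueError("data must not be empty")
        self.data = data

    def __call__(self, y_pred):
        raise NotImplementedError("subclasses implement the error metric")


class SketchMSE(SketchErrorFunction):
    """MSE = (1/n) * sum((y_pred - data)**2)"""

    def __call__(self, y_pred):
        residuals = np.asarray(y_pred, dtype=float) - self.data
        return float(np.mean(residuals ** 2))
```

Because every subclass only implements `__call__(y_pred) -> float`, custom metrics remain interchangeable with the built-in ones in an optimization loop.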
- class invode.error_functions.HuberLoss(data: ndarray, delta: float = 1.0)[source]
Bases: ErrorFunction
Huber Loss function (robust regression).
Combines the best properties of MSE and MAE:
- Quadratic for small errors (|error| <= delta)
- Linear for large errors (|error| > delta)
This makes it less sensitive to outliers than MSE while maintaining smoothness for optimization.
- Parameters:
data (np.ndarray) – Reference/observed data.
delta (float, optional) – Threshold for switching between quadratic and linear loss. Default is 1.0.
Examples
>>> data = np.array([1.0, 2.0, 3.0, 4.0])
>>> huber_func = HuberLoss(data, delta=0.5)
>>> prediction = np.array([1.1, 2.2, 2.8, 4.1])
>>> error = huber_func(prediction)
>>> print(f"Huber Loss: {error:.4f}")
- class invode.error_functions.LogLikelihood(data: ndarray, sigma: float, **kwargs)[source]
Bases: ErrorFunction
Gaussian Log-Likelihood Error Function.
Computes the log-likelihood of the predicted values y_pred under the assumption that the observed data follows a Gaussian distribution with mean equal to y_pred and constant variance σ².
The log-likelihood is given by:
LL(μ, σ²) = -n/2 * log(2πσ²) - 1/(2σ²) * Σ(yᵢ - μ)²
- Parameters:
data (np.ndarray) – Observed data points.
sigma (float) – Standard deviation of the Gaussian noise. Must be positive.
- Raises:
ValueError – If sigma is not positive or if the data array is empty.
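The formula above can be evaluated directly with NumPy. The helper below is a standalone sketch of the math with μ = y_pred (the function name `gaussian_log_likelihood` is hypothetical, not invode's API), including the same positivity check on sigma:

```python
import numpy as np

def gaussian_log_likelihood(y_pred, data, sigma):
    """Evaluate LL(mu, sigma^2) = -n/2 * log(2*pi*sigma^2) - SS / (2*sigma^2),
    where SS is the sum of squared residuals and mu = y_pred."""
    if sigma <= 0:
        raise ValueError("sigma must be positive")
    data = np.asarray(data, dtype=float)
    if data.size == 0:
        raise ValueError("data must not be empty")
    y_pred = np.asarray(y_pred, dtype=float)
    n = data.size
    ss = np.sum((data - y_pred) ** 2)
    return float(-0.5 * n * np.log(2 * np.pi * sigma ** 2) - ss / (2 * sigma ** 2))
```

A perfect fit with σ = 1 gives LL = -(n/2)·log(2π), and any mismatch between prediction and data can only lower the likelihood.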
- class invode.error_functions.MAE(data: ndarray, **kwargs)[source]
Bases: ErrorFunction
Mean Absolute Error (MAE) function.
Computes: MAE = (1/n) * Σ|y_pred - y_data|
MAE is more robust to outliers than MSE since it doesn’t square the residuals. It provides a linear penalty for errors.
Examples
>>> data = np.array([1.0, 2.0, 3.0, 4.0])
>>> mae_func = MAE(data)
>>> prediction = np.array([1.1, 2.2, 2.8, 4.1])
>>> error = mae_func(prediction)
>>> print(f"MAE: {error:.4f}")
- class invode.error_functions.MSE(data: ndarray, **kwargs)[source]
Bases: ErrorFunction
Mean Squared Error (MSE) function.
Computes: MSE = (1/n) * Σ(y_pred - y_data)²
This is the most common error function for continuous regression problems. It penalizes large errors more heavily than small ones due to the squaring.
Examples
>>> import numpy as np
>>> data = np.array([1.0, 2.0, 3.0, 4.0])
>>> mse_func = MSE(data)
>>> prediction = np.array([1.1, 2.2, 2.8, 4.1])
>>> error = mse_func(prediction)
>>> print(f"MSE: {error:.4f}")
- class invode.error_functions.RMSE(data: ndarray, **kwargs)[source]
Bases: ErrorFunction
Root Mean Squared Error (RMSE) function.
Computes: RMSE = √((1/n) * Σ(y_pred - y_data)²)
RMSE is in the same units as the original data, making it more interpretable than MSE while maintaining the same optimization properties.
Examples
>>> data = np.array([1.0, 2.0, 3.0, 4.0])
>>> rmse_func = RMSE(data)
>>> prediction = np.array([1.1, 2.2, 2.8, 4.1])
>>> error = rmse_func(prediction)
>>> print(f"RMSE: {error:.4f}")
- class invode.error_functions.RegularizedError(data: ndarray, base_error: str | ErrorFunction = 'mse', l1_lambda: float = 0.0, l2_lambda: float = 0.0, param_getter: Callable | None = None)[source]
Bases: ErrorFunction
Error function with L1, L2, or elastic net regularization.
Combines a base error function with parameter regularization: Total Error = Base Error + λ₁ * L1_penalty + λ₂ * L2_penalty
This is useful for preventing overfitting and promoting sparse solutions.
- Parameters:
data (np.ndarray) – Reference/observed data.
base_error (str or ErrorFunction) – Base error function (‘mse’, ‘mae’, ‘rmse’) or custom ErrorFunction instance.
l1_lambda (float, optional) – L1 regularization strength (promotes sparsity). Default is 0.0.
l2_lambda (float, optional) – L2 regularization strength (promotes smoothness). Default is 0.0.
param_getter (callable, optional) – Function to extract parameters for regularization. If None, regularization is not applied (requires external parameter passing).
Examples
>>> data = np.array([1.0, 2.0, 3.0, 4.0])
>>> reg_func = RegularizedError(data, 'mse', l1_lambda=0.01, l2_lambda=0.1)
>>>
>>> # Usage in optimization (parameters passed externally)
>>> def error_with_params(y_pred, params):
...     base_error = reg_func(y_pred)
...     l1_penalty = np.sum(np.abs(list(params.values())))
...     l2_penalty = np.sum([p**2 for p in params.values()])
...     return base_error + 0.01 * l1_penalty + 0.1 * l2_penalty
- class invode.error_functions.WeightedError(data: ndarray, weights: ndarray, base_error: str = 'mse')[source]
Bases: ErrorFunction
Weighted error function for handling different importance of data points.
Applies weights to individual data points, allowing some measurements to contribute more to the total error than others.
- Parameters:
data (np.ndarray) – Reference/observed data.
weights (np.ndarray) – Weights for each data point. Higher weights = more importance.
base_error (str, optional) – Base error type (‘mse’, ‘mae’). Default is ‘mse’.
Examples
>>> data = np.array([1.0, 2.0, 3.0, 4.0])
>>> weights = np.array([1.0, 2.0, 1.0, 0.5])  # Different importance
>>> weighted_func = WeightedError(data, weights, 'mse')
>>> prediction = np.array([1.1, 2.2, 2.8, 4.1])
>>> error = weighted_func(prediction)
>>> print(f"Weighted MSE: {error:.4f}")
- invode.error_functions.chisquared(data: ndarray, sigma: ndarray | float, normalize: bool = False) → ChiSquaredMSE [source]
Create a Chi-squared error function. This helper will be deprecated in future versions.
- Parameters:
data (np.ndarray) – Reference data.
sigma (np.ndarray or float) – Standard deviation for each data point.
normalize (bool, optional) – Whether to normalize by number of points.
- Returns:
Configured Chi-squared error function.
- Return type: ChiSquaredMSE
- invode.error_functions.huber(data: ndarray, delta: float = 1.0) → HuberLoss [source]
Create a Huber loss error function. This helper will be deprecated in future versions.
- invode.error_functions.mae(data: ndarray) → MAE [source]
Create an MAE error function. This helper will be deprecated in future versions.
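For reference, the metric this factory configures reduces to a one-line NumPy expression. The function `mae_metric` below is a hypothetical standalone sketch of the math, not invode's code:

```python
import numpy as np

def mae_metric(y_pred, data):
    """MAE = (1/n) * sum(|y_pred - data|)"""
    return float(np.mean(np.abs(np.asarray(y_pred) - np.asarray(data))))

# Mirrors the MAE class example above: absolute residuals
# 0.1, 0.2, 0.2, 0.1 average to 0.15.
data = np.array([1.0, 2.0, 3.0, 4.0])
prediction = np.array([1.1, 2.2, 2.8, 4.1])
```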
- invode.error_functions.mse(data: ndarray) → MSE [source]
Create an MSE error function. This helper will be deprecated in future versions.
- Parameters:
data (np.ndarray) – Reference data for comparison.
- Returns:
Configured MSE error function.
- Return type: MSE
Examples
>>> import numpy as np
>>> data = np.array([1.0, 2.0, 3.0])
>>> error_func = mse(data)
>>> prediction = np.array([1.1, 2.1, 2.9])
>>> error = error_func(prediction)