invode.sensitivity
- class invode.sensitivity.ODESensitivity(ode_func, error_func)
Bases: object
A class for performing sensitivity analysis on ODE model parameters.
This class provides methods to analyze how sensitive the model output is to changes in different parameters, using data from optimization history or direct parameter sampling.
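A minimal construction sketch (the toy ode_func and error_func below are illustrative placeholders; the exact callback signatures invode expects may differ):
>>> import numpy as np
>>> # Hypothetical exponential-decay model: dy/dt = -k * y
>>> def ode_func(t, y, params):
...     return -params['k'] * y
>>> # Hypothetical sum-of-squares error against observed data
>>> def error_func(simulated, observed):
...     return float(np.sum((np.asarray(simulated) - np.asarray(observed)) ** 2))
>>> sensitivity = ODESensitivity(ode_func, error_func)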
- analyze_parameter_sensitivity(candidates_df: DataFrame, method: str = 'correlation', normalize: bool = True, min_samples: int = 4, exclude_columns: List[str] | None = None) → Dict[str, float]
Analyze parameter sensitivity using optimization candidates data.
This method examines how changes in each parameter correlate with changes in the error function, providing insights into which parameters have the strongest influence on model performance.
- Parameters:
candidates_df (pd.DataFrame) – DataFrame containing optimization candidates with parameter values and errors. Expected to have columns: 'iteration', 'rank', 'error', and parameter columns. This is typically obtained from ODEOptimizer.get_top_candidates_table().
method (str, optional) –
Method for calculating sensitivity. Options:
'correlation': Pearson correlation between parameter values and errors
'variance': Normalized variance of error with respect to parameter changes
'gradient': Approximate gradient of error with respect to parameters
'mutual_info': Mutual information between parameters and error
'rank_correlation': Spearman rank correlation (robust to outliers)
Default is 'correlation'.
normalize (bool, optional) – If True, normalize sensitivity values to [0, 1] range for comparison. Default is True.
min_samples (int, optional) – Minimum number of samples required for reliable sensitivity analysis. If fewer samples are available, a warning is issued. Default is 4.
exclude_columns (List[str], optional) – List of column names to exclude from sensitivity analysis. By default, excludes ['iteration', 'rank', 'error'].
- Returns:
Dictionary mapping parameter names to their sensitivity values. Higher absolute values indicate greater sensitivity. For correlation methods, negative values indicate inverse relationships.
- Return type:
Dict[str, float]
- Raises:
ValueError – If candidates_df is empty, missing required columns, or contains insufficient data.
TypeError – If candidates_df is not a pandas DataFrame.
Notes
Sensitivity Interpretation:
High sensitivity: Small parameter changes cause large error changes
Low sensitivity: Parameter changes have minimal impact on error
Negative correlation: Increasing parameter decreases error
Positive correlation: Increasing parameter increases error
Method Details:
Correlation: Measures linear relationship between parameter and error
Rank Correlation: Spearman correlation, robust to non-linear monotonic relationships
Variance: Quantifies error variability attributable to parameter
Gradient: Estimates local derivative of error w.r.t. parameter
Mutual Info: Captures non-linear parameter-error relationships
The analysis uses all candidates from all iterations, providing a global view of parameter sensitivity across the optimization landscape.
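For reference, the 'correlation' and 'rank_correlation' values can be reproduced directly from a candidates table df (as in the examples below) using scipy; this sketch only illustrates what the metrics measure and is not invode's internal implementation (with normalize=True the returned values are additionally rescaled):
>>> from scipy.stats import pearsonr, spearmanr
>>> param_cols = [c for c in df.columns
...               if c not in ('iteration', 'rank', 'error')]
>>> # Raw Pearson and Spearman coefficients, parameter vs. error
>>> pearson_sens = {p: pearsonr(df[p], df['error'])[0] for p in param_cols}
>>> spearman_sens = {p: spearmanr(df[p], df['error'])[0] for p in param_cols}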
Examples
Basic sensitivity analysis from optimizer results:
>>> # After running optimization
>>> optimizer = ODEOptimizer(...)
>>> optimizer.fit()
>>>
>>> # Get candidates table and analyze sensitivity
>>> df = optimizer.get_top_candidates_table()
>>> sensitivity = ODESensitivity(optimizer.ode_func, optimizer.error_func)
>>> sensitivities = sensitivity.analyze_parameter_sensitivity(df)
>>>
>>> # Display results sorted by sensitivity magnitude
>>> for param, sens in sorted(sensitivities.items(),
...                           key=lambda x: abs(x[1]), reverse=True):
...     print(f"{param}: {sens:.4f}")
alpha: -0.8234   # Highly sensitive, negative correlation
beta: 0.6891     # Highly sensitive, positive correlation
gamma: -0.3456   # Moderately sensitive
delta: 0.1234    # Low sensitivity
Compare different sensitivity methods:
>>> methods = ['correlation', 'rank_correlation', 'variance', 'mutual_info']
>>> results = {}
>>> for method in methods:
...     sens = sensitivity.analyze_parameter_sensitivity(df, method=method)
...     results[method] = sens
>>>
>>> # Create comparison DataFrame
>>> comparison_df = pd.DataFrame(results)
>>> print(comparison_df.round(4))
       correlation  rank_correlation  variance  mutual_info
alpha       -0.823            -0.801     0.745        0.234
beta         0.689             0.712     0.523        0.189
gamma       -0.346            -0.298     0.187        0.098
delta        0.123             0.145     0.076        0.043
Analyze sensitivity for specific iterations:
>>> # Focus on later iterations (better convergence)
>>> late_iterations = df[df['iteration'] >= 5]
>>> late_sensitivities = sensitivity.analyze_parameter_sensitivity(late_iterations)
>>>
>>> # Compare early vs late sensitivity
>>> early_iterations = df[df['iteration'] <= 3]
>>> early_sensitivities = sensitivity.analyze_parameter_sensitivity(early_iterations)
Filter by rank to focus on best candidates:
>>> # Only analyze top candidates from each iteration
>>> top_candidates = df[df['rank'] == 1]
>>> top_sensitivities = sensitivity.analyze_parameter_sensitivity(top_candidates)
Custom column exclusions:
>>> # Exclude additional metadata columns
>>> sensitivities = sensitivity.analyze_parameter_sensitivity(
...     df, exclude_columns=['iteration', 'rank', 'error', 'timestamp']
... )
- analyze_sensitivity_by_iteration(candidates_df: DataFrame, method: str = 'correlation', normalize: bool = True) → DataFrame
Analyze how parameter sensitivity changes across optimization iterations.
This method provides insights into how the importance of different parameters evolves as the optimization progresses, which can reveal whether certain parameters become more or less critical in later stages.
- Parameters:
candidates_df (pd.DataFrame) – DataFrame containing optimization candidates from get_top_candidates_table().
method (str, optional) – Sensitivity analysis method. Default is 'correlation'.
normalize (bool, optional) – Whether to normalize sensitivity values. Default is True.
- Returns:
DataFrame with iterations as rows and parameters as columns, containing sensitivity values for each iteration.
- Return type:
pd.DataFrame
Examples
>>> df = optimizer.get_top_candidates_table()
>>> sensitivity = ODESensitivity(optimizer.ode_func, optimizer.error_func)
>>> iteration_sens = sensitivity.analyze_sensitivity_by_iteration(df)
>>> print(iteration_sens)
>>> # Plot evolution of parameter sensitivity
>>> import matplotlib.pyplot as plt
>>> plt.figure(figsize=(12, 6))
>>> for param in iteration_sens.columns:
...     plt.plot(iteration_sens.index, iteration_sens[param],
...              marker='o', label=param)
>>> plt.xlabel('Iteration')
>>> plt.ylabel('Parameter Sensitivity')
>>> plt.title('Evolution of Parameter Sensitivity')
>>> plt.legend()
>>> plt.grid(True, alpha=0.3)
>>> plt.show()
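Conceptually, the per-iteration analysis amounts to running the global analysis on each iteration's subset of candidates. A rough, illustrative reconstruction (the real method may differ in details such as normalization; min_samples is lowered because each iteration contributes only a few rows):
>>> import pandas as pd
>>> per_iter = {
...     it: sensitivity.analyze_parameter_sensitivity(group, min_samples=2)
...     for it, group in df.groupby('iteration')
... }
>>> approx_iteration_sens = pd.DataFrame(per_iter).T  # iterations as rows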
- analyze_sensitivity_by_rank(candidates_df: DataFrame, method: str = 'correlation', normalize: bool = True) → DataFrame
Analyze parameter sensitivity for different candidate ranks.
This method examines whether parameter sensitivity differs between the best candidates (rank 1) versus lower-ranked candidates, which can provide insights into parameter importance in high-performance regions.
- Parameters:
candidates_df (pd.DataFrame) – DataFrame containing optimization candidates from get_top_candidates_table().
method (str, optional) – Sensitivity analysis method. Default is 'correlation'.
normalize (bool, optional) – Whether to normalize sensitivity values. Default is True.
- Returns:
DataFrame with ranks as rows and parameters as columns, containing sensitivity values for each rank.
- Return type:
pd.DataFrame
Examples
>>> df = optimizer.get_top_candidates_table()
>>> sensitivity = ODESensitivity(optimizer.ode_func, optimizer.error_func)
>>> rank_sens = sensitivity.analyze_sensitivity_by_rank(df)
>>> print(rank_sens)
>>> # Compare sensitivity between best and worst candidates
>>> print("Best candidates (rank 1):")
>>> print(rank_sens.loc[1])
>>> print("\nWorst candidates (rank 3):")
>>> print(rank_sens.loc[3])
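A heatmap makes rank-wise differences easy to scan; this is an illustrative plotting sketch on top of the returned DataFrame:
>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots(figsize=(8, 4))
>>> im = ax.imshow(rank_sens.values, aspect='auto', cmap='coolwarm')
>>> ax.set_xticks(range(len(rank_sens.columns)))
>>> ax.set_xticklabels(rank_sens.columns)
>>> ax.set_yticks(range(len(rank_sens.index)))
>>> ax.set_yticklabels(rank_sens.index)
>>> ax.set_xlabel('Parameter')
>>> ax.set_ylabel('Rank')
>>> fig.colorbar(im, ax=ax, label='Sensitivity')
>>> plt.show()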
- create_sensitivity_summary(candidates_df: DataFrame, methods: List[str] | None = None) → DataFrame
Create a comprehensive summary of parameter sensitivities using multiple methods.
- Parameters:
candidates_df (pd.DataFrame) – DataFrame containing optimization candidates from get_top_candidates_table().
methods (List[str], optional) – List of sensitivity methods to compare. If None, uses all available methods.
- Returns:
DataFrame with parameters as rows and different sensitivity methods as columns.
- Return type:
pd.DataFrame
Examples
>>> df = optimizer.get_top_candidates_table()
>>> sensitivity = ODESensitivity(optimizer.ode_func, optimizer.error_func)
>>> summary = sensitivity.create_sensitivity_summary(df)
>>> print(summary.round(4))
>>> # Identify most consistently sensitive parameters
>>> summary['mean_abs_sensitivity'] = summary.abs().mean(axis=1)
>>> print(summary.sort_values('mean_abs_sensitivity', ascending=False))
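To see how consistently the methods order the parameters, the summary columns can be rank-correlated against each other (an illustrative follow-up; assumes the mean_abs_sensitivity column added above):
>>> # Spearman agreement between methods, on absolute sensitivities
>>> methods_only = summary.drop(columns=['mean_abs_sensitivity'])
>>> print(methods_only.abs().corr(method='spearman').round(2))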
- plot_sensitivity_analysis(sensitivities: Dict[str, float], title: str = 'Parameter Sensitivity Analysis', figsize: Tuple[int, int] = (10, 6), save_path: str | None = None) → Figure
Create a visualization of parameter sensitivities.
- Parameters:
sensitivities (Dict[str, float]) – Dictionary of parameter sensitivities from analyze_parameter_sensitivity.
title (str, optional) – Plot title. Default is 'Parameter Sensitivity Analysis'.
figsize (Tuple[int, int], optional) – Figure size as (width, height). Default is (10, 6).
save_path (str, optional) – If provided, save the plot to this path.
- Returns:
The matplotlib figure object.
- Return type:
plt.Figure
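Examples
A usage sketch, continuing from the earlier examples (the save path is illustrative):
>>> sensitivities = sensitivity.analyze_parameter_sensitivity(df)
>>> fig = sensitivity.plot_sensitivity_analysis(
...     sensitivities,
...     title='Sensitivity (correlation method)',
...     save_path='sensitivity.png',
... )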