BNMPy.result_evaluation

The result_evaluation module provides tools for evaluating optimization results.

class BNMPy.result_evaluation.ResultEvaluator(optimizer_result, parameter_optimizer)

Bases: object

Evaluate optimization results by comparing simulation output with experimental data.

This provides tools to assess the quality of optimized models by:

  1. Simulating the optimized model on experimental conditions

  2. Comparing simulation results with experimental measurements

  3. Calculating correlation and other statistical metrics

  4. Generating visualization plots
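These steps can also be driven manually; a minimal sketch using only the methods documented below (file names are placeholders):

import BNMPy
from BNMPy.result_evaluation import ResultEvaluator

# Set up and run an optimization (see Basic Usage below)
pbn = BNMPy.load_pbn_from_file("network.txt")
optimizer = BNMPy.ParameterOptimizer(pbn, "experiments.csv")
result = optimizer.optimize(method='differential_evolution')

# Wrap the result and the optimizer in an evaluator
evaluator = ResultEvaluator(result, optimizer)

predictions = evaluator.simulate_optimized_model()   # step 1
metrics = evaluator.calculate_evaluation_metrics()   # steps 2-3
evaluator.plot_prediction_vs_experimental()          # step 4
evaluator.generate_evaluation_report("evaluation_report.txt")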

Methods

calculate_evaluation_metrics()

Calculate evaluation metrics comparing simulation results with experimental data.

export_results_to_csv(save_path)

Export detailed results to CSV for further analysis.

generate_evaluation_report([save_path])

Generate a comprehensive evaluation report.

plot_prediction_vs_experimental([save_path, ...])

Create a scatter plot comparing predicted vs experimental values.

plot_residuals([save_path, ...])

Create residual plots to assess model fit quality.

simulate_optimized_model()

Simulate the optimized model on all experimental conditions.

__init__(optimizer_result, parameter_optimizer)

Initialize the result evaluator.

simulate_optimized_model() → Dict

Simulate the optimized model on all experimental conditions.

calculate_evaluation_metrics() → Dict

Calculate evaluation metrics comparing simulation results with experimental data.

plot_prediction_vs_experimental(save_path: str | None = None, show_confidence_interval: bool = False, show_experiment_ids: bool = False, figsize: Tuple[int, int] = (8, 6)) → Figure

Create a scatter plot comparing predicted vs experimental values.

plot_residuals(save_path: str | None = None, show_experiment_ids: bool = False, figsize: Tuple[int, int] = (9, 4)) → Figure

Create residual plots to assess model fit quality.

generate_evaluation_report(save_path: str | None = None) → str

Generate a comprehensive evaluation report.

export_results_to_csv(save_path: str)

Export detailed results to CSV for further analysis.
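For instance, the plotting and export methods can be called individually with the keyword arguments listed above (file names are arbitrary):

# 'evaluator' as constructed in the sketch near the top of this page
fig = evaluator.plot_prediction_vs_experimental(
    save_path="prediction_vs_experimental.png",
    show_confidence_interval=True,
    show_experiment_ids=True,
    figsize=(8, 6),
)
fig_res = evaluator.plot_residuals(save_path="residuals.png", figsize=(9, 4))
evaluator.export_results_to_csv("detailed_results.csv")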


Overview

This module provides functions to evaluate and visualize optimization results, including prediction quality, residual analysis, and model performance metrics.

Functions

evaluate_optimization_result

BNMPy.result_evaluation.evaluate_optimization_result(optimizer_result, parameter_optimizer, output_dir: str = '.', plot_residuals: bool = True, save: bool = True, detailed: bool = False, figsize: Tuple[int, int] = (8, 6), show_confidence_interval: bool = False) → ResultEvaluator

Convenience function that performs a complete evaluation of optimization results, generating plots and reports.

evaluate_pbn

BNMPy.result_evaluation.evaluate_pbn(pbn, experiments, output_dir: str = '.', generate_plots: bool = True, generate_report: bool = True, config: dict | None = None)

Evaluate a PBN directly against experimental data, supplied as a list of experiments or a CSV file.

Basic Usage

Evaluating Optimization Results

import BNMPy

# Run optimization
optimizer = BNMPy.ParameterOptimizer(pbn, "experiments.csv")
result = optimizer.optimize(method='differential_evolution')

# Evaluate results with plots and report
evaluator = BNMPy.evaluate_optimization_result(
    result,
    optimizer,
    output_dir="evaluation_results",
    plot_residuals=True,
    save=True,
    detailed=True,
    figsize=(8, 6)
)

Evaluating a PBN

import BNMPy

# Evaluate an existing PBN
pbn = BNMPy.load_pbn_from_file("network.txt")
exp_data = BNMPy.ExperimentData("experiments.csv")

results = BNMPy.evaluate_pbn(
    pbn,
    exp_data,
    output_dir="pbn_evaluation",
    config={'steady_state': {'method': 'monte_carlo'}}
)

print(f"MSE: {results['mse']:.4f}")
print(f"Correlation: {results['correlation']:.3f}")

Generated Plots

The evaluation functions generate several plots to assess model quality:

1. Prediction vs Experimental Plot

prediction_vs_experimental.png

Scatter plot comparing predicted vs experimental values:

  • X-axis: Experimental values from CSV file

  • Y-axis: Predicted values from the model

  • Perfect prediction line: Red dashed line (y=x)

  • Regression line: Green line showing linear relationship

  • Confidence interval: Light green shaded area (95% confidence)

  • Statistics: Correlation coefficient (r), p-value, and MSE (see the sketch below)
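The annotated statistics can be reproduced independently of the plot; a sketch using numpy and scipy with placeholder arrays:

import numpy as np
from scipy import stats

# Placeholder data: measured values and model predictions, in the same order
experimental = np.array([0.10, 0.45, 0.80, 0.30])
predicted = np.array([0.12, 0.40, 0.75, 0.35])

r, p_value = stats.pearsonr(experimental, predicted)  # correlation r and p-value
mse = np.mean((predicted - experimental) ** 2)        # mean squared error
print(f"r={r:.3f}, p={p_value:.2e}, MSE={mse:.4f}")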

2. Residuals Plot

residuals.png

Shows distribution of prediction errors:

  • Left panel: Residuals vs Predicted values

    • Residuals = Predicted - Experimental (see the sketch after this list)

    • Horizontal red line at y=0

  • Right panel: Histogram of residuals

    • Distribution of prediction errors

    • Shows mean and standard deviation
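As a rough matplotlib sketch of such a two-panel figure (placeholder data; not the module's internal plotting code):

import matplotlib.pyplot as plt
import numpy as np

experimental = np.array([0.10, 0.45, 0.80, 0.30])
predicted = np.array([0.12, 0.40, 0.75, 0.35])
residuals = predicted - experimental  # Residuals = Predicted - Experimental

fig, (ax_left, ax_right) = plt.subplots(1, 2, figsize=(9, 4))

# Left panel: residuals vs predicted, with a horizontal red line at y=0
ax_left.scatter(predicted, residuals)
ax_left.axhline(0, color='red')
ax_left.set_xlabel('Predicted')
ax_left.set_ylabel('Residual')

# Right panel: histogram of residuals, annotated with mean and std
ax_right.hist(residuals, bins=10)
ax_right.set_xlabel('Residual')
ax_right.set_title(f"mean={residuals.mean():.3f}, std={residuals.std():.3f}")

fig.tight_layout()
fig.savefig("residuals.png")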

3. Optimization History Plot

optimization_history.png

Shows MSE progression during optimization (see the sketch after this list):

  • X-axis: Optimization iterations

  • Y-axis: Mean Squared Error (MSE)

  • Line: MSE values over iterations

  • Stagnation periods: Highlighted if enabled
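Where the per-iteration MSE values are stored is not documented on this page; assuming they are available as a plain list, the plot amounts to:

import matplotlib.pyplot as plt

# Hypothetical per-iteration MSE values from an optimization run
mse_history = [0.50, 0.31, 0.20, 0.15, 0.13, 0.0123]

plt.plot(range(len(mse_history)), mse_history)
plt.xlabel('Iteration')
plt.ylabel('MSE')
plt.savefig("optimization_history.png")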

Output Files

When save=True is passed to evaluate_optimization_result, the following files are written to output_dir:

  • detailed_results.csv: Per-experiment predictions and errors

  • evaluation_report.txt: Summary statistics and model performance

  • prediction_vs_experimental.png: Prediction quality plot

  • residuals.png: Residual analysis (if plot_residuals=True)

Example Output Structure

evaluation_results/
├── detailed_results.csv
├── evaluation_report.txt
├── prediction_vs_experimental.png
└── residuals.png
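detailed_results.csv can then be loaded for further analysis; a sketch with pandas (the column names are not documented here, so inspect them first):

import pandas as pd

df = pd.read_csv("evaluation_results/detailed_results.csv")
print(df.columns.tolist())  # inspect the available columns
print(df.head())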

Evaluation Report

A plain-text file with summary statistics, for example:

Optimization Evaluation Report
==============================

Final MSE: 0.0123
Correlation: 0.89
P-value: 1.2e-15
RMSE: 0.111
MAE: 0.089

Number of experiments: 10
Number of measurements: 40

Optimization converged successfully
Iterations: 245
Function evaluations: 3675
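RMSE and MAE in the report follow their standard definitions; a worked sketch with placeholder arrays:

import numpy as np

experimental = np.array([0.10, 0.45, 0.80, 0.30])
predicted = np.array([0.12, 0.40, 0.75, 0.35])
residuals = predicted - experimental

mse = np.mean(residuals ** 2)      # mean squared error
rmse = np.sqrt(mse)                # root mean squared error
mae = np.mean(np.abs(residuals))   # mean absolute error
print(f"MSE={mse:.4f}, RMSE={rmse:.3f}, MAE={mae:.3f}")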

See Also