BNMPy.parameter_optimizer

The parameter_optimizer module provides PBN parameter optimization based on the optPBN framework.

class BNMPy.parameter_optimizer.OptimizationResult(success: bool, message: str, x: ndarray, fun: float, nfev: int, nit: int, method: str, cij_matrix: ndarray, convergence_info: Dict[str, Any])[source]

Bases: object

Container for optimization results

success: bool
message: str
x: ndarray
fun: float
nfev: int
nit: int
method: str
cij_matrix: ndarray
convergence_info: Dict[str, Any]
__init__(success: bool, message: str, x: ndarray, fun: float, nfev: int, nit: int, method: str, cij_matrix: ndarray, convergence_info: Dict[str, Any]) → None
exception BNMPy.parameter_optimizer.OptimizationError[source]

Bases: Exception

Custom exception for optimization errors
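
Code that drives the optimizer can catch this exception to separate optimization failures from other errors (a minimal sketch; the optimizer setup follows the Basic Usage example below):

from BNMPy.parameter_optimizer import OptimizationError

try:
    result = optimizer.optimize(method='differential_evolution')
except OptimizationError as err:
    print(f"Optimization failed: {err}")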

class BNMPy.parameter_optimizer.ParameterOptimizer(pbn, experiments, config=None, nodes_to_optimize=None, discrete=False, verbose=False)[source]

Bases: object

Parameter optimization for Probabilistic Boolean Networks using experimental data.

Methods

  • get_optimized_pbn(result): Get a new PBN object with the optimized parameters from a result.

  • get_pbn_rules_string(): Format the final PBN into a readable string.

  • optimize([method]): Run parameter optimization using the specified method.

  • plot_optimization_history(result[, ...]): Plot the optimization history (MSE over iterations).

  • test_steady_states([test_experiments, ...]): Test steady state calculation methods for each experiment condition.

__init__(pbn, experiments, config=None, nodes_to_optimize=None, discrete=False, verbose=False)[source]

Initialize the parameter optimizer.

optimize(method: str = 'differential_evolution')[source]

Run parameter optimization using the specified method.

plot_optimization_history(result: OptimizeResult, save_path: str | None = None, show_stagnation: bool = False, log_scale: bool = False, sorted: bool = False)[source]

Plot the optimization history (MSE over iterations).

get_pbn_rules_string() → str[source]

Format the final PBN into a readable string.

get_optimized_pbn(result: OptimizeResult) → Any[source]

Get a new PBN object with the optimized parameters from a result.
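
Together, these two methods let you materialize and inspect the fitted model after a run (a minimal sketch; the optimizer and result variables follow the Basic Usage example below):

# Build the optimized PBN and print its rules in readable form
if result.success:
    optimized_pbn = optimizer.get_optimized_pbn(result)
    print(optimizer.get_pbn_rules_string())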

test_steady_states(test_experiments=None, show_plot=False, convergence_threshold=1.0)[source]

Test steady state calculation methods for each experiment condition. Uses the optimizer’s existing configuration and reports timing and convergence.
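
Because it reuses the optimizer's existing configuration, this check is typically run right after construction to confirm that the chosen steady-state settings converge before optimizing (a minimal sketch using the arguments shown above):

# Verify steady-state convergence and timing for every experimental condition
optimizer.test_steady_states(show_plot=True, convergence_threshold=1.0)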

Overview

The ParameterOptimizer enables parameter estimation for Probabilistic Boolean Networks (PBNs) from experimental data. It supports multiple optimization algorithms (differential evolution and particle swarm optimization) and provides tools for evaluating results, such as optimization-history plots and steady-state convergence checks.

Basic Usage

import BNMPy

# Load or create your PBN
pbn = BNMPy.load_pbn_from_file("network.txt")

# Initialize optimizer
optimizer = BNMPy.ParameterOptimizer(
    pbn,
    "experiments.csv",
    nodes_to_optimize=['Cas3'],
    verbose=True
)

# Run optimization
result = optimizer.optimize(method='differential_evolution')

# Get optimized PBN
if result.success:
    optimized_pbn = optimizer.get_optimized_pbn(result)

Configuration

Optimization Methods

  • Differential Evolution (the default): global optimization using evolutionary strategies

  • Particle Swarm Optimization: swarm-based global optimization; the c1, c2, and w options control the cognitive, social, and inertia weights

config = {
    # Global settings
    'seed': 9,
    'success_threshold': 0.005,
    'max_try': 3,

    # Differential Evolution parameters
    'de_params': {
        'strategy': 'best1bin',
        'maxiter': 500,
        'popsize': 15,
        'tol': 0.01,
        'mutation': (0.5, 1),
        'recombination': 0.7,
        'init': 'sobol',
        'workers': -1,
        'early_stopping': True,
    },

    # Particle Swarm Optimization parameters
    'pso_params': {
        'n_particles': 30,
        'iters': 100,
        'options': {'c1': 0.5, 'c2': 0.3, 'w': 0.9},
        'ftol': 1e-6,
        'ftol_iter': 15,
    },

    # Steady state calculation
    'steady_state': {
        'method': 'monte_carlo',
        'monte_carlo_params': {
            'n_runs': 10,
            'n_steps': 1000
        }
    }
}

optimizer = BNMPy.ParameterOptimizer(pbn, "experiments.csv", config=config)
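
Once configured, the algorithm is selected via the method argument of optimize(). Differential evolution is the default; the particle swarm method string shown below ('particle_swarm') is an assumption and should be checked against the BNMPy source:

# Differential evolution (documented default)
result_de = optimizer.optimize(method='differential_evolution')

# Particle swarm (method name assumed; verify against the library)
result_pso = optimizer.optimize(method='particle_swarm')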

Discrete Mode

For Boolean Network optimization:

config = {
    'discrete_params': {
        'threshold': 0.6
    }
}

# Discrete mode is set at construction time; optimize() only takes the method
optimizer = BNMPy.ParameterOptimizer(
    pbn,
    "experiments.csv",
    config=config,
    discrete=True
)

result = optimizer.optimize(method='differential_evolution')

Results

The optimization returns an OptimizationResult object with the following fields:

  • success: Boolean indicating successful termination

  • message: Status message

  • x: Optimized parameters

  • fun: Final objective value (MSE)

  • history: MSE values at each iteration

  • nfev: Number of function evaluations

  • nit: Number of iterations
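
A typical post-run check reads these fields directly (a minimal sketch; result comes from the Basic Usage example above):

if result.success:
    print(f"Final MSE: {result.fun:.4f} after {result.nit} iterations ({result.nfev} evaluations)")
    print("Optimized parameters:", result.x)
    print("MSE per iteration:", result.history)  # per-iteration MSE as listed above
else:
    print("Optimization did not converge:", result.message)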

Visualization

# Plot optimization history
optimizer.plot_optimization_history(
    result,
    save_path='optimization_history.png',
    show_stagnation=True,
    log_scale=True
)

References

Based on the optPBN framework:

Trairatphisan, P., et al. (2014). “optPBN: An Optimisation Toolbox for Probabilistic Boolean Networks.” PLOS ONE 9(7): e98001.