gemseo.algos.post_optimal_analysis module#

Post-optimal analysis.

class PostOptimalAnalysis(opt_problem, ineq_tol=None)[source]#

Bases: object

Post-optimal analysis of a parameterized optimization problem.

Consider the parameterized optimization problem below, whose objective and constraint functions depend on both the optimization variable \(x\) and a parameter \(p\).

\[\begin{split}\begin{aligned} & \text{Minimize} & & f(x,p) \\ & \text{with respect to} & & x \\ & \text{subject to} & & \left\{\begin{aligned} & g(x,p)\le0, \\ & h(x,p)=0, \\ & \ell\le x\le u. \end{aligned}\right. \end{aligned}\end{split}\]

Let \(x^\ast(p)\) denote a solution of the problem, which depends on \(p\). The post-optimal analysis consists of computing the following total derivative:

\[\newcommand{\total}{\mathrm{d}} \frac{\total f(x^\ast(p),p)}{\total p}(p) =\frac{\partial f}{\partial p}(x^\ast(p),p) +\lambda_g^\top\frac{\partial g}{\partial p}(x^\ast(p),p) +\lambda_h^\top\frac{\partial h}{\partial p}(x^\ast(p),p),\]

where \(\lambda_g\) and \(\lambda_h\) are the Lagrange multipliers at \(x^\ast(p)\). N.B.: this equality relies on the assumption that

\[\newcommand{\total}{\mathrm{d}} \lambda_g^\top\frac{\total g(x^\ast(p),p)}{\total p}(p)=0 \text{ and } \lambda_h^\top\frac{\total h(x^\ast(p),p)}{\total p}(p)=0.\]
Parameters:
  • opt_problem (OptimizationProblem) -- The solved optimization problem to be analyzed.

  • ineq_tol (float | None) -- The tolerance used to determine the active inequality constraints. If None, the tolerance is fetched from the optimization problem.

Raises:

ValueError -- If the optimization problem has not been solved.
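
The total-derivative formula can be verified on a small analytic problem. Below is a minimal, self-contained sketch (pure NumPy, independent of GEMSEO) for \(\min_x (x-p)^2+x\) subject to \(g(x,p)=p-x\le0\): the constraint is active at the optimum, so \(x^\ast(p)=p\) with multiplier \(\lambda_g=1\), and the formula predicts \(\mathrm{d}f(x^\ast(p),p)/\mathrm{d}p=\partial f/\partial p+\lambda_g\,\partial g/\partial p=0+1=1\).

```python
import numpy as np

def f(x, p):  # objective of the analytic example
    return (x - p) ** 2 + x

def x_opt(p):  # optimal solution: the constraint x >= p is active
    return p

p = 2.0
lambda_g = 1.0  # Lagrange multiplier of g(x, p) = p - x at the optimum

# Partial derivatives w.r.t. p, evaluated at (x*(p), p).
df_dp = -2.0 * (x_opt(p) - p)  # = 0
dg_dp = 1.0

# Post-optimal formula: df(x*(p), p)/dp = df/dp + lambda_g * dg/dp.
total_derivative = df_dp + lambda_g * dg_dp  # = 1

# Finite-difference check on the optimal-value function p -> f(x*(p), p) = p.
h = 1e-6
fd = (f(x_opt(p + h), p + h) - f(x_opt(p - h), p - h)) / (2 * h)
assert abs(total_derivative - fd) < 1e-6
```

Note that the validity assumption holds in this example: \(g(x^\ast(p),p)=0\) for all \(p\), hence \(\lambda_g\,\mathrm{d}g(x^\ast(p),p)/\mathrm{d}p=0\).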

check_validity(total_jac, partial_jac, parameters, threshold)[source]#

Check whether the validity assumption of the post-optimal analysis holds.

Parameters:
  • total_jac (dict[str, dict[str, ndarray]]) -- The total derivatives of the constraints at the post-optimal solution.

  • partial_jac (dict[str, dict[str, ndarray]]) -- The partial derivatives of the constraints.

  • parameters (list[str]) -- The names of the optimization problem parameters.

  • threshold (float) -- The tolerance on the validity assumption.
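
For illustration, here is a hedged sketch of such a check, assuming the documented dict layout (outer key: constraint name, inner key: parameter name); the relative criterion and the `multipliers` argument are illustrative choices, not necessarily GEMSEO's exact rule:

```python
import numpy as np

def validity_holds(multipliers, total_jac, partial_jac, parameters, threshold):
    # Illustrative criterion: for each active constraint and each parameter,
    # lambda^T dg(x*(p), p)/dp (total) must be negligible compared with
    # lambda^T dg/dp (partial), within the given relative threshold.
    for name, lam in multipliers.items():
        for param in parameters:
            total = lam @ total_jac[name][param]
            partial = lam @ partial_jac[name][param]
            scale = max(np.linalg.norm(partial), 1.0)
            if np.linalg.norm(total) > threshold * scale:
                return False
    return True

# In the analytic example above, g(x*(p), p) = 0 identically, so the
# total derivative vanishes and the assumption holds.
total_jac = {"g": {"p": np.zeros((1, 1))}}
partial_jac = {"g": {"p": np.ones((1, 1))}}
assert validity_holds({"g": np.array([1.0])}, total_jac, partial_jac, ["p"], 1e-8)
```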

compute_lagrangian_jac(functions_jac, input_names)[source]#

Compute the Jacobian of the Lagrangian.

Parameters:
  • functions_jac (dict[str, dict[str, ndarray]]) -- The Jacobians of the optimization functions w.r.t. the differentiation inputs.

  • input_names (Iterable[str]) -- The names of the inputs w.r.t. which to differentiate.

Returns:

The Jacobian of the Lagrangian.

Return type:

dict[str, dict[str, ndarray]]
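
A minimal sketch of the assembly this computation amounts to, following the formula above (the function names "f" and "g", the parameter name "p" and the multiplier values are illustrative):

```python
import numpy as np

def lagrangian_jac(functions_jac, input_names, objective_name, multipliers):
    # dL/dp = df/dp + sum over constraints of lambda^T dc/dp, for each input p.
    jac = {"lagrangian": {}}
    for param in input_names:
        total = functions_jac[objective_name][param].copy()
        for cstr_name, lam in multipliers.items():
            total = total + lam @ functions_jac[cstr_name][param]
        jac["lagrangian"][param] = total
    return jac

functions_jac = {
    "f": {"p": np.array([[0.0]])},  # df/dp at the optimum of the analytic example
    "g": {"p": np.array([[1.0]])},  # dg/dp
}
jac = lagrangian_jac(functions_jac, ["p"], "f", {"g": np.array([1.0])})
assert np.allclose(jac["lagrangian"]["p"], 1.0)
```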

execute(output_names, input_names, functions_jac)[source]#

Perform the post-optimal analysis.

Parameters:
  • output_names (Iterable[str]) -- The names of the outputs to differentiate.

  • input_names (Iterable[str]) -- The names of the inputs w.r.t. which to differentiate.

  • functions_jac (dict[str, dict[str, ndarray]]) -- The Jacobians of the optimization functions w.r.t. the differentiation inputs.

Returns:

The Jacobian of the Lagrangian.

Return type:

dict[str, dict[str, ndarray]]
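
A hedged usage sketch: `problem` is assumed to be an OptimizationProblem that has already been solved and whose functions depend on a parameter; the names "f", "g" and "p" and the Jacobian values (taken from the analytic example) are illustrative:

```python
import numpy as np

from gemseo.algos.post_optimal_analysis import PostOptimalAnalysis

# `problem` is assumed to be a solved, parameterized OptimizationProblem
# with objective "f" and inequality constraint "g" depending on "p".
analysis = PostOptimalAnalysis(problem, ineq_tol=1e-6)

# Jacobians of the optimization functions w.r.t. the parameter, at the optimum.
functions_jac = {
    "f": {"p": np.array([[0.0]])},
    "g": {"p": np.array([[1.0]])},
}

jac = analysis.execute(
    output_names=["f"], input_names=["p"], functions_jac=functions_jac
)
```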

MULT_DOT_CONSTR_JAC = 'mult_dot_constr_jac'#