# post_optimal_analysis module¶

Post-optimal analysis.

class gemseo.algos.post_optimal_analysis.PostOptimalAnalysis(opt_problem, ineq_tol=None)[source]

Bases: object

Post-optimal analysis of a parameterized optimization problem.

Consider the parameterized optimization problem below, whose objective and constraint functions depend on both the optimization variable $$x$$ and a parameter $$p$$.

$\begin{aligned} & \text{Minimize} & & f(x,p) \\ & \text{with respect to} & & x \\ & \text{subject to} & & \left\{\begin{aligned} & g(x,p)\le0, \\ & h(x,p)=0, \\ & \ell\le x\le u. \end{aligned}\right. \end{aligned}$

Let $$x^\ast(p)$$ denote a solution of the problem, which depends on $$p$$. The post-optimal analysis consists in computing the following total derivative:

$\newcommand{\total}{\mathrm{d}} \frac{\total f(x^\ast(p),p)}{\total p}(p) =\frac{\partial f}{\partial p}(x^\ast(p),p) +\lambda_g^\top\frac{\partial g}{\partial p}(x^\ast(p),p) +\lambda_h^\top\frac{\partial h}{\partial p}(x^\ast(p),p),$

where $$\lambda_g$$ and $$\lambda_h$$ are the Lagrange multipliers at the solution $$x^\ast(p)$$. Note that the equality above relies on the assumption that

$\newcommand{\total}{\mathrm{d}} \lambda_g^\top\frac{\total g(x^\ast(p),p)}{\total p}(p)=0 \text{ and } \lambda_h^\top\frac{\total h(x^\ast(p),p)}{\total p}(p)=0.$
Parameters
• opt_problem (gemseo.algos.opt_problem.OptimizationProblem) – The solved optimization problem to be analyzed.

• ineq_tol (Optional[float]) –

The tolerance used to determine the active inequality constraints. If None, the value is fetched from the optimization problem.

By default it is set to None.

Return type

None
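As a sanity check on the post-optimal formula above, here is a minimal numeric sketch on a toy parameterized problem. The problem, its multipliers and the names below are illustrative assumptions, not gemseo API calls:

```python
# Toy parameterized problem (an assumption for illustration only):
#   minimize f(x, p) = x^2   subject to   g(x, p) = p - x <= 0.
# For p > 0 the constraint is active at the optimum, so x*(p) = p.
p = 0.7
x_star = p                  # optimum for p > 0
lambda_g = 2.0 * x_star     # from stationarity: 2*x - lambda_g = 0

# Post-optimal formula: df*/dp = df/dp + lambda_g * dg/dp, evaluated at (x*(p), p)
df_dp = 0.0                 # f has no explicit dependence on p
dg_dp = 1.0                 # g(x, p) = p - x, so dg/dp = 1
total = df_dp + lambda_g * dg_dp

# Finite-difference check of d f(x*(p), p) / dp, using f(x*(p), p) = p^2
eps = 1e-6
fd = ((p + eps) ** 2 - (p - eps) ** 2) / (2 * eps)
print(total, fd)  # both close to 2*p = 1.4
```

Both values agree because the partial derivative of the objective plus the multiplier-weighted partial derivative of the active constraint reproduces the total derivative of the optimal objective value.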

check_validity(total_jac, partial_jac, parameters, threshold)[source]

Check whether the assumption for post-optimal validity holds.

Parameters
• total_jac (dict[str, dict[str, numpy.ndarray]]) – The total derivatives of the post-optimal constraints.

• partial_jac (dict[str, dict[str, numpy.ndarray]]) – The partial derivatives of the constraints.

• parameters (list[str]) – The names of the optimization problem parameters.

• threshold (float) – The tolerance on the validity assumption.
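The validity assumption can be illustrated with plain NumPy dictionaries in the `dict[str, dict[str, ndarray]]` layout used by this module. The constraint name "g", the parameter name "p", the multipliers and the values below are assumptions for illustration:

```python
import numpy as np

# Hypothetical total derivatives of a constraint "g" w.r.t. a parameter "p",
# in the dict[str, dict[str, ndarray]] layout; all names and values are assumptions.
total_jac = {"g": {"p": np.array([[1e-9, -2e-9]])}}  # shape (n_g, n_p)
lambda_g = np.array([0.5])                           # multipliers of the active constraints
threshold = 1e-6

# The assumption lambda_g^T dg/dp = 0 holds when the multiplier-weighted
# total derivative stays below the threshold.
residual = np.abs(lambda_g @ total_jac["g"]["p"])
valid = bool(np.all(residual < threshold))
print(valid)  # True for these near-zero total derivatives
```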

compute_lagrangian_jac(functions_jac, inputs)[source]

Compute the Jacobian of the Lagrangian.

Parameters
• functions_jac (dict[str, dict[str, numpy.ndarray]]) – The Jacobians of the optimization functions w.r.t. the differentiation inputs.

• inputs (Iterable[str]) – The names of the inputs w.r.t. which to differentiate.

Returns

The Jacobian of the Lagrangian.

Return type

dict[str, dict[str, numpy.ndarray]]

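A minimal sketch of what assembling such a Jacobian looks like with plain NumPy, assuming an objective "f", a single active constraint "g" and a parameter "p" (these names, shapes and values are illustrative assumptions, not gemseo output):

```python
import numpy as np

# Hypothetical Jacobians in the dict[str, dict[str, ndarray]] layout of this module.
functions_jac = {
    "f": {"p": np.array([[1.0, 2.0]])},   # df/dp, shape (1, n_p)
    "g": {"p": np.array([[0.5, -1.0]])},  # dg/dp, shape (n_g, n_p)
}
lambda_g = np.array([[2.0]])              # active-constraint multipliers, shape (1, n_g)

# Jacobian of the Lagrangian w.r.t. p: dL/dp = df/dp + lambda_g^T dg/dp
lagr_jac = functions_jac["f"]["p"] + lambda_g @ functions_jac["g"]["p"]
print(lagr_jac)  # [[2. 0.]]
```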
execute(outputs, inputs, functions_jac)[source]

Perform the post-optimal analysis.

Parameters
• outputs (Iterable[str]) – The names of the outputs to differentiate.

• inputs (Iterable[str]) – The names of the inputs w.r.t. which to differentiate.

• functions_jac (dict[str, dict[str, numpy.ndarray]]) – The Jacobians of the optimization functions w.r.t. the differentiation inputs.

Returns

The Jacobian of the Lagrangian.

Return type

dict[str, dict[str, numpy.ndarray]]

MULT_DOT_CONSTR_JAC = 'mult_dot_constr_jac'