Coupled derivatives computation

Reminder on adjoint method for gradient computation

The use of gradient-based methods implies the computation of the total derivatives of the output function \(\bf{f}=(f_0,\ldots,f_N)^T\) with respect to the design vector \(\bf{x}=(x_0,\ldots,x_n)^T\):

\[\begin{split}\frac{d\bf{f}}{d\bf{x}}=\begin{pmatrix} \displaystyle\frac{df_0}{d x_0} &\ldots&\displaystyle\frac{df_0}{dx_n}\\ \vdots&\ddots&\vdots\\ \displaystyle\frac{df_N}{d x_0} &\ldots&\displaystyle\frac{df_N}{dx_n} \end{pmatrix}.\end{split}\]

A new feature of v1.0.0 of GEMSEO is the management of gradients. Compared to v0.1.0, for which only the finite differences and complex step methods were available, Jacobian assembly allows time savings and higher precision: with finite differences, a full MDA has to be solved for each perturbed point \((\bf{x}+h_j\bf{e}_j)\):

\[\frac{d f_i}{d x_j} = \frac{f_i(\bf{x}+h_j\bf{e}_j)-f_i(\bf{x})}{h_j}+\mathcal{O}(h_j).\]

If the size of the design vector is large, computing the sensitivity of the output \(\bf{f}\) with respect to the design vector \(\bf{x}\) this way becomes very expensive, since one MDA has to be solved per design variable.
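To make this cost concrete, here is a minimal sketch of the forward finite-difference formula above, written in plain NumPy and independent of the GEMSEO API (the function name and the quadratic test function are illustrative only):

import numpy as np

def finite_difference_gradient(f, x, h=1e-6):
    """Forward finite-difference gradient of a scalar function f at x.

    Each component costs one extra evaluation of f, i.e. one extra
    MDA solution in a coupled problem.
    """
    f_x = f(x)
    grad = np.zeros_like(x)
    for j in range(x.size):
        e_j = np.zeros_like(x)
        e_j[j] = 1.0  # j-th unit vector
        grad[j] = (f(x + h * e_j) - f_x) / h
    return grad

x = np.array([1.0, 2.0, 3.0])
print(finite_difference_gradient(lambda v: float(v @ v), x))  # exact gradient is 2 * x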

Jacobian assembly is based on discrete adjoint theory:

  • direct method: linear solve for \(\dfrac{d\bf{\mathcal{W}}}{d\bf{x}}\)

    \[\dfrac{d\bf{f}}{d\bf{x}} = -\dfrac{\partial \bf{f}}{\partial \bf{\mathcal{W}}} \cdot \underbrace{\left[ \left(\dfrac{\partial\bf{\mathcal{R}}}{\partial \bf{\mathcal{W}}}\right)^{-1}\cdot \dfrac{\partial \bf{\mathcal{R}}}{\partial \bf{x}}\right]}_{-d\bf{\mathcal{W}}/d\bf{x}} + \dfrac{\partial \bf{f}}{\partial \bf{x}}\]
  • adjoint method: computation of the adjoint vector \(\bf{\lambda}\)

    \[\dfrac{d\bf{f}}{d\bf{x}} = -\underbrace{ \left[ \dfrac{\partial \bf{f}}{\partial \bf{\mathcal{W}}} \cdot \left(\dfrac{\partial\bf{\mathcal{R}}}{\partial \bf{\mathcal{W}}}\right)^{-1} \right]}_{\bf{\lambda}^T} \cdot \dfrac{\partial \bf{\mathcal{R}}}{\partial \bf{x}} + \dfrac{\partial \bf{f}}{\partial \bf{x}} = -\bf{\lambda}^T\cdot \dfrac{\partial \bf{\mathcal{R}}}{\partial \bf{x}} + \dfrac{\partial \bf{f}}{\partial \bf{x}}\]

With the adjoint method, the dependency on the design variable vector \(\bf{x}\) has been removed from the linear solve: the adjoint vector \(\bf{\lambda}\) is computed once and reused for all design variables.

The choice between the direct and adjoint methods depends on how the number \(N\) of outputs compares to the number \(n\) of design variables: if \(N \ll n\), the adjoint method should be used, whereas the direct method should be preferred if \(n\ll N\).
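This trade-off can be checked on a toy problem. The sketch below is a hypothetical example in plain NumPy, not a GEMSEO API: it assembles the partial derivatives of a linear residual \(\bf{\mathcal{R}}(\bf{\mathcal{W}},\bf{x})=A\bf{\mathcal{W}}-B\bf{x}\) and verifies that the direct solve (one linear system per design variable) and the adjoint solve (one linear system per output) yield the same total derivatives:

import numpy as np

rng = np.random.default_rng(0)
n_w, n_x, n_f = 5, 3, 2  # state, design and output sizes

# Partial derivatives of the residual R(W, x) = A @ W - B @ x
dR_dW = rng.standard_normal((n_w, n_w)) + 5.0 * np.eye(n_w)  # dR/dW = A
dR_dx = -rng.standard_normal((n_w, n_x))                     # dR/dx = -B
df_dW = rng.standard_normal((n_f, n_w))
df_dx = rng.standard_normal((n_f, n_x))

# Direct method: solve dR/dW @ (dW/dx) = -dR/dx (n_x right-hand sides)
dW_dx = np.linalg.solve(dR_dW, -dR_dx)
total_direct = df_dW @ dW_dx + df_dx

# Adjoint method: solve (dR/dW)^T @ lambda = (df/dW)^T (n_f right-hand sides)
lam = np.linalg.solve(dR_dW.T, df_dW.T)
total_adjoint = -lam.T @ dR_dx + df_dx

assert np.allclose(total_direct, total_adjoint)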

Both the direct and adjoint methods are implemented in GEMSEO since v1.0.0.

Derivatives computation in GEMSEO: design and classes

In GEMSEO, the JacobianAssembly class computes the derivatives of the MDAs. All MDA classes delegate the coupled derivatives computations to a JacobianAssembly instance. The MDOCouplingStructure class is responsible for the analysis of the dependencies between the MDODiscipline’s inputs and outputs, using a graph.

Many MDA algorithms are implemented in GEMSEO (Gauss-Seidel, Jacobi, Newton variants).
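As a sketch of how these classes fit together, the snippet below instantiates the Sobieski disciplines and couples them in a Gauss-Seidel MDA. The high-level functions create_discipline and create_mda are assumptions here: they exist in recent GEMSEO versions, but the exact API may differ in v1.0.0.

from gemseo.api import create_discipline, create_mda

# Instantiate the four Sobieski disciplines and couple them in an MDA;
# the MDA delegates its coupled derivatives to a JacobianAssembly instance
disciplines = create_discipline(["SobieskiStructure",
                                 "SobieskiAerodynamics",
                                 "SobieskiPropulsion",
                                 "SobieskiMission"])
mda = create_mda("MDAGaussSeidel", disciplines)
mda.execute()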

Jacobian assembly: application to Sobieski’s test-case

In GEMSEO, the Jacobian matrix of a discipline is a dictionary of dictionaries: jac[output_name][input_name] is the matrix of the partial derivatives of the given output with respect to the given input. When wrapping the execution, an MDODiscipline._compute_jacobian() method must be defined (it overrides the generic one defined in the MDODiscipline class): the Jacobian matrix must be stored in MDODiscipline.jac.

def _compute_jacobian(self, inputs=None, outputs=None, mode='auto'):
    """Compute the partial derivatives of all outputs with respect to all inputs."""
    # Retrieve the current input values by name
    data_names = ["y_14", "y_24", "y_34", "x_shared"]
    y_14, y_24, y_34, x_shared = self.get_inputs_by_name(data_names)
    # Fill self.jac with the analytic partial derivatives of the mission
    # discipline, stored as a dictionary of dictionaries
    self.jac = self.sobieski_problem.derive_blackbox_mission(x_shared,
                                                             y_14, y_24,
                                                             y_34)
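The resulting self.jac is indexed first by output name and then by input name, each entry being a 2D matrix. A minimal illustration of this structure, with toy names and values in plain NumPy:

import numpy as np

# Jacobian of a discipline with one output "y" (size 2) and two inputs
# "x" (size 3) and "z" (size 1): jac[output_name][input_name]
jac = {
    "y": {
        "x": np.array([[1.0, 0.0, 2.0],
                       [0.0, 3.0, 0.0]]),  # dy/dx, shape (2, 3)
        "z": np.array([[4.0],
                       [5.0]]),            # dy/dz, shape (2, 1)
    }
}
assert jac["y"]["x"].shape == (2, 3)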

The differentiation method is set with the set_differentiation_method() method of Scenario:

  • for "finite_differences" (default value):

scenario.set_differentiation_method("finite_differences")
  • for the "complex_step" method (each discipline must handle complex numbers):

scenario.set_differentiation_method("complex_step")
  • for the linearized version of the disciplines ("user"): switching from direct mode to adjoint mode is automatic, depending on the numbers of inputs and outputs. It can also be set by the user, by setting linearization_mode to "direct" or "adjoint":

scenario.set_differentiation_method("user")
for discipline in scenario.disciplines:
    discipline.linearization_mode = "auto"  # default; can also be "direct" or "adjoint"

When implementing analytic derivatives, it is easy to make errors or to forget to differentiate some terms; that is why the implementation can be validated against finite differences or the complex step method, by means of the method check_jacobian(). In the snippet below, the construction of indata from the problem's default inputs is an assumption about the API:

from gemseo.problems.sobieski.disciplines import SobieskiMission
from gemseo.problems.sobieski.core import SobieskiProblem

problem = SobieskiProblem("complex128")
sr = SobieskiMission("complex128")
# indata holds the input values at which the Jacobian is checked;
# building it from the problem's default inputs is an assumed API
indata = problem.get_default_inputs(names=sr.get_input_data_names())
sr.check_jacobian(indata, threshold=1e-12)

To be relevant, the threshold value should be kept low (\(<10^{-10}\)).
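Such a small threshold is attainable because the complex step method does not suffer from subtractive cancellation, so the step can be made arbitrarily small. A quick standalone sketch on a toy scalar function (plain NumPy, independent of GEMSEO) illustrates the difference:

import numpy as np

# f(x) = exp(sin(x)); exact derivative: cos(x) * exp(sin(x))
f = lambda x: np.exp(np.sin(x))
x0 = 1.0
exact = np.cos(x0) * np.exp(np.sin(x0))

# Forward finite difference: accuracy limited by subtractive cancellation
h = 1e-8
fd = (f(x0 + h) - f(x0)) / h

# Complex step: f'(x) ~ Im(f(x + ih)) / h; no subtraction, so the step
# can be tiny and the error reaches machine precision
hc = 1e-200
cs = np.imag(f(x0 + 1j * hc)) / hc

print(f"finite differences error: {abs(fd - exact):.2e}")
print(f"complex step error:       {abs(cs - exact):.2e}")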