gemseo / mda

mda_chain module

An advanced MDA splitting algorithm based on graphs

class gemseo.mda.mda_chain.MDAChain(disciplines, sub_mda_class='MDAJacobi', max_mda_iter=20, name=None, n_processes=2, chain_linearize=False, tolerance=1e-06, use_lu_fact=False, norm0=None, **sub_mda_options)[source]

Bases: gemseo.mda.mda.MDA

The MDAChain computes a chain of sub-MDAs and simple discipline evaluations.

The execution sequence is provided by the DependencyGraph class.
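The graph-based splitting idea can be illustrated outside GEMSEO: strongly connected components of the discipline coupling graph correspond to groups of mutually coupled disciplines that require an inner MDA, while the reverse of the component discovery order gives a valid execution sequence. The sketch below is a minimal illustration using Tarjan's algorithm on a made-up coupling graph, not the actual DependencyGraph implementation:

```python
def tarjan_scc(graph):
    """Return the strongly connected components of a directed graph.

    ``graph`` maps a node to the list of its successors. Tarjan's
    algorithm emits SCCs in reverse topological order of the condensed
    graph, so reversing the result gives an execution sequence:
    multi-node SCCs are coupled groups (inner MDA), singletons are
    simple evaluations.
    """
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:
                visit(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            # v is the root of an SCC: pop the component off the stack.
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            sccs.append(component)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs


# Hypothetical couplings: A and B feed each other (coupled pair),
# B feeds C (feed-forward only).
couplings = {"A": ["B"], "B": ["A", "C"], "C": []}
execution_sequence = list(reversed(tarjan_scc(couplings)))
```

Here `execution_sequence` is `[["B", "A"], ["C"]]`: the coupled pair would be solved by a sub-MDA first, then C evaluated once.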

Constructor

Parameters
  • disciplines (list(MDODiscipline)) – the list of disciplines

  • sub_mda_class (str) – the class to instantiate for sub MDAs

  • max_mda_iter (int) – maximum number of iterations for sub MDAs

  • name (str) – name of self

  • n_processes (int) – number of processes for parallel run

  • chain_linearize (bool) – if True, linearize the chain of execution; otherwise, linearize the overall MDA with the base class method. The latter is usually preferred since it minimizes computations in adjoint mode; in direct mode, chain_linearize may be cheaper

  • tolerance (float) – tolerance of the iterative direct coupling solver; iterations stop when the norm of the current residuals divided by the initial residual norm falls below this tolerance

  • use_lu_fact (bool) – if True, when using adjoint/forward differentiation, store an LU factorization of the matrix to solve multiple RHS problems faster

  • norm0 (float) – reference value of the residual norm used in the convergence criterion; iterations stop when norm(residual)/norm0 < tolerance

  • sub_mda_options (dict) – options dictionary passed to the sub-MDAs
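The relative stopping criterion governed by tolerance, max_mda_iter, and norm0 can be sketched with a scalar fixed-point iteration. This is a hedged illustration of the criterion norm(residual)/norm0 < tolerance, not GEMSEO code; the coupling function below is a made-up stand-in for a real MDA:

```python
def solve_fixed_point(g, x0, tolerance=1e-6, max_mda_iter=20, norm0=None):
    """Iterate x <- g(x) until norm(residual)/norm0 < tolerance.

    If norm0 is None, it defaults to the norm of the first residual,
    mirroring the relative convergence criterion described above.
    Returns the final iterate and the number of iterations performed.
    """
    x = x0
    for iteration in range(1, max_mda_iter + 1):
        x_new = g(x)
        residual = abs(x_new - x)  # scalar stand-in for a residual norm
        if norm0 is None:
            norm0 = residual or 1.0  # guard against a zero first residual
        x = x_new
        if residual / norm0 < tolerance:
            return x, iteration
    return x, max_mda_iter


# Contraction with fixed point x* = 2; residuals shrink by 1/2 per step.
x, n_iter = solve_fixed_point(lambda x: 0.5 * x + 1.0, 0.0,
                              tolerance=1e-6, max_mda_iter=50)
```

With a contraction ratio of 1/2, the relative residual drops below 1e-6 after 21 iterations, so a max_mda_iter of 20 would stop this particular problem short of convergence.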

add_differentiated_inputs(inputs=None)[source]

Add inputs to the differentiation list.

Updates self._differentiated_inputs with inputs

Parameters

inputs – list of input variables to differentiate; if None, all inputs of the discipline are used (Default value = None)

add_differentiated_outputs(outputs=None)[source]

Add outputs to the differentiation list.

Updates self._differentiated_outputs with outputs

Parameters

outputs – list of output variables to differentiate; if None, all outputs of the discipline are used (Default value = None)
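The default-to-all behavior shared by both methods can be sketched with a minimal stand-in class. Everything here except the two attribute names quoted in the docstrings is a hypothetical simplification, not the GEMSEO MDA internals:

```python
class DisciplineSketch:
    """Minimal stand-in illustrating the differentiation lists."""

    def __init__(self, input_names, output_names):
        self.input_names = list(input_names)
        self.output_names = list(output_names)
        self._differentiated_inputs = []
        self._differentiated_outputs = []

    def add_differentiated_inputs(self, inputs=None):
        # None means: differentiate with respect to every input.
        for name in self.input_names if inputs is None else inputs:
            if name not in self._differentiated_inputs:
                self._differentiated_inputs.append(name)

    def add_differentiated_outputs(self, outputs=None):
        # None means: differentiate every output.
        for name in self.output_names if outputs is None else outputs:
            if name not in self._differentiated_outputs:
                self._differentiated_outputs.append(name)


d = DisciplineSketch(["x", "z"], ["y"])
d.add_differentiated_inputs()          # None -> all inputs
d.add_differentiated_outputs(["y"])    # explicit subset
```

After these calls, `d._differentiated_inputs` is `["x", "z"]` and `d._differentiated_outputs` is `["y"]`.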

get_expected_dataflow()[source]

Get the expected dataflow.

See MDOChain.get_expected_dataflow

get_expected_workflow()[source]

Get the expected workflow.

See MDOChain.get_expected_workflow

plot_residual_history(show=False, save=True, n_iterations=None, logscale=None, filename=None, figsize=(50, 10))[source]

Generate a plot of the residual history. All residuals are stored in the history; only the final residual of the converged MDA is plotted at each optimization iteration.

Parameters
  • show – if True, displays the plot on screen (Default value = False)

  • save – if True, saves the plot as a PDF file (Default value = True)

  • n_iterations – if not None, fix the number of iterations on the x axis (Default value = None)

  • logscale – if not None, fix the log scale on the y axis (Default value = None)

  • filename – name of the file in which to save the plot (Default value = None)

reset_statuses_for_run()[source]

Set all the statuses to PENDING.