
hessians module

Hessian matrix approximations from gradient pairs

class gemseo.post.core.hessians.BFGSApprox(history)[source]

Bases: gemseo.post.core.hessians.HessianApproximation

Builds a BFGS approximation from the optimization history

Constructor

Parameters
  • history – the optimization history

static iterate_s_k_y_k(x_hist, x_grad_hist)[source]

Generate the s_k and y_k terms, i.e. the design variable difference and the gradient difference between successive iterates; a short sketch follows the parameter list below.

Parameters
  • x_hist – design variables history array

  • x_grad_hist – gradients history array
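A minimal NumPy sketch of what these pairs are, using a hypothetical two-point history (the actual storage of the history arrays in GEMSEO may differ):

    import numpy as np

    # Hypothetical history: two iterates and the gradients recorded at each of them.
    x_hist = [np.array([1.0, 2.0]), np.array([0.5, 1.0])]
    x_grad_hist = [np.array([6.0, 7.0]), np.array([3.0, 3.5])]

    # For each pair of successive iterates, s_k is the step taken and
    # y_k is the corresponding change in the gradient.
    for k in range(len(x_hist) - 1):
        s_k = x_hist[k + 1] - x_hist[k]
        y_k = x_grad_hist[k + 1] - x_grad_hist[k]
        print(k, s_k, y_k)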

class gemseo.post.core.hessians.HessianApproximation(history)[source]

Bases: object

Abstract class for Hessian approximations built from the optimization history

Constructor

Parameters
  • history – the optimization history

build_approximation(funcname, save_diag=False, first_iter=0, last_iter=-1, b_mat0=None, at_most_niter=-1, return_x_grad=False, func_index=None, save_matrix=False, scaling=False, normalize_design_space=False, design_space=None)[source]

Builds the Hessian approximation B.

Parameters
  • funcname – function name

  • save_diag – if True, returns the list of diagonal approximations (Default value = False)

  • first_iter – first iteration after which the history is extracted (Default value = 0)

  • last_iter – last iteration before which the history is extracted (Default value = -1)

  • b_mat0 – initial approximation matrix (Default value = None)

  • at_most_niter – maximum number of iterations to take (Default value = -1)

  • return_x_grad – if True, also returns the last gradient and x (Default value = False)

  • func_index – index of the function output to consider when the function is vector-valued (Default value = None)

  • normalize_design_space – if True, scale the values to work in a normalized design space (x between 0 and 1)

  • design_space – the design space used to scale all values; mandatory if normalize_design_space is True

Returns

the B matrix, its diagonal and, if return_x_grad=True, the x and gradient history pairs used to build B; otherwise None and None are returned in their place for argument consistency
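A hedged usage sketch, assuming an already-solved GEMSEO optimization problem whose history (including gradient evaluations of an objective named "f") is available as problem.database; the placeholder problem and the exact layout of the returned tuple are assumptions, not guaranteed by this page:

    from gemseo.post.core.hessians import BFGSApprox

    # 'problem' is assumed to be a solved OptimizationProblem with gradients
    # of "f" recorded in its database.
    approximator = BFGSApprox(problem.database)
    result = approximator.build_approximation("f", save_diag=True)
    b_mat, diagonal = result[0], result[1]  # approximation B and its diagonal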

build_inverse_approximation(funcname, save_diag=False, first_iter=0, last_iter=-1, h_mat0=None, at_most_niter=-1, return_x_grad=False, func_index=None, save_matrix=False, factorize=False, scaling=False, angle_tol=1e-05, step_tol=10000000000.0, normalize_design_space=False, design_space=None)[source]

Builds the inverse Hessian approximation H

Parameters
  • funcname – function name

  • save_diag – if True, returns the list of diagonal approximations (Default value = False)

  • first_iter – first iteration after which the history is extracted (Default value = 0)

  • last_iter – last iteration before which the history is extracted (Default value = -1)

  • h_mat0 – initial inverse approximation matrix (Default value = None)

  • at_most_niter – maximum number of iterations to take (Default value = -1)

  • return_x_grad – if True, also returns the last gradient and x (Default value = False)

  • func_index – index of the function output to consider when the function is vector-valued (Default value = None)

  • normalize_design_space – if True, scale the values to work in a normalized design space (x between 0 and 1)

  • design_space – the design space used to scale all values; mandatory if normalize_design_space is True

Returns

the H matrix, its diagonal and, if return_x_grad=True, the x and gradient history pairs used to build H; otherwise None and None are returned in their place for argument consistency
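The calling pattern mirrors build_approximation; continuing the hedged sketch above with the same placeholder problem:

    # H approximates the inverse of the Hessian of "f".
    result = approximator.build_inverse_approximation("f")
    h_mat, diagonal = result[0], result[1]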

static compute_corrections(x_hist, x_grad_hist)[source]

Computes the corrections from the history.

static compute_scaling(hessk, hessk_dsk, dskt_hessk_dsk, dyk, dyt_dsk)[source]

Compute the scaling.

Parameters
  • hessk – previous approximation

  • hessk_dsk – product between hessk and dsk

  • dskt_hessk_dsk – product between dsk^T, hessk and dsk

  • dyk – gradients difference between iterates

  • dyt_dsk – product between dyk^T and dsk

static get_s_k_y_k(x_hist, x_grad_hist, iteration)[source]

Generate the s_k and y_k terms, i.e. the design variable difference and the gradient difference between successive iterates

Parameters
  • x_hist – design variables history array

  • x_grad_hist – gradients history array

  • iteration – iteration number for which the pair must be generated

get_x_grad_history(funcname, first_iter=0, last_iter=0, at_most_niter=-1, func_index=None, normalize_design_space=False, design_space=None)[source]

Get the gradient history and the design variable history at the points where gradients were evaluated

Parameters
  • funcname – function name

  • first_iter – first iteration after which the history is extracted (Default value = 0)

  • last_iter – last iteration before which the history is extracted (Default value = 0)

  • at_most_niter – maximum number of iterations to take (Default value = -1)

  • func_index – index of the function output to consider when the function is vector-valued (Default value = None)

  • normalize_design_space – if True, scale the values to work in a normalized design space (x between 0 and 1)

  • design_space – the design space used to scale all values; mandatory if normalize_design_space is True

static iterate_approximation(hessk, dsk, dyk, scaling=False)[source]

BFGS iteration from step k to step k+1

Parameters
  • hessk – previous approximation

  • dsk – design variable difference between iterates

  • dyk – gradients difference between iterates

  • scaling – if True, apply the scaling step (Default value = False)

Returns

updated approximation
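A self-contained sketch of the textbook BFGS update that a single iteration performs (the optional scaling step is omitted; the data below is hypothetical):

    import numpy as np

    def bfgs_update(hessk, dsk, dyk):
        """Textbook BFGS update of the Hessian approximation."""
        hessk_dsk = hessk @ dsk
        return (hessk
                - np.outer(hessk_dsk, hessk_dsk) / (dsk @ hessk_dsk)
                + np.outer(dyk, dyk) / (dyk @ dsk))

    # Check on a quadratic f(x) = 0.5 * x @ A @ x, whose exact Hessian is A.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    dsk = np.array([0.5, -1.0])
    dyk = A @ dsk  # exact secant pair for a quadratic
    hessk1 = bfgs_update(np.eye(2), dsk, dyk)
    assert np.allclose(hessk1 @ dsk, dyk)  # the secant equation B_{k+1} dsk = dyk holds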

static iterate_inverse_approximation(h_mat, s_k, y_k, h_factor=None, b_mat=None, b_factor=None, factorize=False, scaling=False)[source]

Inverse BFGS iteration

Parameters
  • h_mat – previous approximation

  • s_k – design variable difference between iterates

  • y_k – gradients difference between iterates

Returns

updated inverse approximation
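A corresponding sketch of the textbook inverse BFGS update (the factorization and scaling options are omitted; names and data are illustrative):

    import numpy as np

    def inverse_bfgs_update(h_mat, s_k, y_k):
        """Textbook BFGS update of the inverse Hessian approximation."""
        rho = 1.0 / (y_k @ s_k)
        identity = np.eye(s_k.size)
        left = identity - rho * np.outer(s_k, y_k)
        right = identity - rho * np.outer(y_k, s_k)
        return left @ h_mat @ right + rho * np.outer(s_k, s_k)

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    s_k = np.array([0.5, -1.0])
    y_k = A @ s_k
    h_mat = inverse_bfgs_update(np.eye(2), s_k, y_k)
    assert np.allclose(h_mat @ y_k, s_k)  # inverse secant equation H_{k+1} y_k = s_k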

static iterate_s_k_y_k(x_hist, x_grad_hist)[source]

Generate the s_k and y_k terms, i.e. the design variable difference and the gradient difference between successive iterates

Parameters
  • x_hist – design variables history array

  • x_grad_hist – gradients history array

static rebuild_history(x_corr, x_0, grad_corr, g_0)[source]

Computes the history from the corrections.

class gemseo.post.core.hessians.LSTSQApprox(history)[source]

Bases: gemseo.post.core.hessians.HessianApproximation

Builds a least-squares approximation from the optimization history

Constructor

Parameters
  • history – the optimization history

build_approximation(funcname, save_diag=False, first_iter=0, last_iter=-1, b_mat0=None, at_most_niter=-1, return_x_grad=False, scaling=False, func_index=-1, normalize_design_space=False, design_space=None)[source]

Builds the Hessian approximation

Parameters
  • funcname – function name

  • save_diag – if True, returns the list of diagonal approximations (Default value = False)

  • first_iter – first iteration after which the history is extracted (Default value = 0)

  • last_iter – last iteration before which the history is extracted (Default value = -1)

  • b_mat0 – initial approximation matrix (Default value = None)

  • at_most_niter – maximum number of iterations to take (Default value = -1)

  • return_x_grad – if True, also returns the last gradient and x (Default value = False)

  • func_index – index of the function output to consider when the function is vector-valued (Default value = -1)

  • normalize_design_space – if True, scale the values to work in a normalized design space (x between 0 and 1)

  • design_space – the design space used to scale all values; mandatory if normalize_design_space is True

Returns

the B matrix, its diagonal and, if return_x_grad=True, the x and gradient history pairs used to build B; otherwise None and None are returned in their place for argument consistency
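This page does not spell out the least-squares formulation; a plausible, purely illustrative reading is a least-squares fit of a symmetric matrix B to the secant equations B s_k ≈ y_k collected over the history. A runnable sketch of that idea, with fabricated data (not necessarily the exact scheme implemented here):

    import numpy as np

    A = np.array([[4.0, 1.0], [1.0, 3.0]])  # "true" Hessian used to fabricate the data
    s_mat = np.array([[0.5, -1.0], [1.0, 0.25], [-0.75, 0.5]])  # steps s_k as rows
    y_mat = s_mat @ A.T  # exact gradient differences, y_k = A @ s_k

    # Solve min_B || s_mat @ B.T - y_mat ||_F, then symmetrize the result.
    b_transposed, *_ = np.linalg.lstsq(s_mat, y_mat, rcond=None)
    b_mat = 0.5 * (b_transposed + b_transposed.T)
    assert np.allclose(b_mat, A)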

class gemseo.post.core.hessians.SR1Approx(history)[source]

Bases: gemseo.post.core.hessians.HessianApproximation

Builds a Symmetric Rank One (SR1) approximation from the optimization history

Constructor

Parameters
  • history – the optimization history

EPSILON = 1e-08

static iterate_approximation(b_mat, s_k, y_k, scaling=False)[source]

SR1 iteration

Parameters
  • b_mat – previous approximation

  • s_k – design variable difference between iterates

  • y_k – gradients difference between iterates

  • scaling – if True, apply the scaling step (Default value = False)

Returns

updated approximation
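A self-contained sketch of the textbook SR1 update, with a common safeguard that skips the update when the denominator is nearly zero (the exact role of EPSILON in this class is an assumption; scaling is omitted):

    import numpy as np

    EPSILON = 1e-08

    def sr1_update(b_mat, s_k, y_k):
        """Textbook SR1 update, skipped when the denominator is too small."""
        residual = y_k - b_mat @ s_k
        denominator = residual @ s_k
        if abs(denominator) < EPSILON * np.linalg.norm(s_k) * np.linalg.norm(residual):
            return b_mat  # skip the update to avoid numerical blow-up
        return b_mat + np.outer(residual, residual) / denominator

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    s_k = np.array([0.5, -1.0])
    y_k = A @ s_k
    b_mat = sr1_update(np.eye(2), s_k, y_k)
    assert np.allclose(b_mat @ s_k, y_k)  # SR1 also satisfies the secant equation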