Commit ff25fdbe authored by Antoine Berchet's avatar Antoine Berchet

Merged NILU master

parents 25674479 091603c6
Python version of the congrad minimization algorithm
Mike Fisher (ECMWF), April 2002
Frederic Chevallier (LSCE), April 2004, for the Python adaptation
This software is governed by the CeCILL-B license under French law and
abiding by the rules of distribution of free software. You can use,
modify and/or redistribute the software under the terms of the CeCILL
license as circulated by CEA, CNRS and INRIA at the following URL
This Agreement is an open source software license intended to give users
significant freedom to modify and redistribute the software licensed hereunder.
The exercising of this freedom is conditional upon a strong obligation
of giving credits for everybody that distributes a software
incorporating a software ruled by the current license so as all
contributions to be properly identified and acknowledged.
In consideration of access to the source code and the rights to copy,
modify and redistribute granted by the license, users are provided only
with a limited warranty and the software's author, the holder of the
economic rights, and the successive licensors only have limited liability.
In this respect, the risks associated with loading, using, modifying
and/or developing or reproducing the software by the user are brought to
the user's attention, given its Free Software status, which may make it
complicated to use, with the result that its use is reserved for
developers and experienced professionals having in-depth computer
knowledge. Users are therefore encouraged to load and test the
suitability of the software as regards their requirements in conditions
enabling the security of their systems and/or data to be ensured and,
more generally, to use and operate it in the same conditions of
security. This Agreement may be freely reproduced and published,
provided it is not altered, and that no provisions are either added or
removed herefrom.
The fact that you are presently reading this means that you have had
knowledge of the CeCILL license and that you accept its terms.
Description of the Fortran subroutine:
CONGRAD - Lanczos algorithm linear equation and eigenvalue solver
Purpose. Simultaneously minimizes the cost function and determines
some of the eigenvalues and eigenvectors of its Hessian.
Explicit arguments:
Inputs: simul -- The simulator
px -- guess at the vector which minimizes the cost.
pgrad -- gradient of cost function at initial 'px'.
planc1 -- Initial Lanczos vector
preduc -- required reduction in gradient norm.
pevbnd -- Accuracy required of approximate eigenvectors.
kvadim -- dimension of vectors px, pgrad, etc.
kmaxit -- iteration stops if number of iterations
exceeds this value.
kverbose -- verbosity (0 => no messages, 1 => taciturn,
2 => verbose)
kulout -- I/O unit number for information messages (e.g.
standard output)
kulerr -- I/O unit number for error messages (e.g. standard
error)
ldevecs -- calculate eigenvectors?
ldsolve -- minimize the cost function?
Outputs: px -- improved guess at the vector which minimizes
the cost.
pgrad -- estimated gradient of cost function at final 'px'.
preduc -- achieved reduction in gradient norm.
pgolubu -- upper bound for: (planc1)' (J'')^-1 planc1
pgolubl -- lower bound for: (planc1)' (J'')^-1 planc1
pevecs -- eigenvectors of the Hessian (with
preconditioning undone, and
scaled by sqrt(lambda-1))
kmaxit -- the number of iterations actually performed
Externals. SIMUL
SSTEQR/DSTEQR (LAPACK eigen-decomposition routine)
SPTSV/DPTSV (LAPACK symmetric tri-diagonal solver)
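In a Python setting, the LAPACK routines above have dense NumPy stand-ins. The sketch below is illustrative only (the coefficient values are made up): np.linalg.eigh plays the role of DSTEQR on the tridiagonal matrix, and np.linalg.solve the role of DPTSV.

```python
import numpy as np

# Tridiagonal matrix T built from illustrative Lanczos coefficients
d = np.array([4.0, 3.0, 2.0])   # diagonal (the zdelta values)
e = np.array([1.0, 0.5])        # off-diagonal (the zbeta values)
T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)

# DSTEQR stand-in: eigen-decomposition of the symmetric tridiagonal T
ritz_vals, ritz_vecs = np.linalg.eigh(T)

# DPTSV stand-in: solve the positive-definite tridiagonal system T y = rhs
rhs = np.zeros(3)
rhs[0] = 1.0
y = np.linalg.solve(T, rhs)
```

Dedicated tridiagonal routines (e.g. scipy.linalg.eigh_tridiagonal) would avoid forming T densely; the dense calls simply keep the sketch dependency-free.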
The simulator (SIMUL) interface is as follows:
CALL SIMUL (kindic,kvadim,px,pcost,pgrad,kiter)
INTEGER_M, INTENT(IN) :: kindic -- SIMUL is always called
with indic=4
INTEGER_M, INTENT(IN) :: kvadim -- The dimension of the
control vector
INTEGER_M, INTENT(IN) :: kiter -- The iteration counter
REAL_B, INTENT(IN) :: px -- The control variable
REAL_B, INTENT(OUT) :: pcost -- The value of the cost
REAL_B, INTENT(OUT) :: pgrad -- The gradient of the cost
(Note: CONGRAD does not make use of the value of the cost function, pcost)
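As a sketch of this calling convention in Python (the function body and the quadratic cost are hypothetical, not from the source), a simulator returning the cost and gradient of J(x) = 0.5 x'Ax - b'x might look like:

```python
import numpy as np

def simul(kindic, kvadim, px, kiter):
    """Hypothetical simulator for the quadratic cost J(x) = 0.5 x'Ax - b'x.

    Mirrors the argument list of the Fortran SIMUL interface and
    returns (pcost, pgrad); CONGRAD only uses the gradient pgrad.
    """
    A = np.diag(np.arange(1.0, kvadim + 1.0))  # illustrative SPD Hessian
    b = np.ones(kvadim)
    pcost = 0.5 * px @ (A @ px) - b @ px
    pgrad = A @ px - b
    return pcost, pgrad

# The gradient vanishes at the minimizer x* = A^-1 b
xstar = 1.0 / np.arange(1.0, 6.0)
pcost, pgrad = simul(4, 5, xstar, 0)
```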
None yet!
Mike Fisher *ECMWF*
Original 94/04/19
M. Fisher: 95/09/20 Reorganized code
M. Fisher: 96/05/16 Orthogonalize every iteration
M. Fisher: 96/08/01 Optionally keep vectors in memory
E. Andersson 96/08/01 Pre-calculated math.sqrt(SCALP)
M. Fisher: 97/08/05 Change orthogonalization and f90ize
M. Fisher: 99/01/12 Generalized Cross Validation
M. Fisher: 02/04/30 remove ECMWF-specific code
F. Chevallier: 04/05 translate into python
Comparison of congrad variables to those of Lanczos (R Thompson, Feb 2012)
solve Ax = w for x : build the tridiagonal T = V*A*Vt
A = pgrad
vj(0) = pgrad/norm(pgrad)
beta(0) = 0
for j = 1:k
    wj = A*vj - beta(j-1)*v(j-1)
    alpha(j) = < wj, vj >
    wj+1 = wj - alpha(j)*vj
    beta(j) = norm(wj+1)
    vj+1 = wj+1/beta(j)
congrad variable equivalents :
zdelta = alpha
zbeta = beta
zcglwk[j,] = vj
pgrad = A
solution :
T = tridiagonal matrix with diagonal alpha and off-diagonal beta
x = x0 + V*T^-1*Vt
where :
x = px (best fit solution vector)
x0 = px0 (first guess solution vector)
vt = zqg0
from types import MethodType
from .check import check_options
from .congrad import congrad
from .minimize import minimize
_name = "congrad"
requirements = {
    "simulator": {
        "any": True,
        "empty": True,
        "name": "gausscost",
        "version": "std",
    }
}
def ini_data(plugin, **kwargs):
    """Initializes congrad.

    Args:
        plugin (Plugin): options for the variational inversion

    Returns:
        the updated plugin
    """

    # Function to check the consistency of options and arguments
    plugin.check_options = MethodType(check_options, plugin)

    # CONGRAD itself
    plugin.congrad = MethodType(congrad, plugin)

    return plugin
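MethodType simply binds a module-level function to the plugin instance, so that `self` inside check_options and congrad refers to the plugin. A minimal illustration with a stand-in class (the Plugin class below is hypothetical):

```python
from types import MethodType

def check_options(self, chi):
    # Free function: once bound, `self` is the plugin instance
    self.kvadim = len(chi)
    return self

class Plugin:
    """Stand-in for the actual plugin class (hypothetical)."""
    pass

plugin = Plugin()
# Bind the free function onto this one instance, as ini_data does
plugin.check_options = MethodType(check_options, plugin)
plugin.check_options([1.0, 2.0, 3.0])
```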
def check_options(self, chi, **kwargs):
    """Checks whether the necessary parameters are defined.
    Fills in default values if not.

    Args:
        self (Plugin): the minimizer plugin
        chi (numpy array): chi vector

    Returns:
        the updated minimizer
    """

    # Required reduction in gradient norm
    self.zreduc = getattr(self, "zreduc", 1e-15)

    # Accuracy required of approximate eigenvectors
    self.pevbnd = getattr(self, "pevbnd", 0.01)

    # Dimension of the control vector
    self.kvadim = chi.size

    # Verbosity (0 => no messages, 1 => taciturn, 2 => verbose)
    self.kverbose = getattr(self, "kverbose", 1)

    # Minimize the cost function?
    self.ldsolve = getattr(self, "ldsolve", 1)

    # Check for missing attributes
    if not hasattr(self, "maxiter"):
        raise AttributeError(
            "maxiter is missing in the definition of the "
            "minimizer. Please check your Yaml file"
        )

    self.knevecout = self.maxiter

    return self
import copy
import numpy as np
from logging import info
def minimize(self, finit, gradinit, chi0, **kwargs):
    # x, f, g, auxil, io, niter, nsim, iz, df1, m=5, dxmin=1.e-20,
    # epsg=1.e-20, impres=1, mode=0, **kwargs
    """Entry point for the CONGRAD algorithm.

    Args:
        finit (float): initial value of the function to minimize
        gradinit (np.array): gradient at the starting point
        chi0 (np.array): initial state of the unknown to optimize
        simulator (module): simulator module evaluating the function and
            its gradient
        minimizer (module): minimizer module, used to define minimizer
            options

    Returns:
        np.array: the optimized control vector
    """

    # Initializing options (and filling missing values with defaults)
    self = self.check_options(chi0, **kwargs)

    # Running CONGRAD
    lanczvect0 = copy.deepcopy(gradinit)
    xopt, gradopt, preduc, pevecs, iiter = self.congrad(
        chi0, gradinit, lanczvect0, **kwargs
    )

    # Final verbose and output
    towrite = """
        number of iterations: {}
        achieved relative reduction of the gradient: {}
        """.format(iiter, preduc)
    info(towrite)

    r1 =, xopt))
    r2 =, gradopt))
    info("norm of x = " + str(r1))
    info("norm of g = " + str(r2))

    return xopt