VERIFY / CIF / Commits / ff25fdbe

Commit ff25fdbe authored Feb 06, 2021 by Antoine Berchet

Merged NILU master

Parents: 25674479, 091603c6
Pipeline #91 failed
Changes: 5
pycif/plugins/minimizers/congrad/LICENSE deleted (100644 → 0)
Python version of the congrad minimization algorithm
Mike Fisher (ECMWF), April 2002
Frederic Chevallier (LSCE), April 2004, for the Python adaptation
This software is governed by the CeCILL-B license under French law and
abiding by the rules of distribution of free software. You can use,
modify and/or redistribute the software under the terms of the CeCILL-B
license as circulated by CEA, CNRS and INRIA at the following URL
"http://www.cecill.info".
This Agreement is an open source software license intended to give users
significant freedom to modify and redistribute the software licensed
hereunder.
The exercising of this freedom is conditional upon a strong obligation
of giving credits for everybody that distributes a software
incorporating a software ruled by the current license so as all
contributions to be properly identified and acknowledged.
In consideration of access to the source code and the rights to copy,
modify and redistribute granted by the license, users are provided only
with a limited warranty and the software's author, the holder of the
economic rights, and the successive licensors only have limited liability.
In this respect, the risks associated with loading, using, modifying
and/or developing or reproducing the software by the user are brought to
the user's attention, given its Free Software status, which may make it
complicated to use, with the result that its use is reserved for
developers and experienced professionals having in-depth computer
knowledge. Users are therefore encouraged to load and test the
suitability of the software as regards their requirements in conditions
enabling the security of their systems and/or data to be ensured and,
more generally, to use and operate it in the same conditions of
security. This Agreement may be freely reproduced and published,
provided it is not altered, and that no provisions are either added or
removed herefrom.
The fact that you are presently reading this means that you have had
knowledge of the CeCILL license and that you accept its terms.
==========================================================================
pycif/plugins/minimizers/congrad/README deleted (100644 → 0)
Python version of the congrad minimization algorithm
Mike Fisher (ECMWF), April 2002
Frederic Chevallier (LSCE), April 2004, for the Python adaptation
Description of the Fortran subroutine:

CONGRAD - Lanczos algorithm linear equation and eigenvalue solver

Purpose. Simultaneously minimizes the cost function and determines
some of the eigenvalues and eigenvectors of its Hessian.

Interface.
----------
CALL CONGRAD (...)

Explicit arguments:

Inputs:
    simul    - the simulator
    px       - guess at the vector which minimizes the cost
    pgrad    - gradient of the cost function at initial 'px'
    planc1   - initial Lanczos vector
    preduc   - required reduction in gradient norm
    pevbnd   - accuracy required of approximate eigenvectors
    kvadim   - dimension of vectors px, pgrad, etc.
    kmaxit   - iteration stops if the number of iterations exceeds this value
    kverbose - verbosity (0 => no messages, 1 => taciturn, 2 => verbose)
    kulout   - I/O unit number for information messages (e.g. standard output)
    kulerr   - I/O unit number for error messages (e.g. standard error)
    ldevecs  - calculate eigenvectors?
    ldsolve  - minimize the cost function?

Outputs:
    px       - improved guess at the vector which minimizes the cost
    pgrad    - estimated gradient of the cost function at final 'px'
    preduc   - achieved reduction in gradient norm
    pgolubu  - upper bound for: (planc1)' (J'')^-1 planc1
    pgolubl  - lower bound for: (planc1)' (J'')^-1 planc1
    pevecs   - eigenvectors of the Hessian (with preconditioning undone,
               and scaled by sqrt(lambda-1))
    kmaxit   - the number of iterations actually performed

Externals.
----------
    SIMUL
    SSTEQR/DSTEQR (LAPACK eigendecomposition routine)
    SPTSV/DPTSV (LAPACK symmetric tridiagonal solver)
    PRECOND
    WREVECS
    XFORMEV
    ABOR1

The simulator (SIMUL) interface is as follows:

CALL SIMUL (kindic, kvadim, px, pcost, pgrad, kiter)

    INTEGER_M, INTENT(IN)  :: kindic - SIMUL is always called with indic=4
    INTEGER_M, INTENT(IN)  :: kvadim - the dimension of the control vector
    INTEGER_M, INTENT(IN)  :: kiter  - the iteration counter
    REAL_B,    INTENT(IN)  :: px     - the control variable
    REAL_B,    INTENT(OUT) :: pcost  - the value of the cost function
    REAL_B,    INTENT(OUT) :: pgrad  - the gradient of the cost function

(Note: CONGRAD does not make use of the value of the cost function, pcost)
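As an illustration, a Python counterpart of this callback interface could look like the following. This is a minimal sketch with a made-up quadratic cost function, not the actual pycif simulator; the name simul and the toy cost are assumptions for illustration only:

```python
import numpy as np

def simul(kindic, kvadim, px, kiter):
    """SIMUL-style callback returning (pcost, pgrad) for a toy
    quadratic cost J(x) = 0.5 * x.x, whose gradient is x itself."""
    pcost = 0.5 * float(np.dot(px, px))
    pgrad = np.copy(px)
    return pcost, pgrad

# One evaluation of the toy cost and its gradient
cost, grad = simul(4, 3, np.array([1.0, 2.0, 2.0]), 0)
```

As the note above says, a minimizer like CONGRAD would only consume the gradient, not the cost value.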
Reference.
----------
None yet!

Author.
-------
Mike Fisher *ECMWF*

Modifications.
--------------
Original 94/04/19
M. Fisher:     95/09/20 Reorganized code
M. Fisher:     96/05/16 Orthogonalize every iteration
M. Fisher:     96/08/01 Optionally keep vectors in memory
E. Andersson:  96/08/01 Pre-calculated math.sqrt(SCALP)
M. Fisher:     97/08/05 Change orthogonalization and f90ize
M. Fisher:     97/11/26 USE YOM_DISTRIBUTED_VECTORS
M. Fisher:     99/01/12 Generalized Cross Validation
M. Fisher:     02/04/30 Remove ECMWF-specific code
F. Chevallier: 04/05    Translate into Python
Comparison of congrad variables to those of Lanczos (R. Thompson, Feb 2012)
---------------------------------------------------------------------------
solve x for Ax = w : find T = V*A*Vt

    A = pgrad
    vj(0) = pgrad/norm(pgrad)
    beta(0) = 0
    for i = 1:k
        wj = A*vj - beta*vj-1
        alpha = < wj, vj >
        wj+1 = wj - alpha*vj
        beta = norm(wj+1)
        vj+1 = wj+1/beta

congrad variable equivalents:
    zdelta = alpha
    zbeta = beta
    zcglwk[j,] = vj
    pgrad = A

solution:
    T = tridiagonal matrix with alpha on the diagonal and beta off-diagonal
    x = x0 + V*T^-1*Vt
where:
    x  = px (best-fit solution vector)
    x0 = px0 (first-guess solution vector)
    vt = zqg0
\ No newline at end of file
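The Lanczos recursion described in the README can be sketched in standalone NumPy. This is an illustrative implementation of plain Lanczos tridiagonalization under the indexing convention beta(0) = 0, independent of the congrad code; the function name and array layout are made up for this sketch:

```python
import numpy as np

def lanczos(A, v0, k):
    """Run k steps of plain Lanczos tridiagonalization on a symmetric
    matrix A, starting from v0. Returns the diagonal (alpha), the
    off-diagonal (beta), and the orthonormal basis vectors V (rows)."""
    n = v0.size
    V = np.zeros((k + 1, n))
    alpha = np.zeros(k)
    beta = np.zeros(k + 1)  # beta[0] = 0 by convention
    V[0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[j]
        if j > 0:
            # Subtract the component along the previous basis vector
            w = w - beta[j] * V[j - 1]
        alpha[j] = w @ V[j]          # diagonal entry of T
        w = w - alpha[j] * V[j]
        beta[j + 1] = np.linalg.norm(w)
        V[j + 1] = w / beta[j + 1]
    return alpha, beta[1:], V
```

After k steps, V[:k] @ A @ V[:k].T is (numerically) the tridiagonal matrix T, which is what congrad hands to the LAPACK tridiagonal solvers listed above.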
pycif/plugins/minimizers/congrad/__init__.py deleted (100644 → 0)
"""
Python version of the congrad minimization algorithm
Mike Fisher (ECMWF), April 2002
Frederic Chevallier (LSCE), April 2004, for the Python adaptation
"""
from
types
import
MethodType
from
.check
import
check_options
from
.congrad
import
congrad
from
.minimize
import
minimize
_name
=
"congrad"
requirements
=
{
"simulator"
:
{
"any"
:
True
,
"empty"
:
True
,
"name"
:
"gausscost"
,
"version"
:
"std"
,
},
}
def
ini_data
(
plugin
,
**
kwargs
):
"""Initializes congrad.
Args:
plugin (Plugin): options for the variational inversion
Returns:
updated plugin
"""
# Function to check the consistency of options and arguments
plugin
.
check_options
=
MethodType
(
check_options
,
plugin
)
# M1QN3 itself
plugin
.
congrad
=
MethodType
(
congrad
,
plugin
)
return
plugin
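The MethodType binding used in ini_data can be shown in isolation. The toy Plugin class and the return string below are made up for illustration; only the binding mechanism mirrors the code above:

```python
from types import MethodType

class Plugin:
    """Hypothetical stand-in for a pycif Plugin instance."""
    pass

def check_options(self, chi):
    # Once bound, 'self' is the plugin instance the call goes through
    return "{} options checked on {}".format(len(chi), type(self).__name__)

plugin = Plugin()
# Bind the module-level function as an instance method, as ini_data does
plugin.check_options = MethodType(check_options, plugin)
result = plugin.check_options([0.1, 0.2, 0.3])
```

MethodType is what makes the instance arrive as self; assigning the bare function to the attribute instead would require the caller to pass the plugin explicitly.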
pycif/plugins/minimizers/congrad/check.py deleted (100644 → 0)
def check_options(self, chi, **kwargs):
    """Check whether the necessary parameters are defined,
    filling in default values when they are not.

    Args:
        self (Plugin): the minimizer plugin
        chi (numpy array): chi vector

    Returns:
        the updated minimizer
    """
    # Required reduction in gradient norm
    self.zreduc = getattr(self, "zreduc", 1e-15)

    # Accuracy required of approximate eigenvectors
    self.pevbnd = getattr(self, "pevbnd", 0.01)

    # Dimension of the control vector
    self.kvadim = chi.size

    # Verbosity (0 => no messages, 1 => taciturn, 2 => verbose)
    self.kverbose = getattr(self, "kverbose", 1)

    # Minimize the cost function?
    self.ldsolve = getattr(self, "ldsolve", 1)

    # Check for missing attributes
    if not hasattr(self, "maxiter"):
        raise AttributeError(
            "maxiter is missing in the definition of the "
            "minimizer. Please check your Yaml file"
        )
    else:
        self.knevecout = self.maxiter

    return self
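The getattr(attr, default) idiom used throughout check_options fills in defaults without clobbering user-supplied settings. A minimal standalone illustration, with a toy options holder standing in for the minimizer plugin:

```python
class Options:
    """Hypothetical stand-in for the minimizer plugin."""
    pass

opts = Options()
opts.kverbose = 2  # explicitly set by the user, e.g. from the Yaml file

# Defaults apply only where no value was set beforehand
opts.zreduc = getattr(opts, "zreduc", 1e-15)   # missing -> gets the default
opts.kverbose = getattr(opts, "kverbose", 1)   # present -> left unchanged
```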
pycif/plugins/minimizers/congrad/minimize.py deleted (100644 → 0)
import copy
from logging import info

import numpy as np


def minimize(self, finit, gradinit, chi0, **kwargs):
    # x, f, g, auxil, io, niter, nsim, iz, df1, m=5, dxmin=1.e-20,
    # epsg=1.e-20, impres=1, mode=0, **kwargs
    """Entry point for the CONGRAD algorithm.

    Args:
        finit (float): initial value of the function to minimize
        gradinit (np.array): gradient at the starting point
        chi0 (np.array): initial state of the unknown to optimize

    Returns:
        np.array: the optimized control vector
    """
    # Initializing options (and filling missing values with defaults)
    self = self.check_options(chi0, **kwargs)

    # Running CONGRAD; the initial Lanczos vector is the initial gradient
    lanczvect0 = copy.deepcopy(gradinit)
    xopt, gradopt, preduc, pevecs, iiter = self.congrad(
        chi0, gradinit, lanczvect0, **kwargs
    )

    # Final verbose and output
    towrite = """
    CONGRAD:
        number of iterations: {}
        achieved relative reduction of the gradient: {}
    """.format(iiter, preduc)
    info(towrite)

    r1 = np.sqrt(np.dot(xopt, xopt))
    r2 = np.sqrt(np.dot(gradopt, gradopt))
    info("norm of x = " + str(r1))
    info("norm of g = " + str(r2))

    return xopt
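The final diagnostics report plain Euclidean norms: for a 1-D vector, np.sqrt(np.dot(v, v)) is equivalent to np.linalg.norm(v), as this small check shows:

```python
import numpy as np

g = np.array([3.0, 4.0])
r = np.sqrt(np.dot(g, g))  # Euclidean norm, as computed in minimize above
```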