Commit 69623ce2 authored by Antoine Berchet's avatar Antoine Berchet
Browse files

Merge branch 'devel' into 'LSCE'

Devel

See merge request satinv/cif!162
parents 47a48716 76e68271
......@@ -126,6 +126,48 @@ release_tagging:
only:
- devel
# Check that coverage has not decreased
improved_coverage:
stage: coverage
image:
name: pycif/pycif-ubuntu:0.1
entrypoint: [""]
before_script:
- pip freeze
- pip install coverage
script:
- echo AAAAAAAAAA
- ls -R coverage_raw/
- echo AAAAAAAAAA
- python3 -m coverage combine coverage_raw/.coverage*
- coverage html -d reports/coverage
- coverage xml -o reports/coverage.xml
after_script:
- mkdir -p coverage
- xmlstarlet sel -t -v "//coverage/@line-rate" reports/coverage.xml > coverage/.current_coverage
- calc() { awk "BEGIN{print $*}"; }
- percent_coverage=`cat coverage/.current_coverage`
- tot_coverage=`calc ${percent_coverage}*100`
- echo 'TOTAL COVERAGE:'" ${tot_coverage}%"
coverage: '/^TOTAL COVERAGE: ([0-9\.]+\%)$/'
# - MAX=`if [ -f coverage/.master_cov ] ; then cat coverage/.master_cov ; else echo 0.00 ; fi`
# - CURRENT=`cat coverage/.current_coverage`
# - echo $CURRENT
# - echo $MAX
# - if [[ $CURRENT < $MAX ]] ; then echo "Coverage decreased!!!"; exit 1 ; else echo "Coverage did not decrease, good job!"; exit 0 ; fi;
artifacts:
when: always
paths:
- reports/coverage/
- coverage/.current_coverage
# - coverage_raw/
# cache:
# paths:
# - coverage/.master_cov
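# NOTE (sketch only, not part of this commit): the commented-out check above uses
# `[[ $CURRENT < $MAX ]]`, which bash evaluates as a *lexicographic* string
# comparison ("9.5" sorts above "10.2"). If the check is re-enabled, a numeric
# comparison through awk, in the spirit of the job's `calc` helper, could look
# like this (the `coverage_decreased` name is illustrative):

```shell
# Sketch: numeric comparison of two coverage values via awk.
# Exits 0 when the current coverage is strictly below the reference.
coverage_decreased() {
  awk -v cur="$1" -v max="$2" 'BEGIN { exit !(cur < max) }'
}

# Lexicographically "9.5" > "10.2", but numerically it is a decrease:
if coverage_decreased 9.5 10.2; then
  echo "Coverage decreased!!!"
else
  echo "Coverage did not decrease, good job!"
fi
```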
# Generate figures and artifacts for the website
article:
......@@ -141,7 +183,11 @@ article:
after_script:
- mkdir -p coverage
- xmlstarlet sel -t -v "//coverage/@line-rate" reports/coverage.xml > coverage/.current_coverage
- echo 'TOTAL COVERAGE:'" $(cat coverage/.current_coverage)%"
- calc() { awk "BEGIN{print $*}"; }
- percent_coverage=`cat coverage/.current_coverage`
- tot_coverage=`calc ${percent_coverage}*100`
- echo 'TOTAL COVERAGE:'" ${tot_coverage}%"
- mv coverage_raw/coverage/.coverage coverage_raw/.coverage.article
coverage: '/^TOTAL COVERAGE: ([0-9\.]+\%)$/'
artifacts:
when: always
......@@ -149,6 +195,7 @@ article:
- reports/pytest*.html
- reports/coverage/
- coverage/.current_coverage
- coverage_raw
- examples_artifact
- figures_artifact
only:
......@@ -167,7 +214,11 @@ article_uncertainties:
after_script:
- mkdir -p coverage
- xmlstarlet sel -t -v "//coverage/@line-rate" reports/coverage.xml > coverage/.current_coverage
- echo 'TOTAL COVERAGE:'" $(cat coverage/.current_coverage)%"
- calc() { awk "BEGIN{print $*}"; }
- percent_coverage=`cat coverage/.current_coverage`
- tot_coverage=`calc ${percent_coverage}*100`
- echo 'TOTAL COVERAGE:'" ${tot_coverage}%"
- mv coverage_raw/coverage/.coverage coverage_raw/.coverage.article_uncertainties
coverage: '/^TOTAL COVERAGE: ([0-9\.]+\%)$/'
artifacts:
when: always
......@@ -175,6 +226,7 @@ article_uncertainties:
- reports/pytest*.html
- reports/coverage/
- coverage/.current_coverage
- coverage_raw
- examples_artifact
- figures_artifact
only:
......@@ -194,7 +246,11 @@ tests_dummy:
after_script:
- mkdir -p coverage
- xmlstarlet sel -t -v "//coverage/@line-rate" reports/coverage.xml > coverage/.current_coverage
- echo 'TOTAL COVERAGE:'" $(cat coverage/.current_coverage)%"
- calc() { awk "BEGIN{print $*}"; }
- percent_coverage=`cat coverage/.current_coverage`
- tot_coverage=`calc ${percent_coverage}*100`
- echo 'TOTAL COVERAGE:'" ${tot_coverage}%"
- mv coverage_raw/coverage/.coverage coverage_raw/.coverage.dummy
coverage: '/^TOTAL COVERAGE: ([0-9\.]+\%)$/'
artifacts:
when: always
......@@ -202,6 +258,7 @@ tests_dummy:
- reports/pytest*.html
- reports/coverage/
- coverage/.current_coverage
- coverage_raw
- examples_artifact
# Run the tests for chimere (include downloading data)
......@@ -225,7 +282,11 @@ tests_chimere:
after_script:
- mkdir -p coverage
- xmlstarlet sel -t -v "//coverage/@line-rate" reports/coverage.xml > coverage/.current_coverage
- echo 'TOTAL COVERAGE:'" $(cat coverage/.current_coverage)%"
- calc() { awk "BEGIN{print $*}"; }
- percent_coverage=`cat coverage/.current_coverage`
- tot_coverage=`calc ${percent_coverage}*100`
- echo 'TOTAL COVERAGE:'" ${tot_coverage}%"
- mv coverage_raw/coverage/.coverage coverage_raw/.coverage.chimere
coverage: '/^TOTAL COVERAGE: ([0-9\.]+\%)$/'
artifacts:
when: always
......@@ -233,6 +294,7 @@ tests_chimere:
- reports/pytest*.html
- reports/coverage/
- coverage/.current_coverage
- coverage_raw
- examples_artifact
# Run the tests for flexpart (include downloading data)
......@@ -254,7 +316,11 @@ tests_flexpart:
after_script:
- mkdir -p coverage
- xmlstarlet sel -t -v "//coverage/@line-rate" reports/coverage.xml > coverage/.current_coverage
- echo 'TOTAL COVERAGE:'" $(cat coverage/.current_coverage)%"
- calc() { awk "BEGIN{print $*}"; }
- percent_coverage=`cat coverage/.current_coverage`
- tot_coverage=`calc ${percent_coverage}*100`
- echo 'TOTAL COVERAGE:'" ${tot_coverage}%"
- mv coverage_raw/coverage/.coverage coverage_raw/.coverage.flexpart
coverage: '/^TOTAL COVERAGE: ([0-9\.]+\%)$/'
artifacts:
when: always
......@@ -262,6 +328,7 @@ tests_flexpart:
- reports/pytest*.html
- reports/coverage/
- coverage/.current_coverage
- coverage_raw
- examples_artifact
# Run the tests for flexpart (include downloading data)
......@@ -283,7 +350,11 @@ tests_tm5:
after_script:
- mkdir -p coverage
- xmlstarlet sel -t -v "//coverage/@line-rate" reports/coverage.xml > coverage/.current_coverage
- echo 'TOTAL COVERAGE:'" $(cat coverage/.current_coverage)%"
- calc() { awk "BEGIN{print $*}"; }
- percent_coverage=`cat coverage/.current_coverage`
- tot_coverage=`calc ${percent_coverage}*100`
- echo 'TOTAL COVERAGE:'" ${tot_coverage}%"
- mv coverage_raw/coverage/.coverage coverage_raw/.coverage.tm5
coverage: '/^TOTAL COVERAGE: ([0-9\.]+\%)$/'
artifacts:
when: always
......@@ -291,6 +362,7 @@ tests_tm5:
- reports/pytest*.html
- reports/coverage/
- coverage/.current_coverage
- coverage_raw
- examples_artifact
#
......@@ -351,22 +423,7 @@ tests_tm5:
# paths:
# - reports/flake8/
# Check that coverage has not decreased
# improved_coverage:
# stage: coverage
# image: bashell/alpine-bash
# script:
# - MAX=`if [ -f coverage/.master_cov ] ; then cat coverage/.master_cov ; else echo 0.00 ; fi`
# - CURRENT=`cat coverage/.current_coverage`
# - echo $CURRENT
# - echo $MAX
# - if [[ $CURRENT < $MAX ]] ; then echo "Coverage decreased!!!"; exit 1 ; else echo "Coverage did not decrease, good job!"; exit 0 ; fi;
# artifacts:
# paths:
# - coverage/.current_coverage
# cache:
# paths:
# - coverage/.master_cov
# Store the new coverage value only for the master and devel branches
# store_coverage:
......
......@@ -10,13 +10,17 @@ config_file=`echo "$(cd "$(dirname "$config_file")"; pwd)/$(basename "$config_fi
pycif_root_dir=`echo "$(cd "$pycif_root_dir"; pwd)"`
# Fetch workdir to mount it in the container for saving outputs
workdir=`python3 -c \
"from pycif.utils.yml import ordered_load;
workdir=`python3 -W ignore -c \
"
from pycif.utils.yml import ordered_load;
with open('$config_file', 'r') as f:
    config = ordered_load(f)
print('AAAAAAAAAAAA')
print(config['workdir']); "`
workdir=`echo $workdir | awk -F "AAAAAAAAAAAA" '{print $2}'`
mkdir -p $workdir
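# Aside (not part of the commit): the sentinel-string trick above (printing
# 'AAAAAAAAAAAA' and splitting with awk) can be avoided by keeping stdout
# limited to the value itself and routing any chatter to stderr. A
# self-contained sketch of the idea, using a throwaway one-line config and a
# trivial line parse in place of pycif's ordered_load:

```shell
# Illustrative only: a demo config file standing in for the real yaml.
demo_config=$(mktemp)
echo "workdir: /tmp/pycif_demo_workdir" > "$demo_config"

# Warnings/debug go to stderr, so the command substitution captures
# nothing but the value itself -- no sentinel splitting needed.
workdir=$(python3 -c "
import sys
print('debug chatter goes to stderr', file=sys.stderr)
with open('$demo_config') as f:
    line = next(l for l in f if l.startswith('workdir:'))
print(line.split(':', 1)[1].strip())
")
echo "$workdir"
rm -f "$demo_config"
```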
# Run the configuration into the container and writing outputs to workdir
......@@ -24,9 +28,9 @@ docker run -it -v $config_file:/config/config.yml \
-v $pycif_root_dir:/tmp/CIF/ \
-v $workdir:/workdir/ \
$extra_volumns \
--entrypoint /bin/bash pycif/pycif-tm5:0.2
pycif/pycif-tm5:0.2
# --entrypoint /bin/bash pycif/pycif-tm5:0.2
# --entrypoint /bin/bash pycif/pycif-ubuntu:0.1
# pycif/pycif-tm5:latest
......@@ -8,7 +8,7 @@ extra_volumns="-v /home/aberchet/Projects/PYCIF_DATA_TEST/:/tmp/PYCIF_DATA_TEST/
docker run \
-v $pycif_root_dir:/tmp/CIF/ \
$extra_volumns \
-it --entrypoint /tmp/CIF/bin/tox_command.sh pycif/pycif-tm5:0.2
-it --entrypoint /tmp/CIF/bin/tox_command.sh pycif/pycif-ubuntu:0.1
......
......@@ -13,13 +13,13 @@ dirout=/home/chimereges/aberchet/debugchimere/pytest/
export PYCIF_DATATEST=/home/chimereges/PYCIF_TEST_DATA/
# Set up the platform to be run at LSCE, otherwise, use default docker parameters
#export PYCIF_PLATFORM=LSCE
export PYCIF_PLATFORM=LSCE
###
# select a subset of tests to run by using the tags ("mark")
#mark="(dummy and article and inversion and not adjtltest and not uncertainties) or (fwd and ref_config)"
mark="(dummy and article and inversion and not adjtltest and uncertainties) or (fwd and ref_config)"
#mark="(dummy and article and inversion and not adjtltest and uncertainties) or (fwd and ref_config)"
#mark="(fwd and ref_config)"
#mark="test_in_ci and dummy"
mark="test_in_ci and dummy"
#mark="test_in_ci and chimere"
#mark="chimere and argfrsd and fwd"
#mark="tm5 and test_in_ci and fwd"
......
......@@ -2,4 +2,4 @@
cd /tmp/CIF/
pip freeze
tox -e py38 -e coverage -- -m 'test_in_ci and tm5 and fwd'
tox -e py38 -e coverage -- -m 'test_in_ci and dummy and fwd'
......@@ -542,6 +542,7 @@ def build_rst_from_plugins(app):
".. toctree::",
" :maxdepth: 3",
"",
" ../plugin_description",
" ../dependencies"
] + [
" {}/index".format(Plugin.plugin_types[plg_type][0][1:])
......
......@@ -17,9 +17,9 @@ Requirements
To contribute to the documentation, the following software needs to be installed on your system:
* `Sphinx <https://www.sphinx-doc.org/en/master/>`__ : automatic generation of html pages from rst files
* `Graphviz <https://www.graphviz.org/>`__ : embed graphics in the documentation
* pycif : the CIF python package needs to be properly installed on your system to automatically build from modules
* `Sphinx <https://www.sphinx-doc.org/en/master/>`__: automatic generation of html pages from rst files
* `Graphviz <https://www.graphviz.org/>`__: embed graphics in the documentation
* :doc:`pycif<installation>`: the CIF python package needs to be properly installed on your system to automatically build from modules
Compiling commands
==================
......@@ -34,6 +34,13 @@ To do so, type:
Then, you can simply open the html files generated in :bash:`CIF_ROOT/docs/build/html/` in any web browser.
It is recommended to re-compile the documentation from scratch from time to time:
.. code-block:: bash
cd CIF_ROOT/docs
make clean
Adding new files in the documentation
-------------------------------------
......@@ -43,21 +50,22 @@ Tables of contents are generally placed in the :bash:`index.rst` file in each su
Therein, the file name should be added below the command :bash:`.. toctree::` alongside existing files
(the rst extension is not mandatory in that command).
When documenting a new plugin in pyCIF, please find a template of the corresponding documentation file below:
Automatic construction from codes
---------------------------------
.. code-block:: rst
The automatic documentation from codes is available :doc:`here </documentation/plugins/index>`.
#####################
My awesome plugin
#####################
.. automodule:: pycif.plugins.the_type_of_plugin.my_awesome_plugin
Class of plugins
================
Types of plugins can be described by a full description giving, e.g., information about the expected structure, functions, etc.
Examples are available in :bash:`pycif/plugins/modes/__init__.py`.
Any help to improve these descriptions is welcome.
This allows the documentation to be built automatically from the code of the plugin (see below).
Automatic construction from codes
---------------------------------
Individual plugins
==================
The documentation is designed to include information from the pycif plugins directly and automatically.
To allow this feature to work properly, the :bash:`__init__.py` file of the plugin being documented should include:
......@@ -72,10 +80,16 @@ To allow this feature to work properly, the :bash:`__init__.py` file of the plug
I can use rst syntax in there, which will automatically compile in the documentation
"""
the result will be inserted in a `Description` section of the corresponding entry of the documentation
the result will be inserted in a `Description` section of the corresponding entry of the documentation.
.. note::
Please note that if you need sub-sections in the description section,
you should use the dot :bash:`..........` to underline sub-titles.
* a dictionary :bash:`input_arguments`: this dictionary is both used to define default values and document arguments
that are needed in the configuration file; for each entry, you can add a :bash:`"doc"` entry, including a string describing the corresponding argument:
that are needed in the configuration file; for each entry, you can add a :bash:`"doc"` entry,
including a string describing the corresponding argument:
.. code-block:: python
......@@ -87,13 +101,47 @@ To allow this feature to work properly, the :bash:`__init__.py` file of the plug
}
}
Some plugins may require sub-paragraphs to be properly defined. This can be done as follows:
.. code-block:: python
input_arguments = {
"some_argument": {
"doc": "some description",
"default": None if mandatory else a default value,
"accepted": a python type that the argument should fit
},
"some_subparagraph": {
"doc": "Documentation of the paragraph",
"default": None (must be None here),
"optional": True or False,
"structure": {
"sub-arguments1": {
"doc": "some description",
"default": None if mandatory else a default value,
"accepted": a python type that the argument should fit
},
"sub-arguments2": {
"doc": "some description",
"default": None if mandatory else a default value,
"accepted": a python type that the argument should fit
},
"some_subsubparagraph": {
...
}
}
}
}
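To illustrate how such a nested dictionary might be consumed, here is a hypothetical sketch (:bash:`apply_defaults` is not a real pyCIF function) that walks an :bash:`input_arguments` tree and fills a user configuration with the declared defaults, recursing into :bash:`structure` for sub-paragraphs:

```python
# Hypothetical sketch -- 'apply_defaults' is NOT a real pyCIF function.
# It walks an input_arguments tree and fills a user configuration with
# the declared defaults, recursing into "structure" for sub-paragraphs
# (the 'optional' flag is ignored here for brevity).
def apply_defaults(config, input_arguments):
    for key, spec in input_arguments.items():
        if "structure" in spec:
            # Sub-paragraph: recurse into its own structure
            sub = config.setdefault(key, {})
            apply_defaults(sub, spec["structure"])
        elif key not in config:
            config[key] = spec.get("default")
    return config

input_arguments = {
    "some_argument": {"doc": "some description", "default": 10, "accepted": int},
    "some_subparagraph": {
        "doc": "Documentation of the paragraph",
        "default": None,
        "optional": True,
        "structure": {
            "sub-arguments1": {"doc": "some description", "default": "a", "accepted": str},
        },
    },
}

# 'some_argument' keeps the user value; the sub-argument falls back to its default
conf = apply_defaults({"some_argument": 3}, input_arguments)
```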
Proposing tutorials
-------------------
Tutorials are critical for new users and developers to start working with the CIF efficiently.
They should encompass as many situations as possible.
Experienced users/developers are encouraged to share their experience and explain how to tackle some specific difficulties they met in using the system.
Experienced users/developers are encouraged to share their experience
and explain how to tackle some specific difficulties they met in using the system.
Rules
=====
......
......@@ -13,8 +13,8 @@ This version does not match any "official/standard" version since it is maintain
misc
meteo
emissions
ci
bc
inicond
bouncond
timesteps
processes
verticalparam
......
......@@ -8,6 +8,7 @@ General input and output structures in the CIF
monitor
controlvect
obsvect
......
......@@ -21,6 +21,7 @@ The observations are ordered along an index, which is simply the ID number of ea
The basic information is:
++++++++++++++++++++++++++
:date:
date at which the observation begins.
The date must be a datetime object.
......
##############################
Plugins: what are they?
##############################
......@@ -3,7 +3,11 @@ Yaml configuration
pyCIF can be entirely set up with a
`Yaml <http://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html>`__
file such as in the example below. The basic idea of Yaml
file such as in the example below.
The Yaml configuration file structure is used to define, initialize, and run the building blocks of pyCIF:
:doc:`Plugins <plugins/index>`.
The basic idea of Yaml
syntax is that you can easily define a tree structure using ':' and
indentation, which would be automatically interpreted by Python.
......
......@@ -13,9 +13,20 @@ Welcome to the Community Inversion Framework!
The Community Inversion Framework is governed by the CeCILL-C license under French law.
Please consult the reference text `here <https://cecill.info/licences/Licence_CeCILL-C_V1-en.html>`__ for further detail.
The license grants full rights for the users to use, modify and redistribute the original version of the CIF,
conditional to the obligation to make their modifications available to the community and to properly acknowledge the original authors of the code.
conditional to the obligation to make their modifications available to the community
and to properly acknowledge the original authors of the code.
.. admonition:: Use, acknowledgement and citation
The Community Inversion Framework has been designed by a community of scientists who agreed to openly share their developments.
The maintenance and further developments are made possible through continued efforts by the core team of developers,
with the support of their respective funding agencies.
Any use of the Community Inversion Framework should then be fairly acknowledged.
Users are required to establish contact with `the team of developers <help@community-inversion.eu>`__ to determine an appropriate level of acknowledgement
through co-authorship and relevant citations.
.. _VERIFY: http://verify.lsce.ipsl.fr
.. toctree::
......
......@@ -2,4 +2,7 @@
Publications by developers
##########################
- Thanwerdas, J., Saunois, M., Berchet, A., Pison, I., Vaughn, B. H., Michel, S. E., and Bousquet, P.:
Variational inverse modelling within the Community Inversion Framework to assimilate δ13C(CH4) and CH4:
a case study with model LMDz-SACS, Geosci. Model Dev. Discuss. [preprint],
https://doi.org/10.5194/gmd-2021-106, in review, 2021.
......@@ -2,4 +2,9 @@
Publications by users
#####################
- Thanwerdas, J., Saunois, M., Pison, I., Hauglustaine, D., Berchet, A., Baier, B., Sweeney, C., and Bousquet, P.:
How do Cl concentrations matter for simulating CH4, δ13C(CH4) and estimating CH4 budget through
atmospheric inversions?, Atmos. Chem. Phys. Discuss. [preprint],
https://doi.org/10.5194/acp-2021-950, in review, 2021.
import numpy as np
import pandas as pd
import xarray as xr
from pycif.utils.datastores import dump
from pycif.utils.netcdf import readnc
def retrieve_aks(fin, index, list_var):
    # fin is the original input monitor
    # containing satellite-specific information: ak, qa0, etc.
    # fin is read to retrieve this info for data number `index`
    index = int(index)
    qa0, ak, pavg0, dryair = readnc(fin, list_var)
    return qa0[index].tolist(), ak[index].tolist(), pavg0[index].tolist(), dryair[index].tolist()
# WORKDIR of the forward simulation with all "raw" data
refdir = '/home/chimereicos/fwd_all_data/obsoperator/fwd_0000/'
# directory for the info file
dirinfo = 'chain/satellites/default_00001'
# directory for the output monitor file
dirmonit = 'obsvect/satellites/CH4/'
# files to use
monitor_in = '{}{}/monitor.nc'.format(refdir, dirmonit)
infos = '{}{}/infos_201901010000.nc'.format(refdir, dirinfo)
# basic data to provide in an input monitor
list_basic_cols = ['date', 'duration', 'station', 'network', 'parameter', 'lon', 'lat', 'obs', 'obserror']
# Monitor and info data to use
ds = dump.read_datastore(monitor_in)
dsinf = dump.read_datastore(infos, col2dump=['orig_index', 'file_aks'])
# paste ds and dsinf together to get full information for each obs
ds3 = pd.concat([ds, dsinf], axis=1)
# example filter: filter out obs outside the domain
ds3 = ds3.loc[~np.isnan(ds3['orig_index'])]
# example change: set obserror at a given value
ds3 = ds3.assign(obserror=20)
# generate a satellite input monitor with these new data:
# for each line in ds3, retrieve qa0, ak, pavg0, dryair from the right file
list_var = ['qa0', 'ak', 'pavg0', 'dryair']
ds4 = ds3.apply(lambda x: retrieve_aks(x.file_aks, x.orig_index, list_var), axis=1)
# reformat ds4 to put satellite specific info in a xarray
# keep index to keep track of selection - no impact on the CIF
ds5 = pd.DataFrame((item for item in ds4), columns=list_var, index=ds3.index)
fixed_nlevels = len(ds5['ak'].iloc[0])
list_tab = []
for var in list_var:
    list_tab.append(pd.DataFrame((item for item in ds5[var])).values)
# satellite specific info
ds_sat = xr.Dataset({'qa0': (['index', 'level'], list_tab[0]),
'ak': (['index', 'level'], list_tab[1]),
'pavg0': (['index', 'level_pressure'], list_tab[2]),
'dryair': (['index', 'level'], list_tab[3])},
coords={'index': ds5.index,
'level': range(fixed_nlevels),
'level_pressure': range(fixed_nlevels + 1)})
# basic data
basic_data = ds3[list_basic_cols].to_xarray()
# merge basic and satellite-specific data
alldata = xr.merge([ds_sat, basic_data])
# create new clean input monitor
alldata.to_netcdf('new_monitor.nc')
Check what has been done in the :bash:`workdir`:
===================================================
Checking the input files:
-------------------------
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Checking the output files:
--------------------------
XXX part about checking what is done by the pre-proc:
where are the files used by the simu, where are the outputs, etc
The observations plus their simulated equivalents are in
:bash:`$WORKDIR/obsoperator/fwd_0000/obsvect/concs/SPEC/monitor.nc`
with SPEC the species available in the observations.
In this simple case, check that the columns i, j, level, sim, tstep, tstep\_glo and dtstep are filled.
Elaborate the yaml for the CIF
==================================
.. important::
**How to use the** :doc:`cheat-sheet</documentation/dependencies>` **for plugins**
In the following, plugins have to be used and provided with specifications.
The arguments can be found in the documentation of each plugin.
To make access to the plugins easier, the cheat-sheet shows them sorted by **type**:
the various types appear left-most (e.g., chemistry, controlvect, fields).
For each type, available plugins are listed with the **name, version** of each displayed.
Note that stating the name and version of a plugin is mandatory, whereas stating its type is not always necessary.
.. contents::
:local:
Section for PyCIF parameters:
-----------------------------
.. container:: toggle
.. container:: header
Show/Hide Code
.. yml-block:: /yaml_examples/chimere/config_fwd_ref_chimere.yml
:keys: verbose, logfile, workdir, datei, datef
In this section of the yaml, it is possible to define :doc:`anchors</documentation/paths>` to be used in the rest of the file.
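For instance, an anchor defined once near the top of the file can be reused further down. The snippet below is purely illustrative (the section and path are made up, not taken from a real configuration); only :bash:`workdir`, :bash:`verbose` and :bash:`logfile` are among the keys shown above:

```yaml
# '&workdir' defines the anchor; '*workdir' reuses its value elsewhere
workdir: &workdir /home/user/my_inversion
verbose: 1
logfile: pycif.log
some_later_section:        # illustrative paragraph name
  output_dir: *workdir
```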