Commit 01615143 authored by Antoine Berchet

Tuto on new flux; orchidee fluxes

parent a9c58026
@@ -27,7 +27,7 @@ The :bash:`datavect` paragraph of your working yaml should look like that:
.. container:: header
Show/Hide Code Example with CHIMERE
.. code-block:: yaml
:linenos:
@@ -49,11 +49,13 @@ The :bash:`datavect` paragraph of your working yaml should look like that:
file: some_file
Do the following to make it work with the template flux:
1. follow the initial steps in :doc:`the flux template documentation page</documentation/plugins/datastreams/fluxes/flux_plugin_template>`
to initialize your new plugin and register it.
It includes copying the template folder to a new path and changing the variables
:bash:`_name`, :bash:`_fullname` and :bash:`_version` in the file :bash:`__init__.py`
2. update your Yaml to use the template flux (renamed with your preference). It should now look like that:
@@ -95,95 +97,256 @@ Include the following information:
- data format (temporal and horizontal resolution, names and shape of the data files)
- any specific treatment that prevents the plugin from working with other types of files.
Build and check the documentation
=================================
Before going further, please compile the documentation and check that your new plugin
appears in the list of datastreams plugins :doc:`here</documentation/plugins/datastreams/index>`.
Also check that the documentation of your new plugin is satisfactory.
To compile the documentation, use the command:
.. code-block:: bash
cd $CIF_root/docs
make html
Further details can be found :doc:`here</contrib_doc>`.
Updating functions and data to implement your flux data
=======================================================
Your new plugin needs several functions to be implemented in order to work.
fetch
------
The :bash:`fetch` function determines what files and corresponding dates are available
for running the present case.
The structure of the :bash:`fetch` function is shown here: :ref:`datastreams-fetch-funtions`.
Please read all the explanations therein carefully before implementing your case.
By default, the :bash:`fetch` function will use the arguments :bash:`dir` and :bash:`file` in your yaml.
Make sure to update your yaml accordingly:
.. container:: toggle
.. container:: header
Show/Hide Code
.. code-block:: yaml
:linenos:
datavect:
plugin:
name: standard
version: std
components:
flux:
parameters:
CO2:
plugin:
name: your_new_name
type: flux
version: your_version
dir: path_to_data
file: file_name
Depending on how you implement your data stream, extra parameters may be needed.
Please document them on-the-fly in the :bash:`input_arguments` variable in :bash:`__init__.py`.
One classical parameter is :bash:`file_freq`, which gives the frequency of the input files
(independently of the simulation period to be computed).
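As an illustration of how such a frequency parameter relates input files to the simulation window (a sketch only, not pycif's actual implementation; the monthly frequency :bash:`1MS` is an assumption for the example):

```python
import pandas as pd

# Hypothetical set-up: monthly input files (file_freq = "1MS") and a
# simulation running from 2010-01-15 to 2010-03-15.  The files needed are
# those whose period intersects the simulation window.
simu_start = pd.Timestamp("2010-01-15")
simu_end = pd.Timestamp("2010-03-15")
file_freq = "1MS"  # one file per month, starting on the 1st

# Round the simulation start down to the beginning of its file period,
# then list the start dates of all files up to the simulation end
period_starts = pd.date_range(
    simu_start.to_period("M").to_timestamp(), simu_end, freq=file_freq
)
print(list(period_starts))  # files for 2010-01, 2010-02 and 2010-03
```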
Once implemented, re-run your test case.
You can check that everything went as expected as follows:
1. in the folder :bash:`$workdir/datavect/flux/your_species/`, links to the original data files should be initialized
2. the list of dates and files can be checked against what is expected. To do so, use the option
:bash:`dump_debug` in the :bash:`datavect` paragraph in the yaml
(see details :doc:`here</documentation/plugins/datavects/standard>`).
It will dump the list of dates and files in a file named :bash:`$workdir/datavect/flux.your_species.txt`
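For orientation, the kind of structure :bash:`fetch` must build can be sketched in a self-contained way. This is a sketch only: the actual template signature takes more arguments (e.g. :bash:`input_dates`), and the 3-hourly files grouped in 12-hour periods as well as the file-name pattern are assumptions for the example:

```python
import datetime

def sketch_fetch(ref_dir, ref_file, period_starts, file_freq_hours=3, nfiles=4):
    """Illustrative only: build the list_files / list_dates dictionaries
    returned by fetch, for files each holding `file_freq_hours` of data,
    grouped by period start date.  The file name pattern in `ref_file`
    is an assumption for the example."""
    list_files, list_dates = {}, {}
    step = datetime.timedelta(hours=file_freq_hours)
    for start in period_starts:
        intervals, files = [], []
        for k in range(nfiles):
            d0 = start + k * step
            intervals.append([d0, d0 + step])
            files.append(ref_dir + ref_file.format(date=d0))
        list_dates[start] = intervals
        list_files[start] = files
    return list_files, list_dates

list_files, list_dates = sketch_fetch(
    "/path/to/data/", "fluxes_{date:%Y%m%d_%H%M}.nc",
    [datetime.datetime(2019, 1, 1, 0), datetime.datetime(2019, 1, 1, 12)],
)
print(list_files[datetime.datetime(2019, 1, 1, 0)][0])
# /path/to/data/fluxes_20190101_0000.nc
```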
get_domain (optional)
---------------------
A datastream plugin needs to be associated with a domain to be processed in pyCIF.
There are three valid approaches to associate a valid domain with your flux data.
The first two are given for information; the third one is
the one to be preferred in most cases:
1. fetch it from another object in the set-up. This is relevant when the domain
should be exactly the same as the one of another Plugin in your configuration.
For instance, if you are implementing a flux plugin dedicated to a model,
you will expect it to have exactly the same domain as the model.
To ensure that your flux plugin fetches the domain from the present set-up,
it is possible to define a so-called :doc:`requirement </documentation/dependencies>`.
This is done by adding the following lines to the :bash:`__init__.py` file:
.. code-block:: python
requirements = {
"domain": {"name": "CHIMERE", "version": "std", "empty": False},
}
In that case, the flux plugin will expect a CHIMERE domain to be defined; otherwise, pycif
will raise an exception.
2. directly define the domain in the yaml as a sub-paragraph.
This will look like that:
.. container:: toggle
.. container:: header
Show/Hide Code
.. code-block:: yaml
:linenos:
datavect:
plugin:
name: standard
version: std
components:
flux:
parameters:
CO2:
plugin:
name: your_new_name
type: flux
version: your_version
dir: path_to_data
file: file_name
domain:
plugin:
name: my_domain_name
version: my_domain_version
some_extra_parameters: grub
Such an approach is not necessarily recommended, as it relies on users configuring
their Yaml file correctly for the case to work.
.. warning::
If this path is chosen, please document the usage very carefully.
3. Using the function :bash:`get_domain` to define the domain dynamically, based
on input files, or with fixed parameters.
The structure of the :bash:`get_domain` function is shown here: :ref:`datastreams-get_domain-funtions`.
Please read all the explanations therein carefully before implementing your case.
Once implemented, re-run your test case.
The implementation of the correct domain will have an impact on the native resolution
used to randomly generate fluxes (remember that the :bash:`read` function still
comes from the template and thus generates random fluxes for the corresponding domain).
Therefore, pycif will automatically reproject the fluxes from the implemented domain to
your model's domain.
One can check that the implemented domain is correct by:
1. checking that the flux files generated for your model seem to follow the native resolution of
your data
2. dumping intermediate data during the computation of pycif.
To do so, activate the option :bash:`save_debug` in the :bash:`obsoperator`:
.. container:: toggle
.. container:: header
Show/Hide Code
.. code-block:: yaml
:linenos:
obsoperator:
plugin:
name: standard
version: std
save_debug: True
When activated, this option dumps intermediate states in :bash:`$workdir/obsoperator/$run_id/transform_debug/`.
One has to find the ID of the :bash:`regrid` transform reprojecting the native fluxes to your model's domain.
This information can be found in :bash:`$workdir/obsoperator/transform_description.txt`.
Once the transform ID is retrieved, go to the folder :bash:`$workdir/obsoperator/$run_id/transform_debug/$transform_ID`.
The directory tree below that folder can be complex; go to the deepest level.
You should find two netCDF files, one for the inputs and one for the outputs.
In the inputs, you should find the native resolution; in the outputs, the projected one.
read
------
The :bash:`read` function simply reads data for a list of dates and files as deduced from the
:bash:`fetch` function.
The expected structure for the :bash:`read` function is shown here: :ref:`datastreams-read-funtions`.
This function is rather straightforward to implement.
Be sure to have the following structure in outputs:
.. code-block:: python

    output_data.shape = (ndate, nlevel, nlat, nlon)
    output_dates = start_date_of_each_interval

    return xr.DataArray(
        output_data,
        coords={"time": output_dates},
        dims=("time", "lev", "lat", "lon"),
    )
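To make the expected output concrete, here is a self-contained sketch of a :bash:`read` function that, like the template, generates random fluxes. It is illustrative only: the real template signature takes more arguments, and the grid size 45x60 is an assumption for the example:

```python
import datetime

import numpy as np
import xarray as xr

def read(name, varnames, dates, files, nlat=45, nlon=60, **kwargs):
    """Illustrative sketch mimicking the template behaviour: generate one
    random 2D flux field per time interval and return the xr.DataArray
    structure expected by pycif.  `dates` is the list of [start, end]
    intervals built by `fetch` for one period; `files` is unused here."""
    ndate = len(dates)
    output_data = np.random.normal(size=(ndate, 1, nlat, nlon))
    output_dates = [d[0] for d in dates]  # start date of each interval
    return xr.DataArray(
        output_data,
        coords={"time": output_dates},
        dims=("time", "lev", "lat", "lon"),
    )

intervals = [
    [datetime.datetime(2019, 1, 1, h), datetime.datetime(2019, 1, 1, h + 3)]
    for h in range(0, 12, 3)
]
flx = read("CO2", "flx", intervals, files=None)
print(flx.shape)  # (4, 1, 45, 60)
```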
Similarly to the :bash:`get_domain` function, it is possible to check that
the :bash:`read` function is properly implemented by using the option :bash:`save_debug`
and checking that the input fluxes are correct.
.. warning::
It is likely that the fluxes in your native data stream don't have the same unit
as the one expected by your model.
To convert the unit properly, add the :bash:`unit_conversion` paragraph to your Yaml file:
.. container:: toggle
.. container:: header
Show/Hide Code
.. code-block:: yaml
:linenos:
datavect:
plugin:
name: standard
version: std
components:
flux:
parameters:
CO2:
plugin:
name: your_new_name
type: flux
version: your_version
dir: path_to_data
file: file_name
unit_conversion:
scale: ${scaling factor to apply}
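For instance, the scale factor can be computed once and written into the yaml. The units below (native fluxes in gC/m2/day, model expecting kgC/m2/s) are hypothetical and only illustrate the arithmetic:

```python
# Hypothetical units: native fluxes in gC/m2/day, model expecting kgC/m2/s
g_to_kg = 1e-3
day_to_s = 1.0 / 86400.0
scale = g_to_kg * day_to_s  # value to put in unit_conversion/scale
print(scale)
```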
write (optional)
----------------
This function is optional and is necessary only when called by other plugins.
One probably does not need to bother about it at the moment.
@@ -59,13 +59,15 @@ Download the CIF image
**********************
The CIF Docker image is stored on `Docker Hub <https://hub.docker.com/>`__.
The image is publicly available and can be downloaded using the command:
.. code-block:: bash
docker pull pycif/pycif-ubuntu:0.1
Other images are available for specific usage of the CIF, especially for some CTMs.
Corresponding images can be found `here <https://hub.docker.com/u/pycif>`__.
********************************
Running the CIF inside the image
********************************
@@ -9,6 +9,7 @@ General input and output structures in the CIF
monitor
controlvect
obsvect
others
###################
Other input data
###################
.. role:: bash(code)
:language: bash
.. role:: raw-math(raw)
:format: latex html
Inputs other than observations need to be provided to pyCIF.
These include, e.g., meteorological fields constraining CTMs, flux data to define the control vector, etc.
All these data can be provided with their native format without further modification specific to pyCIF.
However, pyCIF cannot "guess" a data format.
Therefore, the input data format must be one of those already implemented in the CIF.
The list of available data streams integrated in pyCIF is given :doc:`here </documentation/plugins/datastreams/index>`.
If one needs to use data whose format is not included in pyCIF, there are two options:
- manually modify the input files to fit one of the existing input formats
- integrate the new format to pyCIF. To do so, please follow the :doc:`tutorial here </devtutos/newfluxdata/newfluxdata>`
@@ -5,7 +5,7 @@ pyCIF can be entirely set up with a
`Yaml <http://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html>`__
file such as in the example below.
The Yaml configuration file structure is used to define, initialize and run the building blocks of pyCIF:
:doc:`Plugins <plugins/index>`.
The basic idea of Yaml
syntax is that you can easily define a tree structure using ':' and
@@ -49,3 +49,4 @@ other plugins and dependencies.
Further examples about the Yaml syntax can be found
`here <http://sweetohm.net/article/introduction-yaml.en.html>`__.
The initialized dictionary is then loaded as :doc:`Plugins <plugins/index>` for later use by pyCIF.
@@ -25,31 +25,57 @@ Required parameters, dependencies and functions
Functions
+++++++++
A given :bash:`datastream` Plugin requires the following functions to work
properly within pycif:
- fetch
- get_domain (optional)
- read
- write (optional)
Please find below details on these functions.
.. _datastreams-fetch-funtions:
fetch
---------
The :bash:`fetch` function determines what files and corresponding dates are available
for running the present case.
The structure of the :bash:`fetch` function is shown below:
.. currentmodule:: pycif.plugins.datastreams.fluxes.flux_plugin_template
.. autofunction:: fetch
:noindex:
|
|
|
.. _datastreams-get_domain-funtions:
get_domain (optional)
----------------------
.. autofunction:: get_domain
:noindex:
|
|
|
.. _datastreams-read-funtions:
read
------
.. autofunction:: read
:noindex:
|
|
|
write (optional)
----------------
@@ -29,7 +29,7 @@ To integrate your own flux plugin, please follow the steps:
1) copy the :bash:`flux_plugin_template` directory into one with a name of your
preference
2) Start writing the documentation of your plugin by replacing the present
:bash:`docstring` in the file :bash:`__init__.py`. Use rst syntax since this docstring
will be automatically parsed for publication in the documentation
3) Change the variables :bash:`_name`, :bash:`_version` (default is :bash:`std`) if
not specified, and :bash:`_fullname` (optional, is used as a title when
@@ -22,7 +22,14 @@ def fetch(ref_dir, ref_file, input_dates, target_dir,
starts on 2010-01-15 to 2010-03-15, the output should at least include the input
data dates for 2010-01, 2010-02 and 2010-03.
Note:
The three main arguments (:bash:`ref_dir`, :bash:`ref_file` and :bash:`file_freq`) can either be
defined as :bash:`dir`, :bash:`file` and :bash:`file_freq` respectively
in the relevant datavect/flux/my_spec paragraph in the yaml,
or, if not available, they are fetched from the corresponding components/flux paragraph.
If one of the three needs to have a default value, it can be
integrated in the input_arguments dictionary in :bash:`__init__.py`.
Args:
ref_dir (str): the path to the input files
ref_file (str): format of the input files
@@ -36,11 +43,71 @@ def fetch(ref_dir, ref_file, input_dates, target_dir,
:bash:`datavect/components/fluxes` in the configuration yaml
Return:
(dict, dict): returns two dictionaries: list_files and list_dates
list_files: for each date that begins a period, a list containing
the names of the files that are available for the dates within this period
list_dates: for each date that begins a period, a list containing
the date intervals (in the form of a list of two dates each)
matching the files listed in list_files
Note:
The output format can be illustrated as follows (the dates are shown as strings,
but datetime.datetime objects are expected):
.. code-block:: python
list_dates = {
"2019-01-01 00:00":
[["2019-01-01 00:00", "2019-01-01 03:00"],
["2019-01-01 03:00", "2019-01-01 06:00"],
["2019-01-01 06:00", "2019-01-01 09:00"],
["2019-01-01 09:00", "2019-01-01 12:00"]],
"2019-01-01 12:00":
[["2019-01-01 12:00", "2019-01-01 15:00"],
["2019-01-01 15:00", "2019-01-01 18:00"],
["2019-01-01 18:00", "2019-01-01 21:00"],
["2019-01-01 21:00", "2019-01-02 00:00"]]
}
list_files = {
"2019-01-01 00:00":
["path_to_file_for_20190101_0000",
"path_to_file_for_20190101_0300",
"path_to_file_for_20190101_0600",
"path_to_file_for_20190101_0900"],
"2019-01-01 12:00":
["path_to_file_for_20190101_1200",
"path_to_file_for_20190101_1500",
"path_to_file_for_20190101_1800",
"path_to_file_for_20190101_2100"]