Commit 01615143 authored by Antoine Berchet's avatar Antoine Berchet

Tuto on new flux; orchidee fluxes

parent a9c58026
......@@ -27,7 +27,7 @@ The :bash:`datavect` paragraph of your working yaml should look like that:
.. container:: header
Show/Hide Code
Example with CHIMERE
.. code-block:: yaml
:linenos:
......@@ -49,11 +49,13 @@ The :bash:`datavect` paragraph of your working yaml should look like that:
file: some_file
Do the following to make it work with the template flux:
1. follow the initial steps in :doc:`the flux template documentation page</documentation/plugins/datastreams/fluxes/flux_plugin_template>`
to initialize your new plugin and register it.
It includes copying the template folder to a new path and changing the variables
:bash:`_name`, :bash:`_fullname` and :bash:`_version` in the file :bash:`__init__.py`
2. update your Yaml to use the template flux (renamed with your preference). It should now look like that:
......@@ -95,95 +97,256 @@ Include the following information:
- data format (temporal and horizontal resolution, names and shape of the data files)
- any specific treatment that prevents the plugin from working with another type of files.
Build and check the documentation
=================================
Before going further, please compile the documentation and check that your new plugin
appears in the list of datastreams plugins :doc:`here</documentation/plugins/datastreams/index>`.
Also check that the documentation of your new plugin is satisfactory.
To compile the documentation, use the command:
.. code-block:: bash
cd $CIF_root/docs
make html
Further details can be found :doc:`here</contrib_doc>`.
Updating functions and data to implement your flux data
=======================================================
Your new plugin needs several functions to be implemented in order to work.
fetch
------
The :bash:`fetch` function determines what files and corresponding dates are available
for running the present case.
The structure of the :bash:`fetch` function is shown here: :ref:`datastreams-fetch-funtions`.
Please read carefully all explanations therein before starting to implement your case.
By default, the :bash:`fetch` function will use the arguments :bash:`dir` and :bash:`file` in your yaml.
Make sure to update your yaml accordingly:
.. container:: toggle
.. container:: header
Show/Hide Code
.. code-block:: yaml
:linenos:
datavect:
plugin:
name: standard
version: std
components:
flux:
parameters:
CO2:
plugin:
name: your_new_name
type: flux
version: your_version
dir: path_to_data
file: file_name
Depending on how you implement your data stream, extra parameters may be needed.
Please document them as you add them in the :bash:`input_arguments` variable in :bash:`__init__.py`.
One common parameter is :bash:`file_freq`, which gives the frequency of the input files
(independently of the simulation period to be computed).
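For illustration, the corresponding declaration in :bash:`__init__.py` could look like the sketch below;
the structure follows the flux plugin template, and the default value chosen for :bash:`file_freq` is only an assumption:

.. code-block:: python

    # Sketch only: adapt the keys and defaults to your own data stream
    input_arguments = {
        "file_freq": {
            "doc": "Frequency of the input files, e.g., one file per month",
            "default": "1MS",  # assumed default: monthly files (pandas frequency string)
            "accepted": str,
        },
    }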
Once implemented, re-run your test case.
You can check that everything went as expected as follows:
1. in the folder :bash:`$workdir/datavect/flux/your_species/`, links to the original data files should have been initialized
2. check that the list of dates and files is initialized as expected. To do so, use the option
:bash:`dump_debug` in the :bash:`datavect` paragraph of the yaml
(see details :doc:`here</documentation/plugins/datavects/standard>`).
It will dump the list of dates and files to a file named :bash:`$workdir/datavect/flux.your_species.txt`.
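To fix ideas, a minimal :bash:`fetch` for, e.g., monthly input files could look like the sketch below.
The signature follows the template shown in :ref:`datastreams-fetch-funtions` (extra keyword arguments omitted),
but the file naming, the monthly frequency and the handling of :bash:`input_dates` are assumptions to adapt to your data:

.. code-block:: python

    import os

    import pandas as pd


    def fetch(ref_dir, ref_file, input_dates, target_dir, tracer=None, **kwargs):
        """Minimal sketch for monthly input files whose names follow ref_file,
        e.g. "fluxes_%Y%m.nc" (hypothetical naming convention)."""
        # Overall simulation window; input_dates is assumed here to be a
        # dictionary of date lists (check the template for its exact structure)
        all_dates = [pd.Timestamp(d) for dates in input_dates.values() for d in dates]
        datei, datef = min(all_dates), max(all_dates)

        list_files = {}
        list_dates = {}
        for month in pd.date_range(datei.replace(day=1), datef, freq="MS"):
            month_end = month + pd.offsets.MonthBegin(1)
            file_path = os.path.join(ref_dir, month.strftime(ref_file))

            # One key per monthly file, covering the full month
            list_files[month.to_pydatetime()] = [file_path]
            list_dates[month.to_pydatetime()] = [
                [month.to_pydatetime(), month_end.to_pydatetime()]
            ]

            # Link the original file into the working directory
            target = os.path.join(target_dir, os.path.basename(file_path))
            if os.path.isfile(file_path) and not os.path.lexists(target):
                os.symlink(file_path, target)

        return list_files, list_dates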
get_domain (optional)
---------------------
A datastream plugin needs to be associated with a domain to be processed in pyCIF.
There are three approaches to associate a valid domain with your flux data.
The first two are given for information, but the third one is
the one to be preferred in most cases:
1. fetch it from another object in the set-up. This is relevant when the domain
should be exactly the same as the one of another Plugin in your configuration.
For instance, if you are implementing a flux plugin dedicated to a model,
you will expect it to have exactly the same domain as the model.
To ensure that your flux plugin fetches the domain from the present set-up,
it is possible to define a so-called :doc:`requirement </documentation/dependencies>`.
This is done by adding the following lines to the :bash:`__init__.py` file:
.. code-block:: python
requirements = {
"domain": {"name": "CHIMERE", "version": "std", "empty": False},
}
In that case, the flux plugin will expect a CHIMERE domain to be defined; otherwise, pycif
will raise an exception.
2. directly define the domain in the yaml as a sub-paragraph.
This will look like that:
.. container:: toggle
.. container:: header
Show/Hide Code
.. code-block:: yaml
:linenos:
datavect:
plugin:
name: standard
version: std
components:
flux:
parameters:
CO2:
plugin:
name: your_new_name
type: flux
version: your_version
dir: path_to_data
file: file_name
domain:
plugin:
name: my_domain_name
version: my_domain_version
some_extra_parameters: grub
Such an approach is not necessarily recommended, as it relies on the user properly
configuring his/her Yaml file for the case to work.
.. warning::
If this path is chosen, please document the usage very carefully.
3. Using the function :bash:`get_domain` to define the domain dynamically, based
on input files, or with fixed parameters.
The structure of the :bash:`get_domain` function is shown here: :ref:`datastreams-get_domain-funtions`.
Please read carefully all explanations therein before starting to implement your case;
a sketch of the corner computation is given below.
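For instance, if the native files only provide the coordinates of the grid-cell centers on a regular grid,
the corners can be deduced as in the sketch below (plain numpy; the file path and variable names are hypothetical,
and the construction of the final domain object should follow the template):

.. code-block:: python

    import numpy as np
    import xarray as xr

    # Hypothetical reference file and variable names: adapt to your own data
    ds = xr.open_dataset("path_to_data/some_reference_file.nc")
    lon_centers = np.sort(ds["longitude"].values)  # shape (nlon,), increasing order
    lat_centers = np.sort(ds["latitude"].values)   # shape (nlat,), increasing order

    # Deduce the corners from the centers, assuming a regular, non-overlapping grid:
    # corners are half a grid step away from the centers
    dlon = np.diff(lon_centers).mean()
    dlat = np.diff(lat_centers).mean()
    lon_corners = np.append(lon_centers - dlon / 2, lon_centers[-1] + dlon / 2)
    lat_corners = np.append(lat_centers - dlat / 2, lat_centers[-1] + dlat / 2)

    # lon_corners/lat_corners have one more element than the centers;
    # use them to build the domain object as done in the template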
Once implemented, re-run your test case.
The implementation of the correct domain will have an impact on the native resolution
used to randomly generate fluxes (remember that the :bash:`read` function still
comes from the template and thus generates random fluxes for the corresponding domain).
Therefore, pycif will automatically reproject the fluxes from the implemented domain to
your model's domain.
One can check that the implemented domain is correct by:
1. checking that the flux files generated for your model seem to follow the native resolution of
your data
2. dumping intermediate data during the computation of pycif.
To do so, activate the option :bash:`save_debug` in the :bash:`obsoperator`:
.. container:: toggle
.. container:: header
Show/Hide Code
.. code-block:: yaml
:linenos:
obsoperator:
plugin:
name: standard
version: std
save_debug: True
When activated, this option dumps intermediate states in :bash:`$workdir/obsoperator/$run_id/transform_debug/`.
One has to find the ID of the :bash:`regrid` transform reprojecting the native fluxes to your model's domain.
This information can be found in :bash:`$workdir/obsoperator/transform_description.txt`.
Once the transform ID is retrieved, go to the folder :bash:`$workdir/obsoperator/$run_id/transform_debug/$transform_ID`.
The directory tree below that folder can be complex; go to the deepest level.
You should find two netCDF files, one for the inputs, one for the outputs.
In the inputs, you should find the native resolution; in the outputs, the projected one.
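For instance, one can compare the grid dimensions of the two files with a few lines of Python
(the :bash:`*.nc` pattern is an assumption; adapt it to the files actually present):

.. code-block:: python

    import glob

    import xarray as xr

    # Run this from the deepest folder of the transform debug tree
    for path in sorted(glob.glob("*.nc")):
        with xr.open_dataset(path) as ds:
            # grid dimensions: native resolution for inputs, model resolution for outputs
            print(path, dict(ds.sizes))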
read
-----
The :bash:`read` function simply reads data for a list of dates and files as deduced from the
:bash:`fetch` function.
The expected structure for the :bash:`read` function is shown here: :ref:`datastreams-read-funtions`.
This function is rather straightforward to implement.
Be sure to have the following structure in outputs:
.. code-block:: python
output_data.shape = (ndate, nlevel, nlat, nlon)
output_dates = start_date_of_each_interval
return xr.DataArray(
output_data,
coords={"time": output_dates},
dims=("time", "lev", "lat", "lon"),
)
Similarly to the :bash:`get_domain` function, it is possible to check that
the :bash:`read` function is properly implemented by using the option :bash:`save_debug`
and checking that the input fluxes are correct.
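As an illustration, the body of a typical :bash:`read` reading netCDF files with :bash:`xarray`
could follow the sketch below; the helper name, its arguments and the variable name are hypothetical,
and the actual function must keep the signature of the template:

.. code-block:: python

    import numpy as np
    import xarray as xr


    def read_sketch(files, dates, varname="emis"):
        """Hypothetical helper: "files" is a list of paths and "dates" the list of
        date intervals returned by fetch for the requested period; "varname" is an
        assumed variable name in the netCDF files."""
        data = []
        output_dates = []
        for path, (date_start, _) in zip(files, dates):
            with xr.open_dataset(path) as ds:
                data.append(ds[varname].values)  # assumed shape per file: (nlat, nlon)
            output_dates.append(date_start)      # keep the start date of each interval

        # Stack along time and add a dummy vertical axis to match (time, lev, lat, lon)
        output_data = np.stack(data, axis=0)[:, np.newaxis, :, :]

        return xr.DataArray(
            output_data,
            coords={"time": output_dates},
            dims=("time", "lev", "lat", "lon"),
        )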
.. warning::
It is likely that the fluxes in your native data stream don't have the same unit
as the one expected by your model.
To convert the unit properly, add the :bash:`unit_conversion` paragraph to your Yaml file:
.. container:: toggle
.. container:: header
Show/Hide Code
.. code-block:: yaml
:linenos:
datavect:
plugin:
name: standard
version: std
components:
flux:
parameters:
CO2:
plugin:
name: your_new_name
type: flux
version: your_version
dir: path_to_data
file: file_name
unit_conversion:
scale: ${scaling factor to apply}
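For instance, if the native fluxes are expressed per hour while the model expects fluxes per second
(the unit pair here is only an example), the scaling factor is simply 1/3600:

.. code-block:: yaml

    unit_conversion:
        scale: 2.78e-4  # = 1/3600; e.g., converting kg/m2/h to kg/m2/s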
write (optional)
-----------------
This function is optional and is needed only when other plugins call it.
One probably does not need to bother about it at this stage...
......@@ -59,13 +59,15 @@ Download the CIF image
**********************
The CIF Docker image is stored on `Docker Hub <https://hub.docker.com/>`__.
The image is publicly available and can be downloaded using the command:
.. code-block:: bash
docker pull pycif/pycif-ubuntu:0.1
Other images are available for specific usage of the CIF, especially for some CTMs.
Corresponding images can be found `here <https://hub.docker.com/u/pycif>`__.
********************************
Running the CIF inside the image
********************************
......
......@@ -9,6 +9,7 @@ General input and output structures in the CIF
monitor
controlvect
obsvect
others
......
###################
Other input data
###################
.. role:: bash(code)
:language: bash
.. role:: raw-math(raw)
:format: latex html
Inputs other than observations need to be provided to pyCIF.
Such inputs include, e.g., meteorological fields constraining CTMs, flux data to define the control vector, etc.
All these data can be provided in their native format without further modification specific to pyCIF.
However, pyCIF cannot "guess" a data format.
Therefore, the input data format must be one of those already implemented in the CIF.
The list of available data streams integrated in pyCIF is given :doc:`here </documentation/plugins/datastreams/index>`.
If one needs to use data whose format is not included in pyCIF, there are two options:
- manually modify the input files to fit one of the existing input formats
- integrate the new format into pyCIF. To do so, please follow the :doc:`tutorial here </devtutos/newfluxdata/newfluxdata>`
......@@ -5,7 +5,7 @@ pyCIF can be entirely set up with a
`Yaml <http://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html>`__
file such as in the example below.
The Yaml configuration file structure is used to define, initialize and run the building blocks of pyCIF:
:doc:`Plugins <plugins/index>`.
The basic idea of Yaml
syntax is that you can easily define a tree structure using ':' and
......@@ -49,3 +49,4 @@ other plugins and dependencies.
Further examples about the Yaml syntax can be found
`here <http://sweetohm.net/article/introduction-yaml.en.html>`__.
The initialized dictionary is then loaded as :doc:`Plugins <plugins/index>` for later use by pyCIF.
......@@ -25,31 +25,57 @@ Required parameters, dependencies and functions
Functions
+++++++++
A given :bash:`datastream` Plugin requires the following functions to work
properly within pycif:
- fetch
- get_domain (optional)
- read
- write (optional)
Please find below details on these functions.
.. _datastreams-fetch-funtions:
fetch
---------
The :bash:`fetch` function determines what files and corresponding dates are available
for running the present case.
The structure of the :bash:`fetch` function is shown below:
.. currentmodule:: pycif.plugins.datastreams.fluxes.flux_plugin_template
.. autofunction:: fetch
:noindex:
|
|
|
.. _datastreams-get_domain-funtions:
get_domain (optional)
----------------------
.. autofunction:: get_domain
:noindex:
|
|
|
.. _datastreams-read-funtions:
read
------
.. autofunction:: read
:noindex:
|
|
|
write (optional)
----------------
......
......@@ -29,7 +29,7 @@ To integrate your own flux plugin, please follow the steps:
1) copy the :bash:`flux_plugin_template` directory into one with a name of your
preference
2) Start writing the documentation of your plugin by replacing the present
:bash:`docstring` in the file :bash:`__init__.py`. Use rst syntax since this docstring
will be automatically parsed for publication in the documentation
3) Change the variables :bash:`_name`, :bash:`_version` (default is :bash:`std` if
not specified), and :bash:`_fullname` (optional, is used as a title when
......
......@@ -22,7 +22,14 @@ def fetch(ref_dir, ref_file, input_dates, target_dir,
starts on 2010-01-15 to 2010-03-15, the output should at least include the input
data dates for 2010-01, 2010-02 and 2010-03.
Note:
The three main arguments (:bash:`ref_dir`, :bash:`ref_file` and :bash:`file_freq`) can either be
defined as :bash:`dir`, :bash:`file` and :bash:`file_freq` respectively
in the relevant datavect/flux/my_spec paragraph in the yaml,
or, if not available, they are fetched from the corresponding components/flux paragraph.
If one of the three needs to have a default value, it can be
integrated in the :bash:`input_arguments` dictionary in :bash:`__init__.py`.
Args:
ref_dir (str): the path to the input files
ref_file (str): format of the input files
......@@ -36,11 +43,71 @@ def fetch(ref_dir, ref_file, input_dates, target_dir,
:bash:`datavect/components/fluxes` in the configuration yaml
Return:
(dict, dict): returns two dictionaries: list_files and list_dates
list_files: for each date that begins a period, a list containing
the names of the files that are available for the dates within this period
list_dates: for each date that begins a period, a list containing
the date intervals (in the form of a list of two dates each)
matching the files listed in list_files
Note:
The output format can be illustrated as follows (the dates are shown as strings,
but datetime.datetime objects are expected):
.. code-block:: python
list_dates = {
"2019-01-01 00:00":
[["2019-01-01 00:00", "2019-01-01 03:00"],
["2019-01-01 03:00", "2019-01-01 06:00"],
["2019-01-01 06:00", "2019-01-01 09:00"],
["2019-01-01 09:00", "2019-01-01 12:00"]],
"2019-01-01 12:00":
[["2019-01-01 12:00", "2019-01-01 15:00"],
["2019-01-01 15:00", "2019-01-01 18:00"],
["2019-01-01 18:00", "2019-01-01 21:00"],
["2019-01-01 21:00", "2019-01-02 00:00"]]
}
list_files = {
"2019-01-01 00:00":
["path_to_file_for_20190101_0000",
"path_to_file_for_20190101_0300",
"path_to_file_for_20190101_0600",
"path_to_file_for_20190101_0900"],
"2019-01-01 12:00":
["path_to_file_for_20190101_1200",
"path_to_file_for_20190101_1500",
"path_to_file_for_20190101_1800",
"path_to_file_for_20190101_2100"]
}
In the example above, the native temporal resolution is 3-hourly,
and files are available every 12 hours.
Note:
There is no specific rule for sorting dates and files into separate keys of
the output dictionaries. The usual rule is to have one dictionary key
per input file, therein unfolding all available dates in the corresponding file;
with that rule, each key of :bash:`list_files` simply repeats the same file
for every date it contains.
But any combination of the keys is valid as long as, for each key, the list of dates
corresponds exactly to the list of files with the same key.
Hence, it is acceptable to have, e.g., one key with all dates and files,
or one key per date even though there are several dates per file.
The balance between the number of keys and the size of each key should be
determined by the standard usage expected with the data.
Overall, a good practice is to have one key in the input data for each
sub-simulation in which it will be used afterwards by the model.
For instance, CHIMERE emission files store hourly emissions for CHIMERE
sub-simulations, typically 24-hour long. It thus makes sense to have
one key per 24-hour period and, in each key, the hourly emissions.
"""
debug("Fetching files with the following information: \n"
......
......@@ -6,8 +6,26 @@ from .....utils.classes.domains import Domain
def get_domain(ref_dir, ref_file, input_dates, target_dir, tracer=None):
"""Read information to define the data horizontal and, if relevant, vertical domain
"""Read information to define the data horizontal and, if relevant, vertical domain.
There are several possible approaches:
- read a reference file that is necessary in :bash:`ref_dir`
- read a file among the available data files
- read a file specified in the yaml,
by using the corresponding variable name; for instance, tracer.my_file
From the chosen file, obtain the coordinates of the centers and/or the corners
of the grid cells. If corners or centers are not available, deduce them from
the available information.
Warning:
The grid must not be overlapping: e.g., for a global grid,
the last grid cell must not be the same as the first one.
Warning:
Order the centers and corners latitudes and longitudes in increasing order.
Args:
ref_dir (str): the path to the input files
ref_file (str): format of the input files
......@@ -17,32 +35,16 @@ def get_domain(ref_dir, ref_file, input_dates, target_dir, tracer=None):
:bash:`datavect/components/fluxes/parameters/my_species` in the
configuration yaml; can be needed to fetch extra information
given by the user
Return:
Domain: a domain class object, with the definition of the center grid
cells coordinates, as well as corners
"""
# Some explanations
debug(
'Here, read the horizontal grid, e.g., longitudes and latitudes.\n'
'Several possibilities: \n'
' - read a reference file\n'
' - read a file among the available data files\n'
' - read a file specified in the yaml, \n'
' by using the corresponding variable name; for instance, tracer.my_file\n'
'From the chosen file, obtain the coordinates of the centers and/or the corners '
'of the grid cells. If corners or centers are not available, deduce them from '
'the available information.\n'
'\n'
'WARNING: the grid must not be overlapping: '
'e.g for a global grid, the last grid cell must not be the same as the first'
'\n'