In that case, the flux will expect a CHIMERE domain to be defined; otherwise, pycif
will raise an exception.
2. directly define the domain in the yaml file as a sub-paragraph.
   It will look like this:
.. container:: toggle

    .. container:: header

        Show/Hide Code

    .. code-block:: yaml
        :linenos:

        datavect:
          plugin:
            name: standard
            version: std
          components:
            flux:
              parameters:
                CO2:
                  plugin:
                    name: your_new_name
                    type: flux
                    version: your_version
                  dir: path_to_data
                  file: file_name
                  domain:
                    plugin:
                      name: my_domain_name
                      version: my_domain_version
                    some_extra_parameters: grub
Such an approach is not necessarily recommended, as it forces the user to configure
his/her yaml file properly for the case to work.
.. warning::

    If this path is chosen, please document the usage very carefully.
3. use the function :bash:`get_domain` to define the domain dynamically, based
   on input files or with fixed parameters.

The structure of the :bash:`get_domain` function is shown here: :ref:`datastreams-get_domain-funtions`.
Please read carefully all the explanations therein before starting to implement your case.
Once implemented, re-run your test case.
The implementation of the correct domain will have an impact on the native resolution
used to randomly generate fluxes (remember that the :bash:`read` function still
comes from the template and thus generates random fluxes for the corresponding domain).
Therefore, pycif will automatically reproject the fluxes from the implemented domain to
your model's domain.
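For orientation only, the snippet below sketches, with purely hypothetical file and
variable names, the kind of information a :bash:`get_domain` implementation typically
gathers for a regular lat/lon grid (cell centres and corners of the native data); the
actual function must follow the structure documented in :ref:`datastreams-get_domain-funtions`.

.. code-block:: python

    import numpy as np
    import xarray as xr


    def build_native_grid(ref_file):
        """Illustrative sketch only: extract the native grid of the input data.

        This is not the pycif API; it only shows the typical content of a
        get_domain implementation for a regular lat/lon grid.
        """
        with xr.open_dataset(ref_file) as ds:
            lat = ds["latitude"].values   # hypothetical variable names
            lon = ds["longitude"].values

        # Grid-cell corners, assuming a regular grid
        dlat = np.diff(lat).mean()
        dlon = np.diff(lon).mean()
        lat_corners = np.append(lat - dlat / 2, lat[-1] + dlat / 2)
        lon_corners = np.append(lon - dlon / 2, lon[-1] + dlon / 2)

        return {
            "lat": lat, "lon": lon,
            "lat_corners": lat_corners, "lon_corners": lon_corners,
        }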
One can check that the implemented domain is correct by:

1. checking that the flux files generated for your model seem to follow the native resolution of
   your data;
2. dumping intermediate data during the computation of pycif.
   To do so, activate the option :bash:`save_debug` in the :bash:`obsoperator`:
.. container:: toggle

    .. container:: header

        Show/Hide Code

    .. code-block:: yaml
        :linenos:

        obsoperator:
          plugin:
            name: standard
            version: std
          save_debug: True
When activated, this option dumps intermediate states in :bash:`$workdir/obsoperator/$run_id/transform_debug/`.
One has to find the ID of the :bash:`regrid` transform reprojecting the native fluxes to your model's domain.
This information can be found in :bash:`$workdir/obsoperator/transform_description.txt`.
Once the transform ID is retrieved, go to the folder :bash:`$workdir/obsoperator/$run_id/transform_debug/$transform_ID`.
The directory tree below that folder can be complex; go to the deepest level.
You should find two netCDF files, one for the inputs and one for the outputs.
In the inputs, you should find the native resolution; in the outputs, the projected one.
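As an illustration (the file names below are hypothetical; use the ones actually present
in the folder), the two files can be compared quickly with xarray:

.. code-block:: python

    import xarray as xr

    # Hypothetical file names; use the actual input/output files found in the folder
    ds_in = xr.open_dataset("transform_inputs.nc")
    ds_out = xr.open_dataset("transform_outputs.nc")

    # The input should be on the native grid of your data stream,
    # the output on your model's grid
    print(ds_in.dims)
    print(ds_out.dims)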
0. .. include:: ../newBCdata/knownplugin.rst

1. In the directory :bash:`plugins/fluxes`, copy the directory containing the template
   for a flux plugin, :bash:`flux_plugin_template`, into the directory for your new plugin.
-----
.. include:: ../newBCdata/register.rst
2. Modify the yaml file to use the new plugin: the minimum input arguments are
   :bash:`dir`, :bash:`file`, :bash:`varname` and :bash:`unit_conversion`.
   The default space and time interpolations will be applied
   (see XXXX doc on the first forward simulation with an example yaml, once updated XXXXX).

   .. code-block:: yaml

     components:
       fluxes:
         plugin:
           name: fluxes
           version: template
           type: fluxes
         dir: dir_with_original_files/
         file: file_with_new_fluxes_to_use_as_inputs
         varname: NAMEORIG
         unit_conversion:
           scale: 1.

3. .. include:: ../newBCdata/devplugin.rst

The :bash:`read` function simply reads data for a list of dates and files as deduced from the
:bash:`fetch` function.
The expected structure for the :bash:`read` function is shown here: :ref:`datastreams-read-funtions`.
This function is rather straightforward to implement.
Be sure to have the following structure in outputs:

.. code-block:: python

    output_data.shape = (ndate, nlevel, nlat, nlon)
    output_dates = start_date_of_each_interval

    return xr.DataArray(
        output_data,
        coords={"time": output_dates},
        dims=("time", "lev", "lat", "lon"),
    )
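To make this more concrete, here is a minimal, self-contained sketch, assuming the native
fluxes are stored in netCDF files with a time dimension; the function name, its arguments
and the variable handling are illustrative only, and the exact signature to implement is
the one given in :ref:`datastreams-read-funtions`.

.. code-block:: python

    import xarray as xr


    def read_native_fluxes(files, varname):
        """Illustrative sketch: read native fluxes and return them with the
        (time, lev, lat, lon) structure expected by pycif."""
        data = xr.concat([xr.open_dataset(f)[varname] for f in files], dim="time")

        # Add a dummy vertical dimension if the native data are only (time, lat, lon)
        if "lev" not in data.dims:
            data = data.expand_dims("lev", axis=1)

        return xr.DataArray(
            data.values,
            coords={"time": data["time"].values},
            dims=("time", "lev", "lat", "lon"),
        )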
Similarly to the :bash:`get_domain` function, it is possible to check that
the :bash:`read` function is properly implemented by using the option :bash:`save_debug`
and checking that the input fluxes are correct.
XXXXXXX what about the input arguments? They require a dedicated section!? XXXXXXXXXX
.. warning::

    It is likely that the fluxes in your native data stream don't have the same unit
    as the one expected by your model.
    To convert the unit properly, add the :bash:`unit_conversion` paragraph to your yaml
    file, as in the sketch below.
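As an illustration only (the scale factor is hypothetical and must be adapted to the units
of your data and of your model):

.. code-block:: yaml

    unit_conversion:
      scale: 2.77778e-4  # illustrative value, e.g. converting from kg/m2/h to kg/m2/s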
The :bash:`write` function is optional and is necessary only when called by other plugins.
One probably does not need to bother about it at the moment...

4. Document the new plugin:

   a) .. include:: ../newBCdata/readme.rst

   b) create the rst file that contains the automatic documentation in
      docs/source/documentation/plugins/fluxes/. Please provide it with a self-explaining name.
      Example for the template: file fluxes_template.rst reads

   c) add the reference to the rst file in docs/source/documentation/plugins/fluxes/index.rst:
.. code-block:: rest

    #####################
    Fluxes :bash:`fluxes`
    #####################

    Available Fluxes
    =========================

    The following :bash:`fluxes` are implemented in pyCIF:

    .. toctree::
        :maxdepth: 3

        fluxes_template
        dummy_nc
        dummy_txt
        edgar_v5
        flexpart
        chimere
        lmdz_bin
        lmdz_sflx
   d) build the documentation (:bash:`make html` in docs/) and check that the link to the new plugin appears in the documentation at file:///your_path/cif/docs/build/html/documentation/plugins/index.html and that the section "doc" of the input arguments is correctly displayed at file:///your_path/cif/docs/build/html/documentation/plugins/fluxes/the_new_plugin.html
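For reference, building the documentation boils down to the following commands, run from
the root of the repository (assuming the documentation dependencies are installed):

.. code-block:: bash

    cd docs/
    make html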