Commit c6aca4c8 authored by Antoine Berchet

Merge branch 'new_classes' into 'devel'

Grouping flux/meteo/field into datastream; cleaning plurals in class names;...

See merge request satinv/cif!165
parents 1762bd0a c904fd9c
@@ -172,20 +172,27 @@ article:
   image:
     name: pycif/pycif-ubuntu:0.1
     entrypoint: [""]
-  before_script:
-    # - apt-get update
-    - pip freeze
+  # before_script:
+  ## - apt-get update
+  # - pip freeze
   script:
-    - tox -e py38 -e coverage -- -m "(dummy and article and inversion and not adjtltest and not uncertainties) or (fwd and ref_config)"
-  after_script:
-    - mkdir -p coverage
-    - xmlstarlet sel -t -v "//coverage/@line-rate" reports/coverage.xml > coverage/.current_coverage
-    - calc() { awk "BEGIN{print $*}"; }
-    - percent_coverage=`cat coverage/.current_coverage`
-    - tot_coverage=`calc ${percent_coverage}*100`
-    - echo 'TOTAL COVERAGE:'" ${tot_coverage}%"
-    - mv coverage_raw/coverage/.coverage coverage_raw/.coverage.article
-  coverage: '/^TOTAL COVERAGE: ([0-9\.]+\%)$/'
+    - echo AAAAAAAAAAAAAAAAAAAAAAAAaaa
+    - echo ${CI_COMMIT_BRANCH}
+    - echo AAAAAAAAAAAAAAAAAAAAAAAAaaa
+    - echo ${CI_MERGE_REQUEST_TARGET_BRANCH_NAME}
+    - echo AAAAAAAAAAAAAAAAAAAAAAAAaaa
+    - if [ ${CI_COMMIT_BRANCH} == "new_classes" ]; then echo "TTTTTTTTTTTTTTTTTTTTTTTTT"; fi;
+
+    #- tox -e py38 -e coverage -- -m "(dummy and article and inversion and not adjtltest and not uncertainties) or (fwd and ref_config)"
+  # after_script:
+  #   - mkdir -p coverage
+  #   - xmlstarlet sel -t -v "//coverage/@line-rate" reports/coverage.xml > coverage/.current_coverage
+  #   - calc() { awk "BEGIN{print $*}"; }
+  #   - percent_coverage=`cat coverage/.current_coverage`
+  #   - tot_coverage=`calc ${percent_coverage}*100`
+  #   - echo 'TOTAL COVERAGE:'" ${tot_coverage}%"
+  #   - mv coverage_raw/coverage/.coverage coverage_raw/.coverage.article
+  # coverage: '/^TOTAL COVERAGE: ([0-9\.]+\%)$/'
   artifacts:
     when: always
     paths:
@@ -195,8 +202,8 @@ article:
       - coverage_raw
      - examples_artifact
      - figures_artifact
-  only:
-    - LSCE
+  # only:
+  #   - LSCE

 article_uncertainties:
   stage: test
@@ -297,6 +304,7 @@ tests_chimere:
 # Run the tests for flexpart (include downloading data)
 tests_flexpart:
   stage: test
+  retry: 2
   image:
     name: pycif/pycif-ubuntu:0.1
     entrypoint: [""]
......
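A side note on the debugging lines added to the article job above (an editorial sketch, not part of the merge request): the echo calls only reveal which predefined variables the pipeline actually sets. In GitLab CI, CI_COMMIT_BRANCH is set in branch pipelines but not in merge-request pipelines, where CI_MERGE_REQUEST_TARGET_BRANCH_NAME is set instead. Quoting the variable and using a single "=" keeps the branch test valid in plain sh and when the variable is unset; the branch names below are taken from this MR (source 'new_classes', target 'devel'):

    script:
      # Sketch only -- same check as above, written so it does not fail when the variable is empty
      - if [ "${CI_COMMIT_BRANCH}" = "new_classes" ]; then echo "branch pipeline on new_classes"; fi
      - if [ "${CI_MERGE_REQUEST_TARGET_BRANCH_NAME}" = "devel" ]; then echo "merge request targeting devel"; fi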
@@ -318,7 +318,6 @@ def process_pycif_keywords(app, what, obj_name, obj, options, lines):
     - default_values
     - mandatory_values
     """
-
     ref_lines = copy.deepcopy(lines)

     # Adding bash highlight by default
@@ -403,9 +402,30 @@ def process_pycif_keywords(app, what, obj_name, obj, options, lines):
             preftree = key_req.get("preftree", "")
             empty = key_req.get("empty", False)
             name = key_req.get("name", None)
-            version = key_req.get("version", "")
-            plg_type = Plugin.plugin_types[key_req.get("type", req)][1]
-            plg_path = Plugin.plugin_types[key_req.get("type", req)][0][1:]
+            version = key_req.get("version", None)
+            req_type = key_req.get("type", req)
+            req_subtype = key_req.get("subtype", "")
+
+            # Load required plugin to deal with types and sub-types
+            plg_req = Plugin.from_dict({
+                "plugin": {
+                    "name": name,
+                    "version": version,
+                    "type": req_type,
+                    "subtype": req_subtype
+                }
+            })
+            plg_req._load_plugin_type(req)
+            plg_type = \
+                Plugin.plugin_types[plg_req.plugin.type][1]
+            plg_path = \
+                Plugin.plugin_types[plg_req.plugin.type][0][1:]
+            plg_subtype = \
+                Plugin.plugin_subtypes[
+                    plg_req.plugin.type][
+                    plg_req.plugin.subtype][1:]
+
+            # String to dump
             newplg = key_req.get("newplg", False)
             towrite.extend((
                 " * - {}\n"
@@ -534,7 +554,7 @@ def build_rst_from_plugins(app):
     init_dir(plg_dir)

     # Initialize index
-    towrite = [
+    towrite_overall_index = [
         "##################",
         "Plugins in pyCIF",
         "##################",
@@ -544,12 +564,7 @@ def build_rst_from_plugins(app):
         "",
         "    ../plugin_description",
         "    ../dependencies"
-    ] + [
-        "    {}/index".format(Plugin.plugin_types[plg_type][0][1:])
-        for plg_type in Plugin.plugin_types
     ]
-    with open("{}/index.rst".format(plg_dir), "w") as f:
-        f.write("\n".join(towrite))

     # Loop on all plugin types
     for plg_type in Plugin.plugin_types:
@@ -559,7 +574,7 @@ def build_rst_from_plugins(app):
             Plugin.plugin_types[plg_type][0])
         class_module = pkgutil.importlib.import_module(class_path)
         local_class = getattr(class_module, class_name)

         # Create directory
         plg_type_dir = "{}/{}".format(
             plg_dir, Plugin.plugin_types[plg_type][0][1:])
@@ -567,29 +582,79 @@ def build_rst_from_plugins(app):
         # Loop over modules of this class
         package_path = "pycif.plugins{}".format(Plugin.plugin_types[plg_type][0])
+        if pkgutil.importlib.util.find_spec(package_path) is None:
+            continue
+
+        # Update overall index
+        towrite_overall_index.append(
+            "    {}/index".format(Plugin.plugin_types[plg_type][0][1:]))
+
+        # Loop over sub-types
         import_package = pkgutil.importlib.import_module(package_path)
-        package_index = []
-        for mod in pkgutil.walk_packages(import_package.__path__,
-                                         prefix=import_package.__name__ + "."):
-            if not mod.ispkg:
-                continue
-
-            loc_mod = pkgutil.importlib.import_module(mod.name)
-
-            # Register modules only when a name is given
-            if not hasattr(loc_mod, "_name"):
-                continue
-
-            # Create corresponding rst file
-            file_name = "{}/{}.rst".format(
-                plg_type_dir, loc_mod.__name__.split(".")[-1])
-
-            title = ":bash:`{}` / :bash:`{}`".format(
-                loc_mod._name, getattr(loc_mod, "_version", "std"))
-
-            if hasattr(loc_mod, "_fullname"):
-                title = "{} ({})".format(loc_mod._fullname, title)
-
+        for subtype in Plugin.plugin_subtypes[plg_type]:
+            local_subpackage = "{}{}".format(
+                package_path,
+                Plugin.plugin_subtypes[plg_type][subtype])
+            import_subpackage = pkgutil.importlib.import_module(local_subpackage)
+
+            # Create directory
+            plg_subtype_dir = "{}/{}".format(
+                plg_type_dir,
+                Plugin.plugin_subtypes[plg_type][subtype][1:])
+            init_dir(plg_subtype_dir)
+
+            # Loop over modules in the sub-type
+            package_subindex = []
+            for mod in pkgutil.walk_packages(import_subpackage.__path__,
+                                             prefix=import_subpackage.__name__ + "."):
+                if not mod.ispkg:
+                    continue
+
+                loc_mod = pkgutil.importlib.import_module(mod.name)
+
+                # Register modules only when a name is given
+                if not hasattr(loc_mod, "_name"):
+                    continue
+
+                # Create corresponding rst file
+                file_name = "{}/{}.rst".format(
+                    plg_subtype_dir, loc_mod.__name__.split(".")[-1])
+
+                title = ":bash:`{}` / :bash:`{}`".format(
+                    loc_mod._name, getattr(loc_mod, "_version", "std"))
+
+                if hasattr(loc_mod, "_fullname"):
+                    title = "{} ({})".format(loc_mod._fullname, title)
+
+                towrite = [
+                    ".. role:: bash(code)",
+                    "   :language: bash",
+                    "",
+                    "",
+                    len(title) * "#",
+                    title,
+                    len(title) * "#",
+                    "",
+                    ".. automodule:: {}".format(loc_mod.__name__)
+                ]
+                with open(file_name, "w") as f:
+                    f.write("\n".join(towrite))
+
+                # Append name for plugin type index
+                package_subindex.append(loc_mod.__name__.split(".")[-1])
+
+            # Sort names
+            package_subindex.sort()
+
+            # Write the plugin type index
+            if subtype == "":
+                continue
+
+            title = list(subtype)
+            title[0] = title[0].upper()
+            title = "".join(title)
+
             towrite = [
                 ".. role:: bash(code)",
                 "   :language: bash",
@@ -598,54 +663,86 @@ def build_rst_from_plugins(app):
                 len(title) * "#",
                 title,
                 len(title) * "#",
+                ""] + ([".. contents:: Contents", "   :local:", ""]
+                       if import_subpackage.__doc__ is not None else []) + [
+                "Available {}".format(title),
+                (len(title) + 11) * "=",
                 "",
-                ".. automodule:: {}".format(loc_mod.__name__)
-            ]
-
-            with open(file_name, "w") as f:
+                "The following :bash:`{}` of sub-type {} "
+                "are implemented in pyCIF so far:".format(
+                    Plugin.plugin_types[plg_type][0][1:],
+                    subtype),
+                "",
+                ".. toctree::",
+                "",
+            ] + [
+                "    {}".format(plg) for plg in package_subindex
+            ] + (
+                import_subpackage.__doc__.split('\n')
+                if import_subpackage.__doc__ is not None
+                else []
+            )
+            with open("{}/index.rst".format(plg_subtype_dir), "w") as f:
                 f.write("\n".join(towrite))

-            # Append name for plugin type index
-            package_index.append(loc_mod.__name__.split(".")[-1])
-
-        # Sort names
-        package_index.sort()
-
         # Write the plugin type index
         title = list(Plugin.plugin_types[plg_type][0][1:])
         title[0] = title[0].upper()
         title = "".join(title) + " (:bash:`{}`)".format(plg_type)
         towrite = [
             ".. role:: bash(code)",
             "   :language: bash",
             "",
             "",
             len(title) * "#",
             title,
             len(title) * "#",
             ""] + ([
             ".. contents:: Contents",
             "   :local:",
             ""
         ] if import_package.__doc__ is not None else []) + [
             "Available {}".format(title),
             (len(title) + 11) * "=",
-            "",
-            "The following :bash:`{}` are implemented in pyCIF so far:".format(
-                Plugin.plugin_types[plg_type][0][1:]),
-            "",
-            ".. toctree::",
-            "",
-        ] + [
-            "    {}".format(plg) for plg in package_index
-        ] + (
+            ""]
+
+        # If only one sub-type, just create an index of all available plugins
+        if len(Plugin.plugin_subtypes[plg_type]) == 1:
+            towrite.extend([
+                "The following :bash:`{}` are implemented in pyCIF so far:".format(
+                    Plugin.plugin_types[plg_type][0][1:]),
+                "",
+                ".. toctree::",
+                "",
+            ] + ["    {}".format(plg) for plg in package_subindex])
+
+        # If sub-types create an index pointing to sub-types and plugins
+        else:
+            towrite.extend([
+                "The following sub-types and :bash:`{}` are implemented "
+                "in pyCIF so far:".format(
+                    Plugin.plugin_types[plg_type][0][1:]),
+                "",
+                ".. toctree::",
+                "",
+            ] + ["    {}/index".format(Plugin.plugin_subtypes[plg_type][subtype][1:])
+                 for subtype in Plugin.plugin_subtypes[plg_type]
+            ])
+
+        # Append overall type description
+        towrite.extend(
             import_package.__doc__.split('\n')
             if import_package.__doc__ is not None
-            else []
-        )
+            else [])

+        # Dump the string to the rst file
         with open("{}/index.rst".format(plg_type_dir), "w") as f:
             f.write("\n".join(towrite))

+    # Dump the overall index
+    with open("{}/index.rst".format(plg_dir), "w") as f:
+        f.write("\n".join(towrite_overall_index))
+
     # Generate available list
     s = StringIO()
     Plugin.print_registered(print_rst=True, print_requirement=True, stream=s)
......
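For readers less familiar with the plugin discovery used in build_rst_from_plugins above, the following standalone sketch isolates the pattern the new code applies to each sub-type: walk the sub-packages of a package with pkgutil.walk_packages and keep only those defining a module-level _name attribute. This is a simplified illustration, not pyCIF's actual function; the function name and the package path in the usage comment are made up for the example.

    import importlib
    import pkgutil


    def registered_plugin_modules(package_path):
        """Return names of sub-packages of package_path that define a _name."""
        package = importlib.import_module(package_path)
        found = []
        for mod in pkgutil.walk_packages(package.__path__,
                                         prefix=package.__name__ + "."):
            # Only sub-packages are considered, as in the loop above
            if not mod.ispkg:
                continue
            loc_mod = importlib.import_module(mod.name)
            # Register modules only when a name is given
            if hasattr(loc_mod, "_name"):
                found.append(loc_mod.__name__.split(".")[-1])
        return sorted(found)

    # Hypothetical usage (any importable package with sub-packages works):
    # print(registered_plugin_modules("pycif.plugins.datastreams.fluxes"))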
 #############################
 Developments around CHIMERE
-############################
+#############################
 .. role:: bash(code)
     :language: bash
......
@@ -27,7 +27,7 @@ Example: for a CTM with emitted species,
 .. code-block:: python

     emis = {
-        ("fluxes", s): dict_surface
+        ("flux", s): dict_surface
         for s in model.chemistry.emis_species.attributes
     }
......
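A toy illustration of the key renaming above (editor's example, not pyCIF code): with two emitted species, the comprehension now produces the keys ("flux", "CO2") and ("flux", "CH4") where it previously produced ("fluxes", ...) keys.

    # Stand-ins for model.chemistry.emis_species.attributes and dict_surface
    species = ["CO2", "CH4"]
    dict_surface = {}

    emis = {("flux", s): dict_surface for s in species}
    print(sorted(emis))  # [('flux', 'CH4'), ('flux', 'CO2')]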
 .. role:: bash(code)
     :language: bash

-.. currentmodule:: pycif.plugins.fields.bc_plugin_template
+.. currentmodule:: pycif.plugins.datastreams.fields.bc_plugin_template

 Run pycif with this yaml: the new plugin will simply perform what is in the template i.e. print some instructions on what you have to do where. The following codes must be developped in the places matching the instructions - and checked. To check that each new code works as intended, run the CIF with the yaml using the new plugin and with the same yaml but using a known plugin with print statements. The scripts have to be developped in this order:
......
@@ -2,9 +2,6 @@
     :language: bash

 Have a yaml file ready with a simulation that works with known plugins.
-For the :doc:`obsoperator</documentation/plugins/obsoperators/index>`,
-choose the optional argument :bash:`onlyinit` so that only the inputs are computed
-XXXX CHECK THIS OPTION ACTUALLY DOES THISXXXX, not the whole simulation.

 .. code-block:: yaml
......
@@ -47,7 +47,7 @@ XXXXXXX what about the input arguments? Ils demandent une partie dediee!?XXXXXXX
 Template plugin for BCs
 ########################

-.. automodule:: pycif.plugins.fields.bc_plugin_template
+.. automodule:: pycif.plugins.datastreams.fields.bc_plugin_template

 c) add the reference to the rst file in docs/source/documentation/plugins/fields/index.rst:
......
@@ -5,33 +5,120 @@ How to add a new type of flux data to be processed by the CIF into a model's inp
 .. role:: bash(code)
     :language: bash

-0. .. include:: ../newBCdata/knownplugin.rst
-
-1. In directory :bash:`plugins/fluxes`, copy the directory containing the template for a flux plugin :bash:`flux_plugin_template` in the directory for your new plugin.
-
-.. include:: ../newBCdata/register.rst
-
-2. Modify the yaml file to use the new plugin: the minimum input arguments are :bash:`dir`, :bash:`file`, :bash:`varname` and :bash:`unit_conversion`. The default space and time interpolations will be applied (see XXXX doc sur premiere simu directe avec exmeple yaml quand mise a jourXXXXX).
-
-.. code-block:: yaml
-
-    components:
-      fluxes:
-        plugin:
-          name: fluxes
-          version: template
-          type: fluxes
-        dir: dir_with_original_files/
-        file: file_with_new_fluxes_to_use_as_inputs
-        varname: NAMEORIG
-        unit_conversion:
-          scale: 1.
+Pre-requisites
+================
+
+Before starting to implement a new flux plugin, you must have:
+
+- a yaml file ready with a simulation that works with known plugins.
+- a folder where the data you need to implement is stored
+- basic information about the data you need to implement (licensing, format, etc.)
+
+We help you below to navigate through different documentation pages to implement your plugin.
+The main reference pages are :doc:`the datastream documentation page </documentation/plugins/datastreams/index>`
+and :doc:`the flux template documentation page</documentation/plugins/datastreams/fluxes/flux_plugin_template>`.
+
+Switch from working fluxes to the reference template
+=====================================================
+
+The :bash:`datavect` paragraph of your working yaml should look like that:
+
+.. container:: toggle
+
+    .. container:: header
+
+        Show/Hide Code
+
+    .. code-block:: yaml
+        :linenos:
+
+        datavect:
+          plugin:
+            name: standard
+            version: std
+          components:
+            flux:
+              parameters:
+                CO2:
+                  plugin:
+                    name: CHIMERE
+                    type: flux
+                    version: AEMISSIONS
+                  file_freq: 120H
+                  dir: some_dir
+                  file: some_file
+
+1. follow the initial steps in :doc:`the flux template documentation page</documentation/plugins/datastreams/fluxes/flux_plugin_template>`
+   to initialize your new plugin and register it.
+   It includes copying the template folder to a new path and changing the variables
+   :bash:`_name`, :bash:`_fullname` and :bash:`_version` in the file :bash:`__init__.py`
+
+2. update your Yaml to use the template flux (renamed with your preference). It should now look like that:
+
+.. container:: toggle
+
+    .. container:: header
+
+        Show/Hide Code
+
+    .. code-block:: yaml
+        :linenos:
+
+        datavect:
+          plugin:
+            name: standard
+            version: std
+          components:
+            flux:
+              parameters:
+                CO2:
+                  plugin:
+                    name: your_new_name
+                    type: flux
+                    version: your_version
+
+3. Test running again your test case. It should generate fluxes with random values
+
+#####################################
+
+0. .. include:: ../newBCdata/knownplugin.rst
+
+1. In directory :bash:`plugins/fluxes`, copy the directory containing the template
+   for a flux plugin :bash:`flux_plugin_template` in the directory for your new plugin.
+
+.. include:: ../newBCdata/register.rst
+
+2. Modify the yaml file to use the new plugin: the minimum input arguments are
+   :bash:`dir`, :bash:`file`, :bash:`varname` and :bash:`unit_conversion`.
+   The default space and time interpolations will be applied
+   (see XXXX doc sur premiere simu directe avec exmeple yaml quand mise a jourXXXXX).
+
+.. code-block:: yaml
+
+    components:
+      fluxes:
+        plugin:
+          name: fluxes
+          version: template
+          type: fluxes
+        dir: dir_with_original_files/
+        file: file_with_new_fluxes_to_use_as_inputs
+        varname: NAMEORIG
+        unit_conversion:
+          scale: 1.

 3. .. include:: ../newBCdata/devplugin.rst

 XXXXXXX what about the input arguements? Ils demandent une partie dediee!?XXXXXXXXXX

 4. Document the new plugin:
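For step 1 of the new flux procedure above (copying flux_plugin_template and changing _name, _fullname and _version in __init__.py), a minimal sketch of the registration attributes is given below. The attribute names come from this merge request (the documentation builder reads _name, _version and _fullname, and the yaml example uses your_new_name / your_version); the folder name is a placeholder and everything else copied from the template is omitted.

    # plugins/fluxes/my_new_fluxes/__init__.py  (hypothetical folder name)
    # Only the registration attributes from step 1 are shown; keep the functions
    # copied from flux_plugin_template and fill them in afterwards.

    _name = "your_new_name"         # must match "name:" in the yaml plugin block
    _version = "your_version"       # must match "version:" in the yaml plugin block
    _fullname = "My new flux data"  # used as the title of the generated documentation page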