
Raven

Open source air quality system.
Its main purpose is to simplify the delivery of dataflows to the EEA.

PLEASE NOTE

This branch (master) is under development and will frequently change.
Please use the v1.0 tag to install Raven.

Features

  • Data import with csv files via GUI or API
  • View and compare data module
  • GUI for editing the following metadata:
    • Responsible authorities
    • Networks
    • Stations
    • Sampling Points
    • Processes
    • Samples
    • Observation Capabilities
    • Assessment Regimes
    • Attainments
    • Exceedance Description
  • Scaling, including calculated data (needed for NO-NO2-NOx)
  • Validation module
  • Verification module
  • XML-generation via download or API for the following dataflows:
    • information on zones and agglomerations (data flow B)
    • information on assessment methods (data flow D)
    • information on assessment regimes (data flow C)
    • information on attainment of environmental compliance (data flow G)
    • primary validated data (data flow E1a)
    • primary up-to-date data (data flow E2a). NB: only available through the API
  • User/group read/write access at the network level

Requirements

The following is required to install and run the application:

  • Python with pip
  • PostgreSQL with the PostGIS extension
  • Node.js with Yarn (for building the GUI)

Configuration

Add a file in the web folder called config.ini.
Look at config.example.ini to see what needs to be inside.

Database

Set up a PostgreSQL database with the PostGIS extension:
create a new database, connect to it, and import the schema.sql file from the db_scripts folder.
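
A minimal sketch of those steps with the PostgreSQL command-line tools (the database name raven is an assumption; adjust to your setup):

createdb raven                                  # create a new, empty database
psql -d raven -c "CREATE EXTENSION postgis;"    # enable the PostGIS extension
psql -d raven -f db_scripts/schema.sql          # import the schema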

Running the app

Install all required Python libraries with:

pip install -r requirements.txt

Navigate to the web/client folder and build the GUI with the following commands:

yarn install
yarn build

Run the Python server with the following command:
Linux

export FLASK_APP=web
flask run

Windows (PowerShell)

$env:FLASK_APP = "web" 
flask run

With cmd

set FLASK_APP=web
flask run

See https://flask.palletsprojects.com/en/1.1.x/quickstart/ for more information.

For detailed installation steps see: "Steps to install on Windows.md" file

Development

Make sure you have all JavaScript packages installed by running yarn in the web/client folder.
Run yarn build once so the Python server has the required files.
Start the web server as described in "Running the app".
Start the client-side server by running the command yarn serve.
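
Putting these steps together, a minimal development workflow might look like this (run from the repository root; the two servers run in separate terminals):

# terminal 1: build the client once, then start the Flask server
cd web/client && yarn install && yarn build && cd ../..
export FLASK_APP=web
flask run

# terminal 2: start the client-side server
cd web/client
yarn serve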

Linux web server

On the web server, start with a configuration for Apache2. Write a custom configuration file, /etc/apache2/sites-available/raven.conf.

The ServerName setting in raven.conf has to be registered as a CNAME, pointing to the A record for the web server, in the DNS servers.

Create SSL certificates for https://someurl.com, and put the key and crt files into /etc/apache2/ssl.

Create a DocumentRoot directory under /var/www/html and call it raven. Give it sufficient permissions for Apache2 to be able to read it.

Copy all of the files in the web folder there.

Create a raven.wsgi file to start the application; it sources the virtual environment and starts the application.

Next, create a virtual environment for Python and install all the modules from requirements.txt into it. Note that psycopg2 depends on libpq-dev and python-dev.
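
A hedged sketch of that last step (the location of the virtual environment is an assumption):

# system packages psycopg2 depends on
sudo apt-get install libpq-dev python-dev

# create the virtual environment and install the Python modules into it
python3 -m venv /var/www/html/raven/venv
/var/www/html/raven/venv/bin/pip install -r requirements.txt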

IIS

Please follow this article to install the application on IIS: http://netdot.co/2015/03/09/flask-on-iis/

Make sure the Python folder has IIS_IUSRS rights.
Install all dependencies from requirements.txt globally or with virtualenv.
To do this globally run the following line in the python script folder:
pip install -r requirements.txt
It is highly recommended that you set up your server with HTTPS.

Importing data

Importing data is done by uploading responsible_authorities, networks, stations, sampling_points, processes, samples, observing_capabilities and observations as csv files. See csv_examples.
Use curl, Postman or similar tools for this.
Set Content-Type to "multipart/form-data" and use the form name "csv".
Example of uploading networks with curl:

curl -X POST -u usr:psw --form "csv=@path/to/networks.csv" https://someurl.com/imports/networks

If you upload content with ids that already exist, it will be updated.

There are dependencies between the csv files, so the import order is important.
Start with settings and users, then import responsible_authorities, networks, stations, sampling_points, processes, samples, observing_capabilities and observations, in that order.
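
As a sketch, the csv imports can be scripted in that order with curl (usr:psw, the host and the file paths are placeholders):

for resource in responsible_authorities networks stations sampling_points \
                processes samples observing_capabilities observations; do
  # each file is uploaded as multipart/form-data under the form name "csv"
  curl -X POST -u usr:psw --form "csv=@csv/${resource}.csv" \
       "https://someurl.com/imports/${resource}"
done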

Security

This application uses basic authentication.
The default user:password combination is admin:admin.
It is recommended to change the default password immediately.
You can do this with the user management module on the Raven web site.

Settings

The namespace needs to be set for dataflows D and E.
An example namespace is "NO.NILU.AQD".
To change the namespace with a JSON payload, do the following:

Update namespace:

PAYLOAD {"namespace":"some_namespace"}
POST imports/settings
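
For example, with curl (the Content-Type header is an assumption; credentials, host and namespace are placeholders):

curl -X POST -u usr:psw \
     -H "Content-Type: application/json" \
     -d '{"namespace":"NO.NILU.AQD"}' \
     https://someurl.com/imports/settings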

Responsible authorities

Create a csv file with the following headers. All headers must be present:

id,name,organisation,address,locator,postcode,email,phone,website,is_responsible_reporter

id: A unique, self-created id. (Required)
name: The name of the reporter (Required)
organisation: The name of the reporting organisation (Required)
address: The address of the reporting organisation (Required)
locator: City of the reporting organisation (Required)
postcode: Post code of the reporting organisation (Required)
email: Email address of the reporter (Required)
phone: Phone number of the reporter (Required)
website: URL of the reporting organisation (Required)
is_responsible_reporter: One reporter needs to be set to true. This reporter will be used for the AQD_ReportingHeader. (Required)
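
For illustration, a single row matching these headers could look like this (all values are made up):

RA_1,Jane Doe,Example Environment Agency,Example Street 1,Oslo,0123,jane.doe@example.com,+4712345678,https://example.com,true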

Insert or update one or multiple reporters:

POST imports/responsible_authorities

Get all imported reporters:

GET imports/responsible_authorities

Delete a specific reporter:

DELETE imports/responsible_authorities/{id}

Networks

Create a csv file with the following headers. All headers must be present:

id,name,responsible_authority_id,organisational,begin_position,end_position,aggregation_timezone

id: The EEA code for your network, with prefix, e.g. NET_NO019A (Required)
name: A name for the network. Can be anything you want (Required)
responsible_authority_id: The id of the responsible authority. This id must already exist in the system (Required)
organisational: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/organisationallevel (Required)
begin_position: When the network started measuring, e.g. 2000-01-01T00:00:00+01:00 (Required)
end_position: When the network stopped measuring, e.g. 2000-01-01T00:00:00+01:00 (Optional)
aggregation_timezone: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/timezone (Required)

Insert or update one or multiple networks:

POST imports/networks

Get all imported networks:

GET imports/networks

Delete a specific network:

DELETE imports/networks/{id}

Stations

Create a csv file with the following headers. All headers must be present:

id,name,begin_position,end_position,network_id,municipality,eoi_code,national_station_code,latitude,longitude,epsg,altitude,mobile,area_classification,distance_junction,traffic_volume,heavy_duty_fraction,street_width,height_facades

id: The EEA code for your station, with prefix, e.g. STA_NO0083A (Required)
name: A name for the station. Can be anything you want (Required)
begin_position: When the station started measuring, e.g. 2000-01-01T00:00:00+01:00 (Required)
end_position: When the station stopped measuring, e.g. 2000-01-01T00:00:00+01:00 (Optional)
network_id: The id of the network. This id must already exist in the system (Required)
municipality: The name of the municipality where the station is located. (Optional)
eoi_code: The EEA code for your station. (Required)
national_station_code: The national/local id for the station. (Required)
latitude: The latitude of the station. (Required)
longitude: The longitude of the station. (Required)
epsg: The EPSG integer code of the station, e.g. 4326 (Required)
altitude: The altitude of the station. (Required)
mobile: Whether the station is mobile. True or false (Required)
area_classification: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/areaclassification (Required)
distance_junction: (Optional)
traffic_volume: (Optional)
heavy_duty_fraction: (Optional)
street_width: (Optional)
height_facades: (Optional)

Insert or update one or multiple stations:

POST imports/stations

Get all imported stations:

GET imports/stations

Delete a specific station:

DELETE imports/stations/{id}

Sampling Points

Create a csv file with the following headers. All headers must be present:

id,assessment_type,station_id,station_classification,main_emission_sources,traffic_emissions,heating_emissions,industrial_emissions,distance_source,begin_position,end_position,mobile

id: A unique id with prefix. Recommended format is stationid_componentid_uniqueid, e.g. SPO_NO0083A_9_1023 (Required)
assessment_type: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/assessmenttype (Required)
station_id: The id of the station. This id must already exist in the system (Required)
station_classification: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/stationclassification (Required)
main_emission_sources: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/emissionsource (Optional)
traffic_emissions: The traffic emissions. (Optional)
heating_emissions: The heating emissions. (Optional)
industrial_emissions: The industrial emissions. (Optional)
distance_source: The distance to the source. (Optional)
begin_position: When the station started measuring the pollutant, e.g. 2000-01-01T00:00:00+01:00 (Required)
end_position: When the station stopped measuring the pollutant, e.g. 2000-01-01T00:00:00+01:00 (Optional)
mobile: Whether the sampling point is mobile. True or false (Required)

Insert or update one or multiple sampling points:

POST imports/sampling_points

Get all imported sampling points:

GET imports/sampling_points

Delete a specific sampling point:

DELETE imports/sampling_points/{id}

Processes

Create a csv file with the following headers. All headers must be present:

id,responsible_authority_id,measurement_type,measurement_method,other_measurement_method,sampling_method,other_sampling_method,analytical_tech,other_analytical_tech,sampling_equipment,other_sampling_equipment,measurement_equipment,other_measurement_equipment,equiv_demonstration,equiv_demonstration_report,detection_limit,detection_limit_uom,uncertainty_estimate,documentation,qa_report,duration_number,duration_unit,cadence_number,cadence_unit

id: A unique id with prefix. Recommended format is stationid_componentid_uniqueid, e.g. SPP_NO0083A_9_1023 (Required)
responsible_authority_id: The id of the responsible authority. This id must already exist in the system (Required)
measurement_type: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/measurementtype (Required)
measurement_method: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/measurementmethod (Optional)
other_measurement_method: A text if measurement_method is set to other. (Optional)
sampling_method: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/samplingmethod (Optional)
other_sampling_method: A text if sampling_method is set to other. (Optional)
analytical_tech: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/analyticaltechnique (Optional)
other_analytical_tech: A text if analytical_tech is set to other. (Optional)
sampling_equipment: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/samplingequipment (Optional)
other_sampling_equipment: A text if sampling_equipment is set to other. (Optional)
measurement_equipment: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/measurementequipment (Optional)
other_measurement_equipment: A text if measurement_equipment is set to other. (Optional)
equiv_demonstration: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/equivalencedemonstrated (Optional)
equiv_demonstration_report: A text if equiv_demonstration is set to other. (Optional)
detection_limit: The detection limit (Optional)
detection_limit_uom: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/uom/concentration (Optional)
uncertainty_estimate: The uncertainty estimate. (Optional)
documentation: The documentation. (Optional)
qa_report: The QA report. (Optional)
duration_number: The duration number. (Required)
duration_unit: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/uom/time (Required)
cadence_number: The cadence number. (Required)
cadence_unit: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/uom/time (Required)

Insert or update one or multiple processes:

POST imports/processes

Get all imported processes:

GET imports/processes

Delete a specific process:

DELETE imports/processes/{id}

Samples

Create a csv file with the following headers. All headers must be present:

id,inlet_height,building_distance,kerb_distance

id: A unique id. Recommended format is stationid_componentid_uniqueid_counter, e.g. NO0083A_9_1023_1 (Required)
inlet_height: The inlet height in meters. (Required)
building_distance: The building distance in meters. (Optional)
kerb_distance: The kerb distance in meters. (Optional)
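
For illustration, a row matching these headers could look like this (all values are made up):

NO0083A_9_1023_1,3.5,10,2.5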

Insert or update one or multiple samples:

POST imports/samples

Get all imported samples:

GET imports/samples

Delete a specific sample:

DELETE imports/samples/{id}

Observing capabilities

Create a csv file with the following headers. All headers must be present:

id,begin_position,end_position,pollutant,sampling_point_id,sample_id,process_id,concentration,timestep

id: A unique id. Recommended format is stationid_componentid_uniqueid_counter, e.g. NO0083A_9_1023_1 (Required)
begin_position: When the station started measuring with this capability, e.g. 2000-01-01T00:00:00+01:00 (Required)
end_position: When the station stopped measuring with this capability, e.g. 2000-01-01T00:00:00+01:00 (Optional)
pollutant: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/pollutant (Required)
sampling_point_id: The id of the sampling point. This id must already exist in the system (Required)
sample_id: The id of the sample. This id must already exist in the system (Required)
process_id: The id of the process. This id must already exist in the system (Required)
concentration: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/uom/concentration (Required)
timestep: An EEA URL. See http://dd.eionet.europa.eu/vocabulary/aq/primaryObservation (Required)

Insert or update one or multiple observing capabilities:

POST imports/observing_capabilities

Get all imported observing capabilities:

GET imports/observing_capabilities

Delete a specific observing capability:

DELETE imports/observing_capabilities/{id}

Observations

Observations should be imported on a regular basis, e.g. hourly.
Create a csv file with the following headers. All headers must be present:

sampling_point_id,begin_position,end_position,value,verification_flag,validation_flag

sampling_point_id: The id of the sampling point. This id must already exist in the system (Required)
begin_position: When the measurement started, e.g. 2000-01-01T00:00:00+01:00 (Required)
end_position: When the measurement stopped, e.g. 2000-01-01T01:00:00+01:00 (Required)
value: The measured value. (Required)
verification_flag: The integer verification flag. See the notation in http://dd.eionet.europa.eu/vocabulary/aq/observationverification (Required)
validation_flag: The integer validation flag. See the notation in http://dd.eionet.europa.eu/vocabulary/aq/observationvalidity (Required)
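
For illustration, a row matching these headers could look like this (the values are made up; the flags must use notations from the vocabularies linked above):

SPO_NO0083A_9_1023,2000-01-01T00:00:00+01:00,2000-01-01T01:00:00+01:00,12.3,1,1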

Insert or update one or multiple observations:

POST imports/observations

Get all imported observations:

GET imports/observations

Delete a specific observation:

DELETE imports/observations/{sampling_point_id}/{end_position}
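
A hedged curl sketch of the delete call (host, id and timestamp are placeholders; the + in the timestamp is percent-encoded as %2B so it survives in the URL):

curl -X DELETE -u usr:psw \
     "https://someurl.com/imports/observations/SPO_NO0083A_9_1023/2000-01-01T01:00:00%2B01:00"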

Dataflows

Dataflows do not require authentication.

D

To get dataflow D, set the year and the timezone in the URL, e.g. year=2000&tz=1

GET dataflows/d?year={year}&tz={timezone:int}
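
For example, with curl (the host is a placeholder):

curl -o d.xml "https://someurl.com/dataflows/d?year=2000&tz=1"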

E1A

To get dataflow E1A, set the year and the timezone in the URL, e.g. year=2000&tz=1

GET dataflows/e1a?year={year}&tz={timezone:int}

E2A

To get dataflow E2A, set last_request in the URL, e.g. 2000-01-01T00:00:00
This will return all data imported/updated after the set date.

GET dataflows/e2a?last_request={YYYY-MM-DDTHH:mm:ss}
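
For example, with curl (the host is a placeholder):

curl -o e2a.xml "https://someurl.com/dataflows/e2a?last_request=2000-01-01T00:00:00"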