
Developer documentation for the development of plugins

Merged: Adham Hashibon requested to merge developer-documentation into master
Design
------
The application is based on four entities, as written in the introduction:
    • Created by: jjenthought

      I guess we're down to four entities now! But since the KPIs will eventually have their own unique properties (the constraint-based stuff), we could still mention them here. Though I wouldn't know what to write about that at the moment; we could just have a placeholder there until we know exactly what form things will take.

- Multi Criteria Optimizer (MCO)
- DataSources
- Notification Listeners
- UI Hooks
There are a few core assumptions about each of these entities:
- The MCO provides numerical values and injects them into a pipeline
made of multiple layers. Each layer is composed of multiple data sources.
The MCO can execute this pipeline directly, or indirectly by invoking
``force_bdss`` with the option ``--evaluate``. This invocation will produce,
given a vector in the input space, a vector in the output space.
- The DataSources are entities arranged in layers. Each DataSource has
inputs and outputs, called slots. These slots may depend on the configuration
options of the specific data source. The pipeline is created by binding
the outputs of the data sources in the layers above to the inputs of the
data sources in a given layer. Each output can be designated as a KPI and
thus be transmitted back to the MCO for optimization (see the sketch after
this list).
- The Notification Listener listens to the state of the MCO (Started/New step
of the computation/Finished). It can be a remote database which is filled
with the MCO results during the computation (e.g. the GUI ``force_wfmanager``).
- The UI Hooks allow additional operations to run at specific points of the
UI lifetime (e.g. before execution of the bdss, before saving the workflow).
Those operations won't be executed by the command line interface of the bdss.
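
As an illustration of slots and layer binding only (the class names, the
``run`` callable, and the binding-by-name convention below are assumptions
made for this sketch, not the actual force_bdss API), the structure could
be modelled as::

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Slot:
        # A named input or output of a data source. Outputs flagged
        # as KPIs are transmitted back to the MCO for optimization.
        name: str
        is_kpi: bool = False

    @dataclass
    class DataSource:
        name: str
        input_slots: List[Slot]
        output_slots: List[Slot]
        # The computation itself: maps input values to output values.
        run: Callable[[Dict[str, float]], Dict[str, float]]

    # Two layers: the "viscosity" output produced in the first layer is
    # bound, by slot name, to the input of "cost" in the second layer,
    # whose single output is designated as a KPI.
    layers = [
        [DataSource("fluid", [Slot("temperature")], [Slot("viscosity")],
                    run=lambda p: {"viscosity": 1.0 / p["temperature"]})],
        [DataSource("cost", [Slot("viscosity")], [Slot("cost", is_kpi=True)],
                    run=lambda p: {"cost": 100.0 * p["viscosity"]})],
    ]

Binding by slot name is just one possible convention; the point is that a
data source's inputs are satisfied by outputs of the layers above, and only
KPI-flagged outputs travel back to the MCO.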
The result can be represented with the following data flow (a sketch of
this loop follows the list):
1. The MCO produces and, potentially by means of a Communicator (if the
execution model is based on invoking ``force_bdss --evaluate``),
injects a given vector of MCO parameter values into the pipeline.
2. These values are passed to one or more DataSources, organised in layers,
each performing some computation or data extraction and producing results.
Layers are executed in order from top to bottom. There is no ordering
guarantee among DataSources on the same layer.
3. Results that have been classified as KPIs are then returned to the MCO
(again, potentially via the Communicator).
4. The KPI values are then sent to the notification listeners together with
the associated MCO parameter values.
5. The cycle repeats until all evaluations have been performed.
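
Continuing the ``layers`` structure from the sketch above (again purely
illustrative: the ``evaluate`` function and the listener's ``deliver``
method are hypothetical names, and in the real execution model steps 1 and
3 may go through a Communicator and a ``force_bdss --evaluate`` subprocess),
the loop could look like::

    class PrintListener:
        # Stand-in for a notification listener, e.g. a remote database
        # or a GUI; the deliver() name is an assumption of this sketch.
        def deliver(self, params, kpis):
            print(params, "->", kpis)

    def evaluate(layers, mco_parameters):
        # Steps 1-3: inject the MCO parameter vector, run the layers
        # from top to bottom, and collect the outputs flagged as KPIs.
        values = dict(mco_parameters)
        kpis = {}
        for layer in layers:
            # No ordering guarantee among data sources in a layer.
            for ds in layer:
                inputs = {s.name: values[s.name] for s in ds.input_slots}
                outputs = ds.run(inputs)
                values.update(outputs)
                kpis.update({s.name: outputs[s.name]
                             for s in ds.output_slots if s.is_kpi})
        return kpis

    # Steps 4-5: the MCO drives the evaluations and every step is
    # reported to the notification listeners.
    listeners = [PrintListener()]
    for params in ({"temperature": t} for t in (300.0, 350.0, 400.0)):
        kpis = evaluate(layers, params)
        for listener in listeners:
            listener.deliver(params, kpis)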