MLFMU is a tool for developers who want to integrate machine learning models into simulation environments. It creates Functional Mock-up Units (FMUs), simulation models that adhere to the FMI standard (https://fmi-standard.org/), from trained machine learning models exported in the ONNX format (https://onnx.ai/). The mlfmu package streamlines the transformation of ONNX models into FMUs, facilitating their use in the wide range of simulation platforms that support the FMI standard, such as the Open Simulation Platform or DNV's Simulation Trust Center.
- Compile trained ML models into FMUs (Functional Mock-up Units).
- Easy to integrate into build pipelines.
- Declarative solution: define what the inputs/outputs/parameters of your co-simulation model should look like, and MLFMU takes care of the rest.
- Support for FMU signal vectors in FMI 2.0.
- Advanced customizations by enabling you to change the C++ code of the FMU.
pip install mlfmu
Before you use the mlfmu tool, you need to create your machine learning (ML) model, using whatever your preferred framework is.
- Define the architecture of your ML model and prepare it to receive inputs following MLFMU's input format.
Note 1: This example subclasses a Keras model for demonstration purposes. However, the tool is flexible and can accommodate other frameworks such as PyTorch, TensorFlow, Scikit-learn, and more.
Note 2: We showcase a simple example here. For more detailed information on how you can prepare your model to be compatible with this tool, see MLMODEL.md
# Create your ML model
class MlModel(tf.keras.Model):
    def __init__(self, num_inputs=2):
        super().__init__()
        # 1 hidden layer, 1 output layer
        self.hidden_layer = tf.keras.layers.Dense(512, activation=tf.nn.relu)
        self.output_layer = tf.keras.layers.Dense(1, activation=None)
        ...

    def call(self, all_inputs):  # model forward pass
        # unpack inputs
        inputs, *_ = all_inputs
        # Do something with the inputs
        # Here we have 1 hidden layer
        d1 = self.hidden_layer(inputs)
        outputs = self.output_layer(d1)
        return outputs
...
- Train your model, then save it as an ONNX file, e.g.:

import onnx
import tf2onnx

ml_model = MlModel()
# compile: configure model for training
ml_model.compile(optimizer=tf.keras.optimizers.RMSprop(), loss='mse')
# fit: train your ML model for some number of epochs
ml_model.fit(training_dataset, epochs=nr_epochs)
# Save the trained model as ONNX at a specified path
# (tf2onnx.convert.from_keras returns the model proto and a storage object)
onnx_model, _ = tf2onnx.convert.from_keras(ml_model)
onnx.save(onnx_model, 'path/to/save')
- (Optional) You may want to check your ONNX file to make sure it produces the right output. You can do this by loading the ONNX file and, using the same test input, comparing the ONNX model's predictions to those of your original model. You can also inspect the model using Netron: https://netron.app/ or https://github.com/lutzroeder/netron
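The comparison step can be sketched as below. This is a minimal, hedged example: the use of onnxruntime, the `run_onnx` helper, the model path, and a single flat output are assumptions for illustration; the tolerance check itself needs only the standard library.

```python
import math

def predictions_match(expected, actual, rel_tol=1e-5, abs_tol=1e-6):
    """Element-wise tolerance check between two flat prediction lists."""
    return len(expected) == len(actual) and all(
        math.isclose(e, a, rel_tol=rel_tol, abs_tol=abs_tol)
        for e, a in zip(expected, actual)
    )

def run_onnx(model_path, test_input):
    """Run the exported ONNX model on one test input.

    Assumes onnxruntime is installed (pip install onnxruntime) and that the
    model has a single input and a single output.
    """
    import onnxruntime as ort  # lazy import; only needed when actually run
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name
    (output,) = session.run(None, {input_name: test_input})
    return output.flatten().tolist()

# Typical usage (hypothetical paths/inputs), comparing against the original model:
# reference = ml_model(test_input).numpy().flatten().tolist()
# assert predictions_match(reference, run_onnx('path/to/save', test_input))
print(predictions_match([1.0, 2.0], [1.0, 2.0000001]))  # prints True
```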
Given that you have an ML model, you now need to:
- Prepare the FMU interface specification (.json) to specify your FMU's inputs, parameters, and outputs, map these to the ML model's inputs and outputs (`agentInputIndexes`, `agentOutputIndexes`), and specify whether the model uses time (`usesTime`).
// Interface.json
{
"name": "MyMLFMU",
"description": "A Machine Learning based FMU",
"usesTime": true,
"inputs": [
{
"name": "input_1",
"description": "My input signal to the model at position 0",
"agentInputIndexes": ["0"]
},
{
"name": "input_2",
"description": "My input signal as a vector with four elements at position 1 to 5",
"agentInputIndexes": ["1:5"],
"type": "real",
"isArray": true,
"length": 4
}
],
"parameters": [
{
"name": "parameter_1",
"description": "My input signal to the model at position 1",
"agentInputIndexes": ["1"]
}
],
"outputs": [
{
"name": "prediction",
"description": "The prediction generated by the ML model",
"agentOutputIndexes": ["0"]
}
]
}
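Before building, a quick sanity check of the interface file can catch typos in signal names or index mappings. The following is a minimal stdlib-only sketch, not part of mlfmu itself; the abbreviated JSON mirrors the example above.

```python
import json

# Hypothetical sanity check (not part of mlfmu): parse the interface spec and
# list each signal's name together with its raw agent index mapping.
interface_text = """
{
  "name": "MyMLFMU",
  "usesTime": true,
  "inputs": [
    {"name": "input_1", "agentInputIndexes": ["0"]},
    {"name": "input_2", "agentInputIndexes": ["1:5"], "isArray": true, "length": 4}
  ],
  "parameters": [
    {"name": "parameter_1", "agentInputIndexes": ["1"]}
  ],
  "outputs": [
    {"name": "prediction", "agentOutputIndexes": ["0"]}
  ]
}
"""

spec = json.loads(interface_text)
for section in ("inputs", "parameters", "outputs"):
    for signal in spec.get(section, []):
        # inputs/parameters map to agentInputIndexes, outputs to agentOutputIndexes
        indexes = signal.get("agentInputIndexes") or signal.get("agentOutputIndexes")
        print(f"{section}: {signal['name']} -> {indexes}")
```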
More information about the interface.json schema can be found in docs/interface/schema.html in the mlfmu repository.
- Compile the FMU:
mlfmu build --interface-file interface.json --model-file model.onnx
or if the files are in your current working directory:
mlfmu build
For more explanation on the ONNX file structure and inputs/outputs for your model, please refer to mlfmu's MLMODEL.md.
For advanced usage options, e.g. editing the generated FMU source code, or using the tool via a Python class, please refer to mlfmu's ADVANCED.md.
This project uses uv as its package manager.
If you haven't already, install uv, preferably using its "Standalone installer" method:
..on Windows:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
..on MacOS and Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
(see docs.astral.sh/uv for all / alternative installation methods.)
Once installed, you can update uv
to its latest version, anytime, by running:
uv self update
We use conan for building the FMU. For the conan building to work later on, you will need the Visual Studio Build tools 2022 to be installed. It is best to do this before installing conan (which gets installed as part of the package dependencies, see step 5). You can download and install the Build Tools for VS 2022 (for free) from https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2022.
Clone the mlfmu repository into your local development directory:
git clone https://github.com/dnv-opensource/mlfmu path/to/your/dev/mlfmu
git submodule update --init --recursive
Run uv sync
to create a virtual environment and install all project dependencies into it:
uv sync
Use the command line option `-p` to specify the Python version to resolve the dependencies against. For instance, use `-p 3.12` to specify Python 3.12:
uv sync -p 3.12
Note: In case the specified Python version is not found on your machine,
uv sync
will automatically download and install it.
Optionally, use -U
in addition to allow package upgrades. Especially in cases when you change to a newer Python version, adding -U
can be useful.
It allows the dependency resolver to upgrade dependencies to newer versions, which might be necessary to support the (newer) Python version you specified.
uv sync -p 3.12 -U
Note: At this point, you should have conan installed, and you will want to make sure it has the correct build profile. You can auto-detect and create the profile by running `conan profile detect`. After this, you can check the profile in C:\Users\<USRNAM>\.conan2\profiles\default (replace <USRNAM> with your username). On Windows, you want to have: `compiler=msvc`, `compiler.cppstd=17`, and `compiler.version=193`.
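For reference, a profile created by `conan profile detect` on Windows typically looks like the following sketch (exact values such as `arch`, `compiler.runtime`, and `compiler.version` depend on your machine and toolchain):

```ini
[settings]
arch=x86_64
build_type=Release
compiler=msvc
compiler.cppstd=17
compiler.runtime=dynamic
compiler.version=193
os=Windows
```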
When using uv
, most of the time there will no longer be a need to manually activate the virtual environment.
Whenever you run a command via uv run
inside your project folder structure, uv
will find the .venv
virtual environment in the working directory or any parent directory, and activate it on the fly:
uv run <command>
However, you still can manually activate the virtual environment if needed.
While we did not face any issues using VS Code as IDE, you might use an IDE that needs the .venv to be manually activated in order to work properly.
If this is the case, you can anytime activate the virtual environment using one of the "known" legacy commands:
..on Windows:
.venv\Scripts\activate.bat
..on Linux:
source .venv/bin/activate
The .pre-commit-config.yaml
file in the project root directory contains a configuration for pre-commit hooks.
To install the pre-commit hooks defined therein in your local git repository, run:
uv run pre-commit install
All pre-commit hooks configured in .pre-commit-config.yaml
will now run each time you commit changes.
To test that the installation works, run pytest in the project root folder:
uv run pytest
To test building an FMU from the included example, run:

cd .\examples\wind_generator\config\
uv run mlfmu build
As an alternative, you can run from the main directory:
uv run mlfmu build --interface-file .\examples\wind_generator\config\interface.json --model-file .\examples\wind_generator\config\example.onnx
Note: the FMU file will be created in the directory from which you run the build command, unless you specify otherwise with `--fmu-path`.
For more options, see `uv run mlfmu --help` or `uv run mlfmu build --help`.
The created FMU can be used for running (co-)simulations. We have tested the FMUs that we have created in the Simulation Trust Center, which uses the Open Simulation Platform software.
This repository uses Sphinx, with .rst and .md files as well as Python docstrings, to document the code and its usage. To locally build the docs:
cd docs
make html
You can then open index.html for access to all docs (for Windows: `start build\html\index.html`).
All code in mlfmu is DNV intellectual property.
Copyright (c) 2024 DNV AS. All rights reserved.
Primary contributors:
Kristoffer Skare - @LinkedIn - [email protected]
Jorge Luis Mendez - @LinkedIn - [email protected]
Additional contributors (testing, docs, examples, etc.):
Melih Akdağ - @LinkedIn - [email protected]
Stephanie Kemna - @LinkedIn
Hee Jong Park - @LinkedIn - [email protected]
- Fork it (https://github.com/dnv-opensource/mlfmu/fork) (Note: this is currently disabled for this repo. For development, continue with the next step.)
- Create an issue in your GitHub repo
- Create your branch based on the issue number and type (`git checkout -b issue-name`)
- Evaluate and stage the changes you want to commit (`git add -i`)
- Commit your changes (`git commit -am 'place a descriptive commit message here'`)
- Push to the branch (`git push origin issue-name`)
- Create a new Pull Request in GitHub
For your contribution, please make sure you follow the STYLEGUIDE before creating the Pull Request.
- If you get an error similar to `..\fmu.cpp(4,10): error C1083: Cannot open include file: 'cppfmu_cs.hpp': No such file or directory`, you are missing cppfmu, which is a submodule of this repository. Make sure to run `git submodule update --init --recursive` in the top level folder.
This code is distributed under the BSD 3-Clause license. See LICENSE for more information.
It makes use of cppfmu, which is distributed under the MPL license at https://github.com/viproma/cppfmu.