This repository contains all RLBot-related content (bots, exercises, and the test scenario framework).
Bot scripts are located in `src`. Currently there are three different bots:
- an example bot created from the RLBot Python Example
- a model bot that loads an ONNX model and feeds all game inputs into the runtime to enable transfer learning
- a test bot that loads a specified scenario.json, runs a predefined action sequence, and logs all game packets to a file; this is used for scenario testing
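All of these bots implement the agent interface from the RLBot Python Example. As a rough orientation (a sketch, not the actual code in `src`), a minimal agent looks like this:

```python
from rlbot.agents.base_agent import BaseAgent, SimpleControllerState
from rlbot.utils.structures.game_data_struct import GameTickPacket


class MinimalBot(BaseAgent):
    """Bare-bones agent skeleton: reads the game tick packet and returns controls."""

    def get_output(self, packet: GameTickPacket) -> SimpleControllerState:
        controls = SimpleControllerState()
        # Drive forward at full throttle; a real bot would compute steering,
        # boost, jump, etc. from the packet contents here.
        controls.throttle = 1.0
        return controls
```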
To set up the environment:
- Create a virtual environment / conda environment and make sure pip is available.
- Run `pip install -r requirements.txt`.
- After that, a training runner can be executed from the root directory, e.g. `python src\scenarios\goalie\goalie_runner.py`.

We found a common error with the flatbuffers library that rlbot depends on. It might help to uninstall and reinstall rlbot / rlbottraining manually via pip.
The folder `scenarios` contains exercise scenarios and training runners. An exercise is typically composed of a bot config, a match config, an exercise / training class, and a runner script:
- All configuration related to the bot (path to the actual bot script, accessory config such as settings paths, etc.) is located in the `_bot.cfg`.
- Likewise, the `_match.cfg` contains all match-related configuration.
- The logic for how an exercise is created, the initial game state, and how the exercise is graded lives in the `_training.py`, which contains the specified Exercise dataclass and the needed graders (see the sketch below).
- The runner script can be run by simply executing its main.
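To illustrate what such a `_training.py` typically contains, here is a minimal sketch built on the rlbottraining API; the class name, the chosen grader, and the game-state values are assumptions for illustration, not the code used in this repository:

```python
from dataclasses import dataclass, field

from rlbot.utils.game_state_util import (
    GameState, BallState, CarState, Physics, Vector3, Rotator,
)
from rlbottraining.grading.grader import Grader
from rlbottraining.rng import SeededRandomNumberGenerator
from rlbottraining.training_exercise import TrainingExercise
from rlbottraining.common_graders.goal_grader import StrikerGrader


@dataclass
class ExampleExercise(TrainingExercise):
    """Places the ball in front of the car and grades whether a goal is scored."""

    grader: Grader = field(default_factory=StrikerGrader)

    def make_game_state(self, rng: SeededRandomNumberGenerator) -> GameState:
        # Initial game state: ball ahead of the car, car with full boost and
        # a slightly randomized yaw so runs are not perfectly identical.
        return GameState(
            ball=BallState(physics=Physics(
                location=Vector3(0, 2000, 100),
                velocity=Vector3(0, 0, 0),
            )),
            cars={0: CarState(
                physics=Physics(
                    location=Vector3(0, 0, 20),
                    rotation=Rotator(0, rng.uniform(-0.1, 0.1), 0),
                    velocity=Vector3(0, 0, 0),
                ),
                boost_amount=100,
            )},
        )
```

The runner script then usually wraps such exercises into a playlist and hands them to rlbottraining's exercise runner (e.g. `run_playlist`).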
The test exercise is used to run predefined scenarios and log relevant game state information. For this, a JSON file describing the scenario is needed. The required values / the file format of such a scenario are as follows:

```json
{
    "time": 10.0,
    "name": "example",
    "startValues": [
        {
            "gameObject": "car",
            "position": { "x": 0.0, "y": 0.1852, "z": 0.0 },
            "velocity": { "x": 0.0, "y": 0.0, "z": 0.0 },
            "angularVelocity": { "x": 0.0, "y": 0.0, "z": 0.0 },
            "rotation": { "x": 0.0, "y": 90.0, "z": 0.0 }
        },
        {
            "gameObject": "ball",
            "position": { "x": 5.0, "y": 0.9275, "z": 0.0 },
            "velocity": { "x": 0.0, "y": 0.0, "z": 0.0 },
            "angularVelocity": { "x": 0.0, "y": 0.0, "z": 0.0 },
            "rotation": { "x": 0.0, "y": 0.0, "z": 0.0 }
        }
    ],
    "actions": [
        {
            "duration": 0.1,
            "inputs": [
                { "name": "jump", "value": 1.0 }
            ]
        }
    ]
}
```
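For illustration, this is roughly how such a scenario file could be translated into an initial RLBot game state. This is a sketch with assumed helper names, not the loader used in `src`; in particular, any unit or angle conversions between the scenario values and RLBot's coordinate system are omitted:

```python
import json

from rlbot.utils.game_state_util import (
    GameState, BallState, CarState, Physics, Vector3, Rotator,
)


def vec(d: dict) -> Vector3:
    return Vector3(d["x"], d["y"], d["z"])


def load_scenario(path: str) -> GameState:
    """Builds an initial RLBot GameState from a scenario.json as described above.

    Unit/axis/angle conversions between the scenario values and RLBot's
    coordinate system are intentionally left out of this sketch.
    """
    with open(path) as f:
        scenario = json.load(f)

    ball = None
    cars = {}
    for obj in scenario["startValues"]:
        physics = Physics(
            location=vec(obj["position"]),
            velocity=vec(obj["velocity"]),
            angular_velocity=vec(obj["angularVelocity"]),
            rotation=Rotator(
                obj["rotation"]["x"], obj["rotation"]["y"], obj["rotation"]["z"],
            ),
        )
        if obj["gameObject"] == "ball":
            ball = BallState(physics=physics)
        elif obj["gameObject"] == "car":
            cars[0] = CarState(physics=physics)  # index 0: the test bot's car

    return GameState(ball=ball, cars=cars)
```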
The scenario file is loaded via a settings.json (referred to as scenario_settings) that specifies all path information. The scenario_settings file itself is located through the path given in the settings.json in the exercise directory.
By running `test_runner.py`, the scenario currently specified in the scenario_settings is loaded and run by the exercise / bot. All game packets received during runtime are then logged to a file that is created in the RLBot output directory specified in scenario_settings and named after the scenario.
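Conceptually, the packet logging boils down to extracting the relevant fields from each GameTickPacket every tick and appending them to the output file. The record layout and file handling below are illustrative assumptions, not the exact format the test bot writes:

```python
import json

from rlbot.utils.structures.game_data_struct import GameTickPacket


def packet_to_record(packet: GameTickPacket, car_index: int = 0) -> dict:
    """Extracts a loggable snapshot from a game tick packet."""
    car = packet.game_cars[car_index].physics
    ball = packet.game_ball.physics
    return {
        "time": packet.game_info.seconds_elapsed,
        "car": {
            "position": [car.location.x, car.location.y, car.location.z],
            "velocity": [car.velocity.x, car.velocity.y, car.velocity.z],
        },
        "ball": {
            "position": [ball.location.x, ball.location.y, ball.location.z],
            "velocity": [ball.velocity.x, ball.velocity.y, ball.velocity.z],
        },
    }


# Inside the bot's get_output, each tick's record could be appended to a file
# named after the scenario (output_dir and scenario_name are hypothetical):
#   with open(f"{output_dir}/{scenario_name}.log", "a") as f:
#       f.write(json.dumps(packet_to_record(packet)) + "\n")
```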
A basic exercise to test transfer learning from Roboleague to RLBot was created in `goalie`. It contains all the needed configs and runners.
Based on the RLBot Python Example