
The submission template for the MineRL BASALT Competition @ NeurIPS 2021. Clone this to make a new submission!


NeurIPS 2021: MineRL BASALT Competition Starter Kit

Discord

This repository is the main MineRL BASALT 2021 Competition submission template and starter kit!

MineRL BASALT is a competition on solving human-judged tasks. The tasks in this competition do not have a pre-defined reward function: the goal is to produce trajectories that are judged by real humans to be effective at solving a given task.

See the homepage of the competition for further details.

This repository contains:

  • Documentation on how to submit your agent to the leaderboard
  • The procedure for Round 1 and Round 2
  • Starter code on which to base your submission (an agent that takes random actions)!

Other Resources:

How to Submit a Model on AIcrowd.

In brief: you define your Python environment using Anaconda environment files, and the AIcrowd system will build a Docker image and run your code using the Docker scripts inside the utility directory.

You submit your pretrained models, evaluation code, and training code. The training code should produce the same models you upload as part of your submission.

Your evaluation code (test_submission_code.py) only needs to control the agent and accomplish the environment's task. The evaluation server handles video recording.
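To illustrate, here is a minimal sketch of the control loop your evaluation code needs to implement. `StubEnv` is a hypothetical stand-in for the real MineRL BASALT environment (which you would normally create via the `gym`/`minerl` API); the class and function names are illustrative, not part of the starter kit:

```python
import random

class StubEnv:
    """Hypothetical stand-in for a MineRL env with the gym-style reset/step interface."""
    def __init__(self, episode_length=100):
        self.episode_length = episode_length
        self.steps = 0

    def reset(self):
        self.steps = 0
        return {"pov": None}  # MineRL observations are dicts (e.g. with a "pov" image)

    def step(self, action):
        self.steps += 1
        done = self.steps >= self.episode_length
        return {"pov": None}, 0.0, done, {}

def run_episode(env, choose_action):
    """Play one episode: your submission only has to drive this loop."""
    obs = env.reset()
    done = False
    steps = 0
    while not done:
        obs, _, done, _ = env.step(choose_action(obs))
        steps += 1
    return steps

# A random agent, like the one in this starter kit:
steps = run_episode(StubEnv(episode_length=10),
                    lambda obs: {"forward": random.randint(0, 1)})
print(steps)  # 10
```

No video recording or reward bookkeeping is needed here; the server takes care of it.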

You specify the task you want to submit an agent for in the tags field of the aicrowd.json file (see below). One submission covers only one task!

Setup

  1. Clone the GitHub repository or press the "Use this Template" button on GitHub!

    git clone https://github.com/minerllabs/basalt_competition_submission_template.git
    
  2. Install competition-specific dependencies! Make sure you have JDK 8 installed first!

    # 1. Make sure to install the JDK first
    # -> Go to http://minerl.io/docs/tutorials/getting_started.html

    # 2. Install the `minerl` package and its dependencies.
    pip install minerl
  3. Specify your specific submission dependencies (PyTorch, Tensorflow, kittens, puppies, etc.)

    • Anaconda Environment. To make a submission you need to specify the environment using Anaconda environment files. It is also recommended that you recreate the environment on your local machine. Note that Anaconda version 4.5.11 or later is required to correctly populate environment.yml (follow the instructions here). Then:

      • Create your new conda environment

        cd basalt_competition_submission_template
        conda env create -f environment.yml 
        conda activate minerl
      • Code-specific dependencies. Add your own dependencies to the environment.yml file. Remember to add any additional channels; PyTorch requires the pytorch channel, for example. You can also install them locally using

        conda install <your-package>
    • Pip Packages. If you need pip packages (not available on conda), you can add them to the environment.yml file (see the currently populated version).

    • Apt Packages. If your training procedure or agent depends on specific Debian (Ubuntu, etc.) packages, add them to apt.txt.

These files are used to construct both the local and AIcrowd Docker containers in which your agent will train.
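For reference, a hypothetical environment.yml fragment could look like the following; the package names and versions are illustrative and not prescriptive, so consult the file shipped with the starter kit for the actual baseline:

```yaml
name: minerl
channels:
  - pytorch          # extra channel, needed here because PyTorch is listed below
  - defaults
dependencies:
  - python=3.8       # illustrative version pin
  - pytorch          # example conda dependency
  - pip
  - pip:
      - minerl       # pip-only packages go under this key
```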

If the above is too restrictive for defining your environment, see this Discourse topic for more information.

What should my code structure be like?

Please follow the example structure shared in the starter kit. The different files and directories have the following meaning:

.
├── aicrowd.json             # Submission meta information like your username
├── aicrowd_helper.py        # Helper functions needed for evaluation
├── apt.txt                  # Packages to be installed inside docker image
├── data                     # The downloaded data, the path to directory is also available as `MINERL_DATA_ROOT` env variable
├── test_submission_code.py  # IMPORTANT: Your testing/inference phase code. NOTE: This is NOT the entry point for the testing phase!
├── train                    # Your trained model MUST be saved inside this directory
├── train_submission_code.py # IMPORTANT: Your training code. Running this should produce the same agent you upload as part of your submission.
├── test_framework.py        # The entry point for the testing phase, which sets up the environment. Your code DOES NOT go here.
└── utility                  # The utility scripts which provide a smoother experience to you.
    ├── debug_build.sh
    ├── docker_run.sh
    ├── environ.sh
    ├── evaluation_locally.sh
    ├── parser.py
    ├── train_locally.sh
    └── verify_or_download_data.sh

Finally, you must specify an AIcrowd submission JSON in aicrowd.json to be scored!

The aicrowd.json of each submission should contain the following content:

{
  "challenge_id": "neurips-2021-minerl-basalt-competition",
  "authors": ["your-aicrowd-username"],
  "description": "sample description about your awesome agent",
  "tags": "FindCave",
  "license": "MIT",
  "gpu": true
}

This JSON is used to map your submission to the challenge, so please remember to use the correct challenge_id as specified above.

You need to specify the task of the submission in the tags field with one of the following: {"FindCave", "MakeWaterfall", "CreateVillageAnimalPen", "BuildVillageHouse"}. You need to create one submission per task to cover all tasks.
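Since a malformed aicrowd.json can silently misroute a submission, it may be worth sanity-checking it before tagging. The following helper is a hypothetical convenience, not part of the starter kit:

```python
import json

# The four BASALT tasks; tags must be exactly one of these strings.
ALLOWED_TAGS = {"FindCave", "MakeWaterfall", "CreateVillageAnimalPen", "BuildVillageHouse"}

def check_aicrowd_json(path="aicrowd.json"):
    """Raise AssertionError if the submission metadata looks wrong."""
    with open(path) as f:
        meta = json.load(f)
    assert meta["challenge_id"] == "neurips-2021-minerl-basalt-competition", "wrong challenge_id"
    assert meta["tags"] in ALLOWED_TAGS, f"unknown task tag: {meta['tags']!r}"
    assert isinstance(meta.get("gpu"), bool), "gpu must be true or false"
    return meta
```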

Please specify whether your code will use a GPU for the evaluation of your model. If you specify true, an NVIDIA Tesla K80 GPU will be provided and used for the evaluation.

Dataset location

You don't need to upload the MineRL dataset with your submission; it will be provided in online submissions at the MINERL_DATA_ROOT path, should you need it. For local training and evaluation, you can download it once via ./utility/verify_or_download_data.sh or place it manually into the ./data/ folder.
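In your training code you can resolve the dataset directory the same way in both settings. A minimal sketch, assuming the server sets MINERL_DATA_ROOT and local runs fall back to ./data:

```python
import os

def get_data_root():
    """Return the dataset directory: MINERL_DATA_ROOT on the evaluation
    servers, or the repository's ./data folder when running locally."""
    return os.environ.get("MINERL_DATA_ROOT", os.path.join(".", "data"))
```

The helper name here is illustrative; only the MINERL_DATA_ROOT variable and ./data path come from the competition setup.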

How to submit!

To make a submission, you will have to create a private repository on https://gitlab.aicrowd.com/.

You will have to add your SSH keys to your GitLab account by following the instructions here. If you do not have SSH keys, you will first need to generate them.

Then you can create a submission by making a tag push to your repository on https://gitlab.aicrowd.com/. Any tag push (where the tag name begins with "submission-") to your private repository is considered a submission.
Then add the correct git remote and finally submit by doing:

cd basalt_competition_submission_template
# Add AIcrowd git remote endpoint
git remote add aicrowd [email protected]:<YOUR_AICROWD_USER_NAME>/basalt_competition_submission_template.git
git push aicrowd master

# Create a tag for your submission and push
git tag -am "submission-v0.1" submission-v0.1
git push aicrowd master
git push aicrowd submission-v0.1

# Note: If the contents of your repository (latest commit hash) do not change,
# then pushing a new tag will **not** trigger a new evaluation.

You now should be able to see the details of your submission at: https://gitlab.aicrowd.com/<YOUR_AICROWD_USER_NAME>/basalt_competition_submission_template/issues/

Best of Luck 🎉 🎉

Ensuring that your code works

You can perform local training and evaluation using the utility scripts shared in this directory. To mimic the online training phase, run ./utility/train_locally.sh from the repository root; you can specify --verbose for complete logs.

For local evaluation of your code, you can use ./utility/evaluation_locally.sh; add --verbose if you want to view complete logs. Note that you do not need to record videos in your code! The AIcrowd server will handle this. Your code only needs to play the games.

For running/testing your submission in a Docker environment (identical to the online submission), you can use ./utility/docker_train_locally.sh and ./utility/docker_evaluation_locally.sh. You can also run the Docker image with a bash entry point for debugging on the go with the help of ./utility/docker_run.sh. These scripts respect the following parameters:

  • --no-build: Skip the Docker image build and use the last built image
  • --nvidia: Use nvidia-docker instead of docker, which includes your NVIDIA drivers inside the Docker image

Team

The quick-start kit was authored by Anssi Kanervisto and Shivam Khandelwal with help from William H. Guss.

The BASALT competition is organized by the following team:

  • Rohin Shah (UC Berkeley)
  • Cody Wild (UC Berkeley)
  • Steven H. Wang (UC Berkeley)
  • Neel Alex (UC Berkeley)
  • Brandon Houghton (OpenAI and Carnegie Mellon University)
  • William H. Guss (OpenAI and Carnegie Mellon University)
  • Sharada Mohanty (AIcrowd)
  • Anssi Kanervisto (University of Eastern Finland)
  • Stephanie Milani (Carnegie Mellon University)
  • Nicholay Topin (Carnegie Mellon University)
  • Pieter Abbeel (UC Berkeley)
  • Stuart Russell (UC Berkeley)
  • Anca Dragan (UC Berkeley)
