Commit 8b6b6c2

DOC update the section on ramp-kits

glemaitre committed Feb 16, 2020
1 parent afb7c89 commit 8b6b6c2

Showing 7 changed files with 97 additions and 101 deletions.

Binary file added doc/images/ramp_event_joined.png
Binary file added doc/images/ramp_join_event.png
Binary file added doc/images/ramp_kit_download.png
Binary file added doc/images/ramp_sandbox_submission.png
Binary file modified doc/images/ramp_sidebar.png
Binary file added doc/images/ramp_waiting_approval.png

198 changes: 97 additions & 101 deletions doc/using_kits.rst
@@ -3,153 +3,149 @@

Using RAMP starting-kits
########################

Get the starting-kit
====================

To start working on a specific challenge, you can download the starting-kit
directly from the event page (the "Download kit" button).

.. image:: images/ramp_kit_download.png
   :width: 700

|

It will provide a zip file with all the necessary material. Each
starting-kit may come with specific instructions (e.g. to download extra
data), which will be available in its `README.md`.
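
Alternatively, the starting-kits are also hosted on GitHub under the
`ramp-kits <https://github.com/ramp-kits>`_ organisation, so you can clone
the kit instead (a sketch; replace `<ramp_kit_name>` with the actual name
of the kit for your challenge):

.. code-block:: bash

    # clone the starting-kit for the challenge and move into it
    $ git clone https://github.com/ramp-kits/<ramp_kit_name>.git
    $ cd <ramp_kit_name>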

Install dependencies
====================

To run a submission and the notebook, you will need the dependencies listed
in `requirements.txt`. You can install these dependencies with the
following command line:

.. code-block:: bash

    $ pip install -U -r requirements.txt

If you are using `conda`, we provide an `environment.yml` file for a
similar usage.
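
A minimal sketch with `conda` (assuming the environment is named after the
kit; check `environment.yml` for the actual name):

.. code-block:: bash

    # create a virtual environment with all the required dependencies
    $ conda env create -f environment.yml
    # activate the environment (older conda versions use `source activate`)
    $ conda activate <ramp_kit_name>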

Test a submission
=================

The submissions need to be located in the `submissions` folder. For
instance, a submission named `my_submission` should be located in
`submissions/my_submission`.
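
As an illustration, the layout might look as follows (the required file
names, e.g. `classifier.py`, depend on the workflow of each challenge;
check the example submission shipped with the kit):

.. code-block:: bash

    $ tree submissions
    submissions
    ├── starting_kit
    │   └── classifier.py
    └── my_submission
        └── classifier.py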

To run a specific submission, you can use the `ramp-test` command line:

.. code-block:: bash

    $ ramp-test --submission my_submission

You can get more information regarding this command line with:

.. code-block:: bash

    $ ramp-test --help

The output of the `ramp-test` command will look like:

.. code-block:: bash

    Testing Iris classification
    Reading train and test files from ./data ...
    Reading cv ...
    Training submissions/starting_kit ...
    CV fold 0
        score   acc  error   nll  f1_70      time
        train  0.58   0.42  1.17   0.33  0.175953
        valid  0.65   0.35  0.52   0.33  0.001978
        test   0.71   0.29  0.80   0.67  0.001837
    CV fold 1
        score   acc  error   nll  f1_70      time
        train  0.63   0.37  0.78   0.67  0.011339
        valid  0.65   0.35  0.66   0.67  0.002846
        test   0.54   0.46  0.72   0.33  0.003474
    ----------------------------
    Mean CV scores
    ----------------------------
        score           acc         error           nll        f1_70        time
        train  0.61 ± 0.026  0.39 ± 0.026  0.98 ± 0.197  0.5 ± 0.167  0.1 ± 0.08
        valid    0.65 ± 0.0    0.35 ± 0.0  0.59 ± 0.069  0.5 ± 0.167   0.0 ± 0.0
        test   0.62 ± 0.083  0.38 ± 0.083  0.76 ± 0.041  0.5 ± 0.167   0.0 ± 0.0
    ----------------------------
    Bagged scores
    ----------------------------
        score   acc  error   nll  f1_70
        valid  0.65   0.35  0.59   0.33
        test   0.71   0.29  0.66   0.33
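
While developing, you may want a faster check. A sketch, assuming your
version of `ramp-workflow` supports the `--quick-test` option (carried over
from the former `ramp_test_submission` command), which runs on a small
subset of the data:

.. code-block:: bash

    # quick sanity check on a reduced data set
    $ ramp-test --submission my_submission --quick-test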

Now you are ready to write your own solution for the prediction problem. It
is useful to read the starting-kit notebook, which will introduce the
problem, provide some data analysis and visualisation, and walk you through
a simple example solution. There will also be at least one example
submission in `submissions/`, which shows you which files are required for
each submission, how they need to be named, and how each file should be
structured.

Submitting to a RAMP event
==========================

To submit your solution to `RAMP studio`_:

1. Log in at `RAMP studio`_. If it is your first time, you will first need
   to register.

2. For a given challenge, go to the event in which you want to participate.
   If you have not joined the event yet, you will need to click on
   "Join event".

.. image:: images/ramp_join_event.png
   :width: 700

|

For most of the events, your registration needs to be validated by an event
administrator. In the meantime, you will see the status
"Waiting approval...".

.. image:: images/ramp_waiting_approval.png
   :width: 700

Once you are approved, you will see the status "Event joined".

.. image:: images/ramp_event_joined.png
   :width: 700

|

3. Once your sign-up has been approved, you will have access to a number of
   menu options on the left-hand side:

.. image:: images/ramp_sidebar.png
   :width: 650

|

* **Leaderboard**: a summary of the performance of **all** submissions from
  all the event's participants;
* **Competition leaderboard**: only the best submission of each participant;
* **Sandbox**: the place to make a submission to the event (see below);
* **My submissions**: information regarding all your submissions.

4. To make a submission, go to your "Sandbox". Paste the code of the
   submission that you validated earlier with the `ramp-test` command line.
   If you wish, you can save your submission and come back to it later by
   clicking on "Save for later". When you are ready, click on "Submit now".
   You will get the following window:

.. image:: images/ramp_sandbox_submission.png
   :width: 650

|

You need to enter a submission name of between 4 and 20 characters, without
any spaces. When ready, click on "Submit now". Shortly afterwards, your
submission will be sent for training and you will be able to follow its
status on the different leaderboards.

The submission is trained and tested on our backend in the same way as
`ramp-test` does it locally. While your submission is waiting in the queue
or being trained, you can find it in the
"New submissions (pending training)" table in "My submissions". Once it is
trained, you will get an email, and your submission will show up on the
public leaderboard. If there is an error (note that you should always test
your submission locally with `ramp-test`), it will show up in the
"Failed submissions" table in "My submissions". You can click on the error
to see part of the trace. The data set we use at the backend is usually
different from what you find in the starting-kit, so the score may be
different.

.. _starting kit: https://github.com/ramp-kits
.. _RAMP studio: https://www.ramp.studio
