SprintCoderSetup
**This page provides instructions for setting up a development environment using the container platform Docker, which is a quicker and easier way of getting started for the Code Sprint. If you need to set up the environment locally (for bigger, longer-term Datatracker projects), see NativeSetup. Native setup is NOT recommended for Sprint setups.**
Note that, although the Docker setup makes this process go more quickly, you should set aside at least an hour to complete these steps.
Docker is a toolkit which lets you package software together with its dependencies in lightweight containers, and run it in isolated virtual environments. This is really helpful for Code Sprint participants because they don't have to chase down all the dependencies in order to get started working on the Datatracker.
- Set up Docker on your preferred platform. Official installers exist for many Linux flavours, OS X, Windows and Cloud services, and can be found at the Docker Store. The free version (CE, Community Edition) is fine.
  Please follow the Docker installation all the way through to successfully running the `hello-world` example in a terminal window (`$ docker run hello-world`).
Datatracker code is kept in a Subversion repository. If you need a primer on Subversion, here is an online reference.
- You will need Subversion 1.8.0 or later on your platform in order to check out the repository.
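  If you are unsure which Subversion version you have, you can check it first (the version shown here is only an example):
  $ svn --version --quiet
  1.9.7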
- Check your email for a notification about the personal branch that has been created for you to work in. Your personal branch is created based on your signup on the Sprint Signup page. If you signed up ahead of time, you will receive an email notification about a week before the Code Sprint; if you join just before or during the Code Sprint, add your name to the Sprint Signup page, and then let the people running the Sprint know so that they can run a script that reads the Sprint Signup page and creates your personal branch.
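  If you want to check whether your personal branch exists yet, one way (assuming your username is 'coder', as in the checkout example below) is to list your personal area of the repository:
  $ svn ls https://svn.tools.ietf.org/svn/tools/ietfdb/personal/coder/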
- (Optional) If your host machine uses a locale in which dates may contain UTF-8 characters, the `ietf/__init__.py` file may contain characters with a non-ASCII encoding. To fix this, run `export LC_ALL=C` before checkout.
- Check out your Datatracker branch in a suitable directory. We'll assume `~/src/dt/` here, and assume you are 'coder' and the current release is '6.81.4.dev0':
  ~/src/dt/ $ svn co https://svn.tools.ietf.org/svn/tools/ietfdb/personal/coder/6.81.4.dev0
In the checked-out working copy, you'll find a `docker/` directory and a `data/` directory at the top level.
Set up a copy of the MySQL database files under the `data/` directory. There are two methods:
- Either run (the easy way):
  ~/src/dt/6.81.4.dev0/ $ ./docker/setupdb
- Or do it step by step: fetch a pre-built copy of the Datatracker database, place it in the `data` directory, unpack it, and fix permissions:
  ~/src/dt/6.81.4.dev0/ $ cd data
  ~/src/dt/6.81.4.dev0/data/ $ wget https://www.ietf.org/lib/dt/sprint/ietf_utf8.bin.tar.bz2
  ~/src/dt/6.81.4.dev0/data/ $ tar xjf ietf_utf8.bin.tar.bz2
  ~/src/dt/6.81.4.dev0/data/ $ chmod -R go+rwX mysql
- In the `docker/` directory there is a wrapper script named `run`. Use the script to run a pre-built Docker image fetched from the Docker hub:
  ~/src/dt/6.81.4.dev0/ $ ./docker/run
This script does the following:
- pulls down the latest ietf/datatracker-environment Docker image from https://hub.docker.com/r/ietf/datatracker-environment/;
- starts the image with appropriate settings;
- maps the internal `/var/lib/mysql/` directory to the external `data/mysql/` directory where we placed the database;
- sets up a python virtualenv for you;
- installs some dependencies; and most importantly
- drops you in a bash shell where you can run the Datatracker.
You can ignore any "unknown manifest" errors; this just means the image is not installed yet, and the scripts will install it for you.
Note that the new shell in the virtual machine is started in your home directory.
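If you want to confirm, from another terminal window on the host, that the image was pulled and the container is running, something like the following should work (the exact tag and container name will vary):
$ docker images ietf/datatracker-environment
$ docker ps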
- Change directory (cd) to your checked-out copy of the Datatracker source:
  (virtual) $ cd <wherever you checked out your working copy to>
- Ensure that you have fresh and current copies of the dependencies (you MUST do this for image 45615169429a):
  (virtual) $ pip install --upgrade -r requirements.txt
- FOR IETF 108's SPRINT, you need to apply these workaround steps until a new docker image can be created (at the time of this edit, that image is 45615169429a):
  (virtual) $ cp docker/settings_local.py ietf/settings_local.py
  (virtual) $ mkdir data/developers/ietf-ftp/yang/catalogmod/
- Make sure `check` doesn't report any critical errors. If it does, and it's not obvious how to fix it, ask for help:
  (virtual) $ ./ietf/manage.py check
- (Optional) When you get to this point, and also every time you return to the docker image after some time has elapsed, you may want to download and load the latest database dump. The `docker/setupdb` step gives you a basic copy of the database, but that copy isn't updated as often as the daily snapshots. During Sprint Saturdays that copy is usually fresh enough, but otherwise you should consider updating it. To download and load the latest daily snapshot, run:
  (virtual) $ ./docker/updatedb
  This command will take some time to run; it first downloads the latest available daily snapshot, and then loads it into the database. It should take between 5 and 10 minutes in total, if you have decent connectivity and are running on reasonably new hardware (as of Mar-2017).
  FOR IETF 108's sprint, don't do this, at least on image 45615169429a; it will take an hour and likely fail.
- Make sure the migrations have all been run against the database:
  (virtual) $ ./ietf/manage.py migrate
  This step may throw some warning messages about not being able to apply patches, but those can be ignored (this will be fixed in the next version of the docker image).
  If this step fails, complaining about a missing settings_local.py, then run
  (virtual) $ cp docker/settings_local.py ietf/
  and run the migrate command above again.
- First, run the built-in checks:
  (virtual) $ ./ietf/manage.py check
  This will
  - ensure that you don't have missing directories or any other obvious issues with the OS environment;
  - apply any patches needed to bring library modules up to snuff.
- You are now ready to run the tests. Make sure that one of the following commands runs to completion without errors:
  (virtual) $ ./ietf/manage.py test --settings=settings_sqlitetest
  or, for more verbose output:
  (virtual) $ ./ietf/manage.py test --settings=settings_sqlitetest -v 3
  If you see any errors or failures, you probably have a configuration problem or are missing a required python module. If it's not obvious what to do to correct the problem, please ask for help. The Troubleshooting page may provide some hints. Once you resolve the problem, consider adding a new hint to that page. For help with running the tests, type:
  (virtual) $ ./ietf/manage.py test --help
  A specific subset of test cases containing a failure can be re-run without re-running all of the test cases, for example:
  (virtual) $ ./ietf/manage.py test --settings=settings_sqlitetest -v 3 ietf.submit.tests.SubmitTests
- Then start the dev server:
  (virtual) $ ./ietf/manage.py runserver 0.0.0.0:8000
  or you can run this in a separate terminal window:
  ~/src/dt/6.81.4.dev0/ $ ./docker/devserver
  **This, and the mailserver command, have an issue: if you use Ctrl-C to stop the server, it only disconnects you from the virtual machine; the command it ran is still running, and you need to go into a shell in the virtual machine to stop it (see the sketch after this step).**
  Note the IP address 0.0.0.0, used to make the dev server bind to all addresses. The internal port 8000 has been mapped to port 8000 externally, too. If you are using a recent native Docker for Mac or Docker for Windows install, you will find the datatracker on localhost:8000:
  ~/src/dt/6.81.4.dev0/ $ open http://localhost:8000/
  If you are running an older install of Docker, you may have to find the IP address of the VirtualBox which provides the virtual environment. If so, run `docker-machine ip` outside the virtual environment:
  ~/src/dt/6.81.4.dev0/ $ docker-machine ip
  192.168.59.103
  ~/src/dt/6.81.4.dev0/ $ open http://192.168.59.103:8000/
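  One way to stop such a leftover server process (a sketch only; the container id and the availability of pkill inside the image are assumptions) is to open a second shell in the running container and kill it there:
  $ docker ps                              # on the host: find the datatracker container id
  $ docker exec -it <container-id> bash    # open a second shell inside that container
  (virtual) $ pkill -f runserver           # stop the leftover dev server process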
- You can sign into the Datatracker with any valid username using the password 'password'. Signing out will allow you to sign in with a different username. Trying your new code out as people with different roles will help find issues early.
- (Optional) Your local version of the Datatracker does not actually send email messages. If you are testing email functionality, you can see what would have been sent by running the following in a separate terminal window; any emails generated will print to that terminal window:
  ~/src/dt/6.81.4.dev0/ $ ./docker/mailserver
The container has empty directories to hold the files the Datatracker uses. You'll find them under `data/developers`.
- You can put files in the directories individually if and when you need them. For instance, you can put drafts in `data/developers/ietf-ftp/internet-drafts` one at a time to see them displayed on the document's main page in your test instance; otherwise, you'll simply get a "cannot read" message.
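  For example, you could fetch a single draft from the public IETF web site into that directory (the draft name here is purely illustrative):
  (virtual) $ wget -P data/developers/ietf-ftp/internet-drafts/ https://www.ietf.org/archive/id/draft-ietf-example-protocol-00.txt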
- Alternatively, you can use rsync to fetch all current files. Warning: this takes about 3 GB:
  # only if you have the space and time:
  (virtual) $ rsync -avz rsync.ietf.org::developers/ data/developers/
  Note that the settings_local.py provided by this container forces the draft repository and draft archive to be the same directory. Feel free to separate them if you are already keeping a full copy of the draft archive separately (or want to mirror it into data from rsync.ietf.org::id-archive (about 6 GB)).
- Depending on what you are going to work on, you may later need to obtain the dot and pyang binaries and tell the datatracker where they are by adding this to your settings_local.py (paths here are examples only):
  DOT_BINARY = '/opt/local/bin/dot'
  UNFLATTEN_BINARY = '/opt/local/bin/unflatten'
  PS2PDF_BINARY = '/opt/local/bin/ps2pdf'
  SUBMIT_PYANG_COMMAND = ('/opt/local/Library/Frameworks/Python.framework/Versions/2.7/bin/pyang'
                          ' -p %(workdir)s --verbose --ietf %(model)s')
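  Before adding these settings, you may want to check whether the binaries are already available inside the container (paths will differ from the examples above):
  (virtual) $ which dot unflatten ps2pdf pyang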
You can now start working on Datatracker code.
- Pick something to work on. This could be any of the following:
  - something you miss in the datatracker, and would like to add
  - fixing a bug that irks you in particular
  - fixing a bug or implementing an idea from the curated list of EnhancementIdeas
  Once you have picked something, please add it to the table on the sprint wiki page (e.g., IETF105Sprint) so that other people will know not to pick the same thing to work on, or will know to contact you in order to cooperate on it.
- Do the coding. Talk with your fellow sprinters about what you're doing. Ask for advice as needed. Whenever possible, please add tests to cover the code you've added.
- Run the test suite to make sure that it still passes:
  (virtual) $ ./ietf/manage.py test --settings=settings_sqlitetest
- You can also run individual tests while you develop, by adding the path to the class and test case as an argument:
  (virtual) $ ./ietf/manage.py test ietf.meeting.tests_views.IphoneAppJsonTests.test_iphone_app_json --settings=settings_sqlitetest
- Commit your code to your branch. Provide a commit message which can be used in the changelog and announcement of what's gone into a release. Indicate which bug your code fixes, using the syntax described in SvnTracHooks -- typically "Fixes issue #1234". Also, if your commit is ready to merge, indicate this with the phrase "Commit ready for merge". The merge request phrases are also described in [CodeRepository](https://github.com/ietf-svn-conversion/datatracker/wiki/CodeRepository). Some examples:
  svn commit ietf/foo/bar.py -m "Changed to use the right fozboz function instead of frooboz.\
      Fixes issue #42. Commit ready for merge."
  svn commit ietf/baz.py -m "Partially addresses issue #43, removing redundant code. Also related to #44."