Historic: NativeSetup in the Python 2.7 era
**This page provides instructions for setting up a development environment directly on your hardware. If you are interested in getting up and running quickly, consider following the instructions at SprintCoderSetup instead. SprintCoderSetup is now the recommended way to set up.**
You will need:
- A 64-bit operating system
- Python 2.7
- pip
- Subversion
- MySQL 5.x with its development package (libmysqlclient-dev on Debian, mysql-devel on CentOS)
- For idnits: gawk (not nawk)
The workflow is roughly as follows:
- Check your email for a notification about the personal branch that has been created for you.
- Check out your personal branch. You'll do something like this:
$ svn co http://svn.tools.ietf.org/svn/tools/ietfdb/personal/<<yourname>>/<<version>>
- Add a settings_local.py (described below at "Provide your local settings"). Set the value of ARCHIVE_PATH in that file to the place you want to store files used by the tool. (In the example below, we assume ARCHIVE_PATH = '/a'.)
- (Optional) Set up a python virtual sandbox; see "Using a virtual python environment" below.
- Install required python modules. These are listed in the requirements.txt file at the top of your repository checkout (on Debian you might need to install pip from python-pip first):
$ pip install -r requirements.txt
- Follow the directions in "Getting a copy of the files the datatracker uses" below to put a copy of the needed files in sub-directories of /a.
- Follow the instructions at the link in "Setting up a local database" below. Note that the remote database (zinfandel) is not currently working, so you must create a local copy of the database from a dump before proceeding.
- Create /a/postfix and run ./ietf/bin/generate-draft-aliases and ./ietf/bin/generate-wg-aliases.
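For example, assuming ARCHIVE_PATH = '/a' and that you run the scripts from the top of your checkout (this is only a sketch; the scripts may expect additional arguments or environment):
$ mkdir -p /a/postfix
$ ./ietf/bin/generate-draft-aliases
$ ./ietf/bin/generate-wg-aliases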
- Download a copy of idnits from http://tools.ietf.org/tools/idnits/ and tar -xvf it into /a.
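For example (the download location and version number below are only illustrative; use whatever tarball you actually downloaded, and point IDSUBMIT_IDNITS_BINARY in settings_local.py at the result):
$ cd /a
$ tar -xvf ~/Downloads/idnits-2.13.02.tgz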
- Run the test suite before making any other changes! You should have no failures. If you do, you are probably missing a dependency, and you won't be able to check the work you are about to do. See "Run the tests" below.
- Pick something to work on. This could be any of the following:
  - something you miss in the datatracker and would like to add
  - fixing a bug that irks you in particular
  - fixing a bug or implementing an idea from the curated list of EnhancementIdeas
- Do the coding. Ask for advice as needed. Whenever possible, please add tests to cover the code you've added.
- Run the test suite to make sure that it still passes:
v42.42/ $ ietf/manage.py test --settings=settings_sqlitetest
- Commit your code to your branch. Provide a commit message which can be used in the changelog and announcement of what's gone into a release. Indicate which bug your code fixes, using the syntax described in SvnTracHooks -- typically "Fixes issue #1234". Also, if your commit is ready to merge, indicate this with the phrase "Commit ready for merge". The merge request phrases are also described in [CodeRepository](https://github.com/ietf-tools/datatracker/wiki/CodeRepository). Some examples:
svn commit ietf/foo/bar.py -m "Changed to use the right fozboz function instead of frooboz. Fixes issue #42. Commit ready for merge."
svn commit ietf/baz.py -m "Partially addresses issue #43, removing redundant code. Also related to #44."
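When preparing a commit, it can also help to first review exactly what you are about to check in, using standard Subversion commands (the file name below is just the example used above):
$ svn status
$ svn diff ietf/foo/bar.py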
Provide your local settings
Create a file named settings_local.py in the ietf directory of your SVN checkout and put these lines in it:
# configuration to use local copy of database
DATABASES = {
'default': {
'NAME': 'ietf_utf8',
'ENGINE': 'django.db.backends.mysql',
'USER': 'django',
'PASSWORD': 'selectapassword',
'HOST': '127.0.0.1'
},
}
# Since the remote database is currently not working (10/27/15),
# this section is commented out.
#DATABASES = {
# 'default': {
# 'NAME': 'ietf_utf8',
# 'ENGINE': 'django.db.backends.mysql',
# 'USER': 'django',
# 'PASSWORD': '??????', # Contact [email protected] to get the password
# 'HOST': 'zinfandel.tools.ietf.org'
# },
#}
# Since the zinfandel database above is read-only, you also need to have a
# different session backend in order to avoid exceptions due to attempts
# to save session data to the readonly database:
# NOTE: you should omit this if you are using a local database
# SESSION_ENGINE = "django.contrib.sessions.backends.cached_db"
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.dummy.DummyCache',
}
}
# If you are using a remote database, you may want to enable memcached instead:
#CACHES = {
# 'default': {
# 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
# 'LOCATION': '127.0.0.1:11211',
# }
#}
SERVER_MODE = 'development'
DEBUG = True
# If you need to debug email, you can start a debugging server that just
# outputs whatever it receives with:
# python -m smtpd -n -c DebuggingServer localhost:1025
EMAIL_HOST = 'localhost'
EMAIL_PORT = 1025
EMAIL_HOST_USER = None
EMAIL_HOST_PASSWORD = None
# Depending upon what cases you work on, the
# server may need to store files locally. Configuration is
# required to specify the location.
# You should create the subdirectories if they don't already exist.
# You can get a local copy of idnits using the download link at
# http://tools.ietf.org/tools/idnits/
ARCHIVE_PATH = '<local path to where you want to keep the files below>'  # for example '/a'
INTERNET_DRAFT_PATH = '%s/devsync/ietf-ftp/internet-drafts' % ARCHIVE_PATH
INTERNET_DRAFT_ARCHIVE_DIR = INTERNET_DRAFT_PATH
IDSUBMIT_REPOSITORY_PATH = INTERNET_DRAFT_PATH
IDSUBMIT_STAGING_PATH = "%s/devsync/www6s/staging/" % ARCHIVE_PATH
RFC_PATH = '%s/devsync/ietf-ftp/rfc/' % ARCHIVE_PATH
IESG_WG_EVALUATION_DIR = '%s/devsync/www6/iesg/evaluation' % ARCHIVE_PATH
IETFWG_DESCRIPTIONS_PATH = '%s/devsync/www6s/wg-descriptions' % ARCHIVE_PATH
IPR_DOCUMENT_PATH = '%s/devsync/ietf-ftp/ietf/IPR' % ARCHIVE_PATH
AGENDA_PATH = '%s/devsync/www6s/proceedings' % ARCHIVE_PATH
AGENDA_PATH_PATTERN = AGENDA_PATH + '/%(meeting)s/agenda/%(wg)s.%(ext)s'
CHARTER_PATH = '%s/devsync/ietf-ftp/charter/' % ARCHIVE_PATH
STATUS_CHANGE_PATH = '%s/devsync/ietf-ftp/status-changes/' % ARCHIVE_PATH
CONFLICT_REVIEW_PATH = '%s/devsync/ietf-ftp/conflict-reviews' % ARCHIVE_PATH
NOMCOM_PUBLIC_KEYS_DIR = '%s/nomcom_keys/public_keys' % ARCHIVE_PATH
IDSUBMIT_IDNITS_BINARY = '%s/idnits-2.13.02/idnits' % ARCHIVE_PATH # Change to another location of idnits as needed
DRAFT_ALIASES_PATH = "%s/postfix/draft-aliases" % ARCHIVE_PATH
DRAFT_VIRTUAL_PATH = "%s/postfix/draft-virtual" % ARCHIVE_PATH
(There will already be a settings.py file in that directory that imports settings_local for you. Configuring the CACHES setting above will significantly speed up development when using a remote database backend.)
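If you want to confirm that settings_local.py is actually being picked up, one quick way (not part of these instructions, just a standard Django management command) is to inspect the effective settings, for example:
$ ietf/manage.py diffsettings | grep ARCHIVE_PATH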
Setting up a local database
See the SprintDatabase page. Remember to update your DATABASES dictionary in settings_local.py.
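The details are on the SprintDatabase page, but very roughly (the dump filename below is an assumption; the database name, user and password match the DATABASES example above), loading a dump into a local MySQL looks something like:
$ mysqladmin -u root -p create ietf_utf8
$ mysql -u root -p -e "GRANT ALL ON ietf_utf8.* TO django@localhost IDENTIFIED BY 'selectapassword';"
$ gunzip < ietf_utf8.sql.gz | mysql -u django -pselectapassword ietf_utf8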
Getting a copy of the files the datatracker uses
In addition to the database, the Datatracker looks in the local filesystem for things like drafts, RFCs, IPR disclosures, and proceedings. If you don't have these files, you will see something like "Unable to open ..." when looking at a draft's main page, for example. You can easily get a copy of these files using
$ rsync -avz rsync.ietf.org::developers ./devsync
You could provide an explicit path rather than ./ above.
The suggested settings_local configuration for local files above is crafted to use the result.
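For example, with ARCHIVE_PATH = '/a' as in the settings above, run the rsync so that the files end up under /a/devsync:
$ cd /a && rsync -avz rsync.ietf.org::developers ./devsync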
Note that this set of files is large; as of March 2014, it takes about 3 GB.
In addition, the tests require that the devsync/www6s/proceedings/ directory exists. You do not need to download them all, just make sure the directory exists.
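If you skipped the proceedings download, just create the (empty) directory so the tests can find it (assuming ARCHIVE_PATH = '/a' as above):
$ mkdir -p /a/devsync/www6s/proceedings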
If you need local copies of past proceedings (for example, if you're working on the proceedings pages), you can get them with
$ rsync -avz rsync.ietf.org::devprocs ./devsync/www6s/proceedings/
But be aware these are very large.
Using a virtual python environment
It is possible to set up a virtual Python environment for your project coding. This will let you install the python modules the project needs without sudo, and without running into a situation where the project requires newer versions of some modules than other projects do.
The toolkit that does this for python is called virtualenv, and you install it as usual, for instance with pip:
$ pip install virtualenv
$ cd <<version>>
$ virtualenv env
$ source env/bin/activate
If you want to read more, there's the virtualenv documentation.
Install required python modules
The Datatracker uses a number of external library modules, in addition to the standard python library. These are listed in the requirements.txt file at the top of your repository checkout. They can be installed by doing
$ pip install -r requirements.txt
or possibly
$ sudo pip install -r requirements.txt
If you are running on Windows (Cygwin is highly recommended in general), you may need to download the "MySQL Connector C 6.0.2 32-bit version" to resolve a dependency on config-win.h. It can be found at https://dev.mysql.com/downloads/connector/c/6.0.html.
Run the tests
Before doing anything else, while your current directory is the one above the ietf directory, make sure that
$ ietf/manage.py test --settings=settings_sqlitetest
or
$ ietf/manage.py test --settings=settings_sqlitetest -v 3 # for more verbose output
runs to completion and reports no errors. If you see any errors or failures, you probably have a configuration problem or are missing a required python module. If it's not obvious what to do to correct the problem, please ask for help. The Troubleshooting page may provide some hints. Once you resolve the problem, consider adding a new hint on that page.
A specific subset of test cases containing a failure can be re-run without re-running all of the test cases using (for example):
$ ietf/manage.py test --settings=settings_sqlitetest -v 3 ietf.submit.tests.SubmitTests
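Similarly, assuming the usual Django test-label behaviour, you can limit the run to a single app, for example:
$ ietf/manage.py test --settings=settings_sqlitetest ietf.submit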
(cabo -- unverified (WFM):) The test is likely to fail because pyquery isn't installed. Don't try to install it from MacPorts (py27-pyquery) -- this leads to "'XPathExpr' object has no attribute 'add_post_condition'". Instead try "pip install --user git+git://github.com/gawel/pyquery.git".
Run the server
Once the tests pass, you can try the datatracker in a browser by starting the development server:
$ python manage.py runserver
You can sign in with any valid username using the password 'password'. Signing out will allow you to sign in with a different username. Trying your new code out as people with different roles will help find issues early.