AmPA/WMT RTO Hackathon Project Template

This repository template is intended to help participants get started working with the data for the American Pets Alive! and Walmart Return to Owner Hackathon.

Key Contacts


Rules at a Glance


  • Maximum Team Size: 5
  • Days to Complete: 2.5

License and Legal


License: CC BY-NC-ND 4.0

Data related to this work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

License: GPL v3

Software and/or code related to this work is licensed under the GNU General Public License, version 3 (GPLv3).

Problem Statements


See APA RTO Hackathon 2022.docx or APA RTO Hackathon 2022.pdf for problem statements and instructions.

If you'd like to jump straight to the data, it can be found here.

Recommended Base Environment


To create and activate the recommended base environment after installing Anaconda, run:

conda create -n rtohack python=3.9
conda activate rtohack

Recommended Packages


For Python users, recommended packages are listed in requirements.txt and can be installed via:

pip install -r requirements.txt
pip install -r {{ your_project.repo_name }}/requirements.txt

Project Structure


The structure here is only a starting point; you may make whatever changes you deem appropriate. Your README is very important, as it is the first pass judges will use to determine whether your work proceeds to the next round.

├── LICENSE
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (pip install -e .) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │                     predictions
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
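
As a quick illustration of how code under src is intended to interact with the data folders, here is a minimal sketch that reads a raw export from data/raw, applies a trivial cleaning step, and writes the result to data/processed. It assumes pandas is installed (check requirements.txt) and uses a hypothetical file name, animals.csv; adapt both to the actual hackathon data.

# src/data/make_dataset.py -- hypothetical module; file names are placeholders
from pathlib import Path

import pandas as pd

RAW_DIR = Path("data/raw")
PROCESSED_DIR = Path("data/processed")

def build_dataset(raw_name: str = "animals.csv") -> Path:
    """Read a raw export, drop duplicate rows, and save the cleaned copy."""
    df = pd.read_csv(RAW_DIR / raw_name)
    df = df.drop_duplicates().reset_index(drop=True)

    PROCESSED_DIR.mkdir(parents=True, exist_ok=True)
    out_path = PROCESSED_DIR / raw_name
    df.to_csv(out_path, index=False)
    return out_path

if __name__ == "__main__":
    print(f"Wrote {build_dataset()}")

After running pip install -e . against the provided setup.py, the same logic can be imported from a notebook, e.g. from src.data.make_dataset import build_dataset (module name hypothetical).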

Judging Criteria

Judging is performed in two phases:

  • First-Pass Qualification
  • Winner Selection

First-Pass Qualification

Because of the large number of submissions, first-pass qualification will be performed by all judges on the submitted GitHub repositories (forked or cloned from this repo). Judges will review the README.md in the root of the repo for the following basic criteria:

  • Challenge Category Selection (i.e. Data Science, Software, Innovation)
  • Problem Statement (i.e. a description of the problem the team decided to address)
  • Solution Description (i.e. a description of what you did)
  • Working Demo (i.e. something you can show)
  • Visuals/Screenshots (i.e. illustrating what you did - can be in place of a demo if needed)
  • List of Tools and Technology Used

If any of the above are missing, your solution will not advance to the next round of judging. Minor exceptions will be made when demos, visuals, or tool lists are deemed not relevant to the solution proposal.

Bonus factors that may make up for deficits in the above include:

  • Interestingness of visuals
  • Novelty of problem statement/solution
  • Interestingness of tool usage

See SAMPLE_SUBMISSION_README.md for an example of a submission readme.

Winner Selection

Winners will be selected from those that pass the First-Pass Qualification round. The following prizes and award categories will be available:

  • Best Overall: Oculus Quest 2 (one per team member)
  • Most Usable: MP Cadet 3D Printer (one per team member)
  • Most Creative: DJI Tello Drone (one per team member)
