Link to the complete code zip file (too large for GitHub; includes the application and best weights): https://www.dropbox.com/s/fkcuroah69451hg/pole_AI_code_revised.zip?dl=0
Important:
- Demonstrative video link: https://youtu.be/elm1BVzHhr8
- Note that the notebooks were configured to run in Google Colab. To run them in Jupyter, edit accordingly; outputs should nevertheless be visible.
- The full application code from pyp is not included, as it largely reproduces https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart with the landmarks file inserted and the classes changed to the 3 pole moves (Butterfly, Scissor-sit, Superman). The built .apk file is included: to try it, download it to your smartphone, run it, and test it on the 3 moves shown on YouTube.
- The 'assess video' notebook contains explicit frames from videos, matched to the metric results. Due to GitHub limitations, its output has been removed; please contact me to view it.
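The app's move classification (as in the ML Kit pose samples it builds on) works by comparing detected pose landmarks against labelled samples. A minimal k-NN sketch of that idea in Python; the CSV layout, function names, and `k` value are illustrative assumptions, not the app's actual code:

```python
import csv
import math
from collections import Counter

MOVES = ["Butterfly", "Scissor-sit", "Superman"]  # the 3 pole moves

def load_samples(path):
    """Load labelled landmark samples.
    Assumed CSV layout (for illustration): label, x0, y0, x1, y1, ..."""
    samples = []
    with open(path) as f:
        for row in csv.reader(f):
            samples.append((row[0], [float(v) for v in row[1:]]))
    return samples

def classify(landmarks, samples, k=5):
    """Return the move whose k nearest labelled samples dominate the vote."""
    dists = sorted((math.dist(landmarks, vec), label) for label, vec in samples)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]
```

This is only a sketch of the nearest-neighbour approach; the shipped .apk uses the ML Kit quickstart's own classification pipeline.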
The following describes the full Dropbox code; large files, such as the application and best weights, are absent from this repository.
Contains resources for pole move recognition. Included:
- Landmarks Obtained from collected sample data.
- A folder containing model assessment of landmarks (notebooks).
- Notebook used to create landmarks from pole data (a variation of https://colab.research.google.com/drive/19txHpN8exWhstO6WVkfmYYVC6uug_oVR).
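The landmark-creation notebook above is a variation of MediaPipe's pose classification Colab, which typically normalizes landmarks before they are compared so that classification is invariant to position and scale. A pure-Python sketch of such a normalization step; the hip indices follow MediaPipe Pose, but the exact scheme here is an illustrative simplification, not the notebook's code:

```python
import math

def normalize_landmarks(points, left_hip=23, right_hip=24):
    """Translate 2-D landmarks so the hip centre is the origin, then scale
    by the maximum distance from that centre (illustrative simplification)."""
    cx = (points[left_hip][0] + points[right_hip][0]) / 2
    cy = (points[left_hip][1] + points[right_hip][1]) / 2
    shifted = [(x - cx, y - cy) for x, y in points]
    scale = max(math.hypot(x, y) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]
```

After normalization, two samples of the same move photographed at different distances from the camera yield comparable landmark vectors.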
Contains resources from the pole versus porn examination. Included:
- Model configuration notebooks, structured as per report model iterations.
- Note that the 'assess video' notebook contains no output due to GitHub's rules on pornographic content.
- The best weights used for video assessment, included in a separate folder.
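Video assessment ultimately turns per-frame predictions into a per-video result. The actual method lives in the 'assess video' notebook and is not reproduced here; purely as an illustration of the general idea, per-frame labels are often smoothed over a sliding window before a video-level majority vote (function name and window size are assumptions):

```python
from collections import deque, Counter

def video_label(frame_labels, window=5):
    """Smooth each frame's label by majority vote over a sliding window,
    then return the label winning the most smoothed frames (illustrative)."""
    recent = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        recent.append(label)
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return Counter(smoothed).most_common(1)[0][0]
```

Smoothing like this suppresses single-frame misclassifications, which matters when pose detection flickers between moves mid-video.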