Latency on embedded platforms - Experiences #221
Unanswered
becocabana asked this question in Q&A
Hi everyone,
I was wondering if anyone is having success running low-latency models on e.g. a Jetson Nano or a Raspberry Pi.
All my attempts so far end up with a latency of around 300-400 ms, which makes most live applications rather unappealing.
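As a rough way to separate model compute time from audio-driver buffering, here is a minimal timing sketch; it assumes a TorchScript export of the model whose forward() maps an audio buffer to an audio buffer (the file name and block size are placeholders to adapt to your setup):

```python
# Minimal sketch: time the model's forward pass in isolation, so that
# pure compute cost can be compared against audio-buffer latency.
import time
import torch

model = torch.jit.load("model.ts").eval()  # placeholder path to your export

BLOCK = 2048                  # samples per call; match your audio callback size
x = torch.zeros(1, 1, BLOCK)  # silent input, shape (batch, channel, time)

with torch.no_grad():
    for _ in range(5):        # warm-up iterations
        model(x)
    t0 = time.perf_counter()
    runs = 50
    for _ in range(runs):
        model(x)
    avg = (time.perf_counter() - t0) / runs

print(f"avg forward: {avg * 1e3:.1f} ms per block "
      f"(block holds {BLOCK / 44100 * 1e3:.1f} ms of audio at 44.1 kHz)")
```

If the average forward time exceeds the audio duration of one block, the device simply cannot keep up in real time, and no amount of buffer tuning will help.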
I've tried various configurations; the latest one was spherical, with:
CAPACITY = 16
LATENT_SIZE = 128
N_BAND = 16
PHASE_1_DURATION = 1000000
RATIOS = [4, 4, 4, 2]
SAMPLING_RATE = 44100
Is that still far too big overall?
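For what it's worth, here is a back-of-the-envelope check of the frame latency those settings imply, assuming (as in RAVE) that one latent frame spans N_BAND * prod(RATIOS) input samples:

```python
# Rough estimate of the intrinsic per-frame latency of the config above,
# under the assumption that one latent frame covers N_BAND * prod(RATIOS)
# samples of audio.
from math import prod

N_BAND = 16
RATIOS = [4, 4, 4, 2]
SAMPLING_RATE = 44100

hop = N_BAND * prod(RATIOS)  # 16 * 128 = 2048 samples per latent frame
print(f"{hop} samples = {1000 * hop / SAMPLING_RATE:.1f} ms per frame")
```

If that reasoning holds, the architecture itself only accounts for roughly 46 ms, so most of the 300-400 ms would come from compute time and audio buffering rather than from the config being too big.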
I would also be happy to share models with the community, though my current ones all seem rather bad to me.
Thanks
(Here are the specifics of my Jetson Nano setup, if anyone cares: https://bennoheisel.de/1-2/ I just noted the steps for my own use, so take it with a grain of salt.)
Reply:

Raspberry Pi is probably not the best option for real-time machine-learning inference. I have been trying to run my model on a Bela + BeagleBone, but there seems to be a PyTorch compatibility issue when compiling nn~. See my forum entry here: https://forum.bela.io/d/2965-installing-nn-external-for-rave-models/2 For a hardware comparison, check out: https://www.nime.org/proceedings/2016/nime2016_paper0005.pdf