Why choose to not use numpy? #4
Comments
Hi @jonasvdd, are you able to submit the PR for the numpy vectorized version? Thank you.
Sure!
Just a few questions. The current signature is:

```python
def largest_triangle_three_buckets(data, threshold):
    """
    Return a downsampled version of data.

    Parameters
    ----------
    data: list of lists/tuples
        data must be formatted this way: [[x,y], [x,y], [x,y], ...]
        or: [(x,y), (x,y), (x,y), ...]
    threshold: int
        threshold must be >= 2 and <= the length of data

    Returns
    -------
    data, but downsampled using threshold
    """
```

Should the numpy version instead use:

```python
def largest_triangle_three_buckets(x, y, threshold):
    """
    ....

    Parameters
    ----------
    x: list | np.ndarray
    ....
    """
```

And how should I add the numpy dependency? (via
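If the original `data`-of-pairs signature were kept, the numpy version could split the input into columns internally. A minimal sketch of such a conversion; the helper name `to_xy` is my own, not from the repo:

```python
import numpy as np

def to_xy(data):
    """Split [[x, y], ...] or [(x, y), ...] input into two 1-D float arrays.

    Hypothetical helper: lets a numpy implementation keep the repo's
    original list-of-pairs signature while operating on columns.
    """
    arr = np.asarray(data, dtype=float)  # shape (n, 2)
    return arr[:, 0], arr[:, 1]
```

This keeps the public API unchanged and confines the numpy conversion to one place.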
Hi, really cool repo 🚀
I am 100% sure that this code can be significantly sped up using numpy vectorisation.
I was able to write a pure numpy implementation that downsamples 50,000,000 -> 2,000 data points in ~300 ms (on a consumer PC).
If interested, I can quickly make a PR that puts the numpy version of the algorithm next to yours.
Cheers,
Jonas
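For context, a vectorised Largest-Triangle-Three-Buckets implementation typically looks roughly like the sketch below. This is not the repo's code or the PR in question, just an illustration of where numpy helps: the triangle areas for all candidates in a bucket are computed in one vectorised expression, while a Python loop over buckets remains because each selection depends on the previously chosen point.

```python
import numpy as np

def lttb_downsample(x, y, threshold):
    """Largest-Triangle-Three-Buckets downsampling (illustrative sketch).

    x, y : 1-D arrays of equal length; threshold : number of output points.
    """
    n = len(x)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    if threshold >= n or threshold < 3:
        return x, y  # nothing to downsample
    # threshold - 2 interior buckets between the fixed first and last points
    bounds = np.linspace(1, n - 1, threshold - 1).astype(np.int64)
    idx = np.empty(threshold, dtype=np.int64)
    idx[0], idx[-1] = 0, n - 1
    a = 0  # index of the previously selected point
    for i in range(threshold - 2):
        lo, hi = bounds[i], bounds[i + 1]
        # Mean of the next bucket (or the final point) is the third vertex.
        if i < threshold - 3:
            cx = x[bounds[i + 1]:bounds[i + 2]].mean()
            cy = y[bounds[i + 1]:bounds[i + 2]].mean()
        else:
            cx, cy = x[-1], y[-1]
        # Vectorised triangle areas for every candidate point in this bucket.
        areas = np.abs((x[a] - cx) * (y[lo:hi] - y[a])
                       - (x[a] - x[lo:hi]) * (cy - y[a]))
        a = lo + np.argmax(areas)  # keep the point forming the largest triangle
        idx[i + 1] = a
    return x[idx], y[idx]
```

The per-bucket `areas` expression is where the speedup over a pure-Python point loop comes from.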