
Bump maturin version: 0.14 -> 1.1 #52

Merged
merged 2 commits into from
Jul 21, 2023
Conversation

TomaSajt (Contributor)

This PR bumps the maturin version from 0.14 to 1.1.

It also replaces the usage of [package.metadata.maturin].name with [tool.maturin].module-name (the former has been deprecated since maturin 0.15).

I made this change because I'm trying to package this Python package for nixpkgs, which uses a newer version of maturin, so I had to patch the pyproject.toml with the output path. This PR is just that patch as a commit.
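For context, the rename is a small config move: the deprecated key lived in Cargo.toml, while its replacement lives in pyproject.toml. A hedged sketch (the module name `my_package._rust` is a placeholder, not this repo's actual module name):

```toml
# Cargo.toml — deprecated since maturin 0.15, removed by this PR:
# [package.metadata.maturin]
# name = "my_package._rust"

# pyproject.toml — the replacement used in this PR:
[build-system]
requires = ["maturin>=1.1,<2.0"]
build-backend = "maturin"

[tool.maturin]
module-name = "my_package._rust"
```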

codspeed-hq bot commented Jul 14, 2023

CodSpeed Performance Report

Merging #52 will degrade performance by up to 76.06%

Comparing TomaSajt:main (de0be35) with main (33a7cfa)

🎉 Hooray! pytest-codspeed just leveled up to 2.0.0!

A heads-up, this is a breaking change and it might affect your current performance baseline a bit. But here's the exciting part - it's packed with new, cool features and promises improved result stability 🥳!
Curious about what's new? Visit our releases page to delve into all the awesome details about this new version.

Summary

🔥 1 improvement
❌ 15 regressions
✅ 374 untouched benchmarks

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Benchmarks breakdown

| Benchmark | main | TomaSajt:main | Change |
|---|---|---|---|
| test_minmax_no_x[False-float32-100-100,000] | 794 µs | 1,615.9 µs | -50.86% |
| test_minmax_no_x[False-float64-100-100,000] | 1.3 ms | 1.9 ms | -30.15% |
| test_minmax_no_x[False-int32-100-100,000] | 1.5 ms | 3.8 ms | -60.7% |
| test_minmax_no_x[False-int64-100-100,000] | 2 ms | 2.5 ms | -20.45% |
| test_minmax_no_x[True-float32-100-100,000] | 882.2 µs | 1,169.3 µs | -24.56% |
| test_minmax_with_x[True-float64-1,000-100,000] | 6.7 ms | 8.3 ms | -19.13% |
| test_minmax_with_x[True-int32-1,000-100,000] | 6.9 ms | 8.5 ms | -18.73% |
| test_minmax_with_x[True-int64-1,000-100,000] | 7.4 ms | 9 ms | -17.25% |
| test_m4_with_x[True-float32-1,000-100,000] | 5.7 ms | 7.2 ms | -20.67% |
| test_m4_with_x[True-float64-1,000-100,000] | 6.1 ms | 7.9 ms | -22.71% |
| test_m4_with_x[True-int32-1,000-100,000] | 6.4 ms | 8.1 ms | -20.81% |
| test_m4_with_x[True-int64-1,000-100,000] | 7 ms | 8.7 ms | -19.22% |
| test_minmaxlttb_with_x[True-float64-100-100,000] | 16.6 ms | 69.5 ms | -76.06% |
| test_minmaxlttb_with_x[True-int32-100-100,000] | 16.9 ms | 61.9 ms | -72.73% |
| 🔥 test_minmaxlttb_with_x[True-int32-1,000-100,000] | 154.9 ms | 123.3 ms | +25.62% |
| test_minmaxlttb_with_x[True-int64-100-100,000] | 17.4 ms | 38.5 ms | -54.66% |

jvdd (Member) commented Jul 15, 2023

I don't really understand how this PR might impact performance. Do you have any idea, @TomaSajt? 🙃

TomaSajt (Contributor, Author)

> I don't really understand how this PR might impact performance. Do you have any idea, @TomaSajt?

I don't know why it's slower either.

Maybe there's some breaking change that disables some optimization by default.

https://www.maturin.rs/changelog.html

One guess for the culprit is the new default maturin flag --no-default-features, which somehow (I don't know why it would affect dependencies) disables the nightly_simd feature on argminmax.

Or maybe we just need to use the 0.19 version of pyo3 and numpy.

I don't know what the problem might be, so I'll make some commits to look at the new benchmarks. If something doesn't help, I'll revert it.
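If default-feature resolution were the culprit, one way to rule it out would be to request the cargo feature explicitly instead of relying on defaults. A hedged sketch under that assumption (the version string is a placeholder; whether the pinned argminmax release actually exposes a `nightly_simd` feature is exactly the open question discussed below):

```toml
# Cargo.toml — enable the dependency feature explicitly, so a change in
# how maturin forwards default features cannot silently drop it:
[dependencies]
argminmax = { version = "...", features = ["nightly_simd"] }

# pyproject.toml — maturin also accepts an explicit feature list for the
# local crate, which sidesteps any --no-default-features default:
# [tool.maturin]
# features = ["nightly_simd"]
```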

TomaSajt (Contributor, Author)

I realize now that this repo is still using argminmax 3.1, which doesn't have that flag yet, so I reverted that change. Maybe just changing those package versions will help (but I find it unlikely).

jvdd (Member) commented Jul 21, 2023

Hey @TomaSajt,

I observe that the "regressions" occur only in the 100k-value benchmarks; there are no regressions in the 1M-value benchmarks. I suppose this can be attributed to measurement error rather than a real regression (I observed something similar in #51).

If tests pass, this one is going in!

Thx for submitting this PR 🤝

P.S.: we plan to update the code base to the latest argminmax version (#50), which is the true power behind this library; this will add some more performance improvements and a step away from the (kinda outdated) ndarray dependency :))

jvdd (Member) commented Jul 21, 2023

> 🎉 Hooray! pytest-codspeed just leveled up to 2.0.0!
>
> A heads-up, this is a breaking change and it might affect your current performance baseline a bit. But here's the exciting part - it's packed with new, cool features and promises improved result stability 🥳!
> Curious about what's new? Visit our releases page to delve into all the awesome details about this new version.

Actually, I think it might be due to pytest-codspeed 2.0.

@jvdd jvdd merged commit f97a032 into predict-idlab:main Jul 21, 2023