
Performance and load tests for codespeed #3585

Closed · ViralBShah opened this issue Jun 30, 2013 · 17 comments
Assignees: staticfloat
Labels: building (Build system, or building Julia or its dependencies), performance (Must go faster)

Comments

@ViralBShah (Member)

Now that the codespeed integration for performance testing is falling into place, we should start running a larger number of codes.

To start with, we should make all the benchmarks in perf, perf2, and load run uniformly and produce consistent output. Over time, we can even start including some packages as part of the performance measurement.

@staticfloat What do you think?
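
For concreteness, a uniform harness could be as simple as funneling every suite's timings through one reporting macro. This is a minimal sketch; the macro name, sample count, and CSV field order are illustrative assumptions, not the suite's actual API:

```julia
# A uniform timing harness; macro name, sample count, and output
# format are illustrative assumptions, not the suite's actual API.
macro output_timings(name, ex)
    quote
        local times = Float64[]
        for _ in 1:5                       # first iteration also warms up the JIT
            local t0 = time_ns()
            $(esc(ex))
            push!(times, (time_ns() - t0) / 1e6)   # milliseconds
        end
        # one consistent CSV line per benchmark: suite,name,min,max,mean
        println("julia,", $(esc(name)), ",", minimum(times), ",",
                maximum(times), ",", sum(times) / length(times))
    end
end

# usage: define a workload, then time it
fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)
@output_timings "fib" fib(20)
```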

@staticfloat (Member)

This sounds great to me. Codespeed requires some metadata about each test when uploading (what units are being reported, whether less is better for that metric, and so on), which motivated me to have each test module report that metadata when running the tests. I have two examples uploaded right now, but I'm open to alternative designs.
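
For reference, a single result upload roughly takes the following shape. The field names follow the Codespeed README, but the endpoint URL and every value below are placeholders, and HTTP.jl is just one way to send the form:

```julia
# Sketch of one result upload to Codespeed. Field names follow the
# Codespeed README; the endpoint URL and all values are placeholders.
using HTTP  # assumes the HTTP.jl package is available

payload = Dict(
    "commitid"     => "abc1234",
    "branch"       => "master",
    "project"      => "Julia",
    "executable"   => "julia",
    "benchmark"    => "micro.fib",
    "environment"  => "buildbot",
    "result_value" => "0.42",
    # the per-test metadata discussed above:
    "units"        => "ms",
    "units_title"  => "Time",
    "lessisbetter" => "True",
)

# HTTP.jl url-encodes a Dict body as a form submission
HTTP.post("http://speed.example.org/result/add/"; body=payload)
```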

@StefanKarpinski (Member)

This is fairly obvious, but there's no point in benchmarking the non-Julia code in perf over and over again. We should only run the Julia benchmarks through codespeed.

@staticfloat (Member)

Definitely. If you like, however, we can measure the non-Julia code once and provide it as a baseline for certain benchmarks.
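
The mechanics could look like this sketch: time the C reference once, store it, and report each Julia run as a ratio against that fixed baseline. The file name, its one-line-per-benchmark CSV format, and the benchmark name are all made up for illustration:

```julia
# Hypothetical sketch: the C timings are recorded once into a CSV file,
# then each Julia run is reported relative to that stored baseline.
const BASELINE_FILE = "c_baselines.csv"   # e.g. lines like "fib,0.025"

function load_baselines(path::AbstractString)
    baselines = Dict{String,Float64}()
    for line in eachline(path)
        name, t = split(line, ',')
        baselines[name] = parse(Float64, t)
    end
    return baselines
end

baselines = load_baselines(BASELINE_FILE)
julia_time = 0.031                        # stand-in for a measured Julia timing
ratio = julia_time / baselines["fib"]     # 1.0 means parity with C
println("fib: ", round(ratio, digits = 2), "x C")
```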

@StefanKarpinski (Member)

Yes, that seems like a good idea. We really only need to know how we're doing relative to C, which is nice because that's the benchmark that takes the least time. We certainly don't want to be running the Octave benchmark every time.

@IainNZ (Member) commented Jul 1, 2013

A second, less glamorous and somewhat tedious task is to work back through the closed performance-related issues and make sure we have test coverage for them, so we avoid reintroducing those regressions.

@mlubin (Member) commented Jul 2, 2013

@IainNZ, just did this with #3598.

+1 for including packages as well.

@staticfloat (Member)

@StefanKarpinski is the C code you're referring to just what is contained within the perf/ directory now?

$ find . -name '*.c'
./micro/perf.c
./kernel/laplace/cilk_laplace.c
./kernel/laplace/c_laplace_parallel_update.c
./kernel/laplace/c_laplace_parallel_update_pointer.c
./kernel/laplace/c_laplace.c
./kernel/go_benchmark.c
./kernel/ziggurat.c

@StefanKarpinski (Member)

We don't actually have C versions of all of these benchmarks. Maybe that's ok, or maybe we should write C versions of everything. Generally, I like to use C as the gold standard, but writing all of those is a lot of tedious work.

@staticfloat (Member)

So when you refer to the C code and the Octave benchmark, what exactly are you referring to?

@StefanKarpinski (Member)

I meant the home page microbenchmarks that we have versions of in seven languages. The Octave benchmark takes a really long time to run.

@ViralBShah (Member, Author)

I have been thinking of having a base comparison, which could be either C or Matlab. It is not fun to write the cat benchmark in C, and it would not even be a meaningful comparison. All the shootout benchmarks do have C versions available; I have to check whether they have Matlab versions too.

@mlubin (Member) commented Jul 9, 2013

I don't think it's critical to have C versions of all of the benchmarks, although some would be nice. The main point of this is to prevent performance regressions and track improvements, no?

@StefanKarpinski (Member)

Yes, that's true. The main point of having C versions is to know how good we could get if we really nail it. Certainly for things like hcat and vcat, it doesn't make much sense to have C versions.

@ghost assigned staticfloat Jul 13, 2013
@staticfloat (Member)

The way it's set up right now, the number of branches getting run through codespeed is pretty crazy. The "Changes" and "Timeline" views can only track master, so they aren't a problem, but if you click on "Comparison", you'll see the absolute mass of flavor/branch pairings that get created.

Does it make sense to maintain a whitelist of branches we will track? Something like only master and release-0.1?
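
In code, the whitelist could be a one-line membership check before uploading. A minimal sketch, with the branch set taken from the two examples above:

```julia
# Sketch of the branch whitelist idea; the set below just lists the two
# examples from this thread.
const TRACKED_BRANCHES = Set(["master", "release-0.1"])

should_upload(branch::AbstractString) = branch in TRACKED_BRANCHES

for branch in ("master", "some-feature-branch", "release-0.1")
    println(branch, " => ", should_upload(branch) ? "upload" : "skip")
end
```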

@ViralBShah (Member, Author)

Yes, we should only track master and the various release branches.

@IainNZ (Member) commented Jul 17, 2013

I was trawling through julia-users trying to find examples of code that would make for good tests, but I was struggling a bit to navigate all the existing tests to avoid duplication, especially the micro ones. Perhaps another to-do is to have a catalog of the tests?

@staticfloat (Member)

I'm going to go ahead and close this; we can discuss further codespeed work on the mailing list or in more focused issues.
