simple lock for threads_used update #321

Closed
wants to merge 4 commits

Conversation

lmiq (Contributor) commented Jul 4, 2024

Simple fix for #317 and #232

This PR has the same effect as #320 but is a simpler modification of the original code: I just added a lock around the update of threads_used. That update occurs at most twice, so the lock is very unlikely to matter for performance.
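For context, the change amounts to something like the following sketch (not the exact diff of this PR; the function name and the lock variable here are illustrative):

# Sketch: protect the rare update of `threads_used` with a ReentrantLock, so that
# concurrent `next!` calls from different threads cannot race on the vector.
function update_threads_used!(threads_used::Vector{Int}, plock::ReentrantLock)
    tid = Threads.threadid()
    lock(plock) do
        # each thread is recorded at most once, so the lock is taken very rarely
        tid in threads_used || push!(threads_used, tid)
    end
    return threads_used
end

In the package, threads_used is a field of the progress meter; wherever the lock ends up living, the point is only that it is acquired a bounded number of times per meter, so it should not slow down next! in the hot loop.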

@lmiq lmiq mentioned this pull request Jul 4, 2024
@lmiq lmiq marked this pull request as ready for review July 4, 2024 12:21
lmiq (Contributor, Author) commented Jul 4, 2024

This function (which is used in a test that is failing on both master and this updated version) should measure the cost of the lock. I don't see any measurable performance difference between the versions:

With this PR:

julia> function prog_perf(n)
           prog = Progress(n)
           x = 0.0
           for i in 1:n
               x += rand()
               next!(prog)
           end
           return x
       end
prog_perf (generic function with 1 method)

julia> @elapsed(prog_perf(10^7))
Progress: 100%|█████████████████████████████████████████████████████| Time: 0:00:04
4.229378385

julia> @elapsed(prog_perf(10^7))
Progress: 100%|█████████████████████████████████████████████████████| Time: 0:00:04
4.185957772

With current stable release:

julia> @elapsed(prog_perf(10^7))
Progress: 100%|█████████████████████████████████████████████████████| Time: 0:00:04
4.125580045

julia> @elapsed(prog_perf(10^7))
Progress: 100%|█████████████████████████████████████████████████████| Time: 0:00:04
4.197386729

@lmiq lmiq mentioned this pull request Jul 4, 2024
MarcMush (Collaborator) commented Jul 4, 2024

Can you test the performance when always locking? I would be in favor of the no_lock keyword. It would also fix @async (#253), which cannot be detected.
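For illustration, an opt-out keyword could look roughly like this standalone sketch (hypothetical names only, not ProgressMeter's actual API; the option was actually added in #322):

# Hypothetical sketch: lock by default (this also covers @async tasks, which the
# package cannot detect), and let the caller opt out only when there is provably
# no concurrency, e.g. a plain sequential loop.
struct DemoMeter
    counter::Base.RefValue{Int}
    plock::ReentrantLock
end
DemoMeter() = DemoMeter(Ref(0), ReentrantLock())

function demo_next!(m::DemoMeter; no_lock::Bool = false)
    if no_lock
        m.counter[] += 1              # caller guarantees there is no data race
    else
        lock(m.plock) do              # default: safe under @threads / Threads.@spawn
            m.counter[] += 1
        end
    end
    return m.counter[]
end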

codecov bot commented Jul 4, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.77%. Comparing base (807496a) to head (49aaffc).
Report is 14 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #321      +/-   ##
==========================================
+ Coverage   93.48%   96.77%   +3.29%     
==========================================
  Files           1        1              
  Lines         399      559     +160     
==========================================
+ Hits          373      541     +168     
+ Misses         26       18       -8     


lmiq (Contributor, Author) commented Jul 4, 2024

> Can you test the performance when always locking? I would be in favor of the no_lock keyword. It would also fix @async (#253), which cannot be detected.

Added the option here: #322

with a performance comparison (I think we should just always lock, for good :-) )

@MarcMush MarcMush closed this Jul 11, 2024