Cleanup semaphore usage in utilcode #115685
Conversation
I do not think that this is going to work. These Windows-simulating CoreCLR PAL APIs are used to implement managed System.Threading.Semaphore and friends on non-Windows. The main prerequisite for cleaning up these Windows-simulating PAL APIs is to switch managed System.Threading.Semaphore and friends to the managed WaitSubSystem. That requires factoring out support for named semaphores from the CoreCLR PAL. I know @jkoritzinsky has been thinking about what it would take.
Yeah, this PR isn't going to work as-is, because the Semaphore's underlying handle needs to be a SafeWaitHandle, which on non-Windows is a PAL wait handle. We can't have two separate implementations here. The way forward is to use the managed wait subsystem, but we need to determine how to handle features that aren't supported there (named semaphores, named mutexes, anything cross-process) before we can move to that. The changes in utilcode to use pthreads can be done, but we can't remove the PAL implementation yet.
I do find such usage for Mutex, but not Semaphore. I can see "can't find QCall" in the test failures now. OK, I see the
Apparently named semaphores aren't supported in the PAL implementation either: runtime/src/coreclr/pal/src/synchobj/semaphore.cpp Lines 179 to 184 in ab105b5
runtime/src/coreclr/pal/src/synchobj/semaphore.cpp Lines 468 to 478 in ab105b5
Am I missing anything?
I meant to say named mutexes. Named mutexes are supported by the CoreCLR PAL. All PAL synchronization primitives (semaphores, mutexes, events) return opaque HANDLEs that can be used interchangeably and passed to PAL methods like WaitForMultipleObjects. This means it is not possible to switch these synchronization primitives to the managed WaitSubSystem one at a time.
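For illustration, a minimal sketch of the point above, using plain Win32-style calls (the PAL exposes equivalents on non-Windows); this is not code from the PR, just a demonstration that a semaphore handle and a mutex handle feed into the same wait API:

```cpp
// Sketch only: both primitives yield an opaque HANDLE, and the shared wait
// machinery does not care which kind of object is behind each handle.
#include <windows.h>

int main()
{
    HANDLE handles[2];
    handles[0] = CreateSemaphoreW(nullptr, 0, 1, nullptr); // semaphore HANDLE
    handles[1] = CreateMutexW(nullptr, FALSE, nullptr);    // mutex HANDLE

    // Accepts any mix of waitable handles, which is why these primitives
    // cannot be moved to the managed WaitSubSystem one at a time.
    DWORD which = WaitForMultipleObjects(2, handles, FALSE /* waitAll */, 0 /* no wait */);

    CloseHandle(handles[0]);
    CloseHandle(handles[1]);
    return (int)which;
}
```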
Yes, I realized this after seeing the QCalls. Then I think it's better to do a minor refactor first and investigate other synchronization primitives. |
This reverts commit 5c31732.
src/coreclr/utilcode/utsem.cpp
Outdated
m_hWriteWaiterEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
IfNullRet(m_hWriteWaiterEvent);
#else // HOST_WINDOWS
pthread_rwlock_init(&m_rwLock, nullptr);
Do we need to handle failures?
I think so, but I'm unsure about what to do on failure.
The same way as the current code: return an HRESULT.
Reading through the manpage, it's unclear to me whether a reader will be blocked or returned EBUSY when a writer is holding the lock. Simply return E_FAIL on any failure?
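A minimal sketch of the failure handling being discussed, assuming the surrounding Init method already reports errors via HRESULT the way the Windows path does; the helper name and the E_FAIL mapping are illustrative, not from the PR:

```cpp
#include <pthread.h>
// HRESULT, S_OK, and E_FAIL come from the CoreCLR PAL headers in this context.

// Illustrative helper: initialize the rwlock and surface failure as an HRESULT.
static HRESULT InitRwLock(pthread_rwlock_t* pLock, bool* pInitialized)
{
    // pthread_rwlock_init returns 0 on success or an errno-style code
    // (e.g. EAGAIN, ENOMEM) rather than setting errno.
    int err = pthread_rwlock_init(pLock, nullptr);
    if (err != 0)
        return E_FAIL;        // or map err to a more specific HRESULT

    *pInitialized = true;     // lets the destructor know to call pthread_rwlock_destroy
    return S_OK;
}
```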
HANDLE m_hWriteWaiterEvent; // event for awakening write waiters
#else // HOST_WINDOWS
bool m_initialized;
pthread_rwlock_t m_rwLock;
How does performance of pthread_rwlock_t compare to the current implementation?
I plugged UTSemReadWrite into corerun and ran some basic tests. The time cost for 1 million loops of single-threaded lock/unlock under WSL:
PAL implementation: Read lock: 12350 us, Write lock: 12379 us
pthreads implementation: Read lock: 11678 us, Write lock: 15976 us
For reference, under Windows: Read lock: 13450 us, Write lock: 12793 us
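The loop was along these lines (a sketch of the measurement described, not the exact harness; it assumes UTSemReadWrite's usual LockRead/UnlockRead methods from utilcode):

```cpp
#include <chrono>
#include <cstdio>
// UTSemReadWrite is assumed to come from the CoreCLR utilcode headers.

// Sketch: time 1 million uncontended read lock/unlock pairs on an
// already-initialized UTSemReadWrite instance.
static void TimeReadLock(UTSemReadWrite& sem)
{
    const int kIterations = 1000000;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < kIterations; i++)
    {
        sem.LockRead();
        sem.UnlockRead();
    }
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                  std::chrono::steady_clock::now() - start).count();
    printf("Read lock: %lld us\n", (long long)us);
}
```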
I don't think this is a good use of time. This is merely sprinkling compile-time branches in more places and not improving the actual situation. We should be focusing on simpler abstractions in
void
CMiniMdRW::Debug_CheckIsLockedForWrite()
{
#ifdef HOST_WINDOWS
This is not an appropriate limitation. There should be nothing Windows-specific about this code path.
I can't find a way to test whether a pthread_rwlock_t is held.
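Right, pthread_rwlock_t exposes no "is it held?" query. One possible workaround (just a sketch, not something in this PR; the DebugRWLock wrapper is hypothetical) is to record the writing thread's id next to the lock so a debug check can compare it against pthread_self():

```cpp
#include <pthread.h>
#include <cassert>

// Hypothetical wrapper for illustration: remembers which thread holds the
// write lock so a debug-only assert can verify the caller is that thread.
struct DebugRWLock
{
    pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
    pthread_t        writer{};          // meaningful only while writeHeld is true
    bool             writeHeld = false;

    void LockWrite()
    {
        pthread_rwlock_wrlock(&lock);
        writer = pthread_self();        // safe to record: we now own the lock exclusively
        writeHeld = true;
    }

    void UnlockWrite()
    {
        writeHeld = false;
        pthread_rwlock_unlock(&lock);
    }

    // Debug-only check; assumes the caller expects to be the current writer.
    void Debug_CheckIsLockedForWrite() const
    {
        assert(writeHeld && pthread_equal(writer, pthread_self()));
    }
};
```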
If this is needed, we should add a new abstraction in the minipal, but given the current changes in this PR it doesn't seem appropriate without a much larger change.
Yes, the initial attempt was to remove the PAL implementation of semaphore, but as noted in #115685 (comment), all synchronization primitives have to be switched at the same time. The UTSemReadWrite lock is only used in the R/W metadata model, which should also be deprecated if we adopt DNMD as the new metadata model.
Closing, since this would be unnecessary if we switch to the new metadata implementation. Performance is recorded at #115685 (comment).
CreateSemaphore is only used in one place and is replaced with pthread_rwlock_t instead. Mutex and Event are used in a few more places, so they are not included in the same PR.
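Roughly, the non-Windows shape this implies looks like the following (a simplified sketch, not the PR's exact diff; the class name is illustrative, HRESULT and friends come from the PAL headers, and error handling is reduced to E_FAIL):

```cpp
#include <pthread.h>
// HRESULT, S_OK, and E_FAIL are assumed to come from the CoreCLR PAL headers.

// Sketch of a UTSemReadWrite-style lock backed by pthread_rwlock_t instead of
// the PAL semaphore + event pair.
class UTSemReadWriteSketch
{
public:
    HRESULT Init()
    {
        if (pthread_rwlock_init(&m_rwLock, nullptr) != 0)
            return E_FAIL;
        m_initialized = true;
        return S_OK;
    }

    ~UTSemReadWriteSketch()
    {
        if (m_initialized)
            pthread_rwlock_destroy(&m_rwLock);
    }

    HRESULT LockRead()  { return pthread_rwlock_rdlock(&m_rwLock) == 0 ? S_OK : E_FAIL; }
    HRESULT LockWrite() { return pthread_rwlock_wrlock(&m_rwLock) == 0 ? S_OK : E_FAIL; }
    void UnlockRead()   { pthread_rwlock_unlock(&m_rwLock); }
    void UnlockWrite()  { pthread_rwlock_unlock(&m_rwLock); }

private:
    bool m_initialized = false;
    pthread_rwlock_t m_rwLock;
};
```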