From: Koundinya Veluri
Date: Wed, 21 Oct 2020 17:18:24 +0000 (-0400)
Subject: Migrate coreclr's worker thread pool to be able to use the portable thread pool in...
X-Git-Tag: submit/tizen/20210909.063632~4984
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=2a234f91fc654381d5bd2d001c59c48b3634b4ab;p=platform%2Fupstream%2Fdotnet%2Fruntime.git

Migrate coreclr's worker thread pool to be able to use the portable thread pool in opt-in fashion (#38225)

- Enables using the portable thread pool with coreclr as opt-in. The change is off by default for now, and can be enabled with COMPlus_ThreadPool_UsePortableThreadPool=1. Once it has had bake time and is seen to be stable, at a reasonable time in the future the config flag would ideally be removed and the relevant parts of the native implementation deleted.
- The IO thread pool is not being migrated in this change, and remains on the native side
- My goal was to get compatible behavior, compatibility with diagnostics tools, and perf similar to the native implementation in coreclr. Tried to avoid changing scheduling behavior, behavior of heuristics, etc., compared with that implementation.
- The eventual goal is to have one mostly managed thread pool implementation that can be shared between runtimes, to ease maintenance going forward

Commit descriptions:

- "Add dependencies"
  - Ported LowLevelLock from CoreRT, and moved LowLevelSpinWaiter to shared. Since we support Thread.Interrupt(), they were necessary in the wait subsystem in CoreRT partly to support that, and were also used in the portable thread pool implementation, where a pending thread interrupt on a thread pool thread would otherwise crash the process. Interruptible waits are already used in the managed side of the thread pool in the queue implementations. It may be reasonable to ignore the thread interrupt problem and suggest that it not be used on thread pool threads, but for now I just brought in the dependencies to keep behavior consistent with the native implementation.
- "Add config var"
  - Added config var COMPlus_ThreadPool_UsePortableThreadPool (disabled by default for now)
  - Flowed the new config var to the managed side and set up a mechanism to flow all of the thread pool config vars
  - Removed debug-only config var COMPlus_ThreadpoolTickCountAdjustment, which didn't seem to be too useful
  - Specialized native and managed thread pool paths based on the config var. Added assertions to paths that should not be reached depending on the config var.
- "Move portable RegisteredWaitHandle implementation to shared ThreadPool.cs"
  - Just moved the portable implementation, no functional changes. In preparation for merging the two implementations.
- "Merge RegisteredWaitHandle implementations"
  - Merged the implementations of RegisteredWaitHandle, using the portable version as the primary and specializing small parts of it for coreclr
  - Fixed PortableThreadPool's registered waits to track SafeWaitHandles instead of WaitHandles, similarly to the native implementation. The SafeWaitHandle in a WaitHandle can be modified, so it is retrieved once and reused thereafter (a small sketch of this pattern appears below). Also added/removed refs for the SafeWaitHandles that are registered.
- "Separate portable-only portion of RegisteredWaitHandle"
  - Separated RegisteredWaitHandle.UnregisterPortable into a different file, no functional changes. Those paths reference PortableThreadPool, which is conditionally included unlike ThreadPool.cs. Just for consistency, such that the new file can be conditionally included similarly to PortableThreadPool.
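The SafeWaitHandle handling described under "Merge RegisteredWaitHandle implementations" boils down to capturing the SafeWaitHandle once at registration time and holding a ref on it for the lifetime of the registration. The sketch below is illustrative only; the class and member names are assumptions, not the actual RegisteredWaitHandle code (which wires this into OnBeforeRegister/Unregister, as visible in the diff further down):

```csharp
using System.Threading;
using Microsoft.Win32.SafeHandles;

// Illustrative only: capture the SafeWaitHandle once, since WaitHandle.SafeWaitHandle
// can be replaced later, and keep the OS handle alive with a ref count until the
// registration is removed.
internal sealed class WaitRegistrationSketch
{
    private readonly SafeWaitHandle _safeWaitHandle; // retrieved once, reused thereafter
    private bool _releaseNeeded;

    public WaitRegistrationSketch(WaitHandle waitObject)
    {
        _safeWaitHandle = waitObject.SafeWaitHandle;
        _safeWaitHandle.DangerousAddRef(ref _releaseNeeded); // ref added while registered
    }

    public void Unregister()
    {
        if (_releaseNeeded)
        {
            _safeWaitHandle.DangerousRelease(); // balances the DangerousAddRef above
            _releaseNeeded = false;
        }
    }
}
```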
- "Fix timers, tiered compilation, introduced time-sensitive work item queue to simulate coreclr behavior"
  - Wired work items queued from the native side (appdomain timer callback, tiered compilation background work callback) to be queued into the managed side
  - The timer thread calls into managed code to queue the callback
  - Some tiered compilation work item queuing paths cannot call managed code, so used a timer with zero due time instead
  - Added a queue of "time-sensitive" work items to the managed side to mimic how work items queued from the native side ran previously. In particular, if the global queue is backed up, when using the native thread pool the native work items still run ahead of them periodically (based on the Dispatch quantum). They could potentially be queued into the global queue, but if it's backed up that could significantly and perhaps artificially delay the appdomain timer callback and the tiering background jitting. I didn't want to change the behavior in an observable (and potentially bad) way here for now. A good time to revisit this would be when IO completion handling is added to the portable thread pool; then the native work items could be handled somewhat similarly. (A sketch of this dispatch pattern appears after the "Fix ETW events" item below.)
- "Implement ResetThreadPoolThread, set thread names for diagnostics"
  - Aside from implementing ResetThreadPoolThread, setting the thread names (at the OS level) allows debuggers to identify the threads better, as before. For threads that may run user code, the thread Name property is kept as null as before, such that it may be set without exception.
- "Cache-line-separate PortableThreadPool._numRequestedWorkers similarly to coreclr"
  - Was missed before, separated it for consistency
- "Post wait completions to the IO completion port on Windows for coreclr, similarly to before"
  - On Windows, wait completions are queued to the IO thread pool, which is still implemented on the native side. On Unixes, they are queued to the global queue.
- "Reroute managed gate thread into unmanaged side to perform gate activities, don't use unmanaged gate thread"
  - When the config var is enabled, removed the gate thread from the native side. Instead, the gate thread on the managed side calls into the native side to perform gate activities for the IO completion thread pool, and returns a value to indicate whether the gate thread is still necessary.
  - Also added a native-to-managed entry point to request the gate thread to run for the IO completion thread pool
- "Flow config values from CoreCLR to the portable thread pool for compat"
  - Flowed the rest of the thread pool config vars to the managed side, such that COMPlus variables continue to work with the portable thread pool
  - Config var values are stored in AppContext; made the names consistent for supported and unsupported values
- "Port - ..." * 3
  - Ported a few fixes that did not make it into the portable thread pool implementation
- "Fix ETW events"
  - Fixed the EventSource used by the portable thread pool, added missing events
  - For now, the event source uses the same name and GUID as the native side. It seems to work for now for ETW; we may switch to a separate provider (along with updating tools) before enabling the portable thread pool by default.
  - For enqueue/dequeue events, changed to use the object's hash code as the work item identifier instead of the pointer, since the pointer may change between enqueue and dequeue
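The "time-sensitive" work item queue described under "Fix timers, tiered compilation, ..." above amounts to the dispatch loop periodically draining a separate queue so those items are not starved when the global queue is backed up. The following is a minimal sketch under assumed names and an assumed 30 ms quantum; it is not the actual ThreadPoolWorkQueue.Dispatch code:

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative sketch: the dispatch loop services a separate time-sensitive queue once
// per quantum, so appdomain-timer/tiering callbacks still run even when the global
// queue is backed up. Queue types, names, and the quantum value are assumptions.
internal static class TimeSensitiveDispatchSketch
{
    private const uint QuantumMs = 30;
    private static readonly ConcurrentQueue<Action> s_globalQueue = new ConcurrentQueue<Action>();
    private static readonly ConcurrentQueue<Action> s_timeSensitiveQueue = new ConcurrentQueue<Action>();

    public static void Dispatch()
    {
        int quantumStartMs = Environment.TickCount;
        while (s_globalQueue.TryDequeue(out Action? workItem))
        {
            workItem();

            // Once per quantum, let pending time-sensitive items run ahead of normal ones.
            if ((uint)(Environment.TickCount - quantumStartMs) >= QuantumMs)
            {
                while (s_timeSensitiveQueue.TryDequeue(out Action? timeSensitiveItem))
                {
                    timeSensitiveItem();
                }
                quantumStartMs = Environment.TickCount;
            }
        }
    }
}
```

In the actual change this path is only taken when the runtime queues such items from the native side (see SupportsTimeSensitiveWorkItems in the diff below).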
- "Fix perf of counts structs"
  - Structs used for multiple counts with interlocked operations were implemented with explicit struct layout and field offsets. The JIT seems to generate stack-based code for such structs, and it was showing up as higher overhead in perf profiles compared to the equivalent native implementation. Slower code in compare-exchange loops can cause a larger gap of time between the read and the compare-exchange, which can also cause higher contention.
  - Changed the structs to use manual bit manipulation instead, and micro-optimized some paths (a minimal sketch of this approach appears below, after the fail-fast item). The code is still not as good as that generated by C++, but it seems to perform similarly based on perf profiles.
  - Code size also improved in many cases; for example, one of the larger differences was in MaybeAddWorkingWorker(), which decreased from 585 bytes to 382 bytes with far fewer stack memory operations
- "Fix perf of dispatch loop"
  - Just some minor tweaks as I was looking at perf profiles and code of Dispatch()
- "Fix perf of ThreadInt64PersistentCounter"
  - The implementation used to count completed work items was using `ThreadLocal`, which turned out to be too slow for that purpose according to perf profiles
  - Changed it to rely on the user of the component to provide an object that tracks the count, which the user of the component would obtain from a ThreadStatic field
  - Also removed the thread-local lookup per iteration in one of the hot paths in Dispatch() and improved inlining
- "Miscellaneous perf fixes"
  - A few more small tweaks as I was looking at perf profiles and code
  - In ConcurrentQueue, added a check for empty into the fast path
  - For the portable thread pool, updated to trigger the worker thread Wait event after the short spin-wait completes and before actually waiting; the event is otherwise too verbose when profiling and changes performance characteristics
  - Cache-line-separated the gate thread running state as is done in the native implementation
  - Accessing PortableThreadPool.ThreadPoolInstance multiple times was generating less-than-ideal code that was noticeable in perf profiles. Tried to avoid it, especially in hot paths, and in some cases where it was unnecessary, for consistency if nothing else.
  - Removed an extra call to Environment.TickCount in Dispatch() per iteration
  - Noticed that a field that was intended to be cache-line-separated was not actually being separated (see https://github.com/dotnet/runtime/issues/38215); fixed
- "Fix starvation heuristic"
  - Described in comment
- "Implement worker tracking"
  - Implemented the equivalent in the portable thread pool along with raising the relevant event
- "Use smaller stack size for threads that don't run user code"
  - Using the same stack size as on the native side for those threads
- "Note some SOS dependencies, small fixes in hill climbing to make equivalent to coreclr"
  - Corresponds with PR that updates SOS: https://github.com/dotnet/diagnostics/pull/1274
  - Also fixed a couple of things to work similarly to the native implementation
- "Port some tests from CoreRT"
  - Also improved some of the tests
- "Fail-fast in thread pool native entry points specific to thread pool implementations based on config"
  - Scanned all of the managed-to-native entry points from the thread pool and thread-entry functions, and promoted some assertions to be verified in all builds with fail-fast. May help to know in release builds when a path that should not be taken is taken, and to avoid running further along that path.
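The counts-struct change under "Fix perf of counts structs" is essentially the pattern below: pack several counts into one integer field and update it with an Interlocked.CompareExchange loop using shifts and masks, rather than relying on explicit struct layout. This is an illustrative sketch only; the field layout, names, and members are assumptions, not the actual PortableThreadPool code:

```csharp
using System.Threading;

// Illustrative sketch: two logical counts packed into one 64-bit field, updated
// atomically with a compare-exchange loop. Assumes the low count never overflows
// its 32 bits; the real structs pack more counts and support more operations.
// In practice the struct lives in a field of a containing class so all threads
// update the same instance.
internal struct PackedCountsSketch
{
    private long _data; // bits 0-31: numProcessingWork, bits 32-63: numExistingThreads

    public uint NumProcessingWork => (uint)Volatile.Read(ref _data);
    public uint NumExistingThreads => (uint)((ulong)Volatile.Read(ref _data) >> 32);

    // Atomically increments numProcessingWork without changing numExistingThreads.
    public void IncrementProcessingWork()
    {
        long oldData = Volatile.Read(ref _data);
        while (true)
        {
            long newData = oldData + 1; // +1 lands in the low 32 bits (numProcessingWork)
            long seen = Interlocked.CompareExchange(ref _data, newData, oldData);
            if (seen == oldData)
            {
                return;
            }
            oldData = seen; // lost the race; retry against the value another thread wrote
        }
    }
}
```

Keeping the window between the read and the compare-exchange short is the point called out above; generating straight-line register code for the packed value is what reduces that window compared to the explicit-layout structs.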
- "Fix SetMinThreads() and SetMaxThreads() to return true only when both changes are successful with synchronization"
  - These are a bit awkward when the portable thread pool is enabled, because they should return true only when both changes are valid and return false without making any changes otherwise, and because the worker thread pool is on the managed side while the IO thread pool is on the native side
  - Added some managed-to-native entry points to allow checking validity before making the changes, all under a lock taken by the managed side
- "Fix registered wait removals for fairness since there can be duplicate system wait objects in the wait array"
  - Described in comment
- "Allow multiple DotNETRuntime event providers/sources in EventPipe"
  - Temporary change to EventPipe to be able to get events from dotnet-trace
  - For now, the event source uses the same name and GUID as the native side. It seems to work for now for ETW, and with this change it seems to work with EventPipe for getting events. Subscribing to the NativeRuntimeEventSource does not get thread pool events yet; that is left for later. We may switch to a separate provider (along with updating tools) before enabling the portable thread pool by default, as a long-term solution.
- "Fix registered wait handle timeout logic in the wait thread"
  - The timeout logic was comparing against how long the last wait took and sometimes was not timing out waits; fixed it to consider the total time since the last reset of the timeout instead
- "Fix Browser build"
  - Updated the Browser-specific thread pool variant based on the other changes

Corresponding PR to update SOS: https://github.com/dotnet/diagnostics/pull/1274

Fixes https://github.com/dotnet/runtime/issues/32020
---
diff --git a/src/coreclr/src/System.Private.CoreLib/System.Private.CoreLib.csproj b/src/coreclr/src/System.Private.CoreLib/System.Private.CoreLib.csproj index 2075234..e326528 100644 --- a/src/coreclr/src/System.Private.CoreLib/System.Private.CoreLib.csproj +++ b/src/coreclr/src/System.Private.CoreLib/System.Private.CoreLib.csproj @@ -20,6 +20,9 @@ true $(IntermediateOutputPath)System.Private.CoreLib.xml $(MSBuildThisFileDirectory)src\ILLink\ + + true + true @@ -289,6 +292,7 @@ + diff --git a/src/coreclr/src/System.Private.CoreLib/src/System/Threading/LowLevelLifoSemaphore.Unix.cs b/src/coreclr/src/System.Private.CoreLib/src/System/Threading/LowLevelLifoSemaphore.Unix.cs new file mode 100644 index 0000000..552fed5 --- /dev/null +++ b/src/coreclr/src/System.Private.CoreLib/src/System/Threading/LowLevelLifoSemaphore.Unix.cs @@ -0,0 +1,51 @@ +// Licensed to the .NET Foundation under one or more agreements. +// The .NET Foundation licenses this file to you under the MIT license. + +using System.Diagnostics; +using System.Runtime.CompilerServices; +using System.Runtime.InteropServices; +using Microsoft.Win32.SafeHandles; + +namespace System.Threading +{ + /// + /// A LIFO semaphore implemented using the PAL's semaphore with uninterruptible waits. + /// + internal sealed partial class LowLevelLifoSemaphore : IDisposable + { + private Semaphore? 
_semaphore; + + private void Create(int maximumSignalCount) + { + Debug.Assert(maximumSignalCount > 0); + _semaphore = new Semaphore(0, maximumSignalCount); + } + + public bool WaitCore(int timeoutMs) + { + Debug.Assert(_semaphore != null); + Debug.Assert(timeoutMs >= -1); + + int waitResult = WaitNative(_semaphore!.SafeWaitHandle, timeoutMs); + Debug.Assert(waitResult == WaitHandle.WaitSuccess || waitResult == WaitHandle.WaitTimeout); + return waitResult == WaitHandle.WaitSuccess; + } + + [DllImport(RuntimeHelpers.QCall, CharSet = CharSet.Unicode)] + private static extern int WaitNative(SafeWaitHandle handle, int timeoutMs); + + public void ReleaseCore(int count) + { + Debug.Assert(_semaphore != null); + Debug.Assert(count > 0); + + _semaphore!.Release(count); + } + + public void Dispose() + { + Debug.Assert(_semaphore != null); + _semaphore!.Dispose(); + } + } +} diff --git a/src/coreclr/src/System.Private.CoreLib/src/System/Threading/Thread.CoreCLR.cs b/src/coreclr/src/System.Private.CoreLib/src/System/Threading/Thread.CoreCLR.cs index 7a58e32..cc894d2 100644 --- a/src/coreclr/src/System.Private.CoreLib/src/System/Threading/Thread.CoreCLR.cs +++ b/src/coreclr/src/System.Private.CoreLib/src/System/Threading/Thread.CoreCLR.cs @@ -144,6 +144,11 @@ namespace System.Threading private int _managedThreadId; // INT32 #pragma warning restore CA1823, 169 + // This is used for a quick check on thread pool threads after running a work item to determine if the name, background + // state, or priority were changed by the work item, and if so to reset it. Other threads may also change some of those, + // but those types of changes may race with the reset anyway, so this field doesn't need to be synchronized. + private bool _mayNeedResetForThreadPool; + private Thread() { } private void Create(ThreadStart start) => @@ -259,6 +264,9 @@ namespace System.Threading public static void Sleep(int millisecondsTimeout) => SleepInternal(millisecondsTimeout); + [DllImport(RuntimeHelpers.QCall)] + internal static extern void UninterruptibleSleep0(); + /// /// Wait for a length of time proportional to 'iterations'. Each iteration is should /// only take a few machine instructions. Calling this API is preferable to coding @@ -337,7 +345,14 @@ namespace System.Threading public bool IsBackground { get => IsBackgroundNative(); - set => SetBackgroundNative(value); + set + { + SetBackgroundNative(value); + if (!value) + { + _mayNeedResetForThreadPool = true; + } + } } [MethodImpl(MethodImplOptions.InternalCall)] @@ -351,13 +366,22 @@ namespace System.Threading { [MethodImpl(MethodImplOptions.InternalCall)] get; + [MethodImpl(MethodImplOptions.InternalCall)] + internal set; } /// Returns the priority of the thread. public ThreadPriority Priority { get => (ThreadPriority)GetPriorityNative(); - set => SetPriorityNative((int)value); + set + { + SetPriorityNative((int)value); + if (value != ThreadPriority.Normal) + { + _mayNeedResetForThreadPool = true; + } + } } [MethodImpl(MethodImplOptions.InternalCall)] @@ -503,12 +527,23 @@ namespace System.Threading // we will record that in a readonly static so that it could become a JIT constant and bypass caching entirely. 
private static readonly bool s_isProcessorNumberReallyFast = ProcessorIdCache.ProcessorNumberSpeedCheck(); -#pragma warning disable CA1822 // Mark members as static + [MethodImpl(MethodImplOptions.AggressiveInlining)] internal void ResetThreadPoolThread() { - // Currently implemented in unmanaged method Thread::InternalReset and - // called internally from the ThreadPool in NotifyWorkItemComplete. + Debug.Assert(this == CurrentThread); + Debug.Assert(IsThreadPoolThread); + + if (!ThreadPool.UsePortableThreadPool) + { + // Currently implemented in unmanaged method Thread::InternalReset and + // called internally from the ThreadPool in NotifyWorkItemComplete. + return; + } + + if (_mayNeedResetForThreadPool) + { + ResetThreadPoolThreadSlow(); + } } -#pragma warning restore CA1822 } // End of class Thread } diff --git a/src/coreclr/src/System.Private.CoreLib/src/System/Threading/ThreadPool.CoreCLR.cs b/src/coreclr/src/System.Private.CoreLib/src/System/Threading/ThreadPool.CoreCLR.cs index 86b6e44..d18e050 100644 --- a/src/coreclr/src/System.Private.CoreLib/src/System/Threading/ThreadPool.CoreCLR.cs +++ b/src/coreclr/src/System.Private.CoreLib/src/System/Threading/ThreadPool.CoreCLR.cs @@ -29,128 +29,102 @@ namespace System.Threading internal static bool PerformWaitCallback() => ThreadPoolWorkQueue.Dispatch(); } - internal sealed class RegisteredWaitHandleSafe : CriticalFinalizerObject + public sealed partial class RegisteredWaitHandle : MarshalByRefObject { - private static IntPtr InvalidHandle => new IntPtr(-1); - private IntPtr registeredWaitHandle = InvalidHandle; - private WaitHandle? m_internalWaitObject; - private bool bReleaseNeeded; - private volatile int m_lock; + private IntPtr _nativeRegisteredWaitHandle = InvalidHandleValue; + private bool _releaseHandle; - internal IntPtr GetHandle() => registeredWaitHandle; + private static bool IsValidHandle(IntPtr handle) => handle != InvalidHandleValue && handle != IntPtr.Zero; - internal void SetHandle(IntPtr handle) + internal void SetNativeRegisteredWaitHandle(IntPtr nativeRegisteredWaitHandle) { - registeredWaitHandle = handle; + Debug.Assert(!ThreadPool.UsePortableThreadPool); + Debug.Assert(IsValidHandle(nativeRegisteredWaitHandle)); + Debug.Assert(!IsValidHandle(_nativeRegisteredWaitHandle)); + + _nativeRegisteredWaitHandle = nativeRegisteredWaitHandle; } - internal void SetWaitObject(WaitHandle waitObject) + internal void OnBeforeRegister() { - m_internalWaitObject = waitObject; - if (waitObject != null) + if (ThreadPool.UsePortableThreadPool) { - m_internalWaitObject.SafeWaitHandle.DangerousAddRef(ref bReleaseNeeded); + GC.SuppressFinalize(this); + return; } + + Handle.DangerousAddRef(ref _releaseHandle); } - internal bool Unregister( - WaitHandle? waitObject // object to be notified when all callbacks to delegates have completed - ) + /// + /// Unregisters this wait handle registration from the wait threads. + /// + /// The event to signal when the handle is unregistered. + /// If the handle was successfully marked to be removed and the provided wait handle was set as the user provided event. + /// + /// This method will only return true on the first call. + /// Passing in a wait handle with a value of -1 will result in a blocking wait, where Unregister will not return until the full unregistration is completed. 
+ /// + public bool Unregister(WaitHandle waitObject) { - bool result = false; + if (ThreadPool.UsePortableThreadPool) + { + return UnregisterPortable(waitObject); + } - // lock(this) cannot be used reliably in Cer since thin lock could be - // promoted to syncblock and that is not a guaranteed operation - bool bLockTaken = false; - do + s_callbackLock.Acquire(); + try { - if (Interlocked.CompareExchange(ref m_lock, 1, 0) == 0) + if (!IsValidHandle(_nativeRegisteredWaitHandle) || + !UnregisterWaitNative(_nativeRegisteredWaitHandle, waitObject?.SafeWaitHandle)) { - bLockTaken = true; - try - { - if (ValidHandle()) - { - result = UnregisterWaitNative(GetHandle(), waitObject?.SafeWaitHandle); - if (result) - { - if (bReleaseNeeded) - { - Debug.Assert(m_internalWaitObject != null, "Must be non-null for bReleaseNeeded to be true"); - m_internalWaitObject.SafeWaitHandle.DangerousRelease(); - bReleaseNeeded = false; - } - // if result not true don't release/suppress here so finalizer can make another attempt - SetHandle(InvalidHandle); - m_internalWaitObject = null; - GC.SuppressFinalize(this); - } - } - } - finally - { - m_lock = 0; - } + return false; } - Thread.SpinWait(1); // yield to processor + _nativeRegisteredWaitHandle = InvalidHandleValue; + + if (_releaseHandle) + { + Handle.DangerousRelease(); + _releaseHandle = false; + } + } + finally + { + s_callbackLock.Release(); } - while (!bLockTaken); - return result; + GC.SuppressFinalize(this); + return true; } - private bool ValidHandle() => - registeredWaitHandle != InvalidHandle && registeredWaitHandle != IntPtr.Zero; - - ~RegisteredWaitHandleSafe() + ~RegisteredWaitHandle() { - // if the app has already unregistered the wait, there is nothing to cleanup - // we can detect this by checking the handle. Normally, there is no race condition here - // so no need to protect reading of handle. However, if this object gets - // resurrected and then someone does an unregister, it would introduce a race condition - // - // PrepareConstrainedRegions call not needed since finalizer already in Cer - // - // lock(this) cannot be used reliably even in Cer since thin lock could be - // promoted to syncblock and that is not a guaranteed operation - // - // Note that we will not "spin" to get this lock. We make only a single attempt; - // if we can't get the lock, it means some other thread is in the middle of a call - // to Unregister, which will do the work of the finalizer anyway. - // - // Further, it's actually critical that we *not* wait for the lock here, because - // the other thread that's in the middle of Unregister may be suspended for shutdown. - // Then, during the live-object finalization phase of shutdown, this thread would - // end up spinning forever, as the other thread would never release the lock. - // This will result in a "leak" of sorts (since the handle will not be cleaned up) - // but the process is exiting anyway. - // - // During AD-unload, we don't finalize live objects until all threads have been - // aborted out of the AD. Since these locked regions are CERs, we won't abort them - // while the lock is held. So there should be no leak on AD-unload. 
- // - if (Interlocked.CompareExchange(ref m_lock, 1, 0) == 0) + if (ThreadPool.UsePortableThreadPool) { - try + return; + } + + s_callbackLock.Acquire(); + try + { + if (!IsValidHandle(_nativeRegisteredWaitHandle)) { - if (ValidHandle()) - { - WaitHandleCleanupNative(registeredWaitHandle); - if (bReleaseNeeded) - { - Debug.Assert(m_internalWaitObject != null, "Must be non-null for bReleaseNeeded to be true"); - m_internalWaitObject.SafeWaitHandle.DangerousRelease(); - bReleaseNeeded = false; - } - SetHandle(InvalidHandle); - m_internalWaitObject = null; - } + return; } - finally + + WaitHandleCleanupNative(_nativeRegisteredWaitHandle); + _nativeRegisteredWaitHandle = InvalidHandleValue; + + if (_releaseHandle) { - m_lock = 0; + Handle.DangerousRelease(); + _releaseHandle = false; } } + finally + { + s_callbackLock.Release(); + } } [MethodImpl(MethodImplOptions.InternalCall)] @@ -160,51 +134,137 @@ namespace System.Threading private static extern bool UnregisterWaitNative(IntPtr handle, SafeHandle? waitObject); } - [UnsupportedOSPlatform("browser")] - public sealed class RegisteredWaitHandle : MarshalByRefObject + internal sealed partial class CompleteWaitThreadPoolWorkItem : IThreadPoolWorkItem { - private readonly RegisteredWaitHandleSafe internalRegisteredWait; + void IThreadPoolWorkItem.Execute() => CompleteWait(); - internal RegisteredWaitHandle() + // Entry point from unmanaged code + private void CompleteWait() { - internalRegisteredWait = new RegisteredWaitHandleSafe(); + Debug.Assert(ThreadPool.UsePortableThreadPool); + PortableThreadPool.CompleteWait(_registeredWaitHandle, _timedOut); } + } - internal void SetHandle(IntPtr handle) - { - internalRegisteredWait.SetHandle(handle); - } + internal sealed class UnmanagedThreadPoolWorkItem : IThreadPoolWorkItem + { + private readonly IntPtr _callback; + private readonly IntPtr _state; - internal void SetWaitObject(WaitHandle waitObject) + public UnmanagedThreadPoolWorkItem(IntPtr callback, IntPtr state) { - internalRegisteredWait.SetWaitObject(waitObject); + _callback = callback; + _state = state; } - public bool Unregister( - WaitHandle? waitObject // object to be notified when all callbacks to delegates have completed - ) - { - return internalRegisteredWait.Unregister(waitObject); - } + void IThreadPoolWorkItem.Execute() => ExecuteUnmanagedThreadPoolWorkItem(_callback, _state); + + [DllImport(RuntimeHelpers.QCall, CharSet = CharSet.Unicode)] + private static extern void ExecuteUnmanagedThreadPoolWorkItem(IntPtr callback, IntPtr state); } public static partial class ThreadPool { - // Time in ms for which ThreadPoolWorkQueue.Dispatch keeps executing work items before returning to the OS - private const uint DispatchQuantum = 30; - + // SOS's ThreadPool command depends on this name + internal static readonly bool UsePortableThreadPool = InitializeConfigAndDetermineUsePortableThreadPool(); + + // Time-senstiive work items are those that may need to run ahead of normal work items at least periodically. For a + // runtime that does not support time-sensitive work items on the managed side, the thread pool yields the thread to the + // runtime periodically (by exiting the dispatch loop) so that the runtime may use that thread for processing + // any time-sensitive work. For a runtime that supports time-sensitive work items on the managed side, the thread pool + // does not yield the thread and instead processes time-sensitive work items queued by specific APIs periodically. 
+ internal static bool SupportsTimeSensitiveWorkItems => UsePortableThreadPool; + + // This needs to be initialized after UsePortableThreadPool above, as it may depend on UsePortableThreadPool and the + // config initialization internal static readonly bool EnableWorkerTracking = GetEnableWorkerTracking(); - internal static bool KeepDispatching(int startTickCount) + private static unsafe bool InitializeConfigAndDetermineUsePortableThreadPool() { - // Note: this function may incorrectly return false due to TickCount overflow - // if work item execution took around a multiple of 2^32 milliseconds (~49.7 days), - // which is improbable. - return (uint)(Environment.TickCount - startTickCount) < DispatchQuantum; + bool usePortableThreadPool = false; + int configVariableIndex = 0; + while (true) + { + int nextConfigVariableIndex = + GetNextConfigUInt32Value( + configVariableIndex, + out uint configValue, + out bool isBoolean, + out char* appContextConfigNameUnsafe); + if (nextConfigVariableIndex < 0) + { + break; + } + + Debug.Assert(nextConfigVariableIndex > configVariableIndex); + configVariableIndex = nextConfigVariableIndex; + + if (appContextConfigNameUnsafe == null) + { + // Special case for UsePortableThreadPool, which doesn't go into the AppContext + Debug.Assert(configValue != 0); + Debug.Assert(isBoolean); + usePortableThreadPool = true; + continue; + } + + var appContextConfigName = new string(appContextConfigNameUnsafe); + if (isBoolean) + { + AppContext.SetSwitch(appContextConfigName, configValue != 0); + } + else + { + AppContext.SetData(appContextConfigName, configValue); + } + } + + return usePortableThreadPool; + } + + [MethodImpl(MethodImplOptions.InternalCall)] + private static extern unsafe int GetNextConfigUInt32Value( + int configVariableIndex, + out uint configValue, + out bool isBoolean, + out char* appContextConfigName); + + private static bool GetEnableWorkerTracking() => + UsePortableThreadPool + ? 
AppContextConfigHelper.GetBooleanConfig("System.Threading.ThreadPool.EnableWorkerTracking", false) + : GetEnableWorkerTrackingNative(); + + [MethodImpl(MethodImplOptions.InternalCall)] + internal static extern bool CanSetMinIOCompletionThreads(int ioCompletionThreads); + + internal static void SetMinIOCompletionThreads(int ioCompletionThreads) + { + Debug.Assert(UsePortableThreadPool); + Debug.Assert(ioCompletionThreads >= 0); + + bool success = SetMinThreadsNative(1, ioCompletionThreads); // worker thread count is ignored + Debug.Assert(success); + } + + [MethodImpl(MethodImplOptions.InternalCall)] + internal static extern bool CanSetMaxIOCompletionThreads(int ioCompletionThreads); + + internal static void SetMaxIOCompletionThreads(int ioCompletionThreads) + { + Debug.Assert(UsePortableThreadPool); + Debug.Assert(ioCompletionThreads > 0); + + bool success = SetMaxThreadsNative(1, ioCompletionThreads); // worker thread count is ignored + Debug.Assert(success); } public static bool SetMaxThreads(int workerThreads, int completionPortThreads) { + if (UsePortableThreadPool) + { + return PortableThreadPool.ThreadPoolInstance.SetMaxThreads(workerThreads, completionPortThreads); + } + return workerThreads >= 0 && completionPortThreads >= 0 && @@ -214,10 +274,20 @@ namespace System.Threading public static void GetMaxThreads(out int workerThreads, out int completionPortThreads) { GetMaxThreadsNative(out workerThreads, out completionPortThreads); + + if (UsePortableThreadPool) + { + workerThreads = PortableThreadPool.ThreadPoolInstance.GetMaxThreads(); + } } public static bool SetMinThreads(int workerThreads, int completionPortThreads) { + if (UsePortableThreadPool) + { + return PortableThreadPool.ThreadPoolInstance.SetMinThreads(workerThreads, completionPortThreads); + } + return workerThreads >= 0 && completionPortThreads >= 0 && @@ -227,11 +297,21 @@ namespace System.Threading public static void GetMinThreads(out int workerThreads, out int completionPortThreads) { GetMinThreadsNative(out workerThreads, out completionPortThreads); + + if (UsePortableThreadPool) + { + workerThreads = PortableThreadPool.ThreadPoolInstance.GetMinThreads(); + } } public static void GetAvailableThreads(out int workerThreads, out int completionPortThreads) { GetAvailableThreadsNative(out workerThreads, out completionPortThreads); + + if (UsePortableThreadPool) + { + workerThreads = PortableThreadPool.ThreadPoolInstance.GetAvailableThreads(); + } } /// @@ -240,11 +320,11 @@ namespace System.Threading /// /// For a thread pool implementation that may have different types of threads, the count includes all types. /// - public static extern int ThreadCount - { - [MethodImpl(MethodImplOptions.InternalCall)] - get; - } + public static int ThreadCount => + (UsePortableThreadPool ? PortableThreadPool.ThreadPoolInstance.ThreadCount : 0) + GetThreadCount(); + + [MethodImpl(MethodImplOptions.InternalCall)] + private static extern int GetThreadCount(); /// /// Gets the number of work items that have been processed so far. @@ -252,51 +332,97 @@ namespace System.Threading /// /// For a thread pool implementation that may have different types of work items, the count includes all types. 
/// - public static long CompletedWorkItemCount => GetCompletedWorkItemCount(); + public static long CompletedWorkItemCount + { + get + { + long count = GetCompletedWorkItemCount(); + if (UsePortableThreadPool) + { + count += PortableThreadPool.ThreadPoolInstance.CompletedWorkItemCount; + } + return count; + } + } [DllImport(RuntimeHelpers.QCall, CharSet = CharSet.Unicode)] private static extern long GetCompletedWorkItemCount(); - private static extern long PendingUnmanagedWorkItemCount - { - [MethodImpl(MethodImplOptions.InternalCall)] - get; - } + private static long PendingUnmanagedWorkItemCount => UsePortableThreadPool ? 0 : GetPendingUnmanagedWorkItemCount(); + + [MethodImpl(MethodImplOptions.InternalCall)] + private static extern long GetPendingUnmanagedWorkItemCount(); - private static RegisteredWaitHandle RegisterWaitForSingleObject( - WaitHandle waitObject, - WaitOrTimerCallback callBack, - object? state, - uint millisecondsTimeOutInterval, - bool executeOnlyOnce, // NOTE: we do not allow other options that allow the callback to be queued as an APC - bool compressStack - ) + private static void RegisterWaitForSingleObjectCore(WaitHandle waitObject, RegisteredWaitHandle registeredWaitHandle) { - RegisteredWaitHandle registeredWaitHandle = new RegisteredWaitHandle(); + registeredWaitHandle.OnBeforeRegister(); - if (callBack != null) + if (UsePortableThreadPool) { - _ThreadPoolWaitOrTimerCallback callBackHelper = new _ThreadPoolWaitOrTimerCallback(callBack, state, compressStack); - state = (object)callBackHelper; - // call SetWaitObject before native call so that waitObject won't be closed before threadpoolmgr registration - // this could occur if callback were to fire before SetWaitObject does its addref - registeredWaitHandle.SetWaitObject(waitObject); - IntPtr nativeRegisteredWaitHandle = RegisterWaitForSingleObjectNative(waitObject, - state, - millisecondsTimeOutInterval, - executeOnlyOnce, - registeredWaitHandle); - registeredWaitHandle.SetHandle(nativeRegisteredWaitHandle); + PortableThreadPool.ThreadPoolInstance.RegisterWaitHandle(registeredWaitHandle); + return; } - else + + IntPtr nativeRegisteredWaitHandle = + RegisterWaitForSingleObjectNative( + waitObject, + registeredWaitHandle.Callback, + (uint)registeredWaitHandle.TimeoutDurationMs, + !registeredWaitHandle.Repeating, + registeredWaitHandle); + registeredWaitHandle.SetNativeRegisteredWaitHandle(nativeRegisteredWaitHandle); + } + + internal static void UnsafeQueueWaitCompletion(CompleteWaitThreadPoolWorkItem completeWaitWorkItem) + { + Debug.Assert(UsePortableThreadPool); + +#if TARGET_WINDOWS // the IO completion thread pool is currently only available on Windows + QueueWaitCompletionNative(completeWaitWorkItem); +#else + UnsafeQueueUserWorkItemInternal(completeWaitWorkItem, preferLocal: false); +#endif + } + +#if TARGET_WINDOWS // the IO completion thread pool is currently only available on Windows + [MethodImpl(MethodImplOptions.InternalCall)] + private static extern void QueueWaitCompletionNative(CompleteWaitThreadPoolWorkItem completeWaitWorkItem); +#endif + + internal static void RequestWorkerThread() + { + if (UsePortableThreadPool) { - throw new ArgumentNullException(nameof(WaitOrTimerCallback)); + PortableThreadPool.ThreadPoolInstance.RequestWorker(); + return; } - return registeredWaitHandle; + + RequestWorkerThreadNative(); } [DllImport(RuntimeHelpers.QCall, CharSet = CharSet.Unicode)] - internal static extern Interop.BOOL RequestWorkerThread(); + private static extern Interop.BOOL RequestWorkerThreadNative(); 
+ + // Entry point from unmanaged code + private static void EnsureGateThreadRunning() + { + Debug.Assert(UsePortableThreadPool); + PortableThreadPool.EnsureGateThreadRunning(); + } + + /// + /// Called from the gate thread periodically to perform runtime-specific gate activities + /// + /// CPU utilization as a percentage since the last call + /// True if the runtime still needs to perform gate activities, false otherwise + internal static bool PerformRuntimeSpecificGateActivities(int cpuUtilization) + { + Debug.Assert(UsePortableThreadPool); + return PerformRuntimeSpecificGateActivitiesNative(cpuUtilization) != Interop.BOOL.FALSE; + } + + [DllImport(RuntimeHelpers.QCall, CharSet = CharSet.Unicode)] + private static extern Interop.BOOL PerformRuntimeSpecificGateActivitiesNative(int cpuUtilization); [MethodImpl(MethodImplOptions.InternalCall)] private static extern unsafe bool PostQueuedCompletionStatus(NativeOverlapped* overlapped); @@ -305,6 +431,13 @@ namespace System.Threading public static unsafe bool UnsafeQueueNativeOverlapped(NativeOverlapped* overlapped) => PostQueuedCompletionStatus(overlapped); + // Entry point from unmanaged code + private static void UnsafeQueueUnmanagedWorkItem(IntPtr callback, IntPtr state) + { + Debug.Assert(SupportsTimeSensitiveWorkItems); + UnsafeQueueTimeSensitiveWorkItemInternal(new UnmanagedThreadPoolWorkItem(callback, state)); + } + // Native methods: [MethodImpl(MethodImplOptions.InternalCall)] @@ -322,22 +455,56 @@ namespace System.Threading [MethodImpl(MethodImplOptions.InternalCall)] private static extern void GetAvailableThreadsNative(out int workerThreads, out int completionPortThreads); + [MethodImpl(MethodImplOptions.AggressiveInlining)] + internal static bool NotifyWorkItemComplete(object? threadLocalCompletionCountObject, int currentTimeMs) + { + if (UsePortableThreadPool) + { + return + PortableThreadPool.ThreadPoolInstance.NotifyWorkItemComplete( + threadLocalCompletionCountObject, + currentTimeMs); + } + + return NotifyWorkItemCompleteNative(); + } + [MethodImpl(MethodImplOptions.InternalCall)] - internal static extern bool NotifyWorkItemComplete(); + private static extern bool NotifyWorkItemCompleteNative(); + + internal static void ReportThreadStatus(bool isWorking) + { + if (UsePortableThreadPool) + { + PortableThreadPool.ThreadPoolInstance.ReportThreadStatus(isWorking); + return; + } + + ReportThreadStatusNative(isWorking); + } [MethodImpl(MethodImplOptions.InternalCall)] - internal static extern void ReportThreadStatus(bool isWorking); + private static extern void ReportThreadStatusNative(bool isWorking); internal static void NotifyWorkItemProgress() { + if (UsePortableThreadPool) + { + PortableThreadPool.ThreadPoolInstance.NotifyWorkItemProgress(); + return; + } + NotifyWorkItemProgressNative(); } [MethodImpl(MethodImplOptions.InternalCall)] - internal static extern void NotifyWorkItemProgressNative(); + private static extern void NotifyWorkItemProgressNative(); + + internal static object? GetOrCreateThreadLocalCompletionCountObject() => + UsePortableThreadPool ? 
PortableThreadPool.ThreadPoolInstance.GetOrCreateThreadLocalCompletionCountObject() : null; [MethodImpl(MethodImplOptions.InternalCall)] - private static extern bool GetEnableWorkerTracking(); + private static extern bool GetEnableWorkerTrackingNative(); [MethodImpl(MethodImplOptions.InternalCall)] private static extern IntPtr RegisterWaitForSingleObjectNative( diff --git a/src/coreclr/src/inc/clrconfigvalues.h b/src/coreclr/src/inc/clrconfigvalues.h index 3186372..4f2c812 100644 --- a/src/coreclr/src/inc/clrconfigvalues.h +++ b/src/coreclr/src/inc/clrconfigvalues.h @@ -569,6 +569,9 @@ RETAIL_CONFIG_DWORD_INFO(EXTERNAL_Thread_AssignCpuGroups, W("Thread_AssignCpuGro /// /// Threadpool /// +// NOTE: UsePortableThreadPool - Before changing the default value of this config option, see +// https://github.com/dotnet/runtime/issues/38763 for prerequisites +RETAIL_CONFIG_DWORD_INFO(INTERNAL_ThreadPool_UsePortableThreadPool, W("ThreadPool_UsePortableThreadPool"), 0, "Uses the managed portable thread pool implementation instead of the unmanaged one.") RETAIL_CONFIG_DWORD_INFO(INTERNAL_ThreadPool_ForceMinWorkerThreads, W("ThreadPool_ForceMinWorkerThreads"), 0, "Overrides the MinThreads setting for the ThreadPool worker pool") RETAIL_CONFIG_DWORD_INFO(INTERNAL_ThreadPool_ForceMaxWorkerThreads, W("ThreadPool_ForceMaxWorkerThreads"), 0, "Overrides the MaxThreads setting for the ThreadPool worker pool") RETAIL_CONFIG_DWORD_INFO(INTERNAL_ThreadPool_DisableStarvationDetection, W("ThreadPool_DisableStarvationDetection"), 0, "Disables the ThreadPool feature that forces new threads to be added when workitems run for too long") @@ -581,8 +584,6 @@ RETAIL_CONFIG_DWORD_INFO(INTERNAL_ThreadPool_UnfairSemaphoreSpinLimit, W("Thread RETAIL_CONFIG_DWORD_INFO(INTERNAL_ThreadPool_UnfairSemaphoreSpinLimit, W("ThreadPool_UnfairSemaphoreSpinLimit"), 0x46, "Maximum number of spins a thread pool worker thread performs before waiting for work") #endif // TARGET_ARM64 -CONFIG_DWORD_INFO(INTERNAL_ThreadpoolTickCountAdjustment, W("ThreadpoolTickCountAdjustment"), 0, "") - RETAIL_CONFIG_DWORD_INFO(INTERNAL_HillClimbing_Disable, W("HillClimbing_Disable"), 0, "Disables hill climbing for thread adjustments in the thread pool"); RETAIL_CONFIG_DWORD_INFO(INTERNAL_HillClimbing_WavePeriod, W("HillClimbing_WavePeriod"), 4, ""); RETAIL_CONFIG_DWORD_INFO(INTERNAL_HillClimbing_TargetSignalToNoiseRatio, W("HillClimbing_TargetSignalToNoiseRatio"), 300, ""); diff --git a/src/coreclr/src/vm/ceemain.cpp b/src/coreclr/src/vm/ceemain.cpp index c729cd6..d731c6d6 100644 --- a/src/coreclr/src/vm/ceemain.cpp +++ b/src/coreclr/src/vm/ceemain.cpp @@ -176,6 +176,10 @@ #include "stacksampler.h" #endif +#ifndef CROSSGEN_COMPILE +#include "win32threadpool.h" +#endif + #include #include "bbsweep.h" @@ -674,6 +678,8 @@ void EEStartupHelper() // This needs to be done before the EE has started InitializeStartupFlags(); + ThreadpoolMgr::StaticInitialize(); + MethodDescBackpatchInfoTracker::StaticInitialize(); CodeVersionManager::StaticInitialize(); TieredCompilationManager::StaticInitialize(); diff --git a/src/coreclr/src/vm/comsynchronizable.cpp b/src/coreclr/src/vm/comsynchronizable.cpp index 687dbab..2e83ac6 100644 --- a/src/coreclr/src/vm/comsynchronizable.cpp +++ b/src/coreclr/src/vm/comsynchronizable.cpp @@ -648,6 +648,17 @@ FCIMPLEND #define Sleep(dwMilliseconds) Dont_Use_Sleep(dwMilliseconds) +void QCALLTYPE ThreadNative::UninterruptibleSleep0() +{ + QCALL_CONTRACT; + + BEGIN_QCALL; + + ClrSleepEx(0, false); + + END_QCALL; +} + FCIMPL1(INT32, 
ThreadNative::GetManagedThreadId, ThreadBaseObject* th) { FCALL_CONTRACT; @@ -1318,6 +1329,22 @@ FCIMPL1(FC_BOOL_RET, ThreadNative::IsThreadpoolThread, ThreadBaseObject* thread) } FCIMPLEND +FCIMPL1(void, ThreadNative::SetIsThreadpoolThread, ThreadBaseObject* thread) +{ + FCALL_CONTRACT; + + if (thread == NULL) + FCThrowResVoid(kNullReferenceException, W("NullReference_This")); + + Thread *pThread = thread->GetInternal(); + + if (pThread == NULL) + FCThrowExVoid(kThreadStateException, IDS_EE_THREAD_DEAD_STATE, NULL, NULL, NULL); + + pThread->SetIsThreadPoolThread(); +} +FCIMPLEND + INT32 QCALLTYPE ThreadNative::GetOptimalMaxSpinWaitsPerSpinIteration() { QCALL_CONTRACT; diff --git a/src/coreclr/src/vm/comsynchronizable.h b/src/coreclr/src/vm/comsynchronizable.h index 8de4ee3..2e6fe1e 100644 --- a/src/coreclr/src/vm/comsynchronizable.h +++ b/src/coreclr/src/vm/comsynchronizable.h @@ -71,6 +71,7 @@ public: #undef Sleep static FCDECL1(void, Sleep, INT32 iTime); #define Sleep(a) Dont_Use_Sleep(a) + static void QCALLTYPE UninterruptibleSleep0(); static FCDECL3(void, SetStart, ThreadBaseObject* pThisUNSAFE, Object* pDelegateUNSAFE, INT32 iRequestedStackSize); static FCDECL2(void, SetBackground, ThreadBaseObject* pThisUNSAFE, CLR_BOOL isBackground); static FCDECL1(FC_BOOL_RET, IsBackground, ThreadBaseObject* pThisUNSAFE); @@ -98,7 +99,8 @@ public: #ifdef FEATURE_COMINTEROP static FCDECL1(void, DisableComObjectEagerCleanup, ThreadBaseObject* pThis); #endif //FEATURE_COMINTEROP - static FCDECL1(FC_BOOL_RET,IsThreadpoolThread, ThreadBaseObject* thread); + static FCDECL1(FC_BOOL_RET,IsThreadpoolThread, ThreadBaseObject* thread); + static FCDECL1(void, SetIsThreadpoolThread, ThreadBaseObject* thread); static FCDECL1(Object*, GetThreadDeserializationTracker, StackCrawlMark* stackMark); static FCDECL0(INT32, GetCurrentProcessorNumber); diff --git a/src/coreclr/src/vm/comthreadpool.cpp b/src/coreclr/src/vm/comthreadpool.cpp index edb8e33..85edc1b 100644 --- a/src/coreclr/src/vm/comthreadpool.cpp +++ b/src/coreclr/src/vm/comthreadpool.cpp @@ -123,6 +123,107 @@ DelegateInfo *DelegateInfo::MakeDelegateInfo(OBJECTREF *state, } /*****************************************************************************************************/ +// Enumerates some runtime config variables that are used by CoreLib for initialization. The config variable index should start +// at 0 to begin enumeration. If a config variable at or after the specified config variable index is configured, returns the +// next config variable index to pass in on the next call to continue enumeration. 
+FCIMPL4(INT32, ThreadPoolNative::GetNextConfigUInt32Value, + INT32 configVariableIndex, + UINT32 *configValueRef, + BOOL *isBooleanRef, + LPCWSTR *appContextConfigNameRef) +{ + FCALL_CONTRACT; + _ASSERTE(configVariableIndex >= 0); + _ASSERTE(configValueRef != NULL); + _ASSERTE(isBooleanRef != NULL); + _ASSERTE(appContextConfigNameRef != NULL); + + if (!ThreadpoolMgr::UsePortableThreadPool()) + { + *configValueRef = 0; + *isBooleanRef = false; + *appContextConfigNameRef = NULL; + return -1; + } + + auto TryGetConfig = + [=](const CLRConfig::ConfigDWORDInfo &configInfo, bool isBoolean, const WCHAR *appContextConfigName) -> bool + { + bool wasNotConfigured = true; + *configValueRef = CLRConfig::GetConfigValue(configInfo, true /* acceptExplicitDefaultFromRegutil */, &wasNotConfigured); + if (wasNotConfigured) + { + return false; + } + + *isBooleanRef = isBoolean; + *appContextConfigNameRef = appContextConfigName; + return true; + }; + + switch (configVariableIndex) + { + case 0: + // Special case for UsePortableThreadPool, which doesn't go into the AppContext + *configValueRef = 1; + *isBooleanRef = true; + *appContextConfigNameRef = NULL; + return 1; + + case 1: if (TryGetConfig(CLRConfig::INTERNAL_ThreadPool_ForceMinWorkerThreads, false, W("System.Threading.ThreadPool.MinThreads"))) { return 2; } // fall through + case 2: if (TryGetConfig(CLRConfig::INTERNAL_ThreadPool_ForceMaxWorkerThreads, false, W("System.Threading.ThreadPool.MaxThreads"))) { return 3; } // fall through + case 3: if (TryGetConfig(CLRConfig::INTERNAL_ThreadPool_DisableStarvationDetection, true, W("System.Threading.ThreadPool.DisableStarvationDetection"))) { return 4; } // fall through + case 4: if (TryGetConfig(CLRConfig::INTERNAL_ThreadPool_DebugBreakOnWorkerStarvation, true, W("System.Threading.ThreadPool.DebugBreakOnWorkerStarvation"))) { return 5; } // fall through + case 5: if (TryGetConfig(CLRConfig::INTERNAL_ThreadPool_EnableWorkerTracking, true, W("System.Threading.ThreadPool.EnableWorkerTracking"))) { return 6; } // fall through + case 6: if (TryGetConfig(CLRConfig::INTERNAL_ThreadPool_UnfairSemaphoreSpinLimit, false, W("System.Threading.ThreadPool.UnfairSemaphoreSpinLimit"))) { return 7; } // fall through + + case 7: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_Disable, true, W("System.Threading.ThreadPool.HillClimbing.Disable"))) { return 8; } // fall through + case 8: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_WavePeriod, false, W("System.Threading.ThreadPool.HillClimbing.WavePeriod"))) { return 9; } // fall through + case 9: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_TargetSignalToNoiseRatio, false, W("System.Threading.ThreadPool.HillClimbing.TargetSignalToNoiseRatio"))) { return 10; } // fall through + case 10: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_ErrorSmoothingFactor, false, W("System.Threading.ThreadPool.HillClimbing.ErrorSmoothingFactor"))) { return 11; } // fall through + case 11: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_WaveMagnitudeMultiplier, false, W("System.Threading.ThreadPool.HillClimbing.WaveMagnitudeMultiplier"))) { return 12; } // fall through + case 12: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_MaxWaveMagnitude, false, W("System.Threading.ThreadPool.HillClimbing.MaxWaveMagnitude"))) { return 13; } // fall through + case 13: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_WaveHistorySize, false, W("System.Threading.ThreadPool.HillClimbing.WaveHistorySize"))) { return 14; } // fall through + case 14: if 
(TryGetConfig(CLRConfig::INTERNAL_HillClimbing_Bias, false, W("System.Threading.ThreadPool.HillClimbing.Bias"))) { return 15; } // fall through + case 15: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_MaxChangePerSecond, false, W("System.Threading.ThreadPool.HillClimbing.MaxChangePerSecond"))) { return 16; } // fall through + case 16: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_MaxChangePerSample, false, W("System.Threading.ThreadPool.HillClimbing.MaxChangePerSample"))) { return 17; } // fall through + case 17: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_MaxSampleErrorPercent, false, W("System.Threading.ThreadPool.HillClimbing.MaxSampleErrorPercent"))) { return 18; } // fall through + case 18: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_SampleIntervalLow, false, W("System.Threading.ThreadPool.HillClimbing.SampleIntervalLow"))) { return 19; } // fall through + case 19: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_SampleIntervalHigh, false, W("System.Threading.ThreadPool.HillClimbing.SampleIntervalHigh"))) { return 20; } // fall through + case 20: if (TryGetConfig(CLRConfig::INTERNAL_HillClimbing_GainExponent, false, W("System.Threading.ThreadPool.HillClimbing.GainExponent"))) { return 21; } // fall through + + default: + *configValueRef = 0; + *isBooleanRef = false; + *appContextConfigNameRef = NULL; + return -1; + } +} +FCIMPLEND + +/*****************************************************************************************************/ +FCIMPL1(FC_BOOL_RET, ThreadPoolNative::CorCanSetMinIOCompletionThreads, DWORD ioCompletionThreads) +{ + FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, ThreadpoolMgr::UsePortableThreadPool()); + + BOOL result = ThreadpoolMgr::CanSetMinIOCompletionThreads(ioCompletionThreads); + FC_RETURN_BOOL(result); +} +FCIMPLEND + +/*****************************************************************************************************/ +FCIMPL1(FC_BOOL_RET, ThreadPoolNative::CorCanSetMaxIOCompletionThreads, DWORD ioCompletionThreads) +{ + FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, ThreadpoolMgr::UsePortableThreadPool()); + + BOOL result = ThreadpoolMgr::CanSetMaxIOCompletionThreads(ioCompletionThreads); + FC_RETURN_BOOL(result); +} +FCIMPLEND + +/*****************************************************************************************************/ FCIMPL2(FC_BOOL_RET, ThreadPoolNative::CorSetMaxThreads,DWORD workerThreads, DWORD completionPortThreads) { FCALL_CONTRACT; @@ -207,6 +308,8 @@ INT64 QCALLTYPE ThreadPoolNative::GetCompletedWorkItemCount() FCIMPL0(INT64, ThreadPoolNative::GetPendingUnmanagedWorkItemCount) { FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, !ThreadpoolMgr::UsePortableThreadPool()); + return PerAppDomainTPCountList::GetUnmanagedTPCount()->GetNumRequests(); } FCIMPLEND @@ -216,6 +319,7 @@ FCIMPLEND FCIMPL0(VOID, ThreadPoolNative::NotifyRequestProgress) { FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, !ThreadpoolMgr::UsePortableThreadPool()); _ASSERTE(ThreadpoolMgr::IsInitialized()); // can't be here without requesting a thread first ThreadpoolMgr::NotifyWorkItemCompleted(); @@ -240,6 +344,8 @@ FCIMPLEND FCIMPL1(VOID, ThreadPoolNative::ReportThreadStatus, CLR_BOOL isWorking) { FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, !ThreadpoolMgr::UsePortableThreadPool()); + ThreadpoolMgr::ReportThreadStatus(isWorking); } FCIMPLEND @@ -247,6 +353,7 @@ FCIMPLEND FCIMPL0(FC_BOOL_RET, ThreadPoolNative::NotifyRequestComplete) { FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, !ThreadpoolMgr::UsePortableThreadPool()); 
_ASSERTE(ThreadpoolMgr::IsInitialized()); // can't be here without requesting a thread first ThreadpoolMgr::NotifyWorkItemCompleted(); @@ -309,6 +416,7 @@ FCIMPLEND FCIMPL0(FC_BOOL_RET, ThreadPoolNative::GetEnableWorkerTracking) { FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, !ThreadpoolMgr::UsePortableThreadPool()); BOOL result = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_ThreadPool_EnableWorkerTracking) ? TRUE : FALSE; FC_RETURN_BOOL(result); @@ -396,6 +504,7 @@ FCIMPL5(LPVOID, ThreadPoolNative::CorRegisterWaitForSingleObject, Object* registeredWaitObjectUNSAFE) { FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, !ThreadpoolMgr::UsePortableThreadPool()); HANDLE handle = 0; struct _gc @@ -446,6 +555,26 @@ FCIMPL5(LPVOID, ThreadPoolNative::CorRegisterWaitForSingleObject, } FCIMPLEND +#ifdef TARGET_WINDOWS // the IO completion thread pool is currently only available on Windows +FCIMPL1(void, ThreadPoolNative::CorQueueWaitCompletion, Object* completeWaitWorkItemObjectUNSAFE) +{ + FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, ThreadpoolMgr::UsePortableThreadPool()); + + OBJECTREF completeWaitWorkItemObject = ObjectToOBJECTREF(completeWaitWorkItemObjectUNSAFE); + HELPER_METHOD_FRAME_BEGIN_1(completeWaitWorkItemObject); + + _ASSERTE(completeWaitWorkItemObject != NULL); + + OBJECTHANDLE completeWaitWorkItemObjectHandle = GetAppDomain()->CreateHandle(completeWaitWorkItemObject); + ThreadpoolMgr::PostQueuedCompletionStatus( + (LPOVERLAPPED)completeWaitWorkItemObjectHandle, + ThreadpoolMgr::ManagedWaitIOCompletionCallback); + + HELPER_METHOD_FRAME_END(); +} +FCIMPLEND +#endif // TARGET_WINDOWS VOID QueueUserWorkItemManagedCallback(PVOID pArg) { @@ -473,6 +602,8 @@ BOOL QCALLTYPE ThreadPoolNative::RequestWorkerThread() BEGIN_QCALL; + _ASSERTE_ALL_BUILDS(__FILE__, !ThreadpoolMgr::UsePortableThreadPool()); + ThreadpoolMgr::EnsureInitialized(); ThreadpoolMgr::SetAppDomainRequestsActive(); @@ -492,12 +623,30 @@ BOOL QCALLTYPE ThreadPoolNative::RequestWorkerThread() return res; } +BOOL QCALLTYPE ThreadPoolNative::PerformGateActivities(INT32 cpuUtilization) +{ + QCALL_CONTRACT; + + bool needGateThread = false; + + BEGIN_QCALL; + + _ASSERTE_ALL_BUILDS(__FILE__, ThreadpoolMgr::UsePortableThreadPool()); + + ThreadpoolMgr::PerformGateActivities(cpuUtilization); + needGateThread = ThreadpoolMgr::NeedGateThreadForIOCompletions(); + + END_QCALL; + + return needGateThread; +} /********************************************************************************************************************/ FCIMPL2(FC_BOOL_RET, ThreadPoolNative::CorUnregisterWait, LPVOID WaitHandle, Object* objectToNotify) { FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, !ThreadpoolMgr::UsePortableThreadPool()); BOOL retVal = false; SAFEHANDLEREF refSH = (SAFEHANDLEREF) ObjectToOBJECTREF(objectToNotify); @@ -553,6 +702,7 @@ FCIMPLEND FCIMPL1(void, ThreadPoolNative::CorWaitHandleCleanupNative, LPVOID WaitHandle) { FCALL_CONTRACT; + _ASSERTE_ALL_BUILDS(__FILE__, !ThreadpoolMgr::UsePortableThreadPool()); HELPER_METHOD_FRAME_BEGIN_0(); @@ -565,6 +715,18 @@ FCIMPLEND /********************************************************************************************************************/ +void QCALLTYPE ThreadPoolNative::ExecuteUnmanagedThreadPoolWorkItem(LPTHREAD_START_ROUTINE callback, LPVOID state) +{ + QCALL_CONTRACT; + + BEGIN_QCALL; + + _ASSERTE_ALL_BUILDS(__FILE__, ThreadpoolMgr::UsePortableThreadPool()); + callback(state); + + END_QCALL; +} + 
/********************************************************************************************************************/ struct BindIoCompletion_Args diff --git a/src/coreclr/src/vm/comthreadpool.h b/src/coreclr/src/vm/comthreadpool.h index 9807482..e4e8769 100644 --- a/src/coreclr/src/vm/comthreadpool.h +++ b/src/coreclr/src/vm/comthreadpool.h @@ -22,6 +22,13 @@ class ThreadPoolNative { public: + static FCDECL4(INT32, GetNextConfigUInt32Value, + INT32 configVariableIndex, + UINT32 *configValueRef, + BOOL *isBooleanRef, + LPCWSTR *appContextConfigNameRef); + static FCDECL1(FC_BOOL_RET, CorCanSetMinIOCompletionThreads, DWORD ioCompletionThreads); + static FCDECL1(FC_BOOL_RET, CorCanSetMaxIOCompletionThreads, DWORD ioCompletionThreads); static FCDECL2(FC_BOOL_RET, CorSetMaxThreads, DWORD workerThreads, DWORD completionPortThreads); static FCDECL2(VOID, CorGetMaxThreads, DWORD* workerThreads, DWORD* completionPortThreads); static FCDECL2(FC_BOOL_RET, CorSetMinThreads, DWORD workerThreads, DWORD completionPortThreads); @@ -38,20 +45,25 @@ public: static FCDECL1(void, ReportThreadStatus, CLR_BOOL isWorking); - static FCDECL5(LPVOID, CorRegisterWaitForSingleObject, Object* waitObjectUNSAFE, Object* stateUNSAFE, UINT32 timeout, CLR_BOOL executeOnlyOnce, Object* registeredWaitObjectUNSAFE); +#ifdef TARGET_WINDOWS // the IO completion thread pool is currently only available on Windows + static FCDECL1(void, CorQueueWaitCompletion, Object* completeWaitWorkItemObjectUNSAFE); +#endif static BOOL QCALLTYPE RequestWorkerThread(); + static BOOL QCALLTYPE PerformGateActivities(INT32 cpuUtilization); static FCDECL1(FC_BOOL_RET, CorPostQueuedCompletionStatus, LPOVERLAPPED lpOverlapped); static FCDECL2(FC_BOOL_RET, CorUnregisterWait, LPVOID WaitHandle, Object * objectToNotify); static FCDECL1(void, CorWaitHandleCleanupNative, LPVOID WaitHandle); static FCDECL1(FC_BOOL_RET, CorBindIoCompletionCallback, HANDLE fileHandle); + + static void QCALLTYPE ExecuteUnmanagedThreadPoolWorkItem(LPTHREAD_START_ROUTINE callback, LPVOID state); }; class AppDomainTimerNative diff --git a/src/coreclr/src/vm/comwaithandle.cpp b/src/coreclr/src/vm/comwaithandle.cpp index d43693e..5c9181a 100644 --- a/src/coreclr/src/vm/comwaithandle.cpp +++ b/src/coreclr/src/vm/comwaithandle.cpp @@ -35,6 +35,25 @@ FCIMPL2(INT32, WaitHandleNative::CorWaitOneNative, HANDLE handle, INT32 timeout) } FCIMPLEND +#ifdef TARGET_UNIX +INT32 QCALLTYPE WaitHandleNative::CorWaitOnePrioritizedNative(HANDLE handle, INT32 timeoutMs) +{ + QCALL_CONTRACT; + + DWORD result = WAIT_FAILED; + + BEGIN_QCALL; + + _ASSERTE(handle != NULL); + _ASSERTE(handle != INVALID_HANDLE_VALUE); + + result = PAL_WaitForSingleObjectPrioritized(handle, timeoutMs); + + END_QCALL; + return (INT32)result; +} +#endif + FCIMPL4(INT32, WaitHandleNative::CorWaitMultipleNative, HANDLE *handleArray, INT32 numHandles, CLR_BOOL waitForAll, INT32 timeout) { FCALL_CONTRACT; diff --git a/src/coreclr/src/vm/comwaithandle.h b/src/coreclr/src/vm/comwaithandle.h index 92a48ec..4bdee22 100644 --- a/src/coreclr/src/vm/comwaithandle.h +++ b/src/coreclr/src/vm/comwaithandle.h @@ -19,6 +19,9 @@ class WaitHandleNative { public: static FCDECL2(INT32, CorWaitOneNative, HANDLE handle, INT32 timeout); +#ifdef TARGET_UNIX + static INT32 QCALLTYPE CorWaitOnePrioritizedNative(HANDLE handle, INT32 timeoutMs); +#endif static FCDECL4(INT32, CorWaitMultipleNative, HANDLE *handleArray, INT32 numHandles, CLR_BOOL waitForAll, INT32 timeout); static FCDECL3(INT32, CorSignalAndWaitOneNative, HANDLE waitHandleSignalUNSAFE, 
HANDLE waitHandleWaitUNSAFE, INT32 timeout); }; diff --git a/src/coreclr/src/vm/corelib.h b/src/coreclr/src/vm/corelib.h index bd5a8a6..05cdb97 100644 --- a/src/coreclr/src/vm/corelib.h +++ b/src/coreclr/src/vm/corelib.h @@ -905,6 +905,13 @@ DEFINE_METHOD(TP_WAIT_CALLBACK, PERFORM_WAIT_CALLBACK, Perf DEFINE_CLASS(TIMER_QUEUE, Threading, TimerQueue) DEFINE_METHOD(TIMER_QUEUE, APPDOMAIN_TIMER_CALLBACK, AppDomainTimerCallback, SM_Int_RetVoid) +DEFINE_CLASS(THREAD_POOL, Threading, ThreadPool) +DEFINE_METHOD(THREAD_POOL, ENSURE_GATE_THREAD_RUNNING, EnsureGateThreadRunning, SM_RetVoid) +DEFINE_METHOD(THREAD_POOL, UNSAFE_QUEUE_UNMANAGED_WORK_ITEM, UnsafeQueueUnmanagedWorkItem, SM_IntPtr_IntPtr_RetVoid) + +DEFINE_CLASS(COMPLETE_WAIT_THREAD_POOL_WORK_ITEM, Threading, CompleteWaitThreadPoolWorkItem) +DEFINE_METHOD(COMPLETE_WAIT_THREAD_POOL_WORK_ITEM, COMPLETE_WAIT, CompleteWait, IM_RetVoid) + DEFINE_CLASS(TIMESPAN, System, TimeSpan) diff --git a/src/coreclr/src/vm/ecalllist.h b/src/coreclr/src/vm/ecalllist.h index c2bcec4..7d79d0b 100644 --- a/src/coreclr/src/vm/ecalllist.h +++ b/src/coreclr/src/vm/ecalllist.h @@ -596,6 +596,7 @@ FCFuncStart(gThreadFuncs) #undef Sleep FCFuncElement("SleepInternal", ThreadNative::Sleep) #define Sleep(a) Dont_Use_Sleep(a) + QCFuncElement("UninterruptibleSleep0", ThreadNative::UninterruptibleSleep0) FCFuncElement("SetStart", ThreadNative::SetStart) QCFuncElement("InformThreadNameChange", ThreadNative::InformThreadNameChange) FCFuncElement("SpinWaitInternal", ThreadNative::SpinWait) @@ -610,6 +611,7 @@ FCFuncStart(gThreadFuncs) FCFuncElement("IsBackgroundNative", ThreadNative::IsBackground) FCFuncElement("SetBackgroundNative", ThreadNative::SetBackground) FCFuncElement("get_IsThreadPoolThread", ThreadNative::IsThreadpoolThread) + FCFuncElement("set_IsThreadPoolThread", ThreadNative::SetIsThreadpoolThread) FCFuncElement("GetPriorityNative", ThreadNative::GetPriority) FCFuncElement("SetPriorityNative", ThreadNative::SetPriority) QCFuncElement("GetCurrentOSThreadId", ThreadNative::GetCurrentOSThreadId) @@ -629,22 +631,29 @@ FCFuncStart(gThreadFuncs) FCFuncEnd() FCFuncStart(gThreadPoolFuncs) + FCFuncElement("GetNextConfigUInt32Value", ThreadPoolNative::GetNextConfigUInt32Value) FCFuncElement("PostQueuedCompletionStatus", ThreadPoolNative::CorPostQueuedCompletionStatus) FCFuncElement("GetAvailableThreadsNative", ThreadPoolNative::CorGetAvailableThreads) + FCFuncElement("CanSetMinIOCompletionThreads", ThreadPoolNative::CorCanSetMinIOCompletionThreads) + FCFuncElement("CanSetMaxIOCompletionThreads", ThreadPoolNative::CorCanSetMaxIOCompletionThreads) FCFuncElement("SetMinThreadsNative", ThreadPoolNative::CorSetMinThreads) FCFuncElement("GetMinThreadsNative", ThreadPoolNative::CorGetMinThreads) - FCFuncElement("get_ThreadCount", ThreadPoolNative::GetThreadCount) + FCFuncElement("GetThreadCount", ThreadPoolNative::GetThreadCount) QCFuncElement("GetCompletedWorkItemCount", ThreadPoolNative::GetCompletedWorkItemCount) - FCFuncElement("get_PendingUnmanagedWorkItemCount", ThreadPoolNative::GetPendingUnmanagedWorkItemCount) + FCFuncElement("GetPendingUnmanagedWorkItemCount", ThreadPoolNative::GetPendingUnmanagedWorkItemCount) FCFuncElement("RegisterWaitForSingleObjectNative", ThreadPoolNative::CorRegisterWaitForSingleObject) +#ifdef TARGET_WINDOWS // the IO completion thread pool is currently only available on Windows + FCFuncElement("QueueWaitCompletionNative", ThreadPoolNative::CorQueueWaitCompletion) +#endif FCFuncElement("BindIOCompletionCallbackNative", 
ThreadPoolNative::CorBindIoCompletionCallback) FCFuncElement("SetMaxThreadsNative", ThreadPoolNative::CorSetMaxThreads) FCFuncElement("GetMaxThreadsNative", ThreadPoolNative::CorGetMaxThreads) - FCFuncElement("NotifyWorkItemComplete", ThreadPoolNative::NotifyRequestComplete) + FCFuncElement("NotifyWorkItemCompleteNative", ThreadPoolNative::NotifyRequestComplete) FCFuncElement("NotifyWorkItemProgressNative", ThreadPoolNative::NotifyRequestProgress) - FCFuncElement("GetEnableWorkerTracking", ThreadPoolNative::GetEnableWorkerTracking) - FCFuncElement("ReportThreadStatus", ThreadPoolNative::ReportThreadStatus) - QCFuncElement("RequestWorkerThread", ThreadPoolNative::RequestWorkerThread) + FCFuncElement("GetEnableWorkerTrackingNative", ThreadPoolNative::GetEnableWorkerTracking) + FCFuncElement("ReportThreadStatusNative", ThreadPoolNative::ReportThreadStatus) + QCFuncElement("RequestWorkerThreadNative", ThreadPoolNative::RequestWorkerThread) + QCFuncElement("PerformRuntimeSpecificGateActivitiesNative", ThreadPoolNative::PerformGateActivities) FCFuncEnd() FCFuncStart(gTimerFuncs) @@ -653,18 +662,27 @@ FCFuncStart(gTimerFuncs) QCFuncElement("DeleteAppDomainTimer", AppDomainTimerNative::DeleteAppDomainTimer) FCFuncEnd() - FCFuncStart(gRegisteredWaitHandleFuncs) FCFuncElement("UnregisterWaitNative", ThreadPoolNative::CorUnregisterWait) FCFuncElement("WaitHandleCleanupNative", ThreadPoolNative::CorWaitHandleCleanupNative) FCFuncEnd() +FCFuncStart(gUnmanagedThreadPoolWorkItemFuncs) + QCFuncElement("ExecuteUnmanagedThreadPoolWorkItem", ThreadPoolNative::ExecuteUnmanagedThreadPoolWorkItem) +FCFuncEnd() + FCFuncStart(gWaitHandleFuncs) FCFuncElement("WaitOneCore", WaitHandleNative::CorWaitOneNative) FCFuncElement("WaitMultipleIgnoringSyncContext", WaitHandleNative::CorWaitMultipleNative) FCFuncElement("SignalAndWaitNative", WaitHandleNative::CorSignalAndWaitOneNative) FCFuncEnd() +#ifdef TARGET_UNIX +FCFuncStart(gLowLevelLifoSemaphoreFuncs) + QCFuncElement("WaitNative", WaitHandleNative::CorWaitOnePrioritizedNative) +FCFuncEnd() +#endif + #ifdef FEATURE_COMINTEROP FCFuncStart(gVariantFuncs) FCFuncElement("SetFieldsObject", COMVariant::SetFieldsObject) @@ -1151,6 +1169,9 @@ FCClassElement("Interlocked", "System.Threading", gInterlockedFuncs) FCClassElement("Kernel32", "", gPalKernel32Funcs) #endif FCClassElement("LoaderAllocatorScout", "System.Reflection", gLoaderAllocatorFuncs) +#ifdef TARGET_UNIX +FCClassElement("LowLevelLifoSemaphore", "System.Threading", gLowLevelLifoSemaphoreFuncs) +#endif FCClassElement("Marshal", "System.Runtime.InteropServices", gInteropMarshalFuncs) FCClassElement("Math", "System", gMathFuncs) FCClassElement("MathF", "System", gMathFFuncs) @@ -1178,7 +1199,7 @@ FCClassElement("OverlappedData", "System.Threading", gOverlappedFuncs) FCClassElement("PunkSafeHandle", "System.Reflection.Emit", gSymWrapperCodePunkSafeHandleFuncs) -FCClassElement("RegisteredWaitHandleSafe", "System.Threading", gRegisteredWaitHandleFuncs) +FCClassElement("RegisteredWaitHandle", "System.Threading", gRegisteredWaitHandleFuncs) FCClassElement("RuntimeAssembly", "System.Reflection", gRuntimeAssemblyFuncs) FCClassElement("RuntimeFieldHandle", "System", gCOMFieldHandleNewFuncs) @@ -1202,6 +1223,7 @@ FCClassElement("TypeBuilder", "System.Reflection.Emit", gCOMClassWriter) FCClassElement("TypeLoadException", "System", gTypeLoadExceptionFuncs) FCClassElement("TypeNameParser", "System", gTypeNameParser) FCClassElement("TypedReference", "System", gTypedReferenceFuncs) +FCClassElement("UnmanagedThreadPoolWorkItem", 
"System.Threading", gUnmanagedThreadPoolWorkItemFuncs) #ifdef FEATURE_UTF8STRING FCClassElement("Utf8String", "System", gUtf8StringFuncs) #endif // FEATURE_UTF8STRING diff --git a/src/coreclr/src/vm/eventpipeconfiguration.cpp b/src/coreclr/src/vm/eventpipeconfiguration.cpp index d07f280..346d015 100644 --- a/src/coreclr/src/vm/eventpipeconfiguration.cpp +++ b/src/coreclr/src/vm/eventpipeconfiguration.cpp @@ -8,6 +8,7 @@ #include "eventpipesessionprovider.h" #include "eventpipeprovider.h" #include "eventpipesession.h" +#include "win32threadpool.h" #ifdef FEATURE_PERFTRACING @@ -146,10 +147,18 @@ bool EventPipeConfiguration::RegisterProvider(EventPipeProvider &provider, Event } CONTRACTL_END; - // See if we've already registered this provider. - EventPipeProvider *pExistingProvider = GetProviderNoLock(provider.GetProviderName()); - if (pExistingProvider != nullptr) - return false; + // See if we've already registered this provider. When the portable thread pool is being used, allow there to be multiple + // DotNETRuntime providers, as the portable thread pool temporarily uses an event source on the managed side with the same + // provider name. + // TODO: This change to allow multiple DotNETRuntime providers is temporary to get EventPipe working for + // PortableThreadPoolEventSource. Once a long-term solution is figured out, this change should be reverted. See + // https://github.com/dotnet/runtime/issues/38763 for more information. + if (!ThreadpoolMgr::UsePortableThreadPool() || !provider.GetProviderName().Equals(W("Microsoft-Windows-DotNETRuntime"))) + { + EventPipeProvider *pExistingProvider = GetProviderNoLock(provider.GetProviderName()); + if (pExistingProvider != nullptr) + return false; + } // The provider list should be non-NULL, but can be NULL on shutdown. 
if (m_pProviderList != nullptr) diff --git a/src/coreclr/src/vm/hillclimbing.cpp b/src/coreclr/src/vm/hillclimbing.cpp index 0c6be1e..d8b1da1 100644 --- a/src/coreclr/src/vm/hillclimbing.cpp +++ b/src/coreclr/src/vm/hillclimbing.cpp @@ -43,6 +43,8 @@ void HillClimbing::Initialize() } CONTRACTL_END; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + m_wavePeriod = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_HillClimbing_WavePeriod); m_maxThreadWaveMagnitude = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_HillClimbing_MaxWaveMagnitude); m_threadMagnitudeMultiplier = (double)CLRConfig::GetConfigValue(CLRConfig::INTERNAL_HillClimbing_WaveMagnitudeMultiplier) / 100.0; @@ -78,6 +80,7 @@ void HillClimbing::Initialize() int HillClimbing::Update(int currentThreadCount, double sampleDuration, int numCompletions, int* pNewSampleInterval) { LIMITED_METHOD_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); #ifdef DACCESS_COMPILE return 1; @@ -347,6 +350,7 @@ int HillClimbing::Update(int currentThreadCount, double sampleDuration, int numC void HillClimbing::ForceChange(int newThreadCount, HillClimbingStateTransition transition) { LIMITED_METHOD_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); if (newThreadCount != m_lastThreadCount) { @@ -410,6 +414,7 @@ void HillClimbing::LogTransition(int threadCount, double throughput, HillClimbin Complex HillClimbing::GetWaveComponent(double* samples, int sampleCount, double period) { LIMITED_METHOD_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); _ASSERTE(sampleCount >= period); //can't measure a wave that doesn't fit _ASSERTE(period >= 2); //can't measure above the Nyquist frequency diff --git a/src/coreclr/src/vm/object.h b/src/coreclr/src/vm/object.h index bcfe1be..0604d65 100644 --- a/src/coreclr/src/vm/object.h +++ b/src/coreclr/src/vm/object.h @@ -1371,9 +1371,12 @@ private: Thread *m_InternalThread; INT32 m_Priority; - //We need to cache the thread id in managed code for perf reasons. + // We need to cache the thread id in managed code for perf reasons. INT32 m_ManagedThreadId; + // Only used by managed code, see comment there + bool m_MayNeedResetForThreadPool; + protected: // the ctor and dtor can do no useful work. 
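
The new m_MayNeedResetForThreadPool field mirrors a managed flag that lets a worker skip the thread-state reset between work items unless user code actually changed something (priority, background status, and so on). A minimal sketch of that reset-on-return pattern; the member names and the specific reset actions are illustrative, not the actual Thread internals:

    using System.Threading;

    // Sketch: only pay for resetting thread state when the previous work item dirtied it.
    internal static class ThreadPoolThreadReset
    {
        [ThreadStatic]
        private static bool t_mayNeedReset;

        // Called from the (hypothetical) setters that user code can reach, e.g. a priority change.
        internal static void NoteThreadStateChanged() => t_mayNeedReset = true;

        // Called by the dispatch loop after each work item completes.
        internal static void ResetIfNecessary()
        {
            if (!t_mayNeedReset)
                return; // fast path: nothing to undo

            t_mayNeedReset = false;
            Thread currentThread = Thread.CurrentThread;
            currentThread.Priority = ThreadPriority.Normal;
            currentThread.IsBackground = true;
        }
    }
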
ThreadBaseObject() {LIMITED_METHOD_CONTRACT;}; diff --git a/src/coreclr/src/vm/threadpoolrequest.cpp b/src/coreclr/src/vm/threadpoolrequest.cpp index 470f8b4..c155f61 100644 --- a/src/coreclr/src/vm/threadpoolrequest.cpp +++ b/src/coreclr/src/vm/threadpoolrequest.cpp @@ -38,7 +38,11 @@ ArrayListStatic PerAppDomainTPCountList::s_appDomainIndexList; void PerAppDomainTPCountList::InitAppDomainIndexList() { LIMITED_METHOD_CONTRACT; - s_appDomainIndexList.Init(); + + if (!ThreadpoolMgr::UsePortableThreadPool()) + { + s_appDomainIndexList.Init(); + } } @@ -56,6 +60,11 @@ TPIndex PerAppDomainTPCountList::AddNewTPIndex() { STANDARD_VM_CONTRACT; + if (ThreadpoolMgr::UsePortableThreadPool()) + { + return TPIndex(); + } + DWORD count = s_appDomainIndexList.GetCount(); DWORD i = FindFirstFreeTpEntry(); @@ -88,7 +97,7 @@ TPIndex PerAppDomainTPCountList::AddNewTPIndex() DWORD PerAppDomainTPCountList::FindFirstFreeTpEntry() { - CONTRACTL + CONTRACTL { NOTHROW; MODE_ANY; @@ -96,6 +105,8 @@ DWORD PerAppDomainTPCountList::FindFirstFreeTpEntry() } CONTRACTL_END; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + DWORD DwnumADs = s_appDomainIndexList.GetCount(); DWORD Dwi; IPerAppDomainTPCount * pAdCount; @@ -142,6 +153,12 @@ void PerAppDomainTPCountList::ResetAppDomainIndex(TPIndex index) } CONTRACTL_END; + if (ThreadpoolMgr::UsePortableThreadPool()) + { + _ASSERTE(index.m_dwIndex == TPIndex().m_dwIndex); + return; + } + IPerAppDomainTPCount * pAdCount = dac_cast(s_appDomainIndexList.Get(index.m_dwIndex-1)); _ASSERTE(pAdCount); @@ -168,6 +185,8 @@ bool PerAppDomainTPCountList::AreRequestsPendingInAnyAppDomains() } CONTRACTL_END; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + DWORD DwnumADs = s_appDomainIndexList.GetCount(); DWORD Dwi; IPerAppDomainTPCount * pAdCount; @@ -217,6 +236,8 @@ LONG PerAppDomainTPCountList::GetAppDomainIndexForThreadpoolDispatch() } CONTRACTL_END; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + LONG hint = s_ADHint; DWORD count = s_appDomainIndexList.GetCount(); IPerAppDomainTPCount * pAdCount; @@ -298,6 +319,8 @@ HintDone: void UnManagedPerAppDomainTPCount::SetAppDomainRequestsActive() { WRAPPER_NO_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + #ifndef DACCESS_COMPILE LONG count = VolatileLoad(&m_outstandingThreadRequestCount); while (count < (LONG)ThreadpoolMgr::NumberOfProcessors) @@ -317,6 +340,8 @@ void UnManagedPerAppDomainTPCount::SetAppDomainRequestsActive() bool FORCEINLINE UnManagedPerAppDomainTPCount::TakeActiveRequest() { LIMITED_METHOD_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + LONG count = VolatileLoad(&m_outstandingThreadRequestCount); while (count > 0) @@ -344,6 +369,8 @@ void UnManagedPerAppDomainTPCount::QueueUnmanagedWorkRequest(LPTHREAD_START_ROUT } CONTRACTL_END;; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + #ifndef DACCESS_COMPILE WorkRequestHolder pWorkRequest; @@ -384,7 +411,9 @@ PVOID UnManagedPerAppDomainTPCount::DeQueueUnManagedWorkRequest(bool* lastOne) GC_TRIGGERS; MODE_ANY; } - CONTRACTL_END;; + CONTRACTL_END; + + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); *lastOne = true; @@ -408,6 +437,8 @@ PVOID UnManagedPerAppDomainTPCount::DeQueueUnManagedWorkRequest(bool* lastOne) // void UnManagedPerAppDomainTPCount::DispatchWorkItem(bool* foundWork, bool* wasNotRecalled) { + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + #ifndef DACCESS_COMPILE *foundWork = false; *wasNotRecalled = true; @@ -533,6 +564,7 @@ void ManagedPerAppDomainTPCount::SetAppDomainRequestsActive() // one. 
// + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); _ASSERTE(m_index.m_dwIndex != UNUSED_THREADPOOL_INDEX); #ifndef DACCESS_COMPILE @@ -554,6 +586,8 @@ void ManagedPerAppDomainTPCount::SetAppDomainRequestsActive() void ManagedPerAppDomainTPCount::ClearAppDomainRequestsActive() { LIMITED_METHOD_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + //This function should either be called by managed code or during AD unload, but before //the TpIndex is set to unused. @@ -572,6 +606,8 @@ void ManagedPerAppDomainTPCount::ClearAppDomainRequestsActive() bool ManagedPerAppDomainTPCount::TakeActiveRequest() { LIMITED_METHOD_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + LONG count = VolatileLoad(&m_numRequestsPending); while (count > 0) { @@ -592,6 +628,8 @@ bool ManagedPerAppDomainTPCount::TakeActiveRequest() // void ManagedPerAppDomainTPCount::DispatchWorkItem(bool* foundWork, bool* wasNotRecalled) { + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + *foundWork = false; *wasNotRecalled = true; diff --git a/src/coreclr/src/vm/threadpoolrequest.h b/src/coreclr/src/vm/threadpoolrequest.h index b741924..1f7b335 100644 --- a/src/coreclr/src/vm/threadpoolrequest.h +++ b/src/coreclr/src/vm/threadpoolrequest.h @@ -156,23 +156,6 @@ public: ResetState(); } - inline void InitResources() - { - CONTRACTL - { - THROWS; - MODE_ANY; - GC_NOTRIGGER; - INJECT_FAULT(COMPlusThrowOM()); - } - CONTRACTL_END; - - } - - inline void CleanupResources() - { - } - inline void ResetState() { LIMITED_METHOD_CONTRACT; diff --git a/src/coreclr/src/vm/threads.cpp b/src/coreclr/src/vm/threads.cpp index ec4a113..588de0e 100644 --- a/src/coreclr/src/vm/threads.cpp +++ b/src/coreclr/src/vm/threads.cpp @@ -7902,15 +7902,24 @@ UINT64 Thread::GetTotalThreadPoolCompletionCount() } CONTRACTL_END; + bool usePortableThreadPool = ThreadpoolMgr::UsePortableThreadPool(); + // enumerate all threads, summing their local counts. ThreadStoreLockHolder tsl; - UINT64 total = GetWorkerThreadPoolCompletionCountOverflow() + GetIOThreadPoolCompletionCountOverflow(); + UINT64 total = GetIOThreadPoolCompletionCountOverflow(); + if (!usePortableThreadPool) + { + total += GetWorkerThreadPoolCompletionCountOverflow(); + } Thread *pThread = NULL; while ((pThread = ThreadStore::GetAllThreadList(pThread, 0, 0)) != NULL) { - total += pThread->m_workerThreadPoolCompletionCount; + if (!usePortableThreadPool) + { + total += pThread->m_workerThreadPoolCompletionCount; + } total += pThread->m_ioThreadPoolCompletionCount; } diff --git a/src/coreclr/src/vm/threads.h b/src/coreclr/src/vm/threads.h index fc99747..7a96455 100644 --- a/src/coreclr/src/vm/threads.h +++ b/src/coreclr/src/vm/threads.h @@ -2505,6 +2505,12 @@ public: return m_State & (Thread::TS_TPWorkerThread | Thread::TS_CompletionPortThread); } + void SetIsThreadPoolThread() + { + LIMITED_METHOD_CONTRACT; + FastInterlockOr((ULONG *)&m_State, Thread::TS_TPWorkerThread); + } + // public suspend functions. System ones are internal, like for GC. User ones // correspond to suspend/resume calls on the exposed System.Thread object. 
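
SetIsThreadPoolThread lets a portable-pool worker mark its VM Thread with TS_TPWorkerThread, so get_IsThreadPoolThread keeps answering correctly, and it goes hand in hand with giving the thread an OS-level name for debuggers. Roughly, worker startup could look like the sketch below; the setter wiring and the helper names are assumptions:

    using System.Threading;

    // Sketch of a portable-pool worker thread's startup sequence.
    internal static class WorkerThread
    {
        private static void WorkerThreadStart()
        {
            Thread currentThread = Thread.CurrentThread;
            currentThread.IsThreadPoolThread = true;      // assumed internal setter backed by SetIsThreadpoolThread
            SetOSThreadName(".NET ThreadPool Worker");    // hypothetical helper: OS-visible name only
            WorkerDispatchLoop();                         // hypothetical dispatch loop; also calls the reset helper above
        }

        private static void SetOSThreadName(string name) { /* platform-specific, elided */ }
        private static void WorkerDispatchLoop() { /* elided */ }
    }
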
static bool SysStartSuspendForDebug(AppDomain *pAppDomain); diff --git a/src/coreclr/src/vm/tieredcompilation.cpp b/src/coreclr/src/vm/tieredcompilation.cpp index 509bf9b..6b241de 100644 --- a/src/coreclr/src/vm/tieredcompilation.cpp +++ b/src/coreclr/src/vm/tieredcompilation.cpp @@ -93,6 +93,7 @@ TieredCompilationManager::TieredCompilationManager() : m_countOfNewMethodsCalledDuringDelay(0), m_methodsPendingCountingForTier1(nullptr), m_tieringDelayTimerHandle(nullptr), + m_doBackgroundWorkTimerHandle(nullptr), m_isBackgroundWorkScheduled(false), m_tier1CallCountingCandidateMethodRecentlyRecorded(false), m_isPendingCallCountingCompletion(false), @@ -566,17 +567,65 @@ void TieredCompilationManager::RequestBackgroundWork() WRAPPER_NO_CONTRACT; _ASSERTE(m_isBackgroundWorkScheduled); + if (ThreadpoolMgr::UsePortableThreadPool()) + { + // QueueUserWorkItem is not intended to be supported in this mode, and there are call sites of this function where + // managed code cannot be called instead to queue a work item. Use a timer with zero due time instead, which would on + // the timer thread call into managed code to queue a work item. + + NewHolder timerContextHolder = new ThreadpoolMgr::TimerInfoContext(); + timerContextHolder->TimerId = 0; + + _ASSERTE(m_doBackgroundWorkTimerHandle == nullptr); + if (!ThreadpoolMgr::CreateTimerQueueTimer( + &m_doBackgroundWorkTimerHandle, + DoBackgroundWorkTimerCallback, + timerContextHolder, + 0 /* DueTime */, + (DWORD)-1 /* Period, non-repeating */, + 0 /* Flags */)) + { + _ASSERTE(m_doBackgroundWorkTimerHandle == nullptr); + ThrowOutOfMemory(); + } + + timerContextHolder.SuppressRelease(); // the timer context is automatically deleted by the timer infrastructure + return; + } + if (!ThreadpoolMgr::QueueUserWorkItem(StaticBackgroundWorkCallback, this, QUEUE_ONLY, TRUE)) { ThrowOutOfMemory(); } } +void WINAPI TieredCompilationManager::DoBackgroundWorkTimerCallback(PVOID parameter, BOOLEAN timerFired) +{ + CONTRACTL + { + THROWS; + GC_TRIGGERS; + MODE_PREEMPTIVE; + } + CONTRACTL_END; + + _ASSERTE(ThreadpoolMgr::UsePortableThreadPool()); + _ASSERTE(timerFired); + + TieredCompilationManager *pTieredCompilationManager = GetAppDomain()->GetTieredCompilationManager(); + _ASSERTE(pTieredCompilationManager->m_doBackgroundWorkTimerHandle != nullptr); + ThreadpoolMgr::DeleteTimerQueueTimer(pTieredCompilationManager->m_doBackgroundWorkTimerHandle, nullptr); + pTieredCompilationManager->m_doBackgroundWorkTimerHandle = nullptr; + + pTieredCompilationManager->DoBackgroundWork(); +} + // This is the initial entrypoint for the background thread, called by // the threadpool. 
DWORD WINAPI TieredCompilationManager::StaticBackgroundWorkCallback(void *args) { STANDARD_VM_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); TieredCompilationManager * pTieredCompilationManager = (TieredCompilationManager *)args; pTieredCompilationManager->DoBackgroundWork(); @@ -590,6 +639,7 @@ DWORD WINAPI TieredCompilationManager::StaticBackgroundWorkCallback(void *args) void TieredCompilationManager::DoBackgroundWork() { WRAPPER_NO_CONTRACT; + _ASSERTE(m_doBackgroundWorkTimerHandle == nullptr); AutoResetIsBackgroundWorkScheduled autoResetIsBackgroundWorkScheduled(this); diff --git a/src/coreclr/src/vm/tieredcompilation.h b/src/coreclr/src/vm/tieredcompilation.h index b3f32a6..87ed936 100644 --- a/src/coreclr/src/vm/tieredcompilation.h +++ b/src/coreclr/src/vm/tieredcompilation.h @@ -57,6 +57,7 @@ public: void ScheduleBackgroundWork(); private: void RequestBackgroundWork(); + static void WINAPI DoBackgroundWorkTimerCallback(PVOID parameter, BOOLEAN timerFired); static DWORD StaticBackgroundWorkCallback(void* args); void DoBackgroundWork(); @@ -109,13 +110,12 @@ private: UINT32 m_countOfNewMethodsCalledDuringDelay; SArray* m_methodsPendingCountingForTier1; HANDLE m_tieringDelayTimerHandle; + HANDLE m_doBackgroundWorkTimerHandle; bool m_isBackgroundWorkScheduled; bool m_tier1CallCountingCandidateMethodRecentlyRecorded; bool m_isPendingCallCountingCompletion; bool m_recentlyRequestedCallCountingCompletionAgain; - CLREvent m_asyncWorkDoneEvent; - #endif // FEATURE_TIERED_COMPILATION }; diff --git a/src/coreclr/src/vm/win32threadpool.cpp b/src/coreclr/src/vm/win32threadpool.cpp index 20d65a8..9ea3bfc 100644 --- a/src/coreclr/src/vm/win32threadpool.cpp +++ b/src/coreclr/src/vm/win32threadpool.cpp @@ -91,7 +91,6 @@ SVAL_IMPL(LONG,ThreadpoolMgr,MinLimitTotalWorkerThreads); // = MaxLimit SVAL_IMPL(LONG,ThreadpoolMgr,MaxLimitTotalWorkerThreads); // = MaxLimitCPThreadsPerCPU * number of CPUS SVAL_IMPL(LONG,ThreadpoolMgr,cpuUtilization); -LONG ThreadpoolMgr::cpuUtilizationAverage = 0; HillClimbing ThreadpoolMgr::HillClimbingInstance; @@ -112,10 +111,11 @@ int ThreadpoolMgr::ThreadAdjustmentInterval; #define GATE_THREAD_DELAY 500 /*milliseconds*/ #define GATE_THREAD_DELAY_TOLERANCE 50 /*milliseconds*/ #define DELAY_BETWEEN_SUSPENDS (5000 + GATE_THREAD_DELAY) // time to delay between suspensions -#define SUSPEND_TIME (GATE_THREAD_DELAY + 100) // milliseconds to suspend during SuspendProcessing LONG ThreadpoolMgr::Initialization=0; // indicator of whether the threadpool is initialized. 
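
The tiered-compilation change above cannot always queue a managed work item directly (some call sites cannot call managed code), so it arms a one-shot timer with a zero due time and lets the timer thread do the queuing. The same marshal-through-a-timer idea, reduced to self-contained managed code purely for illustration:

    using System;
    using System.Threading;

    // Conceptual sketch of the zero due-time, non-repeating timer trick used by
    // RequestBackgroundWork/DoBackgroundWorkTimerCallback above.
    internal static class DeferredQueueing
    {
        internal static void QueueViaTimer(Action work)
        {
            Timer? timer = null;
            timer = new Timer(_ =>
            {
                timer!.Dispose();   // one-shot: tear the timer down before running the real work
                ThreadPool.UnsafeQueueUserWorkItem(state => ((Action)state!)(), work);
            }, null, Timeout.Infinite, Timeout.Infinite);

            // Arm the timer only after the local is assigned, so the callback can safely dispose it.
            timer.Change(dueTime: 0, period: Timeout.Infinite);
        }
    }
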
+bool ThreadpoolMgr::s_usePortableThreadPool = false; + // Cacheline aligned, hot variable DECLSPEC_ALIGN(MAX_CACHE_LINE_SIZE) unsigned int ThreadpoolMgr::LastDequeueTime; // used to determine if work items are getting thread starved @@ -144,10 +144,6 @@ Thread *ThreadpoolMgr::pTimerThread=NULL; // Cacheline aligned, hot variable DECLSPEC_ALIGN(MAX_CACHE_LINE_SIZE) DWORD ThreadpoolMgr::LastTickCount; -#ifdef _DEBUG -DWORD ThreadpoolMgr::TickCountAdjustment=0; -#endif - // Cacheline aligned, hot variable DECLSPEC_ALIGN(MAX_CACHE_LINE_SIZE) LONG ThreadpoolMgr::GateThreadStatus=GATE_THREAD_STATUS_NOT_RUNNING; @@ -290,6 +286,8 @@ DWORD GetDefaultMaxLimitWorkerThreads(DWORD minLimit) } CONTRACTL_END; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + // // We determine the max limit for worker threads as follows: // @@ -328,12 +326,16 @@ DWORD GetDefaultMaxLimitWorkerThreads(DWORD minLimit) DWORD GetForceMinWorkerThreadsValue() { WRAPPER_NO_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + return Configuration::GetKnobDWORDValue(W("System.Threading.ThreadPool.MinThreads"), CLRConfig::INTERNAL_ThreadPool_ForceMinWorkerThreads); } DWORD GetForceMaxWorkerThreadsValue() { WRAPPER_NO_CONTRACT; + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + return Configuration::GetKnobDWORDValue(W("System.Threading.ThreadPool.MaxThreads"), CLRConfig::INTERNAL_ThreadPool_ForceMaxWorkerThreads); } @@ -351,9 +353,6 @@ BOOL ThreadpoolMgr::Initialize() BOOL bRet = FALSE; BOOL bExceptionCaught = FALSE; - UnManagedPerAppDomainTPCount* pADTPCount; - pADTPCount = PerAppDomainTPCountList::GetUnmanagedTPCount(); - #ifndef TARGET_UNIX //ThreadPool_CPUGroup CPUGroupInfo::EnsureInitialized(); @@ -368,17 +367,22 @@ BOOL ThreadpoolMgr::Initialize() EX_TRY { - WorkerThreadSpinLimit = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_ThreadPool_UnfairSemaphoreSpinLimit); - IsHillClimbingDisabled = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_HillClimbing_Disable) != 0; - ThreadAdjustmentInterval = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_HillClimbing_SampleIntervalLow); + if (!UsePortableThreadPool()) + { + WorkerThreadSpinLimit = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_ThreadPool_UnfairSemaphoreSpinLimit); + IsHillClimbingDisabled = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_HillClimbing_Disable) != 0; + ThreadAdjustmentInterval = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_HillClimbing_SampleIntervalLow); - pADTPCount->InitResources(); + WaitThreadsCriticalSection.Init(CrstThreadpoolWaitThreads); + } WorkerCriticalSection.Init(CrstThreadpoolWorker); - WaitThreadsCriticalSection.Init(CrstThreadpoolWaitThreads); TimerQueueCriticalSection.Init(CrstThreadpoolTimerQueue); - // initialize WaitThreadsHead - InitializeListHead(&WaitThreadsHead); + if (!UsePortableThreadPool()) + { + // initialize WaitThreadsHead + InitializeListHead(&WaitThreadsHead); + } // initialize TimerQueue InitializeListHead(&TimerQueue); @@ -387,11 +391,14 @@ BOOL ThreadpoolMgr::Initialize() RetiredCPWakeupEvent->CreateAutoEvent(FALSE); _ASSERTE(RetiredCPWakeupEvent->IsValid()); - WorkerSemaphore = new CLRLifoSemaphore(); - WorkerSemaphore->Create(0, ThreadCounter::MaxPossibleCount); + if (!UsePortableThreadPool()) + { + WorkerSemaphore = new CLRLifoSemaphore(); + WorkerSemaphore->Create(0, ThreadCounter::MaxPossibleCount); - RetiredWorkerSemaphore = new CLRLifoSemaphore(); - RetiredWorkerSemaphore->Create(0, ThreadCounter::MaxPossibleCount); + RetiredWorkerSemaphore = new CLRLifoSemaphore(); + 
RetiredWorkerSemaphore->Create(0, ThreadCounter::MaxPossibleCount); + } #ifndef TARGET_UNIX //ThreadPool_CPUGroup @@ -405,8 +412,6 @@ BOOL ThreadpoolMgr::Initialize() } EX_CATCH { - pADTPCount->CleanupResources(); - if (RetiredCPWakeupEvent) { delete RetiredCPWakeupEvent; @@ -414,8 +419,11 @@ BOOL ThreadpoolMgr::Initialize() } // Note: It is fine to call Destroy on uninitialized critical sections + if (!UsePortableThreadPool()) + { + WaitThreadsCriticalSection.Destroy(); + } WorkerCriticalSection.Destroy(); - WaitThreadsCriticalSection.Destroy(); TimerQueueCriticalSection.Destroy(); bExceptionCaught = TRUE; @@ -427,25 +435,24 @@ BOOL ThreadpoolMgr::Initialize() goto end; } - // initialize Worker and CP thread settings - DWORD forceMin; - forceMin = GetForceMinWorkerThreadsValue(); - MinLimitTotalWorkerThreads = forceMin > 0 ? (LONG)forceMin : (LONG)NumberOfProcessors; - - DWORD forceMax; - forceMax = GetForceMaxWorkerThreadsValue(); - MaxLimitTotalWorkerThreads = forceMax > 0 ? (LONG)forceMax : (LONG)GetDefaultMaxLimitWorkerThreads(MinLimitTotalWorkerThreads); + if (!UsePortableThreadPool()) + { + // initialize Worker thread settings + DWORD forceMin; + forceMin = GetForceMinWorkerThreadsValue(); + MinLimitTotalWorkerThreads = forceMin > 0 ? (LONG)forceMin : (LONG)NumberOfProcessors; - ThreadCounter::Counts counts; - counts.NumActive = 0; - counts.NumWorking = 0; - counts.NumRetired = 0; - counts.MaxWorking = MinLimitTotalWorkerThreads; - WorkerCounter.counts.AsLongLong = counts.AsLongLong; + DWORD forceMax; + forceMax = GetForceMaxWorkerThreadsValue(); + MaxLimitTotalWorkerThreads = forceMax > 0 ? (LONG)forceMax : (LONG)GetDefaultMaxLimitWorkerThreads(MinLimitTotalWorkerThreads); -#ifdef _DEBUG - TickCountAdjustment = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_ThreadpoolTickCountAdjustment); -#endif + ThreadCounter::Counts counts; + counts.NumActive = 0; + counts.NumWorking = 0; + counts.NumRetired = 0; + counts.MaxWorking = MinLimitTotalWorkerThreads; + WorkerCounter.counts.AsLongLong = counts.AsLongLong; + } // initialize CP thread settings MinLimitTotalCPThreads = NumberOfProcessors; @@ -453,6 +460,7 @@ BOOL ThreadpoolMgr::Initialize() // Use volatile store to guarantee make the value visible to the DAC (the store can be optimized out otherwise) VolatileStoreWithoutBarrier(&MaxFreeCPThreads, NumberOfProcessors*MaxFreeCPThreadsPerCPU); + ThreadCounter::Counts counts; counts.NumActive = 0; counts.NumWorking = 0; counts.NumRetired = 0; @@ -468,7 +476,10 @@ BOOL ThreadpoolMgr::Initialize() } #endif // !TARGET_UNIX - HillClimbingInstance.Initialize(); + if (!UsePortableThreadPool()) + { + HillClimbingInstance.Initialize(); + } bRet = TRUE; end: @@ -487,17 +498,20 @@ void ThreadpoolMgr::InitPlatformVariables() #ifndef TARGET_UNIX HINSTANCE hNtDll; - HINSTANCE hCoreSynch; + HINSTANCE hCoreSynch = nullptr; { CONTRACT_VIOLATION(GCViolation|FaultViolation); hNtDll = CLRLoadLibrary(W("ntdll.dll")); _ASSERTE(hNtDll); + if (!UsePortableThreadPool()) + { #ifdef FEATURE_CORESYSTEM - hCoreSynch = CLRLoadLibrary(W("api-ms-win-core-synch-l1-1-0.dll")); + hCoreSynch = CLRLoadLibrary(W("api-ms-win-core-synch-l1-1-0.dll")); #else - hCoreSynch = CLRLoadLibrary(W("kernel32.dll")); + hCoreSynch = CLRLoadLibrary(W("kernel32.dll")); #endif - _ASSERTE(hCoreSynch); + _ASSERTE(hCoreSynch); + } } // These APIs must be accessed via dynamic binding since they may be removed in future @@ -505,13 +519,40 @@ void ThreadpoolMgr::InitPlatformVariables() g_pufnNtQueryInformationThread = 
(NtQueryInformationThreadProc)GetProcAddress(hNtDll,"NtQueryInformationThread"); g_pufnNtQuerySystemInformation = (NtQuerySystemInformationProc)GetProcAddress(hNtDll,"NtQuerySystemInformation"); - - // These APIs are only supported on newer Windows versions - g_pufnCreateWaitableTimerEx = (CreateWaitableTimerExProc)GetProcAddress(hCoreSynch, "CreateWaitableTimerExW"); - g_pufnSetWaitableTimerEx = (SetWaitableTimerExProc)GetProcAddress(hCoreSynch, "SetWaitableTimerEx"); + if (!UsePortableThreadPool()) + { + // These APIs are only supported on newer Windows versions + g_pufnCreateWaitableTimerEx = (CreateWaitableTimerExProc)GetProcAddress(hCoreSynch, "CreateWaitableTimerExW"); + g_pufnSetWaitableTimerEx = (SetWaitableTimerExProc)GetProcAddress(hCoreSynch, "SetWaitableTimerEx"); + } #endif } +bool ThreadpoolMgr::CanSetMinIOCompletionThreads(DWORD ioCompletionThreads) +{ + WRAPPER_NO_CONTRACT; + _ASSERTE(UsePortableThreadPool()); + + EnsureInitialized(); + + // The lock used by SetMinThreads() and SetMaxThreads() is not taken here, the caller is expected to synchronize between + // them. The conditions here should be the same as in the corresponding Set function. + return ioCompletionThreads <= (DWORD)MaxLimitTotalCPThreads; +} + +bool ThreadpoolMgr::CanSetMaxIOCompletionThreads(DWORD ioCompletionThreads) +{ + WRAPPER_NO_CONTRACT; + _ASSERTE(UsePortableThreadPool()); + _ASSERTE(ioCompletionThreads != 0); + + EnsureInitialized(); + + // The lock used by SetMinThreads() and SetMaxThreads() is not taken here, the caller is expected to synchronize between + // them. The conditions here should be the same as in the corresponding Set function. + return ioCompletionThreads >= (DWORD)MinLimitTotalCPThreads; +} + BOOL ThreadpoolMgr::SetMaxThreadsHelper(DWORD MaxWorkerThreads, DWORD MaxIOCompletionThreads) { @@ -528,12 +569,18 @@ BOOL ThreadpoolMgr::SetMaxThreadsHelper(DWORD MaxWorkerThreads, // doesn't need to be WorkerCS, but using it to avoid race condition between setting min and max, and didn't want to create a new CS. CrstHolder csh(&WorkerCriticalSection); - if (MaxWorkerThreads >= (DWORD)MinLimitTotalWorkerThreads && + bool usePortableThreadPool = UsePortableThreadPool(); + if (( + usePortableThreadPool || + ( + MaxWorkerThreads >= (DWORD)MinLimitTotalWorkerThreads && + MaxWorkerThreads != 0 + ) + ) && MaxIOCompletionThreads >= (DWORD)MinLimitTotalCPThreads && - MaxWorkerThreads != 0 && MaxIOCompletionThreads != 0) { - if (GetForceMaxWorkerThreadsValue() == 0) + if (!usePortableThreadPool && GetForceMaxWorkerThreadsValue() == 0) { MaxLimitTotalWorkerThreads = min(MaxWorkerThreads, (DWORD)ThreadCounter::MaxPossibleCount); @@ -581,7 +628,6 @@ BOOL ThreadpoolMgr::GetMaxThreads(DWORD* MaxWorkerThreads, { LIMITED_METHOD_CONTRACT; - if (!MaxWorkerThreads || !MaxIOCompletionThreads) { SetLastHRError(ERROR_INVALID_DATA); @@ -590,7 +636,7 @@ BOOL ThreadpoolMgr::GetMaxThreads(DWORD* MaxWorkerThreads, EnsureInitialized(); - *MaxWorkerThreads = (DWORD)MaxLimitTotalWorkerThreads; + *MaxWorkerThreads = UsePortableThreadPool() ? 
1 : (DWORD)MaxLimitTotalWorkerThreads; *MaxIOCompletionThreads = MaxLimitTotalCPThreads; return TRUE; } @@ -613,11 +659,18 @@ BOOL ThreadpoolMgr::SetMinThreads(DWORD MinWorkerThreads, BOOL init_result = FALSE; - if (MinWorkerThreads >= 0 && MinIOCompletionThreads >= 0 && - MinWorkerThreads <= (DWORD) MaxLimitTotalWorkerThreads && + bool usePortableThreadPool = UsePortableThreadPool(); + if (( + usePortableThreadPool || + ( + MinWorkerThreads >= 0 && + MinWorkerThreads <= (DWORD) MaxLimitTotalWorkerThreads + ) + ) && + MinIOCompletionThreads >= 0 && MinIOCompletionThreads <= (DWORD) MaxLimitTotalCPThreads) { - if (GetForceMinWorkerThreadsValue() == 0) + if (!usePortableThreadPool && GetForceMinWorkerThreadsValue() == 0) { MinLimitTotalWorkerThreads = max(1, min(MinWorkerThreads, (DWORD)ThreadCounter::MaxPossibleCount)); @@ -660,7 +713,6 @@ BOOL ThreadpoolMgr::GetMinThreads(DWORD* MinWorkerThreads, { LIMITED_METHOD_CONTRACT; - if (!MinWorkerThreads || !MinIOCompletionThreads) { SetLastHRError(ERROR_INVALID_DATA); @@ -669,7 +721,7 @@ BOOL ThreadpoolMgr::GetMinThreads(DWORD* MinWorkerThreads, EnsureInitialized(); - *MinWorkerThreads = (DWORD)MinLimitTotalWorkerThreads; + *MinWorkerThreads = UsePortableThreadPool() ? 1 : (DWORD)MinLimitTotalWorkerThreads; *MinIOCompletionThreads = MinLimitTotalCPThreads; return TRUE; } @@ -689,7 +741,7 @@ BOOL ThreadpoolMgr::GetAvailableThreads(DWORD* AvailableWorkerThreads, ThreadCounter::Counts counts = WorkerCounter.GetCleanCounts(); - if (MaxLimitTotalWorkerThreads < counts.NumActive) + if (UsePortableThreadPool() || MaxLimitTotalWorkerThreads < counts.NumActive) *AvailableWorkerThreads = 0; else *AvailableWorkerThreads = MaxLimitTotalWorkerThreads - counts.NumWorking; @@ -711,7 +763,8 @@ INT32 ThreadpoolMgr::GetThreadCount() return 0; } - return WorkerCounter.DangerousGetDirtyCounts().NumActive + CPThreadCounter.DangerousGetDirtyCounts().NumActive; + INT32 workerThreadCount = UsePortableThreadPool() ? 0 : WorkerCounter.DangerousGetDirtyCounts().NumActive; + return workerThreadCount + CPThreadCounter.DangerousGetDirtyCounts().NumActive; } void QueueUserWorkItemHelp(LPTHREAD_START_ROUTINE Function, PVOID Context) @@ -728,6 +781,8 @@ void QueueUserWorkItemHelp(LPTHREAD_START_ROUTINE Function, PVOID Context) } CONTRACTL_END;*/ + _ASSERTE(!ThreadpoolMgr::UsePortableThreadPool()); + Function(Context); Thread *pThread = GetThread(); @@ -842,6 +897,8 @@ BOOL ThreadpoolMgr::QueueUserWorkItem(LPTHREAD_START_ROUTINE Function, } CONTRACTL_END; + _ASSERTE_ALL_BUILDS(__FILE__, !UsePortableThreadPool()); + EnsureInitialized(); @@ -878,6 +935,7 @@ BOOL ThreadpoolMgr::QueueUserWorkItem(LPTHREAD_START_ROUTINE Function, bool ThreadpoolMgr::ShouldWorkerKeepRunning() { WRAPPER_NO_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); // // Maybe this thread should retire now. Let's see. 
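
Since the native pool now reports only a placeholder of 1 for worker threads in portable mode, the managed ThreadPool surface has to stitch the two halves together: worker limits come from the portable pool, IO-completion limits from the native pool, with the new CanSetMinIOCompletionThreads/CanSetMaxIOCompletionThreads checks letting the managed side validate the IO value before committing the worker value. A rough sketch of that stitching; the PortableThreadPool members and the *Native methods stand in for the actual internal surface:

    // Member sketches only; not the actual corelib implementation.
    public static void GetMinThreads(out int workerThreads, out int completionPortThreads)
    {
        GetMinThreadsNative(out int nativeWorkerThreads, out completionPortThreads);
        workerThreads = UsePortableThreadPool
            ? PortableThreadPool.ThreadPoolInstance.GetMinThreads()
            : nativeWorkerThreads;   // the native placeholder of 1 is ignored in portable mode
    }

    public static bool SetMinThreads(int workerThreads, int completionPortThreads)
    {
        if (!UsePortableThreadPool)
            return SetMinThreadsNative(workerThreads, completionPortThreads);

        return CanSetMinIOCompletionThreads(completionPortThreads)                 // validate the IO value first
            && PortableThreadPool.ThreadPoolInstance.SetMinThreads(workerThreads)
            && SetMinThreadsNative(1, completionPortThreads);                      // native side keeps only the IO limit
    }
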
@@ -932,6 +990,7 @@ void ThreadpoolMgr::AdjustMaxWorkersActive() } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); _ASSERTE(ThreadAdjustmentLock.IsHeld()); LARGE_INTEGER startTime = CurrentSampleStartTime; @@ -1016,6 +1075,8 @@ void ThreadpoolMgr::MaybeAddWorkingWorker() } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); + // counts volatile read paired with CompareExchangeCounts loop set ThreadCounter::Counts counts = WorkerCounter.DangerousGetDirtyCounts(); ThreadCounter::Counts newCounts; @@ -1163,6 +1224,76 @@ void ThreadpoolMgr::WaitIOCompletionCallback( DWORD ret = AsyncCallbackCompletion((PVOID)lpOverlapped); } +#ifdef TARGET_WINDOWS // the IO completion thread pool is currently only available on Windows + +void WINAPI ThreadpoolMgr::ManagedWaitIOCompletionCallback( + DWORD dwErrorCode, + DWORD dwNumberOfBytesTransfered, + LPOVERLAPPED lpOverlapped) +{ + Thread *pThread = GetThread(); + if (pThread == NULL) + { + ClrFlsSetThreadType(ThreadType_Threadpool_Worker); + pThread = SetupThreadNoThrow(); + if (pThread == NULL) + { + return; + } + } + + CONTRACTL + { + THROWS; + GC_TRIGGERS; + MODE_PREEMPTIVE; + } + CONTRACTL_END; + + if (dwErrorCode != ERROR_SUCCESS) + { + return; + } + + _ASSERTE(lpOverlapped != NULL); + + { + GCX_COOP(); + ManagedThreadBase::ThreadPool(ManagedWaitIOCompletionCallback_Worker, lpOverlapped); + } + + Thread::IncrementIOThreadPoolCompletionCount(pThread); +} + +void ThreadpoolMgr::ManagedWaitIOCompletionCallback_Worker(LPVOID state) +{ + CONTRACTL + { + THROWS; + GC_TRIGGERS; + MODE_COOPERATIVE; + } + CONTRACTL_END; + + _ASSERTE(state != NULL); + + OBJECTHANDLE completeWaitWorkItemObjectHandle = (OBJECTHANDLE)state; + OBJECTREF completeWaitWorkItemObject = ObjectFromHandle(completeWaitWorkItemObjectHandle); + _ASSERTE(completeWaitWorkItemObject != NULL); + + GCPROTECT_BEGIN(completeWaitWorkItemObject); + + DestroyHandle(completeWaitWorkItemObjectHandle); + completeWaitWorkItemObjectHandle = NULL; + + ARG_SLOT args[] = { ObjToArgSlot(completeWaitWorkItemObject) }; + MethodDescCallSite(METHOD__COMPLETE_WAIT_THREAD_POOL_WORK_ITEM__COMPLETE_WAIT, &completeWaitWorkItemObject).Call(args); + + GCPROTECT_END(); +} + +#endif // TARGET_WINDOWS + #ifndef TARGET_UNIX // We need to make sure that the next jobs picked up by a completion port thread // is inserted into the queue after we start cleanup. The cleanup starts when a completion @@ -1376,6 +1507,13 @@ void ThreadpoolMgr::EnsureGateThreadRunning() { LIMITED_METHOD_CONTRACT; + if (UsePortableThreadPool()) + { + GCX_COOP(); + MethodDescCallSite(METHOD__THREAD_POOL__ENSURE_GATE_THREAD_RUNNING).Call(NULL); + return; + } + while (true) { switch (GateThreadStatus) @@ -1416,15 +1554,25 @@ void ThreadpoolMgr::EnsureGateThreadRunning() _ASSERTE(!"Invalid value of ThreadpoolMgr::GateThreadStatus"); } } - - return; } +bool ThreadpoolMgr::NeedGateThreadForIOCompletions() +{ + LIMITED_METHOD_CONTRACT; + + if (!InitCompletionPortThreadpool) + { + return false; + } + + ThreadCounter::Counts counts = CPThreadCounter.GetCleanCounts(); + return counts.NumActive <= counts.NumWorking; +} bool ThreadpoolMgr::ShouldGateThreadKeepRunning() { LIMITED_METHOD_CONTRACT; - + _ASSERTE(!UsePortableThreadPool()); _ASSERTE(GateThreadStatus == GATE_THREAD_STATUS_WAITING_FOR_REQUEST || GateThreadStatus == GATE_THREAD_STATUS_REQUESTED); @@ -1443,17 +1591,13 @@ bool ThreadpoolMgr::ShouldGateThreadKeepRunning() // Are there any free threads in the I/O completion pool? If there are, we don't need a gate thread. 
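
When the portable pool is active, EnsureGateThreadRunning above defers to the managed ThreadPool.EnsureGateThreadRunning, and the managed gate thread periodically calls back through the PerformRuntimeSpecificGateActivitiesNative QCALL; its return value (NeedGateThreadForIOCompletions) says whether the native IO-completion pool still needs the gate thread. A simplified sketch of such a loop, with the delay taken from GATE_THREAD_DELAY and the helper names as assumptions:

    using System.Threading;

    // Sketch of a managed gate-thread loop driving the native IO-completion gate work.
    internal static class GateThread
    {
        private const int GateThreadDelayMs = 500;   // mirrors GATE_THREAD_DELAY

        private static void GateThreadLoop()
        {
            while (true)
            {
                Thread.Sleep(GateThreadDelayMs);

                int cpuUtilization = GetCpuUtilization();   // hypothetical sampling helper
                bool ioCompletionNeedsGate =
                    PerformRuntimeSpecificGateActivitiesNative(cpuUtilization);

                if (!ioCompletionNeedsGate && !WorkerPoolNeedsGateThread())   // hypothetical worker-side check
                    break;   // nothing left to monitor; let the gate thread exit
            }
        }

        private static int GetCpuUtilization() => 0;                                      // elided
        private static bool WorkerPoolNeedsGateThread() => false;                         // elided
        private static bool PerformRuntimeSpecificGateActivitiesNative(int cpu) => false; // QCall stub for this sketch
    }
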
// This implies that whenever we decrement NumFreeCPThreads to 0, we need to call EnsureGateThreadRunning(). // - ThreadCounter::Counts counts = CPThreadCounter.GetCleanCounts(); - bool needGateThreadForCompletionPort = - InitCompletionPortThreadpool && - (counts.NumActive - counts.NumWorking) <= 0; + bool needGateThreadForCompletionPort = NeedGateThreadForIOCompletions(); // // Are there any work requests in any worker queue? If so, we need a gate thread. // This imples that whenever a work queue goes from empty to non-empty, we need to call EnsureGateThreadRunning(). // - bool needGateThreadForWorkerThreads = - PerAppDomainTPCountList::AreRequestsPendingInAnyAppDomains(); + bool needGateThreadForWorkerThreads = PerAppDomainTPCountList::AreRequestsPendingInAnyAppDomains(); // // If worker tracking is enabled, we need to fire periodic ETW events with active worker counts. This is @@ -1497,6 +1641,8 @@ void ThreadpoolMgr::EnqueueWorkRequest(WorkRequest* workRequest) } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); + AppendWorkRequest(workRequest); } @@ -1512,6 +1658,8 @@ WorkRequest* ThreadpoolMgr::DequeueWorkRequest() POSTCONDITION(CheckPointer(entry, NULL_OK)); } CONTRACT_END; + _ASSERTE(!UsePortableThreadPool()); + entry = RemoveWorkRequest(); RETURN entry; @@ -1527,6 +1675,8 @@ void ThreadpoolMgr::ExecuteWorkRequest(bool* foundWork, bool* wasNotRecalled) } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); + IPerAppDomainTPCount* pAdCount; LONG index = PerAppDomainTPCountList::GetAppDomainIndexForThreadpoolDispatch(); @@ -1572,6 +1722,8 @@ BOOL ThreadpoolMgr::SetAppDomainRequestsActive(BOOL UnmanagedTP) } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); + BOOL fShouldSignalEvent = FALSE; IPerAppDomainTPCount* pAdCount; @@ -1623,6 +1775,8 @@ void ThreadpoolMgr::ClearAppDomainRequestsActive(BOOL UnmanagedTP, LONG id) } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); + IPerAppDomainTPCount* pAdCount; if(UnmanagedTP) @@ -1823,6 +1977,8 @@ BOOL ThreadpoolMgr::CreateWorkerThread() } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); + Thread *pThread; BOOL fIsCLRThread; if ((pThread = CreateUnimpersonatedThread(WorkerThreadStart, NULL, &fIsCLRThread)) != NULL) @@ -1857,6 +2013,8 @@ DWORD WINAPI ThreadpoolMgr::WorkerThreadStart(LPVOID lpArgs) } CONTRACTL_END; + _ASSERTE_ALL_BUILDS(__FILE__, !UsePortableThreadPool()); + Thread *pThread = NULL; DWORD dwSwitchCount = 0; BOOL fThreadInit = FALSE; @@ -2167,34 +2325,6 @@ Exit: return ERROR_SUCCESS; } - -BOOL ThreadpoolMgr::SuspendProcessing() -{ - CONTRACTL - { - NOTHROW; - GC_NOTRIGGER; - MODE_PREEMPTIVE; - } - CONTRACTL_END; - - BOOL shouldRetire = TRUE; - DWORD sleepInterval = SUSPEND_TIME; - int oldCpuUtilization = cpuUtilization; - for (int i = 0; i < shouldRetire; i++) - { - __SwitchToThread(sleepInterval, CALLER_LIMITS_SPINNING); - if ((cpuUtilization <= (oldCpuUtilization - 4))) - { // if cpu util. dips by 4% or more, then put it back in circulation - shouldRetire = FALSE; - break; - } - } - - return shouldRetire; -} - - // this should only be called by unmanaged thread (i.e. 
there should be no mgd // caller on the stack) since we are swallowing terminal exceptions DWORD ThreadpoolMgr::SafeWait(CLREvent * ev, DWORD sleepTime, BOOL alertable) @@ -2239,6 +2369,9 @@ BOOL ThreadpoolMgr::RegisterWaitForSingleObject(PHANDLE phNewWaitObject, if (GetThread()) { GC_TRIGGERS;} else {DISABLED(GC_NOTRIGGER);} } CONTRACTL_END; + + _ASSERTE(!UsePortableThreadPool()); + EnsureInitialized(); ThreadCB* threadCB; @@ -2305,6 +2438,9 @@ ThreadpoolMgr::ThreadCB* ThreadpoolMgr::FindWaitThread() GC_TRIGGERS; } CONTRACTL_END; + + _ASSERTE(!UsePortableThreadPool()); + do { for (LIST_ENTRY* Node = (LIST_ENTRY*) WaitThreadsHead.Flink ; @@ -2345,6 +2481,9 @@ BOOL ThreadpoolMgr::CreateWaitThread() INJECT_FAULT(COMPlusThrowOM()); } CONTRACTL_END; + + _ASSERTE(!UsePortableThreadPool()); + DWORD threadId; if (g_fEEShutDown & ShutDown_Finalize2){ @@ -2423,6 +2562,7 @@ BOOL ThreadpoolMgr::CreateWaitThread() void ThreadpoolMgr::InsertNewWaitForSelf(WaitInfo* pArgs) { WRAPPER_NO_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); WaitInfo* waitInfo = pArgs; @@ -2468,6 +2608,7 @@ void ThreadpoolMgr::InsertNewWaitForSelf(WaitInfo* pArgs) int ThreadpoolMgr::FindWaitIndex(const ThreadCB* threadCB, const HANDLE waitHandle) { LIMITED_METHOD_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); for (int i=0;iNumActiveWaits; i++) if (threadCB->waitHandle[i] == waitHandle) @@ -2490,6 +2631,7 @@ int ThreadpoolMgr::FindWaitIndex(const ThreadCB* threadCB, const HANDLE waitHand DWORD ThreadpoolMgr::MinimumRemainingWait(LIST_ENTRY* waitInfo, unsigned int numWaits) { LIMITED_METHOD_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); unsigned int min = (unsigned int) -1; DWORD currentTime = GetTickCount(); @@ -2547,6 +2689,8 @@ DWORD WINAPI ThreadpoolMgr::WaitThreadStart(LPVOID lpArgs) ClrFlsSetThreadType (ThreadType_Wait); + _ASSERTE_ALL_BUILDS(__FILE__, !UsePortableThreadPool()); + ThreadCB* threadCB = (ThreadCB*) lpArgs; Thread* pThread = SetupThreadNoThrow(); @@ -2876,6 +3020,7 @@ FOUND: void ThreadpoolMgr::DeactivateNthWait(WaitInfo* waitInfo, DWORD index) { LIMITED_METHOD_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); ThreadCB* threadCB = waitInfo->threadCB; @@ -2967,6 +3112,7 @@ BOOL ThreadpoolMgr::UnregisterWaitEx(HANDLE hWaitObject,HANDLE Event) } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); _ASSERTE(IsInitialized()); // cannot call unregister before first registering const BOOL Blocking = (Event == (HANDLE) -1); @@ -3084,6 +3230,7 @@ void ThreadpoolMgr::DeregisterWait(WaitInfo* pArgs) void ThreadpoolMgr::WaitHandleCleanup(HANDLE hWaitObject) { LIMITED_METHOD_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); _ASSERTE(IsInitialized()); // cannot call cleanup before first registering WaitInfo* waitInfo = (WaitInfo*) hWaitObject; @@ -3099,6 +3246,7 @@ void ThreadpoolMgr::WaitHandleCleanup(HANDLE hWaitObject) BOOL ThreadpoolMgr::CreateGateThread() { LIMITED_METHOD_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); HANDLE threadHandle = Thread::CreateUtilityThread(Thread::StackSize_Small, GateThreadStart, NULL, W(".NET ThreadPool Gate")); @@ -4043,7 +4191,6 @@ public: } }; - DWORD WINAPI ThreadpoolMgr::GateThreadStart(LPVOID lpArgs) { ClrFlsSetThreadType (ThreadType_Gate); @@ -4056,6 +4203,7 @@ DWORD WINAPI ThreadpoolMgr::GateThreadStart(LPVOID lpArgs) } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); _ASSERTE(GateThreadStatus == GATE_THREAD_STATUS_REQUESTED); GateThreadTimer timer; @@ -4176,138 +4324,154 @@ DWORD WINAPI ThreadpoolMgr::GateThreadStart(LPVOID lpArgs) IgnoreNextSample = TRUE; } + 
PerformGateActivities(cpuUtilization); + } + while (ShouldGateThreadKeepRunning()); + + return 0; +} + +void ThreadpoolMgr::PerformGateActivities(int cpuUtilization) +{ + CONTRACTL + { + NOTHROW; + GC_TRIGGERS; + MODE_PREEMPTIVE; + } + CONTRACTL_END; + + ThreadpoolMgr::cpuUtilization = cpuUtilization; + #ifndef TARGET_UNIX - // don't mess with CP thread pool settings if not initialized yet - if (InitCompletionPortThreadpool) - { - ThreadCounter::Counts oldCounts, newCounts; - oldCounts = CPThreadCounter.GetCleanCounts(); + // don't mess with CP thread pool settings if not initialized yet + if (InitCompletionPortThreadpool) + { + ThreadCounter::Counts oldCounts, newCounts; + oldCounts = CPThreadCounter.GetCleanCounts(); - if (oldCounts.NumActive == oldCounts.NumWorking && - oldCounts.NumRetired == 0 && - oldCounts.NumActive < MaxLimitTotalCPThreads && - !g_fCompletionPortDrainNeeded && - NumCPInfrastructureThreads == 0 && // infrastructure threads count as "to be free as needed" - !GCHeapUtilities::IsGCInProgress(TRUE)) + if (oldCounts.NumActive == oldCounts.NumWorking && + oldCounts.NumRetired == 0 && + oldCounts.NumActive < MaxLimitTotalCPThreads && + !g_fCompletionPortDrainNeeded && + NumCPInfrastructureThreads == 0 && // infrastructure threads count as "to be free as needed" + !GCHeapUtilities::IsGCInProgress(TRUE)) + { + BOOL status; + DWORD numBytes; + size_t key; + LPOVERLAPPED pOverlapped; + DWORD errorCode; + + errorCode = S_OK; + + status = GetQueuedCompletionStatus( + GlobalCompletionPort, + &numBytes, + (PULONG_PTR)&key, + &pOverlapped, + 0 // immediate return + ); + + if (status == 0) { - BOOL status; - DWORD numBytes; - size_t key; - LPOVERLAPPED pOverlapped; - DWORD errorCode; - - errorCode = S_OK; - - status = GetQueuedCompletionStatus( - GlobalCompletionPort, - &numBytes, - (PULONG_PTR)&key, - &pOverlapped, - 0 // immediate return - ); + errorCode = GetLastError(); + } - if (status == 0) - { - errorCode = GetLastError(); - } + if(pOverlapped == &overlappedForContinueCleanup) + { + // if we picked up a "Continue Drainage" notification DO NOT create a new CP thread + } + else + if (errorCode != WAIT_TIMEOUT) + { + QueuedStatus *CompletionStatus = NULL; - if(pOverlapped == &overlappedForContinueCleanup) - { - // if we picked up a "Continue Drainage" notification DO NOT create a new CP thread - } - else - if (errorCode != WAIT_TIMEOUT) + // loop, retrying until memory is allocated. Under such conditions the gate + // thread is not useful anyway, so I feel comfortable with this behavior + do { - QueuedStatus *CompletionStatus = NULL; - - // loop, retrying until memory is allocated. 
Under such conditions the gate - // thread is not useful anyway, so I feel comfortable with this behavior - do + // make sure to free mem later in thread + CompletionStatus = new (nothrow) QueuedStatus; + if (CompletionStatus == NULL) { - // make sure to free mem later in thread - CompletionStatus = new (nothrow) QueuedStatus; - if (CompletionStatus == NULL) - { - __SwitchToThread(GATE_THREAD_DELAY, CALLER_LIMITS_SPINNING); - } + __SwitchToThread(GATE_THREAD_DELAY, CALLER_LIMITS_SPINNING); } - while (CompletionStatus == NULL); + } + while (CompletionStatus == NULL); - CompletionStatus->numBytes = numBytes; - CompletionStatus->key = (PULONG_PTR)key; - CompletionStatus->pOverlapped = pOverlapped; - CompletionStatus->errorCode = errorCode; + CompletionStatus->numBytes = numBytes; + CompletionStatus->key = (PULONG_PTR)key; + CompletionStatus->pOverlapped = pOverlapped; + CompletionStatus->errorCode = errorCode; - // IOCP threads are created as "active" and "working" - while (true) - { - // counts volatile read paired with CompareExchangeCounts loop set - oldCounts = CPThreadCounter.DangerousGetDirtyCounts(); - newCounts = oldCounts; - newCounts.NumActive++; - newCounts.NumWorking++; - if (oldCounts == CPThreadCounter.CompareExchangeCounts(newCounts, oldCounts)) - break; - } + // IOCP threads are created as "active" and "working" + while (true) + { + // counts volatile read paired with CompareExchangeCounts loop set + oldCounts = CPThreadCounter.DangerousGetDirtyCounts(); + newCounts = oldCounts; + newCounts.NumActive++; + newCounts.NumWorking++; + if (oldCounts == CPThreadCounter.CompareExchangeCounts(newCounts, oldCounts)) + break; + } - // loop, retrying until thread is created. - while (!CreateCompletionPortThread((LPVOID)CompletionStatus)) - { - __SwitchToThread(GATE_THREAD_DELAY, CALLER_LIMITS_SPINNING); - } + // loop, retrying until thread is created. 
+ while (!CreateCompletionPortThread((LPVOID)CompletionStatus)) + { + __SwitchToThread(GATE_THREAD_DELAY, CALLER_LIMITS_SPINNING); } } - else if (cpuUtilization < CpuUtilizationLow) + } + else if (cpuUtilization < CpuUtilizationLow) + { + // this could be an indication that threads might be getting blocked or there is no work + if (oldCounts.NumWorking == oldCounts.NumActive && // don't bump the limit if there are already free threads + oldCounts.NumRetired > 0) { - // this could be an indication that threads might be getting blocked or there is no work - if (oldCounts.NumWorking == oldCounts.NumActive && // don't bump the limit if there are already free threads - oldCounts.NumRetired > 0) - { - RetiredCPWakeupEvent->Set(); - } + RetiredCPWakeupEvent->Set(); } } + } #endif // !TARGET_UNIX - if (0 == CLRConfig::GetConfigValue(CLRConfig::INTERNAL_ThreadPool_DisableStarvationDetection)) + if (!UsePortableThreadPool() && + 0 == CLRConfig::GetConfigValue(CLRConfig::INTERNAL_ThreadPool_DisableStarvationDetection)) + { + if (PerAppDomainTPCountList::AreRequestsPendingInAnyAppDomains() && SufficientDelaySinceLastDequeue()) { - if (PerAppDomainTPCountList::AreRequestsPendingInAnyAppDomains() && SufficientDelaySinceLastDequeue()) - { - DangerousNonHostedSpinLockHolder tal(&ThreadAdjustmentLock); + DangerousNonHostedSpinLockHolder tal(&ThreadAdjustmentLock); - ThreadCounter::Counts counts = WorkerCounter.GetCleanCounts(); - while (counts.NumActive < MaxLimitTotalWorkerThreads && //don't add a thread if we're at the max - counts.NumActive >= counts.MaxWorking) //don't add a thread if we're already in the process of adding threads + ThreadCounter::Counts counts = WorkerCounter.GetCleanCounts(); + while (counts.NumActive < MaxLimitTotalWorkerThreads && //don't add a thread if we're at the max + counts.NumActive >= counts.MaxWorking) //don't add a thread if we're already in the process of adding threads + { + bool breakIntoDebugger = (0 != CLRConfig::GetConfigValue(CLRConfig::INTERNAL_ThreadPool_DebugBreakOnWorkerStarvation)); + if (breakIntoDebugger) { - bool breakIntoDebugger = (0 != CLRConfig::GetConfigValue(CLRConfig::INTERNAL_ThreadPool_DebugBreakOnWorkerStarvation)); - if (breakIntoDebugger) - { - OutputDebugStringW(W("The CLR ThreadPool detected work queue starvation!")); - DebugBreak(); - } + OutputDebugStringW(W("The CLR ThreadPool detected work queue starvation!")); + DebugBreak(); + } - ThreadCounter::Counts newCounts = counts; - newCounts.MaxWorking = newCounts.NumActive + 1; + ThreadCounter::Counts newCounts = counts; + newCounts.MaxWorking = newCounts.NumActive + 1; - ThreadCounter::Counts oldCounts = WorkerCounter.CompareExchangeCounts(newCounts, counts); - if (oldCounts == counts) - { - HillClimbingInstance.ForceChange(newCounts.MaxWorking, Starvation); - MaybeAddWorkingWorker(); - break; - } - else - { - counts = oldCounts; - } + ThreadCounter::Counts oldCounts = WorkerCounter.CompareExchangeCounts(newCounts, counts); + if (oldCounts == counts) + { + HillClimbingInstance.ForceChange(newCounts.MaxWorking, Starvation); + MaybeAddWorkingWorker(); + break; + } + else + { + counts = oldCounts; } } } } - while (ShouldGateThreadKeepRunning()); - - return 0; } // called by logic to spawn a new completion port thread. 
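
The WorkerCounter and CPThreadCounter updates above all follow the same lock-free shape: read the packed counts, compute the new value, and retry the compare-exchange until no other thread has raced in. The managed side of the pool relies on the equivalent pattern; here it is reduced to a minimal, self-contained form with an illustrative field layout:

    using System.Threading;

    // Minimal illustration of the CompareExchangeCounts-style retry loop.
    internal static class PackedCounts
    {
        // Illustrative packing: NumWorking in the low 32 bits, NumActive in the high 32 bits.
        private static long s_counts;

        internal static void AddWorkingWorker()
        {
            long counts = Volatile.Read(ref s_counts);
            while (true)
            {
                long newCounts = counts + 1;   // bump NumWorking
                long countsBeforeUpdate = Interlocked.CompareExchange(ref s_counts, newCounts, counts);
                if (countsBeforeUpdate == counts)
                    break;                     // no race; the update stuck
                counts = countsBeforeUpdate;   // raced with another thread; retry against the fresh value
            }
        }
    }
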
@@ -4347,6 +4511,7 @@ BOOL ThreadpoolMgr::SufficientDelaySinceLastSample(unsigned int LastThreadCreati BOOL ThreadpoolMgr::SufficientDelaySinceLastDequeue() { LIMITED_METHOD_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); #define DEQUEUE_DELAY_THRESHOLD (GATE_THREAD_DELAY * 2) @@ -4440,6 +4605,7 @@ BOOL ThreadpoolMgr::CreateTimerQueueTimer(PHANDLE phNewTimer, if (!params.setupSucceeded) { CloseHandle(TimerThreadHandle); + *phNewTimer = NULL; return FALSE; } @@ -4451,8 +4617,6 @@ BOOL ThreadpoolMgr::CreateTimerQueueTimer(PHANDLE phNewTimer, NewHolder timerInfoHolder; TimerInfo * timerInfo = new (nothrow) TimerInfo; - *phNewTimer = (HANDLE) timerInfo; - if (NULL == timerInfo) ThrowOutOfMemory(); @@ -4467,9 +4631,12 @@ BOOL ThreadpoolMgr::CreateTimerQueueTimer(PHANDLE phNewTimer, timerInfo->ExternalCompletionEvent = INVALID_HANDLE; timerInfo->ExternalEventSafeHandle = NULL; + *phNewTimer = (HANDLE)timerInfo; + BOOL status = QueueUserAPC((PAPCFUNC)InsertNewTimer,TimerThread,(size_t)timerInfo); if (FALSE == status) { + *phNewTimer = NULL; return FALSE; } @@ -4654,9 +4821,19 @@ DWORD ThreadpoolMgr::FireTimers() InterlockedIncrement(&timerInfo->refCount); - QueueUserWorkItem(AsyncTimerCallbackCompletion, - timerInfo, - QUEUE_ONLY /* TimerInfo take care of deleting*/); + if (UsePortableThreadPool()) + { + GCX_COOP(); + + ARG_SLOT args[] = { PtrToArgSlot(AsyncTimerCallbackCompletion), PtrToArgSlot(timerInfo) }; + MethodDescCallSite(METHOD__THREAD_POOL__UNSAFE_QUEUE_UNMANAGED_WORK_ITEM).Call(args); + } + else + { + QueueUserWorkItem(AsyncTimerCallbackCompletion, + timerInfo, + QUEUE_ONLY /* TimerInfo take care of deleting*/); + } if (timerInfo->Period != 0 && timerInfo->Period != (ULONG)-1) { diff --git a/src/coreclr/src/vm/win32threadpool.h b/src/coreclr/src/vm/win32threadpool.h index d25341e..d1d2c1b 100644 --- a/src/coreclr/src/vm/win32threadpool.h +++ b/src/coreclr/src/vm/win32threadpool.h @@ -223,11 +223,28 @@ public: INT32 TimerId; } TimerInfoContext; +#ifndef DACCESS_COMPILE + static void StaticInitialize() + { + WRAPPER_NO_CONTRACT; + s_usePortableThreadPool = CLRConfig::GetConfigValue(CLRConfig::INTERNAL_ThreadPool_UsePortableThreadPool) != 0; + } +#endif + + static bool UsePortableThreadPool() + { + LIMITED_METHOD_CONTRACT; + return s_usePortableThreadPool; + } + static BOOL Initialize(); static BOOL SetMaxThreadsHelper(DWORD MaxWorkerThreads, DWORD MaxIOCompletionThreads); + static bool CanSetMinIOCompletionThreads(DWORD ioCompletionThreads); + static bool CanSetMaxIOCompletionThreads(DWORD ioCompletionThreads); + static BOOL SetMaxThreads(DWORD MaxWorkerThreads, DWORD MaxIOCompletionThreads); @@ -280,6 +297,12 @@ public: DWORD numBytesTransferred, LPOVERLAPPED lpOverlapped); +#ifdef TARGET_WINDOWS // the IO completion thread pool is currently only available on Windows + static void WINAPI ManagedWaitIOCompletionCallback(DWORD dwErrorCode, + DWORD dwNumberOfBytesTransfered, + LPOVERLAPPED lpOverlapped); +#endif + static VOID WINAPI CallbackForInitiateDrainageOfCompletionPortQueue( DWORD dwErrorCode, DWORD dwNumberOfBytesTransfered, @@ -340,7 +363,11 @@ public: // We handle registered waits at a higher abstraction level return (Function == ThreadpoolMgr::CallbackForInitiateDrainageOfCompletionPortQueue || Function == ThreadpoolMgr::CallbackForContinueDrainageOfCompletionPortQueue - || Function == ThreadpoolMgr::WaitIOCompletionCallback); + || Function == ThreadpoolMgr::WaitIOCompletionCallback +#ifdef TARGET_WINDOWS // the IO completion thread pool is currently only available on Windows + 
|| Function == ThreadpoolMgr::ManagedWaitIOCompletionCallback +#endif + ); } #endif @@ -787,6 +814,8 @@ public: } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); + if (WorkRequestTail) { _ASSERTE(WorkRequestHead != NULL); @@ -812,6 +841,8 @@ public: } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); + WorkRequest* entry = NULL; if (WorkRequestHead) { @@ -842,6 +873,8 @@ public: static void NotifyWorkItemCompleted() { WRAPPER_NO_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); + Thread::IncrementWorkerThreadPoolCompletionCount(GetThread()); UpdateLastDequeueTime(); } @@ -849,6 +882,7 @@ public: static bool ShouldAdjustMaxWorkersActive() { WRAPPER_NO_CONTRACT; + _ASSERTE(!UsePortableThreadPool()); DWORD priorTime = PriorCompletedWorkRequestsTime; MemoryBarrier(); // read fresh value for NextCompletedWorkRequestsTime below @@ -867,8 +901,6 @@ public: static void AdjustMaxWorkersActive(); static bool ShouldWorkerKeepRunning(); - static BOOL SuspendProcessing(); - static DWORD SafeWait(CLREvent * ev, DWORD sleepTime, BOOL alertable); static DWORD WINAPI WorkerThreadStart(LPVOID lpArgs); @@ -890,6 +922,10 @@ public: unsigned index, // array index BOOL waitTimedOut); +#ifdef TARGET_WINDOWS // the IO completion thread pool is currently only available on Windows + static void ManagedWaitIOCompletionCallback_Worker(LPVOID state); +#endif + static DWORD WINAPI WaitThreadStart(LPVOID lpArgs); static DWORD WINAPI AsyncCallbackCompletion(PVOID pArgs); @@ -953,8 +989,10 @@ private: static BOOL CreateGateThread(); static void EnsureGateThreadRunning(); + static bool NeedGateThreadForIOCompletions(); static bool ShouldGateThreadKeepRunning(); static DWORD WINAPI GateThreadStart(LPVOID lpArgs); + static void PerformGateActivities(int cpuUtilization); static BOOL SufficientDelaySinceLastSample(unsigned int LastThreadCreationTime, unsigned NumThreads, // total number of threads of that type (worker or CP) double throttleRate=0.0 // the delay is increased by this percentage for each extra thread @@ -985,6 +1023,8 @@ private: } CONTRACTL_END; + _ASSERTE(!UsePortableThreadPool()); + DWORD result = QueueUserAPC(reinterpret_cast(DeregisterWait), waitThread, reinterpret_cast(waitInfo)); SetWaitThreadAPCPending(); return result; @@ -995,19 +1035,13 @@ private: inline static void ResetWaitThreadAPCPending() {IsApcPendingOnWaitThread = FALSE;} inline static BOOL IsWaitThreadAPCPending() {return IsApcPendingOnWaitThread;} -#ifdef _DEBUG - inline static DWORD GetTickCount() - { - LIMITED_METHOD_CONTRACT; - return ::GetTickCount() + TickCountAdjustment; - } -#endif - #endif // #ifndef DACCESS_COMPILE // Private variables static LONG Initialization; // indicator of whether the threadpool is initialized. 
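The header above caches the opt-in switch once at startup (StaticInitialize reads CLRConfig into s_usePortableThreadPool), and the native-only paths assert that they are unreachable when the portable pool is selected. The managed side can use the same cache-once shape; a minimal sketch against AppContext, with the switch name and type names chosen here only for illustration (the real flag is flowed from COMPlus_ThreadPool_UsePortableThreadPool):

    using System;
    using System.Diagnostics;

    internal static class ThreadPoolSelectionSketch
    {
        // Illustrative switch name; read once and cached, mirroring ThreadpoolMgr::StaticInitialize.
        private static readonly bool s_usePortableThreadPool =
            AppContext.TryGetSwitch("System.Threading.ThreadPool.UsePortableThreadPool", out bool enabled) && enabled;

        internal static bool UsePortableThreadPool => s_usePortableThreadPool;

        // Managed mirror of the _ASSERTE(!UsePortableThreadPool()) checks: call at the top of any
        // path that only makes sense when the native worker pool is in use.
        internal static void AssertNativeWorkerPoolPath() => Debug.Assert(!s_usePortableThreadPool);
    }
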
+ static bool s_usePortableThreadPool; + SVAL_DECL(LONG,MinLimitTotalWorkerThreads); // same as MinLimitTotalCPThreads SVAL_DECL(LONG,MaxLimitTotalWorkerThreads); // same as MaxLimitTotalCPThreads @@ -1089,13 +1123,8 @@ private: static Volatile NumCPInfrastructureThreads; // number of threads currently busy handling draining cycle SVAL_DECL(LONG,cpuUtilization); - static LONG cpuUtilizationAverage; DECLSPEC_ALIGN(MAX_CACHE_LINE_SIZE) static RecycledListsWrapper RecycledLists; - -#ifdef _DEBUG - static DWORD TickCountAdjustment; // add this value to value returned by GetTickCount -#endif }; diff --git a/src/libraries/System.Private.CoreLib/src/System.Private.CoreLib.Shared.projitems b/src/libraries/System.Private.CoreLib/src/System.Private.CoreLib.Shared.projitems index 8c2111e..7a8192e 100644 --- a/src/libraries/System.Private.CoreLib/src/System.Private.CoreLib.Shared.projitems +++ b/src/libraries/System.Private.CoreLib/src/System.Private.CoreLib.Shared.projitems @@ -954,6 +954,7 @@ + @@ -1934,8 +1935,8 @@ - - + + @@ -1944,12 +1945,13 @@ + - - + + diff --git a/src/libraries/System.Private.CoreLib/src/System/AppContextConfigHelper.cs b/src/libraries/System.Private.CoreLib/src/System/AppContextConfigHelper.cs index 2144b98..9175a3e 100644 --- a/src/libraries/System.Private.CoreLib/src/System/AppContextConfigHelper.cs +++ b/src/libraries/System.Private.CoreLib/src/System/AppContextConfigHelper.cs @@ -7,6 +7,9 @@ namespace System { internal static class AppContextConfigHelper { + internal static bool GetBooleanConfig(string configName, bool defaultValue) => + AppContext.TryGetSwitch(configName, out bool value) ? value : defaultValue; + internal static int GetInt32Config(string configName, int defaultValue, bool allowNegative = true) { try @@ -15,6 +18,9 @@ namespace System int result = defaultValue; switch (config) { + case uint value: + result = (int)value; + break; case string str: if (str.StartsWith('0')) { @@ -57,6 +63,15 @@ namespace System short result = defaultValue; switch (config) { + case uint value: + { + result = (short)value; + if ((uint)result != value) + { + return defaultValue; // overflow + } + break; + } case string str: if (str.StartsWith("0x")) { diff --git a/src/libraries/System.Private.CoreLib/src/System/Collections/Concurrent/ConcurrentQueue.cs b/src/libraries/System.Private.CoreLib/src/System/Collections/Concurrent/ConcurrentQueue.cs index d46f450..e45ed1e 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Collections/Concurrent/ConcurrentQueue.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Collections/Concurrent/ConcurrentQueue.cs @@ -666,9 +666,28 @@ namespace System.Collections.Concurrent /// true if an element was removed and returned from the beginning of the /// successfully; otherwise, false. /// - public bool TryDequeue([MaybeNullWhen(false)] out T result) => - _head.TryDequeue(out result) || // fast-path that operates just on the head segment - TryDequeueSlow(out result); // slow path that needs to fix up segments + public bool TryDequeue([MaybeNullWhen(false)] out T result) + { + // Get the current head + ConcurrentQueueSegment head = _head; + + // Try to take. If we're successful, we're done. + if (head.TryDequeue(out result)) + { + return true; + } + + // Check to see whether this segment is the last. If it is, we can consider + // this to be a moment-in-time empty condition (even though between the TryDequeue + // check and this check, another item could have arrived). 
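The AppContextConfigHelper additions earlier in this hunk (GetBooleanConfig, plus the uint cases in the numeric readers) are what the portable pool uses for every setting that appears later in this diff; values flowed from the host may arrive as uint, as a decimal or 0x-prefixed string, or not at all, in which case the supplied default wins. A usage sketch, with config names taken from this change:

    internal static class ConfigReadSketch
    {
        // Usable only inside System.Private.CoreLib, where AppContextConfigHelper is visible.
        internal static void Read()
        {
            // Missing or malformed values fall back to the supplied defaults.
            bool disableStarvationDetection = AppContextConfigHelper.GetBooleanConfig(
                "System.Threading.ThreadPool.DisableStarvationDetection", false);

            int wavePeriod = AppContextConfigHelper.GetInt32Config(
                "System.Threading.ThreadPool.HillClimbing.WavePeriod", 4, allowNegative: false);
        }
    }

The new uint cases cover values flowed from CLRConfig, which are DWORD-sized on the native side.
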
+ if (head._nextSegment == null) + { + result = default!; + return false; + } + + return TryDequeueSlow(out result); // slow path that needs to fix up segments + } /// Tries to dequeue an item, removing empty segments as needed. private bool TryDequeueSlow([MaybeNullWhen(false)] out T item) diff --git a/src/libraries/System.Private.CoreLib/src/System/Diagnostics/Tracing/FrameworkEventSource.cs b/src/libraries/System.Private.CoreLib/src/System/Diagnostics/Tracing/FrameworkEventSource.cs index bf77b19..1d3b01f 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Diagnostics/Tracing/FrameworkEventSource.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Diagnostics/Tracing/FrameworkEventSource.cs @@ -1,6 +1,7 @@ // Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. +using System.Runtime.CompilerServices; using Internal.Runtime.CompilerServices; namespace System.Diagnostics.Tracing @@ -93,12 +94,13 @@ namespace System.Diagnostics.Tracing WriteEvent(30, workID); } + // The object's current location in memory was being used before. Since objects can be moved, it may be difficult to + // associate Enqueue/Dequeue events with the object's at-the-time location in memory, the ETW listeners would have to + // know specifics about the events and track GC movements to associate events. The hash code is a stable value and + // easier to use for association, though there may be collisions. [NonEvent] - public unsafe void ThreadPoolEnqueueWorkObject(object workID) - { - // convert the Object Id to a long - ThreadPoolEnqueueWork((long)*((void**)Unsafe.AsPointer(ref workID))); - } + [MethodImpl(MethodImplOptions.NoInlining)] + public void ThreadPoolEnqueueWorkObject(object workID) => ThreadPoolEnqueueWork(workID.GetHashCode()); [Event(31, Level = EventLevel.Verbose, Keywords = Keywords.ThreadPool | Keywords.ThreadTransfer)] public void ThreadPoolDequeueWork(long workID) @@ -106,12 +108,13 @@ namespace System.Diagnostics.Tracing WriteEvent(31, workID); } + // The object's current location in memory was being used before. Since objects can be moved, it may be difficult to + // associate Enqueue/Dequeue events with the object's at-the-time location in memory, the ETW listeners would have to + // know specifics about the events and track GC movements to associate events. The hash code is a stable value and + // easier to use for association, though there may be collisions. [NonEvent] - public unsafe void ThreadPoolDequeueWorkObject(object workID) - { - // convert the Object Id to a long - ThreadPoolDequeueWork((long)*((void**)Unsafe.AsPointer(ref workID))); - } + [MethodImpl(MethodImplOptions.NoInlining)] + public void ThreadPoolDequeueWorkObject(object workID) => ThreadPoolDequeueWork(workID.GetHashCode()); // id - represents a correlation ID that allows correlation of two activities, one stamped by // ThreadTransferSend, the other by ThreadTransferReceive @@ -127,17 +130,17 @@ namespace System.Diagnostics.Tracing WriteEvent(150, id, kind, info, multiDequeues, intInfo1, intInfo2); } - // id - is a managed object. it gets translated to the object's address. 
ETW listeners must - // keep track of GC movements in order to correlate the value passed to XyzSend with the - // (possibly changed) value passed to XyzReceive + // id - is a managed object's hash code [NonEvent] - public unsafe void ThreadTransferSendObj(object id, int kind, string info, bool multiDequeues, int intInfo1, int intInfo2) - { - ThreadTransferSend((long)*((void**)Unsafe.AsPointer(ref id)), kind, info, multiDequeues, intInfo1, intInfo2); - } + public void ThreadTransferSendObj(object id, int kind, string info, bool multiDequeues, int intInfo1, int intInfo2) => + ThreadTransferSend(id.GetHashCode(), kind, info, multiDequeues, intInfo1, intInfo2); // id - represents a correlation ID that allows correlation of two activities, one stamped by // ThreadTransferSend, the other by ThreadTransferReceive + // - The object's current location in memory was being used before. Since objects can be moved, it may be difficult to + // associate Enqueue/Dequeue events with the object's at-the-time location in memory, the ETW listeners would have to + // know specifics about the events and track GC movements to associate events. The hash code is a stable value and + // easier to use for association, though there may be collisions. // kind - identifies the transfer: values below 64 are reserved for the runtime. Currently used values: // 1 - managed Timers ("roaming" ID) // 2 - managed async IO operations (FileStream, PipeStream, a.o.) @@ -148,13 +151,14 @@ namespace System.Diagnostics.Tracing { WriteEvent(151, id, kind, info); } - // id - is a managed object. it gets translated to the object's address. ETW listeners must - // keep track of GC movements in order to correlate the value passed to XyzSend with the - // (possibly changed) value passed to XyzReceive + + // id - is a managed object. it gets translated to the object's address. + // - The object's current location in memory was being used before. Since objects can be moved, it may be difficult to + // associate Enqueue/Dequeue events with the object's at-the-time location in memory, the ETW listeners would have to + // know specifics about the events and track GC movements to associate events. The hash code is a stable value and + // easier to use for association, though there may be collisions. [NonEvent] - public unsafe void ThreadTransferReceiveObj(object id, int kind, string? info) - { - ThreadTransferReceive((long)*((void**)Unsafe.AsPointer(ref id)), kind, info); - } + public void ThreadTransferReceiveObj(object id, int kind, string? 
info) => + ThreadTransferReceive(id.GetHashCode(), kind, info); } } diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelLifoSemaphore.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelLifoSemaphore.cs index 6853a5e..9dc3b47 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelLifoSemaphore.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelLifoSemaphore.cs @@ -16,10 +16,11 @@ namespace System.Threading private readonly int _maximumSignalCount; private readonly int _spinCount; + private readonly Action _onWait; private const int SpinSleep0Threshold = 10; - public LowLevelLifoSemaphore(int initialSignalCount, int maximumSignalCount, int spinCount) + public LowLevelLifoSemaphore(int initialSignalCount, int maximumSignalCount, int spinCount, Action onWait) { Debug.Assert(initialSignalCount >= 0); Debug.Assert(initialSignalCount <= maximumSignalCount); @@ -27,9 +28,10 @@ namespace System.Threading Debug.Assert(spinCount >= 0); _separated = default; - _separated._counts._signalCount = (uint)initialSignalCount; + _separated._counts.SignalCount = (uint)initialSignalCount; _maximumSignalCount = maximumSignalCount; _spinCount = spinCount; + _onWait = onWait; Create(maximumSignalCount); } @@ -38,6 +40,8 @@ namespace System.Threading { Debug.Assert(timeoutMs >= -1); + int spinCount = _spinCount; + // Try to acquire the semaphore or // a) register as a spinner if spinCount > 0 and timeoutMs > 0 // b) register as a waiter if there's already too many spinners or spinCount == 0 and timeoutMs > 0 @@ -45,35 +49,33 @@ namespace System.Threading Counts counts = _separated._counts; while (true) { - Debug.Assert(counts._signalCount <= _maximumSignalCount); + Debug.Assert(counts.SignalCount <= _maximumSignalCount); Counts newCounts = counts; - - if (counts._signalCount != 0) + if (counts.SignalCount != 0) { - newCounts._signalCount--; + newCounts.DecrementSignalCount(); } else if (timeoutMs != 0) { - if (_spinCount > 0 && newCounts._spinnerCount < byte.MaxValue) + if (spinCount > 0 && newCounts.SpinnerCount < byte.MaxValue) { - newCounts._spinnerCount++; + newCounts.IncrementSpinnerCount(); } else { // Maximum number of spinners reached, register as a waiter instead - newCounts._waiterCount++; - Debug.Assert(newCounts._waiterCount != 0); // overflow check, this many waiters is currently not supported + newCounts.IncrementWaiterCount(); } } - Counts countsBeforeUpdate = _separated._counts.CompareExchange(newCounts, counts); + Counts countsBeforeUpdate = _separated._counts.InterlockedCompareExchange(newCounts, counts); if (countsBeforeUpdate == counts) { - if (counts._signalCount != 0) + if (counts.SignalCount != 0) { return true; } - if (newCounts._waiterCount != counts._waiterCount) + if (newCounts.WaiterCount != counts.WaiterCount) { return WaitForSignal(timeoutMs); } @@ -87,22 +89,26 @@ namespace System.Threading counts = countsBeforeUpdate; } +#if CORECLR && TARGET_UNIX + // The PAL's wait subsystem is slower, spin more to compensate for the more expensive wait + spinCount *= 2; +#endif int processorCount = Environment.ProcessorCount; int spinIndex = processorCount > 1 ? 
0 : SpinSleep0Threshold; - while (spinIndex < _spinCount) + while (spinIndex < spinCount) { LowLevelSpinWaiter.Wait(spinIndex, SpinSleep0Threshold, processorCount); spinIndex++; // Try to acquire the semaphore and unregister as a spinner counts = _separated._counts; - while (counts._signalCount > 0) + while (counts.SignalCount > 0) { Counts newCounts = counts; - newCounts._signalCount--; - newCounts._spinnerCount--; + newCounts.DecrementSignalCount(); + newCounts.DecrementSpinnerCount(); - Counts countsBeforeUpdate = _separated._counts.CompareExchange(newCounts, counts); + Counts countsBeforeUpdate = _separated._counts.InterlockedCompareExchange(newCounts, counts); if (countsBeforeUpdate == counts) { return true; @@ -117,21 +123,20 @@ namespace System.Threading while (true) { Counts newCounts = counts; - newCounts._spinnerCount--; - if (counts._signalCount != 0) + newCounts.DecrementSpinnerCount(); + if (counts.SignalCount != 0) { - newCounts._signalCount--; + newCounts.DecrementSignalCount(); } else { - newCounts._waiterCount++; - Debug.Assert(newCounts._waiterCount != 0); // overflow check, this many waiters is currently not supported + newCounts.IncrementWaiterCount(); } - Counts countsBeforeUpdate = _separated._counts.CompareExchange(newCounts, counts); + Counts countsBeforeUpdate = _separated._counts.InterlockedCompareExchange(newCounts, counts); if (countsBeforeUpdate == counts) { - return counts._signalCount != 0 || WaitForSignal(timeoutMs); + return counts.SignalCount != 0 || WaitForSignal(timeoutMs); } counts = countsBeforeUpdate; @@ -151,15 +156,14 @@ namespace System.Threading Counts newCounts = counts; // Increase the signal count. The addition doesn't overflow because of the limit on the max signal count in constructor. - newCounts._signalCount += (uint)releaseCount; - Debug.Assert(newCounts._signalCount > counts._signalCount); + newCounts.AddSignalCount((uint)releaseCount); // Determine how many waiters to wake, taking into account how many spinners and waiters there are and how many waiters // have previously been signaled to wake but have not yet woken countOfWaitersToWake = - (int)Math.Min(newCounts._signalCount, (uint)newCounts._waiterCount + newCounts._spinnerCount) - - newCounts._spinnerCount - - newCounts._countOfWaitersSignaledToWake; + (int)Math.Min(newCounts.SignalCount, (uint)counts.WaiterCount + counts.SpinnerCount) - + counts.SpinnerCount - + counts.CountOfWaitersSignaledToWake; if (countOfWaitersToWake > 0) { // Ideally, limiting to a maximum of releaseCount would not be necessary and could be an assert instead, but since @@ -173,17 +177,13 @@ namespace System.Threading // Cap countOfWaitersSignaledToWake to its max value. It's ok to ignore some woken threads in this count, it just // means some more threads will be woken next time. Typically, it won't reach the max anyway. 
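Throughout the rewritten Wait/Release paths above, the semaphore's four counters (signal, waiter, spinner, waiters-signaled-to-wake) live in a single 64-bit value, and every transition is a snapshot, modify, Interlocked.CompareExchange loop so that all counters change together atomically; the Counts struct that provides the named accessors is defined later in this file. A standalone sketch of the pattern with a simplified two-counter layout:

    using System.Threading;

    internal static class PackedCountsSketch
    {
        // Simplified layout: low 32 bits = signal count, next 16 bits = waiter count.
        private const int SignalCountShift = 0;
        private const int WaiterCountShift = 32;

        private static ulong s_data;

        // Consume one signal if available, otherwise register as a waiter.
        // Returns true when a signal was taken.
        internal static bool TakeSignalOrRegisterWaiter()
        {
            ulong counts = Volatile.Read(ref s_data);
            while (true)
            {
                uint signalCount = (uint)(counts >> SignalCountShift);
                ulong newCounts = signalCount != 0
                    ? counts - (1UL << SignalCountShift)   // take a signal
                    : counts + (1UL << WaiterCountShift);  // become a waiter

                ulong beforeUpdate = Interlocked.CompareExchange(ref s_data, newCounts, counts);
                if (beforeUpdate == counts)
                {
                    return signalCount != 0;
                }

                counts = beforeUpdate; // another thread won the race; retry against the fresh value
            }
        }
    }

Release() uses the same kind of snapshot to size the wake-up: countOfWaitersToWake = min(new SignalCount, WaiterCount + SpinnerCount) - SpinnerCount - CountOfWaitersSignaledToWake, and then caps the signaled-to-wake byte at its maximum, as in the lines that follow.
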
- newCounts._countOfWaitersSignaledToWake += (byte)Math.Min(countOfWaitersToWake, byte.MaxValue); - if (newCounts._countOfWaitersSignaledToWake <= counts._countOfWaitersSignaledToWake) - { - newCounts._countOfWaitersSignaledToWake = byte.MaxValue; - } + newCounts.AddUpToMaxCountOfWaitersSignaledToWake((uint)countOfWaitersToWake); } - Counts countsBeforeUpdate = _separated._counts.CompareExchange(newCounts, counts); + Counts countsBeforeUpdate = _separated._counts.InterlockedCompareExchange(newCounts, counts); if (countsBeforeUpdate == counts) { - Debug.Assert(releaseCount <= _maximumSignalCount - counts._signalCount); + Debug.Assert(releaseCount <= _maximumSignalCount - counts.SignalCount); if (countOfWaitersToWake > 0) ReleaseCore(countOfWaitersToWake); return; @@ -197,16 +197,15 @@ namespace System.Threading { Debug.Assert(timeoutMs > 0 || timeoutMs == -1); + _onWait(); + while (true) { if (!WaitCore(timeoutMs)) { // Unregister the waiter. The wait subsystem used above guarantees that a thread that wakes due to a timeout does // not observe a signal to the object being waited upon. - Counts toSubtract = default; - toSubtract._waiterCount++; - Counts newCounts = _separated._counts.Subtract(toSubtract); - Debug.Assert(newCounts._waiterCount != ushort.MaxValue); // Check for underflow + _separated._counts.InterlockedDecrementWaiterCount(); return false; } @@ -214,24 +213,24 @@ namespace System.Threading Counts counts = _separated._counts; while (true) { - Debug.Assert(counts._waiterCount != 0); + Debug.Assert(counts.WaiterCount != 0); Counts newCounts = counts; - if (counts._signalCount != 0) + if (counts.SignalCount != 0) { - --newCounts._signalCount; - --newCounts._waiterCount; + newCounts.DecrementSignalCount(); + newCounts.DecrementWaiterCount(); } // This waiter has woken up and this needs to be reflected in the count of waiters signaled to wake - if (counts._countOfWaitersSignaledToWake != 0) + if (counts.CountOfWaitersSignaledToWake != 0) { - --newCounts._countOfWaitersSignaledToWake; + newCounts.DecrementCountOfWaitersSignaledToWake(); } - Counts countsBeforeUpdate = _separated._counts.CompareExchange(newCounts, counts); + Counts countsBeforeUpdate = _separated._counts.InterlockedCompareExchange(newCounts, counts); if (countsBeforeUpdate == counts) { - if (counts._signalCount != 0) + if (counts.SignalCount != 0) { return true; } @@ -243,44 +242,119 @@ namespace System.Threading } } - [StructLayout(LayoutKind.Explicit)] private struct Counts { - [FieldOffset(0)] - public uint _signalCount; - [FieldOffset(4)] - public ushort _waiterCount; - [FieldOffset(6)] - public byte _spinnerCount; - [FieldOffset(7)] - public byte _countOfWaitersSignaledToWake; - - [FieldOffset(0)] - private long _asLong; - - public Counts CompareExchange(Counts newCounts, Counts oldCounts) + private const byte SignalCountShift = 0; + private const byte WaiterCountShift = 32; + private const byte SpinnerCountShift = 48; + private const byte CountOfWaitersSignaledToWakeShift = 56; + + private ulong _data; + + private Counts(ulong data) => _data = data; + + private uint GetUInt32Value(byte shift) => (uint)(_data >> shift); + private void SetUInt32Value(uint value, byte shift) => + _data = (_data & ~((ulong)uint.MaxValue << shift)) | ((ulong)value << shift); + private ushort GetUInt16Value(byte shift) => (ushort)(_data >> shift); + private void SetUInt16Value(ushort value, byte shift) => + _data = (_data & ~((ulong)ushort.MaxValue << shift)) | ((ulong)value << shift); + private byte GetByteValue(byte shift) => 
(byte)(_data >> shift); + private void SetByteValue(byte value, byte shift) => + _data = (_data & ~((ulong)byte.MaxValue << shift)) | ((ulong)value << shift); + + public uint SignalCount + { + get => GetUInt32Value(SignalCountShift); + set => SetUInt32Value(value, SignalCountShift); + } + + public void AddSignalCount(uint value) + { + Debug.Assert(value <= uint.MaxValue - SignalCount); + _data += (ulong)value << SignalCountShift; + } + + public void IncrementSignalCount() => AddSignalCount(1); + + public void DecrementSignalCount() + { + Debug.Assert(SignalCount != 0); + _data -= (ulong)1 << SignalCountShift; + } + + public ushort WaiterCount + { + get => GetUInt16Value(WaiterCountShift); + set => SetUInt16Value(value, WaiterCountShift); + } + + public void IncrementWaiterCount() { - return new Counts { _asLong = Interlocked.CompareExchange(ref _asLong, newCounts._asLong, oldCounts._asLong) }; + Debug.Assert(WaiterCount < ushort.MaxValue); + _data += (ulong)1 << WaiterCountShift; } - public Counts Subtract(Counts subtractCounts) + public void DecrementWaiterCount() { - return new Counts { _asLong = Interlocked.Add(ref _asLong, -subtractCounts._asLong) }; + Debug.Assert(WaiterCount != 0); + _data -= (ulong)1 << WaiterCountShift; } - public static bool operator ==(Counts lhs, Counts rhs) => lhs._asLong == rhs._asLong; + public void InterlockedDecrementWaiterCount() + { + var countsAfterUpdate = new Counts(Interlocked.Add(ref _data, unchecked((ulong)-1) << WaiterCountShift)); + Debug.Assert(countsAfterUpdate.WaiterCount != ushort.MaxValue); // underflow check + } + + public byte SpinnerCount + { + get => GetByteValue(SpinnerCountShift); + set => SetByteValue(value, SpinnerCountShift); + } + + public void IncrementSpinnerCount() + { + Debug.Assert(SpinnerCount < byte.MaxValue); + _data += (ulong)1 << SpinnerCountShift; + } + + public void DecrementSpinnerCount() + { + Debug.Assert(SpinnerCount != 0); + _data -= (ulong)1 << SpinnerCountShift; + } - public static bool operator !=(Counts lhs, Counts rhs) => lhs._asLong != rhs._asLong; + public byte CountOfWaitersSignaledToWake + { + get => GetByteValue(CountOfWaitersSignaledToWakeShift); + set => SetByteValue(value, CountOfWaitersSignaledToWakeShift); + } - public override bool Equals(object? obj) + public void AddUpToMaxCountOfWaitersSignaledToWake(uint value) { - return obj is Counts counts && this._asLong == counts._asLong; + uint availableCount = (uint)(byte.MaxValue - CountOfWaitersSignaledToWake); + if (value > availableCount) + { + value = availableCount; + } + _data += (ulong)value << CountOfWaitersSignaledToWakeShift; } - public override int GetHashCode() + public void DecrementCountOfWaitersSignaledToWake() { - return (int)(_asLong >> 8); + Debug.Assert(CountOfWaitersSignaledToWake != 0); + _data -= (ulong)1 << CountOfWaitersSignaledToWakeShift; } + + public Counts InterlockedCompareExchange(Counts newCounts, Counts oldCounts) => + new Counts(Interlocked.CompareExchange(ref _data, newCounts._data, oldCounts._data)); + + public static bool operator ==(Counts lhs, Counts rhs) => lhs._data == rhs._data; + public static bool operator !=(Counts lhs, Counts rhs) => lhs._data != rhs._data; + + public override bool Equals(object? 
obj) => obj is Counts counts && _data == counts._data; + public override int GetHashCode() => (int)_data + (int)(_data >> 32); } [StructLayout(LayoutKind.Sequential)] diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelLock.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelLock.cs new file mode 100644 index 0000000..1c31475 --- /dev/null +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelLock.cs @@ -0,0 +1,215 @@ +// Licensed to the .NET Foundation under one or more agreements. +// The .NET Foundation licenses this file to you under the MIT license. + +using System.Diagnostics; + +namespace System.Threading +{ + /// + /// A lightweight non-recursive mutex. Waits on this lock are uninterruptible (from Thread.Interrupt(), which is supported + /// in some runtimes). That is the main reason this lock type would be used over interruptible locks, such as in a + /// low-level-infrastructure component that was historically not susceptible to a pending interrupt, and for compatibility + /// reasons, to ensure that it still would not be susceptible after porting that component to managed code. + /// + internal sealed class LowLevelLock : IDisposable + { + private const int SpinCount = 8; + private const int SpinSleep0Threshold = 4; + + private const int LockedMask = 1; + private const int WaiterCountIncrement = 2; + + // Layout: + // - Bit 0: 1 if the lock is locked, 0 otherwise + // - Remaining bits: Number of threads waiting to acquire a lock + private int _state; + +#if DEBUG + private Thread? _ownerThread; +#endif + + // Indicates whether a thread has been signaled, but has not yet been released from the wait. See SignalWaiter. Reads + // and writes must occur while _monitor is locked. + private bool _isAnyWaitingThreadSignaled; + + private LowLevelSpinWaiter _spinWaiter; + private readonly Func _spinWaitTryAcquireCallback; + private LowLevelMonitor _monitor; + + public LowLevelLock() + { + _spinWaiter = default(LowLevelSpinWaiter); + _spinWaitTryAcquireCallback = SpinWaitTryAcquireCallback; + _monitor.Initialize(); + } + + ~LowLevelLock() => Dispose(); + + public void Dispose() + { + VerifyIsNotLockedByAnyThread(); + + _monitor.Dispose(); + GC.SuppressFinalize(this); + } + + [Conditional("DEBUG")] + public void VerifyIsLocked() + { +#if DEBUG + Debug.Assert(_ownerThread == Thread.CurrentThread); +#endif + Debug.Assert((_state & LockedMask) != 0); + } + + [Conditional("DEBUG")] + public void VerifyIsNotLocked() + { +#if DEBUG + Debug.Assert(_ownerThread != Thread.CurrentThread); +#endif + } + + [Conditional("DEBUG")] + private void VerifyIsNotLockedByAnyThread() + { +#if DEBUG + Debug.Assert(_ownerThread == null); +#endif + } + + [Conditional("DEBUG")] + private void ResetOwnerThread() + { + VerifyIsLocked(); +#if DEBUG + _ownerThread = null; +#endif + } + + [Conditional("DEBUG")] + private void SetOwnerThreadToCurrent() + { + VerifyIsNotLockedByAnyThread(); +#if DEBUG + _ownerThread = Thread.CurrentThread; +#endif + } + + public bool TryAcquire() + { + VerifyIsNotLocked(); + + // A common case is that there are no waiters, so hope for that and try to acquire the lock + int state = Interlocked.CompareExchange(ref _state, LockedMask, 0); + if (state == 0 || TryAcquire_NoFastPath(state)) + { + SetOwnerThreadToCurrent(); + return true; + } + return false; + } + + private bool TryAcquire_NoFastPath(int state) + { + // The lock may be available, but there may be waiters. This thread could acquire the lock in that case. 
Acquiring + // the lock means that if this thread is repeatedly acquiring and releasing the lock, it could permanently starve + // waiters. Waiting instead in the same situation would deterministically create a lock convoy. Here, we opt for + // acquiring the lock to prevent a deterministic lock convoy in that situation, and rely on the system's + // waiting/waking implementation to mitigate starvation, even in cases where there are enough logical processors to + // accommodate all threads. + return (state & LockedMask) == 0 && Interlocked.CompareExchange(ref _state, state + LockedMask, state) == state; + } + + private bool SpinWaitTryAcquireCallback() => TryAcquire_NoFastPath(_state); + + public void Acquire() + { + if (!TryAcquire()) + { + WaitAndAcquire(); + } + } + + private void WaitAndAcquire() + { + VerifyIsNotLocked(); + + // Spin a bit to see if the lock becomes available, before forcing the thread into a wait state + if (_spinWaiter.SpinWaitForCondition(_spinWaitTryAcquireCallback, SpinCount, SpinSleep0Threshold)) + { + Debug.Assert((_state & LockedMask) != 0); + SetOwnerThreadToCurrent(); + return; + } + + _monitor.Acquire(); + + // Register this thread as a waiter by incrementing the waiter count. Incrementing the waiter count and waiting on + // the monitor need to appear atomic to SignalWaiter so that its signal won't be lost. + int state = Interlocked.Add(ref _state, WaiterCountIncrement); + + // Wait on the monitor until signaled, repeatedly until the lock can be acquired by this thread + while (true) + { + // The lock may have been released before the waiter count was incremented above, so try to acquire the lock + // with the new state before waiting + if ((state & LockedMask) == 0 && + Interlocked.CompareExchange(ref _state, state + (LockedMask - WaiterCountIncrement), state) == state) + { + break; + } + + _monitor.Wait(); + + // Indicate to SignalWaiter that the signaled thread has woken up + _isAnyWaitingThreadSignaled = false; + + state = _state; + Debug.Assert((uint)state >= WaiterCountIncrement); + } + + _monitor.Release(); + + Debug.Assert((_state & LockedMask) != 0); + SetOwnerThreadToCurrent(); + } + + public void Release() + { + Debug.Assert((_state & LockedMask) != 0); + ResetOwnerThread(); + + if (Interlocked.Decrement(ref _state) != 0) + { + SignalWaiter(); + } + } + + private void SignalWaiter() + { + // Since the lock was already released by the caller, there are no guarantees on the state at this point. For + // instance, if there was only one thread waiting before the lock was released, then after the lock was released, + // another thread may have acquired and released the lock, and signaled the waiter, before the first thread arrives + // here. The monitor's lock is used to synchronize changes to the waiter count, so acquire the monitor and recheck + // the waiter count before signaling. + _monitor.Acquire(); + + // Keep track of whether a thread has been signaled but has not yet been released from the wait. + // _isAnyWaitingThreadSignaled is set to false when a signaled thread wakes up. Since threads can preempt waiting + // threads and acquire the lock (see TryAcquire), it allows for example, one thread to acquire and release the lock + // multiple times while there are multiple waiting threads. 
In such a case, we don't want that thread to signal a + // waiter every time it releases the lock, as that will cause unnecessary context switches with more and more + // signaled threads waking up, finding that the lock is still locked, and going right back into a wait state. So, + // signal only one waiting thread at a time. + if ((uint)_state >= WaiterCountIncrement && !_isAnyWaitingThreadSignaled) + { + _isAnyWaitingThreadSignaled = true; + _monitor.Signal_Release(); + return; + } + + _monitor.Release(); + } + } +} diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelMonitor.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelMonitor.cs index e10c63e..6aa1e0c 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelMonitor.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelMonitor.cs @@ -55,8 +55,8 @@ namespace System.Threading [Conditional("DEBUG")] private void ResetOwnerThread() { -#if DEBUG VerifyIsLocked(); +#if DEBUG _ownerThread = null; #endif } @@ -64,8 +64,8 @@ namespace System.Threading [Conditional("DEBUG")] private void SetOwnerThreadToCurrent() { -#if DEBUG VerifyIsNotLockedByAnyThread(); +#if DEBUG _ownerThread = Thread.CurrentThread; #endif } diff --git a/src/mono/netcore/System.Private.CoreLib/src/System/Threading/LowLevelSpinWaiter.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelSpinWaiter.cs similarity index 54% rename from src/mono/netcore/System.Private.CoreLib/src/System/Threading/LowLevelSpinWaiter.cs rename to src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelSpinWaiter.cs index 462780d..e1c0766 100644 --- a/src/mono/netcore/System.Private.CoreLib/src/System/Threading/LowLevelSpinWaiter.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/LowLevelSpinWaiter.cs @@ -5,8 +5,52 @@ using System.Diagnostics; namespace System.Threading { + /// + /// A lightweight spin-waiter intended to be used as the first-level wait for a condition before the user forces the thread + /// into a wait state, and where the condition to be checked in each iteration is relatively cheap, like just an interlocked + /// operation. + /// + /// Used by the wait subsystem on Unix, so this class cannot have any dependencies on the wait subsystem. + /// internal struct LowLevelSpinWaiter { + private int _spinningThreadCount; + + public bool SpinWaitForCondition(Func condition, int spinCount, int sleep0Threshold) + { + Debug.Assert(condition != null); + + int processorCount = Environment.ProcessorCount; + int spinningThreadCount = Interlocked.Increment(ref _spinningThreadCount); + try + { + // Limit the maximum spinning thread count to the processor count to prevent unnecessary context switching + // caused by an excessive number of threads spin waiting, perhaps even slowing down the thread holding the + // resource being waited upon + if (spinningThreadCount <= processorCount) + { + // For uniprocessor systems, start at the yield threshold since the pause instructions used for waiting + // prior to that threshold would not help other threads make progress + for (int spinIndex = processorCount > 1 ? 
0 : sleep0Threshold; spinIndex < spinCount; ++spinIndex) + { + // The caller should check the condition in a fast path before calling this method, so wait first + Wait(spinIndex, sleep0Threshold, processorCount); + + if (condition()) + { + return true; + } + } + } + } + finally + { + Interlocked.Decrement(ref _spinningThreadCount); + } + + return false; + } + public static void Wait(int spinIndex, int sleep0Threshold, int processorCount) { Debug.Assert(spinIndex >= 0); @@ -40,10 +84,8 @@ namespace System.Threading return; } - // Thread.Sleep(int) is interruptible. The current operation may not allow thread interrupt - // (for instance, LowLevelLock.Acquire as part of EventWaitHandle.Set). Use the - // uninterruptible version of Sleep(0). Not doing Thread.Yield, it does not seem to have any - // benefit over Sleep(0). + // Thread.Sleep is interruptible. The current operation may not allow thread interrupt. Use the uninterruptible + // version of Sleep(0). Not doing Thread.Yield, it does not seem to have any benefit over Sleep(0). Thread.UninterruptibleSleep0(); // Don't want to Sleep(1) in this spin wait: diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.GateThread.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.GateThread.cs index bcdaa2f..c509fdf 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.GateThread.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.GateThread.cs @@ -2,6 +2,9 @@ // The .NET Foundation licenses this file to you under the MIT license. using System.Diagnostics; +using System.Diagnostics.Tracing; +using System.Runtime.CompilerServices; +using System.Runtime.InteropServices; namespace System.Threading { @@ -13,114 +16,147 @@ namespace System.Threading private const int DequeueDelayThresholdMs = GateThreadDelayMs * 2; private const int GateThreadRunningMask = 0x4; - private static int s_runningState; - private static readonly AutoResetEvent s_runGateThreadEvent = new AutoResetEvent(initialState: true); - private static CpuUtilizationReader s_cpu; private const int MaxRuns = 2; - // TODO: CoreCLR: Worker Tracking in CoreCLR? (Config name: ThreadPool_EnableWorkerTracking) private static void GateThreadStart() { - _ = s_cpu.CurrentUtilization; // The first reading is over a time range other than what we are focusing on, so we do not use the read. + bool disableStarvationDetection = + AppContextConfigHelper.GetBooleanConfig("System.Threading.ThreadPool.DisableStarvationDetection", false); + bool debuggerBreakOnWorkStarvation = + AppContextConfigHelper.GetBooleanConfig("System.Threading.ThreadPool.DebugBreakOnWorkerStarvation", false); + + // The first reading is over a time range other than what we are focusing on, so we do not use the read other + // than to send it to any runtime-specific implementation that may also use the CPU utilization. 
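LowLevelSpinWaiter.SpinWaitForCondition, shown above, is the shared first-level spin used before a thread is forced into a kernel wait: it checks a cheap condition on every iteration, starts at the Sleep(0) threshold on uniprocessor machines, and refuses to spin at all once more threads are spinning than there are processors. A self-contained sketch of the same shape (names simplified; Thread.Sleep(0) stands in for the runtime's uninterruptible sleep):

    using System;
    using System.Threading;

    internal static class SpinThenBlockSketch
    {
        private static int s_spinningThreadCount;

        // Returns true if the condition was observed during the bounded spin;
        // on false the caller should fall back to a real wait (event, semaphore, monitor).
        internal static bool SpinWaitForCondition(Func<bool> condition, int spinCount, int sleep0Threshold)
        {
            int processorCount = Environment.ProcessorCount;
            int spinningThreadCount = Interlocked.Increment(ref s_spinningThreadCount);
            try
            {
                // Don't add more spinners than processors; extra spinners only burn cycles
                // and can slow down the thread that holds the resource being waited for.
                if (spinningThreadCount <= processorCount)
                {
                    // On a uniprocessor, pause-based spinning can't help; start at the yield threshold.
                    for (int i = processorCount > 1 ? 0 : sleep0Threshold; i < spinCount; i++)
                    {
                        if (i < sleep0Threshold)
                        {
                            Thread.SpinWait(4 << i);   // brief, escalating pause-based spin
                        }
                        else
                        {
                            Thread.Sleep(0);           // yield; the runtime uses an uninterruptible variant
                        }

                        if (condition())
                        {
                            return true;
                        }
                    }
                }
            }
            finally
            {
                Interlocked.Decrement(ref s_spinningThreadCount);
            }

            return false;
        }
    }
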
+ CpuUtilizationReader cpuUtilizationReader = default; + _ = cpuUtilizationReader.CurrentUtilization; - AppContext.TryGetSwitch("System.Threading.ThreadPool.DisableStarvationDetection", out bool disableStarvationDetection); - AppContext.TryGetSwitch("System.Threading.ThreadPool.DebugBreakOnWorkerStarvation", out bool debuggerBreakOnWorkStarvation); + PortableThreadPool threadPoolInstance = ThreadPoolInstance; + LowLevelLock hillClimbingThreadAdjustmentLock = threadPoolInstance._hillClimbingThreadAdjustmentLock; while (true) { s_runGateThreadEvent.WaitOne(); + + bool needGateThreadForRuntime; do { Thread.Sleep(GateThreadDelayMs); - ThreadPoolInstance._cpuUtilization = s_cpu.CurrentUtilization; + if (ThreadPool.EnableWorkerTracking && + PortableThreadPoolEventSource.Log.IsEnabled( + EventLevel.Verbose, + PortableThreadPoolEventSource.Keywords.ThreadingKeyword)) + { + PortableThreadPoolEventSource.Log.ThreadPoolWorkingThreadCount( + (uint)threadPoolInstance.GetAndResetHighWatermarkCountOfThreadsProcessingUserCallbacks()); + } - if (!disableStarvationDetection) + int cpuUtilization = cpuUtilizationReader.CurrentUtilization; + threadPoolInstance._cpuUtilization = cpuUtilization; + + needGateThreadForRuntime = ThreadPool.PerformRuntimeSpecificGateActivities(cpuUtilization); + + if (!disableStarvationDetection && + threadPoolInstance._separated.numRequestedWorkers > 0 && + SufficientDelaySinceLastDequeue(threadPoolInstance)) { - if (ThreadPoolInstance._numRequestedWorkers > 0 && SufficientDelaySinceLastDequeue()) + try { - try + hillClimbingThreadAdjustmentLock.Acquire(); + ThreadCounts counts = threadPoolInstance._separated.counts.VolatileRead(); + + // Don't add a thread if we're at max or if we are already in the process of adding threads. + // This logic is slightly different from the native implementation in CoreCLR because there are + // no retired threads. In the native implementation, when hill climbing reduces the thread count + // goal, threads that are stopped from processing work are switched to "retired" state, and they + // don't count towards the equivalent existing thread count. In this implementation, the + // existing thread count includes any worker thread that has not yet exited, including those + // stopped from working by hill climbing, so here the number of threads processing work, instead + // of the number of existing threads, is compared with the goal. There may be alternative + // solutions, for now this is only to maintain consistency in behavior. 
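Starvation detection in this loop reduces to a timing check followed by the compare-exchange sequence below, which raises NumThreadsGoal to one above the number of threads currently processing work and notifies hill climbing of a forced Starvation transition. The timing check itself (SufficientDelaySinceLastDequeue, later in this file) scales the allowed quiet period with the current goal once the CPU is busy; a worked sketch with assumed constants (the real values come from this file's constants, not from here):

    internal static class StarvationCheckSketch
    {
        private const int GateThreadDelayMs = 500;                          // assumed gate period, for illustration
        private const int DequeueDelayThresholdMs = GateThreadDelayMs * 2;
        private const int CpuUtilizationLow = 80;                           // assumed threshold, for illustration

        internal static bool SufficientDelaySinceLastDequeue(
            int tickCountNow, int lastDequeueTimeMs, int cpuUtilization, short numThreadsGoal)
        {
            int delay = tickCountNow - lastDequeueTimeMs;

            // Idle CPU: any delay beyond one gate period looks like starvation.
            // Busy CPU: allow roughly two gate periods per thread we already want.
            int minimumDelay = cpuUtilization < CpuUtilizationLow
                ? GateThreadDelayMs
                : numThreadsGoal * DequeueDelayThresholdMs;

            return delay > minimumDelay;
        }
    }

For example, with a goal of 4 threads and a busy CPU, a worker is injected only after 4 * 1000 ms = 4 seconds pass without any work item being dequeued (under the assumed 500 ms gate period), while an idle CPU trips the check after a single gate period.
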
+ while ( + counts.NumExistingThreads < threadPoolInstance._maxThreads && + counts.NumProcessingWork >= counts.NumThreadsGoal) { - ThreadPoolInstance._hillClimbingThreadAdjustmentLock.Acquire(); - ThreadCounts counts = ThreadCounts.VolatileReadCounts(ref ThreadPoolInstance._separated.counts); - // don't add a thread if we're at max or if we are already in the process of adding threads - while (counts.numExistingThreads < ThreadPoolInstance._maxThreads && counts.numExistingThreads >= counts.numThreadsGoal) + if (debuggerBreakOnWorkStarvation) { - if (debuggerBreakOnWorkStarvation) - { - Debugger.Break(); - } - - ThreadCounts newCounts = counts; - newCounts.numThreadsGoal = (short)(newCounts.numExistingThreads + 1); - ThreadCounts oldCounts = ThreadCounts.CompareExchangeCounts(ref ThreadPoolInstance._separated.counts, newCounts, counts); - if (oldCounts == counts) - { - HillClimbing.ThreadPoolHillClimber.ForceChange(newCounts.numThreadsGoal, HillClimbing.StateOrTransition.Starvation); - WorkerThread.MaybeAddWorkingWorker(); - break; - } - counts = oldCounts; + Debugger.Break(); } + + ThreadCounts newCounts = counts; + short newNumThreadsGoal = (short)(counts.NumProcessingWork + 1); + newCounts.NumThreadsGoal = newNumThreadsGoal; + + ThreadCounts oldCounts = threadPoolInstance._separated.counts.InterlockedCompareExchange(newCounts, counts); + if (oldCounts == counts) + { + HillClimbing.ThreadPoolHillClimber.ForceChange(newNumThreadsGoal, HillClimbing.StateOrTransition.Starvation); + WorkerThread.MaybeAddWorkingWorker(threadPoolInstance); + break; + } + + counts = oldCounts; } - finally - { - ThreadPoolInstance._hillClimbingThreadAdjustmentLock.Release(); - } + } + finally + { + hillClimbingThreadAdjustmentLock.Release(); } } - } while (ThreadPoolInstance._numRequestedWorkers > 0 || Interlocked.Decrement(ref s_runningState) > GetRunningStateForNumRuns(0)); + } while ( + needGateThreadForRuntime || + threadPoolInstance._separated.numRequestedWorkers > 0 || + Interlocked.Decrement(ref threadPoolInstance._separated.gateThreadRunningState) > GetRunningStateForNumRuns(0)); } } // called by logic to spawn new worker threads, return true if it's been too long // since the last dequeue operation - takes number of worker threads into account // in deciding "too long" - private static bool SufficientDelaySinceLastDequeue() + private static bool SufficientDelaySinceLastDequeue(PortableThreadPool threadPoolInstance) { - int delay = Environment.TickCount - Volatile.Read(ref ThreadPoolInstance._separated.lastDequeueTime); + int delay = Environment.TickCount - Volatile.Read(ref threadPoolInstance._separated.lastDequeueTime); int minimumDelay; - if (ThreadPoolInstance._cpuUtilization < CpuUtilizationLow) + if (threadPoolInstance._cpuUtilization < CpuUtilizationLow) { minimumDelay = GateThreadDelayMs; } else { - ThreadCounts counts = ThreadCounts.VolatileReadCounts(ref ThreadPoolInstance._separated.counts); - int numThreads = counts.numThreadsGoal; + ThreadCounts counts = threadPoolInstance._separated.counts.VolatileRead(); + int numThreads = counts.NumThreadsGoal; minimumDelay = numThreads * DequeueDelayThresholdMs; } return delay > minimumDelay; } // This is called by a worker thread - internal static void EnsureRunning() + internal static void EnsureRunning(PortableThreadPool threadPoolInstance) { - int numRunsMask = Interlocked.Exchange(ref s_runningState, GetRunningStateForNumRuns(MaxRuns)); - if ((numRunsMask & GateThreadRunningMask) == 0) + // The callers ensure that this speculative load is sufficient 
to ensure that the gate thread is activated when + // it is needed + if (threadPoolInstance._separated.gateThreadRunningState != GetRunningStateForNumRuns(MaxRuns)) { - bool created = false; - try - { - CreateGateThread(); - created = true; - } - finally - { - if (!created) - { - Interlocked.Exchange(ref s_runningState, 0); - } - } + EnsureRunningSlow(threadPoolInstance); } - else if (numRunsMask == GetRunningStateForNumRuns(0)) + } + + [MethodImpl(MethodImplOptions.NoInlining)] + internal static void EnsureRunningSlow(PortableThreadPool threadPoolInstance) + { + int numRunsMask = Interlocked.Exchange(ref threadPoolInstance._separated.gateThreadRunningState, GetRunningStateForNumRuns(MaxRuns)); + if (numRunsMask == GetRunningStateForNumRuns(0)) { s_runGateThreadEvent.Set(); } + else if ((numRunsMask & GateThreadRunningMask) == 0) + { + CreateGateThread(threadPoolInstance); + } } private static int GetRunningStateForNumRuns(int numRuns) @@ -130,12 +166,29 @@ namespace System.Threading return GateThreadRunningMask | numRuns; } - private static void CreateGateThread() + [MethodImpl(MethodImplOptions.NoInlining)] + private static void CreateGateThread(PortableThreadPool threadPoolInstance) { - Thread gateThread = new Thread(GateThreadStart); - gateThread.IsBackground = true; - gateThread.Start(); + bool created = false; + try + { + Thread gateThread = new Thread(GateThreadStart, SmallStackSizeBytes); + gateThread.IsThreadPoolThread = true; + gateThread.IsBackground = true; + gateThread.Name = ".NET ThreadPool Gate"; + gateThread.Start(); + created = true; + } + finally + { + if (!created) + { + Interlocked.Exchange(ref threadPoolInstance._separated.gateThreadRunningState, 0); + } + } } } + + internal static void EnsureGateThreadRunning() => GateThread.EnsureRunning(ThreadPoolInstance); } } diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.HillClimbing.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.HillClimbing.cs index a84b752..c7ed0ee 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.HillClimbing.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.HillClimbing.cs @@ -12,32 +12,16 @@ namespace System.Threading /// private partial class HillClimbing { - private static readonly Lazy s_threadPoolHillClimber = new Lazy(CreateHillClimber, true); - public static HillClimbing ThreadPoolHillClimber => s_threadPoolHillClimber.Value; - + private const int LogCapacity = 200; private const int DefaultSampleIntervalMsLow = 10; private const int DefaultSampleIntervalMsHigh = 200; - private static HillClimbing CreateHillClimber() - { - // Default values pulled from CoreCLR - return new HillClimbing(wavePeriod: AppContextConfigHelper.GetInt32Config("HillClimbing_WavePeriod", 4, false), - maxWaveMagnitude: AppContextConfigHelper.GetInt32Config("HillClimbing_MaxWaveMagnitude", 20, false), - waveMagnitudeMultiplier: AppContextConfigHelper.GetInt32Config("HillClimbing_WaveMagnitudeMultiplier", 100, false) / 100.0, - waveHistorySize: AppContextConfigHelper.GetInt32Config("HillClimbing_WaveHistorySize", 8, false), - targetThroughputRatio: AppContextConfigHelper.GetInt32Config("HillClimbing_Bias", 15, false) / 100.0, - targetSignalToNoiseRatio: AppContextConfigHelper.GetInt32Config("HillClimbing_TargetSignalToNoiseRatio", 300, false) / 100.0, - maxChangePerSecond: AppContextConfigHelper.GetInt32Config("HillClimbing_MaxChangePerSecond", 4, false), - maxChangePerSample: 
AppContextConfigHelper.GetInt32Config("HillClimbing_MaxChangePerSample", 20, false), - sampleIntervalMsLow: AppContextConfigHelper.GetInt32Config("HillClimbing_SampleIntervalLow", DefaultSampleIntervalMsLow, false), - sampleIntervalMsHigh: AppContextConfigHelper.GetInt32Config("HillClimbing_SampleIntervalHigh", DefaultSampleIntervalMsHigh, false), - errorSmoothingFactor: AppContextConfigHelper.GetInt32Config("HillClimbing_ErrorSmoothingFactor", 1, false) / 100.0, - gainExponent: AppContextConfigHelper.GetInt32Config("HillClimbing_GainExponent", 200, false) / 100.0, - maxSampleError: AppContextConfigHelper.GetInt32Config("HillClimbing_MaxSampleErrorPercent", 15, false) / 100.0 - ); - } - private const int LogCapacity = 200; + public static readonly bool IsDisabled = AppContextConfigHelper.GetBooleanConfig("System.Threading.ThreadPool.HillClimbing.Disable", false); + + // SOS's ThreadPool command depends on this name + public static readonly HillClimbing ThreadPoolHillClimber = new HillClimbing(); + // SOS's ThreadPool command depends on the enum values public enum StateOrTransition { Warmup, @@ -46,17 +30,18 @@ namespace System.Threading ClimbingMove, ChangePoint, Stabilizing, - Starvation, // Used as a message from the thread pool for a forced transition - ThreadTimedOut, // Usage as a message from the thread pool for a forced transition + Starvation, + ThreadTimedOut, } + // SOS's ThreadPool command depends on the names of all fields private struct LogEntry { public int tickCount; public StateOrTransition stateOrTransition; public int newControlSetting; public int lastHistoryCount; - public double lastHistoryMean; + public float lastHistoryMean; } private readonly int _wavePeriod; @@ -87,22 +72,22 @@ namespace System.Threading private readonly Random _randomIntervalGenerator = new Random(); - private readonly LogEntry[] _log = new LogEntry[LogCapacity]; - private int _logStart; - private int _logSize; + private readonly LogEntry[] _log = new LogEntry[LogCapacity]; // SOS's ThreadPool command depends on this name + private int _logStart; // SOS's ThreadPool command depends on this name + private int _logSize; // SOS's ThreadPool command depends on this name - public HillClimbing(int wavePeriod, int maxWaveMagnitude, double waveMagnitudeMultiplier, int waveHistorySize, double targetThroughputRatio, - double targetSignalToNoiseRatio, double maxChangePerSecond, double maxChangePerSample, int sampleIntervalMsLow, int sampleIntervalMsHigh, - double errorSmoothingFactor, double gainExponent, double maxSampleError) + public HillClimbing() { - _wavePeriod = wavePeriod; - _maxThreadWaveMagnitude = maxWaveMagnitude; - _threadMagnitudeMultiplier = waveMagnitudeMultiplier; - _samplesToMeasure = wavePeriod * waveHistorySize; - _targetThroughputRatio = targetThroughputRatio; - _targetSignalToNoiseRatio = targetSignalToNoiseRatio; - _maxChangePerSecond = maxChangePerSecond; - _maxChangePerSample = maxChangePerSample; + _wavePeriod = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.WavePeriod", 4, false); + _maxThreadWaveMagnitude = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.MaxWaveMagnitude", 20, false); + _threadMagnitudeMultiplier = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.WaveMagnitudeMultiplier", 100, false) / 100.0; + _samplesToMeasure = _wavePeriod * AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.WaveHistorySize", 8, false); + _targetThroughputRatio = 
AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.Bias", 15, false) / 100.0; + _targetSignalToNoiseRatio = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.TargetSignalToNoiseRatio", 300, false) / 100.0; + _maxChangePerSecond = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.MaxChangePerSecond", 4, false); + _maxChangePerSample = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.MaxChangePerSample", 20, false); + int sampleIntervalMsLow = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.SampleIntervalLow", DefaultSampleIntervalMsLow, false); + int sampleIntervalMsHigh = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.SampleIntervalHigh", DefaultSampleIntervalMsHigh, false); if (sampleIntervalMsLow <= sampleIntervalMsHigh) { _sampleIntervalMsLow = sampleIntervalMsLow; @@ -113,9 +98,9 @@ namespace System.Threading _sampleIntervalMsLow = DefaultSampleIntervalMsLow; _sampleIntervalMsHigh = DefaultSampleIntervalMsHigh; } - _throughputErrorSmoothingFactor = errorSmoothingFactor; - _gainExponent = gainExponent; - _maxSampleError = maxSampleError; + _throughputErrorSmoothingFactor = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.ErrorSmoothingFactor", 1, false) / 100.0; + _gainExponent = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.GainExponent", 200, false) / 100.0; + _maxSampleError = AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.HillClimbing.MaxSampleErrorPercent", 15, false) / 100.0; _samples = new double[_samplesToMeasure]; _threadCounts = new double[_samplesToMeasure]; @@ -184,10 +169,9 @@ namespace System.Threading // Add the current thread count and throughput sample to our history // double throughput = numCompletions / sampleDurationSeconds; - PortableThreadPoolEventSource log = PortableThreadPoolEventSource.Log; - if (log.IsEnabled()) + if (PortableThreadPoolEventSource.Log.IsEnabled()) { - log.WorkerThreadAdjustmentSample(throughput); + PortableThreadPoolEventSource.Log.ThreadPoolWorkerThreadAdjustmentSample(throughput); } int sampleIndex = (int)(_totalSamples % _samplesToMeasure); @@ -318,7 +302,8 @@ namespace System.Threading // // If the result was positive, and CPU is > 95%, refuse the move. 
// - if (move > 0.0 && ThreadPoolInstance._cpuUtilization > CpuUtilizationHigh) + PortableThreadPool threadPoolInstance = ThreadPoolInstance; + if (move > 0.0 && threadPoolInstance._cpuUtilization > CpuUtilizationHigh) move = 0.0; // @@ -337,8 +322,8 @@ namespace System.Threading // // Make sure our control setting is within the ThreadPool's limits // - int maxThreads = ThreadPoolInstance._maxThreads; - int minThreads = ThreadPoolInstance._minThreads; + int maxThreads = threadPoolInstance._maxThreads; + int minThreads = threadPoolInstance._minThreads; _currentControlSetting = Math.Min(maxThreads - newThreadWaveMagnitude, _currentControlSetting); _currentControlSetting = Math.Max(minThreads, _currentControlSetting); @@ -358,10 +343,10 @@ namespace System.Threading // Record these numbers for posterity // - if (log.IsEnabled()) + if (PortableThreadPoolEventSource.Log.IsEnabled()) { - log.WorkerThreadAdjustmentStats(sampleDurationSeconds, throughput, threadWaveComponent.Real, throughputWaveComponent.Real, - throughputErrorEstimate, _averageThroughputNoise, ratio.Real, confidence, _currentControlSetting, (ushort)newThreadWaveMagnitude); + PortableThreadPoolEventSource.Log.ThreadPoolWorkerThreadAdjustmentStats(sampleDurationSeconds, throughput, threadWaveComponent.Real, throughputWaveComponent.Real, + throughputErrorEstimate, _averageThroughputNoise, ratio.Real, confidence, _currentControlSetting, (ushort)newThreadWaveMagnitude); } @@ -381,7 +366,7 @@ namespace System.Threading // int newSampleInterval; if (ratio.Real < 0.0 && newThreadCount == minThreads) - newSampleInterval = (int)(0.5 + _currentSampleMs * (10.0 * Math.Max(-ratio.Real, 1.0))); + newSampleInterval = (int)(0.5 + _currentSampleMs * (10.0 * Math.Min(-ratio.Real, 1.0))); else newSampleInterval = _currentSampleMs; @@ -414,15 +399,17 @@ namespace System.Threading entry.tickCount = Environment.TickCount; entry.stateOrTransition = stateOrTransition; entry.newControlSetting = newThreadCount; - entry.lastHistoryCount = ((int)Math.Min(_totalSamples, _samplesToMeasure) / _wavePeriod) * _wavePeriod; - entry.lastHistoryMean = throughput; + entry.lastHistoryCount = (int)(Math.Min(_totalSamples, _samplesToMeasure) / _wavePeriod) * _wavePeriod; + entry.lastHistoryMean = (float)throughput; _logSize++; - PortableThreadPoolEventSource log = PortableThreadPoolEventSource.Log; - if (log.IsEnabled()) + if (PortableThreadPoolEventSource.Log.IsEnabled()) { - log.WorkerThreadAdjustmentAdjustment(throughput, newThreadCount, (int)stateOrTransition); + PortableThreadPoolEventSource.Log.ThreadPoolWorkerThreadAdjustmentAdjustment( + throughput, + (uint)newThreadCount, + (PortableThreadPoolEventSource.ThreadAdjustmentReasonMap)stateOrTransition); } } diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.ThreadCounts.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.ThreadCounts.cs index bc5e989..1587a30 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.ThreadCounts.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.ThreadCounts.cs @@ -2,7 +2,6 @@ // The .NET Foundation licenses this file to you under the MIT license. using System.Diagnostics; -using System.Runtime.InteropServices; namespace System.Threading { @@ -11,74 +10,86 @@ namespace System.Threading /// /// Tracks information on the number of threads we want/have in different states in our thread pool. 
/// - [StructLayout(LayoutKind.Explicit)] private struct ThreadCounts { - /// - /// Max possible thread pool threads we want to have. - /// - [FieldOffset(0)] - public short numThreadsGoal; + // SOS's ThreadPool command depends on this layout + private const byte NumProcessingWorkShift = 0; + private const byte NumExistingThreadsShift = 16; + private const byte NumThreadsGoalShift = 32; - /// - /// Number of thread pool threads that currently exist. - /// - [FieldOffset(2)] - public short numExistingThreads; + private ulong _data; // SOS's ThreadPool command depends on this name + + private ThreadCounts(ulong data) => _data = data; + + private short GetInt16Value(byte shift) => (short)(_data >> shift); + private void SetInt16Value(short value, byte shift) => + _data = (_data & ~((ulong)ushort.MaxValue << shift)) | ((ulong)(ushort)value << shift); /// /// Number of threads processing work items. /// - [FieldOffset(4)] - public short numProcessingWork; - - [FieldOffset(0)] - private long _asLong; - - public static ThreadCounts VolatileReadCounts(ref ThreadCounts counts) + public short NumProcessingWork { - return new ThreadCounts + get => GetInt16Value(NumProcessingWorkShift); + set { - _asLong = Volatile.Read(ref counts._asLong) - }; + Debug.Assert(value >= 0); + SetInt16Value(value, NumProcessingWorkShift); + } } - public static ThreadCounts CompareExchangeCounts(ref ThreadCounts location, ThreadCounts newCounts, ThreadCounts oldCounts) + public void SubtractNumProcessingWork(short value) { - ThreadCounts result = new ThreadCounts - { - _asLong = Interlocked.CompareExchange(ref location._asLong, newCounts._asLong, oldCounts._asLong) - }; + Debug.Assert(value >= 0); + Debug.Assert(value <= NumProcessingWork); - if (result == oldCounts) - { - result.Validate(); - newCounts.Validate(); - } - return result; + _data -= (ulong)(ushort)value << NumProcessingWorkShift; } - public static bool operator ==(ThreadCounts lhs, ThreadCounts rhs) => lhs._asLong == rhs._asLong; - - public static bool operator !=(ThreadCounts lhs, ThreadCounts rhs) => lhs._asLong != rhs._asLong; - - public override bool Equals(object? obj) + /// + /// Number of thread pool threads that currently exist. + /// + public short NumExistingThreads { - return obj is ThreadCounts counts && this._asLong == counts._asLong; + get => GetInt16Value(NumExistingThreadsShift); + set + { + Debug.Assert(value >= 0); + SetInt16Value(value, NumExistingThreadsShift); + } } - public override int GetHashCode() + public void SubtractNumExistingThreads(short value) { - return (int)(_asLong >> 8) + numThreadsGoal; + Debug.Assert(value >= 0); + Debug.Assert(value <= NumExistingThreads); + + _data -= (ulong)(ushort)value << NumExistingThreadsShift; } - private void Validate() + /// + /// Max possible thread pool threads we want to have. 
+ /// + public short NumThreadsGoal { - Debug.Assert(numThreadsGoal > 0, "Goal must be positive"); - Debug.Assert(numExistingThreads >= 0, "Number of existing threads must be non-zero"); - Debug.Assert(numProcessingWork >= 0, "Number of threads processing work must be non-zero"); - Debug.Assert(numProcessingWork <= numExistingThreads, $"Num processing work ({numProcessingWork}) must be less than or equal to Num existing threads ({numExistingThreads})"); + get => GetInt16Value(NumThreadsGoalShift); + set + { + Debug.Assert(value > 0); + SetInt16Value(value, NumThreadsGoalShift); + } } + + public ThreadCounts VolatileRead() => new ThreadCounts(Volatile.Read(ref _data)); + + public ThreadCounts InterlockedCompareExchange(ThreadCounts newCounts, ThreadCounts oldCounts) => + new ThreadCounts(Interlocked.CompareExchange(ref _data, newCounts._data, oldCounts._data)); + + public static bool operator ==(ThreadCounts lhs, ThreadCounts rhs) => lhs._data == rhs._data; + public static bool operator !=(ThreadCounts lhs, ThreadCounts rhs) => lhs._data != rhs._data; + + public override bool Equals(object? obj) => obj is ThreadCounts other && _data == other._data; + public override int GetHashCode() => (int)_data + (int)(_data >> 32); } } } diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WaitThread.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WaitThread.cs index 78a3b2d..5e43ef3 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WaitThread.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WaitThread.cs @@ -2,6 +2,7 @@ // The .NET Foundation licenses this file to you under the MIT license. using System.Diagnostics; +using Microsoft.Win32.SafeHandles; namespace System.Threading { @@ -20,23 +21,25 @@ namespace System.Threading /// A description of the requested registration. internal void RegisterWaitHandle(RegisteredWaitHandle handle) { + if (PortableThreadPoolEventSource.Log.IsEnabled()) + { + PortableThreadPoolEventSource.Log.ThreadPoolIOEnqueue(handle); + } + _waitThreadLock.Acquire(); try { - if (_waitThreadsHead == null) // Lazily create the first wait thread. + WaitThreadNode? current = _waitThreadsHead; + if (current == null) // Lazily create the first wait thread. { - _waitThreadsHead = new WaitThreadNode - { - Thread = new WaitThread() - }; + _waitThreadsHead = current = new WaitThreadNode(new WaitThread()); } // Register the wait handle on the first wait thread that is not at capacity. WaitThreadNode prev; - WaitThreadNode? current = _waitThreadsHead; do { - if (current.Thread!.RegisterWaitHandle(handle)) + if (current.Thread.RegisterWaitHandle(handle)) { return; } @@ -45,10 +48,7 @@ namespace System.Threading } while (current != null); // If all wait threads are full, create a new one. - prev.Next = new WaitThreadNode - { - Thread = new WaitThread() - }; + prev.Next = new WaitThreadNode(new WaitThread()); prev.Next.Thread.RegisterWaitHandle(handle); return; } @@ -58,6 +58,16 @@ namespace System.Threading } } + internal static void CompleteWait(RegisteredWaitHandle handle, bool timedOut) + { + if (PortableThreadPoolEventSource.Log.IsEnabled()) + { + PortableThreadPoolEventSource.Log.ThreadPoolIODequeue(handle); + } + + handle.PerformCallback(timedOut); + } + /// /// Attempt to remove the given wait thread from the list. It is only removed if there are no user-provided waits on the thread. 
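// Illustrative, self-contained sketch (not part of the change above) of how the packed ThreadCounts value
// is meant to be used: the sub-counts live in one 64-bit word, so a read or compare-exchange covers every
// count atomically, and updates follow an optimistic retry loop. Names below are hypothetical.
using System.Diagnostics;
using System.Threading;

internal static class PackedCountsSketch
{
    private const int NumProcessingWorkShift = 0;

    // 16-bit sub-counts packed into one 64-bit word, updated together atomically.
    private static ulong s_counts;

    private static short GetNumProcessingWork(ulong counts) => (short)(counts >> NumProcessingWorkShift);

    // The canonical update loop used throughout the worker and wait thread code:
    // read, copy-and-modify, compare-exchange, and retry if another thread won the race.
    public static void DecrementNumProcessingWork()
    {
        ulong current = Volatile.Read(ref s_counts);
        while (true)
        {
            // Guards against borrowing into the neighboring packed field when the count is already zero.
            Debug.Assert(GetNumProcessingWork(current) > 0);
            ulong desired = current - ((ulong)1 << NumProcessingWorkShift);

            ulong observed = Interlocked.CompareExchange(ref s_counts, desired, current);
            if (observed == current)
            {
                break; // every packed count was updated in one atomic step
            }

            current = observed; // lost the race; retry against the freshly observed value
        }
    }
}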
/// @@ -87,14 +97,14 @@ namespace System.Threading /// The wait thread to remove from the list. private void RemoveWaitThread(WaitThread thread) { - if (_waitThreadsHead!.Thread == thread) + WaitThreadNode? current = _waitThreadsHead!; + if (current.Thread == thread) { - _waitThreadsHead = _waitThreadsHead.Next; + _waitThreadsHead = current.Next; return; } WaitThreadNode prev; - WaitThreadNode? current = _waitThreadsHead; do { @@ -112,8 +122,10 @@ namespace System.Threading private class WaitThreadNode { - public WaitThread? Thread { get; set; } + public WaitThread Thread { get; } public WaitThreadNode? Next { get; set; } + + public WaitThreadNode(WaitThread thread) => Thread = thread; } /// @@ -122,21 +134,6 @@ namespace System.Threading internal class WaitThread { /// - /// The info for a completed wait on a specific . - /// - private struct CompletedWaitHandle - { - public CompletedWaitHandle(RegisteredWaitHandle completedHandle, bool timedOut) - { - CompletedHandle = completedHandle; - TimedOut = timedOut; - } - - public RegisteredWaitHandle CompletedHandle { get; } - public bool TimedOut { get; } - } - - /// /// The wait handles registered on this wait thread. /// private readonly RegisteredWaitHandle[] _registeredWaits = new RegisteredWaitHandle[WaitHandle.MaxWaitHandles - 1]; @@ -146,7 +143,7 @@ namespace System.Threading /// /// The zeroth element of this array is always . /// - private readonly WaitHandle[] _waitHandles = new WaitHandle[WaitHandle.MaxWaitHandles]; + private readonly SafeWaitHandle[] _waitHandles = new SafeWaitHandle[WaitHandle.MaxWaitHandles]; /// /// The number of user-registered waits on this wait thread. /// @@ -170,9 +167,11 @@ namespace System.Threading public WaitThread() { - _waitHandles[0] = _changeHandlesEvent; - Thread waitThread = new Thread(WaitThreadStart); + _waitHandles[0] = _changeHandlesEvent.SafeWaitHandle; + Thread waitThread = new Thread(WaitThreadStart, SmallStackSizeBytes); + waitThread.IsThreadPoolThread = true; waitThread.IsBackground = true; + waitThread.Name = ".NET ThreadPool Wait"; waitThread.Start(); } @@ -183,9 +182,12 @@ namespace System.Threading { while (true) { - ProcessRemovals(); - int numUserWaits = _numUserWaits; - int preWaitTimeMs = Environment.TickCount; + // This value is taken inside the lock after processing removals. In this iteration these are the number of + // user waits that will be waited upon. Any new waits will wake the wait and the next iteration would + // consider them. + int numUserWaits = ProcessRemovals(); + + int currentTimeMs = Environment.TickCount; // Recalculate Timeout int timeoutDurationMs = Timeout.Infinite; @@ -197,20 +199,22 @@ namespace System.Threading { for (int i = 0; i < numUserWaits; i++) { - if (_registeredWaits[i].IsInfiniteTimeout) + RegisteredWaitHandle registeredWait = _registeredWaits[i]; + Debug.Assert(registeredWait != null); + if (registeredWait.IsInfiniteTimeout) { continue; } - int handleTimeoutDurationMs = _registeredWaits[i].TimeoutTimeMs - preWaitTimeMs; + int handleTimeoutDurationMs = Math.Max(0, registeredWait.TimeoutTimeMs - currentTimeMs); if (timeoutDurationMs == Timeout.Infinite) { - timeoutDurationMs = handleTimeoutDurationMs > 0 ? handleTimeoutDurationMs : 0; + timeoutDurationMs = handleTimeoutDurationMs; } else { - timeoutDurationMs = Math.Min(handleTimeoutDurationMs > 0 ? 
handleTimeoutDurationMs : 0, timeoutDurationMs); + timeoutDurationMs = Math.Min(handleTimeoutDurationMs, timeoutDurationMs); } if (timeoutDurationMs == 0) @@ -220,38 +224,42 @@ namespace System.Threading } } - int signaledHandleIndex = WaitHandle.WaitAny(new ReadOnlySpan(_waitHandles, 0, numUserWaits + 1), timeoutDurationMs); + int signaledHandleIndex = WaitHandle.WaitAny(new ReadOnlySpan(_waitHandles, 0, numUserWaits + 1), timeoutDurationMs); + + if (signaledHandleIndex >= WaitHandle.WaitAbandoned && + signaledHandleIndex < WaitHandle.WaitAbandoned + 1 + numUserWaits) + { + // For compatibility, treat an abandoned mutex wait result as a success and ignore the abandonment + Debug.Assert(signaledHandleIndex != WaitHandle.WaitAbandoned); // the first wait handle is an event + signaledHandleIndex += WaitHandle.WaitSuccess - WaitHandle.WaitAbandoned; + } if (signaledHandleIndex == 0) // If we were woken up for a change in our handles, continue. { continue; } - RegisteredWaitHandle? signaledHandle = signaledHandleIndex != WaitHandle.WaitTimeout ? _registeredWaits[signaledHandleIndex - 1] : null; - - if (signaledHandle != null) + if (signaledHandleIndex != WaitHandle.WaitTimeout) { + RegisteredWaitHandle signaledHandle = _registeredWaits[signaledHandleIndex - 1]; + Debug.Assert(signaledHandle != null); QueueWaitCompletion(signaledHandle, false); + continue; } - else + + if (numUserWaits == 0 && ThreadPoolInstance.TryRemoveWaitThread(this)) { - if (numUserWaits == 0) - { - if (ThreadPoolInstance.TryRemoveWaitThread(this)) - { - return; - } - } + return; + } - int elapsedDurationMs = Environment.TickCount - preWaitTimeMs; // Calculate using relative time to ensure we don't have issues with overflow wraparound - for (int i = 0; i < numUserWaits; i++) + currentTimeMs = Environment.TickCount; + for (int i = 0; i < numUserWaits; i++) + { + RegisteredWaitHandle registeredHandle = _registeredWaits[i]; + Debug.Assert(registeredHandle != null); + if (!registeredHandle.IsInfiniteTimeout && currentTimeMs - registeredHandle.TimeoutTimeMs >= 0) { - RegisteredWaitHandle registeredHandle = _registeredWaits[i]; - int handleTimeoutDurationMs = registeredHandle.TimeoutTimeMs - preWaitTimeMs; - if (elapsedDurationMs >= handleTimeoutDurationMs) - { - QueueWaitCompletion(registeredHandle, true); - } + QueueWaitCompletion(registeredHandle, true); } } } @@ -261,9 +269,10 @@ namespace System.Threading /// Go through the array and remove those registered wait handles from the /// and arrays, filling the holes along the way. 
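// Self-contained sketch (illustrative only, not the runtime's code) of the timeout computation the wait loop
// above performs before calling WaitHandle.WaitAny: take the smallest remaining due time across the
// registered waits, clamp overdue waits to zero, and fall back to Timeout.Infinite when every wait is infinite.
using System;
using System.Threading;

internal static class WaitTimeoutSketch
{
    public static int ComputeWaitAnyTimeout(int currentTimeMs, int[] timeoutTimesMs, bool[] isInfiniteTimeout)
    {
        int timeoutDurationMs = Timeout.Infinite;
        for (int i = 0; i < timeoutTimesMs.Length; i++)
        {
            if (isInfiniteTimeout[i])
            {
                continue;
            }

            // Subtracting tick counts (rather than comparing absolute values) stays correct across
            // Environment.TickCount wraparound; Math.Max clamps waits that are already overdue.
            int remainingMs = Math.Max(0, timeoutTimesMs[i] - currentTimeMs);

            timeoutDurationMs = timeoutDurationMs == Timeout.Infinite
                ? remainingMs
                : Math.Min(remainingMs, timeoutDurationMs);

            if (timeoutDurationMs == 0)
            {
                break; // some wait has already timed out; no need to look further
            }
        }

        return timeoutDurationMs;
    }
}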
/// - private void ProcessRemovals() + private int ProcessRemovals() { - ThreadPoolInstance._waitThreadLock.Acquire(); + PortableThreadPool threadPoolInstance = ThreadPoolInstance; + threadPoolInstance._waitThreadLock.Acquire(); try { Debug.Assert(_numPendingRemoves >= 0); @@ -274,7 +283,7 @@ namespace System.Threading if (_numPendingRemoves == 0 || _numUserWaits == 0) { - return; + return _numUserWaits; // return the value taken inside the lock for the caller } int originalNumUserWaits = _numUserWaits; int originalNumPendingRemoves = _numPendingRemoves; @@ -282,61 +291,79 @@ namespace System.Threading // This is O(N^2), but max(N) = 63 and N will usually be very low for (int i = 0; i < _numPendingRemoves; i++) { - for (int j = 0; j < _numUserWaits; j++) + RegisteredWaitHandle waitHandleToRemove = _pendingRemoves[i]!; + int numUserWaits = _numUserWaits; + int j = 0; + for (; j < numUserWaits && waitHandleToRemove != _registeredWaits[j]; j++) { - if (_pendingRemoves[i] == _registeredWaits[j]) - { - _registeredWaits[j].OnRemoveWait(); - _registeredWaits[j] = _registeredWaits[_numUserWaits - 1]; - _waitHandles[j + 1] = _waitHandles[_numUserWaits]; - _registeredWaits[_numUserWaits - 1] = null!; - _waitHandles[_numUserWaits] = null!; - --_numUserWaits; - _pendingRemoves[i] = null; - break; - } } - Debug.Assert(_pendingRemoves[i] == null); + Debug.Assert(j < numUserWaits); + + waitHandleToRemove.OnRemoveWait(); + + if (j + 1 < numUserWaits) + { + // Not removing the last element. Due to the possibility of there being duplicate system wait + // objects in the wait array, perhaps even with different handle values due to the use of + // DuplicateHandle(), don't reorder handles for fairness. When there are duplicate system wait + // objects in the wait array and the wait object gets signaled, the system may release the wait in + // in deterministic order based on the order in the wait array. Instead, shift the array. + + int removeAt = j; + int count = numUserWaits; + Array.Copy(_registeredWaits, removeAt + 1, _registeredWaits, removeAt, count - (removeAt + 1)); + _registeredWaits[count - 1] = null!; + + // Corresponding elements in the wait handles array are shifted up by one + removeAt++; + count++; + Array.Copy(_waitHandles, removeAt + 1, _waitHandles, removeAt, count - (removeAt + 1)); + _waitHandles[count - 1] = null!; + } + else + { + // Removing the last element + _registeredWaits[j] = null!; + _waitHandles[j + 1] = null!; + } + + _numUserWaits = numUserWaits - 1; + _pendingRemoves[i] = null; + + waitHandleToRemove.Handle.DangerousRelease(); } _numPendingRemoves = 0; Debug.Assert(originalNumUserWaits - originalNumPendingRemoves == _numUserWaits, $"{originalNumUserWaits} - {originalNumPendingRemoves} == {_numUserWaits}"); + return _numUserWaits; // return the value taken inside the lock for the caller } finally { - ThreadPoolInstance._waitThreadLock.Release(); + threadPoolInstance._waitThreadLock.Release(); } } /// - /// Queue a call to on the ThreadPool. + /// Queue a call to complete the wait on the ThreadPool. /// /// The handle that completed. /// Whether or not the wait timed out. private void QueueWaitCompletion(RegisteredWaitHandle registeredHandle, bool timedOut) { registeredHandle.RequestCallback(); + // If the handle is a repeating handle, set up the next call. Otherwise, remove it from the wait thread. 
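// Illustrative reduction (not the runtime's code) of the removal strategy used in ProcessRemovals above:
// shift the tail of the array down by one instead of swapping the last element into the hole, so the
// relative order of the remaining registered waits and their handles is preserved. As the comment above
// explains, order matters when duplicate system wait objects are registered, because the OS may release a
// signaled wait based on its position in the wait array.
using System;

internal static class OrderPreservingRemovalSketch
{
    // Removes the element at 'removeAt' from the first 'count' items of 'array' and returns the new count.
    public static int RemoveAt<T>(T[] array, int count, int removeAt) where T : class
    {
        Array.Copy(array, removeAt + 1, array, removeAt, count - (removeAt + 1));
        array[count - 1] = null!;
        return count - 1;
    }
}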
if (registeredHandle.Repeating) { - registeredHandle.RestartTimeout(Environment.TickCount); + registeredHandle.RestartTimeout(); } else { UnregisterWait(registeredHandle, blocking: false); // We shouldn't block the wait thread on the unregistration. } - ThreadPool.QueueUserWorkItem(CompleteWait, new CompletedWaitHandle(registeredHandle, timedOut)); - } - /// - /// Process the completion of a user-registered wait (call the callback). - /// - /// A object representing the wait completion. - private void CompleteWait(object? state) - { - CompletedWaitHandle handle = (CompletedWaitHandle)state!; - handle.CompletedHandle.PerformCallback(handle.TimedOut); + ThreadPool.UnsafeQueueWaitCompletion(new CompleteWaitThreadPoolWorkItem(registeredHandle, timedOut)); } /// @@ -352,6 +379,10 @@ namespace System.Threading return false; } + bool success = false; + handle.Handle.DangerousAddRef(ref success); + Debug.Assert(success); + _registeredWaits[_numUserWaits] = handle; _waitHandles[_numUserWaits + 1] = handle.Handle; _numUserWaits++; @@ -385,20 +416,25 @@ namespace System.Threading { bool pendingRemoval = false; // TODO: Optimization: Try to unregister wait directly if it isn't being waited on. - ThreadPoolInstance._waitThreadLock.Acquire(); + PortableThreadPool threadPoolInstance = ThreadPoolInstance; + threadPoolInstance._waitThreadLock.Acquire(); try { // If this handle is not already pending removal and hasn't already been removed - if (Array.IndexOf(_registeredWaits, handle) != -1 && Array.IndexOf(_pendingRemoves, handle) == -1) + if (Array.IndexOf(_registeredWaits, handle) != -1) { - _pendingRemoves[_numPendingRemoves++] = handle; - _changeHandlesEvent.Set(); // Tell the wait thread that there are changes pending. + if (Array.IndexOf(_pendingRemoves, handle) == -1) + { + _pendingRemoves[_numPendingRemoves++] = handle; + _changeHandlesEvent.Set(); // Tell the wait thread that there are changes pending. + } + pendingRemoval = true; } } finally { - ThreadPoolInstance._waitThreadLock.Release(); + threadPoolInstance._waitThreadLock.Release(); } if (blocking) diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WorkerThread.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WorkerThread.cs index 2cedaa5..1f52e9d 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WorkerThread.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WorkerThread.cs @@ -13,79 +13,103 @@ namespace System.Threading /// /// Semaphore for controlling how many threads are currently working. 
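// The change above replaces ThreadPool.QueueUserWorkItem(CompleteWait, new CompletedWaitHandle(...)),
// which allocates a delegate and boxes the state struct, with a dedicated work item queued through an
// internal entry point (ThreadPool.UnsafeQueueWaitCompletion). The sketch below approximates the same idea
// with the public IThreadPoolWorkItem API; the interface and types here are stand-ins, not the runtime's
// internal CompleteWaitThreadPoolWorkItem.
using System.Threading;

internal interface IWaitCallbackTarget
{
    void PerformCallback(bool timedOut);
}

internal sealed class CompleteWaitWorkItemSketch : IThreadPoolWorkItem
{
    private readonly IWaitCallbackTarget _target;
    private readonly bool _timedOut;

    public CompleteWaitWorkItemSketch(IWaitCallbackTarget target, bool timedOut)
    {
        _target = target;
        _timedOut = timedOut;
    }

    // Runs on a thread pool worker; no closure or boxed state is involved.
    void IThreadPoolWorkItem.Execute() => _target.PerformCallback(_timedOut);
}

internal static class WaitCompletionQueueSketch
{
    public static void Queue(IWaitCallbackTarget target, bool timedOut) =>
        ThreadPool.UnsafeQueueUserWorkItem(new CompleteWaitWorkItemSketch(target, timedOut), preferLocal: false);
}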
/// - private static readonly LowLevelLifoSemaphore s_semaphore = new LowLevelLifoSemaphore(0, MaxPossibleThreadCount, SemaphoreSpinCount); - - /// - /// Maximum number of spins a thread pool worker thread performs before waiting for work - /// - private static int SemaphoreSpinCount - { - get => AppContextConfigHelper.GetInt16Config("ThreadPool_UnfairSemaphoreSpinLimit", 70, false); - } + private static readonly LowLevelLifoSemaphore s_semaphore = + new LowLevelLifoSemaphore( + 0, + MaxPossibleThreadCount, + AppContextConfigHelper.GetInt32Config("System.Threading.ThreadPool.UnfairSemaphoreSpinLimit", 70, false), + onWait: () => + { + if (PortableThreadPoolEventSource.Log.IsEnabled()) + { + PortableThreadPoolEventSource.Log.ThreadPoolWorkerThreadWait( + (uint)ThreadPoolInstance._separated.counts.VolatileRead().NumExistingThreads); + } + }); private static void WorkerThreadStart() { - PortableThreadPoolEventSource log = PortableThreadPoolEventSource.Log; - if (log.IsEnabled()) + Thread.CurrentThread.SetThreadPoolWorkerThreadName(); + + PortableThreadPool threadPoolInstance = ThreadPoolInstance; + + if (PortableThreadPoolEventSource.Log.IsEnabled()) { - log.WorkerThreadStart(ThreadCounts.VolatileReadCounts(ref ThreadPoolInstance._separated.counts).numExistingThreads); + PortableThreadPoolEventSource.Log.ThreadPoolWorkerThreadStart( + (uint)threadPoolInstance._separated.counts.VolatileRead().NumExistingThreads); } + LowLevelLock hillClimbingThreadAdjustmentLock = threadPoolInstance._hillClimbingThreadAdjustmentLock; + while (true) { while (WaitForRequest()) { - if (TakeActiveRequest()) + bool alreadyRemovedWorkingWorker = false; + while (TakeActiveRequest(threadPoolInstance)) { - Volatile.Write(ref ThreadPoolInstance._separated.lastDequeueTime, Environment.TickCount); - if (ThreadPoolWorkQueue.Dispatch()) + Volatile.Write(ref threadPoolInstance._separated.lastDequeueTime, Environment.TickCount); + if (!ThreadPoolWorkQueue.Dispatch()) { - // If the queue runs out of work for us, we need to update the number of working workers to reflect that we are done working for now - RemoveWorkingWorker(); + // ShouldStopProcessingWorkNow() caused the thread to stop processing work, and it would have + // already removed this working worker in the counts + alreadyRemovedWorkingWorker = true; + break; } } - else + + if (!alreadyRemovedWorkingWorker) { - // If we woke up but couldn't find a request, we need to update the number of working workers to reflect that we are done working for now - RemoveWorkingWorker(); + // If we woke up but couldn't find a request, or ran out of work items to process, we need to update + // the number of working workers to reflect that we are done working for now + RemoveWorkingWorker(threadPoolInstance); } } - ThreadPoolInstance._hillClimbingThreadAdjustmentLock.Acquire(); + hillClimbingThreadAdjustmentLock.Acquire(); try { // At this point, the thread's wait timed out. We are shutting down this thread. // We are going to decrement the number of exisiting threads to no longer include this one // and then change the max number of threads in the thread pool to reflect that we don't need as many // as we had. Finally, we are going to tell hill climbing that we changed the max number of threads. 
- ThreadCounts counts = ThreadCounts.VolatileReadCounts(ref ThreadPoolInstance._separated.counts); + ThreadCounts counts = threadPoolInstance._separated.counts.VolatileRead(); while (true) { - if (counts.numExistingThreads == counts.numProcessingWork) + // Since this thread is currently registered as an existing thread, if more work comes in meanwhile, + // this thread would be expected to satisfy the new work. Ensure that NumExistingThreads is not + // decreased below NumProcessingWork, as that would be indicative of such a case. + short numExistingThreads = counts.NumExistingThreads; + if (numExistingThreads <= counts.NumProcessingWork) { // In this case, enough work came in that this thread should not time out and should go back to work. break; } ThreadCounts newCounts = counts; - newCounts.numExistingThreads--; - newCounts.numThreadsGoal = Math.Max(ThreadPoolInstance._minThreads, Math.Min(newCounts.numExistingThreads, newCounts.numThreadsGoal)); - ThreadCounts oldCounts = ThreadCounts.CompareExchangeCounts(ref ThreadPoolInstance._separated.counts, newCounts, counts); + newCounts.SubtractNumExistingThreads(1); + short newNumExistingThreads = (short)(numExistingThreads - 1); + short newNumThreadsGoal = Math.Max(threadPoolInstance._minThreads, Math.Min(newNumExistingThreads, newCounts.NumThreadsGoal)); + newCounts.NumThreadsGoal = newNumThreadsGoal; + + ThreadCounts oldCounts = threadPoolInstance._separated.counts.InterlockedCompareExchange(newCounts, counts); if (oldCounts == counts) { - HillClimbing.ThreadPoolHillClimber.ForceChange(newCounts.numThreadsGoal, HillClimbing.StateOrTransition.ThreadTimedOut); + HillClimbing.ThreadPoolHillClimber.ForceChange(newNumThreadsGoal, HillClimbing.StateOrTransition.ThreadTimedOut); - if (log.IsEnabled()) + if (PortableThreadPoolEventSource.Log.IsEnabled()) { - log.WorkerThreadStop(newCounts.numExistingThreads); + PortableThreadPoolEventSource.Log.ThreadPoolWorkerThreadStop((uint)newNumExistingThreads); } return; } + + counts = oldCounts; } } finally { - ThreadPoolInstance._hillClimbingThreadAdjustmentLock.Release(); + hillClimbingThreadAdjustmentLock.Release(); } } } @@ -94,27 +118,19 @@ namespace System.Threading /// Waits for a request to work. /// /// If this thread was woken up before it timed out. - private static bool WaitForRequest() - { - PortableThreadPoolEventSource log = PortableThreadPoolEventSource.Log; - if (log.IsEnabled()) - { - log.WorkerThreadWait(ThreadCounts.VolatileReadCounts(ref ThreadPoolInstance._separated.counts).numExistingThreads); - } - return s_semaphore.Wait(ThreadPoolThreadTimeoutMs); - } + private static bool WaitForRequest() => s_semaphore.Wait(ThreadPoolThreadTimeoutMs); /// /// Reduce the number of working workers by one, but maybe add back a worker (possibily this thread) if a thread request comes in while we are marking this thread as not working. 
        /// </summary>
-        private static void RemoveWorkingWorker()
+        private static void RemoveWorkingWorker(PortableThreadPool threadPoolInstance)
         {
-            ThreadCounts currentCounts = ThreadCounts.VolatileReadCounts(ref ThreadPoolInstance._separated.counts);
+            ThreadCounts currentCounts = threadPoolInstance._separated.counts.VolatileRead();
             while (true)
             {
                 ThreadCounts newCounts = currentCounts;
-                newCounts.numProcessingWork--;
-                ThreadCounts oldCounts = ThreadCounts.CompareExchangeCounts(ref ThreadPoolInstance._separated.counts, newCounts, currentCounts);
+                newCounts.SubtractNumProcessingWork(1);
+                ThreadCounts oldCounts = threadPoolInstance._separated.counts.InterlockedCompareExchange(newCounts, currentCounts);

                 if (oldCounts == currentCounts)
                 {
@@ -123,32 +139,52 @@
                 currentCounts = oldCounts;
             }

+            if (currentCounts.NumProcessingWork > 1)
+            {
+                // In highly bursty cases with short bursts of work, especially in the portable thread pool implementation,
+                // worker threads are being released and entering Dispatch very quickly, not finding much work in Dispatch,
+                // and soon afterwards going back to Dispatch, causing extra thrashing on data and some interlocked
+                // operations. If this is not the last thread to stop processing work, introduce a slight delay to help
+                // other threads make more efficient progress. The spin-wait is mainly for when the sleep is not effective
+                // due to there being no other threads to schedule.
+                Thread.UninterruptibleSleep0();
+                if (!Environment.IsSingleProcessor)
+                {
+                    Thread.SpinWait(1);
+                }
+            }
+
             // It's possible that we decided we had thread requests just before a request came in,
             // but reduced the worker count *after* the request came in. In this case, we might
             // miss the notification of a thread request. So we wake up a thread (maybe this one!)
             // if there is work to do.
- if (ThreadPoolInstance._numRequestedWorkers > 0) + if (threadPoolInstance._separated.numRequestedWorkers > 0) { - MaybeAddWorkingWorker(); + MaybeAddWorkingWorker(threadPoolInstance); } } - internal static void MaybeAddWorkingWorker() + internal static void MaybeAddWorkingWorker(PortableThreadPool threadPoolInstance) { - ThreadCounts counts = ThreadCounts.VolatileReadCounts(ref ThreadPoolInstance._separated.counts); - ThreadCounts newCounts; + ThreadCounts counts = threadPoolInstance._separated.counts.VolatileRead(); + short numExistingThreads, numProcessingWork, newNumExistingThreads, newNumProcessingWork; while (true) { - newCounts = counts; - newCounts.numProcessingWork = Math.Max(counts.numProcessingWork, Math.Min((short)(counts.numProcessingWork + 1), counts.numThreadsGoal)); - newCounts.numExistingThreads = Math.Max(counts.numExistingThreads, newCounts.numProcessingWork); - - if (newCounts == counts) + numProcessingWork = counts.NumProcessingWork; + if (numProcessingWork >= counts.NumThreadsGoal) { return; } - ThreadCounts oldCounts = ThreadCounts.CompareExchangeCounts(ref ThreadPoolInstance._separated.counts, newCounts, counts); + newNumProcessingWork = (short)(numProcessingWork + 1); + numExistingThreads = counts.NumExistingThreads; + newNumExistingThreads = Math.Max(numExistingThreads, newNumProcessingWork); + + ThreadCounts newCounts = counts; + newCounts.NumProcessingWork = newNumProcessingWork; + newCounts.NumExistingThreads = newNumExistingThreads; + + ThreadCounts oldCounts = threadPoolInstance._separated.counts.InterlockedCompareExchange(newCounts, counts); if (oldCounts == counts) { @@ -158,8 +194,8 @@ namespace System.Threading counts = oldCounts; } - int toCreate = newCounts.numExistingThreads - counts.numExistingThreads; - int toRelease = newCounts.numProcessingWork - counts.numProcessingWork; + int toCreate = newNumExistingThreads - numExistingThreads; + int toRelease = newNumProcessingWork - numProcessingWork; if (toRelease > 0) { @@ -171,25 +207,24 @@ namespace System.Threading if (TryCreateWorkerThread()) { toCreate--; + continue; } - else + + counts = threadPoolInstance._separated.counts.VolatileRead(); + while (true) { - counts = ThreadCounts.VolatileReadCounts(ref ThreadPoolInstance._separated.counts); - while (true) - { - newCounts = counts; - newCounts.numProcessingWork -= (short)toCreate; - newCounts.numExistingThreads -= (short)toCreate; + ThreadCounts newCounts = counts; + newCounts.SubtractNumProcessingWork((short)toCreate); + newCounts.SubtractNumExistingThreads((short)toCreate); - ThreadCounts oldCounts = ThreadCounts.CompareExchangeCounts(ref ThreadPoolInstance._separated.counts, newCounts, counts); - if (oldCounts == counts) - { - break; - } - counts = oldCounts; + ThreadCounts oldCounts = threadPoolInstance._separated.counts.InterlockedCompareExchange(newCounts, counts); + if (oldCounts == counts) + { + break; } - toCreate = 0; + counts = oldCounts; } + break; } } @@ -199,9 +234,9 @@ namespace System.Threading /// there are more worker threads in the thread pool than we currently want. /// /// Whether or not this thread should stop processing work even if there is still work in the queue. 
- internal static bool ShouldStopProcessingWorkNow() + internal static bool ShouldStopProcessingWorkNow(PortableThreadPool threadPoolInstance) { - ThreadCounts counts = ThreadCounts.VolatileReadCounts(ref ThreadPoolInstance._separated.counts); + ThreadCounts counts = threadPoolInstance._separated.counts.VolatileRead(); while (true) { // When there are more threads processing work than the thread count goal, hill climbing must have decided @@ -211,15 +246,15 @@ namespace System.Threading // code from which this implementation was ported, which turns a processing thread into a retired thread // and checks for pending requests like RemoveWorkingWorker. In this implementation there are // no retired threads, so only the count of threads processing work is considered. - if (counts.numProcessingWork <= counts.numThreadsGoal) + if (counts.NumProcessingWork <= counts.NumThreadsGoal) { return false; } ThreadCounts newCounts = counts; - newCounts.numProcessingWork--; + newCounts.SubtractNumProcessingWork(1); - ThreadCounts oldCounts = ThreadCounts.CompareExchangeCounts(ref ThreadPoolInstance._separated.counts, newCounts, counts); + ThreadCounts oldCounts = threadPoolInstance._separated.counts.InterlockedCompareExchange(newCounts, counts); if (oldCounts == counts) { @@ -229,12 +264,12 @@ namespace System.Threading } } - private static bool TakeActiveRequest() + private static bool TakeActiveRequest(PortableThreadPool threadPoolInstance) { - int count = ThreadPoolInstance._numRequestedWorkers; + int count = threadPoolInstance._separated.numRequestedWorkers; while (count > 0) { - int prevCount = Interlocked.CompareExchange(ref ThreadPoolInstance._numRequestedWorkers, count - 1, count); + int prevCount = Interlocked.CompareExchange(ref threadPoolInstance._separated.numRequestedWorkers, count - 1, count); if (prevCount == count) { return true; diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WorkerTracking.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WorkerTracking.cs new file mode 100644 index 0000000..49e0174 --- /dev/null +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.WorkerTracking.cs @@ -0,0 +1,126 @@ +// Licensed to the .NET Foundation under one or more agreements. +// The .NET Foundation licenses this file to you under the MIT license. 
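// Illustrative restatement (not the runtime's code) of the TakeActiveRequest pattern above: a pending-request
// count is claimed with a compare-exchange loop that only decrements while the count is positive. A plain
// Interlocked.Decrement would not work here, because more workers can wake up than requests were posted and
// the count must never go negative.
using System.Threading;

internal static class ClaimRequestSketch
{
    private static int s_numRequestedWorkers;

    public static bool TryTakeActiveRequest()
    {
        int count = Volatile.Read(ref s_numRequestedWorkers);
        while (count > 0)
        {
            int prevCount = Interlocked.CompareExchange(ref s_numRequestedWorkers, count - 1, count);
            if (prevCount == count)
            {
                return true; // claimed one pending request
            }

            count = prevCount; // the count changed underneath us; retry with the observed value
        }

        return false; // nothing left to claim
    }
}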
+ +using System.Diagnostics; + +namespace System.Threading +{ + internal partial class PortableThreadPool + { + private CountsOfThreadsProcessingUserCallbacks _countsOfThreadsProcessingUserCallbacks; + + public void ReportThreadStatus(bool isProcessingUserCallback) + { + CountsOfThreadsProcessingUserCallbacks counts = _countsOfThreadsProcessingUserCallbacks; + while (true) + { + CountsOfThreadsProcessingUserCallbacks newCounts = counts; + if (isProcessingUserCallback) + { + newCounts.IncrementCurrent(); + } + else + { + newCounts.DecrementCurrent(); + } + + CountsOfThreadsProcessingUserCallbacks countsBeforeUpdate = + _countsOfThreadsProcessingUserCallbacks.InterlockedCompareExchange(newCounts, counts); + if (countsBeforeUpdate == counts) + { + break; + } + + counts = countsBeforeUpdate; + } + } + + private short GetAndResetHighWatermarkCountOfThreadsProcessingUserCallbacks() + { + CountsOfThreadsProcessingUserCallbacks counts = _countsOfThreadsProcessingUserCallbacks; + while (true) + { + CountsOfThreadsProcessingUserCallbacks newCounts = counts; + newCounts.ResetHighWatermark(); + + CountsOfThreadsProcessingUserCallbacks countsBeforeUpdate = + _countsOfThreadsProcessingUserCallbacks.InterlockedCompareExchange(newCounts, counts); + if (countsBeforeUpdate == counts || countsBeforeUpdate.HighWatermark == countsBeforeUpdate.Current) + { + return countsBeforeUpdate.HighWatermark; + } + + counts = countsBeforeUpdate; + } + } + + /// + /// Tracks thread count information that is used when the EnableWorkerTracking config option is enabled. + /// + private struct CountsOfThreadsProcessingUserCallbacks + { + private const byte CurrentShift = 0; + private const byte HighWatermarkShift = 16; + + private uint _data; + + private CountsOfThreadsProcessingUserCallbacks(uint data) => _data = data; + + private short GetInt16Value(byte shift) => (short)(_data >> shift); + private void SetInt16Value(short value, byte shift) => + _data = (_data & ~((uint)ushort.MaxValue << shift)) | ((uint)(ushort)value << shift); + + /// + /// Number of threads currently processing user callbacks + /// + public short Current => GetInt16Value(CurrentShift); + + public void IncrementCurrent() + { + if (Current < HighWatermark) + { + _data += (uint)1 << CurrentShift; + } + else + { + Debug.Assert(Current == HighWatermark); + Debug.Assert(Current != short.MaxValue); + _data += ((uint)1 << CurrentShift) | ((uint)1 << HighWatermarkShift); + } + } + + public void DecrementCurrent() + { + Debug.Assert(Current > 0); + _data -= (uint)1 << CurrentShift; + } + + /// + /// The high-warkmark of number of threads processing user callbacks since the high-watermark was last reset + /// + public short HighWatermark => GetInt16Value(HighWatermarkShift); + + public void ResetHighWatermark() => SetInt16Value(Current, HighWatermarkShift); + + public CountsOfThreadsProcessingUserCallbacks InterlockedCompareExchange( + CountsOfThreadsProcessingUserCallbacks newCounts, + CountsOfThreadsProcessingUserCallbacks oldCounts) + { + return + new CountsOfThreadsProcessingUserCallbacks( + Interlocked.CompareExchange(ref _data, newCounts._data, oldCounts._data)); + } + + public static bool operator ==( + CountsOfThreadsProcessingUserCallbacks lhs, + CountsOfThreadsProcessingUserCallbacks rhs) => lhs._data == rhs._data; + public static bool operator !=( + CountsOfThreadsProcessingUserCallbacks lhs, + CountsOfThreadsProcessingUserCallbacks rhs) => lhs._data != rhs._data; + + public override bool Equals(object? 
obj) => + obj is CountsOfThreadsProcessingUserCallbacks other && _data == other._data; + public override int GetHashCode() => (int)_data; + } + } +} diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.cs index de23cda..ead851a 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPool.cs @@ -2,6 +2,7 @@ // The .NET Foundation licenses this file to you under the MIT license. using System.Diagnostics; +using System.Runtime.CompilerServices; using System.Runtime.InteropServices; namespace System.Threading @@ -11,11 +12,8 @@ namespace System.Threading /// internal sealed partial class PortableThreadPool { -#pragma warning disable IDE1006 // Naming Styles - public static readonly PortableThreadPool ThreadPoolInstance = new PortableThreadPool(); -#pragma warning restore IDE1006 // Naming Styles - private const int ThreadPoolThreadTimeoutMs = 20 * 1000; // If you change this make sure to change the timeout times in the tests. + private const int SmallStackSizeBytes = 256 * 1024; #if TARGET_64BIT private const short MaxPossibleThreadCount = short.MaxValue; @@ -27,44 +25,50 @@ namespace System.Threading private const int CpuUtilizationHigh = 95; private const int CpuUtilizationLow = 80; - private int _cpuUtilization; private static readonly short s_forcedMinWorkerThreads = AppContextConfigHelper.GetInt16Config("System.Threading.ThreadPool.MinThreads", 0, false); private static readonly short s_forcedMaxWorkerThreads = AppContextConfigHelper.GetInt16Config("System.Threading.ThreadPool.MaxThreads", 0, false); + [ThreadStatic] + private static object? t_completionCountObject; + +#pragma warning disable IDE1006 // Naming Styles + // The singleton must be initialized after the static variables above, as the constructor may be dependent on them. + // SOS's ThreadPool command depends on this name. 
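// Simplified, hypothetical sketch of the completion-counting scheme behind the [ThreadStatic] completion
// count object declared above and the ThreadInt64PersistentCounter used elsewhere in this change: each worker
// increments its own per-thread object on the hot path with no interlocked operation, and the aggregate is
// only summed when the total is requested. The real counter also preserves counts from threads that have
// exited; this sketch omits that.
using System.Collections.Generic;
using System.Threading;

internal sealed class PerThreadCounterSketch
{
    private sealed class ThreadLocalCount { public long Count; }

    [ThreadStatic]
    private static ThreadLocalCount? t_count;

    private readonly List<ThreadLocalCount> _allCounts = new List<ThreadLocalCount>();

    public object GetOrCreateThreadLocalCountObject()
    {
        ThreadLocalCount? count = t_count;
        if (count == null)
        {
            t_count = count = new ThreadLocalCount();
            lock (_allCounts)
            {
                _allCounts.Add(count);
            }
        }

        return count;
    }

    // Hot path: only the owning thread writes its slot, so no interlocked operation is needed.
    public static void Increment(object threadLocalCountObject) =>
        ((ThreadLocalCount)threadLocalCountObject).Count++;

    // Cold path: sum the per-thread slots.
    public long Count
    {
        get
        {
            lock (_allCounts)
            {
                long total = 0;
                foreach (ThreadLocalCount count in _allCounts)
                {
                    total += Volatile.Read(ref count.Count);
                }

                return total;
            }
        }
    }
}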
+ public static readonly PortableThreadPool ThreadPoolInstance = new PortableThreadPool(); +#pragma warning restore IDE1006 // Naming Styles + + private int _cpuUtilization; // SOS's ThreadPool command depends on this name private short _minThreads; private short _maxThreads; private readonly LowLevelLock _maxMinThreadLock = new LowLevelLock(); - [StructLayout(LayoutKind.Explicit, Size = CacheLineSize * 5)] + [StructLayout(LayoutKind.Explicit, Size = Internal.PaddingHelpers.CACHE_LINE_SIZE * 6)] private struct CacheLineSeparated { -#if TARGET_ARM64 - private const int CacheLineSize = 128; -#else - private const int CacheLineSize = 64; -#endif - [FieldOffset(CacheLineSize * 1)] - public ThreadCounts counts; - [FieldOffset(CacheLineSize * 2)] + [FieldOffset(Internal.PaddingHelpers.CACHE_LINE_SIZE * 1)] + public ThreadCounts counts; // SOS's ThreadPool command depends on this name + [FieldOffset(Internal.PaddingHelpers.CACHE_LINE_SIZE * 2)] public int lastDequeueTime; - [FieldOffset(CacheLineSize * 3)] + [FieldOffset(Internal.PaddingHelpers.CACHE_LINE_SIZE * 3)] public int priorCompletionCount; - [FieldOffset(CacheLineSize * 3 + sizeof(int))] + [FieldOffset(Internal.PaddingHelpers.CACHE_LINE_SIZE * 3 + sizeof(int))] public int priorCompletedWorkRequestsTime; - [FieldOffset(CacheLineSize * 3 + sizeof(int) * 2)] + [FieldOffset(Internal.PaddingHelpers.CACHE_LINE_SIZE * 3 + sizeof(int) * 2)] public int nextCompletedWorkRequestsTime; + [FieldOffset(Internal.PaddingHelpers.CACHE_LINE_SIZE * 4)] + public volatile int numRequestedWorkers; + [FieldOffset(Internal.PaddingHelpers.CACHE_LINE_SIZE * 5)] + public int gateThreadRunningState; } - private CacheLineSeparated _separated; + private CacheLineSeparated _separated; // SOS's ThreadPool command depends on this name private long _currentSampleStartTime; private readonly ThreadInt64PersistentCounter _completionCounter = new ThreadInt64PersistentCounter(); private int _threadAdjustmentIntervalMs; private readonly LowLevelLock _hillClimbingThreadAdjustmentLock = new LowLevelLock(); - private volatile int _numRequestedWorkers; - private PortableThreadPool() { _minThreads = s_forcedMinWorkerThreads > 0 ? 
s_forcedMinWorkerThreads : (short)Environment.ProcessorCount; @@ -83,51 +87,56 @@ namespace System.Threading { counts = new ThreadCounts { - numThreadsGoal = _minThreads + NumThreadsGoal = _minThreads } }; } - public bool SetMinThreads(int minThreads) + public bool SetMinThreads(int workerThreads, int ioCompletionThreads) { + if (workerThreads < 0 || ioCompletionThreads < 0) + { + return false; + } + _maxMinThreadLock.Acquire(); try { - if (minThreads < 0 || minThreads > _maxThreads) + if (workerThreads > _maxThreads || !ThreadPool.CanSetMinIOCompletionThreads(ioCompletionThreads)) { return false; } - else + + ThreadPool.SetMinIOCompletionThreads(ioCompletionThreads); + + if (s_forcedMinWorkerThreads != 0) { - short threads = (short)Math.Min(minThreads, MaxPossibleThreadCount); - if (s_forcedMinWorkerThreads == 0) - { - _minThreads = threads; + return true; + } - ThreadCounts counts = ThreadCounts.VolatileReadCounts(ref _separated.counts); - while (counts.numThreadsGoal < _minThreads) - { - ThreadCounts newCounts = counts; - newCounts.numThreadsGoal = _minThreads; + short newMinThreads = (short)Math.Max(1, Math.Min(workerThreads, MaxPossibleThreadCount)); + _minThreads = newMinThreads; - ThreadCounts oldCounts = ThreadCounts.CompareExchangeCounts(ref _separated.counts, newCounts, counts); - if (oldCounts == counts) - { - counts = newCounts; + ThreadCounts counts = _separated.counts.VolatileRead(); + while (counts.NumThreadsGoal < newMinThreads) + { + ThreadCounts newCounts = counts; + newCounts.NumThreadsGoal = newMinThreads; - if (newCounts.numThreadsGoal > oldCounts.numThreadsGoal && _numRequestedWorkers > 0) - { - WorkerThread.MaybeAddWorkingWorker(); - } - } - else - { - counts = oldCounts; - } + ThreadCounts oldCounts = _separated.counts.InterlockedCompareExchange(newCounts, counts); + if (oldCounts == counts) + { + if (_separated.numRequestedWorkers > 0) + { + WorkerThread.MaybeAddWorkingWorker(this); } + break; } - return true; + + counts = oldCounts; } + + return true; } finally { @@ -135,43 +144,49 @@ namespace System.Threading } } - public int GetMinThreads() => _minThreads; + public int GetMinThreads() => Volatile.Read(ref _minThreads); - public bool SetMaxThreads(int maxThreads) + public bool SetMaxThreads(int workerThreads, int ioCompletionThreads) { + if (workerThreads <= 0 || ioCompletionThreads <= 0) + { + return false; + } + _maxMinThreadLock.Acquire(); try { - if (maxThreads < _minThreads || maxThreads == 0) + if (workerThreads < _minThreads || !ThreadPool.CanSetMaxIOCompletionThreads(ioCompletionThreads)) { return false; } - else + + ThreadPool.SetMaxIOCompletionThreads(ioCompletionThreads); + + if (s_forcedMaxWorkerThreads != 0) { - short threads = (short)Math.Min(maxThreads, MaxPossibleThreadCount); - if (s_forcedMaxWorkerThreads == 0) - { - _maxThreads = threads; + return true; + } - ThreadCounts counts = ThreadCounts.VolatileReadCounts(ref _separated.counts); - while (counts.numThreadsGoal > _maxThreads) - { - ThreadCounts newCounts = counts; - newCounts.numThreadsGoal = _maxThreads; + short newMaxThreads = (short)Math.Min(workerThreads, MaxPossibleThreadCount); + _maxThreads = newMaxThreads; - ThreadCounts oldCounts = ThreadCounts.CompareExchangeCounts(ref _separated.counts, newCounts, counts); - if (oldCounts == counts) - { - counts = newCounts; - } - else - { - counts = oldCounts; - } - } + ThreadCounts counts = _separated.counts.VolatileRead(); + while (counts.NumThreadsGoal > newMaxThreads) + { + ThreadCounts newCounts = counts; + newCounts.NumThreadsGoal = 
newMaxThreads; + + ThreadCounts oldCounts = _separated.counts.InterlockedCompareExchange(newCounts, counts); + if (oldCounts == counts) + { + break; } - return true; + + counts = oldCounts; } + + return true; } finally { @@ -179,12 +194,12 @@ namespace System.Threading } } - public int GetMaxThreads() => _maxThreads; + public int GetMaxThreads() => Volatile.Read(ref _maxThreads); public int GetAvailableThreads() { - ThreadCounts counts = ThreadCounts.VolatileReadCounts(ref _separated.counts); - int count = _maxThreads - counts.numProcessingWork; + ThreadCounts counts = _separated.counts.VolatileRead(); + int count = _maxThreads - counts.NumProcessingWork; if (count < 0) { return 0; @@ -192,27 +207,42 @@ namespace System.Threading return count; } - public int ThreadCount => ThreadCounts.VolatileReadCounts(ref _separated.counts).numExistingThreads; + public int ThreadCount => _separated.counts.VolatileRead().NumExistingThreads; public long CompletedWorkItemCount => _completionCounter.Count; - internal bool NotifyWorkItemComplete() + public object GetOrCreateThreadLocalCompletionCountObject() => + t_completionCountObject ?? CreateThreadLocalCompletionCountObject(); + + [MethodImpl(MethodImplOptions.NoInlining)] + private object CreateThreadLocalCompletionCountObject() + { + Debug.Assert(t_completionCountObject == null); + + object threadLocalCompletionCountObject = _completionCounter.CreateThreadLocalCountObject(); + t_completionCountObject = threadLocalCompletionCountObject; + return threadLocalCompletionCountObject; + } + + private void NotifyWorkItemProgress(object threadLocalCompletionCountObject, int currentTimeMs) { - _completionCounter.Increment(); + ThreadInt64PersistentCounter.Increment(threadLocalCompletionCountObject); Volatile.Write(ref _separated.lastDequeueTime, Environment.TickCount); - if (ShouldAdjustMaxWorkersActive() && _hillClimbingThreadAdjustmentLock.TryAcquire()) + if (ShouldAdjustMaxWorkersActive(currentTimeMs)) { - try - { - AdjustMaxWorkersActive(); - } - finally - { - _hillClimbingThreadAdjustmentLock.Release(); - } + AdjustMaxWorkersActive(); } + } - return !WorkerThread.ShouldStopProcessingWorkNow(); + internal void NotifyWorkItemProgress() => + NotifyWorkItemProgress(GetOrCreateThreadLocalCompletionCountObject(), Environment.TickCount); + + internal bool NotifyWorkItemComplete(object? 
threadLocalCompletionCountObject, int currentTimeMs) + { + Debug.Assert(threadLocalCompletionCountObject != null); + + NotifyWorkItemProgress(threadLocalCompletionCountObject!, currentTimeMs); + return !WorkerThread.ShouldStopProcessingWorkNow(this); } // @@ -221,45 +251,53 @@ namespace System.Threading // private void AdjustMaxWorkersActive() { - _hillClimbingThreadAdjustmentLock.VerifyIsLocked(); - int currentTicks = Environment.TickCount; - int totalNumCompletions = (int)_completionCounter.Count; - int numCompletions = totalNumCompletions - _separated.priorCompletionCount; - long startTime = _currentSampleStartTime; - long endTime = Stopwatch.GetTimestamp(); - long freq = Stopwatch.Frequency; - - double elapsedSeconds = (double)(endTime - startTime) / freq; + LowLevelLock hillClimbingThreadAdjustmentLock = _hillClimbingThreadAdjustmentLock; + if (!hillClimbingThreadAdjustmentLock.TryAcquire()) + { + // The lock is held by someone else, they will take care of this for us + return; + } - if (elapsedSeconds * 1000 >= _threadAdjustmentIntervalMs / 2) + try { - ThreadCounts currentCounts = ThreadCounts.VolatileReadCounts(ref _separated.counts); - int newMax; - (newMax, _threadAdjustmentIntervalMs) = HillClimbing.ThreadPoolHillClimber.Update(currentCounts.numThreadsGoal, elapsedSeconds, numCompletions); + long startTime = _currentSampleStartTime; + long endTime = Stopwatch.GetTimestamp(); + long freq = Stopwatch.Frequency; + + double elapsedSeconds = (double)(endTime - startTime) / freq; - while (newMax != currentCounts.numThreadsGoal) + if (elapsedSeconds * 1000 >= _threadAdjustmentIntervalMs / 2) { - ThreadCounts newCounts = currentCounts; - newCounts.numThreadsGoal = (short)newMax; + int currentTicks = Environment.TickCount; + int totalNumCompletions = (int)_completionCounter.Count; + int numCompletions = totalNumCompletions - _separated.priorCompletionCount; + + ThreadCounts currentCounts = _separated.counts.VolatileRead(); + int newMax; + (newMax, _threadAdjustmentIntervalMs) = HillClimbing.ThreadPoolHillClimber.Update(currentCounts.NumThreadsGoal, elapsedSeconds, numCompletions); - ThreadCounts oldCounts = ThreadCounts.CompareExchangeCounts(ref _separated.counts, newCounts, currentCounts); - if (oldCounts == currentCounts) + while (newMax != currentCounts.NumThreadsGoal) { - // - // If we're increasing the max, inject a thread. If that thread finds work, it will inject - // another thread, etc., until nobody finds work or we reach the new maximum. - // - // If we're reducing the max, whichever threads notice this first will sleep and timeout themselves. - // - if (newMax > oldCounts.numThreadsGoal) + ThreadCounts newCounts = currentCounts; + newCounts.NumThreadsGoal = (short)newMax; + + ThreadCounts oldCounts = _separated.counts.InterlockedCompareExchange(newCounts, currentCounts); + if (oldCounts == currentCounts) { - WorkerThread.MaybeAddWorkingWorker(); + // + // If we're increasing the max, inject a thread. If that thread finds work, it will inject + // another thread, etc., until nobody finds work or we reach the new maximum. + // + // If we're reducing the max, whichever threads notice this first will sleep and timeout themselves. 
+ // + if (newMax > oldCounts.NumThreadsGoal) + { + WorkerThread.MaybeAddWorkingWorker(this); + } + break; } - break; - } - else - { - if (oldCounts.numThreadsGoal > currentCounts.numThreadsGoal && oldCounts.numThreadsGoal >= newMax) + + if (oldCounts.NumThreadsGoal > currentCounts.NumThreadsGoal && oldCounts.NumThreadsGoal >= newMax) { // someone (probably the gate thread) increased the thread count more than // we are about to do. Don't interfere. @@ -268,20 +306,25 @@ namespace System.Threading currentCounts = oldCounts; } + + _separated.priorCompletionCount = totalNumCompletions; + _separated.nextCompletedWorkRequestsTime = currentTicks + _threadAdjustmentIntervalMs; + Volatile.Write(ref _separated.priorCompletedWorkRequestsTime, currentTicks); + _currentSampleStartTime = endTime; } - _separated.priorCompletionCount = totalNumCompletions; - _separated.nextCompletedWorkRequestsTime = currentTicks + _threadAdjustmentIntervalMs; - Volatile.Write(ref _separated.priorCompletedWorkRequestsTime, currentTicks); - _currentSampleStartTime = endTime; + } + finally + { + hillClimbingThreadAdjustmentLock.Release(); } } - private bool ShouldAdjustMaxWorkersActive() + private bool ShouldAdjustMaxWorkersActive(int currentTimeMs) { // We need to subtract by prior time because Environment.TickCount can wrap around, making a comparison of absolute times unreliable. int priorTime = Volatile.Read(ref _separated.priorCompletedWorkRequestsTime); int requiredInterval = _separated.nextCompletedWorkRequestsTime - priorTime; - int elapsedInterval = Environment.TickCount - priorTime; + int elapsedInterval = currentTimeMs - priorTime; if (elapsedInterval >= requiredInterval) { // Avoid trying to adjust the thread count goal if there are already more threads than the thread count goal. @@ -291,17 +334,19 @@ namespace System.Threading // threads processing work to stop in response to a decreased thread count goal. The logic here is a bit // different from the original CoreCLR code from which this implementation was ported because in this // implementation there are no retired threads, so only the count of threads processing work is considered. - ThreadCounts counts = ThreadCounts.VolatileReadCounts(ref _separated.counts); - return counts.numProcessingWork <= counts.numThreadsGoal; + ThreadCounts counts = _separated.counts.VolatileRead(); + return counts.NumProcessingWork <= counts.NumThreadsGoal && !HillClimbing.IsDisabled; } return false; } internal void RequestWorker() { - Interlocked.Increment(ref _numRequestedWorkers); - WorkerThread.MaybeAddWorkingWorker(); - GateThread.EnsureRunning(); + // The order of operations here is important. MaybeAddWorkingWorker() and EnsureRunning() use speculative checks to + // do their work and the memory barrier from the interlocked operation is necessary in this case for correctness. + Interlocked.Increment(ref _separated.numRequestedWorkers); + WorkerThread.MaybeAddWorkingWorker(this); + GateThread.EnsureRunning(this); } } } diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPoolEventSource.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPoolEventSource.cs index 7e2ee39..711893f 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPoolEventSource.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/PortableThreadPoolEventSource.cs @@ -2,117 +2,291 @@ // The .NET Foundation licenses this file to you under the MIT license. 
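// Small, self-contained sketch (illustrative only) of the wraparound-safe interval check described in the
// comment above. Environment.TickCount is a signed 32-bit millisecond counter that wraps roughly every
// 49.7 days, so the code compares differences between tick counts rather than absolute values; the
// subtraction stays correct across the wrap as long as the measured interval is much smaller than the
// counter's range.
using System;
using System.Threading;

internal static class TickCountIntervalSketch
{
    private static int s_priorTimeMs = Environment.TickCount;

    public static bool HasIntervalElapsed(int currentTimeMs, int requiredIntervalMs)
    {
        int elapsedMs = currentTimeMs - Volatile.Read(ref s_priorTimeMs); // wrap-safe subtraction
        return elapsedMs >= requiredIntervalMs;
    }
}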
using System.Diagnostics.Tracing; +using System.Runtime.CompilerServices; +using Internal.Runtime.CompilerServices; namespace System.Threading { - [EventSource(Name = "Microsoft-Windows-DotNETRuntime", Guid = "{e13c0d23-ccbc-4e12-931b-d9cc2eee27e4}")] - public sealed class PortableThreadPoolEventSource : EventSource + // Currently with EventPipe there isn't a way to move events from the native side to the managed side and get the same + // experience. For now, the same provider name and guid are used as the native side and a temporary change has been made to + // EventPipe in CoreCLR to get thread pool events in performance profiles when the portable thread pool is enabled, as that + // seems to be the easiest way currently and the closest to the experience when the portable thread pool is disabled. + // TODO: Long-term options (also see https://github.com/dotnet/runtime/issues/38763): + // - Use NativeRuntimeEventSource instead, change its guid to match the provider guid from the native side, and fix the + // underlying issues such that duplicate events are not sent. This should get the same experience as sending events from + // the native side, and would allow easily moving other events from the native side to the managed side in the future if + // necessary. + // - Use a different provider name and guid (maybe "System.Threading.ThreadPool"), update PerfView and dotnet-trace to + // enable the provider by default when the Threading or other ThreadPool-related keywords are specified for the runtime + // provider, and update PerfView with a trace event parser for the new provider so that it knows about the events and may + // use them to identify thread pool threads. + [EventSource(Name = "Microsoft-Windows-DotNETRuntime", Guid = "e13c0d23-ccbc-4e12-931b-d9cc2eee27e4")] + internal sealed class PortableThreadPoolEventSource : EventSource { - private const string WorkerThreadMessage = "WorkerThreadCount=%1"; - private const string WorkerThreadAdjustmentSampleMessage = "Throughput=%1"; - private const string WorkerThreadAdjustmentAdjustmentEventMessage = "AverageThroughput=%1;%nNewWorkerThreadCount=%2;%nReason=%3"; - private const string WorkerThreadAdjustmentStatsEventMessage = "Duration=%1;%nThroughput=%2;%nThreadWave=%3;%nThroughputWave=%4;%nThroughputErrorEstimate=%5;%nAverageThroughputErrorEstimate=%6;%nThroughputRatio=%7;%nConfidence=%8;%nNewControlSetting=%9;%nNewThreadWaveMagnitude=%10"; + // This value does not seem to be used, leaving it as zero for now. It may be useful for a scenario that may involve + // multiple instances of the runtime within the same process, but then it seems unlikely that both instances' thread + // pools would be in moderate use. 
+ private const ushort DefaultClrInstanceId = 0; + + private static class Messages + { + public const string WorkerThread = "ActiveWorkerThreadCount={0};\nRetiredWorkerThreadCount={1};\nClrInstanceID={2}"; + public const string WorkerThreadAdjustmentSample = "Throughput={0};\nClrInstanceID={1}"; + public const string WorkerThreadAdjustmentAdjustment = "AverageThroughput={0};\nNewWorkerThreadCount={1};\nReason={2};\nClrInstanceID={3}"; + public const string WorkerThreadAdjustmentStats = "Duration={0};\nThroughput={1};\nThreadWave={2};\nThroughputWave={3};\nThroughputErrorEstimate={4};\nAverageThroughputErrorEstimate={5};\nThroughputRatio={6};\nConfidence={7};\nNewControlSetting={8};\nNewThreadWaveMagnitude={9};\nClrInstanceID={10}"; + public const string IOEnqueue = "NativeOverlapped={0};\nOverlapped={1};\nMultiDequeues={2};\nClrInstanceID={3}"; + public const string IO = "NativeOverlapped={0};\nOverlapped={1};\nClrInstanceID={2}"; + public const string WorkingThreadCount = "Count={0};\nClrInstanceID={1}"; + } // The task definitions for the ETW manifest - public static class Tasks + public static class Tasks // this name and visibility is important for EventSource { - public const EventTask WorkerThreadTask = (EventTask)16; - public const EventTask WorkerThreadAdjustmentTask = (EventTask)18; + public const EventTask ThreadPoolWorkerThread = (EventTask)16; + public const EventTask ThreadPoolWorkerThreadAdjustment = (EventTask)18; + public const EventTask ThreadPool = (EventTask)23; + public const EventTask ThreadPoolWorkingThreadCount = (EventTask)22; } - public static class Opcodes + public static class Opcodes // this name and visibility is important for EventSource { - public const EventOpcode WaitOpcode = (EventOpcode)90; - public const EventOpcode SampleOpcode = (EventOpcode)100; - public const EventOpcode AdjustmentOpcode = (EventOpcode)101; - public const EventOpcode StatsOpcode = (EventOpcode)102; + public const EventOpcode IOEnqueue = (EventOpcode)13; + public const EventOpcode IODequeue = (EventOpcode)14; + public const EventOpcode Wait = (EventOpcode)90; + public const EventOpcode Sample = (EventOpcode)100; + public const EventOpcode Adjustment = (EventOpcode)101; + public const EventOpcode Stats = (EventOpcode)102; } - public static class Keywords + public static class Keywords // this name and visibility is important for EventSource { public const EventKeywords ThreadingKeyword = (EventKeywords)0x10000; + public const EventKeywords ThreadTransferKeyword = (EventKeywords)0x80000000; + } + + public enum ThreadAdjustmentReasonMap : uint + { + Warmup, + Initializing, + RandomMove, + ClimbingMove, + ChangePoint, + Stabilizing, + Starvation, + ThreadTimedOut } private PortableThreadPoolEventSource() + : base( + new Guid(0xe13c0d23, 0xccbc, 0x4e12, 0x93, 0x1b, 0xd9, 0xcc, 0x2e, 0xee, 0x27, 0xe4), + "Microsoft-Windows-DotNETRuntime") + { + } + + [NonEvent] + private unsafe void WriteThreadEvent(int eventId, uint numExistingThreads) + { + uint retiredWorkerThreadCount = 0; + ushort clrInstanceId = DefaultClrInstanceId; + + EventData* data = stackalloc EventData[3]; + data[0].DataPointer = (IntPtr)(&numExistingThreads); + data[0].Size = sizeof(uint); + data[0].Reserved = 0; + data[1].DataPointer = (IntPtr)(&retiredWorkerThreadCount); + data[1].Size = sizeof(uint); + data[1].Reserved = 0; + data[2].DataPointer = (IntPtr)(&clrInstanceId); + data[2].Size = sizeof(ushort); + data[2].Reserved = 0; + WriteEventCore(eventId, 3, data); + } + + [Event(50, Level = EventLevel.Informational, Message 
= Messages.WorkerThread, Task = Tasks.ThreadPoolWorkerThread, Opcode = EventOpcode.Start, Version = 0, Keywords = Keywords.ThreadingKeyword)] + public unsafe void ThreadPoolWorkerThreadStart( + uint ActiveWorkerThreadCount, + uint RetiredWorkerThreadCount = 0, + ushort ClrInstanceID = DefaultClrInstanceId) + { + WriteThreadEvent(50, ActiveWorkerThreadCount); + } + + [Event(51, Level = EventLevel.Informational, Message = Messages.WorkerThread, Task = Tasks.ThreadPoolWorkerThread, Opcode = EventOpcode.Stop, Version = 0, Keywords = Keywords.ThreadingKeyword)] + public void ThreadPoolWorkerThreadStop( + uint ActiveWorkerThreadCount, + uint RetiredWorkerThreadCount = 0, + ushort ClrInstanceID = DefaultClrInstanceId) { + WriteThreadEvent(51, ActiveWorkerThreadCount); } - [Event(1, Level = EventLevel.Informational, Message = WorkerThreadMessage, Task = Tasks.WorkerThreadTask, Opcode = EventOpcode.Start, Version = 0, Keywords = Keywords.ThreadingKeyword)] - public void WorkerThreadStart(short numExistingThreads) + [Event(57, Level = EventLevel.Informational, Message = Messages.WorkerThread, Task = Tasks.ThreadPoolWorkerThread, Opcode = Opcodes.Wait, Version = 0, Keywords = Keywords.ThreadingKeyword)] + public void ThreadPoolWorkerThreadWait( + uint ActiveWorkerThreadCount, + uint RetiredWorkerThreadCount = 0, + ushort ClrInstanceID = DefaultClrInstanceId) { - WriteEvent(1, numExistingThreads); + WriteThreadEvent(57, ActiveWorkerThreadCount); } - [Event(2, Level = EventLevel.Informational, Message = WorkerThreadMessage, Task = Tasks.WorkerThreadTask, Opcode = EventOpcode.Stop, Version = 0, Keywords = Keywords.ThreadingKeyword)] - public void WorkerThreadStop(short numExistingThreads) + [Event(54, Level = EventLevel.Informational, Message = Messages.WorkerThreadAdjustmentSample, Task = Tasks.ThreadPoolWorkerThreadAdjustment, Opcode = Opcodes.Sample, Version = 0, Keywords = Keywords.ThreadingKeyword)] + public unsafe void ThreadPoolWorkerThreadAdjustmentSample( + double Throughput, + ushort ClrInstanceID = DefaultClrInstanceId) { - WriteEvent(2, numExistingThreads); + EventData* data = stackalloc EventData[2]; + data[0].DataPointer = (IntPtr)(&Throughput); + data[0].Size = sizeof(double); + data[0].Reserved = 0; + data[1].DataPointer = (IntPtr)(&ClrInstanceID); + data[1].Size = sizeof(ushort); + data[1].Reserved = 0; + WriteEventCore(54, 2, data); } - [Event(3, Level = EventLevel.Informational, Message = WorkerThreadMessage, Task = Tasks.WorkerThreadTask, Opcode = Opcodes.WaitOpcode, Version = 0, Keywords = Keywords.ThreadingKeyword)] - public void WorkerThreadWait(short numExistingThreads) + [Event(55, Level = EventLevel.Informational, Message = Messages.WorkerThreadAdjustmentAdjustment, Task = Tasks.ThreadPoolWorkerThreadAdjustment, Opcode = Opcodes.Adjustment, Version = 0, Keywords = Keywords.ThreadingKeyword)] + public unsafe void ThreadPoolWorkerThreadAdjustmentAdjustment( + double AverageThroughput, + uint NewWorkerThreadCount, + ThreadAdjustmentReasonMap Reason, + ushort ClrInstanceID = DefaultClrInstanceId) { - WriteEvent(3, numExistingThreads); + EventData* data = stackalloc EventData[4]; + data[0].DataPointer = (IntPtr)(&AverageThroughput); + data[0].Size = sizeof(double); + data[0].Reserved = 0; + data[1].DataPointer = (IntPtr)(&NewWorkerThreadCount); + data[1].Size = sizeof(uint); + data[1].Reserved = 0; + data[2].DataPointer = (IntPtr)(&Reason); + data[2].Size = sizeof(ThreadAdjustmentReasonMap); + data[2].Reserved = 0; + data[3].DataPointer = (IntPtr)(&ClrInstanceID); + data[3].Size = 
sizeof(ushort); + data[3].Reserved = 0; + WriteEventCore(55, 4, data); } - [Event(4, Level = EventLevel.Informational, Message = WorkerThreadAdjustmentSampleMessage, Opcode = Opcodes.SampleOpcode, Version = 0, Task = Tasks.WorkerThreadAdjustmentTask, Keywords = Keywords.ThreadingKeyword)] - public unsafe void WorkerThreadAdjustmentSample(double throughput) + [Event(56, Level = EventLevel.Verbose, Message = Messages.WorkerThreadAdjustmentStats, Task = Tasks.ThreadPoolWorkerThreadAdjustment, Opcode = Opcodes.Stats, Version = 0, Keywords = Keywords.ThreadingKeyword)] + public unsafe void ThreadPoolWorkerThreadAdjustmentStats( + double Duration, + double Throughput, + double ThreadWave, + double ThroughputWave, + double ThroughputErrorEstimate, + double AverageThroughputErrorEstimate, + double ThroughputRatio, + double Confidence, + double NewControlSetting, + ushort NewThreadWaveMagnitude, + ushort ClrInstanceID = DefaultClrInstanceId) { - if (IsEnabled()) - { - EventData* data = stackalloc EventData[1]; - data[0].DataPointer = (IntPtr)(&throughput); - data[0].Size = sizeof(double); - WriteEventCore(4, 1, data); - } + EventData* data = stackalloc EventData[11]; + data[0].DataPointer = (IntPtr)(&Duration); + data[0].Size = sizeof(double); + data[0].Reserved = 0; + data[1].DataPointer = (IntPtr)(&Throughput); + data[1].Size = sizeof(double); + data[1].Reserved = 0; + data[2].DataPointer = (IntPtr)(&ThreadWave); + data[2].Size = sizeof(double); + data[2].Reserved = 0; + data[3].DataPointer = (IntPtr)(&ThroughputWave); + data[3].Size = sizeof(double); + data[3].Reserved = 0; + data[4].DataPointer = (IntPtr)(&ThroughputErrorEstimate); + data[4].Size = sizeof(double); + data[4].Reserved = 0; + data[5].DataPointer = (IntPtr)(&AverageThroughputErrorEstimate); + data[5].Size = sizeof(double); + data[5].Reserved = 0; + data[6].DataPointer = (IntPtr)(&ThroughputRatio); + data[6].Size = sizeof(double); + data[6].Reserved = 0; + data[7].DataPointer = (IntPtr)(&Confidence); + data[7].Size = sizeof(double); + data[7].Reserved = 0; + data[8].DataPointer = (IntPtr)(&NewControlSetting); + data[8].Size = sizeof(double); + data[8].Reserved = 0; + data[9].DataPointer = (IntPtr)(&NewThreadWaveMagnitude); + data[9].Size = sizeof(ushort); + data[9].Reserved = 0; + data[10].DataPointer = (IntPtr)(&ClrInstanceID); + data[10].Size = sizeof(ushort); + data[10].Reserved = 0; + WriteEventCore(56, 11, data); } - [Event(5, Level = EventLevel.Informational, Message = WorkerThreadAdjustmentAdjustmentEventMessage, Opcode = Opcodes.AdjustmentOpcode, Version = 0, Task = Tasks.WorkerThreadAdjustmentTask, Keywords = Keywords.ThreadingKeyword)] - public unsafe void WorkerThreadAdjustmentAdjustment(double averageThroughput, int newWorkerThreadCount, int stateOrTransition) + [Event(63, Level = EventLevel.Verbose, Message = Messages.IOEnqueue, Task = Tasks.ThreadPool, Opcode = Opcodes.IOEnqueue, Version = 0, Keywords = Keywords.ThreadingKeyword | Keywords.ThreadTransferKeyword)] + private unsafe void ThreadPoolIOEnqueue( + IntPtr NativeOverlapped, + IntPtr Overlapped, + bool MultiDequeues, + ushort ClrInstanceID = DefaultClrInstanceId) { - if (IsEnabled()) - { - EventData* data = stackalloc EventData[3]; - data[0].DataPointer = (IntPtr)(&averageThroughput); - data[0].Size = sizeof(double); - data[1].DataPointer = (IntPtr)(&newWorkerThreadCount); - data[1].Size = sizeof(int); - data[2].DataPointer = (IntPtr)(&stateOrTransition); - data[2].Size = sizeof(int); - WriteEventCore(5, 3, data); - } + int multiDequeuesInt = 
Convert.ToInt32(MultiDequeues); // bool maps to "win:Boolean", a 4-byte boolean + + EventData* data = stackalloc EventData[4]; + data[0].DataPointer = (IntPtr)(&NativeOverlapped); + data[0].Size = IntPtr.Size; + data[0].Reserved = 0; + data[1].DataPointer = (IntPtr)(&Overlapped); + data[1].Size = IntPtr.Size; + data[1].Reserved = 0; + data[2].DataPointer = (IntPtr)(&multiDequeuesInt); + data[2].Size = sizeof(int); + data[2].Reserved = 0; + data[3].DataPointer = (IntPtr)(&ClrInstanceID); + data[3].Size = sizeof(ushort); + data[3].Reserved = 0; + WriteEventCore(63, 4, data); } - [Event(6, Level = EventLevel.Verbose, Message = WorkerThreadAdjustmentStatsEventMessage, Opcode = Opcodes.StatsOpcode, Version = 0, Task = Tasks.WorkerThreadAdjustmentTask, Keywords = Keywords.ThreadingKeyword)] - [CLSCompliant(false)] - public unsafe void WorkerThreadAdjustmentStats(double duration, double throughput, double threadWave, double throughputWave, double throughputErrorEstimate, - double averageThroughputNoise, double ratio, double confidence, double currentControlSetting, ushort newThreadWaveMagnitude) + // TODO: This event is fired for minor compat with CoreCLR in this case. Consider removing this method and use + // FrameworkEventSource's thread transfer send/receive events instead at callers. + [NonEvent] + [MethodImpl(MethodImplOptions.NoInlining)] + public void ThreadPoolIOEnqueue(RegisteredWaitHandle registeredWaitHandle) => + ThreadPoolIOEnqueue((IntPtr)registeredWaitHandle.GetHashCode(), IntPtr.Zero, registeredWaitHandle.Repeating); + + [Event(64, Level = EventLevel.Verbose, Message = Messages.IO, Task = Tasks.ThreadPool, Opcode = Opcodes.IODequeue, Version = 0, Keywords = Keywords.ThreadingKeyword | Keywords.ThreadTransferKeyword)] + private unsafe void ThreadPoolIODequeue( + IntPtr NativeOverlapped, + IntPtr Overlapped, + ushort ClrInstanceID = DefaultClrInstanceId) + { + EventData* data = stackalloc EventData[3]; + data[0].DataPointer = (IntPtr)(&NativeOverlapped); + data[0].Size = IntPtr.Size; + data[0].Reserved = 0; + data[1].DataPointer = (IntPtr)(&Overlapped); + data[1].Size = IntPtr.Size; + data[1].Reserved = 0; + data[2].DataPointer = (IntPtr)(&ClrInstanceID); + data[2].Size = sizeof(ushort); + data[2].Reserved = 0; + WriteEventCore(64, 3, data); + } + + // TODO: This event is fired for minor compat with CoreCLR in this case. Consider removing this method and use + // FrameworkEventSource's thread transfer send/receive events instead at callers. 
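Aside (not part of the patch): because the portable event source reuses the runtime provider name and GUID for now, the enqueue/dequeue events above can also be observed in-process. A minimal sketch using EventListener, assuming only the provider name and ThreadingKeyword value shown above; the listener class name is made up:

using System;
using System.Diagnostics.Tracing;

internal sealed class ThreadPoolIOEventListener : EventListener
{
    protected override void OnEventSourceCreated(EventSource eventSource)
    {
        // The portable thread pool event source currently reuses this provider name (see the constructor above).
        if (eventSource.Name == "Microsoft-Windows-DotNETRuntime")
        {
            // 0x10000 is ThreadingKeyword; Verbose level covers the IOEnqueue/IODequeue events.
            EnableEvents(eventSource, EventLevel.Verbose, (EventKeywords)0x10000);
        }
    }

    protected override void OnEventWritten(EventWrittenEventArgs eventData)
    {
        if (eventData.EventName == "ThreadPoolIOEnqueue" || eventData.EventName == "ThreadPoolIODequeue")
        {
            Console.WriteLine($"{eventData.EventName}: {eventData.Payload?[0]}");
        }
    }
}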
+ [NonEvent] + [MethodImpl(MethodImplOptions.NoInlining)] + public void ThreadPoolIODequeue(RegisteredWaitHandle registeredWaitHandle) => + ThreadPoolIODequeue((IntPtr)registeredWaitHandle.GetHashCode(), IntPtr.Zero); + + [Event(60, Level = EventLevel.Verbose, Message = Messages.WorkingThreadCount, Task = Tasks.ThreadPoolWorkingThreadCount, Opcode = EventOpcode.Start, Version = 0, Keywords = Keywords.ThreadingKeyword)] + public unsafe void ThreadPoolWorkingThreadCount(uint Count, ushort ClrInstanceID = DefaultClrInstanceId) { - if (IsEnabled()) - { - EventData* data = stackalloc EventData[10]; - data[0].DataPointer = (IntPtr)(&duration); - data[0].Size = sizeof(double); - data[1].DataPointer = (IntPtr)(&throughput); - data[1].Size = sizeof(double); - data[2].DataPointer = (IntPtr)(&threadWave); - data[2].Size = sizeof(double); - data[3].DataPointer = (IntPtr)(&throughputWave); - data[3].Size = sizeof(double); - data[4].DataPointer = (IntPtr)(&throughputErrorEstimate); - data[4].Size = sizeof(double); - data[5].DataPointer = (IntPtr)(&averageThroughputNoise); - data[5].Size = sizeof(double); - data[6].DataPointer = (IntPtr)(&ratio); - data[6].Size = sizeof(double); - data[7].DataPointer = (IntPtr)(&confidence); - data[7].Size = sizeof(double); - data[8].DataPointer = (IntPtr)(&currentControlSetting); - data[8].Size = sizeof(double); - data[9].DataPointer = (IntPtr)(&newThreadWaveMagnitude); - data[9].Size = sizeof(ushort); - WriteEventCore(6, 10, data); - } + EventData* data = stackalloc EventData[2]; + data[0].DataPointer = (IntPtr)(&Count); + data[0].Size = sizeof(uint); + data[0].Reserved = 0; + data[1].DataPointer = (IntPtr)(&ClrInstanceID); + data[1].Size = sizeof(ushort); + data[1].Reserved = 0; + WriteEventCore(60, 2, data); } #pragma warning disable IDE1006 // Naming Styles diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/RegisteredWaitHandle.Portable.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/RegisteredWaitHandle.Portable.cs new file mode 100644 index 0000000..6f67c19 --- /dev/null +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/RegisteredWaitHandle.Portable.cs @@ -0,0 +1,83 @@ +// Licensed to the .NET Foundation under one or more agreements. +// The .NET Foundation licenses this file to you under the MIT license. + +using System.Diagnostics; + +namespace System.Threading +{ + public sealed partial class RegisteredWaitHandle : MarshalByRefObject + { + /// + /// The this was registered on. + /// + internal PortableThreadPool.WaitThread? WaitThread { get; set; } + + private bool UnregisterPortable(WaitHandle waitObject) + { + // The registered wait handle must have been registered by this time, otherwise the instance is not handed out to + // the caller of the public variants of RegisterWaitForSingleObject + Debug.Assert(WaitThread != null); + + s_callbackLock.Acquire(); + bool needToRollBackRefCountOnException = false; + try + { + if (_unregisterCalled) + { + return false; + } + + UserUnregisterWaitHandle = waitObject?.SafeWaitHandle; + UserUnregisterWaitHandle?.DangerousAddRef(ref needToRollBackRefCountOnException); + + UserUnregisterWaitHandleValue = UserUnregisterWaitHandle?.DangerousGetHandle() ??
IntPtr.Zero; + + if (_unregistered) + { + SignalUserWaitHandle(); + return true; + } + + if (IsBlocking) + { + _callbacksComplete = RentEvent(); + } + else + { + _removed = RentEvent(); + } + } + catch (Exception) // Rollback state on exception + { + if (_removed != null) + { + ReturnEvent(_removed); + _removed = null; + } + else if (_callbacksComplete != null) + { + ReturnEvent(_callbacksComplete); + _callbacksComplete = null; + } + + UserUnregisterWaitHandleValue = IntPtr.Zero; + + if (needToRollBackRefCountOnException) + { + UserUnregisterWaitHandle?.DangerousRelease(); + } + + UserUnregisterWaitHandle = null; + throw; + } + finally + { + _unregisterCalled = true; + s_callbackLock.Release(); + } + + WaitThread!.UnregisterWait(this); + return true; + } + } +} diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/Thread.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/Thread.cs index bf85ffe..3fc1d7e 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/Thread.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/Thread.cs @@ -5,6 +5,7 @@ using System.Collections.Generic; using System.Diagnostics; using System.Diagnostics.CodeAnalysis; using System.Globalization; +using System.Runtime.CompilerServices; using System.Runtime.ConstrainedExecution; using System.Security.Principal; using System.Runtime.Versioning; @@ -171,12 +172,68 @@ namespace System.Threading } _name = value; - ThreadNameChanged(value); + if (value != null) + { + _mayNeedResetForThreadPool = true; + } } } } + internal void SetThreadPoolWorkerThreadName() + { + Debug.Assert(this == CurrentThread); + Debug.Assert(IsThreadPoolThread); + + lock (this) + { + // Bypass the exception from setting the property + _name = ThreadPool.WorkerThreadName; + ThreadNameChanged(ThreadPool.WorkerThreadName); + _name = null; + } + } + +#if !CORECLR + [MethodImpl(MethodImplOptions.AggressiveInlining)] + internal void ResetThreadPoolThread() + { + Debug.Assert(this == CurrentThread); + Debug.Assert(!IsThreadStartSupported || IsThreadPoolThread); // there are no dedicated threadpool threads on runtimes where we can't start threads + + if (_mayNeedResetForThreadPool) + { + ResetThreadPoolThreadSlow(); + } + } +#endif + + [MethodImpl(MethodImplOptions.NoInlining)] + private void ResetThreadPoolThreadSlow() + { + Debug.Assert(this == CurrentThread); + Debug.Assert(IsThreadPoolThread); + Debug.Assert(_mayNeedResetForThreadPool); + + _mayNeedResetForThreadPool = false; + + if (_name != null) + { + SetThreadPoolWorkerThreadName(); + } + + if (!IsBackground) + { + IsBackground = true; + } + + if (Priority != ThreadPriority.Normal) + { + Priority = ThreadPriority.Normal; + } + } + [Obsolete(Obsoletions.ThreadAbortMessage, DiagnosticId = Obsoletions.ThreadAbortDiagId, UrlFormat = Obsoletions.SharedUrlFormat)] public void Abort() { diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadInt64PersistentCounter.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadInt64PersistentCounter.cs index 5867f0c..61fe075 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadInt64PersistentCounter.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadInt64PersistentCounter.cs @@ -1,79 +1,67 @@ // Licensed to the .NET Foundation under one or more agreements. // The .NET Foundation licenses this file to you under the MIT license. 
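The diff below reworks ThreadInt64PersistentCounter so that a caller creates and caches a per-thread count object once and then increments through a static helper, instead of going through a ThreadLocal lookup on every completed work item. A rough usage sketch under that reading; the local names here are made up, and the real caller is ThreadPoolWorkQueueThreadLocals/NotifyWorkItemComplete further down:

// Illustrative only; ThreadInt64PersistentCounter is an internal type.
var completedCounter = new ThreadInt64PersistentCounter();

// Once per worker thread: create the per-thread node and cache it (e.g. in the thread's locals).
object threadLocalCountObject = completedCounter.CreateThreadLocalCountObject();

// Hot path, per completed work item: a cast and an increment, no ThreadLocal lookup.
ThreadInt64PersistentCounter.Increment(threadLocalCountObject);

// Reads take a low-level lock and sum the overflow count plus every registered per-thread node.
long completed = completedCounter.Count;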
+using System.Collections.Generic; using System.Diagnostics; using System.Runtime.CompilerServices; +using Internal.Runtime.CompilerServices; namespace System.Threading { internal sealed class ThreadInt64PersistentCounter { - // This type is used by Monitor for lock contention counting, so can't use an object for a lock. Also it's preferable - // (though currently not required) to disallow/ignore thread interrupt for uses of this lock here. Using Lock directly - // is a possibility but maybe less compatible with other runtimes. Lock cases are relatively rare, static instance - // should be ok. private static readonly LowLevelLock s_lock = new LowLevelLock(); - private readonly ThreadLocal _threadLocalNode = new ThreadLocal(trackAllValues: true); private long _overflowCount; + private HashSet _nodes = new HashSet(); [MethodImpl(MethodImplOptions.AggressiveInlining)] - public void Increment() + public static void Increment(object threadLocalCountObject) { - ThreadLocalNode? node = _threadLocalNode.Value; - if (node != null) - { - node.Increment(); - return; - } - - TryCreateNode(); + Debug.Assert(threadLocalCountObject is ThreadLocalNode); + Unsafe.As(threadLocalCountObject).Increment(); } - [MethodImpl(MethodImplOptions.NoInlining)] - private void TryCreateNode() + public object CreateThreadLocalCountObject() { - Debug.Assert(_threadLocalNode.Value == null); + var node = new ThreadLocalNode(this); + s_lock.Acquire(); try { - _threadLocalNode.Value = new ThreadLocalNode(this); + _nodes.Add(node); } - catch (OutOfMemoryException) + finally { + s_lock.Release(); } + + return node; } public long Count { get { - long count = 0; + s_lock.Acquire(); + long count = _overflowCount; try { - s_lock.Acquire(); - try - { - count = _overflowCount; - foreach (ThreadLocalNode node in _threadLocalNode.ValuesAsEnumerable) - { - if (node != null) - { - count += node.Count; - } - } - return count; - } - finally + foreach (ThreadLocalNode node in _nodes) { - s_lock.Release(); + count += node.Count; } } catch (OutOfMemoryException) { // Some allocation occurs above and it may be a bit awkward to get an OOM from this property getter - return count; } + finally + { + s_lock.Release(); + } + + return count; } } @@ -85,8 +73,6 @@ namespace System.Threading public ThreadLocalNode(ThreadInt64PersistentCounter counter) { Debug.Assert(counter != null); - - _count = 1; _counter = counter; } diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPool.Portable.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPool.Portable.cs index 7b49045..25ed127 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPool.Portable.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPool.Portable.cs @@ -2,8 +2,7 @@ // The .NET Foundation licenses this file to you under the MIT license. using System.Diagnostics; -using Microsoft.Win32.SafeHandles; -using System.Runtime.Versioning; +using System.Runtime.CompilerServices; namespace System.Threading { @@ -11,107 +10,8 @@ namespace System.Threading // Portable implementation of ThreadPool // - /// - /// An object representing the registration of a via . 
- /// - [UnsupportedOSPlatform("browser")] - public sealed class RegisteredWaitHandle : MarshalByRefObject + public sealed partial class RegisteredWaitHandle : MarshalByRefObject { - internal RegisteredWaitHandle(WaitHandle waitHandle, _ThreadPoolWaitOrTimerCallback callbackHelper, - int millisecondsTimeout, bool repeating) - { - Handle = waitHandle; - Callback = callbackHelper; - TimeoutDurationMs = millisecondsTimeout; - Repeating = repeating; - RestartTimeout(Environment.TickCount); - } - - ~RegisteredWaitHandle() - { - if (WaitThread != null) - { - Unregister(null); - } - } - - private static AutoResetEvent? s_cachedEvent; - - private static AutoResetEvent RentEvent() => - Interlocked.Exchange(ref s_cachedEvent, null) ?? - new AutoResetEvent(false); - - private static void ReturnEvent(AutoResetEvent resetEvent) - { - if (Interlocked.CompareExchange(ref s_cachedEvent, resetEvent, null) != null) - { - resetEvent.Dispose(); - } - } - - /// - /// The callback to execute when the wait on either times out or completes. - /// - internal _ThreadPoolWaitOrTimerCallback Callback { get; } - - /// - /// The that was registered. - /// - internal WaitHandle Handle { get; } - - /// - /// The time this handle times out at in ms. - /// - internal int TimeoutTimeMs { get; private set; } - - private int TimeoutDurationMs { get; } - - internal bool IsInfiniteTimeout => TimeoutDurationMs == -1; - - internal void RestartTimeout(int currentTimeMs) - { - TimeoutTimeMs = currentTimeMs + TimeoutDurationMs; - } - - /// - /// Whether or not the wait is a repeating wait. - /// - internal bool Repeating { get; } - - /// - /// The the user passed in via . - /// - private SafeWaitHandle? UserUnregisterWaitHandle { get; set; } - - private IntPtr UserUnregisterWaitHandleValue { get; set; } - - internal bool IsBlocking => UserUnregisterWaitHandleValue == (IntPtr)(-1); - - /// - /// The this was registered on. - /// - internal PortableThreadPool.WaitThread? WaitThread { get; set; } - - /// - /// The number of callbacks that are currently queued on the Thread Pool or executing. - /// - private int _numRequestedCallbacks; - - private readonly LowLevelLock _callbackLock = new LowLevelLock(); - - /// - /// Notes if we need to signal the user's unregister event after all callbacks complete. - /// - private bool _signalAfterCallbacksComplete; - - private bool _unregisterCalled; - - private bool _unregistered; - - private AutoResetEvent? _callbacksComplete; - - private AutoResetEvent? _removed; - /// /// Unregisters this wait handle registration from the wait threads. /// @@ -121,238 +21,50 @@ namespace System.Threading /// This method will only return true on the first call. /// Passing in a wait handle with a value of -1 will result in a blocking wait, where Unregister will not return until the full unregistration is completed. /// - public bool Unregister(WaitHandle? waitObject) - { - GC.SuppressFinalize(this); - _callbackLock.Acquire(); - bool needToRollBackRefCountOnException = false; - try - { - if (_unregisterCalled) - { - return false; - } - - UserUnregisterWaitHandle = waitObject?.SafeWaitHandle; - UserUnregisterWaitHandle?.DangerousAddRef(ref needToRollBackRefCountOnException); - - UserUnregisterWaitHandleValue = UserUnregisterWaitHandle?.DangerousGetHandle() ?? 
IntPtr.Zero; - - if (_unregistered) - { - SignalUserWaitHandle(); - return true; - } - - if (IsBlocking) - { - _callbacksComplete = RentEvent(); - } - else - { - _removed = RentEvent(); - } - _unregisterCalled = true; - } - catch (Exception) // Rollback state on exception - { - if (_removed != null) - { - ReturnEvent(_removed); - _removed = null; - } - else if (_callbacksComplete != null) - { - ReturnEvent(_callbacksComplete); - _callbacksComplete = null; - } - - UserUnregisterWaitHandleValue = IntPtr.Zero; - - if (needToRollBackRefCountOnException) - { - UserUnregisterWaitHandle?.DangerousRelease(); - } - - UserUnregisterWaitHandle = null; - throw; - } - finally - { - _callbackLock.Release(); - } - - WaitThread!.UnregisterWait(this); - return true; - } - - /// - /// Signal if it has not been signaled yet and is a valid handle. - /// - private void SignalUserWaitHandle() - { - _callbackLock.VerifyIsLocked(); - SafeWaitHandle? handle = UserUnregisterWaitHandle; - IntPtr handleValue = UserUnregisterWaitHandleValue; - try - { - if (handleValue != IntPtr.Zero && handleValue != (IntPtr)(-1)) - { - Debug.Assert(handleValue == handle!.DangerousGetHandle()); - EventWaitHandle.Set(handle); - } - } - finally - { - handle?.DangerousRelease(); - _callbacksComplete?.Set(); - _unregistered = true; - } - } - - /// - /// Perform the registered callback if the has not been signaled. - /// - /// Whether or not the wait timed out. - internal void PerformCallback(bool timedOut) - { -#if DEBUG - _callbackLock.Acquire(); - try - { - Debug.Assert(_numRequestedCallbacks != 0); - } - finally - { - _callbackLock.Release(); - } -#endif - _ThreadPoolWaitOrTimerCallback.PerformWaitOrTimerCallback(Callback, timedOut); - CompleteCallbackRequest(); - } - - /// - /// Tell this handle that there is a callback queued on the thread pool for this handle. - /// - internal void RequestCallback() - { - _callbackLock.Acquire(); - try - { - _numRequestedCallbacks++; - } - finally - { - _callbackLock.Release(); - } - } - - /// - /// Called when the wait thread removes this handle registration. This will signal the user's event if there are no callbacks pending, - /// or note that the user's event must be signaled when the callbacks complete. - /// - internal void OnRemoveWait() - { - _callbackLock.Acquire(); - try - { - _removed?.Set(); - if (_numRequestedCallbacks == 0) - { - SignalUserWaitHandle(); - } - else - { - _signalAfterCallbacksComplete = true; - } - } - finally - { - _callbackLock.Release(); - } - } - - /// - /// Reduces the number of callbacks requested. If there are no more callbacks and the user's handle is queued to be signaled, signal it. - /// - private void CompleteCallbackRequest() - { - _callbackLock.Acquire(); - try - { - --_numRequestedCallbacks; - if (_numRequestedCallbacks == 0 && _signalAfterCallbacksComplete) - { - SignalUserWaitHandle(); - } - } - finally - { - _callbackLock.Release(); - } - } - - /// - /// Wait for all queued callbacks and the full unregistration to complete. - /// - internal void WaitForCallbacks() - { - Debug.Assert(IsBlocking); - Debug.Assert(_unregisterCalled); // Should only be called when the wait is unregistered by the user. - - _callbacksComplete!.WaitOne(); - ReturnEvent(_callbacksComplete); - _callbacksComplete = null; - } - - internal void WaitForRemoval() - { - Debug.Assert(!IsBlocking); - Debug.Assert(_unregisterCalled); // Should only be called when the wait is unregistered by the user. 
+ public bool Unregister(WaitHandle waitObject) => UnregisterPortable(waitObject); + } - _removed!.WaitOne(); - ReturnEvent(_removed); - _removed = null; - } + internal sealed partial class CompleteWaitThreadPoolWorkItem : IThreadPoolWorkItem + { + void IThreadPoolWorkItem.Execute() => PortableThreadPool.CompleteWait(_registeredWaitHandle, _timedOut); } public static partial class ThreadPool { - internal const bool EnableWorkerTracking = false; + // Time-sensitive work items are those that may need to run ahead of normal work items at least periodically. For a + // runtime that does not support time-sensitive work items on the managed side, the thread pool yields the thread to the + // runtime periodically (by exiting the dispatch loop) so that the runtime may use that thread for processing + // any time-sensitive work. For a runtime that supports time-sensitive work items on the managed side, the thread pool + // does not yield the thread and instead processes time-sensitive work items queued by specific APIs periodically. + internal const bool SupportsTimeSensitiveWorkItems = true; - internal static void InitializeForThreadPoolThread() { } + internal static readonly bool EnableWorkerTracking = + AppContextConfigHelper.GetBooleanConfig("System.Threading.ThreadPool.EnableWorkerTracking", false); - public static bool SetMaxThreads(int workerThreads, int completionPortThreads) - { - if (workerThreads < 0 || completionPortThreads < 0) - { - return false; - } - return PortableThreadPool.ThreadPoolInstance.SetMaxThreads(workerThreads); - } + internal static bool CanSetMinIOCompletionThreads(int ioCompletionThreads) => true; + internal static void SetMinIOCompletionThreads(int ioCompletionThreads) { } + + internal static bool CanSetMaxIOCompletionThreads(int ioCompletionThreads) => true; + internal static void SetMaxIOCompletionThreads(int ioCompletionThreads) { } + + public static bool SetMaxThreads(int workerThreads, int completionPortThreads) => + PortableThreadPool.ThreadPoolInstance.SetMaxThreads(workerThreads, completionPortThreads); public static void GetMaxThreads(out int workerThreads, out int completionPortThreads) { // Note that worker threads and completion port threads share the same thread pool. - // The total number of threads cannot exceed MaxThreadCount. + // The total number of threads cannot exceed MaxPossibleThreadCount. workerThreads = PortableThreadPool.ThreadPoolInstance.GetMaxThreads(); completionPortThreads = 1; } - public static bool SetMinThreads(int workerThreads, int completionPortThreads) - { - if (workerThreads < 0 || completionPortThreads < 0) - { - return false; - } - return PortableThreadPool.ThreadPoolInstance.SetMinThreads(workerThreads); - } + public static bool SetMinThreads(int workerThreads, int completionPortThreads) => + PortableThreadPool.ThreadPoolInstance.SetMinThreads(workerThreads, completionPortThreads); public static void GetMinThreads(out int workerThreads, out int completionPortThreads) { - // All threads are pre-created at present workerThreads = PortableThreadPool.ThreadPoolInstance.GetMinThreads(); - completionPortThreads = 0; + completionPortThreads = 1; } public static void GetAvailableThreads(out int workerThreads, out int completionPortThreads) @@ -380,47 +92,28 @@ namespace System.Threading /// /// This method is called to request a new thread pool worker to handle pending work. 
/// - internal static void RequestWorkerThread() - { - PortableThreadPool.ThreadPoolInstance.RequestWorker(); - } + internal static void RequestWorkerThread() => PortableThreadPool.ThreadPoolInstance.RequestWorker(); - internal static bool KeepDispatching(int startTickCount) - { - return true; - } + /// + /// Called from the gate thread periodically to perform runtime-specific gate activities + /// + /// CPU utilization as a percentage since the last call + /// True if the runtime still needs to perform gate activities, false otherwise + internal static bool PerformRuntimeSpecificGateActivities(int cpuUtilization) => false; - internal static void NotifyWorkItemProgress() - { - PortableThreadPool.ThreadPoolInstance.NotifyWorkItemComplete(); - } + internal static void NotifyWorkItemProgress() => PortableThreadPool.ThreadPoolInstance.NotifyWorkItemProgress(); - internal static bool NotifyWorkItemComplete() - { - return PortableThreadPool.ThreadPoolInstance.NotifyWorkItemComplete(); - } + [MethodImpl(MethodImplOptions.AggressiveInlining)] + internal static bool NotifyWorkItemComplete(object? threadLocalCompletionCountObject, int currentTimeMs) => + PortableThreadPool.ThreadPoolInstance.NotifyWorkItemComplete(threadLocalCompletionCountObject, currentTimeMs); - private static RegisteredWaitHandle RegisterWaitForSingleObject( - WaitHandle waitObject, - WaitOrTimerCallback callBack, - object? state, - uint millisecondsTimeOutInterval, - bool executeOnlyOnce, - bool flowExecutionContext) - { - if (waitObject == null) - throw new ArgumentNullException(nameof(waitObject)); + internal static object GetOrCreateThreadLocalCompletionCountObject() => + PortableThreadPool.ThreadPoolInstance.GetOrCreateThreadLocalCompletionCountObject(); - if (callBack == null) - throw new ArgumentNullException(nameof(callBack)); + private static void RegisterWaitForSingleObjectCore(WaitHandle? 
waitObject, RegisteredWaitHandle registeredWaitHandle) => + PortableThreadPool.ThreadPoolInstance.RegisterWaitHandle(registeredWaitHandle); - RegisteredWaitHandle registeredHandle = new RegisteredWaitHandle( - waitObject, - new _ThreadPoolWaitOrTimerCallback(callBack, state, flowExecutionContext), - (int)millisecondsTimeOutInterval, - !executeOnlyOnce); - PortableThreadPool.ThreadPoolInstance.RegisterWaitHandle(registeredHandle); - return registeredHandle; - } + internal static void UnsafeQueueWaitCompletion(CompleteWaitThreadPoolWorkItem completeWaitWorkItem) => + UnsafeQueueUserWorkItemInternal(completeWaitWorkItem, preferLocal: false); } } diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPool.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPool.cs index a05c688..3bbb27f 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPool.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/ThreadPool.cs @@ -16,13 +16,13 @@ using System.Diagnostics; using System.Diagnostics.Tracing; using System.Runtime.CompilerServices; using System.Runtime.InteropServices; -using System.Threading.Tasks; using System.Runtime.Versioning; +using System.Threading.Tasks; using Internal.Runtime.CompilerServices; +using Microsoft.Win32.SafeHandles; namespace System.Threading { - [StructLayout(LayoutKind.Sequential)] // enforce layout so that padding reduces false sharing internal sealed class ThreadPoolWorkQueue { internal static class WorkStealingQueueList @@ -117,33 +117,7 @@ namespace System.Threading // We're going to increment the tail; if we'll overflow, then we need to reset our counts if (tail == int.MaxValue) { - bool lockTaken = false; - try - { - m_foreignLock.Enter(ref lockTaken); - - if (m_tailIndex == int.MaxValue) - { - // - // Rather than resetting to zero, we'll just mask off the bits we don't care about. - // This way we don't need to rearrange the items already in the queue; they'll be found - // correctly exactly where they are. One subtlety here is that we need to make sure that - // if head is currently < tail, it remains that way. This happens to just fall out from - // the bit-masking, because we only do this if tail == int.MaxValue, meaning that all - // bits are set, so all of the bits we're keeping will also be set. Thus it's impossible - // for the head to end up > than the tail, since you can't set any more bits than all of - // them. - // - m_headIndex &= m_mask; - m_tailIndex = tail = m_tailIndex & m_mask; - Debug.Assert(m_headIndex <= m_tailIndex); - } - } - finally - { - if (lockTaken) - m_foreignLock.Exit(useMemoryBarrier: true); - } + tail = LocalPush_HandleTailOverflow(); } // When there are at least 2 elements' worth of space, we can take the fast path. @@ -189,6 +163,41 @@ namespace System.Threading } } + [MethodImpl(MethodImplOptions.NoInlining)] + private int LocalPush_HandleTailOverflow() + { + bool lockTaken = false; + try + { + m_foreignLock.Enter(ref lockTaken); + + int tail = m_tailIndex; + if (tail == int.MaxValue) + { + // + // Rather than resetting to zero, we'll just mask off the bits we don't care about. + // This way we don't need to rearrange the items already in the queue; they'll be found + // correctly exactly where they are. One subtlety here is that we need to make sure that + // if head is currently < tail, it remains that way. 
This happens to just fall out from + // the bit-masking, because we only do this if tail == int.MaxValue, meaning that all + // bits are set, so all of the bits we're keeping will also be set. Thus it's impossible + // for the head to end up > than the tail, since you can't set any more bits than all of + // them. + // + m_headIndex &= m_mask; + m_tailIndex = tail = m_tailIndex & m_mask; + Debug.Assert(m_headIndex <= m_tailIndex); + } + + return tail; + } + finally + { + if (lockTaken) + m_foreignLock.Exit(useMemoryBarrier: true); + } + } + public bool LocalFindAndPop(object obj) { // Fast path: check the tail. If equal, we can skip the lock. @@ -381,16 +390,24 @@ namespace System.Threading internal bool loggingEnabled; internal readonly ConcurrentQueue workItems = new ConcurrentQueue(); // SOS's ThreadPool command depends on this name + internal readonly ConcurrentQueue? timeSensitiveWorkQueue = + ThreadPool.SupportsTimeSensitiveWorkItems ? new ConcurrentQueue() : null; - private readonly Internal.PaddingFor32 pad1; + [StructLayout(LayoutKind.Sequential)] + private struct CacheLineSeparated + { + private readonly Internal.PaddingFor32 pad1; + + public volatile int numOutstandingThreadRequests; - private volatile int numOutstandingThreadRequests; + private readonly Internal.PaddingFor32 pad2; + } - private readonly Internal.PaddingFor32 pad2; + private CacheLineSeparated _separated; public ThreadPoolWorkQueue() { - loggingEnabled = FrameworkEventSource.Log.IsEnabled(EventLevel.Verbose, FrameworkEventSource.Keywords.ThreadPool | FrameworkEventSource.Keywords.ThreadTransfer); + RefreshLoggingEnabled(); } public ThreadPoolWorkQueueThreadLocals GetOrCreateThreadLocals() => @@ -404,6 +421,27 @@ namespace System.Threading return ThreadPoolWorkQueueThreadLocals.threadLocals = new ThreadPoolWorkQueueThreadLocals(this); } + [MethodImpl(MethodImplOptions.AggressiveInlining)] + public void RefreshLoggingEnabled() + { + if (!FrameworkEventSource.Log.IsEnabled()) + { + if (loggingEnabled) + { + loggingEnabled = false; + } + return; + } + + RefreshLoggingEnabledFull(); + } + + [MethodImpl(MethodImplOptions.NoInlining)] + public void RefreshLoggingEnabledFull() + { + loggingEnabled = FrameworkEventSource.Log.IsEnabled(EventLevel.Verbose, FrameworkEventSource.Keywords.ThreadPool | FrameworkEventSource.Keywords.ThreadTransfer); + } + internal void EnsureThreadRequested() { // @@ -412,10 +450,10 @@ namespace System.Threading // CoreCLR: Note that there is a separate count in the VM which has already been incremented // by the VM by the time we reach this point. // - int count = numOutstandingThreadRequests; + int count = _separated.numOutstandingThreadRequests; while (count < Environment.ProcessorCount) { - int prev = Interlocked.CompareExchange(ref numOutstandingThreadRequests, count + 1, count); + int prev = Interlocked.CompareExchange(ref _separated.numOutstandingThreadRequests, count + 1, count); if (prev == count) { ThreadPool.RequestWorkerThread(); @@ -434,10 +472,10 @@ namespace System.Threading // CoreCLR: Note that there is a separate count in the VM which has already been decremented // by the VM by the time we reach this point. 
// - int count = numOutstandingThreadRequests; + int count = _separated.numOutstandingThreadRequests; while (count > 0) { - int prev = Interlocked.CompareExchange(ref numOutstandingThreadRequests, count - 1, count); + int prev = Interlocked.CompareExchange(ref _separated.numOutstandingThreadRequests, count - 1, count); if (prev == count) { break; @@ -446,12 +484,35 @@ namespace System.Threading } } + public void EnqueueTimeSensitiveWorkItem(IThreadPoolWorkItem timeSensitiveWorkItem) + { + Debug.Assert(ThreadPool.SupportsTimeSensitiveWorkItems); + + if (loggingEnabled && FrameworkEventSource.Log.IsEnabled()) + { + FrameworkEventSource.Log.ThreadPoolEnqueueWorkObject(timeSensitiveWorkItem); + } + + timeSensitiveWorkQueue!.Enqueue(timeSensitiveWorkItem); + EnsureThreadRequested(); + } + + [MethodImpl(MethodImplOptions.NoInlining)] + public IThreadPoolWorkItem? TryDequeueTimeSensitiveWorkItem() + { + Debug.Assert(ThreadPool.SupportsTimeSensitiveWorkItems); + + bool success = timeSensitiveWorkQueue!.TryDequeue(out IThreadPoolWorkItem? timeSensitiveWorkItem); + Debug.Assert(success == (timeSensitiveWorkItem != null)); + return timeSensitiveWorkItem; + } + public void Enqueue(object callback, bool forceGlobal) { Debug.Assert((callback is IThreadPoolWorkItem) ^ (callback is Task)); if (loggingEnabled && FrameworkEventSource.Log.IsEnabled()) - System.Diagnostics.Tracing.FrameworkEventSource.Log.ThreadPoolEnqueueWorkObject(callback); + FrameworkEventSource.Log.ThreadPoolEnqueueWorkObject(callback); ThreadPoolWorkQueueThreadLocals? tl = null; if (!forceGlobal) @@ -498,11 +559,21 @@ namespace System.Threading callback = otherQueue.TrySteal(ref missedSteal); if (callback != null) { - break; + return callback; } } c--; } + + Debug.Assert(callback == null); + +#pragma warning disable CS0162 // Unreachable code detected. SupportsTimeSensitiveWorkItems may be a constant in some runtimes. + // No work in the normal queues, check for time-sensitive work items + if (ThreadPool.SupportsTimeSensitiveWorkItems) + { + callback = TryDequeueTimeSensitiveWorkItem(); + } +#pragma warning restore CS0162 } return callback; @@ -521,7 +592,13 @@ namespace System.Threading } } - public long GlobalCount => workItems.Count; + public long GlobalCount => + (ThreadPool.SupportsTimeSensitiveWorkItems ? timeSensitiveWorkQueue!.Count : 0) + workItems.Count; + + // Time in ms for which ThreadPoolWorkQueue.Dispatch keeps executing normal work items before either returning from + // Dispatch (if SupportsTimeSensitiveWorkItems is false), or checking for and dispatching a time-sensitive work item + // before continuing with normal work items + private const uint DispatchQuantumMs = 30; /// /// Dispatches work items to this thread. @@ -535,11 +612,6 @@ namespace System.Threading ThreadPoolWorkQueue outerWorkQueue = ThreadPool.s_workQueue; // - // Save the start time - // - int startTickCount = Environment.TickCount; - - // // Update our records to indicate that an outstanding request for a thread has now been fulfilled. // From this point on, we are responsible for requesting another thread if we stop working for any // reason, and we believe there might still be work in the queue. @@ -550,7 +622,7 @@ namespace System.Threading outerWorkQueue.MarkThreadRequestSatisfied(); // Has the desire for logging changed since the last time we entered? 
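// (RefreshLoggingEnabled, added above, only does a cheap FrameworkEventSource.Log.IsEnabled() check inline and
// falls back to the full level/keyword check out of line, so the dispatch loop below can also afford to refresh
// this flag periodically when its quantum expires instead of only on entry.)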
- outerWorkQueue.loggingEnabled = FrameworkEventSource.Log.IsEnabled(EventLevel.Verbose, FrameworkEventSource.Keywords.ThreadPool | FrameworkEventSource.Keywords.ThreadTransfer); + outerWorkQueue.RefreshLoggingEnabled(); // // Assume that we're going to need another thread if this one returns to the VM. We'll set this to @@ -565,6 +637,7 @@ namespace System.Threading // Use operate on workQueue local to try block so it can be enregistered ThreadPoolWorkQueue workQueue = outerWorkQueue; ThreadPoolWorkQueueThreadLocals tl = workQueue.GetOrCreateThreadLocals(); + object? threadLocalCompletionCountObject = tl.threadLocalCompletionCountObject; Thread currentThread = tl.currentThread; // Start on clean ExecutionContext and SynchronizationContext @@ -572,31 +645,42 @@ namespace System.Threading currentThread._synchronizationContext = null; // + // Save the start time + // + int startTickCount = Environment.TickCount; + + object? workItem = null; + + // // Loop until our quantum expires or there is no work. // - while (ThreadPool.KeepDispatching(startTickCount)) + while (true) { - bool missedSteal = false; - // Use operate on workItem local to try block so it can be enregistered - object? workItem = workQueue.Dequeue(tl, ref missedSteal); - if (workItem == null) { - // - // No work. - // If we missed a steal, though, there may be more work in the queue. - // Instead of looping around and trying again, we'll just request another thread. Hopefully the thread - // that owns the contended work-stealing queue will pick up its own workitems in the meantime, - // which will be more efficient than this thread doing it anyway. - // - needAnotherThread = missedSteal; + bool missedSteal = false; + // Operate on 'workQueue' instead of 'outerWorkQueue', as 'workQueue' is local to the try block and it + // may be enregistered + workItem = workQueue.Dequeue(tl, ref missedSteal); - // Tell the VM we're returning normally, not because Hill Climbing asked us to return. - return true; + if (workItem == null) + { + // + // No work. + // If we missed a steal, though, there may be more work in the queue. + // Instead of looping around and trying again, we'll just request another thread. Hopefully the thread + // that owns the contended work-stealing queue will pick up its own workitems in the meantime, + // which will be more efficient than this thread doing it anyway. + // + needAnotherThread = missedSteal; + + // Tell the VM we're returning normally, not because Hill Climbing asked us to return. + return true; + } } if (workQueue.loggingEnabled && FrameworkEventSource.Log.IsEnabled()) - System.Diagnostics.Tracing.FrameworkEventSource.Log.ThreadPoolDequeueWorkObject(workItem); + FrameworkEventSource.Log.ThreadPoolDequeueWorkObject(workItem); // // If we found work, there may be more work. Ask for another thread so that the other work can be processed @@ -607,31 +691,11 @@ namespace System.Threading // // Execute the workitem outside of any finally blocks, so that it can be aborted if needed. // -#pragma warning disable CS0162 // Unreachable code detected. EnableWorkerTracking may be constant false in some runtimes. +#pragma warning disable CS0162 // Unreachable code detected. EnableWorkerTracking may be a constant in some runtimes. 
if (ThreadPool.EnableWorkerTracking) { - bool reportedStatus = false; - try - { - ThreadPool.ReportThreadStatus(isWorking: true); - reportedStatus = true; - if (workItem is Task task) - { - task.ExecuteFromThreadPool(currentThread); - } - else - { - Debug.Assert(workItem is IThreadPoolWorkItem); - Unsafe.As(workItem).Execute(); - } - } - finally - { - if (reportedStatus) - ThreadPool.ReportThreadStatus(isWorking: false); - } + DispatchWorkItemWithWorkerTracking(workItem, currentThread); } -#pragma warning restore CS0162 else if (workItem is Task task) { // Check for Task first as it's currently faster to type check @@ -645,25 +709,55 @@ namespace System.Threading Debug.Assert(workItem is IThreadPoolWorkItem); Unsafe.As(workItem).Execute(); } - - currentThread.ResetThreadPoolThread(); +#pragma warning restore CS0162 // Release refs workItem = null; - // Return to clean ExecutionContext and SynchronizationContext + // Return to clean ExecutionContext and SynchronizationContext. This may call user code (AsyncLocal value + // change notifications). ExecutionContext.ResetThreadPoolThread(currentThread); + // Reset thread state after all user code for the work item has completed + currentThread.ResetThreadPoolThread(); + // // Notify the VM that we executed this workitem. This is also our opportunity to ask whether Hill Climbing wants // us to return the thread to the pool or not. // - if (!ThreadPool.NotifyWorkItemComplete()) + int currentTickCount = Environment.TickCount; + if (!ThreadPool.NotifyWorkItemComplete(threadLocalCompletionCountObject, currentTickCount)) return false; - } - // If we get here, it's because our quantum expired. Tell the VM we're returning normally. - return true; + // Check if the dispatch quantum has expired + if ((uint)(currentTickCount - startTickCount) < DispatchQuantumMs) + { + continue; + } + + // The quantum expired, do any necessary periodic activities + +#pragma warning disable CS0162 // Unreachable code detected. SupportsTimeSensitiveWorkItems may be a constant in some runtimes. + if (!ThreadPool.SupportsTimeSensitiveWorkItems) + { + // The runtime-specific thread pool implementation does not support managed time-sensitive work, need to + // return to the VM to let it perform its own time-sensitive work. Tell the VM we're returning normally. + return true; + } + + // This method will continue to dispatch work items. Refresh the start tick count for the next dispatch + // quantum and do some periodic activities. + startTickCount = currentTickCount; + + // Periodically refresh whether logging is enabled + workQueue.RefreshLoggingEnabled(); + + // Consistent with CoreCLR currently, only one time-sensitive work item is run periodically between quantums + // of time spent running work items in the normal thread pool queues, until the normal queues are depleted. + // These are basically lower-priority but time-sensitive work items. 
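+ // If an item is found here, the next loop iteration executes it exactly like a normal work item; if none is
+ // found, dispatching simply continues from the normal queues. Time-sensitive items are queued through
+ // ThreadPool.UnsafeQueueTimeSensitiveWorkItem (used by TimerQueue later in this change).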
+ workItem = workQueue.TryDequeueTimeSensitiveWorkItem(); +#pragma warning restore CS0162 + } } finally { @@ -675,6 +769,34 @@ namespace System.Threading outerWorkQueue.EnsureThreadRequested(); } } + + [MethodImpl(MethodImplOptions.NoInlining)] + private static void DispatchWorkItemWithWorkerTracking(object workItem, Thread currentThread) + { + Debug.Assert(ThreadPool.EnableWorkerTracking); + Debug.Assert(currentThread == Thread.CurrentThread); + + bool reportedStatus = false; + try + { + ThreadPool.ReportThreadStatus(isWorking: true); + reportedStatus = true; + if (workItem is Task task) + { + task.ExecuteFromThreadPool(currentThread); + } + else + { + Debug.Assert(workItem is IThreadPoolWorkItem); + Unsafe.As(workItem).Execute(); + } + } + finally + { + if (reportedStatus) + ThreadPool.ReportThreadStatus(isWorking: false); + } + } } // Simple random number generator. We don't need great randomness, we just need a little and for it to be fast. @@ -711,6 +833,7 @@ namespace System.Threading public readonly ThreadPoolWorkQueue workQueue; public readonly ThreadPoolWorkQueue.WorkStealingQueue workStealingQueue; public readonly Thread currentThread; + public readonly object? threadLocalCompletionCountObject; public FastRandom random = new FastRandom(Environment.CurrentManagedThreadId); // mutable struct, do not copy or make readonly public ThreadPoolWorkQueueThreadLocals(ThreadPoolWorkQueue tpq) @@ -719,6 +842,7 @@ namespace System.Threading workStealingQueue = new ThreadPoolWorkQueue.WorkStealingQueue(); ThreadPoolWorkQueue.WorkStealingQueueList.Add(workStealingQueue); currentThread = Thread.CurrentThread; + threadLocalCompletionCountObject = ThreadPool.GetOrCreateThreadLocalCompletionCountObject(); } ~ThreadPoolWorkQueueThreadLocals() @@ -934,8 +1058,259 @@ namespace System.Threading } } + /// + /// An object representing the registration of a via . + /// + [UnsupportedOSPlatform("browser")] + public sealed partial class RegisteredWaitHandle : MarshalByRefObject + { + internal RegisteredWaitHandle(WaitHandle waitHandle, _ThreadPoolWaitOrTimerCallback callbackHelper, + int millisecondsTimeout, bool repeating) + { + Handle = waitHandle.SafeWaitHandle; + Callback = callbackHelper; + TimeoutDurationMs = millisecondsTimeout; + Repeating = repeating; + if (!IsInfiniteTimeout) + { + RestartTimeout(); + } + } + + private static AutoResetEvent? s_cachedEvent; + + private static AutoResetEvent RentEvent() + { + AutoResetEvent? resetEvent = Interlocked.Exchange(ref s_cachedEvent, (AutoResetEvent?)null); + if (resetEvent == null) + { + resetEvent = new AutoResetEvent(false); + } + return resetEvent; + } + + private static void ReturnEvent(AutoResetEvent resetEvent) + { + if (Interlocked.CompareExchange(ref s_cachedEvent, resetEvent, null) != null) + { + resetEvent.Dispose(); + } + } + + private static readonly LowLevelLock s_callbackLock = new LowLevelLock(); + + /// + /// The callback to execute when the wait on either times out or completes. + /// + internal _ThreadPoolWaitOrTimerCallback Callback { get; } + + + /// + /// The that was registered. + /// + internal SafeWaitHandle Handle { get; } + + /// + /// The time this handle times out at in ms. 
+ /// + internal int TimeoutTimeMs { get; private set; } + + internal int TimeoutDurationMs { get; } + + internal bool IsInfiniteTimeout => TimeoutDurationMs == -1; + + internal void RestartTimeout() + { + Debug.Assert(!IsInfiniteTimeout); + TimeoutTimeMs = Environment.TickCount + TimeoutDurationMs; + } + + /// + /// Whether or not the wait is a repeating wait. + /// + internal bool Repeating { get; } + + /// + /// The the user passed in via . + /// + private SafeWaitHandle? UserUnregisterWaitHandle { get; set; } + + private IntPtr UserUnregisterWaitHandleValue { get; set; } + + private static IntPtr InvalidHandleValue => new IntPtr(-1); + + internal bool IsBlocking => UserUnregisterWaitHandleValue == InvalidHandleValue; + + /// + /// The number of callbacks that are currently queued on the Thread Pool or executing. + /// + private int _numRequestedCallbacks; + + /// + /// Notes if we need to signal the user's unregister event after all callbacks complete. + /// + private bool _signalAfterCallbacksComplete; + + private bool _unregisterCalled; + +#pragma warning disable CS0414 // The field is assigned but its value is never used. Some runtimes may not support registered wait handles. + private bool _unregistered; +#pragma warning restore CS0414 + + private AutoResetEvent? _callbacksComplete; + + private AutoResetEvent? _removed; + + /// + /// Signal if it has not been signaled yet and is a valid handle. + /// + private void SignalUserWaitHandle() + { + s_callbackLock.VerifyIsLocked(); + SafeWaitHandle? handle = UserUnregisterWaitHandle; + IntPtr handleValue = UserUnregisterWaitHandleValue; + try + { + if (handleValue != IntPtr.Zero && handleValue != InvalidHandleValue) + { + Debug.Assert(handleValue == handle!.DangerousGetHandle()); + EventWaitHandle.Set(handle); + } + } + finally + { + handle?.DangerousRelease(); + _callbacksComplete?.Set(); + _unregistered = true; + } + } + + /// + /// Perform the registered callback if the has not been signaled. + /// + /// Whether or not the wait timed out. + internal void PerformCallback(bool timedOut) + { +#if DEBUG + s_callbackLock.Acquire(); + try + { + Debug.Assert(_numRequestedCallbacks != 0); + } + finally + { + s_callbackLock.Release(); + } +#endif + + _ThreadPoolWaitOrTimerCallback.PerformWaitOrTimerCallback(Callback, timedOut); + CompleteCallbackRequest(); + } + + /// + /// Tell this handle that there is a callback queued on the thread pool for this handle. + /// + internal void RequestCallback() + { + s_callbackLock.Acquire(); + try + { + _numRequestedCallbacks++; + } + finally + { + s_callbackLock.Release(); + } + } + + /// + /// Called when the wait thread removes this handle registration. This will signal the user's event if there are no callbacks pending, + /// or note that the user's event must be signaled when the callbacks complete. + /// + internal void OnRemoveWait() + { + s_callbackLock.Acquire(); + try + { + _removed?.Set(); + if (_numRequestedCallbacks == 0) + { + SignalUserWaitHandle(); + } + else + { + _signalAfterCallbacksComplete = true; + } + } + finally + { + s_callbackLock.Release(); + } + } + + /// + /// Reduces the number of callbacks requested. If there are no more callbacks and the user's handle is queued to be signaled, signal it. 
+ /// + private void CompleteCallbackRequest() + { + s_callbackLock.Acquire(); + try + { + --_numRequestedCallbacks; + if (_numRequestedCallbacks == 0 && _signalAfterCallbacksComplete) + { + SignalUserWaitHandle(); + } + } + finally + { + s_callbackLock.Release(); + } + } + + /// + /// Wait for all queued callbacks and the full unregistration to complete. + /// + internal void WaitForCallbacks() + { + Debug.Assert(IsBlocking); + Debug.Assert(_unregisterCalled); // Should only be called when the wait is unregistered by the user. + + _callbacksComplete!.WaitOne(); + ReturnEvent(_callbacksComplete); + _callbacksComplete = null; + } + + internal void WaitForRemoval() + { + Debug.Assert(!IsBlocking); + Debug.Assert(_unregisterCalled); // Should only be called when the wait is unregistered by the user. + + _removed!.WaitOne(); + ReturnEvent(_removed); + _removed = null; + } + } + + /// + /// The info for a completed wait on a specific . + /// + internal sealed partial class CompleteWaitThreadPoolWorkItem : IThreadPoolWorkItem + { + private RegisteredWaitHandle _registeredWaitHandle; + private bool _timedOut; + + public CompleteWaitThreadPoolWorkItem(RegisteredWaitHandle registeredWaitHandle, bool timedOut) + { + _registeredWaitHandle = registeredWaitHandle; + _timedOut = timedOut; + } + } + public static partial class ThreadPool { + internal const string WorkerThreadName = ".NET ThreadPool Worker"; + internal static readonly ThreadPoolWorkQueue s_workQueue = new ThreadPoolWorkQueue(); /// Shim used to invoke of the supplied . @@ -1074,6 +1449,29 @@ namespace System.Threading return RegisterWaitForSingleObject(waitObject, callBack, state, (uint)tm, executeOnlyOnce, false); } + private static RegisteredWaitHandle RegisterWaitForSingleObject( + WaitHandle? waitObject, + WaitOrTimerCallback? callBack, + object? state, + uint millisecondsTimeOutInterval, + bool executeOnlyOnce, + bool flowExecutionContext) + { + if (waitObject == null) + throw new ArgumentNullException(nameof(waitObject)); + + if (callBack == null) + throw new ArgumentNullException(nameof(callBack)); + + RegisteredWaitHandle registeredHandle = new RegisteredWaitHandle( + waitObject, + new _ThreadPoolWaitOrTimerCallback(callBack, state, flowExecutionContext), + (int)millisecondsTimeOutInterval, + !executeOnlyOnce); + RegisterWaitForSingleObjectCore(waitObject, registeredHandle); + return registeredHandle; + } + public static bool QueueUserWorkItem(WaitCallback callBack) => QueueUserWorkItem(callBack, null); @@ -1182,6 +1580,22 @@ namespace System.Threading s_workQueue.Enqueue(callBack, forceGlobal: !preferLocal); } + internal static void UnsafeQueueTimeSensitiveWorkItem(IThreadPoolWorkItem timeSensitiveWorkItem) + { +#pragma warning disable CS0162 // Unreachable code detected. SupportsTimeSensitiveWorkItems may be constant true in some runtimes. + if (SupportsTimeSensitiveWorkItems) + { + UnsafeQueueTimeSensitiveWorkItemInternal(timeSensitiveWorkItem); + return; + } + + UnsafeQueueUserWorkItemInternal(timeSensitiveWorkItem, preferLocal: false); +#pragma warning restore CS0162 + } + + internal static void UnsafeQueueTimeSensitiveWorkItemInternal(IThreadPoolWorkItem timeSensitiveWorkItem) => + s_workQueue.EnqueueTimeSensitiveWorkItem(timeSensitiveWorkItem); + // This method tries to take the target callback out of the current thread's queue. internal static bool TryPopCustomWorkItem(object workItem) { @@ -1192,6 +1606,17 @@ namespace System.Threading // Get all workitems. Called by TaskScheduler in its debugger hooks. 
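// (With the portable time-sensitive queue in place, the enumeration below yields that queue first and then the
// global queue, so time-sensitive items also show up in debugger views.)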
internal static IEnumerable GetQueuedWorkItems() { +#pragma warning disable CS0162 // Unreachable code detected. SupportsTimeSensitiveWorkItems may be a constant in some runtimes. + if (ThreadPool.SupportsTimeSensitiveWorkItems) + { + // Enumerate time-sensitive work item queue + foreach (object workItem in s_workQueue.timeSensitiveWorkQueue!) + { + yield return workItem; + } + } +#pragma warning restore CS0162 + // Enumerate global queue foreach (object workItem in s_workQueue.workItems) { @@ -1231,7 +1656,25 @@ namespace System.Threading } } - internal static IEnumerable GetGloballyQueuedWorkItems() => s_workQueue.workItems; + internal static IEnumerable GetGloballyQueuedWorkItems() + { +#pragma warning disable CS0162 // Unreachable code detected. SupportsTimeSensitiveWorkItems may be a constant in some runtimes. + if (ThreadPool.SupportsTimeSensitiveWorkItems) + { + // Enumerate time-sensitive work item queue + foreach (object workItem in s_workQueue.timeSensitiveWorkQueue!) + { + yield return workItem; + } + } +#pragma warning restore CS0162 + + // Enumerate global queue + foreach (object workItem in s_workQueue.workItems) + { + yield return workItem; + } + } private static object[] ToObjectArray(IEnumerable workitems) { diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/TimerQueue.Portable.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/TimerQueue.Portable.cs index 72afc1f..39fd738 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/TimerQueue.Portable.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/TimerQueue.Portable.cs @@ -121,7 +121,7 @@ namespace System.Threading { foreach (TimerQueue timerToFire in timersToFire) { - ThreadPool.UnsafeQueueUserWorkItemInternal(timerToFire, preferLocal: false); + ThreadPool.UnsafeQueueTimeSensitiveWorkItem(timerToFire); } timersToFire.Clear(); } diff --git a/src/libraries/System.Private.CoreLib/src/System/Threading/WaitHandle.cs b/src/libraries/System.Private.CoreLib/src/System/Threading/WaitHandle.cs index 9fcbf20..1459ee6 100644 --- a/src/libraries/System.Private.CoreLib/src/System/Threading/WaitHandle.cs +++ b/src/libraries/System.Private.CoreLib/src/System/Threading/WaitHandle.cs @@ -314,6 +314,45 @@ namespace System.Threading } } + private static int WaitAnyMultiple(ReadOnlySpan safeWaitHandles, int millisecondsTimeout) + { + // - Callers are expected to manage the lifetimes of the safe wait handles such that they would not expire during + // this wait + // - If the safe wait handle that satisfies the wait is an abandoned mutex, the wait result would reflect that and + // handling of that is left up to the caller + + Debug.Assert(safeWaitHandles.Length != 0); + Debug.Assert(safeWaitHandles.Length <= MaxWaitHandles); + Debug.Assert(millisecondsTimeout >= -1); + + SynchronizationContext? 
context = SynchronizationContext.Current; + bool useWaitContext = context != null && context.IsWaitNotificationRequired(); + + int waitResult; + if (useWaitContext) + { + IntPtr[] unsafeWaitHandles = new IntPtr[safeWaitHandles.Length]; + for (int i = 0; i < safeWaitHandles.Length; ++i) + { + Debug.Assert(safeWaitHandles[i] != null); + unsafeWaitHandles[i] = safeWaitHandles[i].DangerousGetHandle(); + } + waitResult = context!.Wait(unsafeWaitHandles, false, millisecondsTimeout); + } + else + { + Span unsafeWaitHandles = stackalloc IntPtr[safeWaitHandles.Length]; + for (int i = 0; i < safeWaitHandles.Length; ++i) + { + Debug.Assert(safeWaitHandles[i] != null); + unsafeWaitHandles[i] = safeWaitHandles[i].DangerousGetHandle(); + } + waitResult = WaitMultipleIgnoringSyncContext(unsafeWaitHandles, false, millisecondsTimeout); + } + + return waitResult; + } + private static bool SignalAndWait(WaitHandle toSignal, WaitHandle toWaitOn, int millisecondsTimeout) { if (toSignal == null) @@ -388,6 +427,8 @@ namespace System.Threading public static int WaitAny(WaitHandle[] waitHandles, int millisecondsTimeout) => WaitMultiple(waitHandles, false, millisecondsTimeout); + internal static int WaitAny(ReadOnlySpan safeWaitHandles, int millisecondsTimeout) => + WaitAnyMultiple(safeWaitHandles, millisecondsTimeout); public static int WaitAny(WaitHandle[] waitHandles, TimeSpan timeout) => WaitMultiple(waitHandles, false, ToTimeoutMilliseconds(timeout)); public static int WaitAny(WaitHandle[] waitHandles) => diff --git a/src/libraries/System.Threading.ThreadPool/tests/RegisteredWaitTests.cs b/src/libraries/System.Threading.ThreadPool/tests/RegisteredWaitTests.cs new file mode 100644 index 0000000..f985c1a --- /dev/null +++ b/src/libraries/System.Threading.ThreadPool/tests/RegisteredWaitTests.cs @@ -0,0 +1,517 @@ +// Licensed to the .NET Foundation under one or more agreements. +// The .NET Foundation licenses this file to you under the MIT license. 
+ +using System.Threading.Tests; +using Microsoft.DotNet.RemoteExecutor; +using Xunit; + +namespace System.Threading.ThreadPools.Tests +{ + public partial class RegisteredWaitTests + { + private const int UnexpectedTimeoutMilliseconds = ThreadTestHelpers.UnexpectedTimeoutMilliseconds; + private const int ExpectedTimeoutMilliseconds = ThreadTestHelpers.ExpectedTimeoutMilliseconds; + + private sealed class InvalidWaitHandle : WaitHandle + { + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void QueueRegisterPositiveAndFlowTest() + { + var asyncLocal = new AsyncLocal(); + asyncLocal.Value = 1; + + var obj = new object(); + var registerWaitEvent = new AutoResetEvent(false); + var threadDone = new AutoResetEvent(false); + RegisteredWaitHandle registeredWaitHandle = null; + Exception backgroundEx = null; + int backgroundAsyncLocalValue = 0; + + Action commonBackgroundTest = + (isRegisteredWaitCallback, test) => + { + try + { + if (isRegisteredWaitCallback) + { + RegisteredWaitHandle toUnregister = registeredWaitHandle; + registeredWaitHandle = null; + Assert.True(toUnregister.Unregister(threadDone)); + } + test(); + backgroundAsyncLocalValue = asyncLocal.Value; + } + catch (Exception ex) + { + backgroundEx = ex; + } + finally + { + if (!isRegisteredWaitCallback) + { + threadDone.Set(); + } + } + }; + Action waitForBackgroundWork = + isWaitForRegisteredWaitCallback => + { + if (isWaitForRegisteredWaitCallback) + { + registerWaitEvent.Set(); + } + threadDone.CheckedWait(); + if (backgroundEx != null) + { + throw new AggregateException(backgroundEx); + } + }; + + ThreadPool.QueueUserWorkItem( + state => + { + commonBackgroundTest(false, () => + { + Assert.Same(obj, state); + }); + }, + obj); + waitForBackgroundWork(false); + Assert.Equal(1, backgroundAsyncLocalValue); + + ThreadPool.UnsafeQueueUserWorkItem( + state => + { + commonBackgroundTest(false, () => + { + Assert.Same(obj, state); + }); + }, + obj); + waitForBackgroundWork(false); + Assert.Equal(0, backgroundAsyncLocalValue); + + registeredWaitHandle = + ThreadPool.RegisterWaitForSingleObject( + registerWaitEvent, + (state, timedOut) => + { + commonBackgroundTest(true, () => + { + Assert.Same(obj, state); + Assert.False(timedOut); + }); + }, + obj, + UnexpectedTimeoutMilliseconds, + false); + waitForBackgroundWork(true); + Assert.Equal(1, backgroundAsyncLocalValue); + + registeredWaitHandle = + ThreadPool.UnsafeRegisterWaitForSingleObject( + registerWaitEvent, + (state, timedOut) => + { + commonBackgroundTest(true, () => + { + Assert.Same(obj, state); + Assert.False(timedOut); + }); + }, + obj, + UnexpectedTimeoutMilliseconds, + false); + waitForBackgroundWork(true); + Assert.Equal(0, backgroundAsyncLocalValue); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void QueueRegisterNegativeTest() + { + Assert.Throws(() => ThreadPool.QueueUserWorkItem(null)); + Assert.Throws(() => ThreadPool.UnsafeQueueUserWorkItem(null, null)); + + WaitHandle waitHandle = new ManualResetEvent(true); + WaitOrTimerCallback callback = (state, timedOut) => { }; + Assert.Throws(() => ThreadPool.RegisterWaitForSingleObject(null, callback, null, 0, true)); + Assert.Throws(() => ThreadPool.RegisterWaitForSingleObject(waitHandle, null, null, 0, true)); + AssertExtensions.Throws("millisecondsTimeOutInterval", () => + ThreadPool.RegisterWaitForSingleObject(waitHandle, callback, null, -2, true)); + 
AssertExtensions.Throws("millisecondsTimeOutInterval", () => + ThreadPool.RegisterWaitForSingleObject(waitHandle, callback, null, (long)-2, true)); + if (!PlatformDetection.IsNetFramework) // .NET Framework silently overflows the timeout + { + AssertExtensions.Throws("millisecondsTimeOutInterval", () => + ThreadPool.RegisterWaitForSingleObject(waitHandle, callback, null, (long)int.MaxValue + 1, true)); + } + AssertExtensions.Throws("timeout", () => + ThreadPool.RegisterWaitForSingleObject(waitHandle, callback, null, TimeSpan.FromMilliseconds(-2), true)); + AssertExtensions.Throws("timeout", () => + ThreadPool.RegisterWaitForSingleObject( + waitHandle, + callback, + null, + TimeSpan.FromMilliseconds((double)int.MaxValue + 1), + true)); + + Assert.Throws(() => ThreadPool.UnsafeRegisterWaitForSingleObject(null, callback, null, 0, true)); + Assert.Throws(() => ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, null, null, 0, true)); + AssertExtensions.Throws("millisecondsTimeOutInterval", () => + ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, callback, null, -2, true)); + AssertExtensions.Throws("millisecondsTimeOutInterval", () => + ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, callback, null, (long)-2, true)); + if (!PlatformDetection.IsNetFramework) // .NET Framework silently overflows the timeout + { + AssertExtensions.Throws("millisecondsTimeOutInterval", () => + ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, callback, null, (long)int.MaxValue + 1, true)); + } + AssertExtensions.Throws("timeout", () => + ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, callback, null, TimeSpan.FromMilliseconds(-2), true)); + AssertExtensions.Throws("timeout", () => + ThreadPool.UnsafeRegisterWaitForSingleObject( + waitHandle, + callback, + null, + TimeSpan.FromMilliseconds((double)int.MaxValue + 1), + true)); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void SignalingRegisteredWaitHandleCallsCallback() + { + var waitEvent = new AutoResetEvent(false); + var waitCallbackInvoked = new AutoResetEvent(false); + bool timedOut = false; + ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, timedOut2) => + { + timedOut = timedOut2; + waitCallbackInvoked.Set(); + }, null, UnexpectedTimeoutMilliseconds, true); + + waitEvent.Set(); + waitCallbackInvoked.CheckedWait(); + Assert.False(timedOut); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void TimingOutRegisteredWaitHandleCallsCallback() + { + var waitEvent = new AutoResetEvent(false); + var waitCallbackInvoked = new AutoResetEvent(false); + bool timedOut = false; + ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, timedOut2) => + { + timedOut = timedOut2; + waitCallbackInvoked.Set(); + }, null, ExpectedTimeoutMilliseconds, true); + + waitCallbackInvoked.CheckedWait(); + Assert.True(timedOut); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void UnregisteringWaitWithInvalidWaitHandleBeforeSignalingDoesNotCallCallback() + { + var waitEvent = new AutoResetEvent(false); + var waitCallbackInvoked = new AutoResetEvent(false); + var registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, __) => + { + waitCallbackInvoked.Set(); + }, null, UnexpectedTimeoutMilliseconds, true); + + Assert.True(registeredWaitHandle.Unregister(new InvalidWaitHandle())); // blocking unregister + waitEvent.Set(); + 
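+            // The wait was unregistered (blocking) before the event was signaled, so the callback must not run.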
Assert.False(waitCallbackInvoked.WaitOne(ExpectedTimeoutMilliseconds)); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void UnregisteringWaitWithEventBeforeSignalingDoesNotCallCallback() + { + var waitEvent = new AutoResetEvent(false); + var waitUnregistered = new AutoResetEvent(false); + var waitCallbackInvoked = new AutoResetEvent(false); + var registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, __) => + { + waitCallbackInvoked.Set(); + }, null, UnexpectedTimeoutMilliseconds, true); + + Assert.True(registeredWaitHandle.Unregister(waitUnregistered)); + waitUnregistered.CheckedWait(); + waitEvent.Set(); + Assert.False(waitCallbackInvoked.WaitOne(ExpectedTimeoutMilliseconds)); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void NonrepeatingWaitFiresOnlyOnce() + { + var waitEvent = new AutoResetEvent(false); + var waitCallbackInvoked = new AutoResetEvent(false); + bool anyTimedOut = false; + var registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, timedOut) => + { + anyTimedOut |= timedOut; + waitCallbackInvoked.Set(); + }, null, UnexpectedTimeoutMilliseconds, true); + + waitEvent.Set(); + waitCallbackInvoked.CheckedWait(); + waitEvent.Set(); + Assert.False(waitCallbackInvoked.WaitOne(ExpectedTimeoutMilliseconds)); + Assert.False(anyTimedOut); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void RepeatingWaitFiresUntilUnregistered() + { + var waitEvent = new AutoResetEvent(false); + var waitCallbackInvoked = new AutoResetEvent(false); + bool anyTimedOut = false; + var registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, timedOut) => + { + anyTimedOut |= timedOut; + waitCallbackInvoked.Set(); + }, null, UnexpectedTimeoutMilliseconds, false); + + for (int i = 0; i < 4; ++i) + { + waitEvent.Set(); + waitCallbackInvoked.CheckedWait(); + } + + Assert.True(registeredWaitHandle.Unregister(new InvalidWaitHandle())); // blocking unregister + waitEvent.Set(); + Assert.False(waitCallbackInvoked.WaitOne(ExpectedTimeoutMilliseconds)); + Assert.False(anyTimedOut); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void UnregisterEventSignaledWhenUnregistered() + { + var waitEvent = new AutoResetEvent(false); + var waitCallbackInvoked = new AutoResetEvent(false); + var waitUnregistered = new AutoResetEvent(false); + bool timedOut = false; + WaitOrTimerCallback waitCallback = (_, timedOut2) => + { + timedOut = timedOut2; + waitCallbackInvoked.Set(); + }; + + // executeOnlyOnce = true, no timeout and no callback invocation + var registeredWaitHandle = + ThreadPool.RegisterWaitForSingleObject(waitEvent, waitCallback, null, Timeout.Infinite, executeOnlyOnce: true); + Assert.False(waitCallbackInvoked.WaitOne(ExpectedTimeoutMilliseconds)); + Assert.True(registeredWaitHandle.Unregister(waitUnregistered)); + waitUnregistered.CheckedWait(); + Assert.False(timedOut); + + // executeOnlyOnce = true, no timeout with callback invocation + registeredWaitHandle = + ThreadPool.RegisterWaitForSingleObject(waitEvent, waitCallback, null, Timeout.Infinite, executeOnlyOnce: true); + waitEvent.Set(); + waitCallbackInvoked.CheckedWait(); + Assert.True(registeredWaitHandle.Unregister(waitUnregistered)); + waitUnregistered.CheckedWait(); + Assert.False(timedOut); + + // executeOnlyOnce = 
true, with timeout + registeredWaitHandle = + ThreadPool.RegisterWaitForSingleObject( + waitEvent, waitCallback, null, ExpectedTimeoutMilliseconds, executeOnlyOnce: true); + waitCallbackInvoked.CheckedWait(); + Assert.False(waitCallbackInvoked.WaitOne(ExpectedTimeoutMilliseconds)); + Assert.True(registeredWaitHandle.Unregister(waitUnregistered)); + waitUnregistered.CheckedWait(); + Assert.True(timedOut); + timedOut = false; + + // executeOnlyOnce = false + registeredWaitHandle = + ThreadPool.RegisterWaitForSingleObject( + waitEvent, waitCallback, null, UnexpectedTimeoutMilliseconds, executeOnlyOnce: false); + Assert.False(waitCallbackInvoked.WaitOne(ExpectedTimeoutMilliseconds)); + Assert.True(registeredWaitHandle.Unregister(waitUnregistered)); + waitUnregistered.CheckedWait(); + Assert.False(timedOut); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void CanRegisterMoreThan64Waits() + { + RegisteredWaitHandle[] registeredWaitHandles = new RegisteredWaitHandle[65]; + WaitOrTimerCallback waitCallback = (_, __) => { }; + for (int i = 0; i < registeredWaitHandles.Length; ++i) + { + registeredWaitHandles[i] = + ThreadPool.RegisterWaitForSingleObject( + new AutoResetEvent(false), waitCallback, null, UnexpectedTimeoutMilliseconds, true); + } + for (int i = 0; i < registeredWaitHandles.Length; ++i) + { + Assert.True(registeredWaitHandles[i].Unregister(null)); + } + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void StateIsPassedThroughToCallback() + { + object state = new object(); + var waitCallbackInvoked = new AutoResetEvent(false); + object statePassedToCallback = null; + ThreadPool.RegisterWaitForSingleObject(new AutoResetEvent(true), (callbackState, _) => + { + statePassedToCallback = callbackState; + waitCallbackInvoked.Set(); + }, state, 0, true); + + waitCallbackInvoked.CheckedWait(); + Assert.Same(state, statePassedToCallback); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void UnregisterWaitHandleIsNotSignaledWhenCallbackIsRunning() + { + var waitEvent = new AutoResetEvent(false); + var waitCallbackProgressMade = new AutoResetEvent(false); + var completeWaitCallback = new AutoResetEvent(false); + var waitUnregistered = new AutoResetEvent(false); + RegisteredWaitHandle registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, __) => + { + waitCallbackProgressMade.Set(); + completeWaitCallback.WaitOne(UnexpectedTimeoutMilliseconds); + waitCallbackProgressMade.Set(); + }, null, UnexpectedTimeoutMilliseconds, false); + + waitEvent.Set(); + waitCallbackProgressMade.CheckedWait(); // one callback running + waitEvent.Set(); + waitCallbackProgressMade.CheckedWait(); // two callbacks running + Assert.True(registeredWaitHandle.Unregister(waitUnregistered)); + Assert.False(waitUnregistered.WaitOne(ExpectedTimeoutMilliseconds)); + completeWaitCallback.Set(); // complete one callback + waitCallbackProgressMade.CheckedWait(); + Assert.False(waitUnregistered.WaitOne(ExpectedTimeoutMilliseconds)); + completeWaitCallback.Set(); // complete other callback + waitCallbackProgressMade.CheckedWait(); + waitUnregistered.CheckedWait(); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void BlockingUnregisterBlocksWhileCallbackIsRunning() + { + var waitEvent = new AutoResetEvent(false); + var waitCallbackProgressMade = 
new AutoResetEvent(false); + var completeWaitCallback = new AutoResetEvent(false); + RegisteredWaitHandle registeredWaitHandle = ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, __) => + { + waitCallbackProgressMade.Set(); + completeWaitCallback.WaitOne(UnexpectedTimeoutMilliseconds); + waitCallbackProgressMade.Set(); + }, null, UnexpectedTimeoutMilliseconds, false); + + waitEvent.Set(); + waitCallbackProgressMade.CheckedWait(); // one callback running + waitEvent.Set(); + waitCallbackProgressMade.CheckedWait(); // two callbacks running + + Thread t = ThreadTestHelpers.CreateGuardedThread(out Action waitForThread, () => + Assert.True(registeredWaitHandle.Unregister(new InvalidWaitHandle()))); + t.IsBackground = true; + t.Start(); + + Assert.False(t.Join(ExpectedTimeoutMilliseconds)); + completeWaitCallback.Set(); // complete one callback + waitCallbackProgressMade.CheckedWait(); + Assert.False(t.Join(ExpectedTimeoutMilliseconds)); + completeWaitCallback.Set(); // complete other callback + waitCallbackProgressMade.CheckedWait(); + waitForThread(); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void CallingUnregisterOnAutomaticallyUnregisteredHandleReturnsTrue() + { + var waitCallbackInvoked = new AutoResetEvent(false); + RegisteredWaitHandle registeredWaitHandle = + ThreadPool.RegisterWaitForSingleObject( + new AutoResetEvent(true), + (_, __) => waitCallbackInvoked.Set(), + null, + UnexpectedTimeoutMilliseconds, + true); + waitCallbackInvoked.CheckedWait(); + Thread.Sleep(ExpectedTimeoutMilliseconds); // wait for callback to exit + Assert.True(registeredWaitHandle.Unregister(null)); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void EventSetAfterUnregisterNotObservedOnWaitThread() + { + var waitEvent = new AutoResetEvent(false); + RegisteredWaitHandle registeredWaitHandle = + ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, __) => { }, null, UnexpectedTimeoutMilliseconds, true); + Assert.True(registeredWaitHandle.Unregister(null)); + waitEvent.Set(); + Thread.Sleep(ExpectedTimeoutMilliseconds); // give wait thread a chance to observe the signal + waitEvent.CheckedWait(); // signal should not have been observed by wait thread + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void CanDisposeEventAfterNonblockingUnregister() + { + using (var waitEvent = new AutoResetEvent(false)) + { + RegisteredWaitHandle registeredWaitHandle = + ThreadPool.RegisterWaitForSingleObject(waitEvent, (_, __) => { }, null, UnexpectedTimeoutMilliseconds, true); + Assert.True(registeredWaitHandle.Unregister(null)); + } + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void MultipleRegisteredWaitsUnregisterHandleShiftTest() + { + var handlePendingRemoval = new AutoResetEvent(false); + var completeWaitCallback = new AutoResetEvent(false); + WaitOrTimerCallback waitCallback = (_, __) => + { + handlePendingRemoval.Set(); + completeWaitCallback.CheckedWait(); + }; + + var waitEvent = new AutoResetEvent(false); + RegisteredWaitHandle registeredWaitHandle = + ThreadPool.RegisterWaitForSingleObject(waitEvent, waitCallback, null, UnexpectedTimeoutMilliseconds, true); + + var waitEvent2 = new AutoResetEvent(false); + RegisteredWaitHandle registeredWaitHandle2 = + ThreadPool.RegisterWaitForSingleObject(waitEvent2, waitCallback, null, 
UnexpectedTimeoutMilliseconds, true); + + var waitEvent3 = new AutoResetEvent(false); + RegisteredWaitHandle registeredWaitHandle3 = + ThreadPool.RegisterWaitForSingleObject(waitEvent3, waitCallback, null, UnexpectedTimeoutMilliseconds, true); + + void SetAndUnregister(AutoResetEvent waitEvent, RegisteredWaitHandle registeredWaitHandle) + { + waitEvent.Set(); + handlePendingRemoval.CheckedWait(); + Thread.Sleep(ExpectedTimeoutMilliseconds); // wait for removal + Assert.True(registeredWaitHandle.Unregister(null)); + completeWaitCallback.Set(); + waitEvent.Dispose(); + } + + SetAndUnregister(waitEvent, registeredWaitHandle); + SetAndUnregister(waitEvent2, registeredWaitHandle2); + + var waitEvent4 = new AutoResetEvent(false); + RegisteredWaitHandle registeredWaitHandle4 = + ThreadPool.RegisterWaitForSingleObject(waitEvent4, waitCallback, null, UnexpectedTimeoutMilliseconds, true); + + SetAndUnregister(waitEvent3, registeredWaitHandle3); + SetAndUnregister(waitEvent4, registeredWaitHandle4); + } + } +} diff --git a/src/libraries/System.Threading.ThreadPool/tests/System.Threading.ThreadPool.Tests.csproj b/src/libraries/System.Threading.ThreadPool/tests/System.Threading.ThreadPool.Tests.csproj index f4741b4..2a956d5 100644 --- a/src/libraries/System.Threading.ThreadPool/tests/System.Threading.ThreadPool.Tests.csproj +++ b/src/libraries/System.Threading.ThreadPool/tests/System.Threading.ThreadPool.Tests.csproj @@ -6,6 +6,7 @@ + OneBool() => @@ -31,32 +34,34 @@ namespace System.Threading.ThreadPools.Tests from b2 in new[] { true, false } select new object[] { b1, b2 }; - // Tests concurrent calls to ThreadPool.SetMinThreads - [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] - public static void ConcurrentInitializeTest() + // Tests concurrent calls to ThreadPool.SetMinThreads. Invoked from the static constructor. 
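+        // Runs inside RemoteExecutor so that adjusting the minimum thread counts does not affect other tests in this process.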
+ private static void ConcurrentInitializeTest() { - int processorCount = Environment.ProcessorCount; - var countdownEvent = new CountdownEvent(processorCount); - Action threadMain = - () => - { - countdownEvent.Signal(); - countdownEvent.Wait(ThreadTestHelpers.UnexpectedTimeoutMilliseconds); - Assert.True(ThreadPool.SetMinThreads(processorCount, processorCount)); - }; - - var waitForThreadArray = new Action[processorCount]; - for (int i = 0; i < processorCount; ++i) + RemoteExecutor.Invoke(() => { - var t = ThreadTestHelpers.CreateGuardedThread(out waitForThreadArray[i], threadMain); - t.IsBackground = true; - t.Start(); - } + int processorCount = Environment.ProcessorCount; + var countdownEvent = new CountdownEvent(processorCount); + Action threadMain = + () => + { + countdownEvent.Signal(); + countdownEvent.Wait(ThreadTestHelpers.UnexpectedTimeoutMilliseconds); + Assert.True(ThreadPool.SetMinThreads(processorCount, processorCount)); + }; - foreach (Action waitForThread in waitForThreadArray) - { - waitForThread(); - } + var waitForThreadArray = new Action[processorCount]; + for (int i = 0; i < processorCount; ++i) + { + var t = ThreadTestHelpers.CreateGuardedThread(out waitForThreadArray[i], threadMain); + t.IsBackground = true; + t.Start(); + } + + foreach (Action waitForThread in waitForThreadArray) + { + waitForThread(); + } + }).Dispose(); } [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] @@ -87,90 +92,95 @@ namespace System.Threading.ThreadPools.Tests Assert.True(c <= maxc); } - [Fact] + [ConditionalFact(nameof(IsThreadingAndRemoteExecutorSupported))] [ActiveIssue("https://github.com/mono/mono/issues/15164", TestRuntimes.Mono)] public static void SetMinMaxThreadsTest() { - int minw, minc, maxw, maxc; - ThreadPool.GetMinThreads(out minw, out minc); - ThreadPool.GetMaxThreads(out maxw, out maxc); - - try - { - int mint = Environment.ProcessorCount * 2; - int maxt = mint + 1; - ThreadPool.SetMinThreads(mint, mint); - ThreadPool.SetMaxThreads(maxt, maxt); - - Assert.False(ThreadPool.SetMinThreads(maxt + 1, mint)); - Assert.False(ThreadPool.SetMinThreads(mint, maxt + 1)); - Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount, mint)); - Assert.False(ThreadPool.SetMinThreads(mint, MaxPossibleThreadCount)); - Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount + 1, mint)); - Assert.False(ThreadPool.SetMinThreads(mint, MaxPossibleThreadCount + 1)); - Assert.False(ThreadPool.SetMinThreads(-1, mint)); - Assert.False(ThreadPool.SetMinThreads(mint, -1)); - - Assert.False(ThreadPool.SetMaxThreads(mint - 1, maxt)); - Assert.False(ThreadPool.SetMaxThreads(maxt, mint - 1)); - - VerifyMinThreads(mint, mint); - VerifyMaxThreads(maxt, maxt); - - Assert.True(ThreadPool.SetMaxThreads(MaxPossibleThreadCount, MaxPossibleThreadCount)); - VerifyMaxThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); - Assert.True(ThreadPool.SetMaxThreads(MaxPossibleThreadCount + 1, MaxPossibleThreadCount + 1)); - VerifyMaxThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); - Assert.Equal(PlatformDetection.IsNetFramework, ThreadPool.SetMaxThreads(-1, -1)); - VerifyMaxThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); - - Assert.True(ThreadPool.SetMinThreads(MaxPossibleThreadCount, MaxPossibleThreadCount)); - VerifyMinThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); - - Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount + 1, MaxPossibleThreadCount)); - Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount, 
MaxPossibleThreadCount + 1)); - Assert.False(ThreadPool.SetMinThreads(-1, MaxPossibleThreadCount)); - Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount, -1)); - VerifyMinThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); - - Assert.True(ThreadPool.SetMinThreads(0, 0)); - Assert.True(ThreadPool.SetMaxThreads(1, 1)); - VerifyMaxThreads(1, 1); - Assert.True(ThreadPool.SetMinThreads(1, 1)); - VerifyMinThreads(1, 1); - } - finally + RemoteExecutor.Invoke(() => { - Assert.True(ThreadPool.SetMaxThreads(maxw, maxc)); - VerifyMaxThreads(maxw, maxc); - Assert.True(ThreadPool.SetMinThreads(minw, minc)); - VerifyMinThreads(minw, minc); - } + int minw, minc, maxw, maxc; + ThreadPool.GetMinThreads(out minw, out minc); + ThreadPool.GetMaxThreads(out maxw, out maxc); + + try + { + int mint = Environment.ProcessorCount * 2; + int maxt = mint + 1; + ThreadPool.SetMinThreads(mint, mint); + ThreadPool.SetMaxThreads(maxt, maxt); + + Assert.False(ThreadPool.SetMinThreads(maxt + 1, mint)); + Assert.False(ThreadPool.SetMinThreads(mint, maxt + 1)); + Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount, mint)); + Assert.False(ThreadPool.SetMinThreads(mint, MaxPossibleThreadCount)); + Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount + 1, mint)); + Assert.False(ThreadPool.SetMinThreads(mint, MaxPossibleThreadCount + 1)); + Assert.False(ThreadPool.SetMinThreads(-1, mint)); + Assert.False(ThreadPool.SetMinThreads(mint, -1)); + + Assert.False(ThreadPool.SetMaxThreads(mint - 1, maxt)); + Assert.False(ThreadPool.SetMaxThreads(maxt, mint - 1)); + + VerifyMinThreads(mint, mint); + VerifyMaxThreads(maxt, maxt); + + Assert.True(ThreadPool.SetMaxThreads(MaxPossibleThreadCount, MaxPossibleThreadCount)); + VerifyMaxThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); + Assert.True(ThreadPool.SetMaxThreads(MaxPossibleThreadCount + 1, MaxPossibleThreadCount + 1)); + VerifyMaxThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); + Assert.Equal(PlatformDetection.IsNetFramework, ThreadPool.SetMaxThreads(-1, -1)); + VerifyMaxThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); + + Assert.True(ThreadPool.SetMinThreads(MaxPossibleThreadCount, MaxPossibleThreadCount)); + VerifyMinThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); + + Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount + 1, MaxPossibleThreadCount)); + Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount, MaxPossibleThreadCount + 1)); + Assert.False(ThreadPool.SetMinThreads(-1, MaxPossibleThreadCount)); + Assert.False(ThreadPool.SetMinThreads(MaxPossibleThreadCount, -1)); + VerifyMinThreads(MaxPossibleThreadCount, MaxPossibleThreadCount); + + Assert.True(ThreadPool.SetMinThreads(0, 0)); + Assert.True(ThreadPool.SetMaxThreads(1, 1)); + VerifyMaxThreads(1, 1); + Assert.True(ThreadPool.SetMinThreads(1, 1)); + VerifyMinThreads(1, 1); + } + finally + { + Assert.True(ThreadPool.SetMaxThreads(maxw, maxc)); + VerifyMaxThreads(maxw, maxc); + Assert.True(ThreadPool.SetMinThreads(minw, minc)); + VerifyMinThreads(minw, minc); + } + }).Dispose(); } - [Fact] - [ActiveIssue("https://github.com/dotnet/runtime/issues/32020", TestRuntimes.Mono)] + [ConditionalFact(nameof(IsThreadingAndRemoteExecutorSupported))] public static void SetMinMaxThreadsTest_ChangedInDotNetCore() { - int minw, minc, maxw, maxc; - ThreadPool.GetMinThreads(out minw, out minc); - ThreadPool.GetMaxThreads(out maxw, out maxc); - - try - { - Assert.True(ThreadPool.SetMinThreads(0, 0)); - VerifyMinThreads(1, 1); - 
Assert.False(ThreadPool.SetMaxThreads(0, 1)); - Assert.False(ThreadPool.SetMaxThreads(1, 0)); - VerifyMaxThreads(maxw, maxc); - } - finally + RemoteExecutor.Invoke(() => { - Assert.True(ThreadPool.SetMaxThreads(maxw, maxc)); - VerifyMaxThreads(maxw, maxc); - Assert.True(ThreadPool.SetMinThreads(minw, minc)); - VerifyMinThreads(minw, minc); - } + int minw, minc, maxw, maxc; + ThreadPool.GetMinThreads(out minw, out minc); + ThreadPool.GetMaxThreads(out maxw, out maxc); + + try + { + Assert.True(ThreadPool.SetMinThreads(0, 0)); + VerifyMinThreads(1, 1); + Assert.False(ThreadPool.SetMaxThreads(0, 1)); + Assert.False(ThreadPool.SetMaxThreads(1, 0)); + VerifyMaxThreads(maxw, maxc); + } + finally + { + Assert.True(ThreadPool.SetMaxThreads(maxw, maxc)); + VerifyMaxThreads(maxw, maxc); + Assert.True(ThreadPool.SetMinThreads(minw, minc)); + VerifyMinThreads(minw, minc); + } + }).Dispose(); } private static void VerifyMinThreads(int expectedMinw, int expectedMinc) @@ -189,204 +199,44 @@ namespace System.Threading.ThreadPools.Tests Assert.Equal(expectedMaxc, maxc); } - [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + [ConditionalFact(nameof(IsThreadingAndRemoteExecutorSupported))] public static void SetMinThreadsTo0Test() { - int minw, minc, maxw, maxc; - ThreadPool.GetMinThreads(out minw, out minc); - ThreadPool.GetMaxThreads(out maxw, out maxc); - - try + RemoteExecutor.Invoke(() => { - Assert.True(ThreadPool.SetMinThreads(0, minc)); - Assert.True(ThreadPool.SetMaxThreads(1, maxc)); + int minw, minc, maxw, maxc; + ThreadPool.GetMinThreads(out minw, out minc); + ThreadPool.GetMaxThreads(out maxw, out maxc); - int count = 0; - var done = new ManualResetEvent(false); - WaitCallback callback = null; - callback = state => + try { - ++count; - if (count > 100) - { - done.Set(); - } - else - { - ThreadPool.QueueUserWorkItem(callback); - } - }; - ThreadPool.QueueUserWorkItem(callback); - done.WaitOne(ThreadTestHelpers.UnexpectedTimeoutMilliseconds); - } - finally - { - Assert.True(ThreadPool.SetMaxThreads(maxw, maxc)); - Assert.True(ThreadPool.SetMinThreads(minw, minc)); - } - } + Assert.True(ThreadPool.SetMinThreads(0, minc)); + Assert.True(ThreadPool.SetMaxThreads(1, maxc)); - [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] - public static void QueueRegisterPositiveAndFlowTest() - { - var asyncLocal = new AsyncLocal(); - asyncLocal.Value = 1; - - var obj = new object(); - var registerWaitEvent = new AutoResetEvent(false); - var threadDone = new AutoResetEvent(false); - RegisteredWaitHandle registeredWaitHandle = null; - Exception backgroundEx = null; - int backgroundAsyncLocalValue = 0; - - Action commonBackgroundTest = - (isRegisteredWaitCallback, test) => - { - try + int count = 0; + var done = new ManualResetEvent(false); + WaitCallback callback = null; + callback = state => { - if (isRegisteredWaitCallback) + ++count; + if (count > 100) { - RegisteredWaitHandle toUnregister = registeredWaitHandle; - registeredWaitHandle = null; - Assert.True(toUnregister.Unregister(threadDone)); + done.Set(); } - test(); - backgroundAsyncLocalValue = asyncLocal.Value; - } - catch (Exception ex) - { - backgroundEx = ex; - } - finally - { - if (!isRegisteredWaitCallback) + else { - threadDone.Set(); + ThreadPool.QueueUserWorkItem(callback); } - } - }; - Action waitForBackgroundWork = - isWaitForRegisteredWaitCallback => - { - if (isWaitForRegisteredWaitCallback) - { - registerWaitEvent.Set(); - } - threadDone.CheckedWait(); - if 
(backgroundEx != null) - { - throw new AggregateException(backgroundEx); - } - }; - - ThreadPool.QueueUserWorkItem( - state => - { - commonBackgroundTest(false, () => - { - Assert.Same(obj, state); - }); - }, - obj); - waitForBackgroundWork(false); - Assert.Equal(1, backgroundAsyncLocalValue); - - ThreadPool.UnsafeQueueUserWorkItem( - state => + }; + ThreadPool.QueueUserWorkItem(callback); + done.WaitOne(ThreadTestHelpers.UnexpectedTimeoutMilliseconds); + } + finally { - commonBackgroundTest(false, () => - { - Assert.Same(obj, state); - }); - }, - obj); - waitForBackgroundWork(false); - Assert.Equal(0, backgroundAsyncLocalValue); - - registeredWaitHandle = - ThreadPool.RegisterWaitForSingleObject( - registerWaitEvent, - (state, timedOut) => - { - commonBackgroundTest(true, () => - { - Assert.Same(obj, state); - Assert.False(timedOut); - }); - }, - obj, - UnexpectedTimeoutMilliseconds, - false); - waitForBackgroundWork(true); - Assert.Equal(1, backgroundAsyncLocalValue); - - registeredWaitHandle = - ThreadPool.UnsafeRegisterWaitForSingleObject( - registerWaitEvent, - (state, timedOut) => - { - commonBackgroundTest(true, () => - { - Assert.Same(obj, state); - Assert.False(timedOut); - }); - }, - obj, - UnexpectedTimeoutMilliseconds, - false); - waitForBackgroundWork(true); - Assert.Equal(0, backgroundAsyncLocalValue); - } - - [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] - public static void QueueRegisterNegativeTest() - { - Assert.Throws(() => ThreadPool.QueueUserWorkItem(null)); - Assert.Throws(() => ThreadPool.UnsafeQueueUserWorkItem(null, null)); - - WaitHandle waitHandle = new ManualResetEvent(true); - WaitOrTimerCallback callback = (state, timedOut) => { }; - Assert.Throws(() => ThreadPool.RegisterWaitForSingleObject(null, callback, null, 0, true)); - Assert.Throws(() => ThreadPool.RegisterWaitForSingleObject(waitHandle, null, null, 0, true)); - AssertExtensions.Throws("millisecondsTimeOutInterval", () => - ThreadPool.RegisterWaitForSingleObject(waitHandle, callback, null, -2, true)); - AssertExtensions.Throws("millisecondsTimeOutInterval", () => - ThreadPool.RegisterWaitForSingleObject(waitHandle, callback, null, (long)-2, true)); - if (!PlatformDetection.IsNetFramework) // .NET Framework silently overflows the timeout - { - AssertExtensions.Throws("millisecondsTimeOutInterval", () => - ThreadPool.RegisterWaitForSingleObject(waitHandle, callback, null, (long)int.MaxValue + 1, true)); - } - AssertExtensions.Throws("timeout", () => - ThreadPool.RegisterWaitForSingleObject(waitHandle, callback, null, TimeSpan.FromMilliseconds(-2), true)); - AssertExtensions.Throws("timeout", () => - ThreadPool.RegisterWaitForSingleObject( - waitHandle, - callback, - null, - TimeSpan.FromMilliseconds((double)int.MaxValue + 1), - true)); - - Assert.Throws(() => ThreadPool.UnsafeRegisterWaitForSingleObject(null, callback, null, 0, true)); - Assert.Throws(() => ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, null, null, 0, true)); - AssertExtensions.Throws("millisecondsTimeOutInterval", () => - ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, callback, null, -2, true)); - AssertExtensions.Throws("millisecondsTimeOutInterval", () => - ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, callback, null, (long)-2, true)); - if (!PlatformDetection.IsNetFramework) // .NET Framework silently overflows the timeout - { - AssertExtensions.Throws("millisecondsTimeOutInterval", () => - ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, callback, 
null, (long)int.MaxValue + 1, true)); - } - AssertExtensions.Throws("timeout", () => - ThreadPool.UnsafeRegisterWaitForSingleObject(waitHandle, callback, null, TimeSpan.FromMilliseconds(-2), true)); - AssertExtensions.Throws("timeout", () => - ThreadPool.UnsafeRegisterWaitForSingleObject( - waitHandle, - callback, - null, - TimeSpan.FromMilliseconds((double)int.MaxValue + 1), - true)); + Assert.True(ThreadPool.SetMaxThreads(maxw, maxc)); + Assert.True(ThreadPool.SetMinThreads(minw, minc)); + } + }).Dispose(); } [ConditionalTheory(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] @@ -558,7 +408,9 @@ namespace System.Threading.ThreadPools.Tests public void Execute() { } } - [ConditionalFact(nameof(HasAtLeastThreeProcessorsAndRemoteExecutorSupported))] + public static bool IsMetricsTestSupported => Environment.ProcessorCount >= 3 && IsThreadingAndRemoteExecutorSupported; + + [ConditionalFact(nameof(IsMetricsTestSupported))] public void MetricsTest() { RemoteExecutor.Invoke(() => @@ -696,6 +548,292 @@ namespace System.Threading.ThreadPools.Tests }).Dispose(); } - public static bool HasAtLeastThreeProcessorsAndRemoteExecutorSupported => Environment.ProcessorCount >= 3 && RemoteExecutor.IsSupported; + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void RunProcessorCountItemsInParallel() + { + int processorCount = Environment.ProcessorCount; + AutoResetEvent allWorkItemsStarted = new AutoResetEvent(false); + int startedWorkItemCount = 0; + WaitCallback workItem = _ => + { + if (Interlocked.Increment(ref startedWorkItemCount) == processorCount) + { + allWorkItemsStarted.Set(); + } + }; + + // Run the test twice to make sure we can reuse the threads. + for (int j = 0; j < 2; ++j) + { + for (int i = 0; i < processorCount; ++i) + { + ThreadPool.QueueUserWorkItem(workItem); + } + + allWorkItemsStarted.CheckedWait(); + Interlocked.Exchange(ref startedWorkItemCount, 0); + } + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void ThreadPoolCanPickUpOneOrMoreWorkItemsWhenThreadIsAvailable() + { + int processorCount = Environment.ProcessorCount; + AutoResetEvent allBlockingWorkItemsStarted = new AutoResetEvent(false); + AutoResetEvent allTestWorkItemsStarted = new AutoResetEvent(false); + ManualResetEvent unblockWorkItems = new ManualResetEvent(false); + int startedBlockingWorkItemCount = 0; + int startedTestWorkItemCount = 0; + WaitCallback blockingWorkItem = _ => + { + if (Interlocked.Increment(ref startedBlockingWorkItemCount) == processorCount - 1) + { + allBlockingWorkItemsStarted.Set(); + } + unblockWorkItems.CheckedWait(); + }; + WaitCallback testWorkItem = _ => + { + if (Interlocked.Increment(ref startedTestWorkItemCount) == processorCount) + { + allTestWorkItemsStarted.Set(); + } + }; + + for (int i = 0; i < processorCount - 1; ++i) + { + ThreadPool.QueueUserWorkItem(blockingWorkItem); + } + + allBlockingWorkItemsStarted.CheckedWait(); + for (int i = 0; i < processorCount; ++i) + { + ThreadPool.QueueUserWorkItem(testWorkItem); + } + + allTestWorkItemsStarted.CheckedWait(); + unblockWorkItems.Set(); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void RunMoreThanMaxWorkItemsMakesOneWorkItemWaitForStarvationDetection() + { + int processorCount = Environment.ProcessorCount; + AutoResetEvent allBlockingWorkItemsStarted = new AutoResetEvent(false); + AutoResetEvent 
testWorkItemStarted = new AutoResetEvent(false); + ManualResetEvent unblockWorkItems = new ManualResetEvent(false); + int startedBlockingWorkItemCount = 0; + WaitCallback blockingWorkItem = _ => + { + if (Interlocked.Increment(ref startedBlockingWorkItemCount) == processorCount) + { + allBlockingWorkItemsStarted.Set(); + } + unblockWorkItems.CheckedWait(); + }; + + for (int i = 0; i < processorCount; ++i) + { + ThreadPool.QueueUserWorkItem(blockingWorkItem); + } + + allBlockingWorkItemsStarted.CheckedWait(); + ThreadPool.QueueUserWorkItem(_ => testWorkItemStarted.Set()); + testWorkItemStarted.CheckedWait(); + unblockWorkItems.Set(); + } + + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void WorkQueueDepletionTest() + { + ManualResetEvent done = new ManualResetEvent(false); + int numLocalScheduled = 1; + int numGlobalScheduled = 1; + int numOfEachTypeToSchedule = Environment.ProcessorCount * 64; + int numTotalCompleted = 0; + Action workItem = null; + workItem = preferLocal => + { + int numScheduled = + preferLocal ? Interlocked.Increment(ref numLocalScheduled) : Interlocked.Increment(ref numGlobalScheduled); + if (numScheduled <= numOfEachTypeToSchedule) + { + ThreadPool.QueueUserWorkItem(workItem, preferLocal, preferLocal); + if (Interlocked.Increment(ref numScheduled) <= numOfEachTypeToSchedule) + { + ThreadPool.QueueUserWorkItem(workItem, preferLocal, preferLocal); + } + } + + if (Interlocked.Increment(ref numTotalCompleted) == numOfEachTypeToSchedule * 2) + { + done.Set(); + } + }; + + ThreadPool.QueueUserWorkItem(workItem, true, preferLocal: true); + ThreadPool.QueueUserWorkItem(workItem, false, preferLocal: false); + done.CheckedWait(); + } + + [ConditionalFact(nameof(IsThreadingAndRemoteExecutorSupported))] + public static void WorkerThreadStateResetTest() + { + RemoteExecutor.Invoke(() => + { + ThreadPool.GetMinThreads(out int minw, out int minc); + ThreadPool.GetMaxThreads(out int maxw, out int maxc); + try + { + // Use maximum one worker thread to have all work items below run on the same thread + Assert.True(ThreadPool.SetMinThreads(1, minc)); + Assert.True(ThreadPool.SetMaxThreads(1, maxc)); + + var done = new AutoResetEvent(false); + string failureMessage = string.Empty; + WaitCallback setNameWorkItem = null; + WaitCallback verifyNameWorkItem = null; + WaitCallback setIsBackgroundWorkItem = null; + WaitCallback verifyIsBackgroundWorkItem = null; + WaitCallback setPriorityWorkItem = null; + WaitCallback verifyPriorityWorkItem = null; + + setNameWorkItem = _ => + { + Thread.CurrentThread.Name = nameof(WorkerThreadStateResetTest); + ThreadPool.QueueUserWorkItem(verifyNameWorkItem); + }; + + verifyNameWorkItem = _ => + { + Thread currentThread = Thread.CurrentThread; + if (currentThread.Name != null) + { + failureMessage += $"Name was not reset: {currentThread.Name}{Environment.NewLine}"; + } + ThreadPool.QueueUserWorkItem(setIsBackgroundWorkItem); + }; + + setIsBackgroundWorkItem = _ => + { + Thread.CurrentThread.IsBackground = false; + ThreadPool.QueueUserWorkItem(verifyIsBackgroundWorkItem); + }; + + verifyIsBackgroundWorkItem = _ => + { + Thread currentThread = Thread.CurrentThread; + if (!currentThread.IsBackground) + { + failureMessage += $"IsBackground was not reset: {currentThread.IsBackground}{Environment.NewLine}"; + currentThread.IsBackground = true; + } + ThreadPool.QueueUserWorkItem(setPriorityWorkItem); + }; + + setPriorityWorkItem = _ => + { + Thread.CurrentThread.Priority = ThreadPriority.AboveNormal; + 
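+                        // With the worker thread count capped at 1, the next work item runs on this same thread and should
+                        // observe that the thread pool reset the priority back to Normal.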
ThreadPool.QueueUserWorkItem(verifyPriorityWorkItem); + }; + + verifyPriorityWorkItem = _ => + { + Thread currentThread = Thread.CurrentThread; + if (currentThread.Priority != ThreadPriority.Normal) + { + failureMessage += $"Priority was not reset: {currentThread.Priority}{Environment.NewLine}"; + currentThread.Priority = ThreadPriority.Normal; + } + done.Set(); + }; + + ThreadPool.QueueUserWorkItem(setNameWorkItem); + done.CheckedWait(); + Assert.Equal(string.Empty, failureMessage); + } + finally + { + Assert.True(ThreadPool.SetMaxThreads(maxw, maxc)); + Assert.True(ThreadPool.SetMinThreads(minw, minc)); + } + }).Dispose(); + } + + [ConditionalFact(nameof(IsThreadingAndRemoteExecutorSupported))] + public static void SettingMinWorkerThreadsWillCreateThreadsUpToMinimum() + { + RemoteExecutor.Invoke(() => + { + ThreadPool.GetMinThreads(out int minWorkerThreads, out int minIocpThreads); + ThreadPool.GetMaxThreads(out int maxWorkerThreads, out int maxIocpThreads); + + AutoResetEvent allWorkItemsExceptOneStarted = new AutoResetEvent(false); + AutoResetEvent allWorkItemsStarted = new AutoResetEvent(false); + ManualResetEvent unblockWorkItems = new ManualResetEvent(false); + int startedWorkItemCount = 0; + WaitCallback workItem = _ => + { + int newStartedWorkItemCount = Interlocked.Increment(ref startedWorkItemCount); + if (newStartedWorkItemCount == minWorkerThreads) + { + allWorkItemsExceptOneStarted.Set(); + } + else if (newStartedWorkItemCount == minWorkerThreads + 1) + { + allWorkItemsStarted.Set(); + } + + unblockWorkItems.CheckedWait(); + }; + + ThreadPool.SetMaxThreads(minWorkerThreads, maxIocpThreads); + for (int i = 0; i < minWorkerThreads + 1; ++i) + { + ThreadPool.QueueUserWorkItem(workItem); + } + + allWorkItemsExceptOneStarted.CheckedWait(); + Assert.False(allWorkItemsStarted.WaitOne(ThreadTestHelpers.ExpectedTimeoutMilliseconds)); + + Assert.True(ThreadPool.SetMaxThreads(minWorkerThreads + 1, maxIocpThreads)); + Assert.True(ThreadPool.SetMinThreads(minWorkerThreads + 1, minIocpThreads)); + allWorkItemsStarted.CheckedWait(); + + unblockWorkItems.Set(); + }).Dispose(); + } + + // See https://github.com/dotnet/corert/pull/6822 + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void ThreadPoolCanProcessManyWorkItemsInParallelWithoutDeadlocking() + { + int processorCount = Environment.ProcessorCount; + int iterationCount = 100_000; + var done = new ManualResetEvent(false); + + WaitCallback workItem = null; + workItem = _ => + { + if (Interlocked.Decrement(ref iterationCount) > 0) + { + ThreadPool.QueueUserWorkItem(workItem); + } + else + { + done.Set(); + } + }; + + for (int i = 0; i < processorCount; ++i) + { + ThreadPool.QueueUserWorkItem(workItem); + } + + done.CheckedWait(); + } + + public static bool IsThreadingAndRemoteExecutorSupported => + PlatformDetection.IsThreadingSupported && RemoteExecutor.IsSupported; } } diff --git a/src/libraries/System.Threading.Timer/tests/TimerFiringTests.cs b/src/libraries/System.Threading.Timer/tests/TimerFiringTests.cs index 65a570f..d7c963d 100644 --- a/src/libraries/System.Threading.Timer/tests/TimerFiringTests.cs +++ b/src/libraries/System.Threading.Timer/tests/TimerFiringTests.cs @@ -362,6 +362,59 @@ namespace System.Threading.Tests } } + [ConditionalFact(typeof(PlatformDetection), nameof(PlatformDetection.IsThreadingSupported))] + public static void TimersCreatedConcurrentlyOnDifferentThreadsAllFire() + { + int processorCount = Environment.ProcessorCount; + + int timerTickCount = 0; 
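+            // Each timer fires once (infinite period) and increments this shared count; the test waits until every timer has ticked.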
+ TimerCallback timerCallback = data => Interlocked.Increment(ref timerTickCount); + + var threadStarted = new AutoResetEvent(false); + var createTimers = new ManualResetEvent(false); + var timers = new Timer[processorCount]; + Action createTimerThreadStart = data => + { + int i = (int)data; + var sw = new Stopwatch(); + threadStarted.Set(); + createTimers.WaitOne(); + + // Use the CPU a bit around creating the timer to try to have some of these threads run concurrently + sw.Restart(); + do + { + Thread.SpinWait(1000); + } while (sw.ElapsedMilliseconds < 10); + + timers[i] = new Timer(timerCallback, null, 1, Timeout.Infinite); + + // Use the CPU a bit around creating the timer to try to have some of these threads run concurrently + sw.Restart(); + do + { + Thread.SpinWait(1000); + } while (sw.ElapsedMilliseconds < 10); + }; + + var waitsForThread = new Action[timers.Length]; + for (int i = 0; i < timers.Length; ++i) + { + var t = ThreadTestHelpers.CreateGuardedThread(out waitsForThread[i], createTimerThreadStart); + t.IsBackground = true; + t.Start(i); + threadStarted.CheckedWait(); + } + + createTimers.Set(); + ThreadTestHelpers.WaitForCondition(() => timerTickCount == timers.Length); + + foreach (var waitForThread in waitsForThread) + { + waitForThread(); + } + } + private static Task DueTimeAsync(int dueTime) { // We could just use Task.Delay, but it only uses Timer as an implementation detail. diff --git a/src/mono/mono/metadata/threads.c b/src/mono/mono/metadata/threads.c index fe3de4d..4eb05cc 100644 --- a/src/mono/mono/metadata/threads.c +++ b/src/mono/mono/metadata/threads.c @@ -2134,8 +2134,19 @@ ves_icall_System_Threading_Thread_SetName_icall (MonoInternalThreadHandle thread char* name8 = name16 ? g_utf16_to_utf8 (name16, name16_length, NULL, &name8_length, NULL) : NULL; +#ifdef ENABLE_NETCORE + // The managed thread implementation prevents the Name property from being set multiple times on normal threads. On thread + // pool threads, for compatibility the thread's name should be changeable and this function may be called to force-reset the + // thread's name if user code had changed it. So for the flags, MonoSetThreadNameFlag_Reset is passed instead of + // MonoSetThreadNameFlag_Permanent for all threads, relying on the managed side to prevent multiple changes where + // appropriate. + MonoSetThreadNameFlags flags = MonoSetThreadNameFlag_Reset; +#else + MonoSetThreadNameFlags flags = MonoSetThreadNameFlag_Permanent; +#endif + mono_thread_set_name (mono_internal_thread_handle_ptr (thread_handle), - name8, (gint32)name8_length, name16, MonoSetThreadNameFlag_Permanent, error); + name8, (gint32)name8_length, name16, flags, error); } #ifndef ENABLE_NETCORE diff --git a/src/mono/netcore/System.Private.CoreLib/System.Private.CoreLib.csproj b/src/mono/netcore/System.Private.CoreLib/System.Private.CoreLib.csproj index fd39b8f..0c8cde1 100644 --- a/src/mono/netcore/System.Private.CoreLib/System.Private.CoreLib.csproj +++ b/src/mono/netcore/System.Private.CoreLib/System.Private.CoreLib.csproj @@ -259,7 +259,6 @@ - diff --git a/src/mono/netcore/System.Private.CoreLib/src/System/Threading/Thread.Mono.cs b/src/mono/netcore/System.Private.CoreLib/src/System/Threading/Thread.Mono.cs index 05fda86..9148e6d 100644 --- a/src/mono/netcore/System.Private.CoreLib/src/System/Threading/Thread.Mono.cs +++ b/src/mono/netcore/System.Private.CoreLib/src/System/Threading/Thread.Mono.cs @@ -79,6 +79,11 @@ namespace System.Threading internal ExecutionContext? 
_executionContext; internal SynchronizationContext? _synchronizationContext; + // This is used for a quick check on thread pool threads after running a work item to determine if the name, background + // state, or priority were changed by the work item, and if so to reset it. Other threads may also change some of those, + // but those types of changes may race with the reset anyway, so this field doesn't need to be synchronized. + private bool _mayNeedResetForThreadPool; + private Thread() { InitInternal(this); @@ -123,6 +128,7 @@ namespace System.Threading else { ClrState(this, ThreadState.Background); + _mayNeedResetForThreadPool = true; } } } @@ -162,6 +168,10 @@ namespace System.Threading { // TODO: arguments check SetPriority(this, (int)value); + if (value != ThreadPriority.Normal) + { + _mayNeedResetForThreadPool = true; + } } } @@ -207,18 +217,6 @@ namespace System.Threading return JoinInternal(this, millisecondsTimeout); } - internal void ResetThreadPoolThread() - { - if (_name != null) - Name = null; - - if ((state & ThreadState.Background) == 0) - IsBackground = true; - - if ((ThreadPriority)priority != ThreadPriority.Normal) - Priority = ThreadPriority.Normal; - } - private void SetCultureOnUnstartedThreadNoCheck(CultureInfo value, bool uiCulture) { if (uiCulture) diff --git a/src/mono/netcore/System.Private.CoreLib/src/System/Threading/ThreadPool.Browser.Mono.cs b/src/mono/netcore/System.Private.CoreLib/src/System/Threading/ThreadPool.Browser.Mono.cs index e71edb9..500c694 100644 --- a/src/mono/netcore/System.Private.CoreLib/src/System/Threading/ThreadPool.Browser.Mono.cs +++ b/src/mono/netcore/System.Private.CoreLib/src/System/Threading/ThreadPool.Browser.Mono.cs @@ -4,34 +4,40 @@ using System.Diagnostics; using System.Collections.Generic; using System.Runtime.CompilerServices; -using System.Runtime.Versioning; using System.Diagnostics.CodeAnalysis; using Microsoft.Win32.SafeHandles; namespace System.Threading { - [UnsupportedOSPlatform("browser")] - public sealed class RegisteredWaitHandle : MarshalByRefObject + public sealed partial class RegisteredWaitHandle : MarshalByRefObject { - internal RegisteredWaitHandle(WaitHandle waitHandle, _ThreadPoolWaitOrTimerCallback callbackHelper, - int millisecondsTimeout, bool repeating) + public bool Unregister(WaitHandle? waitObject) { + throw new PlatformNotSupportedException(); } + } - public bool Unregister(WaitHandle? waitObject) + internal sealed partial class CompleteWaitThreadPoolWorkItem : IThreadPoolWorkItem + { + void IThreadPoolWorkItem.Execute() { - throw new PlatformNotSupportedException(); + Debug.Fail("Registered wait handles are currently not supported"); } } public static partial class ThreadPool { + // Time-senstiive work items are those that may need to run ahead of normal work items at least periodically. For a + // runtime that does not support time-sensitive work items on the managed side, the thread pool yields the thread to the + // runtime periodically (by exiting the dispatch loop) so that the runtime may use that thread for processing + // any time-sensitive work. For a runtime that supports time-sensitive work items on the managed side, the thread pool + // does not yield the thread and instead processes time-sensitive work items queued by specific APIs periodically. 
+ internal const bool SupportsTimeSensitiveWorkItems = false; // the timer currently doesn't queue time-sensitive work + internal const bool EnableWorkerTracking = false; private static bool _callbackQueued; - internal static void InitializeForThreadPoolThread() { } - public static bool SetMaxThreads(int workerThreads, int completionPortThreads) { if (workerThreads == 1 && completionPortThreads == 1) @@ -76,36 +82,20 @@ namespace System.Threading QueueCallback(); } - internal static bool KeepDispatching(int startTickCount) - { - return true; - } - internal static void NotifyWorkItemProgress() { } - internal static bool NotifyWorkItemComplete() + [MethodImpl(MethodImplOptions.AggressiveInlining)] + internal static bool NotifyWorkItemComplete(object? threadLocalCompletionCountObject, int currentTimeMs) { return true; } - private static RegisteredWaitHandle RegisterWaitForSingleObject( - WaitHandle waitObject, - WaitOrTimerCallback callBack, - object? state, - uint millisecondsTimeOutInterval, - bool executeOnlyOnce, - bool flowExecutionContext) - { - if (waitObject == null) - throw new ArgumentNullException(nameof(waitObject)); - - if (callBack == null) - throw new ArgumentNullException(nameof(callBack)); + internal static object? GetOrCreateThreadLocalCompletionCountObject() => null; + private static void RegisterWaitForSingleObjectCore(WaitHandle? waitObject, RegisteredWaitHandle registeredWaitHandle) => throw new PlatformNotSupportedException(); - } [DynamicDependency("Callback")] [DynamicDependency("PumpThreadPool")]
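For readers following the time-sensitive work item comments above: the idea is that certain items (for example, work that the native side used to run between dispatch quanta) may jump ahead of a backed-up global queue, but only periodically. The snippet below is a minimal, self-contained sketch of that scheduling pattern, not the runtime's implementation; the type and member names (TimeSensitivePoolSketch, DispatchQuantumMs, DispatchLoop) and the use of ConcurrentQueue/Stopwatch are illustrative assumptions.

    using System;
    using System.Collections.Concurrent;
    using System.Diagnostics;
    using System.Threading;

    // Illustrative sketch only: a worker loop in which "time-sensitive" items periodically run ahead of a
    // possibly backed-up normal queue, similar in spirit to how native-side work items previously ran
    // between dispatch quanta. Names and data structures here are assumptions, not the runtime's code.
    internal static class TimeSensitivePoolSketch
    {
        private const int DispatchQuantumMs = 30; // hypothetical quantum

        private static readonly ConcurrentQueue<Action> s_normalQueue = new ConcurrentQueue<Action>();
        private static readonly ConcurrentQueue<Action> s_timeSensitiveQueue = new ConcurrentQueue<Action>();

        public static void Queue(Action work) => s_normalQueue.Enqueue(work);
        public static void QueueTimeSensitive(Action work) => s_timeSensitiveQueue.Enqueue(work);

        public static void DispatchLoop(CancellationToken cancellationToken)
        {
            var sinceLastTimeSensitiveCheck = Stopwatch.StartNew();
            while (!cancellationToken.IsCancellationRequested)
            {
                // Once per quantum, let a time-sensitive item run even if the normal queue is backed up.
                if (sinceLastTimeSensitiveCheck.ElapsedMilliseconds >= DispatchQuantumMs)
                {
                    sinceLastTimeSensitiveCheck.Restart();
                    if (s_timeSensitiveQueue.TryDequeue(out Action? timeSensitive))
                    {
                        timeSensitive();
                        continue;
                    }
                }

                if (s_normalQueue.TryDequeue(out Action? work))
                {
                    work();
                }
                else
                {
                    Thread.Sleep(1); // idle; a real pool would block on a semaphore instead of polling
                }
            }
        }
    }

In the actual change, when SupportsTimeSensitiveWorkItems is false the queueing helper simply falls back to the global queue, as shown in the UnsafeQueueTimeSensitiveWorkItem path earlier in this diff.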