1 // Licensed to the .NET Foundation under one or more agreements.
2 // The .NET Foundation licenses this file to you under the MIT license.
3 // See the LICENSE file in the project root for more information.
10 // Currently represents a logical and physical COM+ thread. Later, these concepts will be separated.
14 // #RuntimeThreadLocals.
16 // Windows has a feature call Thread Local Storage (TLS, which is data that the OS allocates every time it
17 // creates a thread). Programs access this storage by using the Windows TlsAlloc, TlsGetValue, TlsSetValue
18 // APIs (see http://msdn2.microsoft.com/en-us/library/ms686812.aspx). The runtime allocates two such slots
21 // * A slot that holds a pointer to the runtime thread object code:Thread (see code:#ThreadClass). The
22 // runtime has a special optimized version of this helper code:GetThread (we actually emit assembly
23 // code on the fly so it is as fast as possible). These code:Thread objects live in the
26 // * The other slot holds the current code:AppDomain (a managed equivalent of a process). The
27 // runtime thread object also has a pointer to the thread's AppDomain (see code:Thread.m_pDomain,
28 // so in theory this TLS is redundant. It is there for speed (one less pointer indirection). The
29 // optimized helper for this is code:GetAppDomain (we emit assembly code on the fly for this one
32 // Initially these TLS slots are empty (when the OS starts up), however before we run managed code, we must
33 // set them properly so that managed code knows what AppDomain it is in and we can suspend threads properly
34 // for a GC (see code:#SuspendingTheRuntime)
36 // #SuspendingTheRuntime
38 // One of the primary differences between runtime code (managed code), and traditional (unmanaged code) is
39 // the existence of the GC heap (see file:gc.cpp#Overview). For the GC to do its job, it must be able to
40 // traverse all references to the GC heap, including ones on the stack of every thread, as well as any in
41 // hardware registers. While it is simple to state this requirement, it has long reaching effects, because
42 // properly accounting for all GC heap references ALL the time turns out to be quite hard. When we make a
43 // bookkeeping mistake, a GC reference is not reported at GC time, which means it will not be updated when the
44 // GC happens. Since memory in the GC heap can move, this can cause the pointer to point at 'random' places
45 // in the GC heap, causing data corruption. This is a 'GC Hole', and is very bad. We have special modes (see
46 // code:EEConfig.GetGCStressLevel) called GCStress to help find such issues.
48 // In order to find all GC references on the stacks we need insure that no thread is manipulating a GC
49 // reference at the time of the scan. This is the job of code:Thread.SuspendRuntime. Logically it suspends
50 // every thread in the process. Unfortunately it can not literally simply call the OS SuspendThread API on
51 // all threads. The reason is that the other threads MIGHT hold important locks (for example there is a lock
52 // that is taken when unmanaged heap memory is requested, or when a DLL is loaded). In general process
53 // global structures in the OS will be protected by locks, and if you suspend a thread it might hold that
54 // lock. If you happen to need that OS service (eg you might need to allocated unmanaged memory), then
55 // deadlock will occur (as you wait on the suspended thread, that never wakes up).
57 // Luckily, we don't need to actually suspend the threads, we just need to insure that all GC references on
58 // the stack are stable. This is where the concept of cooperative mode and preemptive mode (a bad name) come
63 // The runtime keeps a table of all threads that have ever run managed code in the code:ThreadStore table.
64 // The ThreadStore table holds a list of Thread objects (see code:#ThreadClass). This object holds all
65 // infomation about managed threads. Cooperative mode is defined as the mode the thread is in when the field
66 // code:Thread.m_fPreemptiveGCDisabled is non-zero. When this field is zero the thread is said to be in
67 // Preemptive mode (named because if you preempt the thread in this mode, it is guaranteed to be in a place
68 // where a GC can occur).
70 // When a thread is in cooperative mode, it is basically saying that it is potentially modifying GC
71 // references, and so the runtime must Cooperate with it to get to a 'GC Safe' location where the GC
72 // references can be enumerated. This is the mode that a thread is in MOST times when it is running managed
73 // code (in fact if the EIP is in JIT compiled code, there is only one place where you are NOT in cooperative
74 // mode (Inlined PINVOKE transition code)). Conversely, any time non-runtime unmanaged code is running, the
75 // thread MUST NOT be in cooperative mode (you risk deadlock otherwise). Only code in mscorwks.dll might be
76 // running in either cooperative or preemptive mode.
78 // It is easier to describe the invariant associated with being in Preemptive mode. When the thread is in
79 // preemptive mode (when code:Thread.m_fPreemptiveGCDisabled is zero), the thread guarantees two things
81 // * That it not currently running code that manipulates GC references.
82 // * That it has set the code:Thread.m_pFrame pointer in the code:Thread to be a subclass of the class
83 // code:Frame which marks the location on the stack where the last managed method frame is. This
84 // allows the GC to start crawling the stack from there (essentially skip over the unmanaged frames).
85 // * That the thread will not reenter managed code if the global variable code:g_TrapReturningThreads is
86 // set (it will call code:Thread.RareDisablePreemptiveGC first which will block if a a suspension is
89 // The basic idea is that the suspension logic in code:Thread.SuspendRuntime first sets the global variable
90 // code:g_TrapReturningThreads and then checks if each thread in the ThreadStore is in Cooperative mode. If a
91 // thread is NOT in cooperative mode, the logic simply skips the thread, because it knows that the thread
92 // will stop itself before reentering managed code (because code:g_TrapReturningThreads is set). This avoids
93 // the deadlock problem mentioned earlier, because threads that are running unmanaged code are allowed to
94 // run. Enumeration of GC references starts at the first managed frame (pointed at by code:Thread.m_pFrame).
96 // When a thread is in cooperative mode, it means that GC references might be being manipulated. There are
97 // two important possibilities
99 // * The CPU is running JIT compiled code
100 // * The CPU is running code elsewhere (which should only be in mscorwks.dll, because everywhere else a
101 // transition to preemptive mode should have happened first)
103 // * #PartiallyInteruptibleCode
104 // * #FullyInteruptibleCode
106 // If the Instruction pointer (x86/x64: EIP, ARM: R15/PC) is in JIT compiled code, we can detect this because we have tables that
107 // map the ranges of every method back to their code:MethodDesc (this the code:ICodeManager interface). In
108 // addition to knowing the method, these tables also point at 'GCInfo' that tell for that method which stack
109 // locations and which registers hold GC references at any particular instruction pointer. If the method is
110 // what is called FullyInterruptible, then we have information for any possible instruction pointer in the
111 // method and we can simply stop the thread (however we have to do this carefully TODO explain).
113 // However for most methods, we only keep GC information for paticular EIP's, in particular we keep track of
114 // GC reference liveness only at call sites. Thus not every location is 'GC Safe' (that is we can enumerate
115 // all references, but must be 'driven' to a GC safe location).
117 // We drive threads to GC safe locations by hijacking. This is a term for updating the return address on the
118 // stack so that we gain control when a method returns. If we find that we are in JITTed code but NOT at a GC
119 // safe location, then we find the return address for the method and modfiy it to cause the runtime to stop.
120 // We then let the method run. Hopefully the method quickly returns, and hits our hijack, and we are now at a
121 // GC-safe location (all call sites are GC-safe). If not we repeat the procedure (possibly moving the
122 // hijack). At some point a method returns, and we get control. For methods that have loops that don't make
123 // calls, we are forced to make the method FullyInterruptible, so we can be sure to stop the mehod.
125 // This leaves only the case where we are in cooperative modes, but not in JIT compiled code (we should be in
126 // clr.dll). In this case we simply let the thread run. The idea is that code in clr.dll makes the
127 // promise that it will not do ANYTHING that will block (which includes taking a lock), while in cooperative
128 // mode, or do anything that might take a long time without polling to see if a GC is needed. Thus this code
129 // 'cooperates' to insure that GCs can happen in a timely fashion.
131 // If you need to switch the GC mode of the current thread, look for the GCX_COOP() and GCX_PREEMP() macros.
134 #ifndef __threads_h__
135 #define __threads_h__
139 #include "eventstore.hpp"
143 #include "gcheaputilities.h"
144 #include "gchandleutilities.h"
145 #include "gcinfotypes.h"
155 class ThreadBaseObject;
156 class AppDomainStack;
157 class LoadLevelLimiter;
159 class DeadlockAwareLock;
160 struct HelperMethodFrameCallerList;
161 class ThreadLocalIBCInfo;
163 class DebuggerPatchSkip;
164 class FaultingExceptionFrame;
165 class ContextTransitionFrame;
166 enum BinderMethodID : int;
169 class PendingTypeLoadHolder;
170 class PrepareCodeConfig;
171 class NativeCodeVersion;
173 struct ThreadLocalBlock;
174 typedef DPTR(struct ThreadLocalBlock) PTR_ThreadLocalBlock;
175 typedef DPTR(PTR_ThreadLocalBlock) PTR_PTR_ThreadLocalBlock;
177 typedef void(*ADCallBackFcnType)(LPVOID);
179 #include "stackwalktypes.h"
181 #include "stackingallocator.h"
185 #include "threaddebugblockinginfo.h"
186 #include "interoputil.h"
187 #include "eventtrace.h"
189 #ifdef FEATURE_PERFTRACING
190 class EventPipeBufferList;
191 #endif // FEATURE_PERFTRACING
193 struct TLMTableEntry;
195 typedef DPTR(struct TLMTableEntry) PTR_TLMTableEntry;
196 typedef DPTR(struct ThreadLocalModule) PTR_ThreadLocalModule;
198 class ThreadStaticHandleTable;
199 struct ThreadLocalModule;
202 struct ThreadLocalBlock
204 friend class ClrDataAccess;
207 PTR_TLMTableEntry m_pTLMTable; // Table of ThreadLocalModules
208 SIZE_T m_TLMTableSize; // Current size of table
209 SpinLock m_TLMTableLock; // Spinlock used to synchronize growing the table and freeing TLM by other threads
211 // Each ThreadLocalBlock has its own ThreadStaticHandleTable. The ThreadStaticHandleTable works
212 // by allocating Object arrays on the GC heap and keeping them alive with pinning handles.
214 // We use the ThreadStaticHandleTable to allocate space for GC thread statics. A GC thread
215 // static is thread static that is either a reference type or a value type whose layout
216 // contains a pointer to a reference type.
218 ThreadStaticHandleTable * m_pThreadStaticHandleTable;
220 // Need to keep a list of the pinning handles we've created
221 // so they can be cleaned up when the thread dies
222 ObjectHandleList m_PinningHandleList;
226 #ifndef DACCESS_COMPILE
227 void AddPinningHandleToList(OBJECTHANDLE oh);
228 void FreePinningHandles();
229 void AllocateThreadStaticHandles(Module * pModule, ThreadLocalModule * pThreadLocalModule);
230 OBJECTHANDLE AllocateStaticFieldObjRefPtrs(int nRequested, OBJECTHANDLE* ppLazyAllocate = NULL);
231 void InitThreadStaticHandleTable();
233 void AllocateThreadStaticBoxes(MethodTable* pMT);
236 public: // used by code generators
237 static SIZE_T GetOffsetOfModuleSlotsPointer() { return offsetof(ThreadLocalBlock, m_pTLMTable); }
241 #ifndef DACCESS_COMPILE
243 : m_pTLMTable(NULL), m_TLMTableSize(0), m_pThreadStaticHandleTable(NULL)
245 m_TLMTableLock.Init(LOCK_TYPE_DEFAULT);
248 void FreeTLM(SIZE_T i, BOOL isThreadShuttingDown);
252 void EnsureModuleIndex(ModuleIndex index);
256 void SetModuleSlot(ModuleIndex index, PTR_ThreadLocalModule pLocalModule);
258 PTR_ThreadLocalModule GetTLMIfExists(ModuleIndex index);
259 PTR_ThreadLocalModule GetTLMIfExists(MethodTable* pMT);
261 #ifdef DACCESS_COMPILE
262 void EnumMemoryRegions(CLRDataEnumMemoryFlags flags);
266 #ifdef CROSSGEN_COMPILE
268 #include "asmconstants.h"
272 friend class ThreadStatics;
274 ThreadLocalBlock m_ThreadLocalBlock;
277 BOOL IsAddressInStack (PTR_VOID addr) const { return TRUE; }
278 static BOOL IsAddressInCurrentStack (PTR_VOID addr) { return TRUE; }
280 StackingAllocator* m_stackLocalAllocator = NULL;
281 bool CheckCanUseStackAlloc() { return true; }
284 LoadLevelLimiter *m_pLoadLimiter;
287 LoadLevelLimiter *GetLoadLevelLimiter()
289 LIMITED_METHOD_CONTRACT;
290 return m_pLoadLimiter;
293 void SetLoadLevelLimiter(LoadLevelLimiter *limiter)
295 LIMITED_METHOD_CONTRACT;
296 m_pLoadLimiter = limiter;
299 PTR_Frame GetFrame() { return NULL; }
300 void SetFrame(Frame *pFrame) { }
301 DWORD CatchAtSafePoint() { return 0; }
302 DWORD CatchAtSafePointOpportunistic() { return 0; }
304 static void ObjectRefProtected(const OBJECTREF* ref) { }
305 static void ObjectRefNew(const OBJECTREF* ref) { }
307 void EnablePreemptiveGC() { }
308 void DisablePreemptiveGC() { }
310 inline void IncLockCount() { }
311 inline void DecLockCount() { }
313 static LPVOID GetStaticFieldAddress(FieldDesc *pFD) { return NULL; }
315 PTR_AppDomain GetDomain() { return ::GetAppDomain(); }
317 DWORD GetThreadId() { return 0; }
319 inline DWORD GetOverridesCount() { return 0; }
320 inline BOOL CheckThreadWideSpecialFlag(DWORD flags) { return 0; }
322 BOOL PreemptiveGCDisabled() { return false; }
323 void PulseGCMode() { }
325 OBJECTREF GetThrowable() { return NULL; }
327 OBJECTREF LastThrownObject() { return NULL; }
329 static BOOL Debug_AllowCallout() { return TRUE; }
331 static void IncForbidSuspendThread() { }
332 static void DecForbidSuspendThread() { }
334 typedef StateHolder<Thread::IncForbidSuspendThread, Thread::DecForbidSuspendThread> ForbidSuspendThreadHolder;
336 static BYTE GetOffsetOfCurrentFrame()
338 LIMITED_METHOD_CONTRACT;
339 size_t ofs = Thread_m_pFrame;
340 _ASSERTE(FitsInI1(ofs));
344 static BYTE GetOffsetOfGCFlag()
346 LIMITED_METHOD_CONTRACT;
347 size_t ofs = Thread_m_fPreemptiveGCDisabled;
348 _ASSERTE(FitsInI1(ofs));
352 void SetLoadingFile(DomainFile *pFile)
356 typedef Holder<Thread *, DoNothing, DoNothing> LoadingFileHolder;
362 BOOL HasThreadState(ThreadState ts)
364 LIMITED_METHOD_CONTRACT;
365 return ((DWORD)m_State & ts);
368 BOOL HasThreadStateOpportunistic(ThreadState ts)
370 LIMITED_METHOD_CONTRACT;
371 return m_State.LoadWithoutBarrier() & ts;
374 Volatile<ThreadState> m_State;
376 enum ThreadStateNoConcurrency
378 TSNC_OwnsSpinLock = 0x00000400, // The thread owns a spinlock.
380 TSNC_LoadsTypeViolation = 0x40000000, // Use by type loader to break deadlocks caused by type load level ordering violations
383 ThreadStateNoConcurrency m_StateNC;
385 void SetThreadStateNC(ThreadStateNoConcurrency tsnc)
387 LIMITED_METHOD_CONTRACT;
388 m_StateNC = (ThreadStateNoConcurrency)((DWORD)m_StateNC | tsnc);
391 void ResetThreadStateNC(ThreadStateNoConcurrency tsnc)
393 LIMITED_METHOD_CONTRACT;
394 m_StateNC = (ThreadStateNoConcurrency)((DWORD)m_StateNC & ~tsnc);
397 BOOL HasThreadStateNC(ThreadStateNoConcurrency tsnc)
399 LIMITED_METHOD_DAC_CONTRACT;
400 return ((DWORD)m_StateNC & tsnc);
403 PendingTypeLoadHolder* m_pPendingTypeLoad;
405 #ifndef DACCESS_COMPILE
406 PendingTypeLoadHolder* GetPendingTypeLoad()
408 LIMITED_METHOD_CONTRACT;
409 return m_pPendingTypeLoad;
412 void SetPendingTypeLoad(PendingTypeLoadHolder* pPendingTypeLoad)
414 LIMITED_METHOD_CONTRACT;
415 m_pPendingTypeLoad = pPendingTypeLoad;
418 void SetProfilerCallbackFullState(DWORD dwFullState)
420 LIMITED_METHOD_CONTRACT;
423 DWORD SetProfilerCallbackStateFlags(DWORD dwFlags)
425 LIMITED_METHOD_CONTRACT;
429 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
430 enum ApartmentState { AS_Unknown };
436 class AVInRuntimeImplOkayHolder
439 AVInRuntimeImplOkayHolder()
441 LIMITED_METHOD_CONTRACT;
443 AVInRuntimeImplOkayHolder(Thread * pThread)
445 LIMITED_METHOD_CONTRACT;
447 ~AVInRuntimeImplOkayHolder()
449 LIMITED_METHOD_CONTRACT;
453 inline BOOL dbgOnly_IsSpecialEEThread() { return FALSE; }
455 #define INCTHREADLOCKCOUNT() { }
456 #define DECTHREADLOCKCOUNT() { }
457 #define INCTHREADLOCKCOUNTTHREAD(thread) { }
458 #define DECTHREADLOCKCOUNTTHREAD(thread) { }
460 #define FORBIDGC_LOADER_USE_ENABLED() false
461 #define ENABLE_FORBID_GC_LOADER_USE_IN_THIS_SCOPE() ;
463 #define BEGIN_FORBID_TYPELOAD()
464 #define END_FORBID_TYPELOAD()
465 #define TRIGGERS_TYPELOAD()
467 #define TRIGGERSGC() ANNOTATION_GC_TRIGGERS
469 inline void CommonTripThread() { }
471 class DeadlockAwareLock
474 DeadlockAwareLock(const char *description = NULL) { }
475 ~DeadlockAwareLock() { }
477 BOOL CanEnterLock() { return TRUE; }
479 BOOL TryBeginEnterLock() { return TRUE; }
480 void BeginEnterLock() { }
482 void EndEnterLock() { }
487 typedef StateHolder<DoNothing,DoNothing> BlockingLockHolder;
490 // Do not include threads.inl
493 typedef Thread::ForbidSuspendThreadHolder ForbidSuspendThreadHolder;
495 #else // CROSSGEN_COMPILE
497 #if (defined(_TARGET_ARM_) && defined(FEATURE_EMULATE_SINGLESTEP))
498 #include "armsinglestepper.h"
500 #if (defined(_TARGET_ARM64_) && defined(FEATURE_EMULATE_SINGLESTEP))
501 #include "arm64singlestepper.h"
504 #if !defined(PLATFORM_SUPPORTS_SAFE_THREADSUSPEND)
505 // DISABLE_THREADSUSPEND controls whether Thread::SuspendThread will be used at all.
506 // This API is dangerous on non-Windows platforms, as it can lead to deadlocks,
507 // due to low level OS resources that the PAL is not aware of, or due to the fact that
508 // PAL-unaware code in the process may hold onto some OS resources.
509 #define DISABLE_THREADSUSPEND
512 // NT thread priorities range from -15 to +15.
513 #define INVALID_THREAD_PRIORITY ((DWORD)0x80000000)
515 // For a fiber which switched out, we set its OSID to a special number
516 // Note: there's a copy of this macro in strike.cpp
517 #define SWITCHED_OUT_FIBER_OSID 0xbaadf00d;
520 // A thread doesn't recieve its id until fully constructed.
521 #define UNINITIALIZED_THREADID 0xbaadf00d
524 // Capture all the synchronization requests, for debugging purposes
525 #if defined(_DEBUG) && defined(TRACK_SYNC)
527 // Each thread has a stack that tracks all enter and leave requests
530 virtual ~Dbg_TrackSync() = default;
532 virtual void EnterSync (UINT_PTR caller, void *pAwareLock) = 0;
533 virtual void LeaveSync (UINT_PTR caller, void *pAwareLock) = 0;
536 EXTERN_C void EnterSyncHelper (UINT_PTR caller, void *pAwareLock);
537 EXTERN_C void LeaveSyncHelper (UINT_PTR caller, void *pAwareLock);
541 //***************************************************************************
542 #ifdef FEATURE_HIJACK
544 // Used to capture information about the state of execution of a *SUSPENDED* thread.
545 struct ExecutionState;
547 #ifndef PLATFORM_UNIX
548 // This is the type of the start function of a redirected thread pulled from
549 // a HandledJITCase during runtime suspension
550 typedef void (__stdcall *PFN_REDIRECTTARGET)();
552 // Describes the weird argument sets during hijacking
554 #endif // !PLATFORM_UNIX
556 #endif // FEATURE_HIJACK
558 //***************************************************************************
559 #ifdef ENABLE_CONTRACTS_IMPL
560 inline Thread* GetThreadNULLOk()
562 LIMITED_METHOD_CONTRACT;
564 BEGIN_GETTHREAD_ALLOWED_IN_NO_THROW_REGION;
565 pThread = GetThread();
566 END_GETTHREAD_ALLOWED_IN_NO_THROW_REGION;
570 #define GetThreadNULLOk() GetThread()
573 // manifest constant for waiting in the exposed classlibs
574 const INT32 INFINITE_TIMEOUT = -1;
576 /***************************************************************************/
577 // Public enum shared between thread and threadpool
578 // These are two kinds of threadpool thread that the threadpool mgr needs
580 enum ThreadpoolThreadType
583 CompletionPortThread,
587 //***************************************************************************
590 // Thread* GetThread() - returns current Thread
591 // Thread* SetupThread() - creates new Thread.
592 // Thread* SetupUnstartedThread() - creates new unstarted Thread which
593 // (obviously) isn't in a TLS.
594 // void DestroyThread() - the underlying logical thread is going
596 // void DetachThread() - the underlying logical thread is going
597 // away but we don't want to destroy it yet.
599 // Public functions for ASM code generators
601 // Thread* __stdcall CreateThreadBlockThrow() - creates new Thread on reverse p-invoke
603 // Public functions for one-time init/cleanup
605 // void InitThreadManager() - onetime init
606 // void TerminateThreadManager() - onetime cleanup
608 // Public functions for taking control of a thread at a safe point
610 // VOID OnHijackTripThread() - we've hijacked a JIT method
611 // VOID OnHijackFPTripThread() - we've hijacked a JIT method,
612 // and need to save the x87 FP stack.
614 //***************************************************************************
617 //***************************************************************************
619 //***************************************************************************
621 //---------------------------------------------------------------------------
623 //---------------------------------------------------------------------------
624 Thread* SetupThread(BOOL fInternal);
625 inline Thread* SetupThread()
628 return SetupThread(FALSE);
630 // A host can deny a thread entering runtime by returning a NULL IHostTask.
631 // But we do want threads used by threadpool.
632 inline Thread* SetupInternalThread()
635 return SetupThread(TRUE);
637 Thread* SetupThreadNoThrow(HRESULT *phresult = NULL);
638 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
639 Thread* SetupUnstartedThread(BOOL bRequiresTSL=TRUE);
640 void DestroyThread(Thread *th);
642 DWORD GetRuntimeId();
644 EXTERN_C Thread* WINAPI CreateThreadBlockThrow();
646 //---------------------------------------------------------------------------
647 // One-time initialization. Called during Dll initialization.
648 //---------------------------------------------------------------------------
649 void InitThreadManager();
652 // When we want to take control of a thread at a safe point, the thread will
653 // eventually come back to us in one of the following trip functions:
655 #ifdef FEATURE_HIJACK
657 EXTERN_C void WINAPI OnHijackTripThread();
659 EXTERN_C void WINAPI OnHijackFPTripThread(); // hijacked JIT code is returning an FP value
660 #endif // _TARGET_X86_
662 #endif // FEATURE_HIJACK
664 void CommonTripThread();
666 // When we resume a thread at a new location, to get an exception thrown, we have to
667 // pretend the exception originated elsewhere.
668 EXTERN_C void ThrowControlForThread(
669 #ifdef WIN64EXCEPTIONS
670 FaultingExceptionFrame *pfef
671 #endif // WIN64EXCEPTIONS
674 // RWLock state inside TLS
677 LockEntry *pNext; // next entry
678 LockEntry *pPrev; // prev entry
680 LONG dwLLockID; // owning lock
681 WORD wReaderLevel; // reader nesting level
685 BOOL MatchThreadHandleToOsId ( HANDLE h, DWORD osId );
688 #ifdef FEATURE_COMINTEROP
690 #define RCW_STACK_SIZE 64
697 LIMITED_METHOD_CONTRACT;
698 memset(this, 0, sizeof(RCWStack));
701 inline VOID SetEntry(unsigned int index, RCW* pRCW)
708 PRECONDITION(index < RCW_STACK_SIZE);
709 PRECONDITION(CheckPointer(pRCW, NULL_OK));
713 m_pList[index] = pRCW;
716 inline RCW* GetEntry(unsigned int index)
723 PRECONDITION(index < RCW_STACK_SIZE);
727 RETURN m_pList[index];
730 inline VOID SetNextStack(RCWStack* pStack)
737 PRECONDITION(CheckPointer(pStack));
738 PRECONDITION(m_pNext == NULL);
745 inline RCWStack* GetNextStack()
752 POSTCONDITION(CheckPointer(RETVAL, NULL_OK));
761 RCW* m_pList[RCW_STACK_SIZE];
779 m_iSize = RCW_STACK_SIZE;
780 m_pHead = new RCWStack();
793 RCWStack* pStack = m_pHead;
794 RCWStack* pNextStack = NULL;
798 pNextStack = pStack->GetNextStack();
811 PRECONDITION(CheckPointer(pRCW, NULL_OK));
815 if (!GrowListIfNeeded())
819 if (m_iIndex < RCW_STACK_SIZE)
821 m_pHead->SetEntry(m_iIndex, pRCW);
827 unsigned int count = m_iIndex;
828 RCWStack* pStack = m_pHead;
829 while (count >= RCW_STACK_SIZE)
831 pStack = pStack->GetNextStack();
834 count -= RCW_STACK_SIZE;
837 pStack->SetEntry(count, pRCW);
849 PRECONDITION(m_iIndex > 0);
850 POSTCONDITION(CheckPointer(RETVAL, NULL_OK));
859 if (m_iIndex < RCW_STACK_SIZE)
861 pRCW = m_pHead->GetEntry(m_iIndex);
862 m_pHead->SetEntry(m_iIndex, NULL);
867 unsigned int count = m_iIndex;
868 RCWStack* pStack = m_pHead;
869 while (count >= RCW_STACK_SIZE)
871 pStack = pStack->GetNextStack();
873 count -= RCW_STACK_SIZE;
876 pRCW = pStack->GetEntry(count);
877 pStack->SetEntry(count, NULL);
882 BOOL IsInStack(RCW* pRCW)
889 PRECONDITION(CheckPointer(pRCW));
897 if (m_iIndex <= RCW_STACK_SIZE)
899 for (int i = 0; i < (int)m_iIndex; i++)
901 if (pRCW == m_pHead->GetEntry(i))
909 RCWStack* pStack = m_pHead;
911 while (pStack != NULL)
913 for (int i = 0; (i < RCW_STACK_SIZE) && (totalcount < m_iIndex); i++, totalcount++)
915 if (pRCW == pStack->GetEntry(i))
919 pStack = pStack->GetNextStack();
926 bool GrowListIfNeeded()
933 INJECT_FAULT(COMPlusThrowOM());
934 PRECONDITION(CheckPointer(m_pHead));
938 if (m_iIndex == m_iSize)
940 RCWStack* pStack = m_pHead;
941 RCWStack* pNextStack = NULL;
942 while ( (pNextStack = pStack->GetNextStack()) != NULL)
945 RCWStack* pNewStack = new (nothrow) RCWStack();
946 if (NULL == pNewStack)
949 pStack->SetNextStack(pNewStack);
951 m_iSize += RCW_STACK_SIZE;
957 // Zero-based index to the first free element in the list.
960 // Total size of the list, including all stacks.
963 // Pointer to the first stack.
967 #endif // FEATURE_COMINTEROP
970 typedef DWORD (*AppropriateWaitFunc) (void *args, DWORD timeout, DWORD option);
972 // The Thread class represents a managed thread. This thread could be internal
973 // or external (i.e. it wandered in from outside the runtime). For internal
974 // threads, it could correspond to an exposed System.Thread object or it
975 // could correspond to an internal worker thread of the runtime.
977 // If there's a physical Win32 thread underneath this object (i.e. it isn't an
978 // unstarted System.Thread), then this instance can be found in the TLS
979 // of that physical thread.
981 // FEATURE_MULTIREG_RETURN is set for platforms where a struct return value
982 // [GcInfo v2 only] can be returned in multiple registers
983 // ex: Windows/Unix ARM/ARM64, Unix-AMD64.
986 // UNIX_AMD64_ABI is a specific kind of FEATURE_MULTIREG_RETURN
987 // [GcInfo v1 and v2] specified by SystemV ABI for AMD64
990 #ifdef FEATURE_HIJACK // Hijack function returning
991 EXTERN_C void STDCALL OnHijackWorker(HijackArgs * pArgs);
992 #endif // FEATURE_HIJACK
994 // This is the code we pass around for Thread.Interrupt, mainly for assertions
995 #define APC_Code 0xEECEECEE
997 #ifdef DACCESS_COMPILE
998 class BaseStackGuard;
1003 // A code:Thread contains all the per-thread information needed by the runtime. You can get at this
1004 // structure throught the and OS TLS slot see code:#RuntimeThreadLocals for more
1005 // Implementing IUnknown would prevent the field (e.g. m_Context) layout from being rearranged (which will need to be fixed in
1006 // "asmconstants.h" for the respective architecture). As it is, ICLRTask derives from IUnknown and would have got IUnknown implemented
1007 // here - so doing this explicitly and maintaining layout sanity should be just fine.
1008 class Thread: public IUnknown
1010 friend struct ThreadQueue; // used to enqueue & dequeue threads onto SyncBlocks
1011 friend class ThreadStore;
1012 friend class ThreadSuspend;
1013 friend class SyncBlock;
1014 friend struct PendingSync;
1015 friend class AppDomain;
1016 friend class ThreadNative;
1017 friend class DeadlockAwareLock;
1019 friend class EEContract;
1021 #ifdef DACCESS_COMPILE
1022 friend class ClrDataAccess;
1023 friend class ClrDataTask;
1026 friend BOOL NTGetThreadContext(Thread *pThread, T_CONTEXT *pContext);
1027 friend BOOL NTSetThreadContext(Thread *pThread, const T_CONTEXT *pContext);
1029 friend void CommonTripThread();
1031 #ifdef FEATURE_HIJACK
1032 // MapWin32FaultToCOMPlusException needs access to Thread::IsAddrOfRedirectFunc()
1033 friend DWORD MapWin32FaultToCOMPlusException(EXCEPTION_RECORD *pExceptionRecord);
1034 friend void STDCALL OnHijackWorker(HijackArgs * pArgs);
1035 #ifdef PLATFORM_UNIX
1036 friend void HandleGCSuspensionForInterruptedThread(CONTEXT *interruptedContext);
1037 #endif // PLATFORM_UNIX
1039 #endif // FEATURE_HIJACK
1041 friend void InitThreadManager();
1042 friend void ThreadBaseObject::SetDelegate(OBJECTREF delegate);
1044 friend void CallFinalizerOnThreadObject(Object *obj);
1046 friend class ContextTransitionFrame; // To set m_dwBeginLockCount
1048 // Debug and Profiler caches ThreadHandle.
1049 friend class Debugger; // void Debugger::ThreadStarted(Thread* pRuntimeThread, BOOL fAttaching);
1050 #if defined(DACCESS_COMPILE)
1051 friend class DacDbiInterfaceImpl; // DacDbiInterfaceImpl::GetThreadHandle(HANDLE * phThread);
1052 #endif // DACCESS_COMPILE
1053 friend class ProfToEEInterfaceImpl; // HRESULT ProfToEEInterfaceImpl::GetHandleFromThread(ThreadID threadId, HANDLE *phThread);
1054 friend class CExecutionEngine;
1056 friend class CheckAsmOffsets;
1058 friend class ExceptionTracker;
1059 friend class ThreadExceptionState;
1061 friend class StackFrameIterator;
1063 friend class ThreadStatics;
1065 VPTR_BASE_CONCRETE_VTABLE_CLASS(Thread)
1068 enum SetThreadStackGuaranteeScope { STSGuarantee_Force, STSGuarantee_OnlyIfEnabled };
1069 static BOOL IsSetThreadStackGuaranteeInUse(SetThreadStackGuaranteeScope fScope = STSGuarantee_OnlyIfEnabled)
1071 WRAPPER_NO_CONTRACT;
1073 if(STSGuarantee_Force == fScope)
1076 //The runtime must be hosted to have escalation policy
1077 //If escalation policy is enabled but StackOverflow is not part of the policy
1078 // then we don't use SetThreadStackGuarantee
1080 GetEEPolicy()->GetActionOnFailure(FAIL_StackOverflow) == eRudeExitProcess)
1082 //FAIL_StackOverflow is ProcessExit so don't use SetThreadStackGuarantee
1090 // If we are trying to suspend a thread, we set the appropriate pending bit to
1091 // indicate why we want to suspend it (TS_GCSuspendPending, TS_UserSuspendPending,
1092 // TS_DebugSuspendPending).
1094 // If instead the thread has blocked itself, via WaitSuspendEvent, we indicate
1095 // this with TS_SyncSuspended. However, we need to know whether the synchronous
1096 // suspension is for a user request, or for an internal one (GC & Debug). That's
1097 // because a user request is not allowed to resume a thread suspended for
1098 // debugging or GC. -- That's not stricly true. It is allowed to resume such a
1099 // thread so long as it was ALSO suspended by the user. In other words, this
1100 // ensures that user resumptions aren't unbalanced from user suspensions.
1104 TS_Unknown = 0x00000000, // threads are initialized this way
1106 TS_AbortRequested = 0x00000001, // Abort the thread
1107 TS_GCSuspendPending = 0x00000002, // waiting to get to safe spot for GC
1108 TS_UserSuspendPending = 0x00000004, // user suspension at next opportunity
1109 TS_DebugSuspendPending = 0x00000008, // Is the debugger suspending threads?
1110 TS_GCOnTransitions = 0x00000010, // Force a GC on stub transitions (GCStress only)
1112 TS_LegalToJoin = 0x00000020, // Is it now legal to attempt a Join()
1114 // unused = 0x00000040,
1116 #ifdef FEATURE_HIJACK
1117 TS_Hijacked = 0x00000080, // Return address has been hijacked
1118 #endif // FEATURE_HIJACK
1120 TS_BlockGCForSO = 0x00000100, // If a thread does not have enough stack, WaitUntilGCComplete may fail.
1121 // Either GC suspension will wait until the thread has cleared this bit,
1122 // Or the current thread is going to spin if GC has suspended all threads.
1123 TS_Background = 0x00000200, // Thread is a background thread
1124 TS_Unstarted = 0x00000400, // Thread has never been started
1125 TS_Dead = 0x00000800, // Thread is dead
1127 TS_WeOwn = 0x00001000, // Exposed object initiated this thread
1128 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
1129 TS_CoInitialized = 0x00002000, // CoInitialize has been called for this thread
1131 TS_InSTA = 0x00004000, // Thread hosts an STA
1132 TS_InMTA = 0x00008000, // Thread is part of the MTA
1133 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
1135 // Some bits that only have meaning for reporting the state to clients.
1136 TS_ReportDead = 0x00010000, // in WaitForOtherThreads()
1137 TS_FullyInitialized = 0x00020000, // Thread is fully initialized and we are ready to broadcast its existence to external clients
1139 TS_TaskReset = 0x00040000, // The task is reset
1141 TS_SyncSuspended = 0x00080000, // Suspended via WaitSuspendEvent
1142 TS_DebugWillSync = 0x00100000, // Debugger will wait for this thread to sync
1144 TS_StackCrawlNeeded = 0x00200000, // A stackcrawl is needed on this thread, such as for thread abort
1145 // See comment for s_pWaitForStackCrawlEvent for reason.
1147 TS_SuspendUnstarted = 0x00400000, // latch a user suspension on an unstarted thread
1149 TS_Aborted = 0x00800000, // is the thread aborted?
1150 TS_TPWorkerThread = 0x01000000, // is this a threadpool worker thread?
1152 TS_Interruptible = 0x02000000, // sitting in a Sleep(), Wait(), Join()
1153 TS_Interrupted = 0x04000000, // was awakened by an interrupt APC. !!! This can be moved to TSNC
1155 TS_CompletionPortThread = 0x08000000, // Completion port thread
1157 TS_AbortInitiated = 0x10000000, // set when abort is begun
1159 TS_Finalized = 0x20000000, // The associated managed Thread object has been finalized.
1160 // We can clean up the unmanaged part now.
1162 TS_FailStarted = 0x40000000, // The thread fails during startup.
1163 TS_Detached = 0x80000000, // Thread was detached by DllMain
1165 // <TODO> @TODO: We need to reclaim the bits that have no concurrency issues (i.e. they are only
1166 // manipulated by the owning thread) and move them off to a different DWORD. Note if this
1167 // enum is changed, we also need to update SOS to reflect this.</TODO>
1169 // We require (and assert) that the following bits are less than 0x100.
1170 TS_CatchAtSafePoint = (TS_UserSuspendPending | TS_AbortRequested |
1171 TS_GCSuspendPending | TS_DebugSuspendPending | TS_GCOnTransitions),
1174 // Thread flags that aren't really states in themselves but rather things the thread
1178 TT_CleanupSyncBlock = 0x00000001, // The synch block needs to be cleaned up.
1179 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
1180 TT_CallCoInitialize = 0x00000002, // CoInitialize needs to be called.
1181 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
1184 // Thread flags that have no concurrency issues (i.e., they are only manipulated by the owning thread). Use these
1185 // state flags when you have a new thread state that doesn't belong in the ThreadState enum above.
1187 // <TODO>@TODO: its possible that the ThreadTasks from above and these flags should be merged.</TODO>
1188 enum ThreadStateNoConcurrency
1190 TSNC_Unknown = 0x00000000, // threads are initialized this way
1192 TSNC_DebuggerUserSuspend = 0x00000001, // marked "suspended" by the debugger
1193 TSNC_DebuggerReAbort = 0x00000002, // thread needs to re-abort itself when resumed by the debugger
1194 TSNC_DebuggerIsStepping = 0x00000004, // debugger is stepping this thread
1195 TSNC_DebuggerIsManagedException = 0x00000008, // EH is re-raising a managed exception.
1196 TSNC_WaitUntilGCFinished = 0x00000010, // The current thread is waiting for GC. If host returns
1197 // SO during wait, we will either spin or make GC wait.
1198 TSNC_BlockedForShutdown = 0x00000020, // Thread is blocked in WaitForEndOfShutdown. We should not hit WaitForEndOfShutdown again.
1199 TSNC_SOWorkNeeded = 0x00000040, // The thread needs to wake up AD unload helper thread to finish SO work
1200 TSNC_CLRCreatedThread = 0x00000080, // The thread was created through Thread::CreateNewThread
1201 TSNC_ExistInThreadStore = 0x00000100, // For dtor to know if it needs to be removed from ThreadStore
1202 TSNC_UnsafeSkipEnterCooperative = 0x00000200, // This is a "fix" for deadlocks caused when cleaning up COM
1203 TSNC_OwnsSpinLock = 0x00000400, // The thread owns a spinlock.
1204 TSNC_PreparingAbort = 0x00000800, // Preparing abort. This avoids recursive HandleThreadAbort call.
1205 TSNC_OSAlertableWait = 0x00001000, // Preparing abort. This avoids recursive HandleThreadAbort call.
1206 // unused = 0x00002000,
1207 TSNC_CreatingTypeInitException = 0x00004000, // Thread is trying to create a TypeInitException
1208 // unused = 0x00008000,
1209 // unused = 0x00010000,
1210 TSNC_InRestoringSyncBlock = 0x00020000, // The thread is restoring its SyncBlock for Object.Wait.
1211 // After the thread is interrupted once, we turn off interruption
1212 // at the beginning of wait.
1213 // unused = 0x00040000,
1214 TSNC_CannotRecycle = 0x00080000, // A host can not recycle this Thread object. When a thread
1215 // has orphaned lock, we will apply this.
1216 TSNC_RaiseUnloadEvent = 0x00100000, // Finalize thread is raising managed unload event which
1217 // may call AppDomain.Unload.
1218 TSNC_UnbalancedLocks = 0x00200000, // Do not rely on lock accounting for this thread:
1219 // we left an app domain with a lock count different from
1220 // when we entered it
1221 // unused = 0x00400000,
1222 TSNC_IgnoreUnhandledExceptions = 0x00800000, // Set for a managed thread born inside an appdomain created with the APPDOMAIN_IGNORE_UNHANDLED_EXCEPTIONS flag.
1223 TSNC_ProcessedUnhandledException = 0x01000000,// Set on a thread on which we have done unhandled exception processing so that
1224 // we dont perform it again when OS invokes our UEF. Currently, applicable threads include:
1225 // 1) entry point thread of a managed app
1226 // 2) new managed thread created in default domain
1228 // For such threads, we will return to the OS after our UE processing is done
1229 // and the OS will start invoking the UEFs. If our UEF gets invoked, it will try to
1230 // perform the UE processing again. We will use this flag to prevent the duplicated
1233 // Once we are completely independent of the OS UEF, we could remove this.
1234 TSNC_InsideSyncContextWait = 0x02000000, // Whether we are inside DoSyncContextWait
1235 TSNC_DebuggerSleepWaitJoin = 0x04000000, // Indicates to the debugger that this thread is in a sleep wait or join state
1236 // This almost mirrors the TS_Interruptible state however that flag can change
1237 // during GC-preemptive mode whereas this one cannot.
1238 #ifdef FEATURE_COMINTEROP
1239 TSNC_WinRTInitialized = 0x08000000, // the thread has initialized WinRT
1240 #endif // FEATURE_COMINTEROP
1242 // TSNC_Unused = 0x10000000,
1244 TSNC_CallingManagedCodeDisabled = 0x20000000, // Use by multicore JIT feature to asert on calling managed code/loading module in background thread
1245 // Exception, system module is allowed, security demand is allowed
1247 TSNC_LoadsTypeViolation = 0x40000000, // Use by type loader to break deadlocks caused by type load level ordering violations
1249 TSNC_EtwStackWalkInProgress = 0x80000000, // Set on the thread so that ETW can know that stackwalking is in progress
1250 // and does not proceed with a stackwalk on the same thread
1251 // There are cases during managed debugging when we can run into this situation
1254 // Functions called by host
1255 STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
1256 DAC_EMPTY_RET(E_NOINTERFACE);
1257 STDMETHODIMP_(ULONG) AddRef(void)
1259 STDMETHODIMP_(ULONG) Release(void)
1261 STDMETHODIMP Abort()
1262 DAC_EMPTY_RET(E_FAIL);
1263 STDMETHODIMP RudeAbort()
1264 DAC_EMPTY_RET(E_FAIL);
1265 STDMETHODIMP NeedsPriorityScheduling(BOOL *pbNeedsPriorityScheduling)
1266 DAC_EMPTY_RET(E_FAIL);
1268 STDMETHODIMP YieldTask()
1269 DAC_EMPTY_RET(E_FAIL);
1270 STDMETHODIMP LocksHeld(SIZE_T *pLockCount)
1271 DAC_EMPTY_RET(E_FAIL);
1273 STDMETHODIMP BeginPreventAsyncAbort()
1274 DAC_EMPTY_RET(E_FAIL);
1275 STDMETHODIMP EndPreventAsyncAbort()
1276 DAC_EMPTY_RET(E_FAIL);
1278 void InternalReset (BOOL fNotFinalizerThread=FALSE, BOOL fThreadObjectResetNeeded=TRUE, BOOL fResetAbort=TRUE);
1279 INT32 ResetManagedThreadObject(INT32 nPriority);
1280 INT32 ResetManagedThreadObjectInCoopMode(INT32 nPriority);
1281 BOOL IsRealThreadPoolResetNeeded();
1283 HRESULT DetachThread(BOOL fDLLThreadDetach);
1285 void SetThreadState(ThreadState ts)
1287 LIMITED_METHOD_CONTRACT;
1288 FastInterlockOr((DWORD*)&m_State, ts);
1291 void ResetThreadState(ThreadState ts)
1293 LIMITED_METHOD_CONTRACT;
1294 FastInterlockAnd((DWORD*)&m_State, ~ts);
1297 BOOL HasThreadState(ThreadState ts)
1299 LIMITED_METHOD_CONTRACT;
1300 return ((DWORD)m_State & ts);
1304 // This is meant to be used for quick opportunistic checks for thread abort and similar conditions. This method
1305 // does not erect memory barrier and so it may return wrong result sometime that the caller has to handle.
1307 BOOL HasThreadStateOpportunistic(ThreadState ts)
1309 LIMITED_METHOD_CONTRACT;
1310 return m_State.LoadWithoutBarrier() & ts;
1313 void SetThreadStateNC(ThreadStateNoConcurrency tsnc)
1315 LIMITED_METHOD_CONTRACT;
1316 m_StateNC = (ThreadStateNoConcurrency)((DWORD)m_StateNC | tsnc);
1319 void ResetThreadStateNC(ThreadStateNoConcurrency tsnc)
1321 LIMITED_METHOD_CONTRACT;
1322 m_StateNC = (ThreadStateNoConcurrency)((DWORD)m_StateNC & ~tsnc);
1325 BOOL HasThreadStateNC(ThreadStateNoConcurrency tsnc)
1327 LIMITED_METHOD_DAC_CONTRACT;
1328 return ((DWORD)m_StateNC & tsnc);
1331 void MarkEtwStackWalkInProgress()
1333 WRAPPER_NO_CONTRACT;
1334 SetThreadStateNC(Thread::TSNC_EtwStackWalkInProgress);
1337 void MarkEtwStackWalkCompleted()
1339 WRAPPER_NO_CONTRACT;
1340 ResetThreadStateNC(Thread::TSNC_EtwStackWalkInProgress);
1343 BOOL IsEtwStackWalkInProgress()
1345 WRAPPER_NO_CONTRACT;
1346 return HasThreadStateNC(Thread::TSNC_EtwStackWalkInProgress);
1349 DWORD RequireSyncBlockCleanup()
1351 LIMITED_METHOD_CONTRACT;
1352 return (m_ThreadTasks & TT_CleanupSyncBlock);
1355 void SetSyncBlockCleanup()
1357 LIMITED_METHOD_CONTRACT;
1358 FastInterlockOr((ULONG *)&m_ThreadTasks, TT_CleanupSyncBlock);
1361 void ResetSyncBlockCleanup()
1363 LIMITED_METHOD_CONTRACT;
1364 FastInterlockAnd((ULONG *)&m_ThreadTasks, ~TT_CleanupSyncBlock);
1367 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
1368 DWORD IsCoInitialized()
1370 LIMITED_METHOD_CONTRACT;
1371 return (m_State & TS_CoInitialized);
1374 void SetCoInitialized()
1376 LIMITED_METHOD_CONTRACT;
1377 FastInterlockOr((ULONG *)&m_State, TS_CoInitialized);
1378 FastInterlockAnd((ULONG*)&m_ThreadTasks, ~TT_CallCoInitialize);
1381 void ResetCoInitialized()
1383 LIMITED_METHOD_CONTRACT;
1384 FastInterlockAnd((ULONG *)&m_State,~TS_CoInitialized);
1387 #ifdef FEATURE_COMINTEROP
1388 BOOL IsWinRTInitialized()
1390 LIMITED_METHOD_CONTRACT;
1391 return HasThreadStateNC(TSNC_WinRTInitialized);
1394 void ResetWinRTInitialized()
1396 LIMITED_METHOD_CONTRACT;
1397 ResetThreadStateNC(TSNC_WinRTInitialized);
1399 #endif // FEATURE_COMINTEROP
1401 DWORD RequiresCoInitialize()
1403 LIMITED_METHOD_CONTRACT;
1404 return (m_ThreadTasks & TT_CallCoInitialize);
1407 void SetRequiresCoInitialize()
1409 LIMITED_METHOD_CONTRACT;
1410 FastInterlockOr((ULONG *)&m_ThreadTasks, TT_CallCoInitialize);
1413 void ResetRequiresCoInitialize()
1415 LIMITED_METHOD_CONTRACT;
1416 FastInterlockAnd((ULONG *)&m_ThreadTasks,~TT_CallCoInitialize);
1419 void CleanupCOMState();
1421 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
1423 #ifdef FEATURE_COMINTEROP
1424 bool IsDisableComObjectEagerCleanup()
1426 LIMITED_METHOD_CONTRACT;
1427 return m_fDisableComObjectEagerCleanup;
1429 void SetDisableComObjectEagerCleanup()
1431 LIMITED_METHOD_CONTRACT;
1432 m_fDisableComObjectEagerCleanup = true;
1434 #endif //FEATURE_COMINTEROP
1436 #ifndef DACCESS_COMPILE
1437 bool HasDeadThreadBeenConsideredForGCTrigger()
1439 LIMITED_METHOD_CONTRACT;
1442 return m_fHasDeadThreadBeenConsideredForGCTrigger;
1445 void SetHasDeadThreadBeenConsideredForGCTrigger()
1447 LIMITED_METHOD_CONTRACT;
1450 m_fHasDeadThreadBeenConsideredForGCTrigger = true;
1452 #endif // !DACCESS_COMPILE
1454 // returns if there is some extra work for the finalizer thread.
1455 BOOL HaveExtraWorkForFinalizer();
1457 // do the extra finalizer work.
1458 void DoExtraWorkForFinalizer();
1460 #ifndef DACCESS_COMPILE
1461 DWORD CatchAtSafePoint()
1463 LIMITED_METHOD_CONTRACT;
1464 return (m_State & TS_CatchAtSafePoint);
1467 DWORD CatchAtSafePointOpportunistic()
1469 LIMITED_METHOD_CONTRACT;
1470 return HasThreadStateOpportunistic(TS_CatchAtSafePoint);
1472 #endif // DACCESS_COMPILE
1474 DWORD IsBackground()
1476 LIMITED_METHOD_CONTRACT;
1477 return (m_State & TS_Background);
1482 LIMITED_METHOD_CONTRACT;
1484 return (m_State & TS_Unstarted);
1489 LIMITED_METHOD_CONTRACT;
1490 return (m_State & TS_Dead);
1495 LIMITED_METHOD_CONTRACT;
1496 return (m_State & TS_Aborted);
1501 FastInterlockOr((ULONG *) &m_State, TS_Aborted);
1506 FastInterlockAnd((ULONG *) &m_State, ~TS_Aborted);
1511 LIMITED_METHOD_CONTRACT;
1512 return (m_State & TS_WeOwn);
1515 // For reporting purposes, grab a consistent snapshot of the thread's state
1516 ThreadState GetSnapshotState();
1518 // For delayed destruction of threads
1521 LIMITED_METHOD_CONTRACT;
1522 return (m_State & TS_Detached);
1525 static LONG m_DetachCount;
1526 static LONG m_ActiveDetachCount; // Count how many non-background detached
1528 static Volatile<LONG> m_threadsAtUnsafePlaces;
1530 // Offsets for the following variables need to fit in 1 byte, so keep near
1531 // the top of the object. Also, we want cache line filling to work for us
1532 // so the critical stuff is ordered based on frequency of use.
1534 Volatile<ThreadState> m_State; // Bits for the state of the thread
1536 // If TRUE, GC is scheduled cooperatively with this thread.
1537 // NOTE: This "byte" is actually a boolean - we don't allow
1538 // recursive disables.
1539 Volatile<ULONG> m_fPreemptiveGCDisabled;
1541 PTR_Frame m_pFrame; // The Current Frame
1543 //-----------------------------------------------------------
1544 // If the thread has wandered in from the outside this is
1546 //-----------------------------------------------------------
1547 PTR_AppDomain m_pDomain;
1549 // Track the number of locks (critical section, spin lock, syncblock lock,
1550 // EE Crst, GC lock) held by the current thread.
1551 DWORD m_dwLockCount;
1553 // Unique thread id used for thin locks - kept as small as possible, as we have limited space
1554 // in the object header to store it.
1560 LockEntry m_embeddedEntry;
1562 #ifndef DACCESS_COMPILE
1563 Frame* NotifyFrameChainOfExceptionUnwind(Frame* pStartFrame, LPVOID pvLimitSP);
1564 #endif // DACCESS_COMPILE
1566 #if defined(FEATURE_COMINTEROP) && !defined(DACCESS_COMPILE)
1567 void RegisterRCW(RCW *pRCW)
1574 PRECONDITION(CheckPointer(pRCW));
1578 if (!m_pRCWStack->Push(pRCW))
1584 // Returns false on OOM.
1585 BOOL RegisterRCWNoThrow(RCW *pRCW)
1592 PRECONDITION(CheckPointer(pRCW, NULL_OK));
1596 return m_pRCWStack->Push(pRCW);
1599 RCW *UnregisterRCW(INDEBUG(SyncBlock *pSB))
1606 PRECONDITION(CheckPointer(pSB));
1610 RCW* pPoppedRCW = m_pRCWStack->Pop();
1613 // The RCW we popped must be the one pointed to by pSB if pSB still points to an RCW.
1614 RCW* pCurrentRCW = pSB->GetInteropInfoNoCreate()->GetRawRCW();
1615 _ASSERTE(pCurrentRCW == NULL || pPoppedRCW == NULL || pCurrentRCW == pPoppedRCW);
1621 BOOL RCWIsInUse(RCW* pRCW)
1628 PRECONDITION(CheckPointer(pRCW));
1632 return m_pRCWStack->IsInStack(pRCW);
1634 #endif // FEATURE_COMINTEROP && !DACCESS_COMPILE
1636 // Lock thread is trying to acquire
1637 VolatilePtr<DeadlockAwareLock> m_pBlockingLock;
1641 // on MP systems, each thread has its own allocation chunk so we can avoid
1642 // lock prefixes and expensive MP cache snooping stuff
1643 gc_alloc_context m_alloc_context;
1645 inline gc_alloc_context *GetAllocContext() { LIMITED_METHOD_CONTRACT; return &m_alloc_context; }
1647 // This is the type handle of the first object in the alloc context at the time
1648 // we fire the AllocationTick event. It's only for tooling purpose.
1649 TypeHandle m_thAllocContextObj;
1656 LIMITED_METHOD_CONTRACT;
1659 PEXCEPTION_REGISTRATION_RECORD *GetExceptionListPtr() {
1660 WRAPPER_NO_CONTRACT;
1661 return &GetTEB()->ExceptionList;
1663 #endif // !FEATURE_PAL
1665 inline void SetTHAllocContextObj(TypeHandle th) {LIMITED_METHOD_CONTRACT; m_thAllocContextObj = th; }
1667 inline TypeHandle GetTHAllocContextObj() {LIMITED_METHOD_CONTRACT; return m_thAllocContextObj; }
1669 #ifdef FEATURE_COMINTEROP
1670 // The header for the per-thread in-use RCW stack.
1671 RCWStackHeader* m_pRCWStack;
1672 #endif // FEATURE_COMINTEROP
1674 // Allocator used during marshaling for temporary buffers, much faster than
1677 // Uses of this allocator should be effectively statically scoped, i.e. a "region"
1678 // is started using a CheckPointHolder and GetCheckpoint, and this region can then be used for allocations
1679 // from that point onwards, and then all memory is reclaimed when the static scope for the
1680 // checkpoint is exited by the running thread.
1681 StackingAllocator* m_stackLocalAllocator = NULL;
1683 // Flags used to indicate tasks the thread has to do.
1684 ThreadTasks m_ThreadTasks;
1686 // Flags for thread states that have no concurrency issues.
1687 ThreadStateNoConcurrency m_StateNC;
1689 inline void IncLockCount();
1690 inline void DecLockCount();
1693 DWORD m_dwBeginLockCount; // lock count when the thread enters current domain
1696 DWORD dbg_m_cSuspendedThreads;
1697 // Count of suspended threads that we know are not in native code (and therefore cannot hold OS lock which prevents us calling out to host)
1698 DWORD dbg_m_cSuspendedThreadsWithoutOSLock;
1699 EEThreadId m_Creater;
1702 // After we suspend a thread, we may need to call EEJitManager::JitCodeToMethodInfo
1703 // or StressLog which may waits on a spinlock. It is unsafe to suspend a thread while it
1704 // is in this state.
1705 Volatile<LONG> m_dwForbidSuspendThread;
1708 static void IncForbidSuspendThread()
1718 #ifndef DACCESS_COMPILE
1719 Thread * pThread = GetThreadNULLOk();
1722 _ASSERTE (pThread->m_dwForbidSuspendThread != (LONG)MAXLONG);
1726 STRESS_LOG2(LF_SYNC, LL_INFO100000, "Set forbid suspend [%d] for thread %p.\n", pThread->m_dwForbidSuspendThread.Load(), pThread);
1729 FastInterlockIncrement(&pThread->m_dwForbidSuspendThread);
1731 #endif //!DACCESS_COMPILE
1734 static void DecForbidSuspendThread()
1744 #ifndef DACCESS_COMPILE
1745 Thread * pThread = GetThreadNULLOk();
1748 _ASSERTE (pThread->m_dwForbidSuspendThread != (LONG)0);
1749 FastInterlockDecrement(&pThread->m_dwForbidSuspendThread);
1753 STRESS_LOG2(LF_SYNC, LL_INFO100000, "Reset forbid suspend [%d] for thread %p.\n", pThread->m_dwForbidSuspendThread.Load(), pThread);
1757 #endif //!DACCESS_COMPILE
1760 bool IsInForbidSuspendRegion()
1762 return m_dwForbidSuspendThread != (LONG)0;
1765 typedef StateHolder<Thread::IncForbidSuspendThread, Thread::DecForbidSuspendThread> ForbidSuspendThreadHolder;
1768 // Per thread counter to dispense hash code - kept in the thread so we don't need a lock
1769 // or interlocked operations to get a new hash code;
1770 DWORD m_dwHashCodeSeed;
1774 inline BOOL HasLockInCurrentDomain()
1776 LIMITED_METHOD_CONTRACT;
1778 _ASSERTE(m_dwLockCount >= m_dwBeginLockCount);
1780 // Equivalent to (m_dwLockCount != m_dwBeginLockCount ||
1781 // m_dwCriticalRegionCount ! m_dwBeginCriticalRegionCount),
1782 // but without branching instructions
1783 BOOL fHasLock = (m_dwLockCount ^ m_dwBeginLockCount);
1788 inline BOOL HasCriticalRegion()
1790 LIMITED_METHOD_CONTRACT;
1794 inline DWORD GetNewHashCode()
1796 LIMITED_METHOD_CONTRACT;
1797 // Every thread has its own generator for hash codes so that we won't get into a situation
1798 // where two threads consistently give out the same hash codes.
1799 // Choice of multiplier guarantees period of 2**32 - see Knuth Vol 2 p16 (3.2.1.2 Theorem A).
1800 DWORD multiplier = GetThreadId()*4 + 5;
1801 m_dwHashCodeSeed = m_dwHashCodeSeed*multiplier + 1;
1802 return m_dwHashCodeSeed;
1806 // If the current thread suspends other threads, we need to make sure that the thread
1807 // only allocates memory if the suspended threads do not have OS Heap lock.
1808 static BOOL Debug_AllowCallout()
1810 LIMITED_METHOD_CONTRACT;
1811 Thread * pThread = GetThreadNULLOk();
1812 return ((pThread == NULL) || (pThread->dbg_m_cSuspendedThreads == pThread->dbg_m_cSuspendedThreadsWithoutOSLock));
1815 // Returns number of threads that are currently suspended by the current thread and that can potentially hold OS lock
1816 BOOL Debug_GetUnsafeSuspendeeCount()
1818 LIMITED_METHOD_CONTRACT;
1819 return (dbg_m_cSuspendedThreads - dbg_m_cSuspendedThreadsWithoutOSLock);
1825 BOOL HasThreadAffinity()
1827 LIMITED_METHOD_CONTRACT;
1832 LoadLevelLimiter *m_pLoadLimiter;
1835 LoadLevelLimiter *GetLoadLevelLimiter()
1837 LIMITED_METHOD_CONTRACT;
1838 return m_pLoadLimiter;
1841 void SetLoadLevelLimiter(LoadLevelLimiter *limiter)
1843 LIMITED_METHOD_CONTRACT;
1844 m_pLoadLimiter = limiter;
1851 //--------------------------------------------------------------
1853 //--------------------------------------------------------------
1854 #ifndef DACCESS_COMPILE
1858 //--------------------------------------------------------------
1859 // Failable initialization occurs here.
1860 //--------------------------------------------------------------
1861 BOOL InitThread(BOOL fInternal);
1862 BOOL AllocHandles();
1864 void SetupThreadForHost();
1866 //--------------------------------------------------------------
1867 // If the thread was setup through SetupUnstartedThread, rather
1868 // than SetupThread, complete the setup here when the thread is
1869 // actually running.
1870 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
1871 //--------------------------------------------------------------
1872 BOOL HasStarted(BOOL bRequiresTSL=TRUE);
1874 // We don't want ::CreateThread() calls scattered throughout the source.
1875 // Create all new threads here. The thread is created as suspended, so
1876 // you must ::ResumeThread to kick it off. It is guaranteed to create the
1877 // thread, or throw.
1878 BOOL CreateNewThread(SIZE_T stackSize, LPTHREAD_START_ROUTINE start, void *args, LPCWSTR pName=NULL);
1881 enum StackSizeBucket
1889 // Creates a raw OS thread; use this only for CLR-internal threads that never execute user code.
1890 // StackSizeBucket determines how large the stack should be.
1892 static HANDLE CreateUtilityThread(StackSizeBucket stackSizeBucket, LPTHREAD_START_ROUTINE start, void *args, LPCWSTR pName, DWORD flags = 0, DWORD* pThreadId = NULL);
1894 //--------------------------------------------------------------
1896 //--------------------------------------------------------------
1897 #ifndef DACCESS_COMPILE
1900 virtual ~Thread() {}
1903 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
1904 void CoUninitialize();
1905 void BaseCoUninitialize();
1906 void BaseWinRTUninitialize();
1907 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
1909 void OnThreadTerminate(BOOL holdingLock);
1911 static void CleanupDetachedThreads();
1912 //--------------------------------------------------------------
1913 // Returns innermost active Frame.
1914 //--------------------------------------------------------------
1915 PTR_Frame GetFrame()
1919 #ifndef DACCESS_COMPILE
1921 WRAPPER_NO_CONTRACT;
1922 if (this == GetThreadNULLOk())
1925 curSP = (void *)GetCurrentSP();
1926 _ASSERTE((curSP <= m_pFrame && m_pFrame < m_CacheStackBase) || m_pFrame == (Frame*) -1);
1929 LIMITED_METHOD_CONTRACT;
1932 #endif // #ifndef DACCESS_COMPILE
1936 //--------------------------------------------------------------
1937 // Replaces innermost active Frames.
1938 //--------------------------------------------------------------
1939 #ifndef DACCESS_COMPILE
1940 void SetFrame(Frame *pFrame)
1945 LIMITED_METHOD_CONTRACT;
1951 inline Frame* FindFrame(SIZE_T StackPointer);
1953 bool DetectHandleILStubsForDebugger();
1955 void SetWin32FaultAddress(DWORD eip)
1957 LIMITED_METHOD_CONTRACT;
1958 m_Win32FaultAddress = eip;
1961 void SetWin32FaultCode(DWORD code)
1963 LIMITED_METHOD_CONTRACT;
1964 m_Win32FaultCode = code;
1967 DWORD GetWin32FaultAddress()
1969 LIMITED_METHOD_CONTRACT;
1970 return m_Win32FaultAddress;
1973 DWORD GetWin32FaultCode()
1975 LIMITED_METHOD_CONTRACT;
1976 return m_Win32FaultCode;
1979 #ifdef ENABLE_CONTRACTS
1980 ClrDebugState *GetClrDebugState()
1982 LIMITED_METHOD_CONTRACT;
1983 return m_pClrDebugState;
1987 //**************************************************************
1989 //**************************************************************
1991 //--------------------------------------------------------------
1992 // Enter cooperative GC mode. NOT NESTABLE.
1993 //--------------------------------------------------------------
1994 FORCEINLINE_NONDEBUG void DisablePreemptiveGC()
1996 #ifndef DACCESS_COMPILE
1997 WRAPPER_NO_CONTRACT;
1998 _ASSERTE(this == GetThread());
1999 _ASSERTE(!m_fPreemptiveGCDisabled);
2000 // holding a spin lock in preemp mode and transit to coop mode will cause other threads
2001 // spinning waiting for GC
2002 _ASSERTE ((m_StateNC & Thread::TSNC_OwnsSpinLock) == 0);
2004 #ifdef ENABLE_CONTRACTS_IMPL
2008 // Logically, we just want to check whether a GC is in progress and halt
2009 // at the boundary if it is -- before we disable preemptive GC. However
2010 // this opens up a race condition where the GC starts after we make the
2011 // check. SuspendRuntime will ignore such a thread because it saw it as
2012 // outside the EE. So the thread would run wild during the GC.
2014 // Instead, enter cooperative mode and then check if a GC is in progress.
2015 // If so, go back out and try again. The reason we go back out before we
2016 // try again, is that SuspendRuntime might have seen us as being in
2017 // cooperative mode if it checks us between the next two statements.
2018 // In that case, it will be trying to move us to a safe spot. If
2019 // we don't let it see us leave, it will keep waiting on us indefinitely.
2021 // ------------------------------------------------------------------------
2022 // ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** |
2023 // ------------------------------------------------------------------------
2025 // DO NOT CHANGE THIS METHOD WITHOUT VISITING ALL THE STUB GENERATORS
2026 // THAT EFFECTIVELY INLINE IT INTO THEIR STUBS
2028 // ------------------------------------------------------------------------
2029 // ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** |
2030 // ------------------------------------------------------------------------
2032 m_fPreemptiveGCDisabled.StoreWithoutBarrier(1);
2034 if (g_TrapReturningThreads.LoadWithoutBarrier())
2036 RareDisablePreemptiveGC();
2039 LIMITED_METHOD_CONTRACT;
2043 NOINLINE void RareDisablePreemptiveGC();
2045 void HandleThreadAbort();
2047 void PreWorkForThreadAbort();
2050 void HandleThreadAbortTimeout();
2053 //--------------------------------------------------------------
2054 // Leave cooperative GC mode. NOT NESTABLE.
2055 //--------------------------------------------------------------
2056 FORCEINLINE_NONDEBUG void EnablePreemptiveGC()
2058 LIMITED_METHOD_CONTRACT;
2060 #ifndef DACCESS_COMPILE
2061 _ASSERTE(this == GetThread());
2062 _ASSERTE(m_fPreemptiveGCDisabled);
2063 // holding a spin lock in coop mode and transit to preemp mode will cause deadlock on GC
2064 _ASSERTE ((m_StateNC & Thread::TSNC_OwnsSpinLock) == 0);
2066 #ifdef ENABLE_CONTRACTS_IMPL
2067 _ASSERTE(!GCForbidden());
2071 // ------------------------------------------------------------------------
2072 // ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** |
2073 // ------------------------------------------------------------------------
2075 // DO NOT CHANGE THIS METHOD WITHOUT VISITING ALL THE STUB GENERATORS
2076 // THAT EFFECTIVELY INLINE IT INTO THEIR STUBS
2078 // ------------------------------------------------------------------------
2079 // ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** |
2080 // ------------------------------------------------------------------------
2082 m_fPreemptiveGCDisabled.StoreWithoutBarrier(0);
2083 #ifdef ENABLE_CONTRACTS
2084 m_ulEnablePreemptiveGCCount ++;
2087 if (CatchAtSafePoint())
2088 RareEnablePreemptiveGC();
2092 #if defined(STRESS_HEAP) && defined(_DEBUG)
2093 void PerformPreemptiveGC();
2095 void RareEnablePreemptiveGC();
2098 //--------------------------------------------------------------
2100 //--------------------------------------------------------------
2101 BOOL PreemptiveGCDisabled()
2103 WRAPPER_NO_CONTRACT;
2104 _ASSERTE(this == GetThread());
2106 // m_fPreemptiveGCDisabled is always modified by the thread itself, and so the thread itself
2107 // can read it without memory barrier.
2109 return m_fPreemptiveGCDisabled.LoadWithoutBarrier();
2112 BOOL PreemptiveGCDisabledOther()
2114 LIMITED_METHOD_CONTRACT;
2115 return (m_fPreemptiveGCDisabled);
2118 #ifdef ENABLE_CONTRACTS_IMPL
2120 void BeginNoTriggerGC(const char *szFile, int lineNum)
2122 WRAPPER_NO_CONTRACT;
2123 m_pClrDebugState->IncrementGCNoTriggerCount();
2124 if (PreemptiveGCDisabled())
2126 m_pClrDebugState->IncrementGCForbidCount();
2130 void EndNoTriggerGC()
2132 WRAPPER_NO_CONTRACT;
2133 _ASSERTE(m_pClrDebugState->GetGCNoTriggerCount() != 0 || (m_pClrDebugState->ViolationMask() & BadDebugState));
2134 m_pClrDebugState->DecrementGCNoTriggerCount();
2136 if (m_pClrDebugState->GetGCForbidCount())
2138 m_pClrDebugState->DecrementGCForbidCount();
2142 void BeginForbidGC(const char *szFile, int lineNum)
2144 WRAPPER_NO_CONTRACT;
2145 _ASSERTE(this == GetThread());
2146 #ifdef PROFILING_SUPPORTED
2147 _ASSERTE(PreemptiveGCDisabled()
2148 || CORProfilerPresent() || // This added to allow profiler to use GetILToNativeMapping
2149 // while in preemptive GC mode
2150 (g_fEEShutDown & (ShutDown_Finalize2 | ShutDown_Profiler)) == ShutDown_Finalize2);
2151 #else // PROFILING_SUPPORTED
2152 _ASSERTE(PreemptiveGCDisabled());
2153 #endif // PROFILING_SUPPORTED
2154 BeginNoTriggerGC(szFile, lineNum);
2159 WRAPPER_NO_CONTRACT;
2160 _ASSERTE(this == GetThread());
2161 #ifdef PROFILING_SUPPORTED
2162 _ASSERTE(PreemptiveGCDisabled() ||
2163 CORProfilerPresent() || // This added to allow profiler to use GetILToNativeMapping
2164 // while in preemptive GC mode
2165 (g_fEEShutDown & (ShutDown_Finalize2 | ShutDown_Profiler)) == ShutDown_Finalize2);
2166 #else // PROFILING_SUPPORTED
2167 _ASSERTE(PreemptiveGCDisabled());
2168 #endif // PROFILING_SUPPORTED
2174 WRAPPER_NO_CONTRACT;
2175 _ASSERTE(this == GetThread());
2176 if ( (GCViolation|BadDebugState) & m_pClrDebugState->ViolationMask() )
2180 return m_pClrDebugState->GetGCNoTriggerCount();
2185 WRAPPER_NO_CONTRACT;
2186 _ASSERTE(this == GetThread());
2187 if ( (GCViolation|BadDebugState) & m_pClrDebugState->ViolationMask())
2191 return m_pClrDebugState->GetGCForbidCount();
2194 BOOL RawGCNoTrigger()
2196 LIMITED_METHOD_CONTRACT;
2197 if (m_pClrDebugState->ViolationMask() & BadDebugState)
2201 return m_pClrDebugState->GetGCNoTriggerCount();
2204 BOOL RawGCForbidden()
2206 LIMITED_METHOD_CONTRACT;
2207 if (m_pClrDebugState->ViolationMask() & BadDebugState)
2211 return m_pClrDebugState->GetGCForbidCount();
2213 #endif // ENABLE_CONTRACTS_IMPL
2215 //---------------------------------------------------------------
2216 // Expose key offsets and values for stub generation.
2217 //---------------------------------------------------------------
2218 static BYTE GetOffsetOfCurrentFrame()
2220 LIMITED_METHOD_CONTRACT;
2221 size_t ofs = offsetof(class Thread, m_pFrame);
2222 _ASSERTE(FitsInI1(ofs));
2226 static BYTE GetOffsetOfState()
2228 LIMITED_METHOD_CONTRACT;
2229 size_t ofs = offsetof(class Thread, m_State);
2230 _ASSERTE(FitsInI1(ofs));
2234 static BYTE GetOffsetOfGCFlag()
2236 LIMITED_METHOD_CONTRACT;
2237 size_t ofs = offsetof(class Thread, m_fPreemptiveGCDisabled);
2238 _ASSERTE(FitsInI1(ofs));
2242 static void StaticDisablePreemptiveGC( Thread *pThread)
2244 WRAPPER_NO_CONTRACT;
2245 _ASSERTE(pThread != NULL);
2246 pThread->DisablePreemptiveGC();
2249 static void StaticEnablePreemptiveGC( Thread *pThread)
2251 WRAPPER_NO_CONTRACT;
2252 _ASSERTE(pThread != NULL);
2253 pThread->EnablePreemptiveGC();
2257 //---------------------------------------------------------------
2258 // Expose offset of the app domain word for the interop and delegate callback
2259 //---------------------------------------------------------------
2260 static SIZE_T GetOffsetOfAppDomain()
2262 LIMITED_METHOD_CONTRACT;
2263 return (SIZE_T)(offsetof(class Thread, m_pDomain));
2266 //---------------------------------------------------------------
2267 // Expose offset of the place for storing the filter context for the debugger.
2268 //---------------------------------------------------------------
2269 static SIZE_T GetOffsetOfDebuggerFilterContext()
2271 LIMITED_METHOD_CONTRACT;
2272 return (SIZE_T)(offsetof(class Thread, m_debuggerFilterContext));
2275 //---------------------------------------------------------------
2276 // Expose offset of the debugger cant stop count for the debugger
2277 //---------------------------------------------------------------
2278 static SIZE_T GetOffsetOfCantStop()
2280 LIMITED_METHOD_CONTRACT;
2281 return (SIZE_T)(offsetof(class Thread, m_debuggerCantStop));
2284 //---------------------------------------------------------------
2285 // Expose offset of m_StateNC
2286 //---------------------------------------------------------------
2287 static SIZE_T GetOffsetOfStateNC()
2289 LIMITED_METHOD_CONTRACT;
2290 return (SIZE_T)(offsetof(class Thread, m_StateNC));
2293 //---------------------------------------------------------------
2294 // Last exception to be thrown
2295 //---------------------------------------------------------------
2296 inline void SetThrowable(OBJECTREF pThrowable
2297 DEBUG_ARG(ThreadExceptionState::SetThrowableErrorChecking stecFlags = ThreadExceptionState::STEC_All));
2299 OBJECTREF GetThrowable()
2301 WRAPPER_NO_CONTRACT;
2303 return m_ExceptionState.GetThrowable();
2306 // An unmnaged thread can check if a managed is processing an exception
2309 LIMITED_METHOD_CONTRACT;
2310 OBJECTHANDLE pThrowable = m_ExceptionState.GetThrowableAsHandle();
2311 return pThrowable && *PTR_UNCHECKED_OBJECTREF(pThrowable);
2314 OBJECTHANDLE GetThrowableAsHandle()
2316 LIMITED_METHOD_CONTRACT;
2317 return m_ExceptionState.GetThrowableAsHandle();
2320 // special null test (for use when we're in the wrong GC mode)
2321 BOOL IsThrowableNull()
2323 WRAPPER_NO_CONTRACT;
2324 return IsHandleNullUnchecked(m_ExceptionState.GetThrowableAsHandle());
2327 BOOL IsExceptionInProgress()
2330 LIMITED_METHOD_CONTRACT;
2331 return m_ExceptionState.IsExceptionInProgress();
2335 void SyncManagedExceptionState(bool fIsDebuggerThread);
2337 //---------------------------------------------------------------
2338 // Per-thread information used by handler
2339 //---------------------------------------------------------------
2340 // exception handling info stored in thread
2341 // can't allocate this as needed because can't make exception-handling depend upon memory allocation
2343 PTR_ThreadExceptionState GetExceptionState()
2345 LIMITED_METHOD_CONTRACT;
2348 return PTR_ThreadExceptionState(PTR_HOST_MEMBER_TADDR(Thread, this, m_ExceptionState));
2353 void DECLSPEC_NORETURN RaiseCrossContextException(Exception* pEx, ContextTransitionFrame* pFrame);
2355 // ClearContext are to be called only during shutdown
2356 void ClearContext();
2359 // don't ever call these except when creating thread!!!!!
2363 PTR_AppDomain GetDomain(INDEBUG(BOOL fMidContextTransitionOK = FALSE))
2365 LIMITED_METHOD_DAC_CONTRACT;
2370 //---------------------------------------------------------------
2371 // Track use of the thread block. See the general comments on
2372 // thread destruction in threads.cpp, for details.
2373 //---------------------------------------------------------------
2374 int IncExternalCount();
2375 int DecExternalCount(BOOL holdingLock);
2378 //---------------------------------------------------------------
2379 // !!!! THESE ARE NOT SAFE FOR GENERAL USE !!!!
2380 // IncExternalCountDANGEROUSProfilerOnly()
2381 // DecExternalCountDANGEROUSProfilerOnly()
2382 // Currently only the profiler API should be using these
2383 // functions, because the profiler is responsible for ensuring
2384 // that the thread exists, undestroyed, before operating on it.
2385 // All other clients should use IncExternalCount/DecExternalCount
2387 //---------------------------------------------------------------
2388 int IncExternalCountDANGEROUSProfilerOnly()
2390 LIMITED_METHOD_CONTRACT;
2397 FastInterlockIncrement((LONG*)&m_ExternalRefCount);
2400 // This should never be called on a thread being destroyed
2401 _ASSERTE(cRefs != 1);
2406 int DecExternalCountDANGEROUSProfilerOnly()
2408 LIMITED_METHOD_CONTRACT;
2415 FastInterlockDecrement((LONG*)&m_ExternalRefCount);
2418 // This should never cause the last reference on the thread to be released
2419 _ASSERTE(cRefs != 0);
2424 // Get and Set the exposed System.Thread object which corresponds to
2425 // this thread. Also the thread handle and Id.
2426 OBJECTREF GetExposedObject();
2427 OBJECTREF GetExposedObjectRaw();
2428 void SetExposedObject(OBJECTREF exposed);
2429 OBJECTHANDLE GetExposedObjectHandleForDebugger()
2431 LIMITED_METHOD_CONTRACT;
2432 return m_ExposedObject;
2435 // Query whether the exposed object exists
2436 BOOL IsExposedObjectSet()
2445 return (ObjectFromHandle(m_ExposedObject) != NULL) ;
2448 void GetSynchronizationContext(OBJECTREF *pSyncContextObj)
2455 PRECONDITION(CheckPointer(pSyncContextObj));
2459 *pSyncContextObj = NULL;
2461 THREADBASEREF ExposedThreadObj = (THREADBASEREF)GetExposedObjectRaw();
2462 if (ExposedThreadObj != NULL)
2463 *pSyncContextObj = ExposedThreadObj->GetSynchronizationContext();
2467 // When we create a managed thread, the thread is suspended. We call StartThread to get
2468 // the thread start.
2469 DWORD StartThread();
2471 // The result of attempting to OS-suspend an EE thread.
2472 enum SuspendThreadResult
2474 // We successfully suspended the thread. This is the only
2475 // case where the caller should subsequently call ResumeThread.
2478 // The underlying call to the operating system's SuspendThread
2479 // or GetThreadContext failed. This is usually taken to mean
2480 // that the OS thread has exited. (This can possibly also mean
2482 // that the suspension count exceeded the allowed maximum, but
2483 // Thread::SuspendThread asserts that does not happen.)
2486 // The thread handle is invalid. This means that the thread
2487 // is dead (or dying), or that the object has been created for
2488 // an exposed System.Thread that has not been started yet.
2489 STR_UnstartedOrDead,
2491 // The fOneTryOnly flag was set, and we managed to OS suspend the
2492 // thread, but we found that it had its m_dwForbidSuspendThread
2493 // flag set. If fOneTryOnly is not set, Thread::Suspend will
2494 // retry in this case.
2497 // Stress logging is turned on, but no stress log had been created
2498 // for the thread yet, and we failed to create one. This can mean
2499 // that either we are not allowed to call into the host, or we ran
2503 // The EE thread is currently switched out. This can only happen
2504 // if we are hosted and the host schedules EE threads on fibers.
2508 #if defined(FEATURE_HIJACK) && defined(PLATFORM_UNIX)
2509 bool InjectGcSuspension();
2510 #endif // FEATURE_HIJACK && PLATFORM_UNIX
2512 #ifndef DISABLE_THREADSUSPEND
2514 // Attempts to OS-suspend the thread, whichever GC mode it is in.
2516 // fOneTryOnly - If TRUE, report failure if the thread has its
2517 // m_dwForbidSuspendThread flag set. If FALSE, retry.
2518 // pdwSuspendCount - If non-NULL, will contain the return code
2519 // of the underlying OS SuspendThread call on success,
2520 // undefined on any kind of failure.
2522 // A SuspendThreadResult value indicating success or failure.
2523 SuspendThreadResult SuspendThread(BOOL fOneTryOnly = FALSE, DWORD *pdwSuspendCount = NULL);
2525 DWORD ResumeThread();
2527 #endif // DISABLE_THREADSUSPEND
2529 int GetThreadPriority();
2530 BOOL SetThreadPriority(
2531 int nPriority // thread priority level
2534 DWORD Join(DWORD timeout, BOOL alertable);
2535 DWORD JoinEx(DWORD timeout, WaitMode mode);
2537 BOOL GetThreadContext(
2538 LPCONTEXT lpContext // context structure
2541 WRAPPER_NO_CONTRACT;
2542 return ::GetThreadContext (GetThreadHandle(), lpContext);
2545 #ifndef DACCESS_COMPILE
2546 BOOL SetThreadContext(
2547 CONST CONTEXT *lpContext // context structure
2550 WRAPPER_NO_CONTRACT;
2551 return ::SetThreadContext (GetThreadHandle(), lpContext);
2555 BOOL HasValidThreadHandle ()
2557 WRAPPER_NO_CONTRACT;
2558 return GetThreadHandle() != INVALID_HANDLE_VALUE;
2563 LIMITED_METHOD_DAC_CONTRACT;
2564 _ASSERTE(m_ThreadId != UNINITIALIZED_THREADID);
2568 // The actual OS thread ID may be 64 bit on some platforms but
2569 // the runtime has historically used 32 bit IDs. We continue to
2570 // downcast by default to limit the impact but GetOSThreadId64()
2571 // is available for code-paths which correctly handle it.
2572 DWORD GetOSThreadId()
2574 LIMITED_METHOD_CONTRACT;
2576 #ifndef DACCESS_COMPILE
2577 _ASSERTE (m_OSThreadId != 0xbaadf00d);
2578 #endif // !DACCESS_COMPILE
2579 return (DWORD)m_OSThreadId;
2582 // Allows access to the full 64 bit id on platforms which use it
2583 SIZE_T GetOSThreadId64()
2585 LIMITED_METHOD_CONTRACT;
2587 #ifndef DACCESS_COMPILE
2588 _ASSERTE(m_OSThreadId != 0xbaadf00d);
2589 #endif // !DACCESS_COMPILE
2590 return m_OSThreadId;
2593 // This API is to be used for Debugger only.
2594 // We need to be able to return the true value of m_OSThreadId.
2595 // On platforms with 64 bit thread IDs we downcast to 32 bit.
2597 DWORD GetOSThreadIdForDebugger()
2600 LIMITED_METHOD_CONTRACT;
2601 return (DWORD) m_OSThreadId;
2604 BOOL IsThreadPoolThread()
2606 LIMITED_METHOD_CONTRACT;
2607 return m_State & (Thread::TS_TPWorkerThread | Thread::TS_CompletionPortThread);
2610 // public suspend functions. System ones are internal, like for GC. User ones
2611 // correspond to suspend/resume calls on the exposed System.Thread object.
2612 static bool SysStartSuspendForDebug(AppDomain *pAppDomain);
2613 static bool SysSweepThreadsForDebug(bool forceSync);
2614 static void SysResumeFromDebug(AppDomain *pAppDomain);
2616 void UserSleep(INT32 time);
2618 // AD unload uses ThreadAbort support. We need to distinguish pure ThreadAbort and AD unload
2620 enum ThreadAbortRequester
2622 TAR_Thread = 0x00000001, // Request by Thread
2623 TAR_FuncEval = 0x00000004, // Request by Func-Eval
2624 TAR_ALL = 0xFFFFFFFF,
2630 // Bit mask for tracking which aborts came in and why.
2632 enum ThreadAbortInfo
2634 TAI_ThreadAbort = 0x00000001,
2635 TAI_ThreadRudeAbort = 0x00000004,
2636 TAI_FuncEvalAbort = 0x00000040,
2637 TAI_FuncEvalRudeAbort = 0x00000100,
2640 static const DWORD TAI_AnySafeAbort = (TAI_ThreadAbort |
2644 static const DWORD TAI_AnyRudeAbort = (TAI_ThreadRudeAbort |
2645 TAI_FuncEvalRudeAbort
2648 static const DWORD TAI_AnyFuncEvalAbort = (TAI_FuncEvalAbort |
2649 TAI_FuncEvalRudeAbort
2653 // Specifies type of thread abort.
2656 ULONGLONG m_AbortEndTime;
2657 ULONGLONG m_RudeAbortEndTime;
2658 BOOL m_fRudeAbortInitiated;
2659 LONG m_AbortController;
2661 static ULONGLONG s_NextSelfAbortEndTime;
2663 void SetRudeAbortEndTimeFromEEPolicy();
2665 // This is a spin lock to serialize setting/resetting of AbortType and AbortRequest.
2666 LONG m_AbortRequestLock;
2668 static void LockAbortRequest(Thread *pThread);
2669 static void UnlockAbortRequest(Thread *pThread);
2671 typedef Holder<Thread*, Thread::LockAbortRequest, Thread::UnlockAbortRequest> AbortRequestLockHolder;
2673 static void AcquireAbortControl(Thread *pThread)
2675 LIMITED_METHOD_CONTRACT;
2676 FastInterlockIncrement (&pThread->m_AbortController);
2679 static void ReleaseAbortControl(Thread *pThread)
2681 LIMITED_METHOD_CONTRACT;
2682 _ASSERTE (pThread->m_AbortController > 0);
2683 FastInterlockDecrement (&pThread->m_AbortController);
2686 typedef Holder<Thread*, Thread::AcquireAbortControl, Thread::ReleaseAbortControl> AbortControlHolder;
2690 BOOL m_fRudeAborted;
2691 DWORD m_dwAbortPoint;
2696 enum UserAbort_Client
2699 UAC_Host, // Called by host through IClrTask::Abort
2702 HRESULT UserAbort(ThreadAbortRequester requester,
2703 EEPolicy::ThreadAbortTypes abortType,
2705 UserAbort_Client client
2708 BOOL HandleJITCaseForAbort();
2710 void UserResetAbort(ThreadAbortRequester requester)
2712 InternalResetAbort(requester, FALSE);
2714 void EEResetAbort(ThreadAbortRequester requester)
2716 InternalResetAbort(requester, TRUE);
2720 void InternalResetAbort(ThreadAbortRequester requester, BOOL fResetRudeAbort);
2722 void SetAbortEndTime(ULONGLONG endTime, BOOL fRudeAbort);
2726 ULONGLONG GetAbortEndTime()
2728 WRAPPER_NO_CONTRACT;
2729 return IsRudeAbort()?m_RudeAbortEndTime:m_AbortEndTime;
2732 // We distinguish interrupting a thread between Thread.Interrupt and other usage.
2733 // For Thread.Interrupt usage, we will interrupt an alertable wait using the same
2734 // rule as ReadyForAbort. Wait in EH clause or CER region is not interrupted.
2735 // For other usage, we will try to Abort the thread.
2736 // If we can not do the operation, we will delay until next wait.
2737 enum ThreadInterruptMode
2739 TI_Interrupt = 0x00000001, // Requested by Thread.Interrupt
2740 TI_Abort = 0x00000002, // Requested by Thread.Abort or AppDomain.Unload
2744 BOOL ReadyForAsyncException();
2747 void UserInterrupt(ThreadInterruptMode mode);
2749 BOOL ReadyForAbort()
2751 return ReadyForAsyncException();
2755 BOOL IsFuncEvalAbort();
2757 #if defined(_TARGET_AMD64_) && defined(FEATURE_HIJACK)
2758 BOOL IsSafeToInjectThreadAbort(PTR_CONTEXT pContextToCheck);
2759 #endif // defined(_TARGET_AMD64_) && defined(FEATURE_HIJACK)
2761 inline BOOL IsAbortRequested()
2763 LIMITED_METHOD_CONTRACT;
2764 return (m_State & TS_AbortRequested);
2767 inline BOOL IsAbortInitiated()
2769 LIMITED_METHOD_CONTRACT;
2770 return (m_State & TS_AbortInitiated);
2773 inline BOOL IsRudeAbortInitiated()
2775 LIMITED_METHOD_CONTRACT;
2776 return IsAbortRequested() && m_fRudeAbortInitiated;
2779 inline void SetAbortInitiated()
2781 WRAPPER_NO_CONTRACT;
2782 if (IsRudeAbort()) {
2783 m_fRudeAbortInitiated = TRUE;
2785 FastInterlockOr((ULONG *)&m_State, TS_AbortInitiated);
2786 // The following should be factored better, but I'm looking for a minimal V1 change.
2787 ResetUserInterrupted();
2790 inline void ResetAbortInitiated()
2792 LIMITED_METHOD_CONTRACT;
2793 FastInterlockAnd((ULONG *)&m_State, ~TS_AbortInitiated);
2794 m_fRudeAbortInitiated = FALSE;
2797 inline void SetPreparingAbort()
2799 WRAPPER_NO_CONTRACT;
2800 SetThreadStateNC(TSNC_PreparingAbort);
2803 inline void ResetPreparingAbort()
2805 WRAPPER_NO_CONTRACT;
2806 ResetThreadStateNC(TSNC_PreparingAbort);
2810 inline static void SetPreparingAbortForHolder()
2812 GetThread()->SetPreparingAbort();
2814 inline static void ResetPreparingAbortForHolder()
2816 GetThread()->ResetPreparingAbort();
2818 typedef StateHolder<Thread::SetPreparingAbortForHolder, Thread::ResetPreparingAbortForHolder> PreparingAbortHolder;
2822 inline void SetIsCreatingTypeInitException()
2824 WRAPPER_NO_CONTRACT;
2825 SetThreadStateNC(TSNC_CreatingTypeInitException);
2828 inline void ResetIsCreatingTypeInitException()
2830 WRAPPER_NO_CONTRACT;
2831 ResetThreadStateNC(TSNC_CreatingTypeInitException);
2834 inline BOOL IsCreatingTypeInitException()
2836 WRAPPER_NO_CONTRACT;
2837 return HasThreadStateNC(TSNC_CreatingTypeInitException);
2841 void SetAbortRequestBit();
2843 void RemoveAbortRequestBit();
2846 void MarkThreadForAbort(ThreadAbortRequester requester, EEPolicy::ThreadAbortTypes abortType);
2847 void UnmarkThreadForAbort(ThreadAbortRequester requester, BOOL fForce = TRUE);
2849 static ULONGLONG GetNextSelfAbortEndTime()
2851 LIMITED_METHOD_CONTRACT;
2852 return s_NextSelfAbortEndTime;
2855 #if defined(FEATURE_HIJACK) && !defined(PLATFORM_UNIX)
2856 // Tricks for resuming threads from fully interruptible code with a ThreadStop.
2857 BOOL ResumeUnderControl(T_CONTEXT *pCtx);
2858 #endif // FEATURE_HIJACK && !PLATFORM_UNIX
2860 enum InducedThrowReason {
2861 InducedThreadStop = 1,
2862 InducedThreadRedirect = 2,
2863 InducedThreadRedirectAtEndOfCatch = 3,
2866 DWORD m_ThrewControlForThread; // flag that is set when the thread deliberately raises an exception for stop/abort
2868 inline DWORD ThrewControlForThread()
2870 LIMITED_METHOD_CONTRACT;
2871 return m_ThrewControlForThread;
2874 inline void SetThrowControlForThread(InducedThrowReason reason)
2876 LIMITED_METHOD_CONTRACT;
2877 m_ThrewControlForThread = reason;
2880 inline void ResetThrowControlForThread()
2882 LIMITED_METHOD_CONTRACT;
2883 m_ThrewControlForThread = 0;
2886 PTR_CONTEXT m_OSContext; // ptr to a Context structure used to record the OS specific ThreadContext for a thread
2887 // this is used for thread stop/abort and is intialized on demand
2889 PT_CONTEXT GetAbortContext ();
2891 // These will only ever be called from the debugger's helper
2894 // When a thread is being created after a debug suspension has
2895 // started, we get the event on the debugger helper thread. It
2896 // will turn around and call this to set the debug suspend pending
2897 // flag on the newly created flag, since it was missed by
2898 // SysStartSuspendForGC as it didn't exist when that function was
2900 void MarkForDebugSuspend();
2902 // When the debugger uses the trace flag to single step a thread,
2903 // it also calls this function to mark this info in the thread's
2904 // state. The out-of-process portion of the debugger will read the
2905 // thread's state for a variety of reasons, including looking for
2907 void MarkDebuggerIsStepping(bool onOff)
2909 WRAPPER_NO_CONTRACT;
2911 SetThreadStateNC(Thread::TSNC_DebuggerIsStepping);
2913 ResetThreadStateNC(Thread::TSNC_DebuggerIsStepping);
2916 #ifdef FEATURE_EMULATE_SINGLESTEP
2917 // ARM doesn't currently support any reliable hardware mechanism for single-stepping.
2918 // ARM64 unix doesn't currently support any reliable hardware mechanism for single-stepping.
2919 // For each we emulate single step in software. This support is used only by the debugger.
2921 #if defined(_TARGET_ARM_)
2922 ArmSingleStepper m_singleStepper;
2924 Arm64SingleStepper m_singleStepper;
2927 #ifndef DACCESS_COMPILE
2928 // Given the context with which this thread shall be resumed and the first WORD of the instruction that
2929 // should be executed next (this is not always the WORD under PC since the debugger uses this mechanism to
2930 // skip breakpoints written into the code), set the thread up to execute one instruction and then throw an
2931 // EXCEPTION_SINGLE_STEP. (In fact an EXCEPTION_BREAKPOINT will be thrown, but this is fixed up in our
2932 // first chance exception handler, see IsDebuggerFault in excep.cpp).
2933 void EnableSingleStep()
2935 m_singleStepper.Enable();
2938 void BypassWithSingleStep(const void* ip ARM_ARG(WORD opcode1) ARM_ARG(WORD opcode2) ARM64_ARG(uint32_t opcode))
2940 #if defined(_TARGET_ARM_)
2941 m_singleStepper.Bypass((DWORD)ip, opcode1, opcode2);
2943 m_singleStepper.Bypass((uint64_t)ip, opcode);
2947 void DisableSingleStep()
2949 m_singleStepper.Disable();
2952 void ApplySingleStep(T_CONTEXT *pCtx)
2954 m_singleStepper.Apply(pCtx);
2957 bool IsSingleStepEnabled() const
2959 return m_singleStepper.IsEnabled();
2962 // Fixup code called by our vectored exception handler to complete the emulation of single stepping
2963 // initiated by EnableSingleStep above. Returns true if the exception was indeed encountered during
2965 bool HandleSingleStep(T_CONTEXT *pCtx, DWORD dwExceptionCode)
2967 return m_singleStepper.Fixup(pCtx, dwExceptionCode);
2969 #endif // !DACCESS_COMPILE
2970 #endif // FEATURE_EMULATE_SINGLESTEP
2974 PendingTypeLoadHolder* m_pPendingTypeLoad;
2978 #ifndef DACCESS_COMPILE
2979 PendingTypeLoadHolder* GetPendingTypeLoad()
2981 LIMITED_METHOD_CONTRACT;
2982 return m_pPendingTypeLoad;
2985 void SetPendingTypeLoad(PendingTypeLoadHolder* pPendingTypeLoad)
2987 LIMITED_METHOD_CONTRACT;
2988 m_pPendingTypeLoad = pPendingTypeLoad;
2994 ThreadLocalIBCInfo* m_pIBCInfo;
2998 #ifndef DACCESS_COMPILE
3000 ThreadLocalIBCInfo* GetIBCInfo()
3002 LIMITED_METHOD_CONTRACT;
3003 _ASSERTE(g_IBCLogger.InstrEnabled());
3007 void SetIBCInfo(ThreadLocalIBCInfo* pInfo)
3009 LIMITED_METHOD_CONTRACT;
3010 _ASSERTE(g_IBCLogger.InstrEnabled());
3016 WRAPPER_NO_CONTRACT;
3017 if (m_pIBCInfo != NULL)
3018 m_pIBCInfo->FlushDelayedCallbacks();
3021 #endif // #ifndef DACCESS_COMPILE
3023 // Indicate whether this thread should run in the background. Background threads
3024 // don't interfere with the EE shutting down. Whereas a running non-background
3025 // thread prevents us from shutting down (except through System.Exit(), of course)
3026 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
3027 void SetBackground(BOOL isBack, BOOL bRequiresTSL=TRUE);
3029 // When the thread starts running, make sure it is running in the correct apartment
3031 BOOL PrepareApartmentAndContext();
3033 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
3034 // Retrieve the apartment state of the current thread. There are three possible
3035 // states: thread hosts an STA, thread is part of the MTA or thread state is
3036 // undecided. The last state may indicate that the apartment has not been set at
3037 // all (nobody has called CoInitializeEx) or that the EE does not know the
3038 // current state (EE has not called CoInitializeEx).
3039 enum ApartmentState { AS_InSTA, AS_InMTA, AS_Unknown };
3040 ApartmentState GetApartment();
3041 ApartmentState GetApartmentRare(Thread::ApartmentState as);
3042 ApartmentState GetExplicitApartment();
3044 // Sets the apartment state if it has not already been set and
3045 // returns the state.
3046 ApartmentState GetFinalApartment();
3048 // Attempt to set current thread's apartment state. The actual apartment state
3049 // achieved is returned and may differ from the input state if someone managed to
3050 // call CoInitializeEx on this thread first (note that calls to SetApartment made
3051 // before the thread has started are guaranteed to succeed).
3052 // The fFireMDAOnMismatch indicates if we should fire the apartment state probe
3053 // on an apartment state mismatch.
3054 ApartmentState SetApartment(ApartmentState state, BOOL fFireMDAOnMismatch);
3056 // when we get apartment tear-down notification,
3057 // we want reset the apartment state we cache on the thread
3058 VOID ResetApartment();
3059 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
3061 // Either perform WaitForSingleObject or MsgWaitForSingleObject as appropriate.
3062 DWORD DoAppropriateWait(int countHandles, HANDLE *handles, BOOL waitAll,
3063 DWORD millis, WaitMode mode,
3064 PendingSync *syncInfo = 0);
3066 DWORD DoAppropriateWait(AppropriateWaitFunc func, void *args, DWORD millis,
3067 WaitMode mode, PendingSync *syncInfo = 0);
3068 DWORD DoSignalAndWait(HANDLE *handles, DWORD millis, BOOL alertable,
3069 PendingSync *syncState = 0);
3071 void DoAppropriateWaitWorkerAlertableHelper(WaitMode mode);
3072 DWORD DoAppropriateWaitWorker(int countHandles, HANDLE *handles, BOOL waitAll,
3073 DWORD millis, WaitMode mode);
3074 DWORD DoAppropriateWaitWorker(AppropriateWaitFunc func, void *args,
3075 DWORD millis, WaitMode mode);
3076 DWORD DoSignalAndWaitWorker(HANDLE* pHandles, DWORD millis,BOOL alertable);
3077 DWORD DoAppropriateAptStateWait(int numWaiters, HANDLE* pHandles, BOOL bWaitAll, DWORD timeout, WaitMode mode);
3078 DWORD DoSyncContextWait(OBJECTREF *pSyncCtxObj, int countHandles, HANDLE *handles, BOOL waitAll, DWORD millis);
3081 //************************************************************************
3082 // Enumerate all frames.
3083 //************************************************************************
3085 /* Flags used for StackWalkFramesEx */
3087 // FUNCTIONSONLY excludes all functionless frames and all funclets
3088 #define FUNCTIONSONLY 0x0001
3090 // SKIPFUNCLETS includes functionless frames but excludes all funclets and everything between funclets and their parent methods
3091 #define SKIPFUNCLETS 0x0002
3093 #define POPFRAMES 0x0004
3095 /* use the following flag only if you REALLY know what you are doing !!! */
3096 #define QUICKUNWIND 0x0008 // do not restore all registers during unwind
3098 #define HANDLESKIPPEDFRAMES 0x0010 // temporary to handle skipped frames for appdomain unload
3099 // stack crawl. Eventually need to always do this but it
3100 // breaks the debugger right now.
3102 #define LIGHTUNWIND 0x0020 // allow using cache schema (see StackwalkCache class)
3104 #define NOTIFY_ON_U2M_TRANSITIONS 0x0040 // Provide a callback for native transitions.
3105 // This is only useful to a debugger trying to find native code
3108 #define DISABLE_MISSING_FRAME_DETECTION 0x0080 // disable detection of missing TransitionFrames
3110 // One thread may be walking the stack of another thread
3111 // If you need to use this, you may also need to put a call to CrawlFrame::CheckGSCookies
3112 // in your callback routine if it does any potentially time-consuming activity.
3113 #define ALLOW_ASYNC_STACK_WALK 0x0100
3115 #define THREAD_IS_SUSPENDED 0x0200 // Be careful not to cause deadlocks, this thread is suspended
3117 // Stackwalk tries to verify some objects, but it could be called in relocate phase of GC,
3118 // where objects could be in invalid state, this flag is to tell stackwalk to skip the validation
3119 #define ALLOW_INVALID_OBJECTS 0x0400
3121 // Caller has verified that the thread to be walked is in the middle of executing
3122 // JITd or NGENd code, according to the thread's current context (or seeded
3123 // context if one was provided). The caller ensures this when the stackwalk
3124 // is initiated by a profiler.
3125 #define THREAD_EXECUTING_MANAGED_CODE 0x0800
3127 // This stackwalk is due to the DoStackSnapshot profiler API
3128 #define PROFILER_DO_STACK_SNAPSHOT 0x1000
3130 // When this flag is set, the stackwalker does not automatically advance to the
3131 // faulting managed stack frame when it encounters an ExInfo. This should only be
3132 // necessary for native debuggers doing mixed-mode stackwalking.
3133 #define NOTIFY_ON_NO_FRAME_TRANSITIONS 0x2000
3135 // Normally, the stackwalker does not stop at the initial CONTEXT if the IP is in native code.
3136 // This flag changes the stackwalker behaviour. Currently this is only used in the debugger stackwalking
3138 #define NOTIFY_ON_INITIAL_NATIVE_CONTEXT 0x4000
3140 // Indicates that we are enumerating GC references and should follow appropriate
3141 // callback rules for parent methods vs funclets. Only supported on non-x86 platforms.
3143 // Refer to StackFrameIterator::Filter for detailed comments on this flag.
3144 #define GC_FUNCLET_REFERENCE_REPORTING 0x8000
3146 // Stackwalking normally checks GS cookies on the fly, but there are cases in which the JIT reports
3147 // incorrect epilog information. This causes the debugger to request stack walks in the epilog, checking
3148 // an now invalid cookie. This flag allows the debugger stack walks to disable GS cookie checking.
3150 // This is a workaround for the debugger stackwalking. In general, the stackwalker and CrawlFrame
3151 // may still execute GS cookie tracking/checking code paths.
3152 #define SKIP_GSCOOKIE_CHECK 0x10000
3154 StackWalkAction StackWalkFramesEx(
3155 PREGDISPLAY pRD, // virtual register set at crawl start
3156 PSTACKWALKFRAMESCALLBACK pCallback,
3159 PTR_Frame pStartFrame = PTR_NULL);
3162 // private helpers used by StackWalkFramesEx and StackFrameIterator
3163 StackWalkAction MakeStackwalkerCallback(CrawlFrame* pCF, PSTACKWALKFRAMESCALLBACK pCallback, VOID* pData DEBUG_ARG(UINT32 uLoopIteration));
3166 void DebugLogStackWalkInfo(CrawlFrame* pCF, __in_z LPCSTR pszTag, UINT32 uLoopIteration);
3171 StackWalkAction StackWalkFrames(
3172 PSTACKWALKFRAMESCALLBACK pCallback,
3175 PTR_Frame pStartFrame = PTR_NULL);
3177 bool InitRegDisplay(const PREGDISPLAY, const PT_CONTEXT, bool validContext);
3178 void FillRegDisplay(const PREGDISPLAY pRD, PT_CONTEXT pctx);
3180 #ifdef WIN64EXCEPTIONS
3181 static PCODE VirtualUnwindCallFrame(T_CONTEXT* pContext, T_KNONVOLATILE_CONTEXT_POINTERS* pContextPointers = NULL,
3182 EECodeInfo * pCodeInfo = NULL);
3183 static UINT_PTR VirtualUnwindCallFrame(PREGDISPLAY pRD, EECodeInfo * pCodeInfo = NULL);
3184 #ifndef DACCESS_COMPILE
3185 static PCODE VirtualUnwindLeafCallFrame(T_CONTEXT* pContext);
3186 static PCODE VirtualUnwindNonLeafCallFrame(T_CONTEXT* pContext, T_KNONVOLATILE_CONTEXT_POINTERS* pContextPointers = NULL,
3187 PT_RUNTIME_FUNCTION pFunctionEntry = NULL, UINT_PTR uImageBase = NULL);
3188 static UINT_PTR VirtualUnwindToFirstManagedCallFrame(T_CONTEXT* pContext);
3189 #endif // DACCESS_COMPILE
3190 #endif // WIN64EXCEPTIONS
3192 // During a <clinit>, this thread must not be asynchronously
3193 // stopped or interrupted. That would leave the class unavailable
3194 // and is therefore a security hole.
3195 static void IncPreventAsync()
3197 WRAPPER_NO_CONTRACT;
3198 Thread *pThread = GetThread();
3199 FastInterlockIncrement((LONG*)&pThread->m_PreventAsync);
3201 static void DecPreventAsync()
3203 WRAPPER_NO_CONTRACT;
3204 Thread *pThread = GetThread();
3205 FastInterlockDecrement((LONG*)&pThread->m_PreventAsync);
3208 bool IsAsyncPrevented()
3210 return m_PreventAsync != 0;
3213 typedef StateHolder<Thread::IncPreventAsync, Thread::DecPreventAsync> ThreadPreventAsyncHolder;
3215 // During a <clinit>, this thread must not be asynchronously
3216 // stopped or interrupted. That would leave the class unavailable
3217 // and is therefore a security hole.
3218 static void IncPreventAbort()
3220 WRAPPER_NO_CONTRACT;
3221 Thread *pThread = GetThread();
3222 FastInterlockIncrement((LONG*)&pThread->m_PreventAbort);
3224 static void DecPreventAbort()
3226 WRAPPER_NO_CONTRACT;
3227 Thread *pThread = GetThread();
3228 FastInterlockDecrement((LONG*)&pThread->m_PreventAbort);
3231 BOOL IsAbortPrevented()
3233 return m_PreventAbort != 0;
3236 typedef StateHolder<Thread::IncPreventAbort, Thread::DecPreventAbort> ThreadPreventAbortHolder;
3238 // The ThreadStore manages a list of all the threads in the system. I
3239 // can't figure out how to expand the ThreadList template type without
3240 // making m_Link public.
3243 // For N/Direct calls with the "setLastError" bit, this field stores
3244 // the errorcode from that call.
3245 DWORD m_dwLastError;
3247 #ifdef FEATURE_INTERPRETER
3248 // When we're interpreting IL stubs for N/Direct calls with the "setLastError" bit,
3249 // the interpretation will trash the last error before we get to the call to "SetLastError".
3250 // Therefore, we record it here immediately after the calli, and treat "SetLastError" as an
3251 // intrinsic that transfers the value stored here into the field above.
3252 DWORD m_dwLastErrorInterp;
3255 // Debugger per-thread flag for enabling notification on "manual"
3256 // method calls, for stepping logic
3257 void IncrementTraceCallCount();
3258 void DecrementTraceCallCount();
3260 FORCEINLINE int IsTraceCall()
3262 LIMITED_METHOD_CONTRACT;
3263 return m_TraceCallCount;
3266 // Functions to get/set culture information for current thread.
3267 static OBJECTREF GetCulture(BOOL bUICulture);
3268 static void SetCulture(OBJECTREF *CultureObj, BOOL bUICulture);
3271 #if defined(FEATURE_HIJACK) && !defined(PLATFORM_UNIX)
3272 // Used in suspension code to redirect a thread at a HandledJITCase
3273 BOOL RedirectThreadAtHandledJITCase(PFN_REDIRECTTARGET pTgt);
3274 BOOL RedirectCurrentThreadAtHandledJITCase(PFN_REDIRECTTARGET pTgt, T_CONTEXT *pCurrentThreadCtx);
3276 // Will Redirect the thread using RedirectThreadAtHandledJITCase if necessary
3277 BOOL CheckForAndDoRedirect(PFN_REDIRECTTARGET pRedirectTarget);
3278 BOOL CheckForAndDoRedirectForDbg();
3279 BOOL CheckForAndDoRedirectForGC();
3280 BOOL CheckForAndDoRedirectForUserSuspend();
3282 // Exception handling must be very aware of redirection, so we provide a helper
3283 // to identifying redirection targets
3284 static BOOL IsAddrOfRedirectFunc(void * pFuncAddr);
3286 #if defined(HAVE_GCCOVER) && defined(USE_REDIRECT_FOR_GCSTRESS)
3288 BOOL CheckForAndDoRedirectForGCStress (T_CONTEXT *pCurrentThreadCtx);
3290 bool m_fPreemptiveGCDisabledForGCStress;
3291 #endif // HAVE_GCCOVER && USE_REDIRECT_FOR_GCSTRESS
3292 #endif // FEATURE_HIJACK && !PLATFORM_UNIX
3296 #ifndef DACCESS_COMPILE
3297 // These re-calculate the proper value on each call for the currently executing thread. Use GetCachedStackLimit
3298 // and GetCachedStackBase for the cached values on this Thread.
3299 static void * GetStackLowerBound();
3300 static void * GetStackUpperBound();
3302 bool CheckCanUseStackAlloc()
3305 UINT_PTR current = reinterpret_cast<UINT_PTR>(&local);
3306 UINT_PTR limit = GetCachedStackStackAllocNonRiskyExecutionLimit();
3307 return (current > limit);
3309 #else // DACCESS_COMPILE
3310 bool CheckCanUseStackAlloc() { return true; }
3313 enum SetStackLimitScope { fAll, fAllowableOnly };
3314 BOOL SetStackLimits(SetStackLimitScope scope);
3316 // These access the stack base and limit values for this thread. (They are cached during InitThread.) The
3317 // "stack base" is the "upper bound", i.e., where the stack starts growing from. (Main's call frame is at the
3318 // upper bound.) The "stack limit" is the "lower bound", i.e., how far the stack can grow down to.
3319 // The "stack sufficient execution limit" is used by EnsureSufficientExecutionStack() to limit how much stack
3320 // should remain to execute the average Framework method.
3321 PTR_VOID GetCachedStackBase() {LIMITED_METHOD_DAC_CONTRACT; return m_CacheStackBase; }
3322 PTR_VOID GetCachedStackLimit() {LIMITED_METHOD_DAC_CONTRACT; return m_CacheStackLimit;}
3323 UINT_PTR GetCachedStackSufficientExecutionLimit() {LIMITED_METHOD_DAC_CONTRACT; return m_CacheStackSufficientExecutionLimit;}
3324 UINT_PTR GetCachedStackStackAllocNonRiskyExecutionLimit() {LIMITED_METHOD_DAC_CONTRACT; return m_CacheStackStackAllocNonRiskyExecutionLimit;}
3327 // Access the base and limit of the stack. (I.e. the memory ranges that the thread has reserved for its stack).
3329 // Note that the base is at a higher address than the limit, since the stack grows downwards.
3331 // Note that we generally access the stack of the thread we are crawling, which is cached in the ScanContext.
3332 PTR_VOID m_CacheStackBase;
3333 PTR_VOID m_CacheStackLimit;
3334 UINT_PTR m_CacheStackSufficientExecutionLimit;
3335 UINT_PTR m_CacheStackStackAllocNonRiskyExecutionLimit;
3337 #define HARD_GUARD_REGION_SIZE GetOsPageSize()
3341 static HRESULT CLRSetThreadStackGuarantee(SetThreadStackGuaranteeScope fScope = STSGuarantee_OnlyIfEnabled);
3343 // try to turn a page into a guard page
3344 static BOOL MarkPageAsGuard(UINT_PTR uGuardPageBase);
3346 // scan a region for a guard page
3347 static BOOL DoesRegionContainGuardPage(UINT_PTR uLowAddress, UINT_PTR uHighAddress);
3349 // Every stack has a single reserved page at its limit that we call the 'hard guard page'. This page is never
3350 // committed, and access to it after a stack overflow will terminate the thread.
3351 #define HARD_GUARD_REGION_SIZE GetOsPageSize()
3352 #define SIZEOF_DEFAULT_STACK_GUARANTEE 1 * GetOsPageSize()
3355 // This will return the last stack address that one could write to before a stack overflow.
3356 static UINT_PTR GetLastNormalStackAddress(UINT_PTR stackBase);
3357 UINT_PTR GetLastNormalStackAddress();
3359 UINT_PTR GetLastAllowableStackAddress()
3361 return m_LastAllowableStackAddress;
3364 UINT_PTR GetProbeLimit()
3366 return m_ProbeLimit;
3369 void ResetStackLimits()
3378 if (!IsSetThreadStackGuaranteeInUse())
3382 SetStackLimits(fAllowableOnly);
3385 BOOL IsSPBeyondLimit();
3387 INDEBUG(static void DebugLogStackMBIs());
3389 #if defined(_DEBUG_IMPL) && !defined(DACCESS_COMPILE)
3390 // Verify that the cached stack base is for the current thread.
3391 BOOL HasRightCacheStackBase()
3393 WRAPPER_NO_CONTRACT;
3394 return m_CacheStackBase == GetStackUpperBound();
3399 static BOOL UniqueStack(void* startLoc = 0);
3401 BOOL IsAddressInStack (PTR_VOID addr) const
3403 LIMITED_METHOD_DAC_CONTRACT;
3404 _ASSERTE(m_CacheStackBase != NULL);
3405 _ASSERTE(m_CacheStackLimit != NULL);
3406 _ASSERTE(m_CacheStackLimit < m_CacheStackBase);
3407 return m_CacheStackLimit < addr && addr <= m_CacheStackBase;
3410 static BOOL IsAddressInCurrentStack (PTR_VOID addr)
3412 LIMITED_METHOD_DAC_CONTRACT;
3413 Thread* currentThread = GetThread();
3414 if (currentThread == NULL)
3419 PTR_VOID sp = dac_cast<PTR_VOID>(GetCurrentSP());
3420 _ASSERTE(currentThread->m_CacheStackBase != NULL);
3421 _ASSERTE(sp < currentThread->m_CacheStackBase);
3422 return sp < addr && addr <= currentThread->m_CacheStackBase;
3425 // DetermineIfGuardPagePresent returns TRUE if the thread's stack contains a proper guard page. This function
3426 // makes a physical check of the stack, rather than relying on whether or not the CLR is currently processing a
3427 // stack overflow exception.
3428 BOOL DetermineIfGuardPagePresent();
3430 // Returns the amount of stack available after an SO but before the OS rips the process.
3431 static UINT_PTR GetStackGuarantee();
3433 // RestoreGuardPage will replace the guard page on this thread's stack. The assumption is that it was removed
3434 // by the OS due to a stack overflow exception. This function requires that you know that you have enough stack
3435 // space to restore the guard page, so make sure you know what you're doing when you decide to call this.
3436 VOID RestoreGuardPage();
3438 #if defined(FEATURE_HIJACK) && !defined(PLATFORM_UNIX)
3440 // Redirecting of threads in managed code at suspension
3442 enum RedirectReason {
3443 RedirectReason_GCSuspension,
3444 RedirectReason_DebugSuspension,
3445 RedirectReason_UserSuspension,
3446 #if defined(HAVE_GCCOVER) && defined(USE_REDIRECT_FOR_GCSTRESS) // GCCOVER
3447 RedirectReason_GCStress,
3448 #endif // HAVE_GCCOVER && USE_REDIRECT_FOR_GCSTRESS
3450 static void __stdcall RedirectedHandledJITCase(RedirectReason reason);
3451 static void __stdcall RedirectedHandledJITCaseForDbgThreadControl();
3452 static void __stdcall RedirectedHandledJITCaseForGCThreadControl();
3453 static void __stdcall RedirectedHandledJITCaseForUserSuspend();
3454 #if defined(HAVE_GCCOVER) && defined(USE_REDIRECT_FOR_GCSTRESS) // GCCOVER
3455 static void __stdcall RedirectedHandledJITCaseForGCStress();
3456 #endif // defined(HAVE_GCCOVER) && USE_REDIRECT_FOR_GCSTRESS
3458 friend void CPFH_AdjustContextForThreadSuspensionRace(T_CONTEXT *pContext, Thread *pThread);
3459 #endif // FEATURE_HIJACK && !PLATFORM_UNIX
3462 //-------------------------------------------------------------
3463 // Waiting & Synchronization
3464 //-------------------------------------------------------------
3466 // For suspends. The thread waits on this event. A client sets the event to cause
3467 // the thread to resume.
3468 void WaitSuspendEvents(BOOL fDoWait = TRUE);
3469 BOOL WaitSuspendEventsHelper(void);
3471 // Helpers to ensure that the bits for suspension and the number of active
3472 // traps remain coordinated.
3473 void MarkForSuspension(ULONG bit);
3474 void UnmarkForSuspension(ULONG bit);
3476 void SetupForSuspension(ULONG bit)
3478 WRAPPER_NO_CONTRACT;
3480 // CoreCLR does not support user-requested thread suspension
3481 _ASSERTE(!(bit & TS_UserSuspendPending));
3484 if (bit & TS_DebugSuspendPending) {
3485 m_DebugSuspendEvent.Reset();
3489 void ReleaseFromSuspension(ULONG bit)
3491 WRAPPER_NO_CONTRACT;
3493 UnmarkForSuspension(~bit);
3496 // If the thread is set free, mark it as not-suspended now
3498 ThreadState oldState = m_State;
3500 // CoreCLR does not support user-requested thread suspension
3501 _ASSERTE(!(oldState & TS_UserSuspendPending));
3503 while ((oldState & (TS_UserSuspendPending | TS_DebugSuspendPending)) == 0)
3505 // CoreCLR does not support user-requested thread suspension
3506 _ASSERTE(!(oldState & TS_UserSuspendPending));
3509 // Construct the destination state we desire - all suspension bits turned off.
3511 ThreadState newState = (ThreadState)(oldState & ~(TS_UserSuspendPending |
3512 TS_DebugSuspendPending |
3515 if (FastInterlockCompareExchange((LONG *)&m_State, newState, oldState) == (LONG)oldState)
3521 // The state changed underneath us, refresh it and try again.
3526 // CoreCLR does not support user-requested thread suspension
3527 _ASSERTE(!(bit & TS_UserSuspendPending));
3529 if (bit & TS_DebugSuspendPending) {
3530 m_DebugSuspendEvent.Set();
3536 FORCEINLINE void UnhijackThreadNoAlloc()
3538 #if defined(FEATURE_HIJACK) && !defined(DACCESS_COMPILE)
3539 if (m_State & TS_Hijacked)
3541 *m_ppvHJRetAddrPtr = m_pvHJRetAddr;
3542 FastInterlockAnd((ULONG *) &m_State, ~TS_Hijacked);
3547 void UnhijackThread();
3549 // Flags that may be passed to GetSafelyRedirectableThreadContext, to customize
3550 // which checks it should perform. This allows a subset of the context verification
3551 // logic used by HandledJITCase to be shared with other callers, such as profiler
3553 enum GetSafelyRedirectableThreadContextOptions
3555 // Perform the default thread context checks
3556 kDefaultChecks = 0x00000000,
3558 // Compares the thread context's IP against m_LastRedirectIP, and potentially
3559 // updates m_LastRedirectIP, when determining the safeness of the thread's
3560 // context. HandledJITCase will always set this flag.
3561 // This flag is ignored on non-x86 platforms, and also on x86 if the OS supports
3562 // trap frame reporting.
3563 kPerfomLastRedirectIPCheck = 0x00000001,
3565 // Use g_pDebugInterface->IsThreadContextInvalid() to see if breakpoints might
3566 // confuse the stack walker. HandledJITCase will always set this flag.
3567 kCheckDebuggerBreakpoints = 0x00000002,
3570 // Helper used by HandledJITCase and others who need an absolutely reliable
3571 // register context.
3572 BOOL GetSafelyRedirectableThreadContext(DWORD dwOptions, T_CONTEXT * pCtx, REGDISPLAY * pRD);
3575 #ifdef FEATURE_HIJACK
3576 void HijackThread(VOID *pvHijackAddr, ExecutionState *esb);
3578 VOID *m_pvHJRetAddr; // original return address (before hijack)
3579 VOID **m_ppvHJRetAddrPtr; // place we bashed a new return address
3580 MethodDesc *m_HijackedFunction; // remember what we hijacked
3582 #ifndef PLATFORM_UNIX
3583 BOOL HandledJITCase(BOOL ForTaskSwitchIn = FALSE);
3586 PCODE m_LastRedirectIP;
3588 #endif // _TARGET_X86_
3590 #endif // !PLATFORM_UNIX
3592 #endif // FEATURE_HIJACK
3594 DWORD m_Win32FaultAddress;
3595 DWORD m_Win32FaultCode;
3597 // Support for Wait/Notify
3598 BOOL Block(INT32 timeOut, PendingSync *syncInfo);
3599 void Wake(SyncBlock *psb);
3600 DWORD Wait(HANDLE *objs, int cntObjs, INT32 timeOut, PendingSync *syncInfo);
3601 DWORD Wait(CLREvent* pEvent, INT32 timeOut, PendingSync *syncInfo);
3603 // support for Thread.Interrupt() which breaks out of Waits, Sleeps, Joins
3604 LONG m_UserInterrupt;
3605 DWORD IsUserInterrupted()
3607 LIMITED_METHOD_CONTRACT;
3608 return m_UserInterrupt;
3610 void ResetUserInterrupted()
3612 LIMITED_METHOD_CONTRACT;
3613 FastInterlockExchange(&m_UserInterrupt, 0);
3616 void HandleThreadInterrupt();
3619 static void WINAPI UserInterruptAPC(ULONG_PTR ignore);
3621 #if defined(_DEBUG) && defined(TRACK_SYNC)
3623 // Each thread has a stack that tracks all enter and leave requests
3625 Dbg_TrackSync *m_pTrackSync;
3627 #endif // TRACK_SYNC
3630 #ifdef ENABLE_CONTRACTS_DATA
3631 struct ClrDebugState *m_pClrDebugState; // Pointer to ClrDebugState for quick access
3633 ULONG m_ulEnablePreemptiveGCCount;
3638 CLREvent m_DebugSuspendEvent;
3640 // For Object::Wait, Notify and NotifyAll, we use an Event inside the
3641 // thread and we queue the threads onto the SyncBlock of the object they
3643 CLREvent m_EventWait;
3644 WaitEventLink m_WaitEventLink;
3645 WaitEventLink* WaitEventLinkForSyncBlock (SyncBlock *psb)
3647 LIMITED_METHOD_CONTRACT;
3648 WaitEventLink *walk = &m_WaitEventLink;
3649 while (walk->m_Next) {
3650 _ASSERTE (walk->m_Next->m_Thread == this);
3651 if ((SyncBlock*)(((DWORD_PTR)walk->m_Next->m_WaitSB) & ~1)== psb) {
3654 walk = walk->m_Next;
3659 // Access to thread handle and ThreadId.
3660 HANDLE GetThreadHandle()
3662 LIMITED_METHOD_CONTRACT;
3663 #if defined(_DEBUG) && !defined(DACCESS_COMPILE)
3665 CounterHolder handleHolder(&m_dwThreadHandleBeingUsed);
3666 HANDLE handle = m_ThreadHandle;
3667 _ASSERTE ( handle == INVALID_HANDLE_VALUE
3668 || handle == SWITCHOUT_HANDLE_VALUE
3669 || m_OSThreadId == 0
3670 || m_OSThreadId == 0xbaadf00d
3671 || ::MatchThreadHandleToOsId(handle, (DWORD)m_OSThreadId) );
3675 DACCOP_IGNORE(FieldAccess, "Treated as raw address, no marshaling is necessary");
3676 return m_ThreadHandle;
3679 void SetThreadHandle(HANDLE h)
3681 LIMITED_METHOD_CONTRACT;
3683 _ASSERTE ( h == INVALID_HANDLE_VALUE
3684 || h == SWITCHOUT_HANDLE_VALUE
3685 || m_OSThreadId == 0
3686 || m_OSThreadId == 0xbaadf00d
3687 || ::MatchThreadHandleToOsId(h, (DWORD)m_OSThreadId) );
3689 FastInterlockExchangePointer(&m_ThreadHandle, h);
3692 // We maintain a correspondence between this object, the ThreadId and ThreadHandle
3693 // in Win32, and the exposed Thread object.
3694 HANDLE m_ThreadHandle;
3696 // <TODO> It would be nice to remove m_ThreadHandleForClose to simplify Thread.Join,
3697 // but at the moment that isn't possible without extensive work.
3698 // This handle is used by SwitchOut to store the old handle which may need to be closed
3699 // if we are the owner. The handle can't be closed before checking the external count
3700 // which we can't do in SwitchOut since that may require locking or switching threads.</TODO>
3701 HANDLE m_ThreadHandleForClose;
3702 HANDLE m_ThreadHandleForResume;
3703 BOOL m_WeOwnThreadHandle;
3704 SIZE_T m_OSThreadId;
3706 BOOL CreateNewOSThread(SIZE_T stackSize, LPTHREAD_START_ROUTINE start, void *args);
3708 OBJECTHANDLE m_ExposedObject;
3709 OBJECTHANDLE m_StrongHndToExposedObject;
3711 DWORD m_Priority; // initialized to INVALID_THREAD_PRIORITY, set to actual priority when a
3712 // thread does a busy wait for GC, reset to INVALID_THREAD_PRIORITY after wait is over
3713 friend class NDirect; // Quick access to thread stub creation
3716 friend void DoGcStress (PT_CONTEXT regs, NativeCodeVersion nativeCodeVersion); // Needs to call UnhijackThread
3717 #endif // HAVE_GCCOVER
3719 ULONG m_ExternalRefCount;
3721 ULONG m_UnmanagedRefCount;
3723 LONG m_TraceCallCount;
3725 //-----------------------------------------------------------
3726 // Bytes promoted on this thread since the last GC?
3727 //-----------------------------------------------------------
3730 void SetHasPromotedBytes ();
3731 DWORD GetHasPromotedBytes ()
3733 LIMITED_METHOD_CONTRACT;
3738 //-----------------------------------------------------------
3739 // Last exception to be thrown.
3740 //-----------------------------------------------------------
3741 friend class EEDbgInterfaceImpl;
3744 // Stores the most recently thrown exception. We need to have a handle in case a GC occurs before
3745 // we catch so we don't lose the object. Having a static allows others to catch outside of COM+ w/o leaking
3746 // a handler and allows rethrow outside of COM+ too.
3747 // Differs from m_pThrowable in that it doesn't stack on nested exceptions.
3748 OBJECTHANDLE m_LastThrownObjectHandle; // Unsafe to use directly. Use accessors instead.
3750 // Indicates that the throwable in m_lastThrownObjectHandle should be treated as
3751 // unhandled. This occurs during fatal error and a few other early error conditions
3752 // before EH is fully set up.
3753 BOOL m_ltoIsUnhandled;
3755 friend void DECLSPEC_NORETURN EEPolicy::HandleFatalStackOverflow(EXCEPTION_POINTERS *pExceptionInfo, BOOL fSkipDebugger);
3759 BOOL IsLastThrownObjectNull() { WRAPPER_NO_CONTRACT; return (m_LastThrownObjectHandle == NULL); }
3761 OBJECTREF LastThrownObject()
3763 WRAPPER_NO_CONTRACT;
3765 if (m_LastThrownObjectHandle == NULL)
3771 // We only have a handle if we have an object to keep in it.
3772 _ASSERTE(ObjectFromHandle(m_LastThrownObjectHandle) != NULL);
3773 return ObjectFromHandle(m_LastThrownObjectHandle);
3777 OBJECTHANDLE LastThrownObjectHandle()
3779 LIMITED_METHOD_DAC_CONTRACT;
3781 return m_LastThrownObjectHandle;
3784 void SetLastThrownObject(OBJECTREF throwable, BOOL isUnhandled = FALSE);
3785 void SetSOForLastThrownObject();
3786 OBJECTREF SafeSetLastThrownObject(OBJECTREF throwable);
3788 // Inidcates that the last thrown object is now treated as unhandled
3789 void MarkLastThrownObjectUnhandled()
3791 LIMITED_METHOD_CONTRACT;
3792 m_ltoIsUnhandled = TRUE;
3795 // TRUE if the throwable in LTO should be treated as unhandled
3796 BOOL IsLastThrownObjectUnhandled()
3798 LIMITED_METHOD_DAC_CONTRACT;
3799 return m_ltoIsUnhandled;
3802 void SafeUpdateLastThrownObject(void);
3803 OBJECTREF SafeSetThrowables(OBJECTREF pThrowable
3804 DEBUG_ARG(ThreadExceptionState::SetThrowableErrorChecking stecFlags = ThreadExceptionState::STEC_All),
3805 BOOL isUnhandled = FALSE);
3807 bool IsLastThrownObjectStackOverflowException()
3809 LIMITED_METHOD_CONTRACT;
3810 CONSISTENCY_CHECK(NULL != g_pPreallocatedStackOverflowException);
3812 return (m_LastThrownObjectHandle == g_pPreallocatedStackOverflowException);
3815 // get the current notification (if any) from this thread
3816 OBJECTHANDLE GetThreadCurrNotification();
3818 // set the current notification on this thread
3819 void SetThreadCurrNotification(OBJECTHANDLE handle);
3821 // clear the current notification (if any) from this thread
3822 void ClearThreadCurrNotification();
3825 void SetLastThrownObjectHandle(OBJECTHANDLE h);
3827 ThreadExceptionState m_ExceptionState;
3829 //-----------------------------------------------------------
3830 // For stack probing. These are the last allowable addresses that a thread
3831 // can touch. Going beyond is a stack overflow. The ProbeLimit will be
3832 // set based on whether SO probing is enabled. The LastAllowableAddress
3833 // will always represent the true stack limit.
3834 //-----------------------------------------------------------
3835 UINT_PTR m_ProbeLimit;
3837 UINT_PTR m_LastAllowableStackAddress;
3840 //---------------------------------------------------------------
3841 // m_debuggerFilterContext holds the thread's "filter context" for the
3842 // debugger. This filter context is used by the debugger to seed
3843 // stack walks on the thread.
3844 //---------------------------------------------------------------
3845 PTR_CONTEXT m_debuggerFilterContext;
3847 //---------------------------------------------------------------
3848 // m_profilerFilterContext holds an additional context for the
3849 // case when a (sampling) profiler wishes to hijack the thread
3850 // and do a stack walk on the same thread.
3851 //---------------------------------------------------------------
3852 T_CONTEXT *m_pProfilerFilterContext;
3854 //---------------------------------------------------------------
3855 // m_hijackLock holds a BOOL that is used for mutual exclusion
3856 // between profiler stack walks and thread hijacks (bashing
3857 // return addresses on the stack)
3858 //---------------------------------------------------------------
3859 Volatile<LONG> m_hijackLock;
3860 //---------------------------------------------------------------
3861 // m_debuggerCantStop holds a count of entries into "can't stop"
3862 // areas that the Interop Debugging Services must know about.
3863 //---------------------------------------------------------------
3864 DWORD m_debuggerCantStop;
3866 //---------------------------------------------------------------
3867 // The current custom notification data object (or NULL if none
3869 //---------------------------------------------------------------
3870 OBJECTHANDLE m_hCurrNotification;
3872 //---------------------------------------------------------------
3873 // For Interop-Debugging; track if a thread is hijacked.
3874 //---------------------------------------------------------------
3875 BOOL m_fInteropDebuggingHijacked;
3877 //---------------------------------------------------------------
3878 // Bitmask to remember per-thread state useful for the profiler API. See
3879 // COR_PRF_CALLBACKSTATE_* flags in clr\src\inc\ProfilePriv.h for bit values.
3880 //---------------------------------------------------------------
3881 DWORD m_profilerCallbackState;
3883 #if defined(PROFILING_SUPPORTED) || defined(PROFILING_SUPPORTED_DATA)
3884 //---------------------------------------------------------------
3885 // m_dwProfilerEvacuationCounter keeps track of how many profiler
3886 // callback calls remain on the stack
3887 //---------------------------------------------------------------
3889 // See code:ProfilingAPIUtility::InitializeProfiling#LoadUnloadCallbackSynchronization.
3890 Volatile<DWORD> m_dwProfilerEvacuationCounter;
3891 #endif // defined(PROFILING_SUPPORTED) || defined(PROFILING_SUPPORTED_DATA)
3894 UINT32 m_workerThreadPoolCompletionCount;
3895 static UINT64 s_workerThreadPoolCompletionCountOverflow;
3896 UINT32 m_ioThreadPoolCompletionCount;
3897 static UINT64 s_ioThreadPoolCompletionCountOverflow;
3898 UINT32 m_monitorLockContentionCount;
3899 static UINT64 s_monitorLockContentionCountOverflow;
3901 #ifndef DACCESS_COMPILE
3903 static UINT32 *GetThreadLocalCountRef(Thread *pThread, SIZE_T threadLocalCountOffset)
3905 WRAPPER_NO_CONTRACT;
3906 _ASSERTE(threadLocalCountOffset <= sizeof(Thread) - sizeof(UINT32));
3908 return (UINT32 *)((SIZE_T)pThread + threadLocalCountOffset);
3911 static void IncrementCount(Thread *pThread, SIZE_T threadLocalCountOffset, UINT64 *overflowCount)
3913 WRAPPER_NO_CONTRACT;
3914 _ASSERTE(overflowCount != nullptr);
3916 if (pThread != nullptr)
3918 UINT32 *threadLocalCount = GetThreadLocalCountRef(pThread, threadLocalCountOffset);
3919 UINT32 newCount = *threadLocalCount + 1;
3922 VolatileStoreWithoutBarrier(threadLocalCount, newCount);
3926 OnIncrementCountOverflow(threadLocalCount, overflowCount);
3931 InterlockedIncrement64((LONGLONG *)overflowCount);
3935 static void OnIncrementCountOverflow(UINT32 *threadLocalCount, UINT64 *overflowCount);
3937 static UINT64 GetOverflowCount(UINT64 *overflowCount)
3939 WRAPPER_NO_CONTRACT;
3941 if (sizeof(void *) >= sizeof(*overflowCount))
3943 return VolatileLoad(overflowCount);
3945 return InterlockedCompareExchange64((LONGLONG *)overflowCount, 0, 0); // prevent tearing
3948 static UINT64 GetTotalCount(SIZE_T threadLocalCountOffset, UINT64 *overflowCount);
3951 static void IncrementWorkerThreadPoolCompletionCount(Thread *pThread)
3953 WRAPPER_NO_CONTRACT;
3954 IncrementCount(pThread, offsetof(Thread, m_workerThreadPoolCompletionCount), &s_workerThreadPoolCompletionCountOverflow);
3957 static UINT64 GetWorkerThreadPoolCompletionCountOverflow()
3959 WRAPPER_NO_CONTRACT;
3960 return GetOverflowCount(&s_workerThreadPoolCompletionCountOverflow);
3963 static UINT64 GetTotalWorkerThreadPoolCompletionCount()
3965 WRAPPER_NO_CONTRACT;
3966 return GetTotalCount(offsetof(Thread, m_workerThreadPoolCompletionCount), &s_workerThreadPoolCompletionCountOverflow);
3969 static void IncrementIOThreadPoolCompletionCount(Thread *pThread)
3971 WRAPPER_NO_CONTRACT;
3972 IncrementCount(pThread, offsetof(Thread, m_ioThreadPoolCompletionCount), &s_ioThreadPoolCompletionCountOverflow);
3975 static UINT64 GetIOThreadPoolCompletionCountOverflow()
3977 WRAPPER_NO_CONTRACT;
3978 return GetOverflowCount(&s_ioThreadPoolCompletionCountOverflow);
3981 static UINT64 GetTotalThreadPoolCompletionCount();
3983 static void IncrementMonitorLockContentionCount(Thread *pThread)
3985 WRAPPER_NO_CONTRACT;
3986 IncrementCount(pThread, offsetof(Thread, m_monitorLockContentionCount), &s_monitorLockContentionCountOverflow);
3989 static UINT64 GetMonitorLockContentionCountOverflow()
3991 WRAPPER_NO_CONTRACT;
3992 return GetOverflowCount(&s_monitorLockContentionCountOverflow);
3995 static UINT64 GetTotalMonitorLockContentionCount()
3997 WRAPPER_NO_CONTRACT;
3998 return GetTotalCount(offsetof(Thread, m_monitorLockContentionCount), &s_monitorLockContentionCountOverflow);
4000 #endif // !DACCESS_COMPILE
4004 //-------------------------------------------------------------------------
4005 // Support creation of assemblies in DllMain (see ceemain.cpp)
4006 //-------------------------------------------------------------------------
4007 DomainFile* m_pLoadingFile;
4012 void SetInteropDebuggingHijacked(BOOL f)
4014 LIMITED_METHOD_CONTRACT;
4015 m_fInteropDebuggingHijacked = f;
4017 BOOL GetInteropDebuggingHijacked()
4019 LIMITED_METHOD_CONTRACT;
4020 return m_fInteropDebuggingHijacked;
4023 void SetFilterContext(T_CONTEXT *pContext);
4024 T_CONTEXT *GetFilterContext(void);
4026 void SetProfilerFilterContext(T_CONTEXT *pContext)
4028 LIMITED_METHOD_CONTRACT;
4030 m_pProfilerFilterContext = pContext;
4033 // Used by the profiler API to find which flags have been set on the Thread object,
4034 // in order to authorize a profiler's call into ICorProfilerInfo(2).
4035 DWORD GetProfilerCallbackFullState()
4037 LIMITED_METHOD_CONTRACT;
4038 _ASSERTE(GetThread() == this);
4039 return m_profilerCallbackState;
4042 // Used by profiler API to set at once all callback flag bits stored on the Thread object.
4043 // Used to reinstate the previous state that had been modified by a previous call to
4044 // SetProfilerCallbackStateFlags
4045 void SetProfilerCallbackFullState(DWORD dwFullState)
4047 LIMITED_METHOD_CONTRACT;
4048 _ASSERTE(GetThread() == this);
4049 m_profilerCallbackState = dwFullState;
4052 // Used by profiler API to set individual callback flags on the Thread object.
4053 // Returns the previous state of all flags.
4054 DWORD SetProfilerCallbackStateFlags(DWORD dwFlags)
4056 LIMITED_METHOD_CONTRACT;
4057 _ASSERTE(GetThread() == this);
4059 DWORD dwRet = m_profilerCallbackState;
4060 m_profilerCallbackState |= dwFlags;
4064 T_CONTEXT *GetProfilerFilterContext(void)
4066 LIMITED_METHOD_CONTRACT;
4067 return m_pProfilerFilterContext;
4070 #ifdef PROFILING_SUPPORTED
4072 FORCEINLINE DWORD GetProfilerEvacuationCounter(void)
4074 LIMITED_METHOD_CONTRACT;
4075 return m_dwProfilerEvacuationCounter;
4078 FORCEINLINE void IncProfilerEvacuationCounter(void)
4080 LIMITED_METHOD_CONTRACT;
4081 m_dwProfilerEvacuationCounter++;
4082 _ASSERTE(m_dwProfilerEvacuationCounter != 0U);
4085 FORCEINLINE void DecProfilerEvacuationCounter(void)
4087 LIMITED_METHOD_CONTRACT;
4088 _ASSERTE(m_dwProfilerEvacuationCounter != 0U);
4089 m_dwProfilerEvacuationCounter--;
4092 #endif // PROFILING_SUPPORTED
4094 //-------------------------------------------------------------------------
4095 // The hijack lock enforces that a thread on which a profiler is currently
4096 // performing a stack walk cannot be hijacked.
4098 // Note that the hijack lock cannot be managed by the host (i.e., this
4099 // cannot be a Crst), because this could lead to a deadlock: YieldTask,
4100 // which is called by the host, may need to hijack, for which it would
4101 // need to take this lock - but since the host needs not be reentrant,
4102 // taking the lock cannot cause a call back into the host.
4103 //-------------------------------------------------------------------------
4104 static BOOL EnterHijackLock(Thread *pThread)
4106 LIMITED_METHOD_CONTRACT;
4108 return ::InterlockedCompareExchange(&(pThread->m_hijackLock), TRUE, FALSE) == FALSE;
4111 static void LeaveHijackLock(Thread *pThread)
4113 LIMITED_METHOD_CONTRACT;
4115 pThread->m_hijackLock = FALSE;
4118 typedef ConditionalStateHolder<Thread *, Thread::EnterHijackLock, Thread::LeaveHijackLock> HijackLockHolder;
4119 //-------------------------------------------------------------------------
4121 static bool ThreadsAtUnsafePlaces(void)
4123 LIMITED_METHOD_CONTRACT;
4125 return (m_threadsAtUnsafePlaces != (LONG)0);
4128 static void IncThreadsAtUnsafePlaces(void)
4130 LIMITED_METHOD_CONTRACT;
4131 InterlockedIncrement(&m_threadsAtUnsafePlaces);
4134 static void DecThreadsAtUnsafePlaces(void)
4136 LIMITED_METHOD_CONTRACT;
4137 InterlockedDecrement(&m_threadsAtUnsafePlaces);
4140 void PrepareForEERestart(BOOL SuspendSucceeded)
4142 WRAPPER_NO_CONTRACT;
4144 #ifdef FEATURE_HIJACK
4145 // Only unhijack the thread if the suspend succeeded. If it failed,
4146 // the target thread may currently be using the original stack
4147 // location of the return address for something else.
4148 if (SuspendSucceeded)
4150 #endif // FEATURE_HIJACK
4152 ResetThreadState(TS_GCSuspendPending);
4155 void SetDebugCantStop(bool fCantStop);
4156 bool GetDebugCantStop(void);
4158 static LPVOID GetStaticFieldAddress(FieldDesc *pFD);
4159 TADDR GetStaticFieldAddrNoCreate(FieldDesc *pFD);
4161 void SetLoadingFile(DomainFile *pFile)
4163 LIMITED_METHOD_CONTRACT;
4164 CONSISTENCY_CHECK(m_pLoadingFile == NULL);
4165 m_pLoadingFile = pFile;
4168 void ClearLoadingFile()
4170 LIMITED_METHOD_CONTRACT;
4171 m_pLoadingFile = NULL;
4174 DomainFile *GetLoadingFile()
4176 LIMITED_METHOD_CONTRACT;
4177 return m_pLoadingFile;
4181 static void LoadingFileRelease(Thread *pThread)
4183 WRAPPER_NO_CONTRACT;
4184 pThread->ClearLoadingFile();
4188 typedef Holder<Thread *, DoNothing, Thread::LoadingFileRelease> LoadingFileHolder;
4191 // Don't allow a thread to be asynchronously stopped or interrupted (e.g. because
4192 // it is performing a <clinit>)
4195 int m_nNestedMarshalingExceptions;
4196 BOOL IsMarshalingException()
4198 LIMITED_METHOD_CONTRACT;
4199 return (m_nNestedMarshalingExceptions != 0);
4201 int StartedMarshalingException()
4203 LIMITED_METHOD_CONTRACT;
4204 return m_nNestedMarshalingExceptions++;
4206 void FinishedMarshalingException()
4208 LIMITED_METHOD_CONTRACT;
4209 _ASSERTE(m_nNestedMarshalingExceptions > 0);
4210 m_nNestedMarshalingExceptions--;
4213 static LONG m_DebugWillSyncCount;
4215 // IP cache used by QueueCleanupIP.
4216 #define CLEANUP_IPS_PER_CHUNK 4
4218 IUnknown *m_Slots[CLEANUP_IPS_PER_CHUNK];
4220 CleanupIPs() {LIMITED_METHOD_CONTRACT; memset(this, 0, sizeof(*this)); }
4222 CleanupIPs m_CleanupIPs;
4224 #define BEGIN_FORBID_TYPELOAD() _ASSERTE_IMPL((GetThreadNULLOk() == 0) || ++GetThreadNULLOk()->m_ulForbidTypeLoad)
4225 #define END_FORBID_TYPELOAD() _ASSERTE_IMPL((GetThreadNULLOk() == 0) || GetThreadNULLOk()->m_ulForbidTypeLoad--)
4226 #define TRIGGERS_TYPELOAD() _ASSERTE_IMPL((GetThreadNULLOk() == 0) || !GetThreadNULLOk()->m_ulForbidTypeLoad)
4230 DWORD m_GCOnTransitionsOK;
4231 ULONG m_ulForbidTypeLoad;
4234 /****************************************************************************/
4235 /* The code below an attempt to catch people who don't protect GC pointers that
4236 they should be protecting. Basically, OBJECTREF's constructor, adds the slot
4237 to a table. When we protect a slot, we remove it from the table. When GC
4238 could happen, all entries in the table are marked as bad. When access to
4239 an OBJECTREF happens (the -> operator) we assert the slot is not bad. To make
4240 this fast, the table is not perfect (there can be collisions), but this should
4241 not cause false positives, but it may allow errors to go undetected */
4244 #define OBJREF_HASH_SHIFT_AMOUNT 3
4246 #define OBJREF_HASH_SHIFT_AMOUNT 2
4249 // For debugging, you may want to make this number very large, (8K)
4250 // should basically insure that no collisions happen
4251 #define OBJREF_TABSIZE 256
4252 DWORD_PTR dangerousObjRefs[OBJREF_TABSIZE]; // Really objectRefs with lower bit stolen
4253 // m_allObjRefEntriesBad is TRUE iff dangerousObjRefs are all marked as GC happened
4254 // It's purely a perf optimization for debug builds that'll help for the cases where we make 2 successive calls
4255 // to Thread::TriggersGC. In that case, the entire array doesn't need to be walked and marked, since we just did
4257 BOOL m_allObjRefEntriesBad;
4259 static DWORD_PTR OBJREF_HASH;
4260 // Remembers that this object ref pointer is 'alive' and unprotected (Bad if GC happens)
4261 static void ObjectRefNew(const OBJECTREF* ref) {
4262 WRAPPER_NO_CONTRACT;
4263 Thread * curThread = GetThreadNULLOk();
4264 if (curThread == 0) return;
4266 curThread->dangerousObjRefs[((size_t)ref >> OBJREF_HASH_SHIFT_AMOUNT) % OBJREF_HASH] = (size_t)ref;
4267 curThread->m_allObjRefEntriesBad = FALSE;
4270 static void ObjectRefAssign(const OBJECTREF* ref) {
4271 WRAPPER_NO_CONTRACT;
4272 Thread * curThread = GetThreadNULLOk();
4273 if (curThread == 0) return;
4275 curThread->m_allObjRefEntriesBad = FALSE;
4276 DWORD_PTR* slot = &curThread->dangerousObjRefs[((DWORD_PTR) ref >> OBJREF_HASH_SHIFT_AMOUNT) % OBJREF_HASH];
4277 if ((*slot & ~3) == (size_t) ref)
4278 *slot = *slot & ~1; // Don't care about GC's that have happened
4281 // If an object is protected, it can be removed from the 'dangerous table'
4282 static void ObjectRefProtected(const OBJECTREF* ref) {
4283 #ifdef USE_CHECKED_OBJECTREFS
4284 WRAPPER_NO_CONTRACT;
4285 _ASSERTE(IsObjRefValid(ref));
4286 Thread * curThread = GetThreadNULLOk();
4287 if (curThread == 0) return;
4289 curThread->m_allObjRefEntriesBad = FALSE;
4290 DWORD_PTR* slot = &curThread->dangerousObjRefs[((DWORD_PTR) ref >> OBJREF_HASH_SHIFT_AMOUNT) % OBJREF_HASH];
4291 if ((*slot & ~3) == (DWORD_PTR) ref)
4292 *slot = (size_t) ref | 2; // mark has being protected
4294 LIMITED_METHOD_CONTRACT;
4298 static bool IsObjRefValid(const OBJECTREF* ref) {
4299 WRAPPER_NO_CONTRACT;
4300 Thread * curThread = GetThreadNULLOk();
4301 if (curThread == 0) return(true);
4303 // If the object ref is NULL, we'll let it pass.
4304 if (*((DWORD_PTR*) ref) == 0)
4307 DWORD_PTR val = curThread->dangerousObjRefs[((DWORD_PTR) ref >> OBJREF_HASH_SHIFT_AMOUNT) % OBJREF_HASH];
4308 // if not in the table, or not the case that it was unprotected and GC happened, return true.
4309 if((val & ~3) != (size_t) ref || (val & 3) != 1)
4311 // If the pointer lives in the GC heap, than it is protected, and thus valid.
4312 if (dac_cast<TADDR>(g_lowest_address) <= val && val < dac_cast<TADDR>(g_highest_address))
4317 // Clears the table. Useful to do when crossing the managed-code - EE boundary
4318 // as you ususally only care about OBJECTREFS that have been created after that
4319 static void STDCALL ObjectRefFlush(Thread* thread);
4322 #ifdef ENABLE_CONTRACTS_IMPL
4323 // Marks all Objrefs in the table as bad (since they are unprotected)
4324 static void TriggersGC(Thread* thread) {
4325 WRAPPER_NO_CONTRACT;
4326 if ((GCViolation|BadDebugState) & (UINT_PTR)(GetViolationMask()))
4330 if (!thread->m_allObjRefEntriesBad)
4332 thread->m_allObjRefEntriesBad = TRUE;
4333 for(unsigned i = 0; i < OBJREF_TABSIZE; i++)
4334 thread->dangerousObjRefs[i] |= 1; // mark all slots as GC happened
4337 #endif // ENABLE_CONTRACTS_IMPL
4342 PTR_CONTEXT m_pSavedRedirectContext;
4344 BOOL IsContextSafeToRedirect(T_CONTEXT* pContext);
4347 PT_CONTEXT GetSavedRedirectContext()
4349 LIMITED_METHOD_CONTRACT;
4350 return (m_pSavedRedirectContext);
4353 #ifndef DACCESS_COMPILE
4354 void SetSavedRedirectContext(PT_CONTEXT pCtx)
4356 LIMITED_METHOD_CONTRACT;
4357 m_pSavedRedirectContext = pCtx;
4361 void EnsurePreallocatedContext();
4363 ThreadLocalBlock m_ThreadLocalBlock;
4365 // Called during AssemblyLoadContext teardown to clean up all structures
4366 // associated with thread statics for the specific Module
4367 void DeleteThreadStaticData(ModuleIndex index);
4371 // Called during Thread death to clean up all structures
4372 // associated with thread statics
4373 void DeleteThreadStaticData();
4377 // When we create an object, or create an OBJECTREF, or create an Interior Pointer, or enter EE from managed
4378 // code, we will set this flag.
4379 // Inside GCHeapUtilities::StressHeap, we only do GC if this flag is TRUE. Then we reset it to zero.
4380 BOOL m_fStressHeapCount;
4382 void EnableStressHeap()
4384 LIMITED_METHOD_CONTRACT;
4385 m_fStressHeapCount = TRUE;
4387 void DisableStressHeap()
4389 LIMITED_METHOD_CONTRACT;
4390 m_fStressHeapCount = FALSE;
4392 BOOL StressHeapIsEnabled()
4394 LIMITED_METHOD_CONTRACT;
4395 return m_fStressHeapCount;
4398 size_t *m_pCleanedStackBase;
4401 #ifdef DACCESS_COMPILE
4403 void EnumMemoryRegions(CLRDataEnumMemoryFlags flags);
4404 void EnumMemoryRegionsWorker(CLRDataEnumMemoryFlags flags);
4408 // Is the current thread currently executing within a constrained execution region?
4409 static BOOL IsExecutingWithinCer();
4411 // Determine whether the method at the given frame in the thread's execution stack is executing within a CER.
4412 BOOL IsWithinCer(CrawlFrame *pCf);
4415 // used to pad stack on thread creation to avoid aliasing penalty in P4 HyperThread scenarios
4417 static DWORD WINAPI intermediateThreadProc(PVOID arg);
4418 static int m_offset_counter;
4419 static const int offset_multiplier = 128;
4422 LPTHREAD_START_ROUTINE lpThreadFunction;
4424 } intermediateThreadParam;
4427 // when the thread is doing a stressing GC, some Crst violation could be ignored, by a non-elegant solution.
4429 BOOL m_bGCStressing; // the flag to indicate if the thread is doing a stressing GC
4430 BOOL m_bUniqueStacking; // the flag to indicate if the thread is doing a UniqueStack
4432 BOOL GetGCStressing ()
4434 return m_bGCStressing;
4436 BOOL GetUniqueStacking ()
4438 return m_bUniqueStacking;
4443 //-----------------------------------------------------------------------------
4444 // AVInRuntimeImplOkay : its okay to have an AV in Runtime implemetation while
4445 // this holder is in effect.
4448 // AVInRuntimeImplOkayHolder foo();
4449 // } // make AV's in the Runtime illegal on out of scope.
4450 //-----------------------------------------------------------------------------
4451 DWORD m_dwAVInRuntimeImplOkayCount;
4453 static void AVInRuntimeImplOkayAcquire(Thread * pThread)
4455 LIMITED_METHOD_CONTRACT;
4459 _ASSERTE(pThread->m_dwAVInRuntimeImplOkayCount != (DWORD)-1);
4460 pThread->m_dwAVInRuntimeImplOkayCount++;
4464 static void AVInRuntimeImplOkayRelease(Thread * pThread)
4466 LIMITED_METHOD_CONTRACT;
4470 _ASSERTE(pThread->m_dwAVInRuntimeImplOkayCount > 0);
4471 pThread->m_dwAVInRuntimeImplOkayCount--;
4476 static BOOL AVInRuntimeImplOkay(void)
4478 LIMITED_METHOD_CONTRACT;
4480 Thread * pThread = GetThreadNULLOk();
4484 return (pThread->m_dwAVInRuntimeImplOkayCount > 0);
4492 class AVInRuntimeImplOkayHolder
4494 Thread * const m_pThread;
4496 AVInRuntimeImplOkayHolder() :
4497 m_pThread(GetThread())
4499 LIMITED_METHOD_CONTRACT;
4500 AVInRuntimeImplOkayAcquire(m_pThread);
4502 AVInRuntimeImplOkayHolder(Thread * pThread) :
4505 LIMITED_METHOD_CONTRACT;
4506 AVInRuntimeImplOkayAcquire(m_pThread);
4508 ~AVInRuntimeImplOkayHolder()
4510 LIMITED_METHOD_CONTRACT;
4511 AVInRuntimeImplOkayRelease(m_pThread);
4517 DWORD m_dwUnbreakableLockCount;
4519 void IncUnbreakableLockCount()
4521 LIMITED_METHOD_CONTRACT;
4522 _ASSERTE (m_dwUnbreakableLockCount != (DWORD)-1);
4523 m_dwUnbreakableLockCount ++;
4525 void DecUnbreakableLockCount()
4527 LIMITED_METHOD_CONTRACT;
4528 _ASSERTE (m_dwUnbreakableLockCount > 0);
4529 m_dwUnbreakableLockCount --;
4531 BOOL HasUnbreakableLock() const
4533 LIMITED_METHOD_CONTRACT;
4534 return m_dwUnbreakableLockCount != 0;
4536 DWORD GetUnbreakableLockCount() const
4538 LIMITED_METHOD_CONTRACT;
4539 return m_dwUnbreakableLockCount;
4545 friend class FCallTransitionState;
4546 friend class PermitHelperMethodFrameState;
4547 friend class CompletedFCallTransitionState;
4548 HelperMethodFrameCallerList *m_pHelperMethodFrameCallerList;
4552 LONG m_dwHostTaskRefCount;
4555 // If HasStarted fails, we cache the exception here, and rethrow on the thread which
4556 // calls Thread.Start.
4557 Exception* m_pExceptionDuringStartup;
4560 void HandleThreadStartupFailure();
4566 #if defined(GCCOVER_TOLERATE_SPURIOUS_AV)
4567 LPVOID m_pLastAVAddress;
4568 #endif // defined(GCCOVER_TOLERATE_SPURIOUS_AV)
4571 void CommitGCStressInstructionUpdate();
4572 void PostGCStressInstructionUpdate(BYTE* pbDestCode, BYTE* pbSrcCode)
4574 LIMITED_METHOD_CONTRACT;
4575 PRECONDITION(!HasPendingGCStressInstructionUpdate());
4577 VolatileStoreWithoutBarrier<BYTE*>(&m_pbSrcCode, pbSrcCode);
4578 VolatileStore<BYTE*>(&m_pbDestCode, pbDestCode);
4580 bool HasPendingGCStressInstructionUpdate()
4582 LIMITED_METHOD_CONTRACT;
4583 BYTE* dest = VolatileLoad(&m_pbDestCode);
4584 return dest != NULL;
4586 bool TryClearGCStressInstructionUpdate(BYTE** ppbDestCode, BYTE** ppbSrcCode)
4588 LIMITED_METHOD_CONTRACT;
4589 bool result = false;
4591 if(HasPendingGCStressInstructionUpdate())
4593 *ppbDestCode = FastInterlockExchangePointer(&m_pbDestCode, NULL);
4595 if(*ppbDestCode != NULL)
4598 *ppbSrcCode = FastInterlockExchangePointer(&m_pbSrcCode, NULL);
4600 CONSISTENCY_CHECK(*ppbSrcCode != NULL);
4605 #if defined(GCCOVER_TOLERATE_SPURIOUS_AV)
4606 void SetLastAVAddress(LPVOID address)
4608 LIMITED_METHOD_CONTRACT;
4609 m_pLastAVAddress = address;
4611 LPVOID GetLastAVAddress()
4613 LIMITED_METHOD_CONTRACT;
4614 return m_pLastAVAddress;
4616 #endif // defined(GCCOVER_TOLERATE_SPURIOUS_AV)
4617 #endif // HAVE_GCCOVER
4620 BOOL m_fCompletionPortDrained;
4622 void MarkCompletionPortDrained()
4624 LIMITED_METHOD_CONTRACT;
4625 FastInterlockExchange ((LONG*)&m_fCompletionPortDrained, TRUE);
4627 void UnmarkCompletionPortDrained()
4629 LIMITED_METHOD_CONTRACT;
4630 FastInterlockExchange ((LONG*)&m_fCompletionPortDrained, FALSE);
4632 BOOL IsCompletionPortDrained()
4634 LIMITED_METHOD_CONTRACT;
4635 return m_fCompletionPortDrained;
4638 // --------------------------------
4639 // Store the maxReservedStackSize
4640 // This is passed in from managed code in the thread constructor
4641 // ---------------------------------
4643 SIZE_T m_RequestedStackSize;
4647 // Get the MaxStackSize
4648 SIZE_T RequestedThreadStackSize()
4650 LIMITED_METHOD_CONTRACT;
4651 return (m_RequestedStackSize);
4654 // Set the MaxStackSize
4655 void RequestedThreadStackSize(SIZE_T requestedStackSize)
4657 LIMITED_METHOD_CONTRACT;
4658 m_RequestedStackSize = requestedStackSize;
4661 static BOOL CheckThreadStackSize(SIZE_T *SizeToCommitOrReserve,
4662 BOOL isSizeToReserve // When TRUE, the previous argument is the stack size to reserve.
4663 // Otherwise, it is the size to commit.
4666 static BOOL GetProcessDefaultStackSize(SIZE_T* reserveSize, SIZE_T* commitSize);
4670 // Although this is a pointer, it is used as a flag to indicate the current context is unsafe
4671 // to inspect. When NULL the context is safe to use, otherwise it points to the active patch skipper
4672 // and the context is unsafe to use. When running a patch skipper we could be in one of two
4673 // debug-only situations that the context inspecting/modifying code isn't generally prepared
4675 // a) We have set the IP to point somewhere in the patch skip table but have not yet run the
4677 // b) We executed the instruction in the patch skip table and now the IP could be anywhere
4678 // The debugger may need to fix up the IP to compensate for the instruction being run
4679 // from a different address.
4680 VolatilePtr<DebuggerPatchSkip> m_debuggerActivePatchSkipper;
4683 VOID BeginDebuggerPatchSkip(DebuggerPatchSkip* patchSkipper)
4685 LIMITED_METHOD_CONTRACT;
4686 _ASSERTE(!m_debuggerActivePatchSkipper.Load());
4687 FastInterlockExchangePointer(m_debuggerActivePatchSkipper.GetPointer(), patchSkipper);
4688 _ASSERTE(m_debuggerActivePatchSkipper.Load());
4691 VOID EndDebuggerPatchSkip()
4693 LIMITED_METHOD_CONTRACT;
4694 _ASSERTE(m_debuggerActivePatchSkipper.Load());
4695 FastInterlockExchangePointer(m_debuggerActivePatchSkipper.GetPointer(), NULL);
4696 _ASSERTE(!m_debuggerActivePatchSkipper.Load());
4701 static BOOL EnterWorkingOnThreadContext(Thread *pThread)
4703 LIMITED_METHOD_CONTRACT;
4705 if(pThread->m_debuggerActivePatchSkipper.Load() != NULL)
4712 static void LeaveWorkingOnThreadContext(Thread *pThread)
4714 LIMITED_METHOD_CONTRACT;
4717 typedef ConditionalStateHolder<Thread *, Thread::EnterWorkingOnThreadContext, Thread::LeaveWorkingOnThreadContext> WorkingOnThreadContextHolder;
4720 void PrepareThreadForSOWork()
4722 WRAPPER_NO_CONTRACT;
4724 #ifdef FEATURE_HIJACK
4726 #endif // FEATURE_HIJACK
4728 ResetThrowControlForThread();
4730 // Since this Thread has taken an SO, there may be state left-over after we
4731 // short-circuited exception or other error handling, and so we don't want
4732 // to risk recycling it.
4733 SetThreadStateNC(TSNC_CannotRecycle);
4736 void SetSOWorkNeeded()
4738 SetThreadStateNC(TSNC_SOWorkNeeded);
4741 BOOL IsSOWorkNeeded()
4743 return HasThreadStateNC(TSNC_SOWorkNeeded);
4746 void FinishSOWork();
4748 void ClearExceptionStateAfterSO(void* pStackFrameSP)
4750 WRAPPER_NO_CONTRACT;
4752 // Clear any stale exception state.
4753 m_ExceptionState.ClearExceptionStateAfterSO(pStackFrameSP);
4757 BOOL m_fAllowProfilerCallbacks;
4761 // These two methods are for profiler support. The profiler clears the allowed
4762 // value once it has delivered a ThreadDestroyed callback, so that it does not
4763 // deliver any notifications to the profiler afterwards which reference this
4764 // thread. Callbacks on this thread which do not reference this thread are
4767 BOOL ProfilerCallbacksAllowed(void)
4769 return m_fAllowProfilerCallbacks;
4772 void SetProfilerCallbacksAllowed(BOOL fValue)
4774 m_fAllowProfilerCallbacks = fValue;
4779 //This context is used for optimizations on I/O thread pool thread. In case the
4780 //overlapped structure is from a different appdomain, it is stored in this structure
4781 //to be processed later correctly by entering the right domain.
4782 PVOID m_pIOCompletionContext;
4783 BOOL AllocateIOCompletionContext();
4784 VOID FreeIOCompletionContext();
4786 inline PVOID GetIOCompletionContext()
4788 return m_pIOCompletionContext;
4792 // Inside a host, we don't own a thread handle, and we avoid DuplicateHandle call.
4793 // If a thread is dying after we obtain the thread handle, our SuspendThread may fail
4794 // because the handle may be closed and reused for a completely different type of handle.
4795 // To solve this problem, we have a counter m_dwThreadHandleBeingUsed. Before we grab
4796 // the thread handle, we increment the counter. Before we return a thread back to SQL
4797 // in Reset and ExitTask, we wait until the counter drops to 0.
4798 Volatile<LONG> m_dwThreadHandleBeingUsed;
4802 static BOOL s_fCleanFinalizedThread;
4805 #ifndef DACCESS_COMPILE
4806 static void SetCleanupNeededForFinalizedThread()
4808 LIMITED_METHOD_CONTRACT;
4809 _ASSERTE (IsFinalizerThread());
4810 s_fCleanFinalizedThread = TRUE;
4812 #endif //!DACCESS_COMPILE
4814 static BOOL CleanupNeededForFinalizedThread()
4816 LIMITED_METHOD_CONTRACT;
4817 return s_fCleanFinalizedThread;
4821 // When we create throwable for an exception, we need to run managed code.
4822 // If the same type of exception is thrown while creating managed object, like InvalidProgramException,
4823 // we may be in an infinite recursive case.
4824 Exception *m_pCreatingThrowableForException;
4825 friend OBJECTREF CLRException::GetThrowable();
4829 int m_dwDisableAbortCheckCount; // Disable check before calling managed code.
4830 // !!! Use this very carefully. If managed code runs user code
4831 // !!! or blocks on locks, the thread may not be aborted.
4833 static void DisableAbortCheck()
4835 WRAPPER_NO_CONTRACT;
4836 Thread *pThread = GetThread();
4837 FastInterlockIncrement((LONG*)&pThread->m_dwDisableAbortCheckCount);
4839 static void EnableAbortCheck()
4841 WRAPPER_NO_CONTRACT;
4842 Thread *pThread = GetThread();
4843 _ASSERTE (pThread->m_dwDisableAbortCheckCount > 0);
4844 FastInterlockDecrement((LONG*)&pThread->m_dwDisableAbortCheckCount);
4847 BOOL IsAbortCheckDisabled()
4849 return m_dwDisableAbortCheckCount > 0;
4852 typedef StateHolder<Thread::DisableAbortCheck, Thread::EnableAbortCheck> DisableAbortCheckHolder;
4856 // At the end of a catch, we may raise ThreadAbortException. If catch clause set IP to resume in the
4857 // corresponding try block, our exception system will execute the same catch clause again and again.
4858 // So we save reference to the clause post which TA was reraised, which is used in ExceptionTracker::ProcessManagedCallFrame
4859 // to make ThreadAbort proceed ahead instead of going in a loop.
4860 // This problem only happens on Win64 due to JIT64. The common scenario is VB's "On error resume next"
4861 #ifdef WIN64EXCEPTIONS
4862 DWORD m_dwIndexClauseForCatch;
4863 StackFrame m_sfEstablisherOfActualHandlerFrame;
4864 #endif // WIN64EXCEPTIONS
4867 // Holds per-thread information the debugger uses to expose locking information
4868 // See ThreadDebugBlockingInfo.h for more details
4869 ThreadDebugBlockingInfo DebugBlockingInfo;
4873 // Disables pumping and thread join in RCW creation
4874 bool m_fDisableComObjectEagerCleanup;
4876 // See ThreadStore::TriggerGCForDeadThreadsIfNecessary()
4877 bool m_fHasDeadThreadBeenConsideredForGCTrigger;
4882 CLRRandom* GetRandom() {return &m_random;}
4884 #ifdef FEATURE_COMINTEROP
4886 // Cookie returned from CoRegisterInitializeSpy
4887 ULARGE_INTEGER m_uliInitializeSpyCookie;
4889 // True if m_uliInitializeSpyCookie is valid
4890 bool m_fInitializeSpyRegistered;
4892 // The last STA COM context we saw - used to speed up RCW creation
4893 LPVOID m_pLastSTACtxCookie;
4896 inline void RevokeApartmentSpy();
4897 inline LPVOID GetLastSTACtxCookie(BOOL *pfNAContext);
4898 inline void SetLastSTACtxCookie(LPVOID pCtxCookie, BOOL fNAContext);
4899 #endif // FEATURE_COMINTEROP
4902 // This duplicates the ThreadType_GC bit stored in TLS (TlsIdx_ThreadType). It exists
4903 // so that any thread can query whether any other thread is a "GC Special" thread.
4904 // (In contrast, ::IsGCSpecialThread() only gives this info about the currently
4905 // executing thread.) The Profiling API uses this to determine whether it should
4906 // "hide" the thread from profilers. GC Special threads (in particular the bgc
4907 // thread) need to be hidden from profilers because the bgc thread creation path
4908 // occurs while the EE is suspended, and while the thread that's suspending the
4909 // runtime is waiting for the bgc thread to signal an event. The bgc thread cannot
4910 // switch to preemptive mode and call into a profiler at this time, or else a
4911 // deadlock will result when toggling back to cooperative mode (bgc thread toggling
4912 // to coop will block due to the suspension, and the thread suspending the runtime
4913 // continues to block waiting for the bgc thread to signal its creation events).
4914 // Furthermore, profilers have no need to be aware of GC special threads anyway,
4915 // since managed code never runs on them.
4919 // Profiling API uses this to determine whether it should hide this thread from the
4923 // GC calls this when creating special threads that also happen to have an EE Thread
4924 // object associated with them (e.g., the bgc thread).
4925 void SetGCSpecial(bool fGCSpecial);
4930 DWORD_PTR m_pAffinityMask;
4931 #endif // !FEATURE_PAL
4933 void ChooseThreadCPUGroupAffinity();
4934 void ClearThreadCPUGroupAffinity();
4937 // Per thread table used to implement allocation sampling.
4938 AllLoggedTypes * m_pAllLoggedTypes;
4941 AllLoggedTypes * GetAllocationSamplingTable()
4943 LIMITED_METHOD_CONTRACT;
4945 return m_pAllLoggedTypes;
4948 void SetAllocationSamplingTable(AllLoggedTypes * pAllLoggedTypes)
4950 LIMITED_METHOD_CONTRACT;
4952 // Assert if we try to set the m_pAllLoggedTypes to a non NULL value if it is already non-NULL.
4953 // This implies a memory leak.
4954 _ASSERTE(pAllLoggedTypes != NULL ? m_pAllLoggedTypes == NULL : TRUE);
4955 m_pAllLoggedTypes = pAllLoggedTypes;
4958 #ifdef FEATURE_PERFTRACING
4961 // SampleProfiler thread state. This is set on suspension and cleared before restart.
4962 // True if the thread was in cooperative mode. False if it was in preemptive when the suspension started.
4963 Volatile<ULONG> m_gcModeOnSuspension;
4965 // The activity ID for the current thread.
4966 // An activity ID of zero means the thread is not executing in the context of an activity.
4970 bool GetGCModeOnSuspension()
4972 LIMITED_METHOD_CONTRACT;
4973 return m_gcModeOnSuspension != 0U;
4976 void SaveGCModeOnSuspension()
4978 LIMITED_METHOD_CONTRACT;
4979 m_gcModeOnSuspension = m_fPreemptiveGCDisabled;
4982 void ClearGCModeOnSuspension()
4984 m_gcModeOnSuspension = 0;
4987 LPCGUID GetActivityId() const
4989 LIMITED_METHOD_CONTRACT;
4990 return &m_activityId;
4993 void SetActivityId(LPCGUID pActivityId)
4995 LIMITED_METHOD_CONTRACT;
4996 _ASSERTE(pActivityId != NULL);
4998 m_activityId = *pActivityId;
5000 #endif // FEATURE_PERFTRACING
5002 #ifdef FEATURE_HIJACK
5005 // By the time a frame is scanned by the runtime, m_pHijackReturnKind always
5006 // identifies the gc-ness of the return register(s)
5007 // If the ReturnKind information is not available from the GcInfo, the runtime
5008 // computes it using the return types's class handle.
5010 ReturnKind m_HijackReturnKind;
5014 ReturnKind GetHijackReturnKind()
5016 LIMITED_METHOD_CONTRACT;
5018 return m_HijackReturnKind;
5021 void SetHijackReturnKind(ReturnKind returnKind)
5023 LIMITED_METHOD_CONTRACT;
5025 m_HijackReturnKind = returnKind;
5027 #endif // FEATURE_HIJACK
5030 OBJECTHANDLE GetOrCreateDeserializationTracker();
5033 OBJECTHANDLE m_DeserializationTracker;
5036 static uint64_t dead_threads_non_alloc_bytes;
5038 #ifndef DACCESS_COMPILE
5040 class CurrentPrepareCodeConfigHolder
5043 Thread *const m_thread;
5045 PrepareCodeConfig *const m_config;
5049 CurrentPrepareCodeConfigHolder(Thread *thread, PrepareCodeConfig *config);
5050 ~CurrentPrepareCodeConfigHolder();
5054 PrepareCodeConfig *GetCurrentPrepareCodeConfig() const
5056 LIMITED_METHOD_CONTRACT;
5057 return m_currentPrepareCodeConfig;
5059 #endif // !DACCESS_COMPILE
5062 PrepareCodeConfig *m_currentPrepareCodeConfig;
5065 // End of class Thread
5067 typedef Thread::ForbidSuspendThreadHolder ForbidSuspendThreadHolder;
5068 typedef Thread::ThreadPreventAsyncHolder ThreadPreventAsyncHolder;
5069 typedef Thread::ThreadPreventAbortHolder ThreadPreventAbortHolder;
5071 // Combines ForBindSuspendThreadHolder and CrstHolder into one.
5072 class ForbidSuspendThreadCrstHolder
5075 // Note: member initialization is intentionally ordered.
5076 ForbidSuspendThreadCrstHolder(CrstBase * pCrst)
5077 : m_forbid_suspend_holder()
5078 , m_lock_holder(pCrst)
5079 { WRAPPER_NO_CONTRACT; }
5082 ForbidSuspendThreadHolder m_forbid_suspend_holder;
5083 CrstHolder m_lock_holder;
5086 ETaskType GetCurrentTaskType();
5090 typedef Thread::AVInRuntimeImplOkayHolder AVInRuntimeImplOkayHolder;
5092 BOOL RevertIfImpersonated(BOOL *bReverted, HANDLE *phToken);
5093 void UndoRevert(BOOL bReverted, HANDLE hToken);
5095 // ---------------------------------------------------------------------------
5097 // The ThreadStore manages all the threads in the system.
5099 // There is one ThreadStore in the system, available through
5100 // ThreadStore::m_pThreadStore.
5101 // ---------------------------------------------------------------------------
5103 typedef SList<Thread, false, PTR_Thread> ThreadList;
5106 // The ThreadStore is a singleton class
5107 #define CHECK_ONE_STORE() _ASSERTE(this == ThreadStore::s_pThreadStore);
5109 typedef DPTR(class ThreadStore) PTR_ThreadStore;
5110 typedef DPTR(class ExceptionTracker) PTR_ExceptionTracker;
5114 friend class Thread;
5115 friend class ThreadSuspend;
5116 friend Thread* SetupThread(BOOL);
5117 friend class AppDomain;
5118 #ifdef DACCESS_COMPILE
5119 friend class ClrDataAccess;
5120 friend Thread* __stdcall DacGetThread(ULONG32 osThreadID);
5127 static void InitThreadStore();
5128 static void LockThreadStore();
5129 static void UnlockThreadStore();
5131 // Add a Thread to the ThreadStore
5132 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
5133 static void AddThread(Thread *newThread, BOOL bRequiresTSL=TRUE);
5135 // RemoveThread finds the thread in the ThreadStore and discards it.
5136 static BOOL RemoveThread(Thread *target);
5138 static BOOL CanAcquireLock();
5140 // Transfer a thread from the unstarted to the started list.
5141 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
5142 static void TransferStartedThread(Thread *target, BOOL bRequiresTSL=TRUE);
5144 // Before using the thread list, be sure to take the critical section. Otherwise
5145 // it can change underneath you, perhaps leading to an exception after Remove.
5146 // Prev==NULL to get the first entry in the list.
5147 static Thread *GetAllThreadList(Thread *Prev, ULONG mask, ULONG bits);
5148 static Thread *GetThreadList(Thread *Prev);
5150 // Every EE process can lazily create a GUID that uniquely identifies it (for
5151 // purposes of remoting).
5152 const GUID &GetUniqueEEId();
5154 // We shut down the EE when the last non-background thread terminates. This event
5155 // is used to signal the main thread when this condition occurs.
5156 void WaitForOtherThreads();
5157 static void CheckForEEShutdown();
5158 CLREvent m_TerminationEvent;
5160 // Have all the foreground threads completed? In other words, can we release
5162 BOOL OtherThreadsComplete()
5164 LIMITED_METHOD_CONTRACT;
5165 _ASSERTE(m_ThreadCount - m_UnstartedThreadCount - m_DeadThreadCount - Thread::m_ActiveDetachCount + m_PendingThreadCount >= m_BackgroundThreadCount);
5167 return (m_ThreadCount - m_UnstartedThreadCount - m_DeadThreadCount
5168 - Thread::m_ActiveDetachCount + m_PendingThreadCount
5169 == m_BackgroundThreadCount);
5172 // If you want to trap threads re-entering the EE (be this for GC, or debugging,
5173 // or Thread.Suspend() or whatever, you need to TrapReturningThreads(TRUE). When
5174 // you are finished snagging threads, call TrapReturningThreads(FALSE). This
5175 // counts internally.
5177 // Of course, you must also fix RareDisablePreemptiveGC to do the right thing
5178 // when the trap occurs.
5179 static void TrapReturningThreads(BOOL yes);
5183 // Enter and leave the critical section around the thread store. Clients should
5184 // use LockThreadStore and UnlockThreadStore.
5188 // Critical section for adding and removing threads to the store
5191 // List of all the threads known to the ThreadStore (started & unstarted).
5192 ThreadList m_ThreadList;
5194 // m_ThreadCount is the count of all threads in m_ThreadList. This includes
5195 // background threads / unstarted threads / whatever.
5197 // m_UnstartedThreadCount is the subset of m_ThreadCount that have not yet been
5200 // m_BackgroundThreadCount is the subset of m_ThreadCount that have been started
5201 // but which are running in the background. So this is a misnomer in the sense
5202 // that unstarted background threads are not reflected in this count.
5204 // m_PendingThreadCount is used to solve a race condition. The main thread could
5205 // start another thread running and then exit. The main thread might then start
5206 // tearing down the EE before the new thread moves itself out of m_UnstartedThread-
5207 // Count in TransferUnstartedThread. This count is atomically bumped in
5208 // CreateNewThread, and atomically reduced within a locked thread store.
5210 // m_DeadThreadCount is the subset of m_ThreadCount which have died. The Win32
5211 // thread has disappeared, but something (like the exposed object) has kept the
5212 // refcount non-zero so we can't destruct yet.
5214 // m_MaxThreadCount is the maximum value of m_ThreadCount. ie. the largest number
5215 // of simultaneously active threads
5219 LONG m_MaxThreadCount;
5221 LONG ThreadCountInEE ()
5223 LIMITED_METHOD_CONTRACT;
5224 return m_ThreadCount;
5226 #if defined(_DEBUG) || defined(DACCESS_COMPILE)
5227 LONG MaxThreadCountInEE ()
5229 LIMITED_METHOD_CONTRACT;
5230 return m_MaxThreadCount;
5234 LONG m_UnstartedThreadCount;
5235 LONG m_BackgroundThreadCount;
5236 LONG m_PendingThreadCount;
5238 LONG m_DeadThreadCount;
5239 LONG m_DeadThreadCountForGCTrigger;
5240 bool m_TriggerGCForDeadThreads;
5243 // Space for the lazily-created GUID.
5247 // Even in the release product, we need to know what thread holds the lock on
5248 // the ThreadStore. This is so we never deadlock when the GC thread halts a
5249 // thread that holds this lock.
5250 Thread *m_HoldingThread;
5251 EEThreadId m_holderthreadid; // current holder (or NULL)
5254 static LONG s_DeadThreadCountThresholdForGCTrigger;
5255 static DWORD s_DeadThreadGCTriggerPeriodMilliseconds;
5256 static SIZE_T *s_DeadThreadGenerationCounts;
5260 static BOOL HoldingThreadStore()
5262 WRAPPER_NO_CONTRACT;
5263 // Note that GetThread() may be 0 if it is the debugger thread
5264 // or perhaps a concurrent GC thread.
5265 return HoldingThreadStore(GetThread());
5268 static BOOL HoldingThreadStore(Thread *pThread);
5270 #ifdef DACCESS_COMPILE
5271 static void EnumMemoryRegions(CLRDataEnumMemoryFlags flags);
5274 SPTR_DECL(ThreadStore, s_pThreadStore);
5278 BOOL DbgFindThread(Thread *target);
5279 LONG DbgBackgroundThreadCount()
5281 LIMITED_METHOD_CONTRACT;
5282 return m_BackgroundThreadCount;
5285 BOOL IsCrstForThreadStore (const CrstBase* const pCrstBase)
5287 LIMITED_METHOD_CONTRACT;
5288 return (void *)pCrstBase == (void*)&m_Crst;
5293 static CONTEXT *s_pOSContext;
5295 // We can not do any memory allocation after we suspend a thread in order ot
5296 // avoid deadlock situation.
5297 static void AllocateOSContext();
5298 static CONTEXT *GrabOSContext();
5301 // Thread abort needs to walk stack to decide if thread abort can proceed.
5302 // It is unsafe to crawl a stack of thread if the thread is OS-suspended which we do during
5303 // thread abort. For example, Thread T1 aborts thread T2. T2 is suspended by T1. Inside SQL
5304 // this means that no thread sharing the same scheduler with T2 can run. If T1 needs a lock which
5305 // is owned by one thread on the scheduler, T1 will wait forever.
5306 // Our solution is to move T2 to a safe point, resume it, and then do stack crawl.
5307 static CLREvent *s_pWaitForStackCrawlEvent;
5309 static void WaitForStackCrawlEvent()
5319 s_pWaitForStackCrawlEvent->Wait(INFINITE,FALSE);
5321 static void SetStackCrawlEvent()
5323 LIMITED_METHOD_CONTRACT;
5324 s_pWaitForStackCrawlEvent->Set();
5326 static void ResetStackCrawlEvent()
5328 LIMITED_METHOD_CONTRACT;
5329 s_pWaitForStackCrawlEvent->Reset();
5333 void IncrementDeadThreadCountForGCTrigger();
5334 void DecrementDeadThreadCountForGCTrigger();
5336 void OnMaxGenerationGCStarted();
5337 bool ShouldTriggerGCForDeadThreads();
5338 void TriggerGCForDeadThreadsIfNecessary();
5341 struct TSSuspendHelper {
5342 static void SetTrap() { ThreadStore::TrapReturningThreads(TRUE); }
5343 static void UnsetTrap() { ThreadStore::TrapReturningThreads(FALSE); }
5345 typedef StateHolder<TSSuspendHelper::SetTrap, TSSuspendHelper::UnsetTrap> TSSuspendHolder;
5347 typedef StateHolder<ThreadStore::LockThreadStore,ThreadStore::UnlockThreadStore> ThreadStoreLockHolder;
5351 // This class dispenses small thread ids for the thin lock mechanism.
5352 // Recently we started using this class to dispense domain neutral module IDs as well.
5356 DWORD m_highestId; // highest id given out so far
5357 SIZE_T m_recycleBin; // link list to chain all ids returning to us
5358 Crst m_Crst; // lock to protect our data structures
5359 DPTR(PTR_Thread) m_idToThread; // map thread ids to threads
5360 DWORD m_idToThreadCapacity; // capacity of the map
5362 #ifndef DACCESS_COMPILE
5363 void GrowIdToThread()
5373 DWORD newCapacity = m_idToThreadCapacity == 0 ? 16 : m_idToThreadCapacity*2;
5374 Thread **newIdToThread = new Thread*[newCapacity];
5376 newIdToThread[0] = NULL;
5378 for (DWORD i = 1; i < m_idToThreadCapacity; i++)
5380 newIdToThread[i] = m_idToThread[i];
5382 for (DWORD j = m_idToThreadCapacity; j < newCapacity; j++)
5384 newIdToThread[j] = NULL;
5386 delete[] m_idToThread;
5387 m_idToThread = newIdToThread;
5388 m_idToThreadCapacity = newCapacity;
5390 #endif // !DACCESS_COMPILE
5394 // NOTE: CRST_UNSAFE_ANYMODE prevents a GC mode switch when entering this crst.
5395 // If you remove this flag, we will switch to preemptive mode when entering
5396 // m_Crst, which means all functions that enter it will become
5397 // GC_TRIGGERS. (This includes all uses of CrstHolder.) So be sure
5398 // to update the contracts if you remove this flag.
5399 m_Crst(CrstThreadIdDispenser, CRST_UNSAFE_ANYMODE)
5401 WRAPPER_NO_CONTRACT;
5404 m_idToThreadCapacity = 0;
5405 m_idToThread = NULL;
5410 LIMITED_METHOD_CONTRACT;
5411 delete[] m_idToThread;
5414 bool IsValidId(DWORD id)
5416 LIMITED_METHOD_CONTRACT;
5417 return (id > 0) && (id <= m_highestId);
5420 #ifndef DACCESS_COMPILE
5421 void NewId(Thread *pThread, DWORD & newId)
5423 WRAPPER_NO_CONTRACT;
5425 CrstHolder ch(&m_Crst);
5427 if (m_recycleBin != 0)
5429 _ASSERTE(FitsIn<DWORD>(m_recycleBin));
5430 result = static_cast<DWORD>(m_recycleBin);
5431 m_recycleBin = reinterpret_cast<SIZE_T>(m_idToThread[m_recycleBin]);
5435 // we make sure ids don't wrap around - before they do, we always return the highest possible
5436 // one and rely on our caller to detect this situation
5437 if (m_highestId + 1 > m_highestId)
5438 m_highestId = m_highestId + 1;
5439 result = m_highestId;
5440 if (result >= m_idToThreadCapacity)
5444 _ASSERTE(result < m_idToThreadCapacity);
5446 if (result < m_idToThreadCapacity)
5447 m_idToThread[result] = pThread;
5449 #endif // !DACCESS_COMPILE
5451 #ifndef DACCESS_COMPILE
5452 void DisposeId(DWORD id)
5462 CrstHolder ch(&m_Crst);
5464 _ASSERTE(IsValidId(id));
5465 if (id == m_highestId)
5471 m_idToThread[id] = reinterpret_cast<PTR_Thread>(m_recycleBin);
5474 size_t index = (size_t)m_idToThread[id];
5477 _ASSERTE(index != id);
5478 index = (size_t)m_idToThread[index];
5483 #endif // !DACCESS_COMPILE
5485 Thread *IdToThread(DWORD id)
5487 LIMITED_METHOD_CONTRACT;
5488 CrstHolder ch(&m_Crst);
5490 Thread *result = NULL;
5491 if (id <= m_highestId)
5492 result = m_idToThread[id];
5493 // m_idToThread may have Thread*, or the next free slot
5494 _ASSERTE ((size_t)result > m_idToThreadCapacity);
5499 Thread *IdToThreadWithValidation(DWORD id)
5501 WRAPPER_NO_CONTRACT;
5503 CrstHolder ch(&m_Crst);
5505 Thread *result = NULL;
5506 if (id <= m_highestId)
5507 result = m_idToThread[id];
5508 // m_idToThread may have Thread*, or the next free slot
5509 if ((size_t)result <= m_idToThreadCapacity)
5511 _ASSERTE(result == NULL || ((size_t)result & 0x3) == 0 || ((Thread*)result)->GetThreadId() == id);
5515 typedef DPTR(IdDispenser) PTR_IdDispenser;
5517 #ifndef CROSSGEN_COMPILE
5519 // Dispenser of small thread ids for thin lock mechanism
5520 GPTR_DECL(IdDispenser,g_pThinLockThreadIdDispenser);
5522 // forward declaration
5523 DWORD MsgWaitHelper(int numWaiters, HANDLE* phEvent, BOOL bWaitAll, DWORD millis, BOOL alertable = FALSE);
5525 // When a thread is being created after a debug suspension has started, it sends an event up to the
5526 // debugger. Afterwards, with the Debugger Lock still held, it will check to see if we had already asked to suspend the
5527 // Runtime. If we have, then it will turn around and call this to set the debug suspend pending flag on the newly
5528 // created thread, since it was missed by SysStartSuspendForDebug as it didn't exist when that function was run.
5530 inline void Thread::MarkForDebugSuspend(void)
5532 WRAPPER_NO_CONTRACT;
5533 if (!(m_State & TS_DebugSuspendPending))
5535 FastInterlockOr((ULONG *) &m_State, TS_DebugSuspendPending);
5536 ThreadStore::TrapReturningThreads(TRUE);
5540 // Debugger per-thread flag for enabling notification on "manual"
5541 // method calls, for stepping logic.
5543 inline void Thread::IncrementTraceCallCount()
5545 WRAPPER_NO_CONTRACT;
5546 FastInterlockIncrement(&m_TraceCallCount);
5547 ThreadStore::TrapReturningThreads(TRUE);
5550 inline void Thread::DecrementTraceCallCount()
5552 WRAPPER_NO_CONTRACT;
5553 ThreadStore::TrapReturningThreads(FALSE);
5554 FastInterlockDecrement(&m_TraceCallCount);
5557 // When we enter an Object.Wait() we are logically inside the synchronized
5558 // region of that object. Of course, we've actually completely left the region,
5559 // or else nobody could Notify us. But if we throw ThreadInterruptedException to
5560 // break out of the Wait, all the catchers are going to expect the synchronized
5561 // state to be correct. So we carry it around in case we need to restore it.
5565 WaitEventLink *m_WaitEventLink;
5567 Thread *m_OwnerThread;
5570 PendingSync(WaitEventLink *s) : m_WaitEventLink(s)
5572 WRAPPER_NO_CONTRACT;
5574 m_OwnerThread = GetThread();
5577 void Restore(BOOL bRemoveFromSB);
5581 #define INCTHREADLOCKCOUNT() { }
5582 #define DECTHREADLOCKCOUNT() { }
5583 #define INCTHREADLOCKCOUNTTHREAD(thread) { }
5584 #define DECTHREADLOCKCOUNTTHREAD(thread) { }
5587 // --------------------------------------------------------------------------------
5588 // GCHolder is used to implement the normal GCX_ macros.
5590 // GCHolder is normally used indirectly through GCX_ convenience macros, but can be used
5591 // directly if needed (e.g. due to multiple holders in one scope, or to use
5592 // in class definitions).
5594 // GCHolder (or derived types) should only be instantiated as automatic variables
5595 // --------------------------------------------------------------------------------
5597 #ifdef ENABLE_CONTRACTS_IMPL
5598 #define GCHOLDER_CONTRACT_ARGS_NoDtor , false, szConstruct, szFunction, szFile, lineNum
5599 #define GCHOLDER_CONTRACT_ARGS_HasDtor , true, szConstruct, szFunction, szFile, lineNum
5600 #define GCHOLDER_DECLARE_CONTRACT_ARGS_BARE \
5601 const char * szConstruct = "Unknown" \
5602 , const char * szFunction = "Unknown" \
5603 , const char * szFile = "Unknown" \
5605 #define GCHOLDER_DECLARE_CONTRACT_ARGS , GCHOLDER_DECLARE_CONTRACT_ARGS_BARE
5606 #define GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL , bool fPushStackRecord = true, GCHOLDER_DECLARE_CONTRACT_ARGS_BARE
5608 #define GCHOLDER_SETUP_CONTRACT_STACK_RECORD(mode) \
5609 m_fPushedRecord = false; \
5611 if (fPushStackRecord && conditional) \
5613 m_pClrDebugState = GetClrDebugState(); \
5614 m_oldClrDebugState = *m_pClrDebugState; \
5616 m_pClrDebugState->ViolationMaskReset( ModeViolation ); \
5618 m_ContractStackRecord.m_szFunction = szFunction; \
5619 m_ContractStackRecord.m_szFile = szFile; \
5620 m_ContractStackRecord.m_lineNum = lineNum; \
5621 m_ContractStackRecord.m_testmask = \
5622 (Contract::ALL_Disabled & ~((UINT)(Contract::MODE_Mask))) \
5624 m_ContractStackRecord.m_construct = szConstruct; \
5625 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord ); \
5626 m_fPushedRecord = true; \
5628 #define GCHOLDER_CHECK_FOR_PREEMP_IN_NOTRIGGER(pThread) \
5629 if (pThread->GCNoTrigger()) \
5631 CONTRACT_ASSERT("Coop->preemp->coop switch attempted in a GC_NOTRIGGER scope", \
5632 Contract::GC_NoTrigger, \
5633 Contract::GC_Mask, \
5640 #define GCHOLDER_CONTRACT_ARGS_NoDtor
5641 #define GCHOLDER_CONTRACT_ARGS_HasDtor
5642 #define GCHOLDER_DECLARE_CONTRACT_ARGS_BARE
5643 #define GCHOLDER_DECLARE_CONTRACT_ARGS
5644 #define GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL
5645 #define GCHOLDER_SETUP_CONTRACT_STACK_RECORD(mode)
5646 #define GCHOLDER_CHECK_FOR_PREEMP_IN_NOTRIGGER(pThread)
5647 #endif // ENABLE_CONTRACTS_IMPL
5649 #ifndef DACCESS_COMPILE
5653 // NOTE: This method is FORCEINLINE'ed into its callers, but the callers are just the
5654 // corresponding methods in the derived types, not all sites that use GC holders. This
5655 // is done so that the #pragma optimize will take affect since the optimize settings
5656 // are taken from the template instantiation site, not the template definition site.
5657 template <BOOL THREAD_EXISTS>
5658 FORCEINLINE_NONDEBUG
5662 WRAPPER_NO_CONTRACT;
5664 #ifdef ENABLE_CONTRACTS_IMPL
5665 if (m_fPushedRecord)
5667 *m_pClrDebugState = m_oldClrDebugState;
5669 // Make sure that we're using the version of this template that matches the
5670 // invariant setup in EnterInternal{Coop|Preemp}{_HackNoThread}
5671 _ASSERTE(!!THREAD_EXISTS == m_fThreadMustExist);
5676 // m_WasCoop is only TRUE if we've already verified there's an EE thread.
5677 BEGIN_GETTHREAD_ALLOWED;
5679 _ASSERTE(m_Thread != NULL); // Cannot switch to cooperative with no thread
5680 if (!m_Thread->PreemptiveGCDisabled())
5681 m_Thread->DisablePreemptiveGC();
5683 END_GETTHREAD_ALLOWED;
5687 // Either we initialized m_Thread explicitly with GetThread() in the
5688 // constructor, or our caller (instantiator of GCHolder) called our constructor
5689 // with GetThread() (which we already asserted in the constuctor)
5690 // (i.e., m_Thread == GetThread()). Also, note that if THREAD_EXISTS,
5691 // then m_Thread must be non-null (as it's == GetThread()). So the
5692 // "if" below looks a little hokey since we're checking for either condition.
5693 // But the template param THREAD_EXISTS allows us to statically early-out
5694 // when it's TRUE, so we check it for perf.
5695 if (THREAD_EXISTS || m_Thread != NULL)
5697 BEGIN_GETTHREAD_ALLOWED;
5698 if (m_Thread->PreemptiveGCDisabled())
5699 m_Thread->EnablePreemptiveGC();
5700 END_GETTHREAD_ALLOWED;
5704 // If we have a thread then we assert that we ended up in the same state
5705 // which we started in.
5706 if (THREAD_EXISTS || m_Thread != NULL)
5708 _ASSERTE(!!m_WasCoop == !!(m_Thread->PreemptiveGCDisabled()));
5712 // NOTE: The rest of these methods are all FORCEINLINE so that the uses where 'conditional==true'
5713 // can have the if-checks removed by the compiler. The callers are just the corresponding methods
5714 // in the derived types, not all sites that use GC holders.
5717 // This is broken - there is a potential race with the GC thread. It is currently
5718 // used for a few cases where (a) we potentially haven't started up the EE yet, or
5719 // (b) we are on a "special thread". We need a real solution here though.
5720 FORCEINLINE_NONDEBUG
5721 void EnterInternalCoop_HackNoThread(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL)
5723 GCHOLDER_SETUP_CONTRACT_STACK_RECORD(Contract::MODE_Coop);
5725 m_Thread = GetThreadNULLOk();
5727 #ifdef ENABLE_CONTRACTS_IMPL
5728 m_fThreadMustExist = false;
5729 #endif // ENABLE_CONTRACTS_IMPL
5731 if (m_Thread != NULL)
5733 BEGIN_GETTHREAD_ALLOWED;
5734 m_WasCoop = m_Thread->PreemptiveGCDisabled();
5736 if (conditional && !m_WasCoop)
5738 m_Thread->DisablePreemptiveGC();
5739 _ASSERTE(m_Thread->PreemptiveGCDisabled());
5741 END_GETTHREAD_ALLOWED;
5749 FORCEINLINE_NONDEBUG
5750 void EnterInternalPreemp(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL)
5752 GCHOLDER_SETUP_CONTRACT_STACK_RECORD(Contract::MODE_Preempt);
5754 m_Thread = GetThreadNULLOk();
5756 #ifdef ENABLE_CONTRACTS_IMPL
5757 m_fThreadMustExist = false;
5758 if (m_Thread != NULL && conditional)
5760 BEGIN_GETTHREAD_ALLOWED;
5761 GCHOLDER_CHECK_FOR_PREEMP_IN_NOTRIGGER(m_Thread);
5762 END_GETTHREAD_ALLOWED;
5764 #endif // ENABLE_CONTRACTS_IMPL
5766 if (m_Thread != NULL)
5768 BEGIN_GETTHREAD_ALLOWED;
5769 m_WasCoop = m_Thread->PreemptiveGCDisabled();
5771 if (conditional && m_WasCoop)
5773 m_Thread->EnablePreemptiveGC();
5774 _ASSERTE(!m_Thread->PreemptiveGCDisabled());
5776 END_GETTHREAD_ALLOWED;
5784 FORCEINLINE_NONDEBUG
5785 void EnterInternalCoop(Thread *pThread, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL)
5787 // This is the perf version. So we deliberately restrict the calls
5788 // to already setup threads to avoid the null checks and GetThread call
5789 _ASSERTE(pThread && (pThread == GetThread()));
5790 #ifdef ENABLE_CONTRACTS_IMPL
5791 m_fThreadMustExist = true;
5792 #endif // ENABLE_CONTRACTS_IMPL
5794 GCHOLDER_SETUP_CONTRACT_STACK_RECORD(Contract::MODE_Coop);
5797 m_WasCoop = m_Thread->PreemptiveGCDisabled();
5798 if (conditional && !m_WasCoop)
5800 m_Thread->DisablePreemptiveGC();
5801 _ASSERTE(m_Thread->PreemptiveGCDisabled());
5805 template <BOOL THREAD_EXISTS>
5806 FORCEINLINE_NONDEBUG
5807 void EnterInternalPreemp(Thread *pThread, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL)
5809 // This is the perf version. So we deliberately restrict the calls
5810 // to already setup threads to avoid the null checks and GetThread call
5811 _ASSERTE(!THREAD_EXISTS || (pThread && (pThread == GetThread())));
5812 #ifdef ENABLE_CONTRACTS_IMPL
5813 m_fThreadMustExist = !!THREAD_EXISTS;
5814 #endif // ENABLE_CONTRACTS_IMPL
5816 GCHOLDER_SETUP_CONTRACT_STACK_RECORD(Contract::MODE_Preempt);
5820 if (THREAD_EXISTS || (m_Thread != NULL))
5822 GCHOLDER_CHECK_FOR_PREEMP_IN_NOTRIGGER(m_Thread);
5823 m_WasCoop = m_Thread->PreemptiveGCDisabled();
5824 if (conditional && m_WasCoop)
5826 m_Thread->EnablePreemptiveGC();
5827 _ASSERTE(!m_Thread->PreemptiveGCDisabled());
5838 BOOL m_WasCoop; // This is BOOL and not 'bool' because PreemptiveGCDisabled returns BOOL,
5839 // so the codegen is better if we don't have to convert to 'bool'.
5840 #ifdef ENABLE_CONTRACTS_IMPL
5841 bool m_fThreadMustExist; // used to validate that the proper Pop<THREAD_EXISTS> method is used
5842 bool m_fPushedRecord;
5843 ClrDebugState m_oldClrDebugState;
5844 ClrDebugState *m_pClrDebugState;
5845 ContractStackRecord m_ContractStackRecord;
5849 class GCCoopNoDtor : public GCHolderBase
5853 void Enter(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5855 WRAPPER_NO_CONTRACT;
5859 STATIC_CONTRACT_MODE_COOPERATIVE;
5861 // The thread must be non-null to enter MODE_COOP
5862 this->EnterInternalCoop(GetThread(), conditional GCHOLDER_CONTRACT_ARGS_NoDtor);
5868 WRAPPER_NO_CONTRACT;
5870 this->PopInternal<TRUE>(); // Thread must be non-NULL
5874 class GCPreempNoDtor : public GCHolderBase
5878 void Enter(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5883 STATIC_CONTRACT_MODE_PREEMPTIVE;
5886 this->EnterInternalPreemp(conditional GCHOLDER_CONTRACT_ARGS_NoDtor);
5890 void Enter(Thread * pThreadNullOk, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5895 STATIC_CONTRACT_MODE_PREEMPTIVE;
5898 this->EnterInternalPreemp<FALSE>( // Thread may be NULL
5899 pThreadNullOk, conditional GCHOLDER_CONTRACT_ARGS_NoDtor);
5906 this->PopInternal<FALSE>(); // Thread may be NULL
5910 class GCCoop : public GCHolderBase
5914 GCCoop(GCHOLDER_DECLARE_CONTRACT_ARGS_BARE)
5917 STATIC_CONTRACT_MODE_COOPERATIVE;
5919 // The thread must be non-null to enter MODE_COOP
5920 this->EnterInternalCoop(GetThread(), true GCHOLDER_CONTRACT_ARGS_HasDtor);
5924 GCCoop(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5929 STATIC_CONTRACT_MODE_COOPERATIVE;
5932 // The thread must be non-null to enter MODE_COOP
5933 this->EnterInternalCoop(GetThread(), conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
5940 this->PopInternal<TRUE>(); // Thread must be non-NULL
5944 // This is broken - there is a potential race with the GC thread. It is currently
5945 // used for a few cases where (a) we potentially haven't started up the EE yet, or
5946 // (b) we are on a "special thread". We need a real solution here though.
5947 class GCCoopHackNoThread : public GCHolderBase
5951 GCCoopHackNoThread(GCHOLDER_DECLARE_CONTRACT_ARGS_BARE)
5954 STATIC_CONTRACT_MODE_COOPERATIVE;
5956 this->EnterInternalCoop_HackNoThread(true GCHOLDER_CONTRACT_ARGS_HasDtor);
5960 GCCoopHackNoThread(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5965 STATIC_CONTRACT_MODE_COOPERATIVE;
5968 this->EnterInternalCoop_HackNoThread(conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
5972 ~GCCoopHackNoThread()
5975 this->PopInternal<FALSE>(); // Thread might be NULL
5979 class GCCoopThreadExists : public GCHolderBase
5983 GCCoopThreadExists(Thread * pThread GCHOLDER_DECLARE_CONTRACT_ARGS)
5986 STATIC_CONTRACT_MODE_COOPERATIVE;
5988 this->EnterInternalCoop(pThread, true GCHOLDER_CONTRACT_ARGS_HasDtor);
5992 GCCoopThreadExists(Thread * pThread, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5997 STATIC_CONTRACT_MODE_COOPERATIVE;
6000 this->EnterInternalCoop(pThread, conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
6004 ~GCCoopThreadExists()
6007 this->PopInternal<TRUE>(); // Thread must be non-NULL
6011 class GCPreemp : public GCHolderBase
6015 GCPreemp(GCHOLDER_DECLARE_CONTRACT_ARGS_BARE)
6018 STATIC_CONTRACT_MODE_PREEMPTIVE;
6020 this->EnterInternalPreemp(true GCHOLDER_CONTRACT_ARGS_HasDtor);
6024 GCPreemp(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
6029 STATIC_CONTRACT_MODE_PREEMPTIVE;
6032 this->EnterInternalPreemp(conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
6039 this->PopInternal<FALSE>(); // Thread may be NULL
6043 class GCPreempThreadExists : public GCHolderBase
6047 GCPreempThreadExists(Thread * pThread GCHOLDER_DECLARE_CONTRACT_ARGS)
6050 STATIC_CONTRACT_MODE_PREEMPTIVE;
6052 this->EnterInternalPreemp<TRUE>( // Thread must be non-NULL
6053 pThread, true GCHOLDER_CONTRACT_ARGS_HasDtor);
6057 GCPreempThreadExists(Thread * pThread, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
6062 STATIC_CONTRACT_MODE_PREEMPTIVE;
6065 this->EnterInternalPreemp<TRUE>( // Thread must be non-NULL
6066 pThread, conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
6070 ~GCPreempThreadExists()
6073 this->PopInternal<TRUE>(); // Thread must be non-NULL
6076 #endif // DACCESS_COMPILE
6079 // --------------------------------------------------------------------------------
6080 // GCAssert is used to implement the assert GCX_ macros. Usage is similar to GCHolder.
6082 // GCAsserting for preemptive mode automatically passes on unmanaged threads.
6084 // Note that the assert is "2 sided"; it happens on entering and on leaving scope, to
6085 // help ensure mode integrity.
6087 // GCAssert is a noop in a free build
6088 // --------------------------------------------------------------------------------
6090 template<BOOL COOPERATIVE>
6094 DEBUG_NOINLINE void BeginGCAssert();
6095 DEBUG_NOINLINE void EndGCAssert()
6101 template<BOOL COOPERATIVE>
6102 class AutoCleanupGCAssert
6106 DEBUG_NOINLINE AutoCleanupGCAssert();
6108 DEBUG_NOINLINE ~AutoCleanupGCAssert()
6111 WRAPPER_NO_CONTRACT;
6112 // This is currently disabled; we currently have a lot of code which doesn't
6113 // back out the GC mode properly (instead relying on the EX_TRY macros.)
6115 // @todo enable this when we remove raw GC mode switching.
6122 FORCEINLINE void DoCheck()
6124 WRAPPER_NO_CONTRACT;
6125 Thread *pThread = GetThread();
6128 _ASSERTE(pThread != NULL);
6129 _ASSERTE(pThread->PreemptiveGCDisabled());
6133 _ASSERTE(pThread == NULL || !(pThread->PreemptiveGCDisabled()));
6140 // --------------------------------------------------------------------------------
6141 // GCForbid is used to add ForbidGC semantics to the current GC mode. Note that
6142 // it requires the thread to be in cooperative mode already.
6144 // GCForbid is a noop in a free build
6145 // --------------------------------------------------------------------------------
6146 #ifndef DACCESS_COMPILE
6147 class GCForbid : AutoCleanupGCAssert<TRUE>
6149 #ifdef ENABLE_CONTRACTS_IMPL
6151 DEBUG_NOINLINE GCForbid(BOOL fConditional, const char *szFunction, const char *szFile, int lineNum)
6156 STATIC_CONTRACT_MODE_COOPERATIVE;
6157 STATIC_CONTRACT_GC_NOTRIGGER;
6160 m_fConditional = fConditional;
6163 Thread *pThread = GetThread();
6164 m_pClrDebugState = pThread ? pThread->GetClrDebugState() : ::GetClrDebugState();
6165 m_oldClrDebugState = *m_pClrDebugState;
6167 m_pClrDebugState->ViolationMaskReset( GCViolation );
6169 GetThread()->BeginForbidGC(szFile, lineNum);
6171 m_ContractStackRecord.m_szFunction = szFunction;
6172 m_ContractStackRecord.m_szFile = (char*)szFile;
6173 m_ContractStackRecord.m_lineNum = lineNum;
6174 m_ContractStackRecord.m_testmask = (Contract::ALL_Disabled & ~((UINT)(Contract::GC_Mask))) | Contract::GC_NoTrigger;
6175 m_ContractStackRecord.m_construct = "GCX_FORBID";
6176 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord );
6180 DEBUG_NOINLINE GCForbid(const char *szFunction, const char *szFile, int lineNum)
6183 STATIC_CONTRACT_MODE_COOPERATIVE;
6184 STATIC_CONTRACT_GC_NOTRIGGER;
6186 m_fConditional = TRUE;
6188 Thread *pThread = GetThread();
6189 m_pClrDebugState = pThread ? pThread->GetClrDebugState() : ::GetClrDebugState();
6190 m_oldClrDebugState = *m_pClrDebugState;
6192 m_pClrDebugState->ViolationMaskReset( GCViolation );
6194 GetThread()->BeginForbidGC(szFile, lineNum);
6196 m_ContractStackRecord.m_szFunction = szFunction;
6197 m_ContractStackRecord.m_szFile = (char*)szFile;
6198 m_ContractStackRecord.m_lineNum = lineNum;
6199 m_ContractStackRecord.m_testmask = (Contract::ALL_Disabled & ~((UINT)(Contract::GC_Mask))) | Contract::GC_NoTrigger;
6200 m_ContractStackRecord.m_construct = "GCX_FORBID";
6201 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord );
6204 DEBUG_NOINLINE ~GCForbid()
6210 GetThread()->EndForbidGC();
6211 *m_pClrDebugState = m_oldClrDebugState;
6216 BOOL m_fConditional;
6217 ClrDebugState *m_pClrDebugState;
6218 ClrDebugState m_oldClrDebugState;
6219 ContractStackRecord m_ContractStackRecord;
6220 #endif // _DEBUG_IMPL
6222 #endif // !DACCESS_COMPILE
6224 // --------------------------------------------------------------------------------
6225 // GCNoTrigger is used to add NoTriggerGC semantics to the current GC mode. Unlike
6226 // GCForbid, it does not require a thread to be in cooperative mode.
6228 // GCNoTrigger is a noop in a free build
6229 // --------------------------------------------------------------------------------
6230 #ifndef DACCESS_COMPILE
6233 #ifdef ENABLE_CONTRACTS_IMPL
6235 DEBUG_NOINLINE GCNoTrigger(BOOL fConditional, const char *szFunction, const char *szFile, int lineNum)
6240 STATIC_CONTRACT_GC_NOTRIGGER;
6243 m_fConditional = fConditional;
6247 Thread * pThread = GetThreadNULLOk();
6248 m_pClrDebugState = pThread ? pThread->GetClrDebugState() : ::GetClrDebugState();
6249 m_oldClrDebugState = *m_pClrDebugState;
6251 m_pClrDebugState->ViolationMaskReset( GCViolation );
6253 if (pThread != NULL)
6255 pThread->BeginNoTriggerGC(szFile, lineNum);
6258 m_ContractStackRecord.m_szFunction = szFunction;
6259 m_ContractStackRecord.m_szFile = (char*)szFile;
6260 m_ContractStackRecord.m_lineNum = lineNum;
6261 m_ContractStackRecord.m_testmask = (Contract::ALL_Disabled & ~((UINT)(Contract::GC_Mask))) | Contract::GC_NoTrigger;
6262 m_ContractStackRecord.m_construct = "GCX_NOTRIGGER";
6263 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord );
6267 DEBUG_NOINLINE GCNoTrigger(const char *szFunction, const char *szFile, int lineNum)
6270 STATIC_CONTRACT_GC_NOTRIGGER;
6272 m_fConditional = TRUE;
6274 Thread * pThread = GetThreadNULLOk();
6275 m_pClrDebugState = pThread ? pThread->GetClrDebugState() : ::GetClrDebugState();
6276 m_oldClrDebugState = *m_pClrDebugState;
6278 m_pClrDebugState->ViolationMaskReset( GCViolation );
6280 if (pThread != NULL)
6282 pThread->BeginNoTriggerGC(szFile, lineNum);
6285 m_ContractStackRecord.m_szFunction = szFunction;
6286 m_ContractStackRecord.m_szFile = (char*)szFile;
6287 m_ContractStackRecord.m_lineNum = lineNum;
6288 m_ContractStackRecord.m_testmask = (Contract::ALL_Disabled & ~((UINT)(Contract::GC_Mask))) | Contract::GC_NoTrigger;
6289 m_ContractStackRecord.m_construct = "GCX_NOTRIGGER";
6290 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord );
6293 DEBUG_NOINLINE ~GCNoTrigger()
6299 Thread * pThread = GetThreadNULLOk();
6302 pThread->EndNoTriggerGC();
6304 *m_pClrDebugState = m_oldClrDebugState;
6309 BOOL m_fConditional;
6310 ClrDebugState *m_pClrDebugState;
6311 ClrDebugState m_oldClrDebugState;
6312 ContractStackRecord m_ContractStackRecord;
6313 #endif // _DEBUG_IMPL
6315 #endif //!DACCESS_COMPILE
6317 class CoopTransitionHolder
6322 CoopTransitionHolder(Thread * pThread)
6323 : m_pFrame(pThread->m_pFrame)
6325 LIMITED_METHOD_CONTRACT;
6328 ~CoopTransitionHolder()
6330 WRAPPER_NO_CONTRACT;
6331 if (m_pFrame != NULL)
6332 COMPlusCooperativeTransitionHandler(m_pFrame);
6335 void SuppressRelease()
6337 LIMITED_METHOD_CONTRACT;
6338 // FRAME_TOP and NULL must be distinct values.
6339 // static_assert_no_msg(FRAME_TOP_VALUE != NULL);
6344 // --------------------------------------------------------------------------------
6345 // GCX macros - see util.hpp
6346 // --------------------------------------------------------------------------------
6350 // Normally, any thread we operate on has a Thread block in its TLS. But there are
6351 // a few special threads we don't normally execute managed code on.
6352 BOOL dbgOnly_IsSpecialEEThread();
6353 void dbgOnly_IdentifySpecialEEThread();
6355 #ifdef USE_CHECKED_OBJECTREFS
6356 #define ASSERT_PROTECTED(objRef) Thread::ObjectRefProtected(objRef)
6358 #define ASSERT_PROTECTED(objRef)
6363 #define ASSERT_PROTECTED(objRef)
6368 #ifdef ENABLE_CONTRACTS_IMPL
6370 #define BEGINFORBIDGC() {if (GetThreadNULLOk() != NULL) GetThreadNULLOk()->BeginForbidGC(__FILE__, __LINE__);}
6371 #define ENDFORBIDGC() {if (GetThreadNULLOk() != NULL) GetThreadNULLOk()->EndForbidGC();}
6373 class FCallGCCanTrigger
6376 static DEBUG_NOINLINE void Enter()
6379 STATIC_CONTRACT_GC_TRIGGERS;
6380 Thread * pThread = GetThreadNULLOk();
6381 if (pThread != NULL)
6387 static DEBUG_NOINLINE void Enter(Thread* pThread)
6390 STATIC_CONTRACT_GC_TRIGGERS;
6391 pThread->EndForbidGC();
6394 static DEBUG_NOINLINE void Leave(const char *szFunction, const char *szFile, int lineNum)
6397 Thread * pThread = GetThreadNULLOk();
6398 if (pThread != NULL)
6400 Leave(pThread, szFunction, szFile, lineNum);
6404 static DEBUG_NOINLINE void Leave(Thread* pThread, const char *szFunction, const char *szFile, int lineNum)
6407 pThread->BeginForbidGC(szFile, lineNum);
6411 #define TRIGGERSGC_NOSTOMP() do { \
6412 ANNOTATION_GC_TRIGGERS; \
6413 Thread* curThread = GetThread(); \
6414 if(curThread->GCNoTrigger()) \
6416 CONTRACT_ASSERT("TRIGGERSGC found in a GC_NOTRIGGER region.", Contract::GC_NoTrigger, Contract::GC_Mask, __FUNCTION__, __FILE__, __LINE__); \
6421 #define TRIGGERSGC() do { \
6422 TRIGGERSGC_NOSTOMP(); \
6423 Thread::TriggersGC(GetThread()); \
6426 #else // ENABLE_CONTRACTS_IMPL
6428 #define BEGINFORBIDGC()
6429 #define ENDFORBIDGC()
6430 #define TRIGGERSGC_NOSTOMP() ANNOTATION_GC_TRIGGERS
6431 #define TRIGGERSGC() ANNOTATION_GC_TRIGGERS
6433 #endif // ENABLE_CONTRACTS_IMPL
6435 inline BOOL GC_ON_TRANSITIONS(BOOL val) {
6436 WRAPPER_NO_CONTRACT;
6438 Thread* thread = GetThread();
6441 BOOL ret = thread->m_GCOnTransitionsOK;
6442 thread->m_GCOnTransitionsOK = val;
6450 inline void ENABLESTRESSHEAP() {
6451 WRAPPER_NO_CONTRACT;
6452 Thread * thread = GetThreadNULLOk();
6454 thread->EnableStressHeap();
6458 void CleanStackForFastGCStress ();
6459 #define CLEANSTACKFORFASTGCSTRESS() \
6460 if (g_pConfig->GetGCStressLevel() && g_pConfig->FastGCStressLevel() > 1) { \
6461 CleanStackForFastGCStress (); \
6465 #define CLEANSTACKFORFASTGCSTRESS()
6470 // Holder for incrementing the ForbidGCLoaderUse counter.
6471 class GCForbidLoaderUseHolder
6474 GCForbidLoaderUseHolder()
6476 WRAPPER_NO_CONTRACT;
6477 ClrFlsIncrementValue(TlsIdx_ForbidGCLoaderUseCount, 1);
6480 ~GCForbidLoaderUseHolder()
6482 WRAPPER_NO_CONTRACT;
6483 ClrFlsIncrementValue(TlsIdx_ForbidGCLoaderUseCount, -1);
6489 // Declaring this macro turns off the GC_TRIGGERS/THROWS/INJECT_FAULT contract in LoadTypeHandle.
6490 // If you do this, you must restrict your use of the loader only to retrieve TypeHandles
6491 // for types that have already been loaded and resolved. If you fail to observe this restriction, you will
6492 // reach a GC_TRIGGERS point somewhere in the loader and assert. If you're lucky, that is.
6493 // (If you're not lucky, you will introduce a GC hole.)
6495 // The main user of this workaround is the GC stack crawl. It must parse signatures and retrieve
6496 // type handles for valuetypes in method parameters. Some other uses have creeped into the codebase -
6497 // some justified, others not.
6499 // ENABLE_FORBID_GC_LOADER is *not* the same as using tokenNotToLoad to suppress loading.
6500 // You should use tokenNotToLoad in preference to ENABLE_FORBID. ENABLE_FORBID is a fragile
6501 // workaround and places enormous responsibilities on the caller. The only reason it exists at all
6502 // is that the GC stack crawl simply cannot tolerate exceptions or new GC's - that's an immovable
6503 // rock we're faced with.
6505 // The key differences are:
6507 // ENABLE_FORBID tokenNotToLoad
6508 // -------------------------------------------- ------------------------------------------------------
6509 // caller must guarantee the type is already caller does not have to guarantee the type
6510 // loaded - otherwise, we will crash badly. is already loaded.
6512 // loader will not throw, trigger gc or OOM loader may throw, trigger GC or OOM.
6516 #ifdef ENABLE_CONTRACTS_IMPL
6517 #define ENABLE_FORBID_GC_LOADER_USE_IN_THIS_SCOPE() GCForbidLoaderUseHolder __gcfluh; \
6518 CANNOTTHROWCOMPLUSEXCEPTION(); \
6521 #else // _DEBUG_IMPL
6522 #define ENABLE_FORBID_GC_LOADER_USE_IN_THIS_SCOPE() ;
6523 #endif // _DEBUG_IMPL
6524 // This macro lets us define a conditional CONTRACT for the GC_TRIGGERS behavior.
6525 // This is for the benefit of a select group of callers that use the loader
6526 // in ForbidGC mode strictly to retrieve existing TypeHandles. The reason
6527 // we use a threadstate rather than an extra parameter is that these annoying
6528 // callers call the loader through intermediaries (MetaSig) and it proved to be too
6529 // cumbersome to pass this state down through all those callers.
6531 // Don't make GC_TRIGGERS conditional just because your function ends up calling
6532 // LoadTypeHandle indirectly. We don't want to proliferate conditonal contracts more
6533 // than necessary so declare such functions as GC_TRIGGERS until the need
6534 // for the conditional contract is actually proven through code inspection or
6536 #if defined(DACCESS_COMPILE)
6538 // Disable (<non-zero constant> || <expression>) is always a non-zero constant.
6539 // <expression> is never evaluated and might have side effects, because
6540 // FORBIDGC_LOADER_USE_ENABLED is used in that pattern and additionally the rule
6541 // has little value.
6543 #pragma warning(disable:6286)
6545 #define FORBIDGC_LOADER_USE_ENABLED() true
6547 #else // DACCESS_COMPILE
6548 #if defined (_DEBUG_IMPL) || defined(_PREFAST_)
6549 #ifndef DACCESS_COMPILE
6550 #define FORBIDGC_LOADER_USE_ENABLED() (ClrFlsGetValue(TlsIdx_ForbidGCLoaderUseCount))
6552 #define FORBIDGC_LOADER_USE_ENABLED() TRUE
6554 #else // _DEBUG_IMPL
6556 // If you got an error about FORBIDGC_LOADER_USE_ENABLED being undefined, it's because you tried
6557 // to use this predicate in a free build outside of a CONTRACT or ASSERT.
6559 #define FORBIDGC_LOADER_USE_ENABLED() (sizeof(YouCannotUseThisHere) != 0)
6560 #endif // _DEBUG_IMPL
6561 #endif // DACCESS_COMPILE
6563 // We have numerous places where we start up a managed thread. This includes several places in the
6564 // ThreadPool, the 'new Thread(...).Start()' case, and the Finalizer. Try to factor the code so our
6565 // base exception handling behavior is consistent across those places. The resulting code is convoluted,
6566 // but it's better than the prior situation of each thread being on a different plan.
6568 // If you add a new kind of managed thread (i.e. thread proc) to the system, you must:
6570 // 1) Call HasStarted() before calling any ManagedThreadBase_* routine.
6571 // 2) Define a ManagedThreadBase_* routine for your scenario and declare it below.
6572 // 3) Always perform any AD transitions through the ManagedThreadBase_* mechanism.
6573 // 4) Allow the ManagedThreadBase_* mechanism to perform all your exception handling, including
6574 // dispatching of unhandled exception events, deciding what to swallow, etc.
6575 // 5) If you must separate your base thread proc behavior from your AD transitioning behavior,
6576 // define a second ManagedThreadADCall_* helper and declare it below.
6577 // 6) Never decide this is too much work and that you will roll your own thread proc code.
6579 // intentionally opaque.
6580 struct ManagedThreadCallState;
6582 struct ManagedThreadBase
6584 // The 'new Thread(...).Start()' case from COMSynchronizable kickoff thread worker
6585 static void KickOff(ADCallBackFcnType pTarget,
6588 // The IOCompletion, QueueUserWorkItem, AddTimer, RegisterWaitForSingleObject cases in
6590 static void ThreadPool(ADCallBackFcnType pTarget, LPVOID args);
6592 // The Finalizer thread uses this path
6593 static void FinalizerBase(ADCallBackFcnType pTarget);
6597 // DeadlockAwareLock is a base for building deadlock-aware locks.
6598 // Note that DeadlockAwareLock only works if ALL locks involved in the deadlock are deadlock aware.
6600 class DeadlockAwareLock
6603 VolatilePtr<Thread> m_pHoldingThread;
6605 const char *m_description;
6609 DeadlockAwareLock(const char *description = NULL);
6610 ~DeadlockAwareLock();
6612 // Test for deadlock
6613 BOOL CanEnterLock();
6615 // Call BeginEnterLock before attempting to acquire the lock
6616 BOOL TryBeginEnterLock(); // returns FALSE if deadlock
6617 void BeginEnterLock(); // Asserts if deadlock
6619 // Call EndEnterLock after acquiring the lock
6620 void EndEnterLock();
6622 // Call LeaveLock after releasing the lock
6625 const char *GetDescription();
6628 CHECK CheckDeadlock(Thread *pThread);
6630 static void ReleaseBlockingLock()
6632 Thread *pThread = GetThread();
6634 pThread->m_pBlockingLock = NULL;
6637 typedef StateHolder<DoNothing,DeadlockAwareLock::ReleaseBlockingLock> BlockingLockHolder;
6640 inline void SetTypeHandleOnThreadForAlloc(TypeHandle th)
6642 // We are doing this unconditionally even though th is only used by ETW events in GC. When the ETW
6643 // event is not enabled we still need to set it because it may not be enabled here but by the
6644 // time we are checking in GC, the event is enabled - we don't want GC to read a random value
6645 // from before in this case.
6646 GetThread()->SetTHAllocContextObj(th);
6649 #endif // CROSSGEN_COMPILE
6652 // users of OFFSETOF__TLS__tls_CurrentThread macro expect the offset of these variables wrt to _tls_start to be stable.
6653 // Defining each of the following thread local variable separately without the struct causes the offsets to change in
6654 // different flavors of build. Eg. in chk build the offset of m_pThread is 0x4 while in ret build it becomes 0x8 as 0x4 is
6655 // occupied by m_pAddDomain. Packing all thread local variables in a struct and making struct instance to be thread local
6656 // ensures that the offsets of the variables are stable in all build flavors.
6657 struct ThreadLocalInfo
6660 AppDomain* m_pAppDomain; // This field is read only by the SOS plugin to get the AppDomain
6661 void** m_EETlsData; // ClrTlsInfo::data
6664 class ThreadStateHolder
6667 ThreadStateHolder (BOOL fNeed, DWORD state)
6669 LIMITED_METHOD_CONTRACT;
6670 _ASSERTE (GetThread());
6674 ~ThreadStateHolder ()
6676 LIMITED_METHOD_CONTRACT;
6680 Thread *pThread = GetThread();
6682 FastInterlockAnd((ULONG *) &pThread->m_State, ~m_state);
6690 // Sets an NC threadstate if not already set, and restores the old state
6691 // of that bit upon destruction
6693 // fNeed > 0, make sure state is set, restored in destructor
6694 // fNeed = 0, no change
6695 // fNeed < 0, make sure state is reset, restored in destructor
6697 class ThreadStateNCStackHolder
6700 ThreadStateNCStackHolder (BOOL fNeed, Thread::ThreadStateNoConcurrency state)
6702 LIMITED_METHOD_CONTRACT;
6704 _ASSERTE (GetThread());
6710 Thread *pThread = GetThread();
6715 // if the state is set, reset it
6716 if (pThread->HasThreadStateNC(state))
6718 pThread->ResetThreadStateNC(m_state);
6727 // if the state is already set then no change is
6728 // necessary during the back out
6729 if(pThread->HasThreadStateNC(state))
6735 pThread->SetThreadStateNC(state);
6741 ~ThreadStateNCStackHolder()
6743 LIMITED_METHOD_CONTRACT;
6747 Thread *pThread = GetThread();
6752 pThread->SetThreadStateNC(m_state); // set it
6756 pThread->ResetThreadStateNC(m_state);
6763 Thread::ThreadStateNoConcurrency m_state;
6766 BOOL Debug_IsLockedViaThreadSuspension();
6768 #ifdef FEATURE_WRITEBARRIER_COPY
6770 BYTE* GetWriteBarrierCodeLocation(VOID* barrier);
6771 BOOL IsIPInWriteBarrierCodeCopy(PCODE controlPc);
6772 PCODE AdjustWriteBarrierIP(PCODE controlPc);
6774 #else // FEATURE_WRITEBARRIER_COPY
6776 #define GetWriteBarrierCodeLocation(barrier) ((BYTE*)(barrier))
6778 #endif // FEATURE_WRITEBARRIER_COPY
6780 #endif //__threads_h__