1 // Licensed to the .NET Foundation under one or more agreements.
2 // The .NET Foundation licenses this file to you under the MIT license.
3 // See the LICENSE file in the project root for more information.
10 // Currently represents a logical and physical COM+ thread. Later, these concepts will be separated.
14 // #RuntimeThreadLocals.
16 // Windows has a feature call Thread Local Storage (TLS, which is data that the OS allocates every time it
17 // creates a thread). Programs access this storage by using the Windows TlsAlloc, TlsGetValue, TlsSetValue
18 // APIs (see http://msdn2.microsoft.com/en-us/library/ms686812.aspx). The runtime allocates two such slots
21 // * A slot that holds a pointer to the runtime thread object code:Thread (see code:#ThreadClass). The
22 // runtime has a special optimized version of this helper code:GetThread (we actually emit assembly
23 // code on the fly so it is as fast as possible). These code:Thread objects live in the
26 // * The other slot holds the current code:AppDomain (a managed equivalent of a process). The
27 // runtime thread object also has a pointer to the thread's AppDomain (see code:Thread.m_pDomain,
28 // so in theory this TLS is redundant. It is there for speed (one less pointer indirection). The
29 // optimized helper for this is code:GetAppDomain (we emit assembly code on the fly for this one
32 // Initially these TLS slots are empty (when the OS starts up), however before we run managed code, we must
33 // set them properly so that managed code knows what AppDomain it is in and we can suspend threads properly
34 // for a GC (see code:#SuspendingTheRuntime)
36 // #SuspendingTheRuntime
38 // One of the primary differences between runtime code (managed code), and traditional (unmanaged code) is
39 // the existence of the GC heap (see file:gc.cpp#Overview). For the GC to do its job, it must be able to
40 // traverse all references to the GC heap, including ones on the stack of every thread, as well as any in
41 // hardware registers. While it is simple to state this requirement, it has long reaching effects, because
42 // properly accounting for all GC heap references ALL the time turns out to be quite hard. When we make a
43 // bookkeeping mistake, a GC reference is not reported at GC time, which means it will not be updated when the
44 // GC happens. Since memory in the GC heap can move, this can cause the pointer to point at 'random' places
45 // in the GC heap, causing data corruption. This is a 'GC Hole', and is very bad. We have special modes (see
46 // code:EEConfig.GetGCStressLevel) called GCStress to help find such issues.
48 // In order to find all GC references on the stacks we need insure that no thread is manipulating a GC
49 // reference at the time of the scan. This is the job of code:Thread.SuspendRuntime. Logically it suspends
50 // every thread in the process. Unfortunately it can not literally simply call the OS SuspendThread API on
51 // all threads. The reason is that the other threads MIGHT hold important locks (for example there is a lock
52 // that is taken when unmanaged heap memory is requested, or when a DLL is loaded). In general process
53 // global structures in the OS will be protected by locks, and if you suspend a thread it might hold that
54 // lock. If you happen to need that OS service (eg you might need to allocated unmanaged memory), then
55 // deadlock will occur (as you wait on the suspended thread, that never wakes up).
57 // Luckily, we don't need to actually suspend the threads, we just need to insure that all GC references on
58 // the stack are stable. This is where the concept of cooperative mode and preemptive mode (a bad name) come
63 // The runtime keeps a table of all threads that have ever run managed code in the code:ThreadStore table.
64 // The ThreadStore table holds a list of Thread objects (see code:#ThreadClass). This object holds all
65 // infomation about managed threads. Cooperative mode is defined as the mode the thread is in when the field
66 // code:Thread.m_fPreemptiveGCDisabled is non-zero. When this field is zero the thread is said to be in
67 // Preemptive mode (named because if you preempt the thread in this mode, it is guaranteed to be in a place
68 // where a GC can occur).
70 // When a thread is in cooperative mode, it is basically saying that it is potentially modifying GC
71 // references, and so the runtime must Cooperate with it to get to a 'GC Safe' location where the GC
72 // references can be enumerated. This is the mode that a thread is in MOST times when it is running managed
73 // code (in fact if the EIP is in JIT compiled code, there is only one place where you are NOT in cooperative
74 // mode (Inlined PINVOKE transition code)). Conversely, any time non-runtime unmanaged code is running, the
75 // thread MUST NOT be in cooperative mode (you risk deadlock otherwise). Only code in mscorwks.dll might be
76 // running in either cooperative or preemptive mode.
78 // It is easier to describe the invariant associated with being in Preemptive mode. When the thread is in
79 // preemptive mode (when code:Thread.m_fPreemptiveGCDisabled is zero), the thread guarantees two things
81 // * That it not currently running code that manipulates GC references.
82 // * That it has set the code:Thread.m_pFrame pointer in the code:Thread to be a subclass of the class
83 // code:Frame which marks the location on the stack where the last managed method frame is. This
84 // allows the GC to start crawling the stack from there (essentially skip over the unmanaged frames).
85 // * That the thread will not reenter managed code if the global variable code:g_TrapReturningThreads is
86 // set (it will call code:Thread.RareDisablePreemptiveGC first which will block if a a suspension is
89 // The basic idea is that the suspension logic in code:Thread.SuspendRuntime first sets the global variable
90 // code:g_TrapReturningThreads and then checks if each thread in the ThreadStore is in Cooperative mode. If a
91 // thread is NOT in cooperative mode, the logic simply skips the thread, because it knows that the thread
92 // will stop itself before reentering managed code (because code:g_TrapReturningThreads is set). This avoids
93 // the deadlock problem mentioned earlier, because threads that are running unmanaged code are allowed to
94 // run. Enumeration of GC references starts at the first managed frame (pointed at by code:Thread.m_pFrame).
96 // When a thread is in cooperative mode, it means that GC references might be being manipulated. There are
97 // two important possibilities
99 // * The CPU is running JIT compiled code
100 // * The CPU is running code elsewhere (which should only be in mscorwks.dll, because everywhere else a
101 // transition to preemptive mode should have happened first)
103 // * #PartiallyInteruptibleCode
104 // * #FullyInteruptibleCode
106 // If the Instruction pointer (x86/x64: EIP, ARM: R15/PC) is in JIT compiled code, we can detect this because we have tables that
107 // map the ranges of every method back to their code:MethodDesc (this the code:ICodeManager interface). In
108 // addition to knowing the method, these tables also point at 'GCInfo' that tell for that method which stack
109 // locations and which registers hold GC references at any particular instruction pointer. If the method is
110 // what is called FullyInterruptible, then we have information for any possible instruction pointer in the
111 // method and we can simply stop the thread (however we have to do this carefully TODO explain).
113 // However for most methods, we only keep GC information for paticular EIP's, in particular we keep track of
114 // GC reference liveness only at call sites. Thus not every location is 'GC Safe' (that is we can enumerate
115 // all references, but must be 'driven' to a GC safe location).
117 // We drive threads to GC safe locations by hijacking. This is a term for updating the return address on the
118 // stack so that we gain control when a method returns. If we find that we are in JITTed code but NOT at a GC
119 // safe location, then we find the return address for the method and modfiy it to cause the runtime to stop.
120 // We then let the method run. Hopefully the method quickly returns, and hits our hijack, and we are now at a
121 // GC-safe location (all call sites are GC-safe). If not we repeat the procedure (possibly moving the
122 // hijack). At some point a method returns, and we get control. For methods that have loops that don't make
123 // calls, we are forced to make the method FullyInterruptible, so we can be sure to stop the mehod.
125 // This leaves only the case where we are in cooperative modes, but not in JIT compiled code (we should be in
126 // clr.dll). In this case we simply let the thread run. The idea is that code in clr.dll makes the
127 // promise that it will not do ANYTHING that will block (which includes taking a lock), while in cooperative
128 // mode, or do anything that might take a long time without polling to see if a GC is needed. Thus this code
129 // 'cooperates' to insure that GCs can happen in a timely fashion.
131 // If you need to switch the GC mode of the current thread, look for the GCX_COOP() and GCX_PREEMP() macros.
134 #ifndef __threads_h__
135 #define __threads_h__
139 #include "eventstore.hpp"
143 #include "gcheaputilities.h"
144 #include "gchandleutilities.h"
145 #include "gcinfotypes.h"
155 class ThreadBaseObject;
156 class AppDomainStack;
157 class LoadLevelLimiter;
159 class DeadlockAwareLock;
160 struct HelperMethodFrameCallerList;
161 class ThreadLocalIBCInfo;
163 class DebuggerPatchSkip;
164 class FaultingExceptionFrame;
165 class ContextTransitionFrame;
166 enum BinderMethodID : int;
169 class PendingTypeLoadHolder;
171 struct ThreadLocalBlock;
172 typedef DPTR(struct ThreadLocalBlock) PTR_ThreadLocalBlock;
173 typedef DPTR(PTR_ThreadLocalBlock) PTR_PTR_ThreadLocalBlock;
175 typedef void(*ADCallBackFcnType)(LPVOID);
177 #include "stackwalktypes.h"
179 #include "stackingallocator.h"
183 #include "threaddebugblockinginfo.h"
184 #include "interoputil.h"
185 #include "eventtrace.h"
187 #ifdef FEATURE_PERFTRACING
188 class EventPipeBufferList;
189 #endif // FEATURE_PERFTRACING
191 struct TLMTableEntry;
193 typedef DPTR(struct TLMTableEntry) PTR_TLMTableEntry;
194 typedef DPTR(struct ThreadLocalModule) PTR_ThreadLocalModule;
196 class ThreadStaticHandleTable;
197 struct ThreadLocalModule;
200 struct ThreadLocalBlock
202 friend class ClrDataAccess;
205 PTR_TLMTableEntry m_pTLMTable; // Table of ThreadLocalModules
206 SIZE_T m_TLMTableSize; // Current size of table
207 SpinLock m_TLMTableLock; // Spinlock used to synchronize growing the table and freeing TLM by other threads
209 // Each ThreadLocalBlock has its own ThreadStaticHandleTable. The ThreadStaticHandleTable works
210 // by allocating Object arrays on the GC heap and keeping them alive with pinning handles.
212 // We use the ThreadStaticHandleTable to allocate space for GC thread statics. A GC thread
213 // static is thread static that is either a reference type or a value type whose layout
214 // contains a pointer to a reference type.
216 ThreadStaticHandleTable * m_pThreadStaticHandleTable;
218 // Need to keep a list of the pinning handles we've created
219 // so they can be cleaned up when the thread dies
220 ObjectHandleList m_PinningHandleList;
224 #ifndef DACCESS_COMPILE
225 void AddPinningHandleToList(OBJECTHANDLE oh);
226 void FreePinningHandles();
227 void AllocateThreadStaticHandles(Module * pModule, ThreadLocalModule * pThreadLocalModule);
228 OBJECTHANDLE AllocateStaticFieldObjRefPtrs(int nRequested, OBJECTHANDLE* ppLazyAllocate = NULL);
229 void InitThreadStaticHandleTable();
231 void AllocateThreadStaticBoxes(MethodTable* pMT);
234 public: // used by code generators
235 static SIZE_T GetOffsetOfModuleSlotsPointer() { return offsetof(ThreadLocalBlock, m_pTLMTable); }
239 #ifndef DACCESS_COMPILE
241 : m_pTLMTable(NULL), m_TLMTableSize(0), m_pThreadStaticHandleTable(NULL)
243 m_TLMTableLock.Init(LOCK_TYPE_DEFAULT);
246 void FreeTLM(SIZE_T i, BOOL isThreadShuttingDown);
250 void EnsureModuleIndex(ModuleIndex index);
254 void SetModuleSlot(ModuleIndex index, PTR_ThreadLocalModule pLocalModule);
256 PTR_ThreadLocalModule GetTLMIfExists(ModuleIndex index);
257 PTR_ThreadLocalModule GetTLMIfExists(MethodTable* pMT);
259 #ifdef DACCESS_COMPILE
260 void EnumMemoryRegions(CLRDataEnumMemoryFlags flags);
264 #ifdef CROSSGEN_COMPILE
266 #include "asmconstants.h"
270 friend class ThreadStatics;
272 ThreadLocalBlock m_ThreadLocalBlock;
275 BOOL IsAddressInStack (PTR_VOID addr) const { return TRUE; }
276 static BOOL IsAddressInCurrentStack (PTR_VOID addr) { return TRUE; }
278 StackingAllocator m_MarshalAlloc;
281 LoadLevelLimiter *m_pLoadLimiter;
284 LoadLevelLimiter *GetLoadLevelLimiter()
286 LIMITED_METHOD_CONTRACT;
287 return m_pLoadLimiter;
290 void SetLoadLevelLimiter(LoadLevelLimiter *limiter)
292 LIMITED_METHOD_CONTRACT;
293 m_pLoadLimiter = limiter;
296 PTR_Frame GetFrame() { return NULL; }
297 void SetFrame(Frame *pFrame) { }
298 DWORD CatchAtSafePoint() { return 0; }
299 DWORD CatchAtSafePointOpportunistic() { return 0; }
301 static void ObjectRefProtected(const OBJECTREF* ref) { }
302 static void ObjectRefNew(const OBJECTREF* ref) { }
304 void EnablePreemptiveGC() { }
305 void DisablePreemptiveGC() { }
307 inline void IncLockCount() { }
308 inline void DecLockCount() { }
310 static LPVOID GetStaticFieldAddress(FieldDesc *pFD) { return NULL; }
312 PTR_AppDomain GetDomain() { return ::GetAppDomain(); }
314 DWORD GetThreadId() { return 0; }
316 inline DWORD GetOverridesCount() { return 0; }
317 inline BOOL CheckThreadWideSpecialFlag(DWORD flags) { return 0; }
319 BOOL PreemptiveGCDisabled() { return false; }
320 void PulseGCMode() { }
322 OBJECTREF GetThrowable() { return NULL; }
324 OBJECTREF LastThrownObject() { return NULL; }
326 static BOOL Debug_AllowCallout() { return TRUE; }
328 static void IncForbidSuspendThread() { }
329 static void DecForbidSuspendThread() { }
331 typedef StateHolder<Thread::IncForbidSuspendThread, Thread::DecForbidSuspendThread> ForbidSuspendThreadHolder;
333 static BYTE GetOffsetOfCurrentFrame()
335 LIMITED_METHOD_CONTRACT;
336 size_t ofs = Thread_m_pFrame;
337 _ASSERTE(FitsInI1(ofs));
341 static BYTE GetOffsetOfGCFlag()
343 LIMITED_METHOD_CONTRACT;
344 size_t ofs = Thread_m_fPreemptiveGCDisabled;
345 _ASSERTE(FitsInI1(ofs));
349 void SetLoadingFile(DomainFile *pFile)
353 typedef Holder<Thread *, DoNothing, DoNothing> LoadingFileHolder;
359 BOOL HasThreadState(ThreadState ts)
361 LIMITED_METHOD_CONTRACT;
362 return ((DWORD)m_State & ts);
365 BOOL HasThreadStateOpportunistic(ThreadState ts)
367 LIMITED_METHOD_CONTRACT;
368 return m_State.LoadWithoutBarrier() & ts;
371 Volatile<ThreadState> m_State;
373 enum ThreadStateNoConcurrency
375 TSNC_OwnsSpinLock = 0x00000400, // The thread owns a spinlock.
377 TSNC_DisableOleaut32Check = 0x00040000, // Disable oleaut32 delay load check. Oleaut32 has
380 TSNC_LoadsTypeViolation = 0x40000000, // Use by type loader to break deadlocks caused by type load level ordering violations
383 ThreadStateNoConcurrency m_StateNC;
385 void SetThreadStateNC(ThreadStateNoConcurrency tsnc)
387 LIMITED_METHOD_CONTRACT;
388 m_StateNC = (ThreadStateNoConcurrency)((DWORD)m_StateNC | tsnc);
391 void ResetThreadStateNC(ThreadStateNoConcurrency tsnc)
393 LIMITED_METHOD_CONTRACT;
394 m_StateNC = (ThreadStateNoConcurrency)((DWORD)m_StateNC & ~tsnc);
397 BOOL HasThreadStateNC(ThreadStateNoConcurrency tsnc)
399 LIMITED_METHOD_DAC_CONTRACT;
400 return ((DWORD)m_StateNC & tsnc);
403 PendingTypeLoadHolder* m_pPendingTypeLoad;
405 #ifndef DACCESS_COMPILE
406 PendingTypeLoadHolder* GetPendingTypeLoad()
408 LIMITED_METHOD_CONTRACT;
409 return m_pPendingTypeLoad;
412 void SetPendingTypeLoad(PendingTypeLoadHolder* pPendingTypeLoad)
414 LIMITED_METHOD_CONTRACT;
415 m_pPendingTypeLoad = pPendingTypeLoad;
419 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
420 enum ApartmentState { AS_Unknown };
423 #if defined(FEATURE_COMINTEROP) && defined(MDA_SUPPORTED)
424 void RegisterRCW(RCW *pRCW)
428 BOOL RegisterRCWNoThrow(RCW *pRCW)
433 RCW *UnregisterRCW(INDEBUG(SyncBlock *pSB))
442 inline void DoReleaseCheckpoint(void *checkPointMarker)
445 GetThread()->m_MarshalAlloc.Collapse(checkPointMarker);
448 // CheckPointHolder : Back out to a checkpoint on the thread allocator.
449 typedef Holder<void*, DoNothing,DoReleaseCheckpoint> CheckPointHolder;
451 class AVInRuntimeImplOkayHolder
454 AVInRuntimeImplOkayHolder()
456 LIMITED_METHOD_CONTRACT;
458 AVInRuntimeImplOkayHolder(Thread * pThread)
460 LIMITED_METHOD_CONTRACT;
462 ~AVInRuntimeImplOkayHolder()
464 LIMITED_METHOD_CONTRACT;
468 inline BOOL dbgOnly_IsSpecialEEThread() { return FALSE; }
470 #define INCTHREADLOCKCOUNT() { }
471 #define DECTHREADLOCKCOUNT() { }
472 #define INCTHREADLOCKCOUNTTHREAD(thread) { }
473 #define DECTHREADLOCKCOUNTTHREAD(thread) { }
475 #define FORBIDGC_LOADER_USE_ENABLED() false
476 #define ENABLE_FORBID_GC_LOADER_USE_IN_THIS_SCOPE() ;
478 #define BEGIN_FORBID_TYPELOAD()
479 #define END_FORBID_TYPELOAD()
480 #define TRIGGERS_TYPELOAD()
482 #define TRIGGERSGC() ANNOTATION_GC_TRIGGERS
484 inline void CommonTripThread() { }
486 class DeadlockAwareLock
489 DeadlockAwareLock(const char *description = NULL) { }
490 ~DeadlockAwareLock() { }
492 BOOL CanEnterLock() { return TRUE; }
494 BOOL TryBeginEnterLock() { return TRUE; }
495 void BeginEnterLock() { }
497 void EndEnterLock() { }
502 typedef StateHolder<DoNothing,DoNothing> BlockingLockHolder;
505 // Do not include threads.inl
508 typedef Thread::ForbidSuspendThreadHolder ForbidSuspendThreadHolder;
510 #else // CROSSGEN_COMPILE
513 #include "armsinglestepper.h"
516 #if !defined(PLATFORM_SUPPORTS_SAFE_THREADSUSPEND)
517 // DISABLE_THREADSUSPEND controls whether Thread::SuspendThread will be used at all.
518 // This API is dangerous on non-Windows platforms, as it can lead to deadlocks,
519 // due to low level OS resources that the PAL is not aware of, or due to the fact that
520 // PAL-unaware code in the process may hold onto some OS resources.
521 #define DISABLE_THREADSUSPEND
524 // NT thread priorities range from -15 to +15.
525 #define INVALID_THREAD_PRIORITY ((DWORD)0x80000000)
527 // For a fiber which switched out, we set its OSID to a special number
528 // Note: there's a copy of this macro in strike.cpp
529 #define SWITCHED_OUT_FIBER_OSID 0xbaadf00d;
532 // A thread doesn't recieve its id until fully constructed.
533 #define UNINITIALIZED_THREADID 0xbaadf00d
536 // Capture all the synchronization requests, for debugging purposes
537 #if defined(_DEBUG) && defined(TRACK_SYNC)
539 // Each thread has a stack that tracks all enter and leave requests
542 virtual ~Dbg_TrackSync() = default;
544 virtual void EnterSync (UINT_PTR caller, void *pAwareLock) = 0;
545 virtual void LeaveSync (UINT_PTR caller, void *pAwareLock) = 0;
548 EXTERN_C void EnterSyncHelper (UINT_PTR caller, void *pAwareLock);
549 EXTERN_C void LeaveSyncHelper (UINT_PTR caller, void *pAwareLock);
553 //***************************************************************************
554 #ifdef FEATURE_HIJACK
556 // Used to capture information about the state of execution of a *SUSPENDED* thread.
557 struct ExecutionState;
559 #ifndef PLATFORM_UNIX
560 // This is the type of the start function of a redirected thread pulled from
561 // a HandledJITCase during runtime suspension
562 typedef void (__stdcall *PFN_REDIRECTTARGET)();
564 // Describes the weird argument sets during hijacking
566 #endif // !PLATFORM_UNIX
568 #endif // FEATURE_HIJACK
570 //***************************************************************************
571 #ifdef ENABLE_CONTRACTS_IMPL
572 inline Thread* GetThreadNULLOk()
574 LIMITED_METHOD_CONTRACT;
576 BEGIN_GETTHREAD_ALLOWED_IN_NO_THROW_REGION;
577 pThread = GetThread();
578 END_GETTHREAD_ALLOWED_IN_NO_THROW_REGION;
582 #define GetThreadNULLOk() GetThread()
585 // manifest constant for waiting in the exposed classlibs
586 const INT32 INFINITE_TIMEOUT = -1;
588 /***************************************************************************/
589 // Public enum shared between thread and threadpool
590 // These are two kinds of threadpool thread that the threadpool mgr needs
592 enum ThreadpoolThreadType
595 CompletionPortThread,
599 //***************************************************************************
602 // Thread* GetThread() - returns current Thread
603 // Thread* SetupThread() - creates new Thread.
604 // Thread* SetupUnstartedThread() - creates new unstarted Thread which
605 // (obviously) isn't in a TLS.
606 // void DestroyThread() - the underlying logical thread is going
608 // void DetachThread() - the underlying logical thread is going
609 // away but we don't want to destroy it yet.
611 // Public functions for ASM code generators
613 // Thread* __stdcall CreateThreadBlockThrow() - creates new Thread on reverse p-invoke
615 // Public functions for one-time init/cleanup
617 // void InitThreadManager() - onetime init
618 // void TerminateThreadManager() - onetime cleanup
620 // Public functions for taking control of a thread at a safe point
622 // VOID OnHijackTripThread() - we've hijacked a JIT method
623 // VOID OnHijackFPTripThread() - we've hijacked a JIT method,
624 // and need to save the x87 FP stack.
626 //***************************************************************************
629 //***************************************************************************
631 //***************************************************************************
633 //---------------------------------------------------------------------------
635 //---------------------------------------------------------------------------
636 Thread* SetupThread(BOOL fInternal);
637 inline Thread* SetupThread()
640 return SetupThread(FALSE);
642 // A host can deny a thread entering runtime by returning a NULL IHostTask.
643 // But we do want threads used by threadpool.
644 inline Thread* SetupInternalThread()
647 return SetupThread(TRUE);
649 Thread* SetupThreadNoThrow(HRESULT *phresult = NULL);
650 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
651 Thread* SetupUnstartedThread(BOOL bRequiresTSL=TRUE);
652 void DestroyThread(Thread *th);
654 DWORD GetRuntimeId();
656 EXTERN_C Thread* WINAPI CreateThreadBlockThrow();
658 //---------------------------------------------------------------------------
659 // One-time initialization. Called during Dll initialization.
660 //---------------------------------------------------------------------------
661 void InitThreadManager();
664 // When we want to take control of a thread at a safe point, the thread will
665 // eventually come back to us in one of the following trip functions:
667 #ifdef FEATURE_HIJACK
669 EXTERN_C void WINAPI OnHijackTripThread();
671 EXTERN_C void WINAPI OnHijackFPTripThread(); // hijacked JIT code is returning an FP value
672 #endif // _TARGET_X86_
674 #endif // FEATURE_HIJACK
676 void CommonTripThread();
678 // When we resume a thread at a new location, to get an exception thrown, we have to
679 // pretend the exception originated elsewhere.
680 EXTERN_C void ThrowControlForThread(
681 #ifdef WIN64EXCEPTIONS
682 FaultingExceptionFrame *pfef
683 #endif // WIN64EXCEPTIONS
686 // RWLock state inside TLS
689 LockEntry *pNext; // next entry
690 LockEntry *pPrev; // prev entry
692 LONG dwLLockID; // owning lock
693 WORD wReaderLevel; // reader nesting level
697 BOOL MatchThreadHandleToOsId ( HANDLE h, DWORD osId );
700 #ifdef FEATURE_COMINTEROP
702 #define RCW_STACK_SIZE 64
709 LIMITED_METHOD_CONTRACT;
710 memset(this, 0, sizeof(RCWStack));
713 inline VOID SetEntry(unsigned int index, RCW* pRCW)
720 PRECONDITION(index < RCW_STACK_SIZE);
721 PRECONDITION(CheckPointer(pRCW, NULL_OK));
725 m_pList[index] = pRCW;
728 inline RCW* GetEntry(unsigned int index)
735 PRECONDITION(index < RCW_STACK_SIZE);
739 RETURN m_pList[index];
742 inline VOID SetNextStack(RCWStack* pStack)
749 PRECONDITION(CheckPointer(pStack));
750 PRECONDITION(m_pNext == NULL);
757 inline RCWStack* GetNextStack()
764 POSTCONDITION(CheckPointer(RETVAL, NULL_OK));
773 RCW* m_pList[RCW_STACK_SIZE];
791 m_iSize = RCW_STACK_SIZE;
792 m_pHead = new RCWStack();
805 RCWStack* pStack = m_pHead;
806 RCWStack* pNextStack = NULL;
810 pNextStack = pStack->GetNextStack();
823 PRECONDITION(CheckPointer(pRCW, NULL_OK));
827 if (!GrowListIfNeeded())
831 if (m_iIndex < RCW_STACK_SIZE)
833 m_pHead->SetEntry(m_iIndex, pRCW);
839 unsigned int count = m_iIndex;
840 RCWStack* pStack = m_pHead;
841 while (count >= RCW_STACK_SIZE)
843 pStack = pStack->GetNextStack();
846 count -= RCW_STACK_SIZE;
849 pStack->SetEntry(count, pRCW);
861 PRECONDITION(m_iIndex > 0);
862 POSTCONDITION(CheckPointer(RETVAL, NULL_OK));
871 if (m_iIndex < RCW_STACK_SIZE)
873 pRCW = m_pHead->GetEntry(m_iIndex);
874 m_pHead->SetEntry(m_iIndex, NULL);
879 unsigned int count = m_iIndex;
880 RCWStack* pStack = m_pHead;
881 while (count >= RCW_STACK_SIZE)
883 pStack = pStack->GetNextStack();
885 count -= RCW_STACK_SIZE;
888 pRCW = pStack->GetEntry(count);
889 pStack->SetEntry(count, NULL);
894 BOOL IsInStack(RCW* pRCW)
901 PRECONDITION(CheckPointer(pRCW));
909 if (m_iIndex <= RCW_STACK_SIZE)
911 for (int i = 0; i < (int)m_iIndex; i++)
913 if (pRCW == m_pHead->GetEntry(i))
921 RCWStack* pStack = m_pHead;
923 while (pStack != NULL)
925 for (int i = 0; (i < RCW_STACK_SIZE) && (totalcount < m_iIndex); i++, totalcount++)
927 if (pRCW == pStack->GetEntry(i))
931 pStack = pStack->GetNextStack();
938 bool GrowListIfNeeded()
945 INJECT_FAULT(COMPlusThrowOM());
946 PRECONDITION(CheckPointer(m_pHead));
950 if (m_iIndex == m_iSize)
952 RCWStack* pStack = m_pHead;
953 RCWStack* pNextStack = NULL;
954 while ( (pNextStack = pStack->GetNextStack()) != NULL)
957 RCWStack* pNewStack = new (nothrow) RCWStack();
958 if (NULL == pNewStack)
961 pStack->SetNextStack(pNewStack);
963 m_iSize += RCW_STACK_SIZE;
969 // Zero-based index to the first free element in the list.
972 // Total size of the list, including all stacks.
975 // Pointer to the first stack.
979 #endif // FEATURE_COMINTEROP
982 typedef DWORD (*AppropriateWaitFunc) (void *args, DWORD timeout, DWORD option);
984 // The Thread class represents a managed thread. This thread could be internal
985 // or external (i.e. it wandered in from outside the runtime). For internal
986 // threads, it could correspond to an exposed System.Thread object or it
987 // could correspond to an internal worker thread of the runtime.
989 // If there's a physical Win32 thread underneath this object (i.e. it isn't an
990 // unstarted System.Thread), then this instance can be found in the TLS
991 // of that physical thread.
993 // FEATURE_MULTIREG_RETURN is set for platforms where a struct return value
994 // [GcInfo v2 only] can be returned in multiple registers
995 // ex: Windows/Unix ARM/ARM64, Unix-AMD64.
998 // UNIX_AMD64_ABI is a specific kind of FEATURE_MULTIREG_RETURN
999 // [GcInfo v1 and v2] specified by SystemV ABI for AMD64
1002 #ifdef FEATURE_HIJACK // Hijack function returning
1003 EXTERN_C void STDCALL OnHijackWorker(HijackArgs * pArgs);
1004 #endif // FEATURE_HIJACK
1006 // This is the code we pass around for Thread.Interrupt, mainly for assertions
1007 #define APC_Code 0xEECEECEE
1009 #ifdef DACCESS_COMPILE
1010 class BaseStackGuard;
1015 // A code:Thread contains all the per-thread information needed by the runtime. You can get at this
1016 // structure throught the and OS TLS slot see code:#RuntimeThreadLocals for more
1017 // Implementing IUnknown would prevent the field (e.g. m_Context) layout from being rearranged (which will need to be fixed in
1018 // "asmconstants.h" for the respective architecture). As it is, ICLRTask derives from IUnknown and would have got IUnknown implemented
1019 // here - so doing this explicitly and maintaining layout sanity should be just fine.
1020 class Thread: public IUnknown
1022 friend struct ThreadQueue; // used to enqueue & dequeue threads onto SyncBlocks
1023 friend class ThreadStore;
1024 friend class ThreadSuspend;
1025 friend class SyncBlock;
1026 friend struct PendingSync;
1027 friend class AppDomain;
1028 friend class ThreadNative;
1029 friend class DeadlockAwareLock;
1031 friend class EEContract;
1033 #ifdef DACCESS_COMPILE
1034 friend class ClrDataAccess;
1035 friend class ClrDataTask;
1038 friend BOOL NTGetThreadContext(Thread *pThread, T_CONTEXT *pContext);
1039 friend BOOL NTSetThreadContext(Thread *pThread, const T_CONTEXT *pContext);
1041 friend void CommonTripThread();
1043 #ifdef FEATURE_HIJACK
1044 // MapWin32FaultToCOMPlusException needs access to Thread::IsAddrOfRedirectFunc()
1045 friend DWORD MapWin32FaultToCOMPlusException(EXCEPTION_RECORD *pExceptionRecord);
1046 friend void STDCALL OnHijackWorker(HijackArgs * pArgs);
1047 #ifdef PLATFORM_UNIX
1048 friend void HandleGCSuspensionForInterruptedThread(CONTEXT *interruptedContext);
1049 #endif // PLATFORM_UNIX
1051 #endif // FEATURE_HIJACK
1053 friend void InitThreadManager();
1054 friend void ThreadBaseObject::SetDelegate(OBJECTREF delegate);
1056 friend void CallFinalizerOnThreadObject(Object *obj);
1058 friend class ContextTransitionFrame; // To set m_dwBeginLockCount
1060 // Debug and Profiler caches ThreadHandle.
1061 friend class Debugger; // void Debugger::ThreadStarted(Thread* pRuntimeThread, BOOL fAttaching);
1062 #if defined(DACCESS_COMPILE)
1063 friend class DacDbiInterfaceImpl; // DacDbiInterfaceImpl::GetThreadHandle(HANDLE * phThread);
1064 #endif // DACCESS_COMPILE
1065 friend class ProfToEEInterfaceImpl; // HRESULT ProfToEEInterfaceImpl::GetHandleFromThread(ThreadID threadId, HANDLE *phThread);
1066 friend class CExecutionEngine;
1068 friend class CheckAsmOffsets;
1070 friend class ExceptionTracker;
1071 friend class ThreadExceptionState;
1073 friend class StackFrameIterator;
1075 friend class ThreadStatics;
1077 VPTR_BASE_CONCRETE_VTABLE_CLASS(Thread)
1080 enum SetThreadStackGuaranteeScope { STSGuarantee_Force, STSGuarantee_OnlyIfEnabled };
1081 static BOOL IsSetThreadStackGuaranteeInUse(SetThreadStackGuaranteeScope fScope = STSGuarantee_OnlyIfEnabled)
1083 WRAPPER_NO_CONTRACT;
1085 if(STSGuarantee_Force == fScope)
1088 //The runtime must be hosted to have escalation policy
1089 //If escalation policy is enabled but StackOverflow is not part of the policy
1090 // then we don't use SetThreadStackGuarantee
1092 GetEEPolicy()->GetActionOnFailure(FAIL_StackOverflow) == eRudeExitProcess)
1094 //FAIL_StackOverflow is ProcessExit so don't use SetThreadStackGuarantee
1102 // If we are trying to suspend a thread, we set the appropriate pending bit to
1103 // indicate why we want to suspend it (TS_GCSuspendPending, TS_UserSuspendPending,
1104 // TS_DebugSuspendPending).
1106 // If instead the thread has blocked itself, via WaitSuspendEvent, we indicate
1107 // this with TS_SyncSuspended. However, we need to know whether the synchronous
1108 // suspension is for a user request, or for an internal one (GC & Debug). That's
1109 // because a user request is not allowed to resume a thread suspended for
1110 // debugging or GC. -- That's not stricly true. It is allowed to resume such a
1111 // thread so long as it was ALSO suspended by the user. In other words, this
1112 // ensures that user resumptions aren't unbalanced from user suspensions.
1116 TS_Unknown = 0x00000000, // threads are initialized this way
1118 TS_AbortRequested = 0x00000001, // Abort the thread
1119 TS_GCSuspendPending = 0x00000002, // waiting to get to safe spot for GC
1120 TS_UserSuspendPending = 0x00000004, // user suspension at next opportunity
1121 TS_DebugSuspendPending = 0x00000008, // Is the debugger suspending threads?
1122 TS_GCOnTransitions = 0x00000010, // Force a GC on stub transitions (GCStress only)
1124 TS_LegalToJoin = 0x00000020, // Is it now legal to attempt a Join()
1126 // unused = 0x00000040,
1128 #ifdef FEATURE_HIJACK
1129 TS_Hijacked = 0x00000080, // Return address has been hijacked
1130 #endif // FEATURE_HIJACK
1132 TS_BlockGCForSO = 0x00000100, // If a thread does not have enough stack, WaitUntilGCComplete may fail.
1133 // Either GC suspension will wait until the thread has cleared this bit,
1134 // Or the current thread is going to spin if GC has suspended all threads.
1135 TS_Background = 0x00000200, // Thread is a background thread
1136 TS_Unstarted = 0x00000400, // Thread has never been started
1137 TS_Dead = 0x00000800, // Thread is dead
1139 TS_WeOwn = 0x00001000, // Exposed object initiated this thread
1140 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
1141 TS_CoInitialized = 0x00002000, // CoInitialize has been called for this thread
1143 TS_InSTA = 0x00004000, // Thread hosts an STA
1144 TS_InMTA = 0x00008000, // Thread is part of the MTA
1145 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
1147 // Some bits that only have meaning for reporting the state to clients.
1148 TS_ReportDead = 0x00010000, // in WaitForOtherThreads()
1149 TS_FullyInitialized = 0x00020000, // Thread is fully initialized and we are ready to broadcast its existence to external clients
1151 TS_TaskReset = 0x00040000, // The task is reset
1153 TS_SyncSuspended = 0x00080000, // Suspended via WaitSuspendEvent
1154 TS_DebugWillSync = 0x00100000, // Debugger will wait for this thread to sync
1156 TS_StackCrawlNeeded = 0x00200000, // A stackcrawl is needed on this thread, such as for thread abort
1157 // See comment for s_pWaitForStackCrawlEvent for reason.
1159 TS_SuspendUnstarted = 0x00400000, // latch a user suspension on an unstarted thread
1161 TS_Aborted = 0x00800000, // is the thread aborted?
1162 TS_TPWorkerThread = 0x01000000, // is this a threadpool worker thread?
1164 TS_Interruptible = 0x02000000, // sitting in a Sleep(), Wait(), Join()
1165 TS_Interrupted = 0x04000000, // was awakened by an interrupt APC. !!! This can be moved to TSNC
1167 TS_CompletionPortThread = 0x08000000, // Completion port thread
1169 TS_AbortInitiated = 0x10000000, // set when abort is begun
1171 TS_Finalized = 0x20000000, // The associated managed Thread object has been finalized.
1172 // We can clean up the unmanaged part now.
1174 TS_FailStarted = 0x40000000, // The thread fails during startup.
1175 TS_Detached = 0x80000000, // Thread was detached by DllMain
1177 // <TODO> @TODO: We need to reclaim the bits that have no concurrency issues (i.e. they are only
1178 // manipulated by the owning thread) and move them off to a different DWORD. Note if this
1179 // enum is changed, we also need to update SOS to reflect this.</TODO>
1181 // We require (and assert) that the following bits are less than 0x100.
1182 TS_CatchAtSafePoint = (TS_UserSuspendPending | TS_AbortRequested |
1183 TS_GCSuspendPending | TS_DebugSuspendPending | TS_GCOnTransitions),
1186 // Thread flags that aren't really states in themselves but rather things the thread
1190 TT_CleanupSyncBlock = 0x00000001, // The synch block needs to be cleaned up.
1191 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
1192 TT_CallCoInitialize = 0x00000002, // CoInitialize needs to be called.
1193 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
1196 // Thread flags that have no concurrency issues (i.e., they are only manipulated by the owning thread). Use these
1197 // state flags when you have a new thread state that doesn't belong in the ThreadState enum above.
1199 // <TODO>@TODO: its possible that the ThreadTasks from above and these flags should be merged.</TODO>
1200 enum ThreadStateNoConcurrency
1202 TSNC_Unknown = 0x00000000, // threads are initialized this way
1204 TSNC_DebuggerUserSuspend = 0x00000001, // marked "suspended" by the debugger
1205 TSNC_DebuggerReAbort = 0x00000002, // thread needs to re-abort itself when resumed by the debugger
1206 TSNC_DebuggerIsStepping = 0x00000004, // debugger is stepping this thread
1207 TSNC_DebuggerIsManagedException = 0x00000008, // EH is re-raising a managed exception.
1208 TSNC_WaitUntilGCFinished = 0x00000010, // The current thread is waiting for GC. If host returns
1209 // SO during wait, we will either spin or make GC wait.
1210 TSNC_BlockedForShutdown = 0x00000020, // Thread is blocked in WaitForEndOfShutdown. We should not hit WaitForEndOfShutdown again.
1211 TSNC_SOWorkNeeded = 0x00000040, // The thread needs to wake up AD unload helper thread to finish SO work
1212 TSNC_CLRCreatedThread = 0x00000080, // The thread was created through Thread::CreateNewThread
1213 TSNC_ExistInThreadStore = 0x00000100, // For dtor to know if it needs to be removed from ThreadStore
1214 TSNC_UnsafeSkipEnterCooperative = 0x00000200, // This is a "fix" for deadlocks caused when cleaning up COM
1215 TSNC_OwnsSpinLock = 0x00000400, // The thread owns a spinlock.
1216 TSNC_PreparingAbort = 0x00000800, // Preparing abort. This avoids recursive HandleThreadAbort call.
1217 TSNC_OSAlertableWait = 0x00001000, // Preparing abort. This avoids recursive HandleThreadAbort call.
1218 // unused = 0x00002000,
1219 TSNC_CreatingTypeInitException = 0x00004000, // Thread is trying to create a TypeInitException
1220 // unused = 0x00008000,
1221 // unused = 0x00010000,
1222 TSNC_InRestoringSyncBlock = 0x00020000, // The thread is restoring its SyncBlock for Object.Wait.
1223 // After the thread is interrupted once, we turn off interruption
1224 // at the beginning of wait.
1225 TSNC_DisableOleaut32Check = 0x00040000, // Disable oleaut32 delay load check. Oleaut32 has
1227 TSNC_CannotRecycle = 0x00080000, // A host can not recycle this Thread object. When a thread
1228 // has orphaned lock, we will apply this.
1229 TSNC_RaiseUnloadEvent = 0x00100000, // Finalize thread is raising managed unload event which
1230 // may call AppDomain.Unload.
1231 TSNC_UnbalancedLocks = 0x00200000, // Do not rely on lock accounting for this thread:
1232 // we left an app domain with a lock count different from
1233 // when we entered it
1234 // unused = 0x00400000,
1235 TSNC_IgnoreUnhandledExceptions = 0x00800000, // Set for a managed thread born inside an appdomain created with the APPDOMAIN_IGNORE_UNHANDLED_EXCEPTIONS flag.
1236 TSNC_ProcessedUnhandledException = 0x01000000,// Set on a thread on which we have done unhandled exception processing so that
1237 // we dont perform it again when OS invokes our UEF. Currently, applicable threads include:
1238 // 1) entry point thread of a managed app
1239 // 2) new managed thread created in default domain
1241 // For such threads, we will return to the OS after our UE processing is done
1242 // and the OS will start invoking the UEFs. If our UEF gets invoked, it will try to
1243 // perform the UE processing again. We will use this flag to prevent the duplicated
1246 // Once we are completely independent of the OS UEF, we could remove this.
1247 TSNC_InsideSyncContextWait = 0x02000000, // Whether we are inside DoSyncContextWait
1248 TSNC_DebuggerSleepWaitJoin = 0x04000000, // Indicates to the debugger that this thread is in a sleep wait or join state
1249 // This almost mirrors the TS_Interruptible state however that flag can change
1250 // during GC-preemptive mode whereas this one cannot.
1251 #ifdef FEATURE_COMINTEROP
1252 TSNC_WinRTInitialized = 0x08000000, // the thread has initialized WinRT
1253 #endif // FEATURE_COMINTEROP
1255 // TSNC_Unused = 0x10000000,
1257 TSNC_CallingManagedCodeDisabled = 0x20000000, // Use by multicore JIT feature to asert on calling managed code/loading module in background thread
1258 // Exception, system module is allowed, security demand is allowed
1260 TSNC_LoadsTypeViolation = 0x40000000, // Use by type loader to break deadlocks caused by type load level ordering violations
1262 TSNC_EtwStackWalkInProgress = 0x80000000, // Set on the thread so that ETW can know that stackwalking is in progress
1263 // and does not proceed with a stackwalk on the same thread
1264 // There are cases during managed debugging when we can run into this situation
1267 // Functions called by host
1268 STDMETHODIMP QueryInterface(REFIID riid, void** ppv)
1269 DAC_EMPTY_RET(E_NOINTERFACE);
1270 STDMETHODIMP_(ULONG) AddRef(void)
1272 STDMETHODIMP_(ULONG) Release(void)
1274 STDMETHODIMP Abort()
1275 DAC_EMPTY_RET(E_FAIL);
1276 STDMETHODIMP RudeAbort()
1277 DAC_EMPTY_RET(E_FAIL);
1278 STDMETHODIMP NeedsPriorityScheduling(BOOL *pbNeedsPriorityScheduling)
1279 DAC_EMPTY_RET(E_FAIL);
1281 STDMETHODIMP YieldTask()
1282 DAC_EMPTY_RET(E_FAIL);
1283 STDMETHODIMP LocksHeld(SIZE_T *pLockCount)
1284 DAC_EMPTY_RET(E_FAIL);
1286 STDMETHODIMP BeginPreventAsyncAbort()
1287 DAC_EMPTY_RET(E_FAIL);
1288 STDMETHODIMP EndPreventAsyncAbort()
1289 DAC_EMPTY_RET(E_FAIL);
1291 void InternalReset (BOOL fNotFinalizerThread=FALSE, BOOL fThreadObjectResetNeeded=TRUE, BOOL fResetAbort=TRUE);
1292 INT32 ResetManagedThreadObject(INT32 nPriority);
1293 INT32 ResetManagedThreadObjectInCoopMode(INT32 nPriority);
1294 BOOL IsRealThreadPoolResetNeeded();
1296 HRESULT DetachThread(BOOL fDLLThreadDetach);
1298 void SetThreadState(ThreadState ts)
1300 LIMITED_METHOD_CONTRACT;
1301 FastInterlockOr((DWORD*)&m_State, ts);
1304 void ResetThreadState(ThreadState ts)
1306 LIMITED_METHOD_CONTRACT;
1307 FastInterlockAnd((DWORD*)&m_State, ~ts);
1310 BOOL HasThreadState(ThreadState ts)
1312 LIMITED_METHOD_CONTRACT;
1313 return ((DWORD)m_State & ts);
1317 // This is meant to be used for quick opportunistic checks for thread abort and similar conditions. This method
1318 // does not erect memory barrier and so it may return wrong result sometime that the caller has to handle.
1320 BOOL HasThreadStateOpportunistic(ThreadState ts)
1322 LIMITED_METHOD_CONTRACT;
1323 return m_State.LoadWithoutBarrier() & ts;
1326 void SetThreadStateNC(ThreadStateNoConcurrency tsnc)
1328 LIMITED_METHOD_CONTRACT;
1329 m_StateNC = (ThreadStateNoConcurrency)((DWORD)m_StateNC | tsnc);
1332 void ResetThreadStateNC(ThreadStateNoConcurrency tsnc)
1334 LIMITED_METHOD_CONTRACT;
1335 m_StateNC = (ThreadStateNoConcurrency)((DWORD)m_StateNC & ~tsnc);
1338 BOOL HasThreadStateNC(ThreadStateNoConcurrency tsnc)
1340 LIMITED_METHOD_DAC_CONTRACT;
1341 return ((DWORD)m_StateNC & tsnc);
1344 void MarkEtwStackWalkInProgress()
1346 WRAPPER_NO_CONTRACT;
1347 SetThreadStateNC(Thread::TSNC_EtwStackWalkInProgress);
1350 void MarkEtwStackWalkCompleted()
1352 WRAPPER_NO_CONTRACT;
1353 ResetThreadStateNC(Thread::TSNC_EtwStackWalkInProgress);
1356 BOOL IsEtwStackWalkInProgress()
1358 WRAPPER_NO_CONTRACT;
1359 return HasThreadStateNC(Thread::TSNC_EtwStackWalkInProgress);
1362 DWORD RequireSyncBlockCleanup()
1364 LIMITED_METHOD_CONTRACT;
1365 return (m_ThreadTasks & TT_CleanupSyncBlock);
1368 void SetSyncBlockCleanup()
1370 LIMITED_METHOD_CONTRACT;
1371 FastInterlockOr((ULONG *)&m_ThreadTasks, TT_CleanupSyncBlock);
1374 void ResetSyncBlockCleanup()
1376 LIMITED_METHOD_CONTRACT;
1377 FastInterlockAnd((ULONG *)&m_ThreadTasks, ~TT_CleanupSyncBlock);
1380 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
1381 DWORD IsCoInitialized()
1383 LIMITED_METHOD_CONTRACT;
1384 return (m_State & TS_CoInitialized);
1387 void SetCoInitialized()
1389 LIMITED_METHOD_CONTRACT;
1390 FastInterlockOr((ULONG *)&m_State, TS_CoInitialized);
1391 FastInterlockAnd((ULONG*)&m_ThreadTasks, ~TT_CallCoInitialize);
1394 void ResetCoInitialized()
1396 LIMITED_METHOD_CONTRACT;
1397 FastInterlockAnd((ULONG *)&m_State,~TS_CoInitialized);
1400 #ifdef FEATURE_COMINTEROP
1401 BOOL IsWinRTInitialized()
1403 LIMITED_METHOD_CONTRACT;
1404 return HasThreadStateNC(TSNC_WinRTInitialized);
1407 void ResetWinRTInitialized()
1409 LIMITED_METHOD_CONTRACT;
1410 ResetThreadStateNC(TSNC_WinRTInitialized);
1412 #endif // FEATURE_COMINTEROP
1414 DWORD RequiresCoInitialize()
1416 LIMITED_METHOD_CONTRACT;
1417 return (m_ThreadTasks & TT_CallCoInitialize);
1420 void SetRequiresCoInitialize()
1422 LIMITED_METHOD_CONTRACT;
1423 FastInterlockOr((ULONG *)&m_ThreadTasks, TT_CallCoInitialize);
1426 void ResetRequiresCoInitialize()
1428 LIMITED_METHOD_CONTRACT;
1429 FastInterlockAnd((ULONG *)&m_ThreadTasks,~TT_CallCoInitialize);
1432 void CleanupCOMState();
1434 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
1436 #ifdef FEATURE_COMINTEROP
1437 bool IsDisableComObjectEagerCleanup()
1439 LIMITED_METHOD_CONTRACT;
1440 return m_fDisableComObjectEagerCleanup;
1442 void SetDisableComObjectEagerCleanup()
1444 LIMITED_METHOD_CONTRACT;
1445 m_fDisableComObjectEagerCleanup = true;
1447 #endif //FEATURE_COMINTEROP
1449 #ifndef DACCESS_COMPILE
1450 bool HasDeadThreadBeenConsideredForGCTrigger()
1452 LIMITED_METHOD_CONTRACT;
1455 return m_fHasDeadThreadBeenConsideredForGCTrigger;
1458 void SetHasDeadThreadBeenConsideredForGCTrigger()
1460 LIMITED_METHOD_CONTRACT;
1463 m_fHasDeadThreadBeenConsideredForGCTrigger = true;
1465 #endif // !DACCESS_COMPILE
1467 // returns if there is some extra work for the finalizer thread.
1468 BOOL HaveExtraWorkForFinalizer();
1470 // do the extra finalizer work.
1471 void DoExtraWorkForFinalizer();
1473 #ifndef DACCESS_COMPILE
1474 DWORD CatchAtSafePoint()
1476 LIMITED_METHOD_CONTRACT;
1477 return (m_State & TS_CatchAtSafePoint);
1480 DWORD CatchAtSafePointOpportunistic()
1482 LIMITED_METHOD_CONTRACT;
1483 return HasThreadStateOpportunistic(TS_CatchAtSafePoint);
1485 #endif // DACCESS_COMPILE
1487 DWORD IsBackground()
1489 LIMITED_METHOD_CONTRACT;
1490 return (m_State & TS_Background);
1495 LIMITED_METHOD_CONTRACT;
1497 return (m_State & TS_Unstarted);
1502 LIMITED_METHOD_CONTRACT;
1503 return (m_State & TS_Dead);
1508 LIMITED_METHOD_CONTRACT;
1509 return (m_State & TS_Aborted);
1514 FastInterlockOr((ULONG *) &m_State, TS_Aborted);
1519 FastInterlockAnd((ULONG *) &m_State, ~TS_Aborted);
1524 LIMITED_METHOD_CONTRACT;
1525 return (m_State & TS_WeOwn);
1528 // For reporting purposes, grab a consistent snapshot of the thread's state
1529 ThreadState GetSnapshotState();
1531 // For delayed destruction of threads
1534 LIMITED_METHOD_CONTRACT;
1535 return (m_State & TS_Detached);
1538 static LONG m_DetachCount;
1539 static LONG m_ActiveDetachCount; // Count how many non-background detached
1541 static Volatile<LONG> m_threadsAtUnsafePlaces;
1543 // Offsets for the following variables need to fit in 1 byte, so keep near
1544 // the top of the object. Also, we want cache line filling to work for us
1545 // so the critical stuff is ordered based on frequency of use.
1547 Volatile<ThreadState> m_State; // Bits for the state of the thread
1549 // If TRUE, GC is scheduled cooperatively with this thread.
1550 // NOTE: This "byte" is actually a boolean - we don't allow
1551 // recursive disables.
1552 Volatile<ULONG> m_fPreemptiveGCDisabled;
1554 PTR_Frame m_pFrame; // The Current Frame
1556 //-----------------------------------------------------------
1557 // If the thread has wandered in from the outside this is
1559 //-----------------------------------------------------------
1560 PTR_AppDomain m_pDomain;
1562 // Track the number of locks (critical section, spin lock, syncblock lock,
1563 // EE Crst, GC lock) held by the current thread.
1564 DWORD m_dwLockCount;
1566 // Unique thread id used for thin locks - kept as small as possible, as we have limited space
1567 // in the object header to store it.
1573 LockEntry m_embeddedEntry;
1575 #ifndef DACCESS_COMPILE
1576 Frame* NotifyFrameChainOfExceptionUnwind(Frame* pStartFrame, LPVOID pvLimitSP);
1577 #endif // DACCESS_COMPILE
1579 #if defined(FEATURE_COMINTEROP) && !defined(DACCESS_COMPILE)
1580 void RegisterRCW(RCW *pRCW)
1587 PRECONDITION(CheckPointer(pRCW));
1591 if (!m_pRCWStack->Push(pRCW))
1597 // Returns false on OOM.
1598 BOOL RegisterRCWNoThrow(RCW *pRCW)
1605 PRECONDITION(CheckPointer(pRCW, NULL_OK));
1609 return m_pRCWStack->Push(pRCW);
1612 RCW *UnregisterRCW(INDEBUG(SyncBlock *pSB))
1619 PRECONDITION(CheckPointer(pSB));
1623 RCW* pPoppedRCW = m_pRCWStack->Pop();
1626 // The RCW we popped must be the one pointed to by pSB if pSB still points to an RCW.
1627 RCW* pCurrentRCW = pSB->GetInteropInfoNoCreate()->GetRawRCW();
1628 _ASSERTE(pCurrentRCW == NULL || pPoppedRCW == NULL || pCurrentRCW == pPoppedRCW);
1634 BOOL RCWIsInUse(RCW* pRCW)
1641 PRECONDITION(CheckPointer(pRCW));
1645 return m_pRCWStack->IsInStack(pRCW);
1647 #endif // FEATURE_COMINTEROP && !DACCESS_COMPILE
1649 // Lock thread is trying to acquire
1650 VolatilePtr<DeadlockAwareLock> m_pBlockingLock;
1654 // on MP systems, each thread has its own allocation chunk so we can avoid
1655 // lock prefixes and expensive MP cache snooping stuff
1656 gc_alloc_context m_alloc_context;
1658 inline gc_alloc_context *GetAllocContext() { LIMITED_METHOD_CONTRACT; return &m_alloc_context; }
1660 // This is the type handle of the first object in the alloc context at the time
1661 // we fire the AllocationTick event. It's only for tooling purpose.
1662 TypeHandle m_thAllocContextObj;
1669 LIMITED_METHOD_CONTRACT;
1672 PEXCEPTION_REGISTRATION_RECORD *GetExceptionListPtr() {
1673 WRAPPER_NO_CONTRACT;
1674 return &GetTEB()->ExceptionList;
1676 #endif // !FEATURE_PAL
1678 inline void SetTHAllocContextObj(TypeHandle th) {LIMITED_METHOD_CONTRACT; m_thAllocContextObj = th; }
1680 inline TypeHandle GetTHAllocContextObj() {LIMITED_METHOD_CONTRACT; return m_thAllocContextObj; }
1682 #ifdef FEATURE_COMINTEROP
1683 // The header for the per-thread in-use RCW stack.
1684 RCWStackHeader* m_pRCWStack;
1685 #endif // FEATURE_COMINTEROP
1687 // Allocator used during marshaling for temporary buffers, much faster than
1690 // Uses of this allocator should be effectively statically scoped, i.e. a "region"
1691 // is started using a CheckPointHolder and GetCheckpoint, and this region can then be used for allocations
1692 // from that point onwards, and then all memory is reclaimed when the static scope for the
1693 // checkpoint is exited by the running thread.
1694 StackingAllocator m_MarshalAlloc;
1696 // Flags used to indicate tasks the thread has to do.
1697 ThreadTasks m_ThreadTasks;
1699 // Flags for thread states that have no concurrency issues.
1700 ThreadStateNoConcurrency m_StateNC;
1702 inline void IncLockCount();
1703 inline void DecLockCount();
1706 DWORD m_dwBeginLockCount; // lock count when the thread enters current domain
1709 DWORD dbg_m_cSuspendedThreads;
1710 // Count of suspended threads that we know are not in native code (and therefore cannot hold OS lock which prevents us calling out to host)
1711 DWORD dbg_m_cSuspendedThreadsWithoutOSLock;
1712 EEThreadId m_Creater;
1715 // After we suspend a thread, we may need to call EEJitManager::JitCodeToMethodInfo
1716 // or StressLog which may waits on a spinlock. It is unsafe to suspend a thread while it
1717 // is in this state.
1718 Volatile<LONG> m_dwForbidSuspendThread;
1721 static void IncForbidSuspendThread()
1731 #ifndef DACCESS_COMPILE
1732 Thread * pThread = GetThreadNULLOk();
1735 _ASSERTE (pThread->m_dwForbidSuspendThread != (LONG)MAXLONG);
1739 STRESS_LOG2(LF_SYNC, LL_INFO100000, "Set forbid suspend [%d] for thread %p.\n", pThread->m_dwForbidSuspendThread.Load(), pThread);
1742 FastInterlockIncrement(&pThread->m_dwForbidSuspendThread);
1744 #endif //!DACCESS_COMPILE
1747 static void DecForbidSuspendThread()
1757 #ifndef DACCESS_COMPILE
1758 Thread * pThread = GetThreadNULLOk();
1761 _ASSERTE (pThread->m_dwForbidSuspendThread != (LONG)0);
1762 FastInterlockDecrement(&pThread->m_dwForbidSuspendThread);
1766 STRESS_LOG2(LF_SYNC, LL_INFO100000, "Reset forbid suspend [%d] for thread %p.\n", pThread->m_dwForbidSuspendThread.Load(), pThread);
1770 #endif //!DACCESS_COMPILE
1773 bool IsInForbidSuspendRegion()
1775 return m_dwForbidSuspendThread != (LONG)0;
1778 typedef StateHolder<Thread::IncForbidSuspendThread, Thread::DecForbidSuspendThread> ForbidSuspendThreadHolder;
1781 // Per thread counter to dispense hash code - kept in the thread so we don't need a lock
1782 // or interlocked operations to get a new hash code;
1783 DWORD m_dwHashCodeSeed;
1787 inline BOOL HasLockInCurrentDomain()
1789 LIMITED_METHOD_CONTRACT;
1791 _ASSERTE(m_dwLockCount >= m_dwBeginLockCount);
1793 // Equivalent to (m_dwLockCount != m_dwBeginLockCount ||
1794 // m_dwCriticalRegionCount ! m_dwBeginCriticalRegionCount),
1795 // but without branching instructions
1796 BOOL fHasLock = (m_dwLockCount ^ m_dwBeginLockCount);
1801 inline BOOL HasCriticalRegion()
1803 LIMITED_METHOD_CONTRACT;
1807 inline DWORD GetNewHashCode()
1809 LIMITED_METHOD_CONTRACT;
1810 // Every thread has its own generator for hash codes so that we won't get into a situation
1811 // where two threads consistently give out the same hash codes.
1812 // Choice of multiplier guarantees period of 2**32 - see Knuth Vol 2 p16 (3.2.1.2 Theorem A).
1813 DWORD multiplier = GetThreadId()*4 + 5;
1814 m_dwHashCodeSeed = m_dwHashCodeSeed*multiplier + 1;
1815 return m_dwHashCodeSeed;
1819 // If the current thread suspends other threads, we need to make sure that the thread
1820 // only allocates memory if the suspended threads do not have OS Heap lock.
1821 static BOOL Debug_AllowCallout()
1823 LIMITED_METHOD_CONTRACT;
1824 Thread * pThread = GetThreadNULLOk();
1825 return ((pThread == NULL) || (pThread->dbg_m_cSuspendedThreads == pThread->dbg_m_cSuspendedThreadsWithoutOSLock));
1828 // Returns number of threads that are currently suspended by the current thread and that can potentially hold OS lock
1829 BOOL Debug_GetUnsafeSuspendeeCount()
1831 LIMITED_METHOD_CONTRACT;
1832 return (dbg_m_cSuspendedThreads - dbg_m_cSuspendedThreadsWithoutOSLock);
1838 BOOL HasThreadAffinity()
1840 LIMITED_METHOD_CONTRACT;
1845 LoadLevelLimiter *m_pLoadLimiter;
1848 LoadLevelLimiter *GetLoadLevelLimiter()
1850 LIMITED_METHOD_CONTRACT;
1851 return m_pLoadLimiter;
1854 void SetLoadLevelLimiter(LoadLevelLimiter *limiter)
1856 LIMITED_METHOD_CONTRACT;
1857 m_pLoadLimiter = limiter;
1864 //--------------------------------------------------------------
1866 //--------------------------------------------------------------
1867 #ifndef DACCESS_COMPILE
1871 //--------------------------------------------------------------
1872 // Failable initialization occurs here.
1873 //--------------------------------------------------------------
1874 BOOL InitThread(BOOL fInternal);
1875 BOOL AllocHandles();
1877 void SetupThreadForHost();
1879 //--------------------------------------------------------------
1880 // If the thread was setup through SetupUnstartedThread, rather
1881 // than SetupThread, complete the setup here when the thread is
1882 // actually running.
1883 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
1884 //--------------------------------------------------------------
1885 BOOL HasStarted(BOOL bRequiresTSL=TRUE);
1887 // We don't want ::CreateThread() calls scattered throughout the source.
1888 // Create all new threads here. The thread is created as suspended, so
1889 // you must ::ResumeThread to kick it off. It is guaranteed to create the
1890 // thread, or throw.
1891 BOOL CreateNewThread(SIZE_T stackSize, LPTHREAD_START_ROUTINE start, void *args, LPCWSTR pName=NULL);
1894 enum StackSizeBucket
1902 // Creates a raw OS thread; use this only for CLR-internal threads that never execute user code.
1903 // StackSizeBucket determines how large the stack should be.
1905 static HANDLE CreateUtilityThread(StackSizeBucket stackSizeBucket, LPTHREAD_START_ROUTINE start, void *args, LPCWSTR pName, DWORD flags = 0, DWORD* pThreadId = NULL);
1907 //--------------------------------------------------------------
1909 //--------------------------------------------------------------
1910 #ifndef DACCESS_COMPILE
1913 virtual ~Thread() {}
1916 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
1917 void CoUninitialize();
1918 void BaseCoUninitialize();
1919 void BaseWinRTUninitialize();
1920 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
1922 void OnThreadTerminate(BOOL holdingLock);
1924 static void CleanupDetachedThreads();
1925 //--------------------------------------------------------------
1926 // Returns innermost active Frame.
1927 //--------------------------------------------------------------
1928 PTR_Frame GetFrame()
1932 #ifndef DACCESS_COMPILE
1934 WRAPPER_NO_CONTRACT;
1935 if (this == GetThreadNULLOk())
1938 curSP = (void *)GetCurrentSP();
1939 _ASSERTE((curSP <= m_pFrame && m_pFrame < m_CacheStackBase) || m_pFrame == (Frame*) -1);
1942 LIMITED_METHOD_CONTRACT;
1945 #endif // #ifndef DACCESS_COMPILE
1949 //--------------------------------------------------------------
1950 // Replaces innermost active Frames.
1951 //--------------------------------------------------------------
1952 #ifndef DACCESS_COMPILE
1953 void SetFrame(Frame *pFrame)
1958 LIMITED_METHOD_CONTRACT;
1964 inline Frame* FindFrame(SIZE_T StackPointer);
1966 bool DetectHandleILStubsForDebugger();
1968 void SetWin32FaultAddress(DWORD eip)
1970 LIMITED_METHOD_CONTRACT;
1971 m_Win32FaultAddress = eip;
1974 void SetWin32FaultCode(DWORD code)
1976 LIMITED_METHOD_CONTRACT;
1977 m_Win32FaultCode = code;
1980 DWORD GetWin32FaultAddress()
1982 LIMITED_METHOD_CONTRACT;
1983 return m_Win32FaultAddress;
1986 DWORD GetWin32FaultCode()
1988 LIMITED_METHOD_CONTRACT;
1989 return m_Win32FaultCode;
1992 #ifdef ENABLE_CONTRACTS
1993 ClrDebugState *GetClrDebugState()
1995 LIMITED_METHOD_CONTRACT;
1996 return m_pClrDebugState;
2000 //**************************************************************
2002 //**************************************************************
2004 //--------------------------------------------------------------
2005 // Enter cooperative GC mode. NOT NESTABLE.
2006 //--------------------------------------------------------------
2007 FORCEINLINE_NONDEBUG void DisablePreemptiveGC()
2009 #ifndef DACCESS_COMPILE
2010 WRAPPER_NO_CONTRACT;
2011 _ASSERTE(this == GetThread());
2012 _ASSERTE(!m_fPreemptiveGCDisabled);
2013 // holding a spin lock in preemp mode and transit to coop mode will cause other threads
2014 // spinning waiting for GC
2015 _ASSERTE ((m_StateNC & Thread::TSNC_OwnsSpinLock) == 0);
2017 #ifdef ENABLE_CONTRACTS_IMPL
2021 // Logically, we just want to check whether a GC is in progress and halt
2022 // at the boundary if it is -- before we disable preemptive GC. However
2023 // this opens up a race condition where the GC starts after we make the
2024 // check. SuspendRuntime will ignore such a thread because it saw it as
2025 // outside the EE. So the thread would run wild during the GC.
2027 // Instead, enter cooperative mode and then check if a GC is in progress.
2028 // If so, go back out and try again. The reason we go back out before we
2029 // try again, is that SuspendRuntime might have seen us as being in
2030 // cooperative mode if it checks us between the next two statements.
2031 // In that case, it will be trying to move us to a safe spot. If
2032 // we don't let it see us leave, it will keep waiting on us indefinitely.
2034 // ------------------------------------------------------------------------
2035 // ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** |
2036 // ------------------------------------------------------------------------
2038 // DO NOT CHANGE THIS METHOD WITHOUT VISITING ALL THE STUB GENERATORS
2039 // THAT EFFECTIVELY INLINE IT INTO THEIR STUBS
2041 // ------------------------------------------------------------------------
2042 // ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** |
2043 // ------------------------------------------------------------------------
2045 m_fPreemptiveGCDisabled.StoreWithoutBarrier(1);
2047 if (g_TrapReturningThreads.LoadWithoutBarrier())
2049 RareDisablePreemptiveGC();
2052 LIMITED_METHOD_CONTRACT;
2056 NOINLINE void RareDisablePreemptiveGC();
2058 void HandleThreadAbort();
2060 void PreWorkForThreadAbort();
2063 void HandleThreadAbortTimeout();
2066 //--------------------------------------------------------------
2067 // Leave cooperative GC mode. NOT NESTABLE.
2068 //--------------------------------------------------------------
2069 FORCEINLINE_NONDEBUG void EnablePreemptiveGC()
2071 LIMITED_METHOD_CONTRACT;
2073 #ifndef DACCESS_COMPILE
2074 _ASSERTE(this == GetThread());
2075 _ASSERTE(m_fPreemptiveGCDisabled);
2076 // holding a spin lock in coop mode and transit to preemp mode will cause deadlock on GC
2077 _ASSERTE ((m_StateNC & Thread::TSNC_OwnsSpinLock) == 0);
2079 #ifdef ENABLE_CONTRACTS_IMPL
2080 _ASSERTE(!GCForbidden());
2084 // ------------------------------------------------------------------------
2085 // ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** |
2086 // ------------------------------------------------------------------------
2088 // DO NOT CHANGE THIS METHOD WITHOUT VISITING ALL THE STUB GENERATORS
2089 // THAT EFFECTIVELY INLINE IT INTO THEIR STUBS
2091 // ------------------------------------------------------------------------
2092 // ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** WARNING ** |
2093 // ------------------------------------------------------------------------
2095 m_fPreemptiveGCDisabled.StoreWithoutBarrier(0);
2096 #ifdef ENABLE_CONTRACTS
2097 m_ulEnablePreemptiveGCCount ++;
2100 if (CatchAtSafePoint())
2101 RareEnablePreemptiveGC();
2105 #if defined(STRESS_HEAP) && defined(_DEBUG)
2106 void PerformPreemptiveGC();
2108 void RareEnablePreemptiveGC();
2111 //--------------------------------------------------------------
2113 //--------------------------------------------------------------
2114 BOOL PreemptiveGCDisabled()
2116 WRAPPER_NO_CONTRACT;
2117 _ASSERTE(this == GetThread());
2119 // m_fPreemptiveGCDisabled is always modified by the thread itself, and so the thread itself
2120 // can read it without memory barrier.
2122 return m_fPreemptiveGCDisabled.LoadWithoutBarrier();
2125 BOOL PreemptiveGCDisabledOther()
2127 LIMITED_METHOD_CONTRACT;
2128 return (m_fPreemptiveGCDisabled);
2131 #ifdef ENABLE_CONTRACTS_IMPL
2133 void BeginNoTriggerGC(const char *szFile, int lineNum)
2135 WRAPPER_NO_CONTRACT;
2136 m_pClrDebugState->IncrementGCNoTriggerCount();
2137 if (PreemptiveGCDisabled())
2139 m_pClrDebugState->IncrementGCForbidCount();
2143 void EndNoTriggerGC()
2145 WRAPPER_NO_CONTRACT;
2146 _ASSERTE(m_pClrDebugState->GetGCNoTriggerCount() != 0 || (m_pClrDebugState->ViolationMask() & BadDebugState));
2147 m_pClrDebugState->DecrementGCNoTriggerCount();
2149 if (m_pClrDebugState->GetGCForbidCount())
2151 m_pClrDebugState->DecrementGCForbidCount();
2155 void BeginForbidGC(const char *szFile, int lineNum)
2157 WRAPPER_NO_CONTRACT;
2158 _ASSERTE(this == GetThread());
2159 #ifdef PROFILING_SUPPORTED
2160 _ASSERTE(PreemptiveGCDisabled()
2161 || CORProfilerPresent() || // This added to allow profiler to use GetILToNativeMapping
2162 // while in preemptive GC mode
2163 (g_fEEShutDown & (ShutDown_Finalize2 | ShutDown_Profiler)) == ShutDown_Finalize2);
2164 #else // PROFILING_SUPPORTED
2165 _ASSERTE(PreemptiveGCDisabled());
2166 #endif // PROFILING_SUPPORTED
2167 BeginNoTriggerGC(szFile, lineNum);
2172 WRAPPER_NO_CONTRACT;
2173 _ASSERTE(this == GetThread());
2174 #ifdef PROFILING_SUPPORTED
2175 _ASSERTE(PreemptiveGCDisabled() ||
2176 CORProfilerPresent() || // This added to allow profiler to use GetILToNativeMapping
2177 // while in preemptive GC mode
2178 (g_fEEShutDown & (ShutDown_Finalize2 | ShutDown_Profiler)) == ShutDown_Finalize2);
2179 #else // PROFILING_SUPPORTED
2180 _ASSERTE(PreemptiveGCDisabled());
2181 #endif // PROFILING_SUPPORTED
2187 WRAPPER_NO_CONTRACT;
2188 _ASSERTE(this == GetThread());
2189 if ( (GCViolation|BadDebugState) & m_pClrDebugState->ViolationMask() )
2193 return m_pClrDebugState->GetGCNoTriggerCount();
2198 WRAPPER_NO_CONTRACT;
2199 _ASSERTE(this == GetThread());
2200 if ( (GCViolation|BadDebugState) & m_pClrDebugState->ViolationMask())
2204 return m_pClrDebugState->GetGCForbidCount();
2207 BOOL RawGCNoTrigger()
2209 LIMITED_METHOD_CONTRACT;
2210 if (m_pClrDebugState->ViolationMask() & BadDebugState)
2214 return m_pClrDebugState->GetGCNoTriggerCount();
2217 BOOL RawGCForbidden()
2219 LIMITED_METHOD_CONTRACT;
2220 if (m_pClrDebugState->ViolationMask() & BadDebugState)
2224 return m_pClrDebugState->GetGCForbidCount();
2226 #endif // ENABLE_CONTRACTS_IMPL
2228 //---------------------------------------------------------------
2229 // Expose key offsets and values for stub generation.
2230 //---------------------------------------------------------------
2231 static BYTE GetOffsetOfCurrentFrame()
2233 LIMITED_METHOD_CONTRACT;
2234 size_t ofs = offsetof(class Thread, m_pFrame);
2235 _ASSERTE(FitsInI1(ofs));
2239 static BYTE GetOffsetOfState()
2241 LIMITED_METHOD_CONTRACT;
2242 size_t ofs = offsetof(class Thread, m_State);
2243 _ASSERTE(FitsInI1(ofs));
2247 static BYTE GetOffsetOfGCFlag()
2249 LIMITED_METHOD_CONTRACT;
2250 size_t ofs = offsetof(class Thread, m_fPreemptiveGCDisabled);
2251 _ASSERTE(FitsInI1(ofs));
2255 static void StaticDisablePreemptiveGC( Thread *pThread)
2257 WRAPPER_NO_CONTRACT;
2258 _ASSERTE(pThread != NULL);
2259 pThread->DisablePreemptiveGC();
2262 static void StaticEnablePreemptiveGC( Thread *pThread)
2264 WRAPPER_NO_CONTRACT;
2265 _ASSERTE(pThread != NULL);
2266 pThread->EnablePreemptiveGC();
2270 //---------------------------------------------------------------
2271 // Expose offset of the app domain word for the interop and delegate callback
2272 //---------------------------------------------------------------
2273 static SIZE_T GetOffsetOfAppDomain()
2275 LIMITED_METHOD_CONTRACT;
2276 return (SIZE_T)(offsetof(class Thread, m_pDomain));
2279 //---------------------------------------------------------------
2280 // Expose offset of the place for storing the filter context for the debugger.
2281 //---------------------------------------------------------------
2282 static SIZE_T GetOffsetOfDebuggerFilterContext()
2284 LIMITED_METHOD_CONTRACT;
2285 return (SIZE_T)(offsetof(class Thread, m_debuggerFilterContext));
2288 //---------------------------------------------------------------
2289 // Expose offset of the debugger cant stop count for the debugger
2290 //---------------------------------------------------------------
2291 static SIZE_T GetOffsetOfCantStop()
2293 LIMITED_METHOD_CONTRACT;
2294 return (SIZE_T)(offsetof(class Thread, m_debuggerCantStop));
2297 //---------------------------------------------------------------
2298 // Expose offset of m_StateNC
2299 //---------------------------------------------------------------
2300 static SIZE_T GetOffsetOfStateNC()
2302 LIMITED_METHOD_CONTRACT;
2303 return (SIZE_T)(offsetof(class Thread, m_StateNC));
2306 //---------------------------------------------------------------
2307 // Last exception to be thrown
2308 //---------------------------------------------------------------
2309 inline void SetThrowable(OBJECTREF pThrowable
2310 DEBUG_ARG(ThreadExceptionState::SetThrowableErrorChecking stecFlags = ThreadExceptionState::STEC_All));
2312 OBJECTREF GetThrowable()
2314 WRAPPER_NO_CONTRACT;
2316 return m_ExceptionState.GetThrowable();
2319 // An unmnaged thread can check if a managed is processing an exception
2322 LIMITED_METHOD_CONTRACT;
2323 OBJECTHANDLE pThrowable = m_ExceptionState.GetThrowableAsHandle();
2324 return pThrowable && *PTR_UNCHECKED_OBJECTREF(pThrowable);
2327 OBJECTHANDLE GetThrowableAsHandle()
2329 LIMITED_METHOD_CONTRACT;
2330 return m_ExceptionState.GetThrowableAsHandle();
2333 // special null test (for use when we're in the wrong GC mode)
2334 BOOL IsThrowableNull()
2336 WRAPPER_NO_CONTRACT;
2337 return IsHandleNullUnchecked(m_ExceptionState.GetThrowableAsHandle());
2340 BOOL IsExceptionInProgress()
2343 LIMITED_METHOD_CONTRACT;
2344 return m_ExceptionState.IsExceptionInProgress();
2348 void SyncManagedExceptionState(bool fIsDebuggerThread);
2350 //---------------------------------------------------------------
2351 // Per-thread information used by handler
2352 //---------------------------------------------------------------
2353 // exception handling info stored in thread
2354 // can't allocate this as needed because can't make exception-handling depend upon memory allocation
2356 PTR_ThreadExceptionState GetExceptionState()
2358 LIMITED_METHOD_CONTRACT;
2361 return PTR_ThreadExceptionState(PTR_HOST_MEMBER_TADDR(Thread, this, m_ExceptionState));
2366 void DECLSPEC_NORETURN RaiseCrossContextException(Exception* pEx, ContextTransitionFrame* pFrame);
2368 // ClearContext are to be called only during shutdown
2369 void ClearContext();
2372 // don't ever call these except when creating thread!!!!!
2376 PTR_AppDomain GetDomain(INDEBUG(BOOL fMidContextTransitionOK = FALSE))
2378 LIMITED_METHOD_DAC_CONTRACT;
2383 //---------------------------------------------------------------
2384 // Track use of the thread block. See the general comments on
2385 // thread destruction in threads.cpp, for details.
2386 //---------------------------------------------------------------
2387 int IncExternalCount();
2388 int DecExternalCount(BOOL holdingLock);
2391 //---------------------------------------------------------------
2392 // !!!! THESE ARE NOT SAFE FOR GENERAL USE !!!!
2393 // IncExternalCountDANGEROUSProfilerOnly()
2394 // DecExternalCountDANGEROUSProfilerOnly()
2395 // Currently only the profiler API should be using these
2396 // functions, because the profiler is responsible for ensuring
2397 // that the thread exists, undestroyed, before operating on it.
2398 // All other clients should use IncExternalCount/DecExternalCount
2400 //---------------------------------------------------------------
2401 int IncExternalCountDANGEROUSProfilerOnly()
2403 LIMITED_METHOD_CONTRACT;
2410 FastInterlockIncrement((LONG*)&m_ExternalRefCount);
2413 // This should never be called on a thread being destroyed
2414 _ASSERTE(cRefs != 1);
2419 int DecExternalCountDANGEROUSProfilerOnly()
2421 LIMITED_METHOD_CONTRACT;
2428 FastInterlockDecrement((LONG*)&m_ExternalRefCount);
2431 // This should never cause the last reference on the thread to be released
2432 _ASSERTE(cRefs != 0);
2437 // Get and Set the exposed System.Thread object which corresponds to
2438 // this thread. Also the thread handle and Id.
2439 OBJECTREF GetExposedObject();
2440 OBJECTREF GetExposedObjectRaw();
2441 void SetExposedObject(OBJECTREF exposed);
2442 OBJECTHANDLE GetExposedObjectHandleForDebugger()
2444 LIMITED_METHOD_CONTRACT;
2445 return m_ExposedObject;
2448 // Query whether the exposed object exists
2449 BOOL IsExposedObjectSet()
2458 return (ObjectFromHandle(m_ExposedObject) != NULL) ;
2461 void GetSynchronizationContext(OBJECTREF *pSyncContextObj)
2468 PRECONDITION(CheckPointer(pSyncContextObj));
2472 *pSyncContextObj = NULL;
2474 THREADBASEREF ExposedThreadObj = (THREADBASEREF)GetExposedObjectRaw();
2475 if (ExposedThreadObj != NULL)
2476 *pSyncContextObj = ExposedThreadObj->GetSynchronizationContext();
2480 // When we create a managed thread, the thread is suspended. We call StartThread to get
2481 // the thread start.
2482 DWORD StartThread();
2484 // The result of attempting to OS-suspend an EE thread.
2485 enum SuspendThreadResult
2487 // We successfully suspended the thread. This is the only
2488 // case where the caller should subsequently call ResumeThread.
2491 // The underlying call to the operating system's SuspendThread
2492 // or GetThreadContext failed. This is usually taken to mean
2493 // that the OS thread has exited. (This can possibly also mean
2495 // that the suspension count exceeded the allowed maximum, but
2496 // Thread::SuspendThread asserts that does not happen.)
2499 // The thread handle is invalid. This means that the thread
2500 // is dead (or dying), or that the object has been created for
2501 // an exposed System.Thread that has not been started yet.
2502 STR_UnstartedOrDead,
2504 // The fOneTryOnly flag was set, and we managed to OS suspend the
2505 // thread, but we found that it had its m_dwForbidSuspendThread
2506 // flag set. If fOneTryOnly is not set, Thread::Suspend will
2507 // retry in this case.
2510 // Stress logging is turned on, but no stress log had been created
2511 // for the thread yet, and we failed to create one. This can mean
2512 // that either we are not allowed to call into the host, or we ran
2516 // The EE thread is currently switched out. This can only happen
2517 // if we are hosted and the host schedules EE threads on fibers.
2521 #if defined(FEATURE_HIJACK) && defined(PLATFORM_UNIX)
2522 bool InjectGcSuspension();
2523 #endif // FEATURE_HIJACK && PLATFORM_UNIX
2525 #ifndef DISABLE_THREADSUSPEND
2527 // Attempts to OS-suspend the thread, whichever GC mode it is in.
2529 // fOneTryOnly - If TRUE, report failure if the thread has its
2530 // m_dwForbidSuspendThread flag set. If FALSE, retry.
2531 // pdwSuspendCount - If non-NULL, will contain the return code
2532 // of the underlying OS SuspendThread call on success,
2533 // undefined on any kind of failure.
2535 // A SuspendThreadResult value indicating success or failure.
2536 SuspendThreadResult SuspendThread(BOOL fOneTryOnly = FALSE, DWORD *pdwSuspendCount = NULL);
2538 DWORD ResumeThread();
2540 #endif // DISABLE_THREADSUSPEND
2542 int GetThreadPriority();
2543 BOOL SetThreadPriority(
2544 int nPriority // thread priority level
2547 DWORD Join(DWORD timeout, BOOL alertable);
2548 DWORD JoinEx(DWORD timeout, WaitMode mode);
2550 BOOL GetThreadContext(
2551 LPCONTEXT lpContext // context structure
2554 WRAPPER_NO_CONTRACT;
2555 return ::GetThreadContext (GetThreadHandle(), lpContext);
2558 #ifndef DACCESS_COMPILE
2559 BOOL SetThreadContext(
2560 CONST CONTEXT *lpContext // context structure
2563 WRAPPER_NO_CONTRACT;
2564 return ::SetThreadContext (GetThreadHandle(), lpContext);
2568 BOOL HasValidThreadHandle ()
2570 WRAPPER_NO_CONTRACT;
2571 return GetThreadHandle() != INVALID_HANDLE_VALUE;
2576 LIMITED_METHOD_DAC_CONTRACT;
2577 _ASSERTE(m_ThreadId != UNINITIALIZED_THREADID);
2581 DWORD GetOSThreadId()
2583 LIMITED_METHOD_CONTRACT;
2585 #ifndef DACCESS_COMPILE
2586 _ASSERTE (m_OSThreadId != 0xbaadf00d);
2587 #endif // !DACCESS_COMPILE
2588 return m_OSThreadId;
2591 // This API is to be used for Debugger only.
2592 // We need to be able to return the true value of m_OSThreadId.
2594 DWORD GetOSThreadIdForDebugger()
2597 LIMITED_METHOD_CONTRACT;
2598 return m_OSThreadId;
2601 BOOL IsThreadPoolThread()
2603 LIMITED_METHOD_CONTRACT;
2604 return m_State & (Thread::TS_TPWorkerThread | Thread::TS_CompletionPortThread);
2607 // public suspend functions. System ones are internal, like for GC. User ones
2608 // correspond to suspend/resume calls on the exposed System.Thread object.
2609 static bool SysStartSuspendForDebug(AppDomain *pAppDomain);
2610 static bool SysSweepThreadsForDebug(bool forceSync);
2611 static void SysResumeFromDebug(AppDomain *pAppDomain);
2613 void UserSleep(INT32 time);
2615 // AD unload uses ThreadAbort support. We need to distinguish pure ThreadAbort and AD unload
2617 enum ThreadAbortRequester
2619 TAR_Thread = 0x00000001, // Request by Thread
2620 TAR_FuncEval = 0x00000004, // Request by Func-Eval
2621 TAR_ALL = 0xFFFFFFFF,
2627 // Bit mask for tracking which aborts came in and why.
2629 enum ThreadAbortInfo
2631 TAI_ThreadAbort = 0x00000001,
2632 TAI_ThreadRudeAbort = 0x00000004,
2633 TAI_FuncEvalAbort = 0x00000040,
2634 TAI_FuncEvalRudeAbort = 0x00000100,
2637 static const DWORD TAI_AnySafeAbort = (TAI_ThreadAbort |
2641 static const DWORD TAI_AnyRudeAbort = (TAI_ThreadRudeAbort |
2642 TAI_FuncEvalRudeAbort
2645 static const DWORD TAI_AnyFuncEvalAbort = (TAI_FuncEvalAbort |
2646 TAI_FuncEvalRudeAbort
2650 // Specifies type of thread abort.
2653 ULONGLONG m_AbortEndTime;
2654 ULONGLONG m_RudeAbortEndTime;
2655 BOOL m_fRudeAbortInitiated;
2656 LONG m_AbortController;
2658 static ULONGLONG s_NextSelfAbortEndTime;
2660 void SetRudeAbortEndTimeFromEEPolicy();
2662 // This is a spin lock to serialize setting/resetting of AbortType and AbortRequest.
2663 LONG m_AbortRequestLock;
2665 static void LockAbortRequest(Thread *pThread);
2666 static void UnlockAbortRequest(Thread *pThread);
2668 typedef Holder<Thread*, Thread::LockAbortRequest, Thread::UnlockAbortRequest> AbortRequestLockHolder;
2670 static void AcquireAbortControl(Thread *pThread)
2672 LIMITED_METHOD_CONTRACT;
2673 FastInterlockIncrement (&pThread->m_AbortController);
2676 static void ReleaseAbortControl(Thread *pThread)
2678 LIMITED_METHOD_CONTRACT;
2679 _ASSERTE (pThread->m_AbortController > 0);
2680 FastInterlockDecrement (&pThread->m_AbortController);
2683 typedef Holder<Thread*, Thread::AcquireAbortControl, Thread::ReleaseAbortControl> AbortControlHolder;
2687 BOOL m_fRudeAborted;
2688 DWORD m_dwAbortPoint;
2693 enum UserAbort_Client
2696 UAC_Host, // Called by host through IClrTask::Abort
2699 HRESULT UserAbort(ThreadAbortRequester requester,
2700 EEPolicy::ThreadAbortTypes abortType,
2702 UserAbort_Client client
2705 BOOL HandleJITCaseForAbort();
2707 void UserResetAbort(ThreadAbortRequester requester)
2709 InternalResetAbort(requester, FALSE);
2711 void EEResetAbort(ThreadAbortRequester requester)
2713 InternalResetAbort(requester, TRUE);
2717 void InternalResetAbort(ThreadAbortRequester requester, BOOL fResetRudeAbort);
2719 void SetAbortEndTime(ULONGLONG endTime, BOOL fRudeAbort);
2723 ULONGLONG GetAbortEndTime()
2725 WRAPPER_NO_CONTRACT;
2726 return IsRudeAbort()?m_RudeAbortEndTime:m_AbortEndTime;
2729 // We distinguish interrupting a thread between Thread.Interrupt and other usage.
2730 // For Thread.Interrupt usage, we will interrupt an alertable wait using the same
2731 // rule as ReadyForAbort. Wait in EH clause or CER region is not interrupted.
2732 // For other usage, we will try to Abort the thread.
2733 // If we can not do the operation, we will delay until next wait.
2734 enum ThreadInterruptMode
2736 TI_Interrupt = 0x00000001, // Requested by Thread.Interrupt
2737 TI_Abort = 0x00000002, // Requested by Thread.Abort or AppDomain.Unload
2741 BOOL ReadyForAsyncException();
2744 void UserInterrupt(ThreadInterruptMode mode);
2746 BOOL ReadyForAbort()
2748 return ReadyForAsyncException();
2752 BOOL IsFuncEvalAbort();
2754 #if defined(_TARGET_AMD64_) && defined(FEATURE_HIJACK)
2755 BOOL IsSafeToInjectThreadAbort(PTR_CONTEXT pContextToCheck);
2756 #endif // defined(_TARGET_AMD64_) && defined(FEATURE_HIJACK)
2758 inline BOOL IsAbortRequested()
2760 LIMITED_METHOD_CONTRACT;
2761 return (m_State & TS_AbortRequested);
2764 inline BOOL IsAbortInitiated()
2766 LIMITED_METHOD_CONTRACT;
2767 return (m_State & TS_AbortInitiated);
2770 inline BOOL IsRudeAbortInitiated()
2772 LIMITED_METHOD_CONTRACT;
2773 return IsAbortRequested() && m_fRudeAbortInitiated;
2776 inline void SetAbortInitiated()
2778 WRAPPER_NO_CONTRACT;
2779 if (IsRudeAbort()) {
2780 m_fRudeAbortInitiated = TRUE;
2782 FastInterlockOr((ULONG *)&m_State, TS_AbortInitiated);
2783 // The following should be factored better, but I'm looking for a minimal V1 change.
2784 ResetUserInterrupted();
2787 inline void ResetAbortInitiated()
2789 LIMITED_METHOD_CONTRACT;
2790 FastInterlockAnd((ULONG *)&m_State, ~TS_AbortInitiated);
2791 m_fRudeAbortInitiated = FALSE;
2794 inline void SetPreparingAbort()
2796 WRAPPER_NO_CONTRACT;
2797 SetThreadStateNC(TSNC_PreparingAbort);
2800 inline void ResetPreparingAbort()
2802 WRAPPER_NO_CONTRACT;
2803 ResetThreadStateNC(TSNC_PreparingAbort);
2807 inline static void SetPreparingAbortForHolder()
2809 GetThread()->SetPreparingAbort();
2811 inline static void ResetPreparingAbortForHolder()
2813 GetThread()->ResetPreparingAbort();
2815 typedef StateHolder<Thread::SetPreparingAbortForHolder, Thread::ResetPreparingAbortForHolder> PreparingAbortHolder;
2819 inline void SetIsCreatingTypeInitException()
2821 WRAPPER_NO_CONTRACT;
2822 SetThreadStateNC(TSNC_CreatingTypeInitException);
2825 inline void ResetIsCreatingTypeInitException()
2827 WRAPPER_NO_CONTRACT;
2828 ResetThreadStateNC(TSNC_CreatingTypeInitException);
2831 inline BOOL IsCreatingTypeInitException()
2833 WRAPPER_NO_CONTRACT;
2834 return HasThreadStateNC(TSNC_CreatingTypeInitException);
2838 void SetAbortRequestBit();
2840 void RemoveAbortRequestBit();
2843 void MarkThreadForAbort(ThreadAbortRequester requester, EEPolicy::ThreadAbortTypes abortType);
2844 void UnmarkThreadForAbort(ThreadAbortRequester requester, BOOL fForce = TRUE);
2846 static ULONGLONG GetNextSelfAbortEndTime()
2848 LIMITED_METHOD_CONTRACT;
2849 return s_NextSelfAbortEndTime;
2852 #if defined(FEATURE_HIJACK) && !defined(PLATFORM_UNIX)
2853 // Tricks for resuming threads from fully interruptible code with a ThreadStop.
2854 BOOL ResumeUnderControl(T_CONTEXT *pCtx);
2855 #endif // FEATURE_HIJACK && !PLATFORM_UNIX
2857 enum InducedThrowReason {
2858 InducedThreadStop = 1,
2859 InducedThreadRedirect = 2,
2860 InducedThreadRedirectAtEndOfCatch = 3,
2863 DWORD m_ThrewControlForThread; // flag that is set when the thread deliberately raises an exception for stop/abort
2865 inline DWORD ThrewControlForThread()
2867 LIMITED_METHOD_CONTRACT;
2868 return m_ThrewControlForThread;
2871 inline void SetThrowControlForThread(InducedThrowReason reason)
2873 LIMITED_METHOD_CONTRACT;
2874 m_ThrewControlForThread = reason;
2877 inline void ResetThrowControlForThread()
2879 LIMITED_METHOD_CONTRACT;
2880 m_ThrewControlForThread = 0;
2883 PTR_CONTEXT m_OSContext; // ptr to a Context structure used to record the OS specific ThreadContext for a thread
2884 // this is used for thread stop/abort and is intialized on demand
2886 PT_CONTEXT GetAbortContext ();
2888 // These will only ever be called from the debugger's helper
2891 // When a thread is being created after a debug suspension has
2892 // started, we get the event on the debugger helper thread. It
2893 // will turn around and call this to set the debug suspend pending
2894 // flag on the newly created flag, since it was missed by
2895 // SysStartSuspendForGC as it didn't exist when that function was
2897 void MarkForDebugSuspend();
2899 // When the debugger uses the trace flag to single step a thread,
2900 // it also calls this function to mark this info in the thread's
2901 // state. The out-of-process portion of the debugger will read the
2902 // thread's state for a variety of reasons, including looking for
2904 void MarkDebuggerIsStepping(bool onOff)
2906 WRAPPER_NO_CONTRACT;
2908 SetThreadStateNC(Thread::TSNC_DebuggerIsStepping);
2910 ResetThreadStateNC(Thread::TSNC_DebuggerIsStepping);
2914 // ARM doesn't currently support any reliable hardware mechanism for single-stepping. Instead we emulate
2915 // this in software. This support is used only by the debugger.
2917 ArmSingleStepper m_singleStepper;
2919 #ifndef DACCESS_COMPILE
2920 // Given the context with which this thread shall be resumed and the first WORD of the instruction that
2921 // should be executed next (this is not always the WORD under PC since the debugger uses this mechanism to
2922 // skip breakpoints written into the code), set the thread up to execute one instruction and then throw an
2923 // EXCEPTION_SINGLE_STEP. (In fact an EXCEPTION_BREAKPOINT will be thrown, but this is fixed up in our
2924 // first chance exception handler, see IsDebuggerFault in excep.cpp).
2925 void EnableSingleStep()
2927 m_singleStepper.Enable();
2930 void BypassWithSingleStep(DWORD ip, WORD opcode1, WORD opcode2)
2932 m_singleStepper.Bypass(ip, opcode1, opcode2);
2935 void DisableSingleStep()
2937 m_singleStepper.Disable();
2940 void ApplySingleStep(T_CONTEXT *pCtx)
2942 m_singleStepper.Apply(pCtx);
2945 bool IsSingleStepEnabled() const
2947 return m_singleStepper.IsEnabled();
2950 // Fixup code called by our vectored exception handler to complete the emulation of single stepping
2951 // initiated by EnableSingleStep above. Returns true if the exception was indeed encountered during
2953 bool HandleSingleStep(T_CONTEXT *pCtx, DWORD dwExceptionCode)
2955 return m_singleStepper.Fixup(pCtx, dwExceptionCode);
2957 #endif // !DACCESS_COMPILE
2958 #endif // _TARGET_ARM_
2962 PendingTypeLoadHolder* m_pPendingTypeLoad;
2966 #ifndef DACCESS_COMPILE
2967 PendingTypeLoadHolder* GetPendingTypeLoad()
2969 LIMITED_METHOD_CONTRACT;
2970 return m_pPendingTypeLoad;
2973 void SetPendingTypeLoad(PendingTypeLoadHolder* pPendingTypeLoad)
2975 LIMITED_METHOD_CONTRACT;
2976 m_pPendingTypeLoad = pPendingTypeLoad;
2980 #ifdef FEATURE_PREJIT
2984 ThreadLocalIBCInfo* m_pIBCInfo;
2988 #ifndef DACCESS_COMPILE
2990 ThreadLocalIBCInfo* GetIBCInfo()
2992 LIMITED_METHOD_CONTRACT;
2993 _ASSERTE(g_IBCLogger.InstrEnabled());
2997 void SetIBCInfo(ThreadLocalIBCInfo* pInfo)
2999 LIMITED_METHOD_CONTRACT;
3000 _ASSERTE(g_IBCLogger.InstrEnabled());
3006 WRAPPER_NO_CONTRACT;
3007 if (m_pIBCInfo != NULL)
3008 m_pIBCInfo->FlushDelayedCallbacks();
3011 #endif // #ifndef DACCESS_COMPILE
3013 #endif // #ifdef FEATURE_PREJIT
3015 // Indicate whether this thread should run in the background. Background threads
3016 // don't interfere with the EE shutting down. Whereas a running non-background
3017 // thread prevents us from shutting down (except through System.Exit(), of course)
3018 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
3019 void SetBackground(BOOL isBack, BOOL bRequiresTSL=TRUE);
3021 // When the thread starts running, make sure it is running in the correct apartment
3023 BOOL PrepareApartmentAndContext();
3025 #ifdef FEATURE_COMINTEROP_APARTMENT_SUPPORT
3026 // Retrieve the apartment state of the current thread. There are three possible
3027 // states: thread hosts an STA, thread is part of the MTA or thread state is
3028 // undecided. The last state may indicate that the apartment has not been set at
3029 // all (nobody has called CoInitializeEx) or that the EE does not know the
3030 // current state (EE has not called CoInitializeEx).
3031 enum ApartmentState { AS_InSTA, AS_InMTA, AS_Unknown };
3032 ApartmentState GetApartment();
3033 ApartmentState GetApartmentRare(Thread::ApartmentState as);
3034 ApartmentState GetExplicitApartment();
3036 // Sets the apartment state if it has not already been set and
3037 // returns the state.
3038 ApartmentState GetFinalApartment();
3040 // Attempt to set current thread's apartment state. The actual apartment state
3041 // achieved is returned and may differ from the input state if someone managed to
3042 // call CoInitializeEx on this thread first (note that calls to SetApartment made
3043 // before the thread has started are guaranteed to succeed).
3044 // The fFireMDAOnMismatch indicates if we should fire the apartment state probe
3045 // on an apartment state mismatch.
3046 ApartmentState SetApartment(ApartmentState state, BOOL fFireMDAOnMismatch);
3048 // when we get apartment tear-down notification,
3049 // we want reset the apartment state we cache on the thread
3050 VOID ResetApartment();
3051 #endif // FEATURE_COMINTEROP_APARTMENT_SUPPORT
3053 // Either perform WaitForSingleObject or MsgWaitForSingleObject as appropriate.
3054 DWORD DoAppropriateWait(int countHandles, HANDLE *handles, BOOL waitAll,
3055 DWORD millis, WaitMode mode,
3056 PendingSync *syncInfo = 0);
3058 DWORD DoAppropriateWait(AppropriateWaitFunc func, void *args, DWORD millis,
3059 WaitMode mode, PendingSync *syncInfo = 0);
3060 DWORD DoSignalAndWait(HANDLE *handles, DWORD millis, BOOL alertable,
3061 PendingSync *syncState = 0);
3063 void DoAppropriateWaitWorkerAlertableHelper(WaitMode mode);
3064 DWORD DoAppropriateWaitWorker(int countHandles, HANDLE *handles, BOOL waitAll,
3065 DWORD millis, WaitMode mode);
3066 DWORD DoAppropriateWaitWorker(AppropriateWaitFunc func, void *args,
3067 DWORD millis, WaitMode mode);
3068 DWORD DoSignalAndWaitWorker(HANDLE* pHandles, DWORD millis,BOOL alertable);
3069 DWORD DoAppropriateAptStateWait(int numWaiters, HANDLE* pHandles, BOOL bWaitAll, DWORD timeout, WaitMode mode);
3070 DWORD DoSyncContextWait(OBJECTREF *pSyncCtxObj, int countHandles, HANDLE *handles, BOOL waitAll, DWORD millis);
3073 //************************************************************************
3074 // Enumerate all frames.
3075 //************************************************************************
3077 /* Flags used for StackWalkFramesEx */
3079 // FUNCTIONSONLY excludes all functionless frames and all funclets
3080 #define FUNCTIONSONLY 0x0001
3082 // SKIPFUNCLETS includes functionless frames but excludes all funclets and everything between funclets and their parent methods
3083 #define SKIPFUNCLETS 0x0002
3085 #define POPFRAMES 0x0004
3087 /* use the following flag only if you REALLY know what you are doing !!! */
3088 #define QUICKUNWIND 0x0008 // do not restore all registers during unwind
3090 #define HANDLESKIPPEDFRAMES 0x0010 // temporary to handle skipped frames for appdomain unload
3091 // stack crawl. Eventually need to always do this but it
3092 // breaks the debugger right now.
3094 #define LIGHTUNWIND 0x0020 // allow using cache schema (see StackwalkCache class)
3096 #define NOTIFY_ON_U2M_TRANSITIONS 0x0040 // Provide a callback for native transitions.
3097 // This is only useful to a debugger trying to find native code
3100 #define DISABLE_MISSING_FRAME_DETECTION 0x0080 // disable detection of missing TransitionFrames
3102 // One thread may be walking the stack of another thread
3103 // If you need to use this, you may also need to put a call to CrawlFrame::CheckGSCookies
3104 // in your callback routine if it does any potentially time-consuming activity.
3105 #define ALLOW_ASYNC_STACK_WALK 0x0100
3107 #define THREAD_IS_SUSPENDED 0x0200 // Be careful not to cause deadlocks, this thread is suspended
3109 // Stackwalk tries to verify some objects, but it could be called in relocate phase of GC,
3110 // where objects could be in invalid state, this flag is to tell stackwalk to skip the validation
3111 #define ALLOW_INVALID_OBJECTS 0x0400
3113 // Caller has verified that the thread to be walked is in the middle of executing
3114 // JITd or NGENd code, according to the thread's current context (or seeded
3115 // context if one was provided). The caller ensures this when the stackwalk
3116 // is initiated by a profiler.
3117 #define THREAD_EXECUTING_MANAGED_CODE 0x0800
3119 // This stackwalk is due to the DoStackSnapshot profiler API
3120 #define PROFILER_DO_STACK_SNAPSHOT 0x1000
3122 // When this flag is set, the stackwalker does not automatically advance to the
3123 // faulting managed stack frame when it encounters an ExInfo. This should only be
3124 // necessary for native debuggers doing mixed-mode stackwalking.
3125 #define NOTIFY_ON_NO_FRAME_TRANSITIONS 0x2000
3127 // Normally, the stackwalker does not stop at the initial CONTEXT if the IP is in native code.
3128 // This flag changes the stackwalker behaviour. Currently this is only used in the debugger stackwalking
3130 #define NOTIFY_ON_INITIAL_NATIVE_CONTEXT 0x4000
3132 // Indicates that we are enumerating GC references and should follow appropriate
3133 // callback rules for parent methods vs funclets. Only supported on non-x86 platforms.
3135 // Refer to StackFrameIterator::Filter for detailed comments on this flag.
3136 #define GC_FUNCLET_REFERENCE_REPORTING 0x8000
3138 // Stackwalking normally checks GS cookies on the fly, but there are cases in which the JIT reports
3139 // incorrect epilog information. This causes the debugger to request stack walks in the epilog, checking
3140 // an now invalid cookie. This flag allows the debugger stack walks to disable GS cookie checking.
3142 // This is a workaround for the debugger stackwalking. In general, the stackwalker and CrawlFrame
3143 // may still execute GS cookie tracking/checking code paths.
3144 #define SKIP_GSCOOKIE_CHECK 0x10000
3146 StackWalkAction StackWalkFramesEx(
3147 PREGDISPLAY pRD, // virtual register set at crawl start
3148 PSTACKWALKFRAMESCALLBACK pCallback,
3151 PTR_Frame pStartFrame = PTR_NULL);
3154 // private helpers used by StackWalkFramesEx and StackFrameIterator
3155 StackWalkAction MakeStackwalkerCallback(CrawlFrame* pCF, PSTACKWALKFRAMESCALLBACK pCallback, VOID* pData DEBUG_ARG(UINT32 uLoopIteration));
3158 void DebugLogStackWalkInfo(CrawlFrame* pCF, __in_z LPCSTR pszTag, UINT32 uLoopIteration);
3163 StackWalkAction StackWalkFrames(
3164 PSTACKWALKFRAMESCALLBACK pCallback,
3167 PTR_Frame pStartFrame = PTR_NULL);
3169 bool InitRegDisplay(const PREGDISPLAY, const PT_CONTEXT, bool validContext);
3170 void FillRegDisplay(const PREGDISPLAY pRD, PT_CONTEXT pctx);
3172 #ifdef WIN64EXCEPTIONS
3173 static PCODE VirtualUnwindCallFrame(T_CONTEXT* pContext, T_KNONVOLATILE_CONTEXT_POINTERS* pContextPointers = NULL,
3174 EECodeInfo * pCodeInfo = NULL);
3175 static UINT_PTR VirtualUnwindCallFrame(PREGDISPLAY pRD, EECodeInfo * pCodeInfo = NULL);
3176 #ifndef DACCESS_COMPILE
3177 static PCODE VirtualUnwindLeafCallFrame(T_CONTEXT* pContext);
3178 static PCODE VirtualUnwindNonLeafCallFrame(T_CONTEXT* pContext, T_KNONVOLATILE_CONTEXT_POINTERS* pContextPointers = NULL,
3179 PT_RUNTIME_FUNCTION pFunctionEntry = NULL, UINT_PTR uImageBase = NULL);
3180 static UINT_PTR VirtualUnwindToFirstManagedCallFrame(T_CONTEXT* pContext);
3181 #endif // DACCESS_COMPILE
3182 #endif // WIN64EXCEPTIONS
3184 // During a <clinit>, this thread must not be asynchronously
3185 // stopped or interrupted. That would leave the class unavailable
3186 // and is therefore a security hole.
3187 static void IncPreventAsync()
3189 WRAPPER_NO_CONTRACT;
3190 Thread *pThread = GetThread();
3191 FastInterlockIncrement((LONG*)&pThread->m_PreventAsync);
3193 static void DecPreventAsync()
3195 WRAPPER_NO_CONTRACT;
3196 Thread *pThread = GetThread();
3197 FastInterlockDecrement((LONG*)&pThread->m_PreventAsync);
3200 bool IsAsyncPrevented()
3202 return m_PreventAsync != 0;
3205 typedef StateHolder<Thread::IncPreventAsync, Thread::DecPreventAsync> ThreadPreventAsyncHolder;
3207 // During a <clinit>, this thread must not be asynchronously
3208 // stopped or interrupted. That would leave the class unavailable
3209 // and is therefore a security hole.
3210 static void IncPreventAbort()
3212 WRAPPER_NO_CONTRACT;
3213 Thread *pThread = GetThread();
3214 FastInterlockIncrement((LONG*)&pThread->m_PreventAbort);
3216 static void DecPreventAbort()
3218 WRAPPER_NO_CONTRACT;
3219 Thread *pThread = GetThread();
3220 FastInterlockDecrement((LONG*)&pThread->m_PreventAbort);
3223 BOOL IsAbortPrevented()
3225 return m_PreventAbort != 0;
3228 typedef StateHolder<Thread::IncPreventAbort, Thread::DecPreventAbort> ThreadPreventAbortHolder;
3230 // The ThreadStore manages a list of all the threads in the system. I
3231 // can't figure out how to expand the ThreadList template type without
3232 // making m_Link public.
3235 // For N/Direct calls with the "setLastError" bit, this field stores
3236 // the errorcode from that call.
3237 DWORD m_dwLastError;
3239 #ifdef FEATURE_INTERPRETER
3240 // When we're interpreting IL stubs for N/Direct calls with the "setLastError" bit,
3241 // the interpretation will trash the last error before we get to the call to "SetLastError".
3242 // Therefore, we record it here immediately after the calli, and treat "SetLastError" as an
3243 // intrinsic that transfers the value stored here into the field above.
3244 DWORD m_dwLastErrorInterp;
3247 // Debugger per-thread flag for enabling notification on "manual"
3248 // method calls, for stepping logic
3249 void IncrementTraceCallCount();
3250 void DecrementTraceCallCount();
3252 FORCEINLINE int IsTraceCall()
3254 LIMITED_METHOD_CONTRACT;
3255 return m_TraceCallCount;
3258 // Functions to get/set culture information for current thread.
3259 static OBJECTREF GetCulture(BOOL bUICulture);
3260 static void SetCulture(OBJECTREF *CultureObj, BOOL bUICulture);
3263 #if defined(FEATURE_HIJACK) && !defined(PLATFORM_UNIX)
3264 // Used in suspension code to redirect a thread at a HandledJITCase
3265 BOOL RedirectThreadAtHandledJITCase(PFN_REDIRECTTARGET pTgt);
3266 BOOL RedirectCurrentThreadAtHandledJITCase(PFN_REDIRECTTARGET pTgt, T_CONTEXT *pCurrentThreadCtx);
3268 // Will Redirect the thread using RedirectThreadAtHandledJITCase if necessary
3269 BOOL CheckForAndDoRedirect(PFN_REDIRECTTARGET pRedirectTarget);
3270 BOOL CheckForAndDoRedirectForDbg();
3271 BOOL CheckForAndDoRedirectForGC();
3272 BOOL CheckForAndDoRedirectForUserSuspend();
3274 // Exception handling must be very aware of redirection, so we provide a helper
3275 // to identifying redirection targets
3276 static BOOL IsAddrOfRedirectFunc(void * pFuncAddr);
3278 #if defined(HAVE_GCCOVER) && defined(USE_REDIRECT_FOR_GCSTRESS)
3280 BOOL CheckForAndDoRedirectForGCStress (T_CONTEXT *pCurrentThreadCtx);
3282 bool m_fPreemptiveGCDisabledForGCStress;
3283 #endif // HAVE_GCCOVER && USE_REDIRECT_FOR_GCSTRESS
3284 #endif // FEATURE_HIJACK && !PLATFORM_UNIX
3288 #ifndef DACCESS_COMPILE
3289 // These re-calculate the proper value on each call for the currently executing thread. Use GetCachedStackLimit
3290 // and GetCachedStackBase for the cached values on this Thread.
3291 static void * GetStackLowerBound();
3292 static void * GetStackUpperBound();
3295 enum SetStackLimitScope { fAll, fAllowableOnly };
3296 BOOL SetStackLimits(SetStackLimitScope scope);
3298 // These access the stack base and limit values for this thread. (They are cached during InitThread.) The
3299 // "stack base" is the "upper bound", i.e., where the stack starts growing from. (Main's call frame is at the
3300 // upper bound.) The "stack limit" is the "lower bound", i.e., how far the stack can grow down to.
3301 // The "stack sufficient execution limit" is used by EnsureSufficientExecutionStack() to limit how much stack
3302 // should remain to execute the average Framework method.
3303 PTR_VOID GetCachedStackBase() {LIMITED_METHOD_DAC_CONTRACT; return m_CacheStackBase; }
3304 PTR_VOID GetCachedStackLimit() {LIMITED_METHOD_DAC_CONTRACT; return m_CacheStackLimit;}
3305 UINT_PTR GetCachedStackSufficientExecutionLimit() {LIMITED_METHOD_DAC_CONTRACT; return m_CacheStackSufficientExecutionLimit;}
3308 // Access the base and limit of the stack. (I.e. the memory ranges that the thread has reserved for its stack).
3310 // Note that the base is at a higher address than the limit, since the stack grows downwards.
3312 // Note that we generally access the stack of the thread we are crawling, which is cached in the ScanContext.
3313 PTR_VOID m_CacheStackBase;
3314 PTR_VOID m_CacheStackLimit;
3315 UINT_PTR m_CacheStackSufficientExecutionLimit;
3317 #define HARD_GUARD_REGION_SIZE GetOsPageSize()
3321 static HRESULT CLRSetThreadStackGuarantee(SetThreadStackGuaranteeScope fScope = STSGuarantee_OnlyIfEnabled);
3323 // try to turn a page into a guard page
3324 static BOOL MarkPageAsGuard(UINT_PTR uGuardPageBase);
3326 // scan a region for a guard page
3327 static BOOL DoesRegionContainGuardPage(UINT_PTR uLowAddress, UINT_PTR uHighAddress);
3329 // Every stack has a single reserved page at its limit that we call the 'hard guard page'. This page is never
3330 // committed, and access to it after a stack overflow will terminate the thread.
3331 #define HARD_GUARD_REGION_SIZE GetOsPageSize()
3332 #define SIZEOF_DEFAULT_STACK_GUARANTEE 1 * GetOsPageSize()
3335 // This will return the last stack address that one could write to before a stack overflow.
3336 static UINT_PTR GetLastNormalStackAddress(UINT_PTR stackBase);
3337 UINT_PTR GetLastNormalStackAddress();
3339 UINT_PTR GetLastAllowableStackAddress()
3341 return m_LastAllowableStackAddress;
3344 UINT_PTR GetProbeLimit()
3346 return m_ProbeLimit;
3349 void ResetStackLimits()
3358 if (!IsSetThreadStackGuaranteeInUse())
3362 SetStackLimits(fAllowableOnly);
3365 BOOL IsSPBeyondLimit();
3367 INDEBUG(static void DebugLogStackMBIs());
3369 #if defined(_DEBUG_IMPL) && !defined(DACCESS_COMPILE)
3370 // Verify that the cached stack base is for the current thread.
3371 BOOL HasRightCacheStackBase()
3373 WRAPPER_NO_CONTRACT;
3374 return m_CacheStackBase == GetStackUpperBound();
3379 static BOOL UniqueStack(void* startLoc = 0);
3381 BOOL IsAddressInStack (PTR_VOID addr) const
3383 LIMITED_METHOD_DAC_CONTRACT;
3384 _ASSERTE(m_CacheStackBase != NULL);
3385 _ASSERTE(m_CacheStackLimit != NULL);
3386 _ASSERTE(m_CacheStackLimit < m_CacheStackBase);
3387 return m_CacheStackLimit < addr && addr <= m_CacheStackBase;
3390 static BOOL IsAddressInCurrentStack (PTR_VOID addr)
3392 LIMITED_METHOD_DAC_CONTRACT;
3393 Thread* currentThread = GetThread();
3394 if (currentThread == NULL)
3399 PTR_VOID sp = dac_cast<PTR_VOID>(GetCurrentSP());
3400 _ASSERTE(currentThread->m_CacheStackBase != NULL);
3401 _ASSERTE(sp < currentThread->m_CacheStackBase);
3402 return sp < addr && addr <= currentThread->m_CacheStackBase;
3405 // DetermineIfGuardPagePresent returns TRUE if the thread's stack contains a proper guard page. This function
3406 // makes a physical check of the stack, rather than relying on whether or not the CLR is currently processing a
3407 // stack overflow exception.
3408 BOOL DetermineIfGuardPagePresent();
3410 // Returns the amount of stack available after an SO but before the OS rips the process.
3411 static UINT_PTR GetStackGuarantee();
3413 // RestoreGuardPage will replace the guard page on this thread's stack. The assumption is that it was removed
3414 // by the OS due to a stack overflow exception. This function requires that you know that you have enough stack
3415 // space to restore the guard page, so make sure you know what you're doing when you decide to call this.
3416 VOID RestoreGuardPage();
3418 #if defined(FEATURE_HIJACK) && !defined(PLATFORM_UNIX)
3420 // Redirecting of threads in managed code at suspension
3422 enum RedirectReason {
3423 RedirectReason_GCSuspension,
3424 RedirectReason_DebugSuspension,
3425 RedirectReason_UserSuspension,
3426 #if defined(HAVE_GCCOVER) && defined(USE_REDIRECT_FOR_GCSTRESS) // GCCOVER
3427 RedirectReason_GCStress,
3428 #endif // HAVE_GCCOVER && USE_REDIRECT_FOR_GCSTRESS
3430 static void __stdcall RedirectedHandledJITCase(RedirectReason reason);
3431 static void __stdcall RedirectedHandledJITCaseForDbgThreadControl();
3432 static void __stdcall RedirectedHandledJITCaseForGCThreadControl();
3433 static void __stdcall RedirectedHandledJITCaseForUserSuspend();
3434 #if defined(HAVE_GCCOVER) && defined(USE_REDIRECT_FOR_GCSTRESS) // GCCOVER
3435 static void __stdcall RedirectedHandledJITCaseForGCStress();
3436 #endif // defined(HAVE_GCCOVER) && USE_REDIRECT_FOR_GCSTRESS
3438 friend void CPFH_AdjustContextForThreadSuspensionRace(T_CONTEXT *pContext, Thread *pThread);
3439 #endif // FEATURE_HIJACK && !PLATFORM_UNIX
3442 //-------------------------------------------------------------
3443 // Waiting & Synchronization
3444 //-------------------------------------------------------------
3446 // For suspends. The thread waits on this event. A client sets the event to cause
3447 // the thread to resume.
3448 void WaitSuspendEvents(BOOL fDoWait = TRUE);
3449 BOOL WaitSuspendEventsHelper(void);
3451 // Helpers to ensure that the bits for suspension and the number of active
3452 // traps remain coordinated.
3453 void MarkForSuspension(ULONG bit);
3454 void UnmarkForSuspension(ULONG bit);
3456 void SetupForSuspension(ULONG bit)
3458 WRAPPER_NO_CONTRACT;
3460 // CoreCLR does not support user-requested thread suspension
3461 _ASSERTE(!(bit & TS_UserSuspendPending));
3464 if (bit & TS_DebugSuspendPending) {
3465 m_DebugSuspendEvent.Reset();
3469 void ReleaseFromSuspension(ULONG bit)
3471 WRAPPER_NO_CONTRACT;
3473 UnmarkForSuspension(~bit);
3476 // If the thread is set free, mark it as not-suspended now
3478 ThreadState oldState = m_State;
3480 // CoreCLR does not support user-requested thread suspension
3481 _ASSERTE(!(oldState & TS_UserSuspendPending));
3483 while ((oldState & (TS_UserSuspendPending | TS_DebugSuspendPending)) == 0)
3485 // CoreCLR does not support user-requested thread suspension
3486 _ASSERTE(!(oldState & TS_UserSuspendPending));
3489 // Construct the destination state we desire - all suspension bits turned off.
3491 ThreadState newState = (ThreadState)(oldState & ~(TS_UserSuspendPending |
3492 TS_DebugSuspendPending |
3495 if (FastInterlockCompareExchange((LONG *)&m_State, newState, oldState) == (LONG)oldState)
3501 // The state changed underneath us, refresh it and try again.
3506 // CoreCLR does not support user-requested thread suspension
3507 _ASSERTE(!(bit & TS_UserSuspendPending));
3509 if (bit & TS_DebugSuspendPending) {
3510 m_DebugSuspendEvent.Set();
3516 FORCEINLINE void UnhijackThreadNoAlloc()
3518 #if defined(FEATURE_HIJACK) && !defined(DACCESS_COMPILE)
3519 if (m_State & TS_Hijacked)
3521 *m_ppvHJRetAddrPtr = m_pvHJRetAddr;
3522 FastInterlockAnd((ULONG *) &m_State, ~TS_Hijacked);
3527 void UnhijackThread();
3529 // Flags that may be passed to GetSafelyRedirectableThreadContext, to customize
3530 // which checks it should perform. This allows a subset of the context verification
3531 // logic used by HandledJITCase to be shared with other callers, such as profiler
3533 enum GetSafelyRedirectableThreadContextOptions
3535 // Perform the default thread context checks
3536 kDefaultChecks = 0x00000000,
3538 // Compares the thread context's IP against m_LastRedirectIP, and potentially
3539 // updates m_LastRedirectIP, when determining the safeness of the thread's
3540 // context. HandledJITCase will always set this flag.
3541 // This flag is ignored on non-x86 platforms, and also on x86 if the OS supports
3542 // trap frame reporting.
3543 kPerfomLastRedirectIPCheck = 0x00000001,
3545 // Use g_pDebugInterface->IsThreadContextInvalid() to see if breakpoints might
3546 // confuse the stack walker. HandledJITCase will always set this flag.
3547 kCheckDebuggerBreakpoints = 0x00000002,
3550 // Helper used by HandledJITCase and others who need an absolutely reliable
3551 // register context.
3552 BOOL GetSafelyRedirectableThreadContext(DWORD dwOptions, T_CONTEXT * pCtx, REGDISPLAY * pRD);
3555 #ifdef FEATURE_HIJACK
3556 void HijackThread(VOID *pvHijackAddr, ExecutionState *esb);
3558 VOID *m_pvHJRetAddr; // original return address (before hijack)
3559 VOID **m_ppvHJRetAddrPtr; // place we bashed a new return address
3560 MethodDesc *m_HijackedFunction; // remember what we hijacked
3562 #ifndef PLATFORM_UNIX
3563 BOOL HandledJITCase(BOOL ForTaskSwitchIn = FALSE);
3566 PCODE m_LastRedirectIP;
3568 #endif // _TARGET_X86_
3570 #endif // !PLATFORM_UNIX
3572 #endif // FEATURE_HIJACK
3574 DWORD m_Win32FaultAddress;
3575 DWORD m_Win32FaultCode;
3577 // Support for Wait/Notify
3578 BOOL Block(INT32 timeOut, PendingSync *syncInfo);
3579 void Wake(SyncBlock *psb);
3580 DWORD Wait(HANDLE *objs, int cntObjs, INT32 timeOut, PendingSync *syncInfo);
3581 DWORD Wait(CLREvent* pEvent, INT32 timeOut, PendingSync *syncInfo);
3583 // support for Thread.Interrupt() which breaks out of Waits, Sleeps, Joins
3584 LONG m_UserInterrupt;
3585 DWORD IsUserInterrupted()
3587 LIMITED_METHOD_CONTRACT;
3588 return m_UserInterrupt;
3590 void ResetUserInterrupted()
3592 LIMITED_METHOD_CONTRACT;
3593 FastInterlockExchange(&m_UserInterrupt, 0);
3596 void HandleThreadInterrupt();
3599 static void WINAPI UserInterruptAPC(ULONG_PTR ignore);
3601 #if defined(_DEBUG) && defined(TRACK_SYNC)
3603 // Each thread has a stack that tracks all enter and leave requests
3605 Dbg_TrackSync *m_pTrackSync;
3607 #endif // TRACK_SYNC
3610 #ifdef ENABLE_CONTRACTS_DATA
3611 struct ClrDebugState *m_pClrDebugState; // Pointer to ClrDebugState for quick access
3613 ULONG m_ulEnablePreemptiveGCCount;
3618 CLREvent m_DebugSuspendEvent;
3620 // For Object::Wait, Notify and NotifyAll, we use an Event inside the
3621 // thread and we queue the threads onto the SyncBlock of the object they
3623 CLREvent m_EventWait;
3624 WaitEventLink m_WaitEventLink;
3625 WaitEventLink* WaitEventLinkForSyncBlock (SyncBlock *psb)
3627 LIMITED_METHOD_CONTRACT;
3628 WaitEventLink *walk = &m_WaitEventLink;
3629 while (walk->m_Next) {
3630 _ASSERTE (walk->m_Next->m_Thread == this);
3631 if ((SyncBlock*)(((DWORD_PTR)walk->m_Next->m_WaitSB) & ~1)== psb) {
3634 walk = walk->m_Next;
3639 // Access to thread handle and ThreadId.
3640 HANDLE GetThreadHandle()
3642 LIMITED_METHOD_CONTRACT;
3643 #if defined(_DEBUG) && !defined(DACCESS_COMPILE)
3645 CounterHolder handleHolder(&m_dwThreadHandleBeingUsed);
3646 HANDLE handle = m_ThreadHandle;
3647 _ASSERTE ( handle == INVALID_HANDLE_VALUE
3648 || handle == SWITCHOUT_HANDLE_VALUE
3649 || m_OSThreadId == 0
3650 || m_OSThreadId == 0xbaadf00d
3651 || ::MatchThreadHandleToOsId(handle, m_OSThreadId) );
3655 DACCOP_IGNORE(FieldAccess, "Treated as raw address, no marshaling is necessary");
3656 return m_ThreadHandle;
3659 void SetThreadHandle(HANDLE h)
3661 LIMITED_METHOD_CONTRACT;
3663 _ASSERTE ( h == INVALID_HANDLE_VALUE
3664 || h == SWITCHOUT_HANDLE_VALUE
3665 || m_OSThreadId == 0
3666 || m_OSThreadId == 0xbaadf00d
3667 || ::MatchThreadHandleToOsId(h, m_OSThreadId) );
3669 FastInterlockExchangePointer(&m_ThreadHandle, h);
3672 // We maintain a correspondence between this object, the ThreadId and ThreadHandle
3673 // in Win32, and the exposed Thread object.
3674 HANDLE m_ThreadHandle;
3676 // <TODO> It would be nice to remove m_ThreadHandleForClose to simplify Thread.Join,
3677 // but at the moment that isn't possible without extensive work.
3678 // This handle is used by SwitchOut to store the old handle which may need to be closed
3679 // if we are the owner. The handle can't be closed before checking the external count
3680 // which we can't do in SwitchOut since that may require locking or switching threads.</TODO>
3681 HANDLE m_ThreadHandleForClose;
3682 HANDLE m_ThreadHandleForResume;
3683 BOOL m_WeOwnThreadHandle;
3686 BOOL CreateNewOSThread(SIZE_T stackSize, LPTHREAD_START_ROUTINE start, void *args);
3688 OBJECTHANDLE m_ExposedObject;
3689 OBJECTHANDLE m_StrongHndToExposedObject;
3691 DWORD m_Priority; // initialized to INVALID_THREAD_PRIORITY, set to actual priority when a
3692 // thread does a busy wait for GC, reset to INVALID_THREAD_PRIORITY after wait is over
3693 friend class NDirect; // Quick access to thread stub creation
3696 friend void DoGcStress (PT_CONTEXT regs, MethodDesc *pMD); // Needs to call UnhijackThread
3697 #endif // HAVE_GCCOVER
3699 ULONG m_ExternalRefCount;
3701 ULONG m_UnmanagedRefCount;
3703 LONG m_TraceCallCount;
3705 //-----------------------------------------------------------
3706 // Bytes promoted on this thread since the last GC?
3707 //-----------------------------------------------------------
3710 void SetHasPromotedBytes ();
3711 DWORD GetHasPromotedBytes ()
3713 LIMITED_METHOD_CONTRACT;
3718 //-----------------------------------------------------------
3719 // Last exception to be thrown.
3720 //-----------------------------------------------------------
3721 friend class EEDbgInterfaceImpl;
3724 // Stores the most recently thrown exception. We need to have a handle in case a GC occurs before
3725 // we catch so we don't lose the object. Having a static allows others to catch outside of COM+ w/o leaking
3726 // a handler and allows rethrow outside of COM+ too.
3727 // Differs from m_pThrowable in that it doesn't stack on nested exceptions.
3728 OBJECTHANDLE m_LastThrownObjectHandle; // Unsafe to use directly. Use accessors instead.
3730 // Indicates that the throwable in m_lastThrownObjectHandle should be treated as
3731 // unhandled. This occurs during fatal error and a few other early error conditions
3732 // before EH is fully set up.
3733 BOOL m_ltoIsUnhandled;
3735 friend void DECLSPEC_NORETURN EEPolicy::HandleFatalStackOverflow(EXCEPTION_POINTERS *pExceptionInfo, BOOL fSkipDebugger);
3739 BOOL IsLastThrownObjectNull() { WRAPPER_NO_CONTRACT; return (m_LastThrownObjectHandle == NULL); }
3741 OBJECTREF LastThrownObject()
3743 WRAPPER_NO_CONTRACT;
3745 if (m_LastThrownObjectHandle == NULL)
3751 // We only have a handle if we have an object to keep in it.
3752 _ASSERTE(ObjectFromHandle(m_LastThrownObjectHandle) != NULL);
3753 return ObjectFromHandle(m_LastThrownObjectHandle);
3757 OBJECTHANDLE LastThrownObjectHandle()
3759 LIMITED_METHOD_DAC_CONTRACT;
3761 return m_LastThrownObjectHandle;
3764 void SetLastThrownObject(OBJECTREF throwable, BOOL isUnhandled = FALSE);
3765 void SetSOForLastThrownObject();
3766 OBJECTREF SafeSetLastThrownObject(OBJECTREF throwable);
3768 // Inidcates that the last thrown object is now treated as unhandled
3769 void MarkLastThrownObjectUnhandled()
3771 LIMITED_METHOD_CONTRACT;
3772 m_ltoIsUnhandled = TRUE;
3775 // TRUE if the throwable in LTO should be treated as unhandled
3776 BOOL IsLastThrownObjectUnhandled()
3778 LIMITED_METHOD_DAC_CONTRACT;
3779 return m_ltoIsUnhandled;
3782 void SafeUpdateLastThrownObject(void);
3783 OBJECTREF SafeSetThrowables(OBJECTREF pThrowable
3784 DEBUG_ARG(ThreadExceptionState::SetThrowableErrorChecking stecFlags = ThreadExceptionState::STEC_All),
3785 BOOL isUnhandled = FALSE);
3787 bool IsLastThrownObjectStackOverflowException()
3789 LIMITED_METHOD_CONTRACT;
3790 CONSISTENCY_CHECK(NULL != g_pPreallocatedStackOverflowException);
3792 return (m_LastThrownObjectHandle == g_pPreallocatedStackOverflowException);
3795 // get the current notification (if any) from this thread
3796 OBJECTHANDLE GetThreadCurrNotification();
3798 // set the current notification on this thread
3799 void SetThreadCurrNotification(OBJECTHANDLE handle);
3801 // clear the current notification (if any) from this thread
3802 void ClearThreadCurrNotification();
3805 void SetLastThrownObjectHandle(OBJECTHANDLE h);
3807 ThreadExceptionState m_ExceptionState;
3809 //-----------------------------------------------------------
3810 // For stack probing. These are the last allowable addresses that a thread
3811 // can touch. Going beyond is a stack overflow. The ProbeLimit will be
3812 // set based on whether SO probing is enabled. The LastAllowableAddress
3813 // will always represent the true stack limit.
3814 //-----------------------------------------------------------
3815 UINT_PTR m_ProbeLimit;
3817 UINT_PTR m_LastAllowableStackAddress;
3820 //---------------------------------------------------------------
3821 // m_debuggerFilterContext holds the thread's "filter context" for the
3822 // debugger. This filter context is used by the debugger to seed
3823 // stack walks on the thread.
3824 //---------------------------------------------------------------
3825 PTR_CONTEXT m_debuggerFilterContext;
3827 //---------------------------------------------------------------
3828 // m_profilerFilterContext holds an additional context for the
3829 // case when a (sampling) profiler wishes to hijack the thread
3830 // and do a stack walk on the same thread.
3831 //---------------------------------------------------------------
3832 T_CONTEXT *m_pProfilerFilterContext;
3834 //---------------------------------------------------------------
3835 // m_hijackLock holds a BOOL that is used for mutual exclusion
3836 // between profiler stack walks and thread hijacks (bashing
3837 // return addresses on the stack)
3838 //---------------------------------------------------------------
3839 Volatile<LONG> m_hijackLock;
3840 //---------------------------------------------------------------
3841 // m_debuggerCantStop holds a count of entries into "can't stop"
3842 // areas that the Interop Debugging Services must know about.
3843 //---------------------------------------------------------------
3844 DWORD m_debuggerCantStop;
3846 //---------------------------------------------------------------
3847 // The current custom notification data object (or NULL if none
3849 //---------------------------------------------------------------
3850 OBJECTHANDLE m_hCurrNotification;
3852 //---------------------------------------------------------------
3853 // For Interop-Debugging; track if a thread is hijacked.
3854 //---------------------------------------------------------------
3855 BOOL m_fInteropDebuggingHijacked;
3857 //---------------------------------------------------------------
3858 // Bitmask to remember per-thread state useful for the profiler API. See
3859 // COR_PRF_CALLBACKSTATE_* flags in clr\src\inc\ProfilePriv.h for bit values.
3860 //---------------------------------------------------------------
3861 DWORD m_profilerCallbackState;
3863 #if defined(FEATURE_PROFAPI_ATTACH_DETACH) || defined(DATA_PROFAPI_ATTACH_DETACH)
3864 //---------------------------------------------------------------
3865 // m_dwProfilerEvacuationCounter keeps track of how many profiler
3866 // callback calls remain on the stack
3867 //---------------------------------------------------------------
3869 // See code:ProfilingAPIUtility::InitializeProfiling#LoadUnloadCallbackSynchronization.
3870 Volatile<DWORD> m_dwProfilerEvacuationCounter;
3871 #endif // defined(FEATURE_PROFAPI_ATTACH_DETACH) || defined(DATA_PROFAPI_ATTACH_DETACH)
3874 Volatile<LONG> m_threadPoolCompletionCount;
3875 static Volatile<LONG> s_threadPoolCompletionCountOverflow; //counts completions for threads that have been destroyed.
3878 static void IncrementThreadPoolCompletionCount()
3880 LIMITED_METHOD_CONTRACT;
3881 Thread* pThread = GetThread();
3883 pThread->m_threadPoolCompletionCount++;
3885 FastInterlockIncrement(&s_threadPoolCompletionCountOverflow);
3888 static LONG GetTotalThreadPoolCompletionCount();
3892 //-------------------------------------------------------------------------
3893 // Support creation of assemblies in DllMain (see ceemain.cpp)
3894 //-------------------------------------------------------------------------
3895 DomainFile* m_pLoadingFile;
3900 void SetInteropDebuggingHijacked(BOOL f)
3902 LIMITED_METHOD_CONTRACT;
3903 m_fInteropDebuggingHijacked = f;
3905 BOOL GetInteropDebuggingHijacked()
3907 LIMITED_METHOD_CONTRACT;
3908 return m_fInteropDebuggingHijacked;
3911 void SetFilterContext(T_CONTEXT *pContext);
3912 T_CONTEXT *GetFilterContext(void);
3914 void SetProfilerFilterContext(T_CONTEXT *pContext)
3916 LIMITED_METHOD_CONTRACT;
3918 m_pProfilerFilterContext = pContext;
3921 // Used by the profiler API to find which flags have been set on the Thread object,
3922 // in order to authorize a profiler's call into ICorProfilerInfo(2).
3923 DWORD GetProfilerCallbackFullState()
3925 LIMITED_METHOD_CONTRACT;
3926 _ASSERTE(GetThread() == this);
3927 return m_profilerCallbackState;
3930 // Used by profiler API to set at once all callback flag bits stored on the Thread object.
3931 // Used to reinstate the previous state that had been modified by a previous call to
3932 // SetProfilerCallbackStateFlags
3933 void SetProfilerCallbackFullState(DWORD dwFullState)
3935 LIMITED_METHOD_CONTRACT;
3936 _ASSERTE(GetThread() == this);
3937 m_profilerCallbackState = dwFullState;
3940 // Used by profiler API to set individual callback flags on the Thread object.
3941 // Returns the previous state of all flags.
3942 DWORD SetProfilerCallbackStateFlags(DWORD dwFlags)
3944 LIMITED_METHOD_CONTRACT;
3945 _ASSERTE(GetThread() == this);
3947 DWORD dwRet = m_profilerCallbackState;
3948 m_profilerCallbackState |= dwFlags;
3952 T_CONTEXT *GetProfilerFilterContext(void)
3954 LIMITED_METHOD_CONTRACT;
3955 return m_pProfilerFilterContext;
3958 #ifdef FEATURE_PROFAPI_ATTACH_DETACH
3960 FORCEINLINE DWORD GetProfilerEvacuationCounter(void)
3962 LIMITED_METHOD_CONTRACT;
3963 return m_dwProfilerEvacuationCounter;
3966 FORCEINLINE void IncProfilerEvacuationCounter(void)
3968 LIMITED_METHOD_CONTRACT;
3969 m_dwProfilerEvacuationCounter++;
3970 _ASSERTE(m_dwProfilerEvacuationCounter != 0U);
3973 FORCEINLINE void DecProfilerEvacuationCounter(void)
3975 LIMITED_METHOD_CONTRACT;
3976 _ASSERTE(m_dwProfilerEvacuationCounter != 0U);
3977 m_dwProfilerEvacuationCounter--;
3980 #endif // FEATURE_PROFAPI_ATTACH_DETACH
3982 //-------------------------------------------------------------------------
3983 // The hijack lock enforces that a thread on which a profiler is currently
3984 // performing a stack walk cannot be hijacked.
3986 // Note that the hijack lock cannot be managed by the host (i.e., this
3987 // cannot be a Crst), because this could lead to a deadlock: YieldTask,
3988 // which is called by the host, may need to hijack, for which it would
3989 // need to take this lock - but since the host needs not be reentrant,
3990 // taking the lock cannot cause a call back into the host.
3991 //-------------------------------------------------------------------------
3992 static BOOL EnterHijackLock(Thread *pThread)
3994 LIMITED_METHOD_CONTRACT;
3996 return ::InterlockedCompareExchange(&(pThread->m_hijackLock), TRUE, FALSE) == FALSE;
3999 static void LeaveHijackLock(Thread *pThread)
4001 LIMITED_METHOD_CONTRACT;
4003 pThread->m_hijackLock = FALSE;
4006 typedef ConditionalStateHolder<Thread *, Thread::EnterHijackLock, Thread::LeaveHijackLock> HijackLockHolder;
4007 //-------------------------------------------------------------------------
4009 static bool ThreadsAtUnsafePlaces(void)
4011 LIMITED_METHOD_CONTRACT;
4013 return (m_threadsAtUnsafePlaces != (LONG)0);
4016 static void IncThreadsAtUnsafePlaces(void)
4018 LIMITED_METHOD_CONTRACT;
4019 InterlockedIncrement(&m_threadsAtUnsafePlaces);
4022 static void DecThreadsAtUnsafePlaces(void)
4024 LIMITED_METHOD_CONTRACT;
4025 InterlockedDecrement(&m_threadsAtUnsafePlaces);
4028 void PrepareForEERestart(BOOL SuspendSucceeded)
4030 WRAPPER_NO_CONTRACT;
4032 #ifdef FEATURE_HIJACK
4033 // Only unhijack the thread if the suspend succeeded. If it failed,
4034 // the target thread may currently be using the original stack
4035 // location of the return address for something else.
4036 if (SuspendSucceeded)
4038 #endif // FEATURE_HIJACK
4040 ResetThreadState(TS_GCSuspendPending);
4043 void SetDebugCantStop(bool fCantStop);
4044 bool GetDebugCantStop(void);
4046 static LPVOID GetStaticFieldAddress(FieldDesc *pFD);
4047 TADDR GetStaticFieldAddrNoCreate(FieldDesc *pFD);
4049 void SetLoadingFile(DomainFile *pFile)
4051 LIMITED_METHOD_CONTRACT;
4052 CONSISTENCY_CHECK(m_pLoadingFile == NULL);
4053 m_pLoadingFile = pFile;
4056 void ClearLoadingFile()
4058 LIMITED_METHOD_CONTRACT;
4059 m_pLoadingFile = NULL;
4062 DomainFile *GetLoadingFile()
4064 LIMITED_METHOD_CONTRACT;
4065 return m_pLoadingFile;
4069 static void LoadingFileRelease(Thread *pThread)
4071 WRAPPER_NO_CONTRACT;
4072 pThread->ClearLoadingFile();
4076 typedef Holder<Thread *, DoNothing, Thread::LoadingFileRelease> LoadingFileHolder;
4079 // Don't allow a thread to be asynchronously stopped or interrupted (e.g. because
4080 // it is performing a <clinit>)
4083 int m_nNestedMarshalingExceptions;
4084 BOOL IsMarshalingException()
4086 LIMITED_METHOD_CONTRACT;
4087 return (m_nNestedMarshalingExceptions != 0);
4089 int StartedMarshalingException()
4091 LIMITED_METHOD_CONTRACT;
4092 return m_nNestedMarshalingExceptions++;
4094 void FinishedMarshalingException()
4096 LIMITED_METHOD_CONTRACT;
4097 _ASSERTE(m_nNestedMarshalingExceptions > 0);
4098 m_nNestedMarshalingExceptions--;
4101 static LONG m_DebugWillSyncCount;
4103 // IP cache used by QueueCleanupIP.
4104 #define CLEANUP_IPS_PER_CHUNK 4
4106 IUnknown *m_Slots[CLEANUP_IPS_PER_CHUNK];
4108 CleanupIPs() {LIMITED_METHOD_CONTRACT; memset(this, 0, sizeof(*this)); }
4110 CleanupIPs m_CleanupIPs;
4112 #define BEGIN_FORBID_TYPELOAD() _ASSERTE_IMPL((GetThreadNULLOk() == 0) || ++GetThreadNULLOk()->m_ulForbidTypeLoad)
4113 #define END_FORBID_TYPELOAD() _ASSERTE_IMPL((GetThreadNULLOk() == 0) || GetThreadNULLOk()->m_ulForbidTypeLoad--)
4114 #define TRIGGERS_TYPELOAD() _ASSERTE_IMPL((GetThreadNULLOk() == 0) || !GetThreadNULLOk()->m_ulForbidTypeLoad)
4118 DWORD m_GCOnTransitionsOK;
4119 ULONG m_ulForbidTypeLoad;
4122 /****************************************************************************/
4123 /* The code below an attempt to catch people who don't protect GC pointers that
4124 they should be protecting. Basically, OBJECTREF's constructor, adds the slot
4125 to a table. When we protect a slot, we remove it from the table. When GC
4126 could happen, all entries in the table are marked as bad. When access to
4127 an OBJECTREF happens (the -> operator) we assert the slot is not bad. To make
4128 this fast, the table is not perfect (there can be collisions), but this should
4129 not cause false positives, but it may allow errors to go undetected */
4132 #define OBJREF_HASH_SHIFT_AMOUNT 3
4134 #define OBJREF_HASH_SHIFT_AMOUNT 2
4137 // For debugging, you may want to make this number very large, (8K)
4138 // should basically insure that no collisions happen
4139 #define OBJREF_TABSIZE 256
4140 DWORD_PTR dangerousObjRefs[OBJREF_TABSIZE]; // Really objectRefs with lower bit stolen
4141 // m_allObjRefEntriesBad is TRUE iff dangerousObjRefs are all marked as GC happened
4142 // It's purely a perf optimization for debug builds that'll help for the cases where we make 2 successive calls
4143 // to Thread::TriggersGC. In that case, the entire array doesn't need to be walked and marked, since we just did
4145 BOOL m_allObjRefEntriesBad;
4147 static DWORD_PTR OBJREF_HASH;
4148 // Remembers that this object ref pointer is 'alive' and unprotected (Bad if GC happens)
4149 static void ObjectRefNew(const OBJECTREF* ref) {
4150 WRAPPER_NO_CONTRACT;
4151 Thread * curThread = GetThreadNULLOk();
4152 if (curThread == 0) return;
4154 curThread->dangerousObjRefs[((size_t)ref >> OBJREF_HASH_SHIFT_AMOUNT) % OBJREF_HASH] = (size_t)ref;
4155 curThread->m_allObjRefEntriesBad = FALSE;
4158 static void ObjectRefAssign(const OBJECTREF* ref) {
4159 WRAPPER_NO_CONTRACT;
4160 Thread * curThread = GetThreadNULLOk();
4161 if (curThread == 0) return;
4163 curThread->m_allObjRefEntriesBad = FALSE;
4164 DWORD_PTR* slot = &curThread->dangerousObjRefs[((DWORD_PTR) ref >> OBJREF_HASH_SHIFT_AMOUNT) % OBJREF_HASH];
4165 if ((*slot & ~3) == (size_t) ref)
4166 *slot = *slot & ~1; // Don't care about GC's that have happened
4169 // If an object is protected, it can be removed from the 'dangerous table'
4170 static void ObjectRefProtected(const OBJECTREF* ref) {
4171 #ifdef USE_CHECKED_OBJECTREFS
4172 WRAPPER_NO_CONTRACT;
4173 _ASSERTE(IsObjRefValid(ref));
4174 Thread * curThread = GetThreadNULLOk();
4175 if (curThread == 0) return;
4177 curThread->m_allObjRefEntriesBad = FALSE;
4178 DWORD_PTR* slot = &curThread->dangerousObjRefs[((DWORD_PTR) ref >> OBJREF_HASH_SHIFT_AMOUNT) % OBJREF_HASH];
4179 if ((*slot & ~3) == (DWORD_PTR) ref)
4180 *slot = (size_t) ref | 2; // mark has being protected
4182 LIMITED_METHOD_CONTRACT;
4186 static bool IsObjRefValid(const OBJECTREF* ref) {
4187 WRAPPER_NO_CONTRACT;
4188 Thread * curThread = GetThreadNULLOk();
4189 if (curThread == 0) return(true);
4191 // If the object ref is NULL, we'll let it pass.
4192 if (*((DWORD_PTR*) ref) == 0)
4195 DWORD_PTR val = curThread->dangerousObjRefs[((DWORD_PTR) ref >> OBJREF_HASH_SHIFT_AMOUNT) % OBJREF_HASH];
4196 // if not in the table, or not the case that it was unprotected and GC happened, return true.
4197 if((val & ~3) != (size_t) ref || (val & 3) != 1)
4199 // If the pointer lives in the GC heap, than it is protected, and thus valid.
4200 if (dac_cast<TADDR>(g_lowest_address) <= val && val < dac_cast<TADDR>(g_highest_address))
4205 // Clears the table. Useful to do when crossing the managed-code - EE boundary
4206 // as you ususally only care about OBJECTREFS that have been created after that
4207 static void STDCALL ObjectRefFlush(Thread* thread);
4210 #ifdef ENABLE_CONTRACTS_IMPL
4211 // Marks all Objrefs in the table as bad (since they are unprotected)
4212 static void TriggersGC(Thread* thread) {
4213 WRAPPER_NO_CONTRACT;
4214 if ((GCViolation|BadDebugState) & (UINT_PTR)(GetViolationMask()))
4218 if (!thread->m_allObjRefEntriesBad)
4220 thread->m_allObjRefEntriesBad = TRUE;
4221 for(unsigned i = 0; i < OBJREF_TABSIZE; i++)
4222 thread->dangerousObjRefs[i] |= 1; // mark all slots as GC happened
4225 #endif // ENABLE_CONTRACTS_IMPL
4230 PTR_CONTEXT m_pSavedRedirectContext;
4232 BOOL IsContextSafeToRedirect(T_CONTEXT* pContext);
4235 PT_CONTEXT GetSavedRedirectContext()
4237 LIMITED_METHOD_CONTRACT;
4238 return (m_pSavedRedirectContext);
4241 #ifndef DACCESS_COMPILE
4242 void SetSavedRedirectContext(PT_CONTEXT pCtx)
4244 LIMITED_METHOD_CONTRACT;
4245 m_pSavedRedirectContext = pCtx;
4249 void EnsurePreallocatedContext();
4251 ThreadLocalBlock m_ThreadLocalBlock;
4253 // Called during AssemblyLoadContext teardown to clean up all structures
4254 // associated with thread statics for the specific Module
4255 void DeleteThreadStaticData(ModuleIndex index);
4259 // Called during Thread death to clean up all structures
4260 // associated with thread statics
4261 void DeleteThreadStaticData();
4265 // When we create an object, or create an OBJECTREF, or create an Interior Pointer, or enter EE from managed
4266 // code, we will set this flag.
4267 // Inside GCHeapUtilities::StressHeap, we only do GC if this flag is TRUE. Then we reset it to zero.
4268 BOOL m_fStressHeapCount;
4270 void EnableStressHeap()
4272 LIMITED_METHOD_CONTRACT;
4273 m_fStressHeapCount = TRUE;
4275 void DisableStressHeap()
4277 LIMITED_METHOD_CONTRACT;
4278 m_fStressHeapCount = FALSE;
4280 BOOL StressHeapIsEnabled()
4282 LIMITED_METHOD_CONTRACT;
4283 return m_fStressHeapCount;
4286 size_t *m_pCleanedStackBase;
4289 #ifdef DACCESS_COMPILE
4291 void EnumMemoryRegions(CLRDataEnumMemoryFlags flags);
4292 void EnumMemoryRegionsWorker(CLRDataEnumMemoryFlags flags);
4296 // Is the current thread currently executing within a constrained execution region?
4297 static BOOL IsExecutingWithinCer();
4299 // Determine whether the method at the given frame in the thread's execution stack is executing within a CER.
4300 BOOL IsWithinCer(CrawlFrame *pCf);
4303 // used to pad stack on thread creation to avoid aliasing penalty in P4 HyperThread scenarios
4305 static DWORD WINAPI intermediateThreadProc(PVOID arg);
4306 static int m_offset_counter;
4307 static const int offset_multiplier = 128;
4310 LPTHREAD_START_ROUTINE lpThreadFunction;
4312 } intermediateThreadParam;
4315 // when the thread is doing a stressing GC, some Crst violation could be ignored, by a non-elegant solution.
4317 BOOL m_bGCStressing; // the flag to indicate if the thread is doing a stressing GC
4318 BOOL m_bUniqueStacking; // the flag to indicate if the thread is doing a UniqueStack
4320 BOOL GetGCStressing ()
4322 return m_bGCStressing;
4324 BOOL GetUniqueStacking ()
4326 return m_bUniqueStacking;
4331 //-----------------------------------------------------------------------------
4332 // AVInRuntimeImplOkay : its okay to have an AV in Runtime implemetation while
4333 // this holder is in effect.
4336 // AVInRuntimeImplOkayHolder foo();
4337 // } // make AV's in the Runtime illegal on out of scope.
4338 //-----------------------------------------------------------------------------
4339 DWORD m_dwAVInRuntimeImplOkayCount;
4341 static void AVInRuntimeImplOkayAcquire(Thread * pThread)
4343 LIMITED_METHOD_CONTRACT;
4347 _ASSERTE(pThread->m_dwAVInRuntimeImplOkayCount != (DWORD)-1);
4348 pThread->m_dwAVInRuntimeImplOkayCount++;
4352 static void AVInRuntimeImplOkayRelease(Thread * pThread)
4354 LIMITED_METHOD_CONTRACT;
4358 _ASSERTE(pThread->m_dwAVInRuntimeImplOkayCount > 0);
4359 pThread->m_dwAVInRuntimeImplOkayCount--;
4364 static BOOL AVInRuntimeImplOkay(void)
4366 LIMITED_METHOD_CONTRACT;
4368 Thread * pThread = GetThreadNULLOk();
4372 return (pThread->m_dwAVInRuntimeImplOkayCount > 0);
4380 class AVInRuntimeImplOkayHolder
4382 Thread * const m_pThread;
4384 AVInRuntimeImplOkayHolder() :
4385 m_pThread(GetThread())
4387 LIMITED_METHOD_CONTRACT;
4388 AVInRuntimeImplOkayAcquire(m_pThread);
4390 AVInRuntimeImplOkayHolder(Thread * pThread) :
4393 LIMITED_METHOD_CONTRACT;
4394 AVInRuntimeImplOkayAcquire(m_pThread);
4396 ~AVInRuntimeImplOkayHolder()
4398 LIMITED_METHOD_CONTRACT;
4399 AVInRuntimeImplOkayRelease(m_pThread);
4405 DWORD m_dwUnbreakableLockCount;
4407 void IncUnbreakableLockCount()
4409 LIMITED_METHOD_CONTRACT;
4410 _ASSERTE (m_dwUnbreakableLockCount != (DWORD)-1);
4411 m_dwUnbreakableLockCount ++;
4413 void DecUnbreakableLockCount()
4415 LIMITED_METHOD_CONTRACT;
4416 _ASSERTE (m_dwUnbreakableLockCount > 0);
4417 m_dwUnbreakableLockCount --;
4419 BOOL HasUnbreakableLock() const
4421 LIMITED_METHOD_CONTRACT;
4422 return m_dwUnbreakableLockCount != 0;
4424 DWORD GetUnbreakableLockCount() const
4426 LIMITED_METHOD_CONTRACT;
4427 return m_dwUnbreakableLockCount;
4433 friend class FCallTransitionState;
4434 friend class PermitHelperMethodFrameState;
4435 friend class CompletedFCallTransitionState;
4436 HelperMethodFrameCallerList *m_pHelperMethodFrameCallerList;
4440 LONG m_dwHostTaskRefCount;
4443 // If HasStarted fails, we cache the exception here, and rethrow on the thread which
4444 // calls Thread.Start.
4445 Exception* m_pExceptionDuringStartup;
4448 void HandleThreadStartupFailure();
4454 #if defined(GCCOVER_TOLERATE_SPURIOUS_AV)
4455 LPVOID m_pLastAVAddress;
4456 #endif // defined(GCCOVER_TOLERATE_SPURIOUS_AV)
4459 void CommitGCStressInstructionUpdate();
4460 void PostGCStressInstructionUpdate(BYTE* pbDestCode, BYTE* pbSrcCode)
4462 LIMITED_METHOD_CONTRACT;
4463 PRECONDITION(!HasPendingGCStressInstructionUpdate());
4465 VolatileStoreWithoutBarrier<BYTE*>(&m_pbSrcCode, pbSrcCode);
4466 VolatileStore<BYTE*>(&m_pbDestCode, pbDestCode);
4468 bool HasPendingGCStressInstructionUpdate()
4470 LIMITED_METHOD_CONTRACT;
4471 BYTE* dest = VolatileLoad(&m_pbDestCode);
4472 return dest != NULL;
4474 bool TryClearGCStressInstructionUpdate(BYTE** ppbDestCode, BYTE** ppbSrcCode)
4476 LIMITED_METHOD_CONTRACT;
4477 bool result = false;
4479 if(HasPendingGCStressInstructionUpdate())
4481 *ppbDestCode = FastInterlockExchangePointer(&m_pbDestCode, NULL);
4483 if(*ppbDestCode != NULL)
4486 *ppbSrcCode = FastInterlockExchangePointer(&m_pbSrcCode, NULL);
4488 CONSISTENCY_CHECK(*ppbSrcCode != NULL);
4493 #if defined(GCCOVER_TOLERATE_SPURIOUS_AV)
4494 void SetLastAVAddress(LPVOID address)
4496 LIMITED_METHOD_CONTRACT;
4497 m_pLastAVAddress = address;
4499 LPVOID GetLastAVAddress()
4501 LIMITED_METHOD_CONTRACT;
4502 return m_pLastAVAddress;
4504 #endif // defined(GCCOVER_TOLERATE_SPURIOUS_AV)
4505 #endif // HAVE_GCCOVER
4508 BOOL m_fCompletionPortDrained;
4510 void MarkCompletionPortDrained()
4512 LIMITED_METHOD_CONTRACT;
4513 FastInterlockExchange ((LONG*)&m_fCompletionPortDrained, TRUE);
4515 void UnmarkCompletionPortDrained()
4517 LIMITED_METHOD_CONTRACT;
4518 FastInterlockExchange ((LONG*)&m_fCompletionPortDrained, FALSE);
4520 BOOL IsCompletionPortDrained()
4522 LIMITED_METHOD_CONTRACT;
4523 return m_fCompletionPortDrained;
4526 // --------------------------------
4527 // Store the maxReservedStackSize
4528 // This is passed in from managed code in the thread constructor
4529 // ---------------------------------
4531 SIZE_T m_RequestedStackSize;
4535 // Get the MaxStackSize
4536 SIZE_T RequestedThreadStackSize()
4538 LIMITED_METHOD_CONTRACT;
4539 return (m_RequestedStackSize);
4542 // Set the MaxStackSize
4543 void RequestedThreadStackSize(SIZE_T requestedStackSize)
4545 LIMITED_METHOD_CONTRACT;
4546 m_RequestedStackSize = requestedStackSize;
4549 static BOOL CheckThreadStackSize(SIZE_T *SizeToCommitOrReserve,
4550 BOOL isSizeToReserve // When TRUE, the previous argument is the stack size to reserve.
4551 // Otherwise, it is the size to commit.
4554 static BOOL GetProcessDefaultStackSize(SIZE_T* reserveSize, SIZE_T* commitSize);
4558 // Although this is a pointer, it is used as a flag to indicate the current context is unsafe
4559 // to inspect. When NULL the context is safe to use, otherwise it points to the active patch skipper
4560 // and the context is unsafe to use. When running a patch skipper we could be in one of two
4561 // debug-only situations that the context inspecting/modifying code isn't generally prepared
4563 // a) We have set the IP to point somewhere in the patch skip table but have not yet run the
4565 // b) We executed the instruction in the patch skip table and now the IP could be anywhere
4566 // The debugger may need to fix up the IP to compensate for the instruction being run
4567 // from a different address.
4568 VolatilePtr<DebuggerPatchSkip> m_debuggerActivePatchSkipper;
4571 VOID BeginDebuggerPatchSkip(DebuggerPatchSkip* patchSkipper)
4573 LIMITED_METHOD_CONTRACT;
4574 _ASSERTE(!m_debuggerActivePatchSkipper.Load());
4575 FastInterlockExchangePointer(m_debuggerActivePatchSkipper.GetPointer(), patchSkipper);
4576 _ASSERTE(m_debuggerActivePatchSkipper.Load());
4579 VOID EndDebuggerPatchSkip()
4581 LIMITED_METHOD_CONTRACT;
4582 _ASSERTE(m_debuggerActivePatchSkipper.Load());
4583 FastInterlockExchangePointer(m_debuggerActivePatchSkipper.GetPointer(), NULL);
4584 _ASSERTE(!m_debuggerActivePatchSkipper.Load());
4589 static BOOL EnterWorkingOnThreadContext(Thread *pThread)
4591 LIMITED_METHOD_CONTRACT;
4593 if(pThread->m_debuggerActivePatchSkipper.Load() != NULL)
4600 static void LeaveWorkingOnThreadContext(Thread *pThread)
4602 LIMITED_METHOD_CONTRACT;
4605 typedef ConditionalStateHolder<Thread *, Thread::EnterWorkingOnThreadContext, Thread::LeaveWorkingOnThreadContext> WorkingOnThreadContextHolder;
4608 void PrepareThreadForSOWork()
4610 WRAPPER_NO_CONTRACT;
4612 #ifdef FEATURE_HIJACK
4614 #endif // FEATURE_HIJACK
4616 ResetThrowControlForThread();
4618 // Since this Thread has taken an SO, there may be state left-over after we
4619 // short-circuited exception or other error handling, and so we don't want
4620 // to risk recycling it.
4621 SetThreadStateNC(TSNC_CannotRecycle);
4624 void SetSOWorkNeeded()
4626 SetThreadStateNC(TSNC_SOWorkNeeded);
4629 BOOL IsSOWorkNeeded()
4631 return HasThreadStateNC(TSNC_SOWorkNeeded);
4634 void FinishSOWork();
4636 void ClearExceptionStateAfterSO(void* pStackFrameSP)
4638 WRAPPER_NO_CONTRACT;
4640 // Clear any stale exception state.
4641 m_ExceptionState.ClearExceptionStateAfterSO(pStackFrameSP);
4645 BOOL m_fAllowProfilerCallbacks;
4649 // These two methods are for profiler support. The profiler clears the allowed
4650 // value once it has delivered a ThreadDestroyed callback, so that it does not
4651 // deliver any notifications to the profiler afterwards which reference this
4652 // thread. Callbacks on this thread which do not reference this thread are
4655 BOOL ProfilerCallbacksAllowed(void)
4657 return m_fAllowProfilerCallbacks;
4660 void SetProfilerCallbacksAllowed(BOOL fValue)
4662 m_fAllowProfilerCallbacks = fValue;
4667 //This context is used for optimizations on I/O thread pool thread. In case the
4668 //overlapped structure is from a different appdomain, it is stored in this structure
4669 //to be processed later correctly by entering the right domain.
4670 PVOID m_pIOCompletionContext;
4671 BOOL AllocateIOCompletionContext();
4672 VOID FreeIOCompletionContext();
4674 inline PVOID GetIOCompletionContext()
4676 return m_pIOCompletionContext;
4680 // Inside a host, we don't own a thread handle, and we avoid DuplicateHandle call.
4681 // If a thread is dying after we obtain the thread handle, our SuspendThread may fail
4682 // because the handle may be closed and reused for a completely different type of handle.
4683 // To solve this problem, we have a counter m_dwThreadHandleBeingUsed. Before we grab
4684 // the thread handle, we increment the counter. Before we return a thread back to SQL
4685 // in Reset and ExitTask, we wait until the counter drops to 0.
4686 Volatile<LONG> m_dwThreadHandleBeingUsed;
4690 static BOOL s_fCleanFinalizedThread;
4693 #ifndef DACCESS_COMPILE
4694 static void SetCleanupNeededForFinalizedThread()
4696 LIMITED_METHOD_CONTRACT;
4697 _ASSERTE (IsFinalizerThread());
4698 s_fCleanFinalizedThread = TRUE;
4700 #endif //!DACCESS_COMPILE
4702 static BOOL CleanupNeededForFinalizedThread()
4704 LIMITED_METHOD_CONTRACT;
4705 return s_fCleanFinalizedThread;
4709 // When we create throwable for an exception, we need to run managed code.
4710 // If the same type of exception is thrown while creating managed object, like InvalidProgramException,
4711 // we may be in an infinite recursive case.
4712 Exception *m_pCreatingThrowableForException;
4713 friend OBJECTREF CLRException::GetThrowable();
4717 int m_dwDisableAbortCheckCount; // Disable check before calling managed code.
4718 // !!! Use this very carefully. If managed code runs user code
4719 // !!! or blocks on locks, the thread may not be aborted.
4721 static void DisableAbortCheck()
4723 WRAPPER_NO_CONTRACT;
4724 Thread *pThread = GetThread();
4725 FastInterlockIncrement((LONG*)&pThread->m_dwDisableAbortCheckCount);
4727 static void EnableAbortCheck()
4729 WRAPPER_NO_CONTRACT;
4730 Thread *pThread = GetThread();
4731 _ASSERTE (pThread->m_dwDisableAbortCheckCount > 0);
4732 FastInterlockDecrement((LONG*)&pThread->m_dwDisableAbortCheckCount);
4735 BOOL IsAbortCheckDisabled()
4737 return m_dwDisableAbortCheckCount > 0;
4740 typedef StateHolder<Thread::DisableAbortCheck, Thread::EnableAbortCheck> DisableAbortCheckHolder;
4744 // At the end of a catch, we may raise ThreadAbortException. If catch clause set IP to resume in the
4745 // corresponding try block, our exception system will execute the same catch clause again and again.
4746 // So we save reference to the clause post which TA was reraised, which is used in ExceptionTracker::ProcessManagedCallFrame
4747 // to make ThreadAbort proceed ahead instead of going in a loop.
4748 // This problem only happens on Win64 due to JIT64. The common scenario is VB's "On error resume next"
4749 #ifdef WIN64EXCEPTIONS
4750 DWORD m_dwIndexClauseForCatch;
4751 StackFrame m_sfEstablisherOfActualHandlerFrame;
4752 #endif // WIN64EXCEPTIONS
4755 // Holds per-thread information the debugger uses to expose locking information
4756 // See ThreadDebugBlockingInfo.h for more details
4757 ThreadDebugBlockingInfo DebugBlockingInfo;
4758 #ifdef FEATURE_APPDOMAIN_RESOURCE_MONITORING
4759 // For the purposes of tracking resource usage we implement a simple cpu resource usage counter on each
4760 // thread. Every time QueryThreadProcessorUsage() is invoked it returns the amount of cpu time (a
4761 // combination of user and kernel mode time) used since the last call to QueryThreadProcessorUsage(). The
4762 // result is in 100 nanosecond units.
4763 ULONGLONG QueryThreadProcessorUsage();
4766 // The amount of processor time (both user and kernel) in 100ns units used by this thread at the time of
4767 // the last call to QueryThreadProcessorUsage().
4768 ULONGLONG m_ullProcessorUsageBaseline;
4769 #endif // FEATURE_APPDOMAIN_RESOURCE_MONITORING
4771 // Disables pumping and thread join in RCW creation
4772 bool m_fDisableComObjectEagerCleanup;
4774 // See ThreadStore::TriggerGCForDeadThreadsIfNecessary()
4775 bool m_fHasDeadThreadBeenConsideredForGCTrigger;
4781 CLRRandom* GetRandom() {return &m_random;}
4783 #ifdef FEATURE_COMINTEROP
4785 // Cookie returned from CoRegisterInitializeSpy
4786 ULARGE_INTEGER m_uliInitializeSpyCookie;
4788 // True if m_uliInitializeSpyCookie is valid
4789 bool m_fInitializeSpyRegistered;
4791 // The last STA COM context we saw - used to speed up RCW creation
4792 LPVOID m_pLastSTACtxCookie;
4795 inline void RevokeApartmentSpy();
4796 inline LPVOID GetLastSTACtxCookie(BOOL *pfNAContext);
4797 inline void SetLastSTACtxCookie(LPVOID pCtxCookie, BOOL fNAContext);
4798 #endif // FEATURE_COMINTEROP
4801 // This duplicates the ThreadType_GC bit stored in TLS (TlsIdx_ThreadType). It exists
4802 // so that any thread can query whether any other thread is a "GC Special" thread.
4803 // (In contrast, ::IsGCSpecialThread() only gives this info about the currently
4804 // executing thread.) The Profiling API uses this to determine whether it should
4805 // "hide" the thread from profilers. GC Special threads (in particular the bgc
4806 // thread) need to be hidden from profilers because the bgc thread creation path
4807 // occurs while the EE is suspended, and while the thread that's suspending the
4808 // runtime is waiting for the bgc thread to signal an event. The bgc thread cannot
4809 // switch to preemptive mode and call into a profiler at this time, or else a
4810 // deadlock will result when toggling back to cooperative mode (bgc thread toggling
4811 // to coop will block due to the suspension, and the thread suspending the runtime
4812 // continues to block waiting for the bgc thread to signal its creation events).
4813 // Furthermore, profilers have no need to be aware of GC special threads anyway,
4814 // since managed code never runs on them.
4818 // Profiling API uses this to determine whether it should hide this thread from the
4822 // GC calls this when creating special threads that also happen to have an EE Thread
4823 // object associated with them (e.g., the bgc thread).
4824 void SetGCSpecial(bool fGCSpecial);
4828 DWORD_PTR m_pAffinityMask;
4831 void ChooseThreadCPUGroupAffinity();
4832 void ClearThreadCPUGroupAffinity();
4835 // Per thread table used to implement allocation sampling.
4836 AllLoggedTypes * m_pAllLoggedTypes;
4839 AllLoggedTypes * GetAllocationSamplingTable()
4841 LIMITED_METHOD_CONTRACT;
4843 return m_pAllLoggedTypes;
4846 void SetAllocationSamplingTable(AllLoggedTypes * pAllLoggedTypes)
4848 LIMITED_METHOD_CONTRACT;
4850 // Assert if we try to set the m_pAllLoggedTypes to a non NULL value if it is already non-NULL.
4851 // This implies a memory leak.
4852 _ASSERTE(pAllLoggedTypes != NULL ? m_pAllLoggedTypes == NULL : TRUE);
4853 m_pAllLoggedTypes = pAllLoggedTypes;
4856 #ifdef FEATURE_PERFTRACING
4859 // SampleProfiler thread state. This is set on suspension and cleared before restart.
4860 // True if the thread was in cooperative mode. False if it was in preemptive when the suspension started.
4861 Volatile<ULONG> m_gcModeOnSuspension;
4863 // The activity ID for the current thread.
4864 // An activity ID of zero means the thread is not executing in the context of an activity.
4868 bool GetGCModeOnSuspension()
4870 LIMITED_METHOD_CONTRACT;
4871 return m_gcModeOnSuspension != 0U;
4874 void SaveGCModeOnSuspension()
4876 LIMITED_METHOD_CONTRACT;
4877 m_gcModeOnSuspension = m_fPreemptiveGCDisabled;
4880 void ClearGCModeOnSuspension()
4882 m_gcModeOnSuspension = 0;
4885 LPCGUID GetActivityId() const
4887 LIMITED_METHOD_CONTRACT;
4888 return &m_activityId;
4891 void SetActivityId(LPCGUID pActivityId)
4893 LIMITED_METHOD_CONTRACT;
4894 _ASSERTE(pActivityId != NULL);
4896 m_activityId = *pActivityId;
4898 #endif // FEATURE_PERFTRACING
4900 #ifdef FEATURE_HIJACK
4903 // By the time a frame is scanned by the runtime, m_pHijackReturnKind always
4904 // identifies the gc-ness of the return register(s)
4905 // If the ReturnKind information is not available from the GcInfo, the runtime
4906 // computes it using the return types's class handle.
4908 ReturnKind m_HijackReturnKind;
4912 ReturnKind GetHijackReturnKind()
4914 LIMITED_METHOD_CONTRACT;
4916 return m_HijackReturnKind;
4919 void SetHijackReturnKind(ReturnKind returnKind)
4921 LIMITED_METHOD_CONTRACT;
4923 m_HijackReturnKind = returnKind;
4925 #endif // FEATURE_HIJACK
4928 OBJECTHANDLE GetOrCreateDeserializationTracker();
4931 OBJECTHANDLE m_DeserializationTracker;
4934 // End of class Thread
4936 typedef Thread::ForbidSuspendThreadHolder ForbidSuspendThreadHolder;
4937 typedef Thread::ThreadPreventAsyncHolder ThreadPreventAsyncHolder;
4938 typedef Thread::ThreadPreventAbortHolder ThreadPreventAbortHolder;
4940 // Combines ForBindSuspendThreadHolder and CrstHolder into one.
4941 class ForbidSuspendThreadCrstHolder
4944 // Note: member initialization is intentionally ordered.
4945 ForbidSuspendThreadCrstHolder(CrstBase * pCrst)
4946 : m_forbid_suspend_holder()
4947 , m_lock_holder(pCrst)
4948 { WRAPPER_NO_CONTRACT; }
4951 ForbidSuspendThreadHolder m_forbid_suspend_holder;
4952 CrstHolder m_lock_holder;
4955 ETaskType GetCurrentTaskType();
4959 typedef Thread::AVInRuntimeImplOkayHolder AVInRuntimeImplOkayHolder;
4961 BOOL RevertIfImpersonated(BOOL *bReverted, HANDLE *phToken);
4962 void UndoRevert(BOOL bReverted, HANDLE hToken);
4964 // ---------------------------------------------------------------------------
4966 // The ThreadStore manages all the threads in the system.
4968 // There is one ThreadStore in the system, available through
4969 // ThreadStore::m_pThreadStore.
4970 // ---------------------------------------------------------------------------
4972 typedef SList<Thread, false, PTR_Thread> ThreadList;
4975 // The ThreadStore is a singleton class
4976 #define CHECK_ONE_STORE() _ASSERTE(this == ThreadStore::s_pThreadStore);
4978 typedef DPTR(class ThreadStore) PTR_ThreadStore;
4979 typedef DPTR(class ExceptionTracker) PTR_ExceptionTracker;
4983 friend class Thread;
4984 friend class ThreadSuspend;
4985 friend Thread* SetupThread(BOOL);
4986 friend class AppDomain;
4987 #ifdef DACCESS_COMPILE
4988 friend class ClrDataAccess;
4989 friend Thread* __stdcall DacGetThread(ULONG32 osThreadID);
4996 static void InitThreadStore();
4997 static void LockThreadStore();
4998 static void UnlockThreadStore();
5000 // Add a Thread to the ThreadStore
5001 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
5002 static void AddThread(Thread *newThread, BOOL bRequiresTSL=TRUE);
5004 // RemoveThread finds the thread in the ThreadStore and discards it.
5005 static BOOL RemoveThread(Thread *target);
5007 static BOOL CanAcquireLock();
5009 // Transfer a thread from the unstarted to the started list.
5010 // WARNING : only GC calls this with bRequiresTSL set to FALSE.
5011 static void TransferStartedThread(Thread *target, BOOL bRequiresTSL=TRUE);
5013 // Before using the thread list, be sure to take the critical section. Otherwise
5014 // it can change underneath you, perhaps leading to an exception after Remove.
5015 // Prev==NULL to get the first entry in the list.
5016 static Thread *GetAllThreadList(Thread *Prev, ULONG mask, ULONG bits);
5017 static Thread *GetThreadList(Thread *Prev);
5019 // Every EE process can lazily create a GUID that uniquely identifies it (for
5020 // purposes of remoting).
5021 const GUID &GetUniqueEEId();
5023 // We shut down the EE when the last non-background thread terminates. This event
5024 // is used to signal the main thread when this condition occurs.
5025 void WaitForOtherThreads();
5026 static void CheckForEEShutdown();
5027 CLREvent m_TerminationEvent;
5029 // Have all the foreground threads completed? In other words, can we release
5031 BOOL OtherThreadsComplete()
5033 LIMITED_METHOD_CONTRACT;
5034 _ASSERTE(m_ThreadCount - m_UnstartedThreadCount - m_DeadThreadCount - Thread::m_ActiveDetachCount + m_PendingThreadCount >= m_BackgroundThreadCount);
5036 return (m_ThreadCount - m_UnstartedThreadCount - m_DeadThreadCount
5037 - Thread::m_ActiveDetachCount + m_PendingThreadCount
5038 == m_BackgroundThreadCount);
5041 // If you want to trap threads re-entering the EE (be this for GC, or debugging,
5042 // or Thread.Suspend() or whatever, you need to TrapReturningThreads(TRUE). When
5043 // you are finished snagging threads, call TrapReturningThreads(FALSE). This
5044 // counts internally.
5046 // Of course, you must also fix RareDisablePreemptiveGC to do the right thing
5047 // when the trap occurs.
5048 static void TrapReturningThreads(BOOL yes);
5052 // Enter and leave the critical section around the thread store. Clients should
5053 // use LockThreadStore and UnlockThreadStore.
5057 // Critical section for adding and removing threads to the store
5060 // List of all the threads known to the ThreadStore (started & unstarted).
5061 ThreadList m_ThreadList;
5063 // m_ThreadCount is the count of all threads in m_ThreadList. This includes
5064 // background threads / unstarted threads / whatever.
5066 // m_UnstartedThreadCount is the subset of m_ThreadCount that have not yet been
5069 // m_BackgroundThreadCount is the subset of m_ThreadCount that have been started
5070 // but which are running in the background. So this is a misnomer in the sense
5071 // that unstarted background threads are not reflected in this count.
5073 // m_PendingThreadCount is used to solve a race condition. The main thread could
5074 // start another thread running and then exit. The main thread might then start
5075 // tearing down the EE before the new thread moves itself out of m_UnstartedThread-
5076 // Count in TransferUnstartedThread. This count is atomically bumped in
5077 // CreateNewThread, and atomically reduced within a locked thread store.
5079 // m_DeadThreadCount is the subset of m_ThreadCount which have died. The Win32
5080 // thread has disappeared, but something (like the exposed object) has kept the
5081 // refcount non-zero so we can't destruct yet.
5083 // m_MaxThreadCount is the maximum value of m_ThreadCount. ie. the largest number
5084 // of simultaneously active threads
5088 LONG m_MaxThreadCount;
5090 LONG ThreadCountInEE ()
5092 LIMITED_METHOD_CONTRACT;
5093 return m_ThreadCount;
5095 #if defined(_DEBUG) || defined(DACCESS_COMPILE)
5096 LONG MaxThreadCountInEE ()
5098 LIMITED_METHOD_CONTRACT;
5099 return m_MaxThreadCount;
5103 LONG m_UnstartedThreadCount;
5104 LONG m_BackgroundThreadCount;
5105 LONG m_PendingThreadCount;
5107 LONG m_DeadThreadCount;
5108 LONG m_DeadThreadCountForGCTrigger;
5109 bool m_TriggerGCForDeadThreads;
5112 // Space for the lazily-created GUID.
5116 // Even in the release product, we need to know what thread holds the lock on
5117 // the ThreadStore. This is so we never deadlock when the GC thread halts a
5118 // thread that holds this lock.
5119 Thread *m_HoldingThread;
5120 EEThreadId m_holderthreadid; // current holder (or NULL)
5123 static LONG s_DeadThreadCountThresholdForGCTrigger;
5124 static DWORD s_DeadThreadGCTriggerPeriodMilliseconds;
5125 static SIZE_T *s_DeadThreadGenerationCounts;
5129 static BOOL HoldingThreadStore()
5131 WRAPPER_NO_CONTRACT;
5132 // Note that GetThread() may be 0 if it is the debugger thread
5133 // or perhaps a concurrent GC thread.
5134 return HoldingThreadStore(GetThread());
5137 static BOOL HoldingThreadStore(Thread *pThread);
5139 #ifdef DACCESS_COMPILE
5140 static void EnumMemoryRegions(CLRDataEnumMemoryFlags flags);
5143 SPTR_DECL(ThreadStore, s_pThreadStore);
5147 BOOL DbgFindThread(Thread *target);
5148 LONG DbgBackgroundThreadCount()
5150 LIMITED_METHOD_CONTRACT;
5151 return m_BackgroundThreadCount;
5154 BOOL IsCrstForThreadStore (const CrstBase* const pCrstBase)
5156 LIMITED_METHOD_CONTRACT;
5157 return (void *)pCrstBase == (void*)&m_Crst;
5162 static CONTEXT *s_pOSContext;
5164 // We can not do any memory allocation after we suspend a thread in order ot
5165 // avoid deadlock situation.
5166 static void AllocateOSContext();
5167 static CONTEXT *GrabOSContext();
5170 // Thread abort needs to walk stack to decide if thread abort can proceed.
5171 // It is unsafe to crawl a stack of thread if the thread is OS-suspended which we do during
5172 // thread abort. For example, Thread T1 aborts thread T2. T2 is suspended by T1. Inside SQL
5173 // this means that no thread sharing the same scheduler with T2 can run. If T1 needs a lock which
5174 // is owned by one thread on the scheduler, T1 will wait forever.
5175 // Our solution is to move T2 to a safe point, resume it, and then do stack crawl.
5176 static CLREvent *s_pWaitForStackCrawlEvent;
5178 static void WaitForStackCrawlEvent()
5188 s_pWaitForStackCrawlEvent->Wait(INFINITE,FALSE);
5190 static void SetStackCrawlEvent()
5192 LIMITED_METHOD_CONTRACT;
5193 s_pWaitForStackCrawlEvent->Set();
5195 static void ResetStackCrawlEvent()
5197 LIMITED_METHOD_CONTRACT;
5198 s_pWaitForStackCrawlEvent->Reset();
5202 void IncrementDeadThreadCountForGCTrigger();
5203 void DecrementDeadThreadCountForGCTrigger();
5205 void OnMaxGenerationGCStarted();
5206 bool ShouldTriggerGCForDeadThreads();
5207 void TriggerGCForDeadThreadsIfNecessary();
5210 struct TSSuspendHelper {
5211 static void SetTrap() { ThreadStore::TrapReturningThreads(TRUE); }
5212 static void UnsetTrap() { ThreadStore::TrapReturningThreads(FALSE); }
5214 typedef StateHolder<TSSuspendHelper::SetTrap, TSSuspendHelper::UnsetTrap> TSSuspendHolder;
5216 typedef StateHolder<ThreadStore::LockThreadStore,ThreadStore::UnlockThreadStore> ThreadStoreLockHolder;
5220 // This class dispenses small thread ids for the thin lock mechanism.
5221 // Recently we started using this class to dispense domain neutral module IDs as well.
5225 DWORD m_highestId; // highest id given out so far
5226 SIZE_T m_recycleBin; // link list to chain all ids returning to us
5227 Crst m_Crst; // lock to protect our data structures
5228 DPTR(PTR_Thread) m_idToThread; // map thread ids to threads
5229 DWORD m_idToThreadCapacity; // capacity of the map
5231 #ifndef DACCESS_COMPILE
5232 void GrowIdToThread()
5242 DWORD newCapacity = m_idToThreadCapacity == 0 ? 16 : m_idToThreadCapacity*2;
5243 Thread **newIdToThread = new Thread*[newCapacity];
5245 newIdToThread[0] = NULL;
5247 for (DWORD i = 1; i < m_idToThreadCapacity; i++)
5249 newIdToThread[i] = m_idToThread[i];
5251 for (DWORD j = m_idToThreadCapacity; j < newCapacity; j++)
5253 newIdToThread[j] = NULL;
5255 delete[] m_idToThread;
5256 m_idToThread = newIdToThread;
5257 m_idToThreadCapacity = newCapacity;
5259 #endif // !DACCESS_COMPILE
5263 // NOTE: CRST_UNSAFE_ANYMODE prevents a GC mode switch when entering this crst.
5264 // If you remove this flag, we will switch to preemptive mode when entering
5265 // m_Crst, which means all functions that enter it will become
5266 // GC_TRIGGERS. (This includes all uses of CrstHolder.) So be sure
5267 // to update the contracts if you remove this flag.
5268 m_Crst(CrstThreadIdDispenser, CRST_UNSAFE_ANYMODE)
5270 WRAPPER_NO_CONTRACT;
5273 m_idToThreadCapacity = 0;
5274 m_idToThread = NULL;
5279 LIMITED_METHOD_CONTRACT;
5280 delete[] m_idToThread;
5283 bool IsValidId(DWORD id)
5285 LIMITED_METHOD_CONTRACT;
5286 return (id > 0) && (id <= m_highestId);
5289 #ifndef DACCESS_COMPILE
5290 void NewId(Thread *pThread, DWORD & newId)
5292 WRAPPER_NO_CONTRACT;
5294 CrstHolder ch(&m_Crst);
5296 if (m_recycleBin != 0)
5298 _ASSERTE(FitsIn<DWORD>(m_recycleBin));
5299 result = static_cast<DWORD>(m_recycleBin);
5300 m_recycleBin = reinterpret_cast<SIZE_T>(m_idToThread[m_recycleBin]);
5304 // we make sure ids don't wrap around - before they do, we always return the highest possible
5305 // one and rely on our caller to detect this situation
5306 if (m_highestId + 1 > m_highestId)
5307 m_highestId = m_highestId + 1;
5308 result = m_highestId;
5309 if (result >= m_idToThreadCapacity)
5313 _ASSERTE(result < m_idToThreadCapacity);
5315 if (result < m_idToThreadCapacity)
5316 m_idToThread[result] = pThread;
5318 #endif // !DACCESS_COMPILE
5320 #ifndef DACCESS_COMPILE
5321 void DisposeId(DWORD id)
5331 CrstHolder ch(&m_Crst);
5333 _ASSERTE(IsValidId(id));
5334 if (id == m_highestId)
5340 m_idToThread[id] = reinterpret_cast<PTR_Thread>(m_recycleBin);
5343 size_t index = (size_t)m_idToThread[id];
5346 _ASSERTE(index != id);
5347 index = (size_t)m_idToThread[index];
5352 #endif // !DACCESS_COMPILE
5354 Thread *IdToThread(DWORD id)
5356 LIMITED_METHOD_CONTRACT;
5357 CrstHolder ch(&m_Crst);
5359 Thread *result = NULL;
5360 if (id <= m_highestId)
5361 result = m_idToThread[id];
5362 // m_idToThread may have Thread*, or the next free slot
5363 _ASSERTE ((size_t)result > m_idToThreadCapacity);
5368 Thread *IdToThreadWithValidation(DWORD id)
5370 WRAPPER_NO_CONTRACT;
5372 CrstHolder ch(&m_Crst);
5374 Thread *result = NULL;
5375 if (id <= m_highestId)
5376 result = m_idToThread[id];
5377 // m_idToThread may have Thread*, or the next free slot
5378 if ((size_t)result <= m_idToThreadCapacity)
5380 _ASSERTE(result == NULL || ((size_t)result & 0x3) == 0 || ((Thread*)result)->GetThreadId() == id);
5384 typedef DPTR(IdDispenser) PTR_IdDispenser;
5386 #ifndef CROSSGEN_COMPILE
5388 // Dispenser of small thread ids for thin lock mechanism
5389 GPTR_DECL(IdDispenser,g_pThinLockThreadIdDispenser);
5391 // forward declaration
5392 DWORD MsgWaitHelper(int numWaiters, HANDLE* phEvent, BOOL bWaitAll, DWORD millis, BOOL alertable = FALSE);
5394 // When a thread is being created after a debug suspension has started, it sends an event up to the
5395 // debugger. Afterwards, with the Debugger Lock still held, it will check to see if we had already asked to suspend the
5396 // Runtime. If we have, then it will turn around and call this to set the debug suspend pending flag on the newly
5397 // created thread, since it was missed by SysStartSuspendForDebug as it didn't exist when that function was run.
5399 inline void Thread::MarkForDebugSuspend(void)
5401 WRAPPER_NO_CONTRACT;
5402 if (!(m_State & TS_DebugSuspendPending))
5404 FastInterlockOr((ULONG *) &m_State, TS_DebugSuspendPending);
5405 ThreadStore::TrapReturningThreads(TRUE);
5409 // Debugger per-thread flag for enabling notification on "manual"
5410 // method calls, for stepping logic.
5412 inline void Thread::IncrementTraceCallCount()
5414 WRAPPER_NO_CONTRACT;
5415 FastInterlockIncrement(&m_TraceCallCount);
5416 ThreadStore::TrapReturningThreads(TRUE);
5419 inline void Thread::DecrementTraceCallCount()
5421 WRAPPER_NO_CONTRACT;
5422 ThreadStore::TrapReturningThreads(FALSE);
5423 FastInterlockDecrement(&m_TraceCallCount);
5426 // When we enter an Object.Wait() we are logically inside the synchronized
5427 // region of that object. Of course, we've actually completely left the region,
5428 // or else nobody could Notify us. But if we throw ThreadInterruptedException to
5429 // break out of the Wait, all the catchers are going to expect the synchronized
5430 // state to be correct. So we carry it around in case we need to restore it.
5434 WaitEventLink *m_WaitEventLink;
5436 Thread *m_OwnerThread;
5439 PendingSync(WaitEventLink *s) : m_WaitEventLink(s)
5441 WRAPPER_NO_CONTRACT;
5443 m_OwnerThread = GetThread();
5446 void Restore(BOOL bRemoveFromSB);
5450 #define INCTHREADLOCKCOUNT() { }
5451 #define DECTHREADLOCKCOUNT() { }
5452 #define INCTHREADLOCKCOUNTTHREAD(thread) { }
5453 #define DECTHREADLOCKCOUNTTHREAD(thread) { }
5456 // --------------------------------------------------------------------------------
5457 // GCHolder is used to implement the normal GCX_ macros.
5459 // GCHolder is normally used indirectly through GCX_ convenience macros, but can be used
5460 // directly if needed (e.g. due to multiple holders in one scope, or to use
5461 // in class definitions).
5463 // GCHolder (or derived types) should only be instantiated as automatic variables
5464 // --------------------------------------------------------------------------------
5466 #ifdef ENABLE_CONTRACTS_IMPL
5467 #define GCHOLDER_CONTRACT_ARGS_NoDtor , false, szConstruct, szFunction, szFile, lineNum
5468 #define GCHOLDER_CONTRACT_ARGS_HasDtor , true, szConstruct, szFunction, szFile, lineNum
5469 #define GCHOLDER_DECLARE_CONTRACT_ARGS_BARE \
5470 const char * szConstruct = "Unknown" \
5471 , const char * szFunction = "Unknown" \
5472 , const char * szFile = "Unknown" \
5474 #define GCHOLDER_DECLARE_CONTRACT_ARGS , GCHOLDER_DECLARE_CONTRACT_ARGS_BARE
5475 #define GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL , bool fPushStackRecord = true, GCHOLDER_DECLARE_CONTRACT_ARGS_BARE
5477 #define GCHOLDER_SETUP_CONTRACT_STACK_RECORD(mode) \
5478 m_fPushedRecord = false; \
5480 if (fPushStackRecord && conditional) \
5482 m_pClrDebugState = GetClrDebugState(); \
5483 m_oldClrDebugState = *m_pClrDebugState; \
5485 m_pClrDebugState->ViolationMaskReset( ModeViolation ); \
5487 m_ContractStackRecord.m_szFunction = szFunction; \
5488 m_ContractStackRecord.m_szFile = szFile; \
5489 m_ContractStackRecord.m_lineNum = lineNum; \
5490 m_ContractStackRecord.m_testmask = \
5491 (Contract::ALL_Disabled & ~((UINT)(Contract::MODE_Mask))) \
5493 m_ContractStackRecord.m_construct = szConstruct; \
5494 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord ); \
5495 m_fPushedRecord = true; \
5497 #define GCHOLDER_CHECK_FOR_PREEMP_IN_NOTRIGGER(pThread) \
5498 if (pThread->GCNoTrigger()) \
5500 CONTRACT_ASSERT("Coop->preemp->coop switch attempted in a GC_NOTRIGGER scope", \
5501 Contract::GC_NoTrigger, \
5502 Contract::GC_Mask, \
5509 #define GCHOLDER_CONTRACT_ARGS_NoDtor
5510 #define GCHOLDER_CONTRACT_ARGS_HasDtor
5511 #define GCHOLDER_DECLARE_CONTRACT_ARGS_BARE
5512 #define GCHOLDER_DECLARE_CONTRACT_ARGS
5513 #define GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL
5514 #define GCHOLDER_SETUP_CONTRACT_STACK_RECORD(mode)
5515 #define GCHOLDER_CHECK_FOR_PREEMP_IN_NOTRIGGER(pThread)
5516 #endif // ENABLE_CONTRACTS_IMPL
5518 #ifndef DACCESS_COMPILE
5522 // NOTE: This method is FORCEINLINE'ed into its callers, but the callers are just the
5523 // corresponding methods in the derived types, not all sites that use GC holders. This
5524 // is done so that the #pragma optimize will take affect since the optimize settings
5525 // are taken from the template instantiation site, not the template definition site.
5526 template <BOOL THREAD_EXISTS>
5527 FORCEINLINE_NONDEBUG
5531 WRAPPER_NO_CONTRACT;
5533 #ifdef ENABLE_CONTRACTS_IMPL
5534 if (m_fPushedRecord)
5536 *m_pClrDebugState = m_oldClrDebugState;
5538 // Make sure that we're using the version of this template that matches the
5539 // invariant setup in EnterInternal{Coop|Preemp}{_HackNoThread}
5540 _ASSERTE(!!THREAD_EXISTS == m_fThreadMustExist);
5545 // m_WasCoop is only TRUE if we've already verified there's an EE thread.
5546 BEGIN_GETTHREAD_ALLOWED;
5548 _ASSERTE(m_Thread != NULL); // Cannot switch to cooperative with no thread
5549 if (!m_Thread->PreemptiveGCDisabled())
5550 m_Thread->DisablePreemptiveGC();
5552 END_GETTHREAD_ALLOWED;
5556 // Either we initialized m_Thread explicitly with GetThread() in the
5557 // constructor, or our caller (instantiator of GCHolder) called our constructor
5558 // with GetThread() (which we already asserted in the constuctor)
5559 // (i.e., m_Thread == GetThread()). Also, note that if THREAD_EXISTS,
5560 // then m_Thread must be non-null (as it's == GetThread()). So the
5561 // "if" below looks a little hokey since we're checking for either condition.
5562 // But the template param THREAD_EXISTS allows us to statically early-out
5563 // when it's TRUE, so we check it for perf.
5564 if (THREAD_EXISTS || m_Thread != NULL)
5566 BEGIN_GETTHREAD_ALLOWED;
5567 if (m_Thread->PreemptiveGCDisabled())
5568 m_Thread->EnablePreemptiveGC();
5569 END_GETTHREAD_ALLOWED;
5573 // If we have a thread then we assert that we ended up in the same state
5574 // which we started in.
5575 if (THREAD_EXISTS || m_Thread != NULL)
5577 _ASSERTE(!!m_WasCoop == !!(m_Thread->PreemptiveGCDisabled()));
5581 // NOTE: The rest of these methods are all FORCEINLINE so that the uses where 'conditional==true'
5582 // can have the if-checks removed by the compiler. The callers are just the corresponding methods
5583 // in the derived types, not all sites that use GC holders.
5586 // This is broken - there is a potential race with the GC thread. It is currently
5587 // used for a few cases where (a) we potentially haven't started up the EE yet, or
5588 // (b) we are on a "special thread". We need a real solution here though.
5589 FORCEINLINE_NONDEBUG
5590 void EnterInternalCoop_HackNoThread(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL)
5592 GCHOLDER_SETUP_CONTRACT_STACK_RECORD(Contract::MODE_Coop);
5594 m_Thread = GetThreadNULLOk();
5596 #ifdef ENABLE_CONTRACTS_IMPL
5597 m_fThreadMustExist = false;
5598 #endif // ENABLE_CONTRACTS_IMPL
5600 if (m_Thread != NULL)
5602 BEGIN_GETTHREAD_ALLOWED;
5603 m_WasCoop = m_Thread->PreemptiveGCDisabled();
5605 if (conditional && !m_WasCoop)
5607 m_Thread->DisablePreemptiveGC();
5608 _ASSERTE(m_Thread->PreemptiveGCDisabled());
5610 END_GETTHREAD_ALLOWED;
5618 FORCEINLINE_NONDEBUG
5619 void EnterInternalPreemp(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL)
5621 GCHOLDER_SETUP_CONTRACT_STACK_RECORD(Contract::MODE_Preempt);
5623 m_Thread = GetThreadNULLOk();
5625 #ifdef ENABLE_CONTRACTS_IMPL
5626 m_fThreadMustExist = false;
5627 if (m_Thread != NULL && conditional)
5629 BEGIN_GETTHREAD_ALLOWED;
5630 GCHOLDER_CHECK_FOR_PREEMP_IN_NOTRIGGER(m_Thread);
5631 END_GETTHREAD_ALLOWED;
5633 #endif // ENABLE_CONTRACTS_IMPL
5635 if (m_Thread != NULL)
5637 BEGIN_GETTHREAD_ALLOWED;
5638 m_WasCoop = m_Thread->PreemptiveGCDisabled();
5640 if (conditional && m_WasCoop)
5642 m_Thread->EnablePreemptiveGC();
5643 _ASSERTE(!m_Thread->PreemptiveGCDisabled());
5645 END_GETTHREAD_ALLOWED;
5653 FORCEINLINE_NONDEBUG
5654 void EnterInternalCoop(Thread *pThread, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL)
5656 // This is the perf version. So we deliberately restrict the calls
5657 // to already setup threads to avoid the null checks and GetThread call
5658 _ASSERTE(pThread && (pThread == GetThread()));
5659 #ifdef ENABLE_CONTRACTS_IMPL
5660 m_fThreadMustExist = true;
5661 #endif // ENABLE_CONTRACTS_IMPL
5663 GCHOLDER_SETUP_CONTRACT_STACK_RECORD(Contract::MODE_Coop);
5666 m_WasCoop = m_Thread->PreemptiveGCDisabled();
5667 if (conditional && !m_WasCoop)
5669 m_Thread->DisablePreemptiveGC();
5670 _ASSERTE(m_Thread->PreemptiveGCDisabled());
5674 template <BOOL THREAD_EXISTS>
5675 FORCEINLINE_NONDEBUG
5676 void EnterInternalPreemp(Thread *pThread, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS_INTERNAL)
5678 // This is the perf version. So we deliberately restrict the calls
5679 // to already setup threads to avoid the null checks and GetThread call
5680 _ASSERTE(!THREAD_EXISTS || (pThread && (pThread == GetThread())));
5681 #ifdef ENABLE_CONTRACTS_IMPL
5682 m_fThreadMustExist = !!THREAD_EXISTS;
5683 #endif // ENABLE_CONTRACTS_IMPL
5685 GCHOLDER_SETUP_CONTRACT_STACK_RECORD(Contract::MODE_Preempt);
5689 if (THREAD_EXISTS || (m_Thread != NULL))
5691 GCHOLDER_CHECK_FOR_PREEMP_IN_NOTRIGGER(m_Thread);
5692 m_WasCoop = m_Thread->PreemptiveGCDisabled();
5693 if (conditional && m_WasCoop)
5695 m_Thread->EnablePreemptiveGC();
5696 _ASSERTE(!m_Thread->PreemptiveGCDisabled());
5707 BOOL m_WasCoop; // This is BOOL and not 'bool' because PreemptiveGCDisabled returns BOOL,
5708 // so the codegen is better if we don't have to convert to 'bool'.
5709 #ifdef ENABLE_CONTRACTS_IMPL
5710 bool m_fThreadMustExist; // used to validate that the proper Pop<THREAD_EXISTS> method is used
5711 bool m_fPushedRecord;
5712 ClrDebugState m_oldClrDebugState;
5713 ClrDebugState *m_pClrDebugState;
5714 ContractStackRecord m_ContractStackRecord;
5718 class GCCoopNoDtor : public GCHolderBase
5722 void Enter(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5724 WRAPPER_NO_CONTRACT;
5728 STATIC_CONTRACT_MODE_COOPERATIVE;
5730 // The thread must be non-null to enter MODE_COOP
5731 this->EnterInternalCoop(GetThread(), conditional GCHOLDER_CONTRACT_ARGS_NoDtor);
5737 WRAPPER_NO_CONTRACT;
5739 this->PopInternal<TRUE>(); // Thread must be non-NULL
5743 class GCPreempNoDtor : public GCHolderBase
5747 void Enter(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5752 STATIC_CONTRACT_MODE_PREEMPTIVE;
5755 this->EnterInternalPreemp(conditional GCHOLDER_CONTRACT_ARGS_NoDtor);
5759 void Enter(Thread * pThreadNullOk, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5764 STATIC_CONTRACT_MODE_PREEMPTIVE;
5767 this->EnterInternalPreemp<FALSE>( // Thread may be NULL
5768 pThreadNullOk, conditional GCHOLDER_CONTRACT_ARGS_NoDtor);
5775 this->PopInternal<FALSE>(); // Thread may be NULL
5779 class GCCoop : public GCHolderBase
5783 GCCoop(GCHOLDER_DECLARE_CONTRACT_ARGS_BARE)
5786 STATIC_CONTRACT_MODE_COOPERATIVE;
5788 // The thread must be non-null to enter MODE_COOP
5789 this->EnterInternalCoop(GetThread(), true GCHOLDER_CONTRACT_ARGS_HasDtor);
5793 GCCoop(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5798 STATIC_CONTRACT_MODE_COOPERATIVE;
5801 // The thread must be non-null to enter MODE_COOP
5802 this->EnterInternalCoop(GetThread(), conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
5809 this->PopInternal<TRUE>(); // Thread must be non-NULL
5813 // This is broken - there is a potential race with the GC thread. It is currently
5814 // used for a few cases where (a) we potentially haven't started up the EE yet, or
5815 // (b) we are on a "special thread". We need a real solution here though.
5816 class GCCoopHackNoThread : public GCHolderBase
5820 GCCoopHackNoThread(GCHOLDER_DECLARE_CONTRACT_ARGS_BARE)
5823 STATIC_CONTRACT_MODE_COOPERATIVE;
5825 this->EnterInternalCoop_HackNoThread(true GCHOLDER_CONTRACT_ARGS_HasDtor);
5829 GCCoopHackNoThread(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5834 STATIC_CONTRACT_MODE_COOPERATIVE;
5837 this->EnterInternalCoop_HackNoThread(conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
5841 ~GCCoopHackNoThread()
5844 this->PopInternal<FALSE>(); // Thread might be NULL
5848 class GCCoopThreadExists : public GCHolderBase
5852 GCCoopThreadExists(Thread * pThread GCHOLDER_DECLARE_CONTRACT_ARGS)
5855 STATIC_CONTRACT_MODE_COOPERATIVE;
5857 this->EnterInternalCoop(pThread, true GCHOLDER_CONTRACT_ARGS_HasDtor);
5861 GCCoopThreadExists(Thread * pThread, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5866 STATIC_CONTRACT_MODE_COOPERATIVE;
5869 this->EnterInternalCoop(pThread, conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
5873 ~GCCoopThreadExists()
5876 this->PopInternal<TRUE>(); // Thread must be non-NULL
5880 class GCPreemp : public GCHolderBase
5884 GCPreemp(GCHOLDER_DECLARE_CONTRACT_ARGS_BARE)
5887 STATIC_CONTRACT_MODE_PREEMPTIVE;
5889 this->EnterInternalPreemp(true GCHOLDER_CONTRACT_ARGS_HasDtor);
5893 GCPreemp(bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5898 STATIC_CONTRACT_MODE_PREEMPTIVE;
5901 this->EnterInternalPreemp(conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
5908 this->PopInternal<FALSE>(); // Thread may be NULL
5912 class GCPreempThreadExists : public GCHolderBase
5916 GCPreempThreadExists(Thread * pThread GCHOLDER_DECLARE_CONTRACT_ARGS)
5919 STATIC_CONTRACT_MODE_PREEMPTIVE;
5921 this->EnterInternalPreemp<TRUE>( // Thread must be non-NULL
5922 pThread, true GCHOLDER_CONTRACT_ARGS_HasDtor);
5926 GCPreempThreadExists(Thread * pThread, bool conditional GCHOLDER_DECLARE_CONTRACT_ARGS)
5931 STATIC_CONTRACT_MODE_PREEMPTIVE;
5934 this->EnterInternalPreemp<TRUE>( // Thread must be non-NULL
5935 pThread, conditional GCHOLDER_CONTRACT_ARGS_HasDtor);
5939 ~GCPreempThreadExists()
5942 this->PopInternal<TRUE>(); // Thread must be non-NULL
5945 #endif // DACCESS_COMPILE
5948 // --------------------------------------------------------------------------------
5949 // GCAssert is used to implement the assert GCX_ macros. Usage is similar to GCHolder.
5951 // GCAsserting for preemptive mode automatically passes on unmanaged threads.
5953 // Note that the assert is "2 sided"; it happens on entering and on leaving scope, to
5954 // help ensure mode integrity.
5956 // GCAssert is a noop in a free build
5957 // --------------------------------------------------------------------------------
5959 template<BOOL COOPERATIVE>
5963 DEBUG_NOINLINE void BeginGCAssert();
5964 DEBUG_NOINLINE void EndGCAssert()
5970 template<BOOL COOPERATIVE>
5971 class AutoCleanupGCAssert
5975 DEBUG_NOINLINE AutoCleanupGCAssert();
5977 DEBUG_NOINLINE ~AutoCleanupGCAssert()
5980 WRAPPER_NO_CONTRACT;
5981 // This is currently disabled; we currently have a lot of code which doesn't
5982 // back out the GC mode properly (instead relying on the EX_TRY macros.)
5984 // @todo enable this when we remove raw GC mode switching.
5991 FORCEINLINE void DoCheck()
5993 WRAPPER_NO_CONTRACT;
5994 Thread *pThread = GetThread();
5997 _ASSERTE(pThread != NULL);
5998 _ASSERTE(pThread->PreemptiveGCDisabled());
6002 _ASSERTE(pThread == NULL || !(pThread->PreemptiveGCDisabled()));
6009 // --------------------------------------------------------------------------------
6010 // GCForbid is used to add ForbidGC semantics to the current GC mode. Note that
6011 // it requires the thread to be in cooperative mode already.
6013 // GCForbid is a noop in a free build
6014 // --------------------------------------------------------------------------------
6015 #ifndef DACCESS_COMPILE
6016 class GCForbid : AutoCleanupGCAssert<TRUE>
6018 #ifdef ENABLE_CONTRACTS_IMPL
6020 DEBUG_NOINLINE GCForbid(BOOL fConditional, const char *szFunction, const char *szFile, int lineNum)
6025 STATIC_CONTRACT_MODE_COOPERATIVE;
6026 STATIC_CONTRACT_GC_NOTRIGGER;
6029 m_fConditional = fConditional;
6032 Thread *pThread = GetThread();
6033 m_pClrDebugState = pThread ? pThread->GetClrDebugState() : ::GetClrDebugState();
6034 m_oldClrDebugState = *m_pClrDebugState;
6036 m_pClrDebugState->ViolationMaskReset( GCViolation );
6038 GetThread()->BeginForbidGC(szFile, lineNum);
6040 m_ContractStackRecord.m_szFunction = szFunction;
6041 m_ContractStackRecord.m_szFile = (char*)szFile;
6042 m_ContractStackRecord.m_lineNum = lineNum;
6043 m_ContractStackRecord.m_testmask = (Contract::ALL_Disabled & ~((UINT)(Contract::GC_Mask))) | Contract::GC_NoTrigger;
6044 m_ContractStackRecord.m_construct = "GCX_FORBID";
6045 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord );
6049 DEBUG_NOINLINE GCForbid(const char *szFunction, const char *szFile, int lineNum)
6052 STATIC_CONTRACT_MODE_COOPERATIVE;
6053 STATIC_CONTRACT_GC_NOTRIGGER;
6055 m_fConditional = TRUE;
6057 Thread *pThread = GetThread();
6058 m_pClrDebugState = pThread ? pThread->GetClrDebugState() : ::GetClrDebugState();
6059 m_oldClrDebugState = *m_pClrDebugState;
6061 m_pClrDebugState->ViolationMaskReset( GCViolation );
6063 GetThread()->BeginForbidGC(szFile, lineNum);
6065 m_ContractStackRecord.m_szFunction = szFunction;
6066 m_ContractStackRecord.m_szFile = (char*)szFile;
6067 m_ContractStackRecord.m_lineNum = lineNum;
6068 m_ContractStackRecord.m_testmask = (Contract::ALL_Disabled & ~((UINT)(Contract::GC_Mask))) | Contract::GC_NoTrigger;
6069 m_ContractStackRecord.m_construct = "GCX_FORBID";
6070 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord );
6073 DEBUG_NOINLINE ~GCForbid()
6079 GetThread()->EndForbidGC();
6080 *m_pClrDebugState = m_oldClrDebugState;
6085 BOOL m_fConditional;
6086 ClrDebugState *m_pClrDebugState;
6087 ClrDebugState m_oldClrDebugState;
6088 ContractStackRecord m_ContractStackRecord;
6089 #endif // _DEBUG_IMPL
6091 #endif // !DACCESS_COMPILE
6093 // --------------------------------------------------------------------------------
6094 // GCNoTrigger is used to add NoTriggerGC semantics to the current GC mode. Unlike
6095 // GCForbid, it does not require a thread to be in cooperative mode.
6097 // GCNoTrigger is a noop in a free build
6098 // --------------------------------------------------------------------------------
6099 #ifndef DACCESS_COMPILE
6102 #ifdef ENABLE_CONTRACTS_IMPL
6104 DEBUG_NOINLINE GCNoTrigger(BOOL fConditional, const char *szFunction, const char *szFile, int lineNum)
6109 STATIC_CONTRACT_GC_NOTRIGGER;
6112 m_fConditional = fConditional;
6116 Thread * pThread = GetThreadNULLOk();
6117 m_pClrDebugState = pThread ? pThread->GetClrDebugState() : ::GetClrDebugState();
6118 m_oldClrDebugState = *m_pClrDebugState;
6120 m_pClrDebugState->ViolationMaskReset( GCViolation );
6122 if (pThread != NULL)
6124 pThread->BeginNoTriggerGC(szFile, lineNum);
6127 m_ContractStackRecord.m_szFunction = szFunction;
6128 m_ContractStackRecord.m_szFile = (char*)szFile;
6129 m_ContractStackRecord.m_lineNum = lineNum;
6130 m_ContractStackRecord.m_testmask = (Contract::ALL_Disabled & ~((UINT)(Contract::GC_Mask))) | Contract::GC_NoTrigger;
6131 m_ContractStackRecord.m_construct = "GCX_NOTRIGGER";
6132 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord );
6136 DEBUG_NOINLINE GCNoTrigger(const char *szFunction, const char *szFile, int lineNum)
6139 STATIC_CONTRACT_GC_NOTRIGGER;
6141 m_fConditional = TRUE;
6143 Thread * pThread = GetThreadNULLOk();
6144 m_pClrDebugState = pThread ? pThread->GetClrDebugState() : ::GetClrDebugState();
6145 m_oldClrDebugState = *m_pClrDebugState;
6147 m_pClrDebugState->ViolationMaskReset( GCViolation );
6149 if (pThread != NULL)
6151 pThread->BeginNoTriggerGC(szFile, lineNum);
6154 m_ContractStackRecord.m_szFunction = szFunction;
6155 m_ContractStackRecord.m_szFile = (char*)szFile;
6156 m_ContractStackRecord.m_lineNum = lineNum;
6157 m_ContractStackRecord.m_testmask = (Contract::ALL_Disabled & ~((UINT)(Contract::GC_Mask))) | Contract::GC_NoTrigger;
6158 m_ContractStackRecord.m_construct = "GCX_NOTRIGGER";
6159 m_pClrDebugState->LinkContractStackTrace( &m_ContractStackRecord );
6162 DEBUG_NOINLINE ~GCNoTrigger()
6168 Thread * pThread = GetThreadNULLOk();
6171 pThread->EndNoTriggerGC();
6173 *m_pClrDebugState = m_oldClrDebugState;
6178 BOOL m_fConditional;
6179 ClrDebugState *m_pClrDebugState;
6180 ClrDebugState m_oldClrDebugState;
6181 ContractStackRecord m_ContractStackRecord;
6182 #endif // _DEBUG_IMPL
6184 #endif //!DACCESS_COMPILE
6186 class CoopTransitionHolder
6191 CoopTransitionHolder(Thread * pThread)
6192 : m_pFrame(pThread->m_pFrame)
6194 LIMITED_METHOD_CONTRACT;
6197 ~CoopTransitionHolder()
6199 WRAPPER_NO_CONTRACT;
6200 if (m_pFrame != NULL)
6201 COMPlusCooperativeTransitionHandler(m_pFrame);
6204 void SuppressRelease()
6206 LIMITED_METHOD_CONTRACT;
6207 // FRAME_TOP and NULL must be distinct values.
6208 // static_assert_no_msg(FRAME_TOP_VALUE != NULL);
6213 // --------------------------------------------------------------------------------
6214 // GCX macros - see util.hpp
6215 // --------------------------------------------------------------------------------
6219 // Normally, any thread we operate on has a Thread block in its TLS. But there are
6220 // a few special threads we don't normally execute managed code on.
6221 BOOL dbgOnly_IsSpecialEEThread();
6222 void dbgOnly_IdentifySpecialEEThread();
6224 #ifdef USE_CHECKED_OBJECTREFS
6225 #define ASSERT_PROTECTED(objRef) Thread::ObjectRefProtected(objRef)
6227 #define ASSERT_PROTECTED(objRef)
6232 #define ASSERT_PROTECTED(objRef)
6237 #ifdef ENABLE_CONTRACTS_IMPL
6239 #define BEGINFORBIDGC() {if (GetThreadNULLOk() != NULL) GetThreadNULLOk()->BeginForbidGC(__FILE__, __LINE__);}
6240 #define ENDFORBIDGC() {if (GetThreadNULLOk() != NULL) GetThreadNULLOk()->EndForbidGC();}
6242 class FCallGCCanTrigger
6245 static DEBUG_NOINLINE void Enter()
6248 STATIC_CONTRACT_GC_TRIGGERS;
6249 Thread * pThread = GetThreadNULLOk();
6250 if (pThread != NULL)
6256 static DEBUG_NOINLINE void Enter(Thread* pThread)
6259 STATIC_CONTRACT_GC_TRIGGERS;
6260 pThread->EndForbidGC();
6263 static DEBUG_NOINLINE void Leave(const char *szFunction, const char *szFile, int lineNum)
6266 Thread * pThread = GetThreadNULLOk();
6267 if (pThread != NULL)
6269 Leave(pThread, szFunction, szFile, lineNum);
6273 static DEBUG_NOINLINE void Leave(Thread* pThread, const char *szFunction, const char *szFile, int lineNum)
6276 pThread->BeginForbidGC(szFile, lineNum);
6280 #define TRIGGERSGC_NOSTOMP() do { \
6281 ANNOTATION_GC_TRIGGERS; \
6282 Thread* curThread = GetThread(); \
6283 if(curThread->GCNoTrigger()) \
6285 CONTRACT_ASSERT("TRIGGERSGC found in a GC_NOTRIGGER region.", Contract::GC_NoTrigger, Contract::GC_Mask, __FUNCTION__, __FILE__, __LINE__); \
6290 #define TRIGGERSGC() do { \
6291 TRIGGERSGC_NOSTOMP(); \
6292 Thread::TriggersGC(GetThread()); \
6295 #else // ENABLE_CONTRACTS_IMPL
6297 #define BEGINFORBIDGC()
6298 #define ENDFORBIDGC()
6299 #define TRIGGERSGC_NOSTOMP() ANNOTATION_GC_TRIGGERS
6300 #define TRIGGERSGC() ANNOTATION_GC_TRIGGERS
6302 #endif // ENABLE_CONTRACTS_IMPL
6304 inline BOOL GC_ON_TRANSITIONS(BOOL val) {
6305 WRAPPER_NO_CONTRACT;
6307 Thread* thread = GetThread();
6310 BOOL ret = thread->m_GCOnTransitionsOK;
6311 thread->m_GCOnTransitionsOK = val;
6319 inline void ENABLESTRESSHEAP() {
6320 WRAPPER_NO_CONTRACT;
6321 Thread * thread = GetThreadNULLOk();
6323 thread->EnableStressHeap();
6327 void CleanStackForFastGCStress ();
6328 #define CLEANSTACKFORFASTGCSTRESS() \
6329 if (g_pConfig->GetGCStressLevel() && g_pConfig->FastGCStressLevel() > 1) { \
6330 CleanStackForFastGCStress (); \
6334 #define CLEANSTACKFORFASTGCSTRESS()
6341 inline void DoReleaseCheckpoint(void *checkPointMarker)
6343 WRAPPER_NO_CONTRACT;
6344 GetThread()->m_MarshalAlloc.Collapse(checkPointMarker);
6348 // CheckPointHolder : Back out to a checkpoint on the thread allocator.
6349 typedef Holder<void*, DoNothing, DoReleaseCheckpoint> CheckPointHolder;
6353 // Holder for incrementing the ForbidGCLoaderUse counter.
6354 class GCForbidLoaderUseHolder
6357 GCForbidLoaderUseHolder()
6359 WRAPPER_NO_CONTRACT;
6360 ClrFlsIncrementValue(TlsIdx_ForbidGCLoaderUseCount, 1);
6363 ~GCForbidLoaderUseHolder()
6365 WRAPPER_NO_CONTRACT;
6366 ClrFlsIncrementValue(TlsIdx_ForbidGCLoaderUseCount, -1);
6372 // Declaring this macro turns off the GC_TRIGGERS/THROWS/INJECT_FAULT contract in LoadTypeHandle.
6373 // If you do this, you must restrict your use of the loader only to retrieve TypeHandles
6374 // for types that have already been loaded and resolved. If you fail to observe this restriction, you will
6375 // reach a GC_TRIGGERS point somewhere in the loader and assert. If you're lucky, that is.
6376 // (If you're not lucky, you will introduce a GC hole.)
6378 // The main user of this workaround is the GC stack crawl. It must parse signatures and retrieve
6379 // type handles for valuetypes in method parameters. Some other uses have creeped into the codebase -
6380 // some justified, others not.
6382 // ENABLE_FORBID_GC_LOADER is *not* the same as using tokenNotToLoad to suppress loading.
6383 // You should use tokenNotToLoad in preference to ENABLE_FORBID. ENABLE_FORBID is a fragile
6384 // workaround and places enormous responsibilities on the caller. The only reason it exists at all
6385 // is that the GC stack crawl simply cannot tolerate exceptions or new GC's - that's an immovable
6386 // rock we're faced with.
6388 // The key differences are:
6390 // ENABLE_FORBID tokenNotToLoad
6391 // -------------------------------------------- ------------------------------------------------------
6392 // caller must guarantee the type is already caller does not have to guarantee the type
6393 // loaded - otherwise, we will crash badly. is already loaded.
6395 // loader will not throw, trigger gc or OOM loader may throw, trigger GC or OOM.
6399 #ifdef ENABLE_CONTRACTS_IMPL
6400 #define ENABLE_FORBID_GC_LOADER_USE_IN_THIS_SCOPE() GCForbidLoaderUseHolder __gcfluh; \
6401 CANNOTTHROWCOMPLUSEXCEPTION(); \
6404 #else // _DEBUG_IMPL
6405 #define ENABLE_FORBID_GC_LOADER_USE_IN_THIS_SCOPE() ;
6406 #endif // _DEBUG_IMPL
6407 // This macro lets us define a conditional CONTRACT for the GC_TRIGGERS behavior.
6408 // This is for the benefit of a select group of callers that use the loader
6409 // in ForbidGC mode strictly to retrieve existing TypeHandles. The reason
6410 // we use a threadstate rather than an extra parameter is that these annoying
6411 // callers call the loader through intermediaries (MetaSig) and it proved to be too
6412 // cumbersome to pass this state down through all those callers.
6414 // Don't make GC_TRIGGERS conditional just because your function ends up calling
6415 // LoadTypeHandle indirectly. We don't want to proliferate conditonal contracts more
6416 // than necessary so declare such functions as GC_TRIGGERS until the need
6417 // for the conditional contract is actually proven through code inspection or
6419 #if defined(DACCESS_COMPILE)
6421 // Disable (<non-zero constant> || <expression>) is always a non-zero constant.
6422 // <expression> is never evaluated and might have side effects, because
6423 // FORBIDGC_LOADER_USE_ENABLED is used in that pattern and additionally the rule
6424 // has little value.
6426 #pragma warning(disable:6286)
6428 #define FORBIDGC_LOADER_USE_ENABLED() true
6430 #else // DACCESS_COMPILE
6431 #if defined (_DEBUG_IMPL) || defined(_PREFAST_)
6432 #ifndef DACCESS_COMPILE
6433 #define FORBIDGC_LOADER_USE_ENABLED() (ClrFlsGetValue(TlsIdx_ForbidGCLoaderUseCount))
6435 #define FORBIDGC_LOADER_USE_ENABLED() TRUE
6437 #else // _DEBUG_IMPL
6439 // If you got an error about FORBIDGC_LOADER_USE_ENABLED being undefined, it's because you tried
6440 // to use this predicate in a free build outside of a CONTRACT or ASSERT.
6442 #define FORBIDGC_LOADER_USE_ENABLED() (sizeof(YouCannotUseThisHere) != 0)
6443 #endif // _DEBUG_IMPL
6444 #endif // DACCESS_COMPILE
6446 // There is an MDA which can detect illegal reentrancy into the CLR. For instance, if you call managed
6447 // code from a native vectored exception handler, this might cause a reverse PInvoke to occur. But if the
6448 // exception was triggered from code that was executing in cooperative GC mode, we now have GC holes and
6449 // general corruption.
6450 BOOL HasIllegalReentrancy();
6452 // We have numerous places where we start up a managed thread. This includes several places in the
6453 // ThreadPool, the 'new Thread(...).Start()' case, and the Finalizer. Try to factor the code so our
6454 // base exception handling behavior is consistent across those places. The resulting code is convoluted,
6455 // but it's better than the prior situation of each thread being on a different plan.
6457 // If you add a new kind of managed thread (i.e. thread proc) to the system, you must:
6459 // 1) Call HasStarted() before calling any ManagedThreadBase_* routine.
6460 // 2) Define a ManagedThreadBase_* routine for your scenario and declare it below.
6461 // 3) Always perform any AD transitions through the ManagedThreadBase_* mechanism.
6462 // 4) Allow the ManagedThreadBase_* mechanism to perform all your exception handling, including
6463 // dispatching of unhandled exception events, deciding what to swallow, etc.
6464 // 5) If you must separate your base thread proc behavior from your AD transitioning behavior,
6465 // define a second ManagedThreadADCall_* helper and declare it below.
6466 // 6) Never decide this is too much work and that you will roll your own thread proc code.
6468 // intentionally opaque.
6469 struct ManagedThreadCallState;
6471 struct ManagedThreadBase
6473 // The 'new Thread(...).Start()' case from COMSynchronizable kickoff thread worker
6474 static void KickOff(ADCallBackFcnType pTarget,
6477 // The IOCompletion, QueueUserWorkItem, AddTimer, RegisterWaitForSingleObject cases in
6479 static void ThreadPool(ADCallBackFcnType pTarget, LPVOID args);
6481 // The Finalizer thread uses this path
6482 static void FinalizerBase(ADCallBackFcnType pTarget);
6486 // DeadlockAwareLock is a base for building deadlock-aware locks.
6487 // Note that DeadlockAwareLock only works if ALL locks involved in the deadlock are deadlock aware.
6489 class DeadlockAwareLock
6492 VolatilePtr<Thread> m_pHoldingThread;
6494 const char *m_description;
6498 DeadlockAwareLock(const char *description = NULL);
6499 ~DeadlockAwareLock();
6501 // Test for deadlock
6502 BOOL CanEnterLock();
6504 // Call BeginEnterLock before attempting to acquire the lock
6505 BOOL TryBeginEnterLock(); // returns FALSE if deadlock
6506 void BeginEnterLock(); // Asserts if deadlock
6508 // Call EndEnterLock after acquiring the lock
6509 void EndEnterLock();
6511 // Call LeaveLock after releasing the lock
6514 const char *GetDescription();
6517 CHECK CheckDeadlock(Thread *pThread);
6519 static void ReleaseBlockingLock()
6521 Thread *pThread = GetThread();
6523 pThread->m_pBlockingLock = NULL;
6526 typedef StateHolder<DoNothing,DeadlockAwareLock::ReleaseBlockingLock> BlockingLockHolder;
6529 inline void SetTypeHandleOnThreadForAlloc(TypeHandle th)
6531 // We are doing this unconditionally even though th is only used by ETW events in GC. When the ETW
6532 // event is not enabled we still need to set it because it may not be enabled here but by the
6533 // time we are checking in GC, the event is enabled - we don't want GC to read a random value
6534 // from before in this case.
6535 GetThread()->SetTHAllocContextObj(th);
6538 #endif // CROSSGEN_COMPILE
6541 // users of OFFSETOF__TLS__tls_CurrentThread macro expect the offset of these variables wrt to _tls_start to be stable.
6542 // Defining each of the following thread local variable separately without the struct causes the offsets to change in
6543 // different flavors of build. Eg. in chk build the offset of m_pThread is 0x4 while in ret build it becomes 0x8 as 0x4 is
6544 // occupied by m_pAddDomain. Packing all thread local variables in a struct and making struct instance to be thread local
6545 // ensures that the offsets of the variables are stable in all build flavors.
6546 struct ThreadLocalInfo
6549 AppDomain* m_pAppDomain; // This field is read only by the SOS plugin to get the AppDomain
6550 void** m_EETlsData; // ClrTlsInfo::data
6553 class ThreadStateHolder
6556 ThreadStateHolder (BOOL fNeed, DWORD state)
6558 LIMITED_METHOD_CONTRACT;
6559 _ASSERTE (GetThread());
6563 ~ThreadStateHolder ()
6565 LIMITED_METHOD_CONTRACT;
6569 Thread *pThread = GetThread();
6571 FastInterlockAnd((ULONG *) &pThread->m_State, ~m_state);
6579 // Sets an NC threadstate if not already set, and restores the old state
6580 // of that bit upon destruction
6582 // fNeed > 0, make sure state is set, restored in destructor
6583 // fNeed = 0, no change
6584 // fNeed < 0, make sure state is reset, restored in destructor
6586 class ThreadStateNCStackHolder
6589 ThreadStateNCStackHolder (BOOL fNeed, Thread::ThreadStateNoConcurrency state)
6591 LIMITED_METHOD_CONTRACT;
6593 _ASSERTE (GetThread());
6599 Thread *pThread = GetThread();
6604 // if the state is set, reset it
6605 if (pThread->HasThreadStateNC(state))
6607 pThread->ResetThreadStateNC(m_state);
6616 // if the state is already set then no change is
6617 // necessary during the back out
6618 if(pThread->HasThreadStateNC(state))
6624 pThread->SetThreadStateNC(state);
6630 ~ThreadStateNCStackHolder()
6632 LIMITED_METHOD_CONTRACT;
6636 Thread *pThread = GetThread();
6641 pThread->SetThreadStateNC(m_state); // set it
6645 pThread->ResetThreadStateNC(m_state);
6652 Thread::ThreadStateNoConcurrency m_state;
6655 BOOL Debug_IsLockedViaThreadSuspension();
6657 #endif //__threads_h__