- [Contributing Workflow](project-docs/contributing-workflow.md)
- [Performance Guidelines](project-docs/performance-guidelines.md)
- [Garbage Collector Guidelines](project-docs/garbage-collector-guidelines.md)
-- [Adding new public APIs to mscorlib](project-docs/adding_new_public_apis.md)
+- [Public APIs in System.Private.CoreLib](project-docs/changing-corelib.md)
- [Project NuGet Dependencies](https://github.com/dotnet/corefx/blob/master/Documentation/project-docs/project-nuget-dependencies.md)
Coding Guidelines
+++ /dev/null
-Adding new public APIs to System.Private.CoreLib
-================================================
-
-Many of the CoreFX libraries type-forward their public APIs to the implementations in System.Private.CoreLib.
-- The CoreFX build uses System.Private.CoreLib via Microsoft.TargetingPack.Private.CoreCLR Nuget package.
-- Some of the CoreFX libraries are not built in the CoreFX repository. For example, System.Runtime.Loader.dll is purely a facade and type-forwards everything to System.Private.CoreLib. These libraries are built and published through a separate process.
-- Hence, when adding a new public API to System.Private.CoreLib, changes must be staged to ensure that new prerequisites are published before they are used.
-
-**Staging the changes**
-
-Make the changes to CoreCLR, including System.Private.CoreLib
-- Merge the changes
-- Wait for a new System.Private.CoreLib to be published. Check the latest published version [here](https://dotnet.myget.org/feed/dotnet-core/package/nuget/Microsoft.TargetingPack.Private.CoreCLR).
-
-Make the changes to CoreFX consuming the new APIs in System.Private.CoreLib
-- If the changes are to libraries that are built out of the CoreFX repository:
- - You will likely see a build failure until a new System.Private.CoreLib contract is published
-- If the changes are to libraries that are **not** built out of the CoreFX repository:
- - For example, pure facades such as System.Runtime.Loader.dll
- - There will likely not be a build failure
- - But you will still need to wait for the new System.Private.CoreLib contract to be published before merging the change, otherwise, facade generation will fail
-- Merge the changes
-- Wait for new contracts to be published for libraries with new APIs. Check the latest published versions [here](http://myget.org/gallery/dotnet-core).
-
-Add tests
-- You should now be able to consume the new APIs and add tests to the CoreFX test suite
- - Until new contracts are published, you will likely see a build failure indicating that the new APIs don't exist.
-- Note that on Windows, CoreFX tests currently use a potentially old published build of CoreCLR
- - You may need to disable the new tests on Windows until CoreFX tests are updated to use a newer build of CoreCLR.
--- /dev/null
+Changing or adding new public APIs to System.Private.CoreLib
+============================================================
+
+## Context
+Many of the CoreFX libraries type-forward their public APIs to the implementations in `System.Private.CoreLib`.
+- The CoreFX build uses `System.Private.CoreLib` via a NuGet package named `Microsoft.TargetingPack.Private.CoreCLR`.
+- Some of the CoreFX libraries are not built in the CoreFX repository. For example, `System.Runtime.Loader.dll` is purely a facade and type-forwards everything to `System.Private.CoreLib`. These libraries are built and published through a separate process.
+- Hence, when adding a new public API to `System.Private.CoreLib` or changing the behavior of an existing public API, you have to follow the sequence below to stage your changes so that new prerequisites are published before they are used.
+
+## How to stage your change
+
+### (1) Make the changes in both CoreCLR and CoreFX
+- `System.Private.CoreLib` implementation changes should be made in the CoreCLR repo
+- Test and public API contract changes should be made in the CoreFX repo
+- [Build and test](https://github.com/dotnet/corefx/blob/master/Documentation/project-docs/developer-guide.md#testing-with-private-coreclr-bits) both changes together
+
+### (2) Submit PR to both CoreCLR and CoreFX
+- Link the two PRs together via a comment in each PR description, and link both PRs to the issue itself.
+- Both PRs will be reviewed together by the project maintainers.
+- CoreCLR CI runs the CoreFX tests, but they lag behind the CoreFX repo. You may need to disable the outdated tests in https://github.com/dotnet/coreclr/blob/master/tests/CoreFX/CoreFX.issues.json to make your PR green.
+
+### (3) What happens next
+- We will merge the CoreCLR PR first
+- Note: if your change is under [System.Private.CoreLib Shared Sources](https://github.com/dotnet/coreclr/tree/master/src/System.Private.CoreLib/shared), it will get mirrored to other repos that reuse the CoreLib sources. This is a one-way mirror of sources for code-reuse purposes: it does not bring your new API to CoreFX, so it is not relevant to this staging process.
+- The CoreCLR changes will be consumed by CoreFX via an automatically created PR that updates a hash in the CoreFX repo. These PRs [look like this](https://github.com/dotnet/corefx/pulls?utf8=%E2%9C%93&q=is%3Apr+sort%3Aupdated-desc+coreclr++base%3Amaster+author%3Adotnet-maestro-bot+).
+- Depending on the nature of the change, we may cherry-pick your CoreFX PR into this automatically created PR, or we may merge your PR after the automatically created PR is merged.
+- You are done! Thank you for contributing.
Contributions must maintain [API signature](https://github.com/dotnet/corefx/blob/master/Documentation/coding-guidelines/breaking-changes.md#bucket-1-public-contract) and behavioral compatibility. Contributions that include [breaking changes](https://github.com/dotnet/corefx/blob/master/Documentation/coding-guidelines/breaking-changes.md) will be rejected. Please file an issue to discuss your idea or change if you believe that it may affect managed code compatibility.
-Contributing to mscorlib library
---------------------------------
+Contributing to System.Private.CoreLib library
+----------------------------------------------
-Most managed code changes should be made in the [CoreFX](https://github.com/dotnet/corefx) repo. We have moved and are continuing to move many mscorlib types to CoreFX. Please use the following general rule-of-thumb for choosing the right repo to make your change (start by creating an issue):
-
-- The type or concept doesn't yet exist in .NET Core -> choose CoreFX.
-- The type exists in both CoreCLR and CoreFX repo -> choose CoreFX.
-- The type exists in CoreCLR only -> choose CoreCLR.
-- In doubt -> choose CoreFX.
+Most changes in managed libraries should be made in the [CoreFX](https://github.com/dotnet/corefx) repo. The CoreCLR repo contains the implementation of the [System.Private.CoreLib.dll](https://github.com/dotnet/coreclr/tree/master/src/System.Private.CoreLib) library. Publicly visible changes in this library require [staging](changing-corelib.md) across the two repos.
Commit Messages
---------------
+++ /dev/null
-diff --git a/src/jit/flowgraph.cpp b/src/jit/flowgraph.cpp
-index ad1fd83fe9..bcc818f25b 100644
---- a/src/jit/flowgraph.cpp
-+++ b/src/jit/flowgraph.cpp
-@@ -8016,45 +8016,46 @@ GenTree* Compiler::fgCreateMonitorTree(unsigned lvaMonAcquired, unsigned lvaThis
- {
- GenTree* retNode = block->lastStmt()->gtStmtExpr;
- GenTree* retExpr = retNode->gtOp.gtOp1;
-
- if (retExpr != nullptr)
- {
- // have to insert this immediately before the GT_RETURN so we transform:
- // ret(...) ->
- // ret(comma(comma(tmp=...,call mon_exit), tmp)
- //
- //
- // Before morph stage, it is possible to have a case of GT_RETURN(TYP_LONG, op1) where op1's type is
- // TYP_STRUCT (of 8-bytes) and op1 is call node. See the big comment block in impReturnInstruction()
- // for details for the case where info.compRetType is not the same as info.compRetNativeType. For
- // this reason pass compMethodInfo->args.retTypeClass which is guaranteed to be a valid class handle
- // if the return type is a value class. Note that fgInsertCommFormTemp() in turn uses this class handle
- // if the type of op1 is TYP_STRUCT to perform lvaSetStruct() on the new temp that is created, which
- // in turn passes it to VM to know the size of value type.
- GenTree* temp = fgInsertCommaFormTemp(&retNode->gtOp.gtOp1, info.compMethodInfo->args.retTypeClass);
-
-- GenTree* lclVar = retNode->gtOp.gtOp1->gtOp.gtOp2;
-+ GenTree* lclVar = retNode->gtOp.gtOp1->gtOp.gtOp2;
-
- // The return can't handle all of the trees that could be on the right-hand-side of an assignment,
- // especially in the case of a struct. Therefore, we need to propagate GTF_DONT_CSE.
-- // If we don't, assertion propagation may, e.g., change a return of a local to a return of "CNS_INT struct 0",
-+ // If we don't, assertion propagation may, e.g., change a return of a local to a return of "CNS_INT struct
-+ // 0",
- // which downstream phases can't handle.
- lclVar->gtFlags |= (retExpr->gtFlags & GTF_DONT_CSE);
- retNode->gtOp.gtOp1->gtOp.gtOp2 = gtNewOperNode(GT_COMMA, retExpr->TypeGet(), tree, lclVar);
- }
- else
- {
- // Insert this immediately before the GT_RETURN
- fgInsertStmtNearEnd(block, tree);
- }
- }
- else
- {
- fgInsertStmtAtEnd(block, tree);
- }
-
- return tree;
- }
-
- // Convert a BBJ_RETURN block in a synchronized method to a BBJ_ALWAYS.
- // We've previously added a 'try' block around the original program code using fgAddSyncMethodEnterExit().
{
if (number.Scale > int.MaxValue - (long)absoluteExponent)
{
- bytesConsumed = 0;
- return false;
+ // A scale overflow means all non-zero digits are so far to the right of the decimal point that no
+ // number format we have will be able to see them. Just pin the scale at the absolute maximum
+ // and let the converter produce a 0 with the max precision available for that type.
+ number.Scale = int.MaxValue;
+ }
+ else
+ {
+ number.Scale += (int)absoluteExponent;
}
- number.Scale += (int)absoluteExponent;
}
digits[dstIndex] = 0;
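The overflow guard in the hunk above clamps the decimal scale instead of failing the parse. A minimal Python model of that saturating update (illustrative only, not the BCL source; the name `apply_exponent` is invented):

```python
INT32_MAX = 2**31 - 1  # int.MaxValue in the C# code above

def apply_exponent(scale: int, absolute_exponent: int) -> int:
    """Saturating add of a parsed exponent into the number's decimal scale.

    If the addition would overflow Int32, pin the scale at the maximum:
    every significant digit then sits so far to the right of the decimal
    point that the converter will produce 0 at maximum precision anyway.
    """
    if scale > INT32_MAX - absolute_exponent:
        return INT32_MAX
    return scale + absolute_exponent
```

Python integers cannot overflow, so the comparison is safe as written; the C# code casts to `long` for the same reason.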
/// <summary>Tries to dequeue an element from the queue.</summary>
public bool TryDequeue(out T item)
{
+ Slot[] slots = _slots;
+
// Loop in case of contention...
var spinner = new SpinWait();
while (true)
int slotsIndex = currentHead & _slotsMask;
// Read the sequence number for the head position.
- int sequenceNumber = Volatile.Read(ref _slots[slotsIndex].SequenceNumber);
+ int sequenceNumber = Volatile.Read(ref slots[slotsIndex].SequenceNumber);
// We can dequeue from this slot if it's been filled by an enqueuer, which
// would have left the sequence number at pos+1.
{
// Successfully reserved the slot. Note that after the above CompareExchange, other threads
// trying to dequeue from this slot will end up spinning until we do the subsequent Write.
- item = _slots[slotsIndex].Item;
+ item = slots[slotsIndex].Item;
if (!Volatile.Read(ref _preservedForObservation))
{
// If we're preserving, though, we don't zero out the slot, as we need it for
// enumerations, peeking, ToArray, etc. And we don't update the sequence number,
// so that an enqueuer will see it as full and be forced to move to a new segment.
- _slots[slotsIndex].Item = default(T);
- Volatile.Write(ref _slots[slotsIndex].SequenceNumber, currentHead + _slots.Length);
+ slots[slotsIndex].Item = default(T);
+ Volatile.Write(ref slots[slotsIndex].SequenceNumber, currentHead + slots.Length);
}
return true;
}
Interlocked.MemoryBarrier();
}
+ Slot[] slots = _slots;
+
// Loop in case of contention...
var spinner = new SpinWait();
while (true)
int slotsIndex = currentHead & _slotsMask;
// Read the sequence number for the head position.
- int sequenceNumber = Volatile.Read(ref _slots[slotsIndex].SequenceNumber);
+ int sequenceNumber = Volatile.Read(ref slots[slotsIndex].SequenceNumber);
// We can peek from this slot if it's been filled by an enqueuer, which
// would have left the sequence number at pos+1.
int diff = sequenceNumber - (currentHead + 1);
if (diff == 0)
{
- result = resultUsed ? _slots[slotsIndex].Item : default(T);
+ result = resultUsed ? slots[slotsIndex].Item : default(T);
return true;
}
else if (diff < 0)
/// </summary>
public bool TryEnqueue(T item)
{
+ Slot[] slots = _slots;
+
// Loop in case of contention...
var spinner = new SpinWait();
while (true)
int slotsIndex = currentTail & _slotsMask;
// Read the sequence number for the tail position.
- int sequenceNumber = Volatile.Read(ref _slots[slotsIndex].SequenceNumber);
+ int sequenceNumber = Volatile.Read(ref slots[slotsIndex].SequenceNumber);
// The slot is empty and ready for us to enqueue into it if its sequence
// number matches the slot.
{
// Successfully reserved the slot. Note that after the above CompareExchange, other threads
// trying to return will end up spinning until we do the subsequent Write.
- _slots[slotsIndex].Item = item;
- Volatile.Write(ref _slots[slotsIndex].SequenceNumber, currentTail + 1);
+ slots[slotsIndex].Item = item;
+ Volatile.Write(ref slots[slotsIndex].SequenceNumber, currentTail + 1);
return true;
}
}
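The comments in these hunks describe the per-slot sequence-number protocol of the queue segment. A single-threaded Python model of that protocol (illustrative sketch only; the class and method names are invented, and the C# volatile-read/CAS machinery is omitted):

```python
class SlotSegment:
    """Ring-buffer segment where each slot pairs an item with a sequence number.

    A slot at index i is empty and enqueueable when its sequence number equals
    the tail position, and filled and dequeueable when it equals head + 1.
    """

    def __init__(self, size):  # size must be a power of two
        self.mask = size - 1
        self.items = [None] * size
        self.seq = list(range(size))  # slot i starts "empty for position i"
        self.head = 0
        self.tail = 0

    def try_enqueue(self, item):
        i = self.tail & self.mask
        if self.seq[i] != self.tail:      # slot not yet freed by a dequeuer
            return False
        self.items[i] = item
        self.seq[i] = self.tail + 1       # mark filled for position `tail`
        self.tail += 1
        return True

    def try_dequeue(self):
        i = self.head & self.mask
        if self.seq[i] != self.head + 1:  # slot not yet filled by an enqueuer
            return None
        item = self.items[i]
        self.items[i] = None              # zero the slot, as in the C# code
        self.seq[i] = self.head + len(self.items)  # free it for the next lap
        self.head += 1
        return item
```

The hoisting of `_slots` into a local in the diff is purely a codegen optimization; it does not change this protocol.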
{
get
{
- return ref this [index.FromEnd ? _length - index.Value : index.Value];
+ // Evaluate the actual index first because it helps performance
+ int actualIndex = index.FromEnd ? _length - index.Value : index.Value;
+ return ref this [actualIndex];
}
}
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
public static Vector128<T> Zero
{
+ [Intrinsic]
get
{
ThrowIfUnsupportedType();
/// <typeparam name="U">The type of the vector the current instance should be reinterpreted as.</typeparam>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{U}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) or the type of the target (<typeparamref name="U" />) is not supported.</exception>
+ [Intrinsic]
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public Vector128<U> As<U>() where U : struct
{
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{byte}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{byte}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector128<byte> AsByte() => As<byte>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{double}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{double}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector128<double> AsDouble() => As<double>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{short}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{short}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector128<short> AsInt16() => As<short>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{int}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{int}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector128<int> AsInt32() => As<int>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{long}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{long}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector128<long> AsInt64() => As<long>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{sbyte}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{sbyte}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector128<sbyte> AsSByte() => As<sbyte>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{float}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{float}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector128<float> AsSingle() => As<float>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{ushort}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{ushort}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector128<ushort> AsUInt16() => As<ushort>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{uint}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{uint}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector128<uint> AsUInt32() => As<uint>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector128{ulong}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector128{ulong}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector128<ulong> AsUInt64() => As<ulong>();
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
public static Vector256<T> Zero
{
+ [Intrinsic]
get
{
ThrowIfUnsupportedType();
/// <typeparam name="U">The type of the vector the current instance should be reinterpreted as.</typeparam>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{U}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) or the type of the target (<typeparamref name="U" />) is not supported.</exception>
+ [Intrinsic]
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public Vector256<U> As<U>() where U : struct
{
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{byte}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{byte}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector256<byte> AsByte() => As<byte>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{double}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{double}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector256<double> AsDouble() => As<double>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{short}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{short}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector256<short> AsInt16() => As<short>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{int}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{int}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector256<int> AsInt32() => As<int>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{long}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{long}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector256<long> AsInt64() => As<long>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{sbyte}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{sbyte}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector256<sbyte> AsSByte() => As<sbyte>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{float}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{float}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector256<float> AsSingle() => As<float>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{ushort}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{ushort}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector256<ushort> AsUInt16() => As<ushort>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{uint}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{uint}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector256<uint> AsUInt32() => As<uint>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector256{ulong}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector256{ulong}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector256<ulong> AsUInt64() => As<ulong>();
/// <typeparam name="U">The type of the vector the current instance should be reinterpreted as.</typeparam>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{U}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) or the type of the target (<typeparamref name="U" />) is not supported.</exception>
+ [Intrinsic]
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public Vector64<U> As<U>() where U : struct
{
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{byte}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{byte}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector64<byte> AsByte() => As<byte>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{double}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{double}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector64<double> AsDouble() => As<double>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{short}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{short}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector64<short> AsInt16() => As<short>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{int}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{int}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector64<int> AsInt32() => As<int>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{long}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{long}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector64<long> AsInt64() => As<long>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{sbyte}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{sbyte}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector64<sbyte> AsSByte() => As<sbyte>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{float}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{float}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
public Vector64<float> AsSingle() => As<float>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{ushort}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{ushort}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector64<ushort> AsUInt16() => As<ushort>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{uint}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{uint}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector64<uint> AsUInt32() => As<uint>();
/// <summary>Reinterprets the current instance as a new <see cref="Vector64{ulong}" />.</summary>
/// <returns>The current instance reinterpreted as a new <see cref="Vector64{ulong}" />.</returns>
/// <exception cref="NotSupportedException">The type of the current instance (<typeparamref name="T" />) is not supported.</exception>
+ [Intrinsic]
[CLSCompliant(false)]
public Vector64<ulong> AsUInt64() => As<ulong>();
{
get
{
- return ref this [index.FromEnd ? _length - index.Value : index.Value];
+ // Evaluate the actual index first because it helps performance
+ int actualIndex = index.FromEnd ? _length - index.Value : index.Value;
+ return ref this [actualIndex];
}
}
/// </summary>
public int Value => (int)_value;
- private static Rune ChangeCase(Rune rune, CultureInfo culture, bool toUpper)
+ private static Rune ChangeCaseCultureAware(Rune rune, TextInfo textInfo, bool toUpper)
{
- if (culture == null)
- {
- ThrowHelper.ThrowArgumentNullException(ExceptionArgument.culture);
- }
-
- var textInfo = culture.TextInfo;
+ Debug.Assert(!GlobalizationMode.Invariant, "This should've been checked by the caller.");
+ Debug.Assert(textInfo != null, "This should've been checked by the caller.");
Span<char> original = stackalloc char[2]; // worst case scenario = 2 code units (for a surrogate pair)
Span<char> modified = stackalloc char[2]; // case change should preserve UTF-16 code unit count
return IsCategorySeparator(GetUnicodeCategoryNonAscii(value));
}
- public static Rune ToLower(Rune value, CultureInfo culture) => ChangeCase(value, culture, toUpper: false);
+ public static Rune ToLower(Rune value, CultureInfo culture)
+ {
+ if (culture is null)
+ {
+ ThrowHelper.ThrowArgumentNullException(ExceptionArgument.culture);
+ }
+
+ // We don't want to special-case ASCII here since the specified culture might handle
+ // ASCII characters differently than the invariant culture (e.g., Turkish I). Instead
+ // we'll just jump straight to the globalization tables if they're available.
+
+ if (GlobalizationMode.Invariant)
+ {
+ return ToLowerInvariant(value);
+ }
+
+ return ChangeCaseCultureAware(value, culture.TextInfo, toUpper: false);
+ }
public static Rune ToLowerInvariant(Rune value)
{
// Handle the most common case (ASCII data) first. Within the common case, we expect
// that there'll be a mix of lowercase & uppercase chars, so make the conversion branchless.
- if (value.IsAscii || GlobalizationMode.Invariant)
+ if (value.IsAscii)
{
// It's ok for us to use the UTF-16 conversion utility for this since the high
// 16 bits of the value will never be set so will be left unchanged.
return UnsafeCreate(Utf16Utility.ConvertAllAsciiCharsInUInt32ToLowercase(value._value));
}
+ if (GlobalizationMode.Invariant)
+ {
+ // If the value isn't ASCII and the globalization tables aren't available,
+ // case changing has no effect.
+ return value;
+ }
+
// Non-ASCII data requires going through the case folding tables.
- return ToLower(value, CultureInfo.InvariantCulture);
+ return ChangeCaseCultureAware(value, TextInfo.Invariant, toUpper: false);
}
- public static Rune ToUpper(Rune value, CultureInfo culture) => ChangeCase(value, culture, toUpper: true);
+ public static Rune ToUpper(Rune value, CultureInfo culture)
+ {
+ if (culture is null)
+ {
+ ThrowHelper.ThrowArgumentNullException(ExceptionArgument.culture);
+ }
+
+ // We don't want to special-case ASCII here since the specified culture might handle
+ // ASCII characters differently than the invariant culture (e.g., Turkish I). Instead
+ // we'll just jump straight to the globalization tables if they're available.
+
+ if (GlobalizationMode.Invariant)
+ {
+ return ToUpperInvariant(value);
+ }
+
+ return ChangeCaseCultureAware(value, culture.TextInfo, toUpper: true);
+ }
public static Rune ToUpperInvariant(Rune value)
{
// Handle the most common case (ASCII data) first. Within the common case, we expect
// that there'll be a mix of lowercase & uppercase chars, so make the conversion branchless.
- if (value.IsAscii || GlobalizationMode.Invariant)
+ if (value.IsAscii)
{
// It's ok for us to use the UTF-16 conversion utility for this since the high
// 16 bits of the value will never be set so will be left unchanged.
return UnsafeCreate(Utf16Utility.ConvertAllAsciiCharsInUInt32ToUppercase(value._value));
}
+ if (GlobalizationMode.Invariant)
+ {
+ // If the value isn't ASCII and the globalization tables aren't available,
+ // case changing has no effect.
+ return value;
+ }
+
// Non-ASCII data requires going through the case folding tables.
- return ToUpper(value, CultureInfo.InvariantCulture);
+ return ChangeCaseCultureAware(value, TextInfo.Invariant, toUpper: true);
}
}
}
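The reworked `ToLowerInvariant`/`ToUpperInvariant` above check the ASCII fast path first and only then the invariant-globalization fallback. A Python sketch of that decision order for lowercasing a code point (illustrative only; `to_lower_invariant` and `invariant_mode` are invented names, and the real branchless ASCII conversion and case-folding tables are elided):

```python
def to_lower_invariant(code_point: int, invariant_mode: bool = False) -> int:
    """Model of the invariant-lowercase fast paths for a Unicode scalar."""
    if code_point < 0x80:
        # ASCII fast path: handled without consulting globalization tables.
        if ord('A') <= code_point <= ord('Z'):
            return code_point + 0x20
        return code_point
    if invariant_mode:
        # Globalization tables unavailable: case changing of non-ASCII
        # values has no effect.
        return code_point
    # Otherwise the real code consults the case-folding tables; approximate
    # that here with Python's own str.lower().
    return ord(chr(code_point).lower()[0])
```

Checking the ASCII range before the invariant-mode flag keeps the common case cheap while preserving the "non-ASCII is unchanged in invariant mode" behavior noted in the diff.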
private static bool _isProcessExiting;
// Id used by contextsToUnload
- private readonly long id;
+ private readonly long _id;
// synchronization primitive to protect against usage of this instance while unloading
- private readonly object unloadLock = new object();
+ private readonly object _unloadLock;
// Indicates the state of this ALC (Alive or in Unloading state)
- private InternalState state;
+ private InternalState _state;
[DllImport(JitHelpers.QCall, CharSet = CharSet.Unicode)]
private static extern IntPtr InitializeAssemblyLoadContext(IntPtr ptrAssemblyLoadContext, bool fRepresentsTPALoadContext, bool isCollectible);
{
// Initialize the VM side of AssemblyLoadContext if not already done.
IsCollectible = isCollectible;
+ // The _unloadLock needs to be assigned after IsCollectible to ensure that the finalizer behaves correctly
+ // even if the following allocation fails or the thread is aborted between these two lines.
+ _unloadLock = new object();
+
+ if (!isCollectible)
+ {
+ // For a non-collectible AssemblyLoadContext, the finalizer should never be called, and thus the AssemblyLoadContext should not
+ // be on the finalizer queue.
+ GC.SuppressFinalize(this);
+ }
// Add this instance to the list of alive ALC
lock (ContextsToUnload)
Resolving = null;
Unloading = null;
- id = _nextId++;
- ContextsToUnload.Add(id, new WeakReference<AssemblyLoadContext>(this, true));
+ _id = _nextId++;
+ ContextsToUnload.Add(_id, new WeakReference<AssemblyLoadContext>(this, true));
}
}
~AssemblyLoadContext()
{
- // Only valid for a Collectible ALC. Non-collectible ALCs have the finalizer suppressed.
- // We get here only in case the explicit Unload was not initiated.
- Debug.Assert(state != InternalState.Unloading);
- InitiateUnload();
+ // Use the _unloadLock as a guard to detect the corner case when the constructor of the AssemblyLoadContext was not executed,
+ // e.g. due to the JIT failing to compile it.
+ if (_unloadLock != null)
+ {
+ // Only valid for a Collectible ALC. Non-collectible ALCs have the finalizer suppressed.
+ Debug.Assert(IsCollectible);
+ // We get here only in case the explicit Unload was not initiated.
+ Debug.Assert(_state != InternalState.Unloading);
+ InitiateUnload();
+ }
}
private void InitiateUnload()
// When in the Unloading state, we are not supposed to be called from the finalizer
// as the native side is holding a strong reference after calling Unload
- lock (unloadLock)
+ lock (_unloadLock)
{
if (!_isProcessExiting)
{
- Debug.Assert(state == InternalState.Alive);
+ Debug.Assert(_state == InternalState.Alive);
var thisStrongHandle = GCHandle.Alloc(this, GCHandleType.Normal);
var thisStrongHandlePtr = GCHandle.ToIntPtr(thisStrongHandle);
PrepareForAssemblyLoadContextRelease(m_pNativeAssemblyLoadContext, thisStrongHandlePtr);
}
- state = InternalState.Unloading;
+ _state = InternalState.Unloading;
}
if (!_isProcessExiting)
{
lock (ContextsToUnload)
{
- ContextsToUnload.Remove(id);
+ ContextsToUnload.Remove(_id);
}
}
}
throw new ArgumentNullException(nameof(assemblyPath));
}
- lock (unloadLock)
+ lock (_unloadLock)
{
VerifyIsAlive();
if (PathInternal.IsPartiallyQualified(assemblyPath))
throw new ArgumentNullException(nameof(nativeImagePath));
}
- lock (unloadLock)
+ lock (_unloadLock)
{
VerifyIsAlive();
{
throw new BadImageFormatException(SR.BadImageFormat_BadILFormat);
}
- lock (unloadLock)
+ lock (_unloadLock)
{
VerifyIsAlive();
private void VerifyIsAlive()
{
- if (state != InternalState.Alive)
+ if (_state != InternalState.Alive)
{
throw new InvalidOperationException(SR.GetResourceString("AssemblyLoadContext_Verify_NotUnloading"));
}
// callback parameters. These are strong references
RSExtSmartPtr<ICorDebugProcess> m_pProcess;
RSExtSmartPtr<ICorDebugThread> m_pThread;
- BYTE* m_pContext;
+ CONTEXT m_context;
ULONG32 m_contextSize;
public:
{
this->m_pProcess.Assign(pProcess);
this->m_pThread.Assign(pThread);
- this->m_pContext = pContext;
- this->m_contextSize = contextSize;
+
+ _ASSERTE(contextSize == sizeof(CONTEXT));
+ this->m_contextSize = min(contextSize, sizeof(CONTEXT));
+ memcpy(&(this->m_context), pContext, this->m_contextSize);
}
HRESULT Dispatch(DispatchArgs args)
{
- return args.GetCallback4()->DataBreakpoint(m_pProcess, m_pThread, m_pContext, m_contextSize);
+ return args.GetCallback4()->DataBreakpoint(m_pProcess, m_pThread, reinterpret_cast<BYTE*>(&m_context), m_contextSize);
}
}; // end class AfterGarbageCollectionEvent
#include <sys/resource.h>
#include <errno.h>
+#include "cgroup.h"
+
#ifndef SIZE_T_MAX
#define SIZE_T_MAX (~(size_t)0)
#endif
class CGroup
{
- char* m_memory_cgroup_path;
- char* m_cpu_cgroup_path;
+ static char* s_memory_cgroup_path;
+ static char* s_cpu_cgroup_path;
public:
- CGroup()
+ static void Initialize()
{
- m_memory_cgroup_path = FindCgroupPath(&IsMemorySubsystem);
- m_cpu_cgroup_path = FindCgroupPath(&IsCpuSubsystem);
+ s_memory_cgroup_path = FindCgroupPath(&IsMemorySubsystem);
+ s_cpu_cgroup_path = FindCgroupPath(&IsCpuSubsystem);
}
- ~CGroup()
+ static void Cleanup()
{
- free(m_memory_cgroup_path);
- free(m_cpu_cgroup_path);
+ free(s_memory_cgroup_path);
+ free(s_cpu_cgroup_path);
}
- bool GetPhysicalMemoryLimit(size_t *val)
+ static bool GetPhysicalMemoryLimit(size_t *val)
{
char *mem_limit_filename = nullptr;
bool result = false;
- if (m_memory_cgroup_path == nullptr)
+ if (s_memory_cgroup_path == nullptr)
return result;
- size_t len = strlen(m_memory_cgroup_path);
+ size_t len = strlen(s_memory_cgroup_path);
len += strlen(MEM_LIMIT_FILENAME);
mem_limit_filename = (char*)malloc(len+1);
if (mem_limit_filename == nullptr)
return result;
- strcpy(mem_limit_filename, m_memory_cgroup_path);
+ strcpy(mem_limit_filename, s_memory_cgroup_path);
strcat(mem_limit_filename, MEM_LIMIT_FILENAME);
result = ReadMemoryValueFromFile(mem_limit_filename, val);
free(mem_limit_filename);
return result;
}
- bool GetPhysicalMemoryUsage(size_t *val)
+ static bool GetPhysicalMemoryUsage(size_t *val)
{
char *mem_usage_filename = nullptr;
bool result = false;
- if (m_memory_cgroup_path == nullptr)
+ if (s_memory_cgroup_path == nullptr)
return result;
- size_t len = strlen(m_memory_cgroup_path);
+ size_t len = strlen(s_memory_cgroup_path);
len += strlen(MEM_USAGE_FILENAME);
mem_usage_filename = (char*)malloc(len+1);
if (mem_usage_filename == nullptr)
return result;
- strcpy(mem_usage_filename, m_memory_cgroup_path);
+ strcpy(mem_usage_filename, s_memory_cgroup_path);
strcat(mem_usage_filename, MEM_USAGE_FILENAME);
result = ReadMemoryValueFromFile(mem_usage_filename, val);
free(mem_usage_filename);
return result;
}
- bool GetCpuLimit(uint32_t *val)
+ static bool GetCpuLimit(uint32_t *val)
{
long long quota;
long long period;
return cgroup_path;
}
- bool ReadMemoryValueFromFile(const char* filename, size_t* val)
+ static bool ReadMemoryValueFromFile(const char* filename, size_t* val)
{
bool result = false;
char *line = nullptr;
return result;
}
- long long ReadCpuCGroupValue(const char* subsystemFilename){
+ static long long ReadCpuCGroupValue(const char* subsystemFilename)
+ {
char *filename = nullptr;
bool result = false;
long long val;
- if (m_cpu_cgroup_path == nullptr)
+ if (s_cpu_cgroup_path == nullptr)
return -1;
- filename = (char*)malloc(strlen(m_cpu_cgroup_path) + strlen(subsystemFilename) + 1);
+ filename = (char*)malloc(strlen(s_cpu_cgroup_path) + strlen(subsystemFilename) + 1);
if (filename == nullptr)
return -1;
- strcpy(filename, m_cpu_cgroup_path);
+ strcpy(filename, s_cpu_cgroup_path);
strcat(filename, subsystemFilename);
result = ReadLongLongValueFromFile(filename, &val);
free(filename);
return val;
}
- bool ReadLongLongValueFromFile(const char* filename, long long* val)
+ static bool ReadLongLongValueFromFile(const char* filename, long long* val)
{
bool result = false;
char *line = nullptr;
}
};
+char *CGroup::s_memory_cgroup_path = nullptr;
+char *CGroup::s_cpu_cgroup_path = nullptr;
+
+void InitializeCGroup()
+{
+ CGroup::Initialize();
+}
+
+void CleanupCGroup()
+{
+ CGroup::Cleanup();
+}
+
size_t GetRestrictedPhysicalMemoryLimit()
{
- CGroup cgroup;
size_t physical_memory_limit;
- if (!cgroup.GetPhysicalMemoryLimit(&physical_memory_limit))
+ if (!CGroup::GetPhysicalMemoryLimit(&physical_memory_limit))
physical_memory_limit = SIZE_T_MAX;
struct rlimit curr_rlimit;
bool result = false;
size_t linelen;
char* line = nullptr;
- CGroup cgroup;
if (val == nullptr)
return false;
// Linux uses cgroup usage to trigger oom kills.
- if (cgroup.GetPhysicalMemoryUsage(val))
+ if (CGroup::GetPhysicalMemoryUsage(val))
return true;
// process resident set size.
bool GetCpuLimit(uint32_t* val)
{
- CGroup cgroup;
-
if (val == nullptr)
return false;
- return cgroup.GetCpuLimit(val);
+ return CGroup::GetCpuLimit(val);
}
--- /dev/null
+// Licensed to the .NET Foundation under one or more agreements.
+// The .NET Foundation licenses this file to you under the MIT license.
+// See the LICENSE file in the project root for more information.
+
+
+#ifndef __CGROUP_H__
+#define __CGROUP_H__
+
+void InitializeCGroup();
+void CleanupCGroup();
+
+#endif // __CGROUP_H__
+
#include <errno.h>
#include <unistd.h> // sysconf
#include "globals.h"
+#include "cgroup.h"
#if defined(_ARM_) || defined(_ARM64_)
#define SYSCONF_GET_NUMPROCS _SC_NPROCESSORS_CONF
}
#endif // HAVE_MACH_ABSOLUTE_TIME
+ InitializeCGroup();
+
return true;
}
assert(ret == 0);
munmap(g_helperPage, OS_PAGE_SIZE);
+
+ CleanupCGroup();
}
// Get numeric id of the current thread if possible on the
}
else if (curAssertion->op1.kind == O1K_CONSTANT_LOOP_BND)
{
- printf("Loop_Bnd");
+ printf("Const_Loop_Bnd");
vnStore->vnDump(this, curAssertion->op1.vn);
}
else if (curAssertion->op1.kind == O1K_VALUE_NUMBER)
goto DONE_ASSERTION; // Don't make an assertion
}
+ // If we're making a copy of a "normalize on load" lclvar then the destination
+ // has to be "normalize on load" as well, otherwise we risk skipping normalization.
+ if (lclVar2->lvNormalizeOnLoad() && !lclVar->lvNormalizeOnLoad())
+ {
+ goto DONE_ASSERTION; // Don't make an assertion
+ }
+
// If the local variable has its address exposed then bail
if (lclVar2->lvAddrExposed)
{
/*****************************************************************************
*
+ * Given a set of "assertions" to search for, find an assertion of the form
+ * op == 0 or op != 0
+ *
+ */
+AssertionIndex Compiler::optGlobalAssertionIsEqualOrNotEqualZero(ASSERT_VALARG_TP assertions, GenTree* op1)
+{
+ if (BitVecOps::IsEmpty(apTraits, assertions))
+ {
+ return NO_ASSERTION_INDEX;
+ }
+ BitVecOps::Iter iter(apTraits, assertions);
+ unsigned index = 0;
+ while (iter.NextElem(&index))
+ {
+ AssertionIndex assertionIndex = GetAssertionIndex(index);
+ if (assertionIndex > optAssertionCount)
+ {
+ break;
+ }
+ AssertionDsc* curAssertion = optGetAssertion(assertionIndex);
+ if ((curAssertion->assertionKind != OAK_EQUAL && curAssertion->assertionKind != OAK_NOT_EQUAL))
+ {
+ continue;
+ }
+
+ if ((curAssertion->op1.vn == vnStore->VNConservativeNormalValue(op1->gtVNPair)) &&
+ (curAssertion->op2.vn == vnStore->VNZeroForType(op1->TypeGet())))
+ {
+ return assertionIndex;
+ }
+ }
+ return NO_ASSERTION_INDEX;
+}
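This helper feeds the transformation performed in optAssertionPropGlobal_RelOp below: when an assertion says the relop's value number is equal (or not equal) to zero, the whole relop folds to the constant 0 or 1. A toy model of that fold (the names mirror OAK_EQUAL/OAK_NOT_EQUAL but are illustrative, not JIT APIs):

```python
def try_fold_relop(assertions, relop_vn):
    # 'assertions' maps a value number to a known ==0 / !=0 fact. Returns the
    # folded constant, or None when no applicable assertion exists and the
    # tree must be left alone.
    fact = assertions.get(relop_vn)
    if fact == "OAK_EQUAL":
        return 0   # the relop is known to be false
    if fact == "OAK_NOT_EQUAL":
        return 1   # the relop is known to be true
    return None
```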
+
+/*****************************************************************************
+ *
* Given a tree consisting of a RelOp and a set of available assertions
* we try to propagate an assertion and modify the RelOp tree if we can.
* We pass in the root of the tree via 'stmt', for local copy prop 'stmt' will be nullptr
{
assert(tree->OperKind() & GTK_RELOP);
- //
- // Currently only GT_EQ or GT_NE are supported Relops for AssertionProp
- //
- if ((tree->gtOper != GT_EQ) && (tree->gtOper != GT_NE))
- {
- return nullptr;
- }
-
if (!optLocalAssertionProp)
{
// If global assertion prop then use value numbering.
return optAssertionPropGlobal_RelOp(assertions, tree, stmt);
}
- else
+
+ //
+ // Currently only GT_EQ or GT_NE are supported Relops for local AssertionProp
+ //
+
+ if ((tree->gtOper != GT_EQ) && (tree->gtOper != GT_NE))
{
- // If local assertion prop then use variable based prop.
- return optAssertionPropLocal_RelOp(assertions, tree, stmt);
+ return nullptr;
}
+
+ // If local assertion prop then use variable based prop.
+ return optAssertionPropLocal_RelOp(assertions, tree, stmt);
}
/*************************************************************************************
*/
GenTree* Compiler::optAssertionPropGlobal_RelOp(ASSERT_VALARG_TP assertions, GenTree* tree, GenTree* stmt)
{
- assert(tree->OperGet() == GT_EQ || tree->OperGet() == GT_NE);
-
GenTree* newTree = tree;
GenTree* op1 = tree->gtOp.gtOp1;
GenTree* op2 = tree->gtOp.gtOp2;
+ // Look for assertions of the form (tree EQ/NE 0)
+ AssertionIndex index = optGlobalAssertionIsEqualOrNotEqualZero(assertions, tree);
+
+ if (index != NO_ASSERTION_INDEX)
+ {
+ // We know that this relop evaluates to a constant: 0 for OAK_EQUAL, 1 for OAK_NOT_EQUAL
+ AssertionDsc* curAssertion = optGetAssertion(index);
+
+#ifdef DEBUG
+ if (verbose)
+ {
+ printf("\nVN relop based constant assertion prop in " FMT_BB ":\n", compCurBB->bbNum);
+ printf("Assertion index=#%02u: ", index);
+ printTreeID(tree);
+ printf(" %s 0\n", (curAssertion->assertionKind == OAK_EQUAL) ? "==" : "!=");
+ }
+#endif
+
+ // Bail out if tree is not side effect free.
+ if ((tree->gtFlags & GTF_SIDE_EFFECT) != 0)
+ {
+ JITDUMP("sorry, blocked by side effects\n");
+ return nullptr;
+ }
+
+ if (curAssertion->assertionKind == OAK_EQUAL)
+ {
+ tree->ChangeOperConst(GT_CNS_INT);
+ tree->gtIntCon.gtIconVal = 0;
+ }
+ else
+ {
+ tree->ChangeOperConst(GT_CNS_INT);
+ tree->gtIntCon.gtIconVal = 1;
+ }
+
+ newTree = fgMorphTree(tree);
+ DISPTREE(newTree);
+ return optAssertionProp_Update(newTree, tree, stmt);
+ }
+
+ // Else check if we have an equality check involving a local
+ if (!tree->OperIs(GT_EQ, GT_NE))
+ {
+ return nullptr;
+ }
+
if (op1->gtOper != GT_LCL_VAR)
{
return nullptr;
}
// Find an equal or not equal assertion involving "op1" and "op2".
- AssertionIndex index = optGlobalAssertionIsEqualOrNotEqual(assertions, op1, op2);
+ index = optGlobalAssertionIsEqualOrNotEqual(assertions, op1, op2);
+
if (index == NO_ASSERTION_INDEX)
{
return nullptr;
void genHWIntrinsic_R_R_RM_R(GenTreeHWIntrinsic* node, instruction ins);
void genHWIntrinsic_R_R_R_RM(
instruction ins, emitAttr attr, regNumber targetReg, regNumber op1Reg, regNumber op2Reg, GenTree* op3);
+ void genBaseIntrinsic(GenTreeHWIntrinsic* node);
void genSSEIntrinsic(GenTreeHWIntrinsic* node);
void genSSE2Intrinsic(GenTreeHWIntrinsic* node);
void genSSE41Intrinsic(GenTreeHWIntrinsic* node);
}
__fallthrough;
case InstructionSet_SSE:
- return JitConfig.EnableSSE() != 0;
+ if (JitConfig.EnableSSE() == 0)
+ {
+ return false;
+ }
+ __fallthrough;
+ case InstructionSet_Base:
+ return (JitConfig.EnableHWIntrinsic() != 0);
// TODO: BMI1/BMI2 actually don't depend on AVX, they depend on the VEX encoding; which is currently controlled
// by InstructionSet_AVX
if (!jitFlags.IsSet(JitFlags::JIT_FLAG_PREJIT))
{
+ if (configEnableISA(InstructionSet_Base))
+ {
+ opts.setSupportedISA(InstructionSet_Base);
+ }
if (configEnableISA(InstructionSet_SSE))
{
opts.setSupportedISA(InstructionSet_SSE);
NamedIntrinsic lookupNamedIntrinsic(CORINFO_METHOD_HANDLE method);
#ifdef FEATURE_HW_INTRINSICS
+ GenTree* impBaseIntrinsic(NamedIntrinsic intrinsic, CORINFO_METHOD_HANDLE method, CORINFO_SIG_INFO* sig);
GenTree* impHWIntrinsic(NamedIntrinsic intrinsic,
CORINFO_METHOD_HANDLE method,
CORINFO_SIG_INFO* sig,
// Used for Relop propagation.
AssertionIndex optGlobalAssertionIsEqualOrNotEqual(ASSERT_VALARG_TP assertions, GenTree* op1, GenTree* op2);
+ AssertionIndex optGlobalAssertionIsEqualOrNotEqualZero(ASSERT_VALARG_TP assertions, GenTree* op1);
AssertionIndex optLocalAssertionIsEqualOrNotEqual(
optOp1Kind op1Kind, unsigned lclNum, optOp2Kind op2Kind, ssize_t cnsVal, ASSERT_VALARG_TP assertions);
else
{
// Munge any pointers if we want diff-able disassembly
- if (emitComp->opts.disDiffable)
+ // disp is assumed to be a pointer when it is outside the range (-1M, +1M), i.e. when its top bits are neither all 0 nor all 1
+ if (!frameRef && emitComp->opts.disDiffable && (static_cast<size_t>((disp >> 20) + 1) > 1))
{
- ssize_t top12bits = (disp >> 20);
- if ((top12bits != 0) && (top12bits != -1))
+ if (nsep)
{
- disp = 0xD1FFAB1E;
+ printf("+");
}
+ printf("D1FFAB1EH");
}
-
- if (disp > 0)
+ else if (disp > 0)
{
if (nsep)
{
{
printf("-%04XH", -disp);
}
- else if ((disp & 0x7F000000) != 0x7F000000)
+ else if (disp < -0xFFFFFF)
{
+ if (nsep)
+ {
+ printf("+");
+ }
printf("%08XH", disp);
}
else
{
printf("%d", val);
}
- else if ((val > 0) || ((val & 0x7F000000) != 0x7F000000))
+ else if ((val > 0) || (val < -0xFFFFFF))
{
printf("0x%IX", val);
}
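The `(disp >> 20) + 1 > 1` test above is a compact way of checking that a displacement lies outside (-1M, +1M). Its effect can be modeled directly (a sketch of the heuristic, not the emitter's code):

```python
def looks_like_pointer(disp):
    # An arithmetic shift by 20 leaves 0 for values in [0, 1M) and -1 for
    # values in [-1M, 0); adding 1 maps both cases into {0, 1}. Anything else
    # has meaningful top bits and is treated as a pointer to be munged.
    return ((disp >> 20) + 1) not in (0, 1)
```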
switch (isa)
{
+ case InstructionSet_Base:
+ genBaseIntrinsic(node);
+ break;
case InstructionSet_SSE:
genSSEIntrinsic(node);
break;
}
//------------------------------------------------------------------------
+// genBaseIntrinsic: Generates the code for a base hardware intrinsic node
+//
+// Arguments:
+// node - The hardware intrinsic node
+//
+void CodeGen::genBaseIntrinsic(GenTreeHWIntrinsic* node)
+{
+ NamedIntrinsic intrinsicId = node->gtHWIntrinsicId;
+ regNumber targetReg = node->gtRegNum;
+ var_types targetType = node->TypeGet();
+ var_types baseType = node->gtSIMDBaseType;
+
+ assert(node->gtGetOp1() == nullptr);
+ assert(node->gtGetOp2() == nullptr);
+ assert(baseType >= TYP_BYTE && baseType <= TYP_DOUBLE);
+
+ emitter* emit = getEmitter();
+ emitAttr attr = EA_ATTR(node->gtSIMDSize);
+
+ switch (intrinsicId)
+ {
+ case NI_Base_Vector128_Zero:
+ {
+ // When SSE2 is supported, we generate pxor for integral types otherwise just use xorps
+ instruction ins =
+ (compiler->compSupports(InstructionSet_SSE2) && varTypeIsIntegral(baseType)) ? INS_pxor : INS_xorps;
+ emit->emitIns_SIMD_R_R_R(ins, attr, targetReg, targetReg, targetReg);
+ break;
+ }
+
+ case NI_Base_Vector256_Zero:
+ {
+ // When AVX2 is supported, we generate pxor for integral types otherwise just use xorps
+ instruction ins =
+ (compiler->compSupports(InstructionSet_AVX2) && varTypeIsIntegral(baseType)) ? INS_pxor : INS_xorps;
+ emit->emitIns_SIMD_R_R_R(ins, attr, targetReg, targetReg, targetReg);
+ break;
+ }
+
+ default:
+ {
+ unreached();
+ break;
+ }
+ }
+
+ genProduceReg(node);
+}
+
+//------------------------------------------------------------------------
// genSSEIntrinsic: Generates the code for an SSE hardware intrinsic node
//
// Arguments:
// Base
HARDWARE_INTRINSIC(NI_ARM64_BASE_CLS, Base, LeadingSignCount, UnaryOp, INS_invalid, INS_cls, INS_cls, None )
HARDWARE_INTRINSIC(NI_ARM64_BASE_CLZ, Base, LeadingZeroCount, UnaryOp, INS_invalid, INS_clz, INS_clz, None )
+HARDWARE_INTRINSIC(NI_Base_Vector64_AsByte, Base, AsByte, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector64_AsInt16, Base, AsInt16, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector64_AsInt32, Base, AsInt32, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector64_AsSByte, Base, AsSByte, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector64_AsSingle, Base, AsSingle, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector64_AsUInt16, Base, AsUInt16, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector64_AsUInt32, Base, AsUInt32, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_As, Base, As, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsByte, Base, AsByte, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsDouble, Base, AsDouble, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsInt16, Base, AsInt16, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsInt32, Base, AsInt32, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsInt64, Base, AsInt64, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsSByte, Base, AsSByte, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsSingle, Base, AsSingle, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsUInt16, Base, AsUInt16, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsUInt32, Base, AsUInt32, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
+HARDWARE_INTRINSIC(NI_Base_Vector128_AsUInt64, Base, AsUInt64, UnaryOp, INS_invalid, INS_invalid, INS_invalid, None )
#if NYI
// Crc32
HARDWARE_INTRINSIC(NI_ARM64_CRC32_CRC32, Crc32, Crc32, CrcOp, INS_invalid, INS_invalid, INS_crc32, None )
HARDWARE_INTRINSIC(NI_ARM64_SIMD_Min, Simd, Min, SimdBinaryOp, INS_fmin, INS_smin, INS_umin, None )
HARDWARE_INTRINSIC(NI_ARM64_SIMD_Mul, Simd, Multiply, SimdBinaryOp, INS_fmul, INS_mul, INS_mul, None )
HARDWARE_INTRINSIC(NI_ARM64_SIMD_Sqrt, Simd, Sqrt, SimdUnaryOp, INS_fsqrt, INS_invalid, INS_invalid, None )
-HARDWARE_INTRINSIC(NI_ARM64_SIMD_StaticCast, Simd, StaticCast, SimdUnaryOp, INS_mov, INS_mov, INS_mov, None )
HARDWARE_INTRINSIC(NI_ARM64_SIMD_Sub, Simd, Subtract, SimdBinaryOp, INS_fsub, INS_sub, INS_sub, None )
HARDWARE_INTRINSIC(NI_ARM64_SIMD_GetItem, Simd, Extract, SimdExtractOp, INS_mov, INS_mov, INS_mov, None )
HARDWARE_INTRINSIC(NI_ARM64_SIMD_SetItem, Simd, Insert, SimdInsertOp, INS_mov, INS_mov, INS_mov, None )
8) Each intrinsic has one category with type of `enum HWIntrinsicCategory`, please see the definition of HWIntrinsicCategory for details
9) Each intrinsic has one or more flags with type of `enum HWIntrinsicFlag`
*/
+// ***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
+// Intrinsic ID Function name ISA ival SIMD size NumArg instructions Category Flags
+// {TYP_BYTE, TYP_UBYTE, TYP_SHORT, TYP_USHORT, TYP_INT, TYP_UINT, TYP_LONG, TYP_ULONG, TYP_FLOAT, TYP_DOUBLE}
+// ***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
+// Base Intrinsics
+HARDWARE_INTRINSIC(Base_Vector128_As, "As`1", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsByte, "AsByte", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsDouble, "AsDouble", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsInt16, "AsInt16", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsInt32, "AsInt32", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsInt64, "AsInt64", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsSByte, "AsSByte", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsSingle, "AsSingle", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsUInt16, "AsUInt16", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsUInt32, "AsUInt32", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_AsUInt64, "AsUInt64", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector128_Zero, "get_Zero", Base, -1, 16, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_As, "As`1", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsByte, "AsByte", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsDouble, "AsDouble", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsInt16, "AsInt16", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsInt32, "AsInt32", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsInt64, "AsInt64", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsSByte, "AsSByte", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsSingle, "AsSingle", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsUInt16, "AsUInt16", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsUInt32, "AsUInt32", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_AsUInt64, "AsUInt64", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
+HARDWARE_INTRINSIC(Base_Vector256_Zero, "get_Zero", Base, -1, 32, 0, {INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid, INS_invalid}, HW_Category_Helper, HW_Flag_NoContainment|HW_Flag_NoRMWSemantics)
// ***************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
// Intrinsic ID Function name ISA ival SIMD size NumArg instructions Category Flags
}
}
- if ((HWIntrinsicInfo::IsOneTypeGeneric(intrinsic) || HWIntrinsicInfo::IsTwoTypeGeneric(intrinsic)) &&
- !HWIntrinsicInfo::HasSpecialImport(intrinsic))
+ if (HWIntrinsicInfo::IsOneTypeGeneric(intrinsic) && !HWIntrinsicInfo::HasSpecialImport(intrinsic))
{
if (!varTypeIsArithmetic(baseType))
{
return impUnsupportedHWIntrinsic(CORINFO_HELP_THROW_TYPE_NOT_SUPPORTED, method, sig, mustExpand);
}
-
- if (HWIntrinsicInfo::IsTwoTypeGeneric(intrinsic))
- {
- // StaticCast<T, U> has two type parameters.
- assert(numArgs == 1);
- var_types srcType = getBaseTypeOfSIMDType(info.compCompHnd->getArgClass(sig, sig->args));
- if (!varTypeIsArithmetic(srcType))
- {
- return impUnsupportedHWIntrinsic(CORINFO_HELP_THROW_TYPE_NOT_SUPPORTED, method, sig, mustExpand);
- }
- }
}
if (HWIntrinsicInfo::IsFloatingPointUsed(intrinsic))
// Generic
// - must throw NotSupportException if the type argument is not numeric type
HW_Flag_OneTypeGeneric = 0x4,
- // Two-type Generic
- // - the intrinsic has two type parameters
- HW_Flag_TwoTypeGeneric = 0x8,
// NoCodeGen
// - should be transformed in the compiler front-end, cannot reach CodeGen
return (flags & HW_Flag_OneTypeGeneric) != 0;
}
- static bool IsTwoTypeGeneric(NamedIntrinsic id)
- {
- HWIntrinsicFlag flags = lookupFlags(id);
- return (flags & HW_Flag_TwoTypeGeneric) != 0;
- }
-
static bool RequiresCodegen(NamedIntrinsic id)
{
HWIntrinsicFlag flags = lookupFlags(id);
ni = lookupNamedIntrinsic(method);
#ifdef FEATURE_HW_INTRINSICS
- if (ni > NI_HW_INTRINSIC_START && ni < NI_HW_INTRINSIC_END)
+ switch (ni)
+ {
+#if defined(_TARGET_ARM64_)
+ case NI_Base_Vector64_AsByte:
+ case NI_Base_Vector64_AsInt16:
+ case NI_Base_Vector64_AsInt32:
+ case NI_Base_Vector64_AsSByte:
+ case NI_Base_Vector64_AsSingle:
+ case NI_Base_Vector64_AsUInt16:
+ case NI_Base_Vector64_AsUInt32:
+#endif // _TARGET_ARM64_
+ case NI_Base_Vector128_As:
+ case NI_Base_Vector128_AsByte:
+ case NI_Base_Vector128_AsDouble:
+ case NI_Base_Vector128_AsInt16:
+ case NI_Base_Vector128_AsInt32:
+ case NI_Base_Vector128_AsInt64:
+ case NI_Base_Vector128_AsSByte:
+ case NI_Base_Vector128_AsSingle:
+ case NI_Base_Vector128_AsUInt16:
+ case NI_Base_Vector128_AsUInt32:
+ case NI_Base_Vector128_AsUInt64:
+#if defined(_TARGET_XARCH_)
+ case NI_Base_Vector128_Zero:
+ case NI_Base_Vector256_As:
+ case NI_Base_Vector256_AsByte:
+ case NI_Base_Vector256_AsDouble:
+ case NI_Base_Vector256_AsInt16:
+ case NI_Base_Vector256_AsInt32:
+ case NI_Base_Vector256_AsInt64:
+ case NI_Base_Vector256_AsSByte:
+ case NI_Base_Vector256_AsSingle:
+ case NI_Base_Vector256_AsUInt16:
+ case NI_Base_Vector256_AsUInt32:
+ case NI_Base_Vector256_AsUInt64:
+ case NI_Base_Vector256_Zero:
+#endif // _TARGET_XARCH_
+ {
+ return impBaseIntrinsic(ni, method, sig);
+ }
+
+ default:
+ {
+ break;
+ }
+ }
+
+ if ((ni > NI_HW_INTRINSIC_START) && (ni < NI_HW_INTRINSIC_END))
{
GenTree* hwintrinsic = impHWIntrinsic(ni, method, sig, mustExpand);
return retNode;
}
+#ifdef FEATURE_HW_INTRINSICS
+//------------------------------------------------------------------------
+// impBaseIntrinsic: dispatch intrinsics to their own implementation
+//
+// Arguments:
+// intrinsic -- id of the intrinsic function.
+// method -- method handle of the intrinsic function.
+// sig -- signature of the intrinsic call.
+//
+// Return Value:
+// the expanded intrinsic.
+//
+GenTree* Compiler::impBaseIntrinsic(NamedIntrinsic intrinsic, CORINFO_METHOD_HANDLE method, CORINFO_SIG_INFO* sig)
+{
+ GenTree* retNode = nullptr;
+ unsigned simdSize = 0;
+ var_types baseType = getBaseTypeAndSizeOfSIMDType(sig->retTypeClass, &simdSize);
+ var_types retType = getSIMDTypeForSize(simdSize);
+
+ if (sig->hasThis())
+ {
+ CORINFO_CLASS_HANDLE thisClass = info.compCompHnd->getArgClass(sig, sig->args);
+ var_types thisType = getBaseTypeOfSIMDType(thisClass);
+
+ if (!varTypeIsArithmetic(thisType))
+ {
+ return nullptr;
+ }
+ }
+
+ if (!varTypeIsArithmetic(baseType))
+ {
+ return nullptr;
+ }
+
+ switch (intrinsic)
+ {
+#if defined(_TARGET_ARM64_)
+ case NI_Base_Vector64_AsByte:
+ case NI_Base_Vector64_AsInt16:
+ case NI_Base_Vector64_AsInt32:
+ case NI_Base_Vector64_AsSByte:
+ case NI_Base_Vector64_AsSingle:
+ case NI_Base_Vector64_AsUInt16:
+ case NI_Base_Vector64_AsUInt32:
+#endif // _TARGET_ARM64_
+ case NI_Base_Vector128_As:
+ case NI_Base_Vector128_AsByte:
+ case NI_Base_Vector128_AsDouble:
+ case NI_Base_Vector128_AsInt16:
+ case NI_Base_Vector128_AsInt32:
+ case NI_Base_Vector128_AsInt64:
+ case NI_Base_Vector128_AsSByte:
+ case NI_Base_Vector128_AsSingle:
+ case NI_Base_Vector128_AsUInt16:
+ case NI_Base_Vector128_AsUInt32:
+ case NI_Base_Vector128_AsUInt64:
+#if defined(_TARGET_XARCH_)
+ case NI_Base_Vector256_As:
+ case NI_Base_Vector256_AsByte:
+ case NI_Base_Vector256_AsDouble:
+ case NI_Base_Vector256_AsInt16:
+ case NI_Base_Vector256_AsInt32:
+ case NI_Base_Vector256_AsInt64:
+ case NI_Base_Vector256_AsSByte:
+ case NI_Base_Vector256_AsSingle:
+ case NI_Base_Vector256_AsUInt16:
+ case NI_Base_Vector256_AsUInt32:
+ case NI_Base_Vector256_AsUInt64:
+#endif // _TARGET_XARCH_
+ {
+ // We fold away the cast here, as it only exists to satisfy
+ // the type system. It is safe to do this here since the retNode type
+ // and the signature return type are both the same TYP_SIMD.
+
+ assert(sig->numArgs == 0);
+ assert(sig->hasThis());
+
+ retNode = impSIMDPopStack(retType, true, sig->retTypeClass);
+ SetOpLclRelatedToSIMDIntrinsic(retNode);
+ assert(retNode->gtType == getSIMDTypeForSize(getSIMDTypeSizeInBytes(sig->retTypeSigClass)));
+ break;
+ }
+
+#ifdef _TARGET_XARCH_
+ case NI_Base_Vector128_Zero:
+ {
+ assert(sig->numArgs == 0);
+
+ if (compSupports(InstructionSet_SSE))
+ {
+ retNode = gtNewSimdHWIntrinsicNode(retType, intrinsic, baseType, simdSize);
+ }
+ break;
+ }
+
+ case NI_Base_Vector256_Zero:
+ {
+ assert(sig->numArgs == 0);
+
+ if (compSupports(InstructionSet_AVX))
+ {
+ retNode = gtNewSimdHWIntrinsicNode(retType, intrinsic, baseType, simdSize);
+ }
+ break;
+ }
+#endif // _TARGET_XARCH_
+
+ default:
+ {
+ unreached();
+ break;
+ }
+ }
+
+ return retNode;
+}
+#endif // FEATURE_HW_INTRINSICS
+
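The `As*` reinterpret cases that `impBaseIntrinsic` folds away can be pictured, outside the JIT, as a width-preserving bit reinterpretation: same bits, new static type. A standalone sketch of that idea, using hypothetical vector structs rather than the JIT's actual types:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical 128-bit vector views that differ only in element type.
struct Vec128Float  { float    e[4]; };
struct Vec128UInt32 { uint32_t e[4]; };

// The As* intrinsics are pure reinterpretations: the bits are untouched and
// only the static type changes, which is why the cast can be folded away.
static Vec128UInt32 AsUInt32(const Vec128Float& v)
{
    static_assert(sizeof(Vec128UInt32) == sizeof(Vec128Float), "same width");
    Vec128UInt32 r;
    std::memcpy(&r, &v, sizeof(r)); // memcpy sidesteps strict-aliasing UB
    return r;
}
```

Because both views have identical size, the assert in the function above mirrors the safety condition the JIT relies on when it pops the stack entry unchanged.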
GenTree* Compiler::impMathIntrinsic(CORINFO_METHOD_HANDLE method,
CORINFO_SIG_INFO* sig,
var_types callType,
result = NI_System_Collections_Generic_EqualityComparer_get_Default;
}
}
-
#ifdef FEATURE_HW_INTRINSICS
-#if defined(_TARGET_XARCH_)
- if (strcmp(namespaceName, "System.Runtime.Intrinsics.X86") == 0)
+ else if (strncmp(namespaceName, "System.Runtime.Intrinsics", 25) == 0)
{
- result = HWIntrinsicInfo::lookupId(className, methodName);
- }
+ namespaceName += 25;
+
+ if (namespaceName[0] == '\0')
+ {
+ if (strncmp(className, "Vector", 6) == 0)
+ {
+ className += 6;
+
+#if defined(_TARGET_ARM64_)
+ if (strncmp(className, "64", 2) == 0)
+ {
+ className += 2;
+
+ if (strcmp(className, "`1") == 0)
+ {
+ if (strncmp(methodName, "As", 2) == 0)
+ {
+ methodName += 2;
+
+ // Vector64_As, Vector64_AsDouble, Vector64_AsInt64, and Vector64_AsUInt64
+ // are not currently supported as they require additional plumbing to be
+ // supported by the JIT as TYP_SIMD8.
+
+ if (strcmp(methodName, "Byte") == 0)
+ {
+ result = NI_Base_Vector64_AsByte;
+ }
+ else if (strcmp(methodName, "Int16") == 0)
+ {
+ result = NI_Base_Vector64_AsInt16;
+ }
+ else if (strcmp(methodName, "Int32") == 0)
+ {
+ result = NI_Base_Vector64_AsInt32;
+ }
+ else if (strcmp(methodName, "SByte") == 0)
+ {
+ result = NI_Base_Vector64_AsSByte;
+ }
+ else if (strcmp(methodName, "Single") == 0)
+ {
+ result = NI_Base_Vector64_AsSingle;
+ }
+ else if (strcmp(methodName, "UInt16") == 0)
+ {
+ result = NI_Base_Vector64_AsUInt16;
+ }
+ else if (strcmp(methodName, "UInt32") == 0)
+ {
+ result = NI_Base_Vector64_AsUInt32;
+ }
+ }
+ }
+ }
+ else
+#endif // _TARGET_ARM64_
+ if (strncmp(className, "128", 3) == 0)
+ {
+ className += 3;
+
+ if (strcmp(className, "`1") == 0)
+ {
+ if (strncmp(methodName, "As", 2) == 0)
+ {
+ methodName += 2;
+
+ if (strcmp(methodName, "`1") == 0)
+ {
+ result = NI_Base_Vector128_As;
+ }
+ else if (strcmp(methodName, "Byte") == 0)
+ {
+ result = NI_Base_Vector128_AsByte;
+ }
+ else if (strcmp(methodName, "Double") == 0)
+ {
+ result = NI_Base_Vector128_AsDouble;
+ }
+ else if (strcmp(methodName, "Int16") == 0)
+ {
+ result = NI_Base_Vector128_AsInt16;
+ }
+ else if (strcmp(methodName, "Int32") == 0)
+ {
+ result = NI_Base_Vector128_AsInt32;
+ }
+ else if (strcmp(methodName, "Int64") == 0)
+ {
+ result = NI_Base_Vector128_AsInt64;
+ }
+ else if (strcmp(methodName, "SByte") == 0)
+ {
+ result = NI_Base_Vector128_AsSByte;
+ }
+ else if (strcmp(methodName, "Single") == 0)
+ {
+ result = NI_Base_Vector128_AsSingle;
+ }
+ else if (strcmp(methodName, "UInt16") == 0)
+ {
+ result = NI_Base_Vector128_AsUInt16;
+ }
+ else if (strcmp(methodName, "UInt32") == 0)
+ {
+ result = NI_Base_Vector128_AsUInt32;
+ }
+ else if (strcmp(methodName, "UInt64") == 0)
+ {
+ result = NI_Base_Vector128_AsUInt64;
+ }
+ }
+#if defined(_TARGET_XARCH_)
+ else if (strcmp(methodName, "get_Zero") == 0)
+ {
+ result = NI_Base_Vector128_Zero;
+ }
+#endif // _TARGET_XARCH_
+ }
+ }
+#if defined(_TARGET_XARCH_)
+ else if (strncmp(className, "256", 3) == 0)
+ {
+ className += 3;
+
+ if (strcmp(className, "`1") == 0)
+ {
+ if (strncmp(methodName, "As", 2) == 0)
+ {
+ methodName += 2;
+
+ if (strcmp(methodName, "`1") == 0)
+ {
+ result = NI_Base_Vector256_As;
+ }
+ else if (strcmp(methodName, "Byte") == 0)
+ {
+ result = NI_Base_Vector256_AsByte;
+ }
+ else if (strcmp(methodName, "Double") == 0)
+ {
+ result = NI_Base_Vector256_AsDouble;
+ }
+ else if (strcmp(methodName, "Int16") == 0)
+ {
+ result = NI_Base_Vector256_AsInt16;
+ }
+ else if (strcmp(methodName, "Int32") == 0)
+ {
+ result = NI_Base_Vector256_AsInt32;
+ }
+ else if (strcmp(methodName, "Int64") == 0)
+ {
+ result = NI_Base_Vector256_AsInt64;
+ }
+ else if (strcmp(methodName, "SByte") == 0)
+ {
+ result = NI_Base_Vector256_AsSByte;
+ }
+ else if (strcmp(methodName, "Single") == 0)
+ {
+ result = NI_Base_Vector256_AsSingle;
+ }
+ else if (strcmp(methodName, "UInt16") == 0)
+ {
+ result = NI_Base_Vector256_AsUInt16;
+ }
+ else if (strcmp(methodName, "UInt32") == 0)
+ {
+ result = NI_Base_Vector256_AsUInt32;
+ }
+ else if (strcmp(methodName, "UInt64") == 0)
+ {
+ result = NI_Base_Vector256_AsUInt64;
+ }
+ }
+ else if (strcmp(methodName, "get_Zero") == 0)
+ {
+ result = NI_Base_Vector256_Zero;
+ }
+ }
+ }
+#endif // _TARGET_XARCH_
+ }
+ }
+#if defined(_TARGET_XARCH_)
+ else if (strcmp(namespaceName, ".X86") == 0)
+ {
+ result = HWIntrinsicInfo::lookupId(className, methodName);
+ }
#elif defined(_TARGET_ARM64_)
- if (strcmp(namespaceName, "System.Runtime.Intrinsics.Arm.Arm64") == 0)
- {
- result = lookupHWIntrinsic(className, methodName);
- }
+ else if (strcmp(namespaceName, ".Arm.Arm64") == 0)
+ {
+ result = lookupHWIntrinsic(className, methodName);
+ }
#else // !defined(_TARGET_XARCH_) && !defined(_TARGET_ARM64_)
#error Unsupported platform
#endif // !defined(_TARGET_XARCH_) && !defined(_TARGET_ARM64_)
+ }
#endif // FEATURE_HW_INTRINSICS
return result;
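The name matching above repeatedly consumes a known prefix with `strncmp`, advances the pointer past it, and then dispatches on the remainder. A minimal sketch of that pattern with illustrative class names (not the JIT's actual tables):

```cpp
#include <cassert>
#include <cstring>

enum VectorKind { VK_None, VK_Vector64, VK_Vector128, VK_Vector256 };

// Sketch of prefix-consuming dispatch: match "Vector", advance past it,
// then match the width and the generic arity suffix "`1".
static VectorKind ClassifyVectorClass(const char* className)
{
    if (strncmp(className, "Vector", 6) != 0)
        return VK_None;
    className += 6; // skip the shared "Vector" prefix

    if (strncmp(className, "64", 2) == 0 && strcmp(className + 2, "`1") == 0)
        return VK_Vector64;
    if (strncmp(className, "128", 3) == 0 && strcmp(className + 3, "`1") == 0)
        return VK_Vector128;
    if (strncmp(className, "256", 3) == 0 && strcmp(className + 3, "`1") == 0)
        return VK_Vector256;
    return VK_None;
}
```

Consuming each prefix once keeps the later comparisons short, which is the same reason the diff advances `namespaceName`, `className`, and `methodName` in turn.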
{
InstructionSet_ILLEGAL = 0,
#ifdef _TARGET_XARCH_
+ InstructionSet_Base,
// Start linear order SIMD instruction sets
// These ISAs have a strict generation-to-generation order.
- InstructionSet_SSE = 1,
- InstructionSet_SSE2 = 2,
- InstructionSet_SSE3 = 3,
- InstructionSet_SSSE3 = 4,
- InstructionSet_SSE41 = 5,
- InstructionSet_SSE42 = 6,
- InstructionSet_AVX = 7,
- InstructionSet_AVX2 = 8,
- // Reserve values <32 for future SIMD instruction sets (i.e., AVX512),
+ InstructionSet_SSE,
+ InstructionSet_SSE2,
+ InstructionSet_SSE3,
+ InstructionSet_SSSE3,
+ InstructionSet_SSE41,
+ InstructionSet_SSE42,
+ InstructionSet_AVX,
+ InstructionSet_AVX2,
// End linear order SIMD instruction sets.
-
- InstructionSet_AES = 32,
- InstructionSet_BMI1 = 33,
- InstructionSet_BMI2 = 34,
- InstructionSet_FMA = 35,
- InstructionSet_LZCNT = 36,
- InstructionSet_PCLMULQDQ = 37,
- InstructionSet_POPCNT = 38,
+ InstructionSet_AES,
+ InstructionSet_BMI1,
+ InstructionSet_BMI2,
+ InstructionSet_FMA,
+ InstructionSet_LZCNT,
+ InstructionSet_PCLMULQDQ,
+ InstructionSet_POPCNT,
#elif defined(_TARGET_ARM_)
InstructionSet_NEON,
#elif defined(_TARGET_ARM64_)
#if defined(_TARGET_AMD64_) || defined(_TARGET_X86_)
// Enable AVX instruction set for wide operations as default. When both AVX and SSE3_4 are set, we will use the most
// capable instruction set available which will prefer AVX over SSE3/4.
-CONFIG_INTEGER(EnableSSE, W("EnableSSE"), 1) // Enable SSE
-CONFIG_INTEGER(EnableSSE2, W("EnableSSE2"), 1) // Enable SSE2
-CONFIG_INTEGER(EnableSSE3, W("EnableSSE3"), 1) // Enable SSE3
-CONFIG_INTEGER(EnableSSSE3, W("EnableSSSE3"), 1) // Enable SSSE3
-CONFIG_INTEGER(EnableSSE41, W("EnableSSE41"), 1) // Enable SSE41
-CONFIG_INTEGER(EnableSSE42, W("EnableSSE42"), 1) // Enable SSE42
-CONFIG_INTEGER(EnableAVX, W("EnableAVX"), 1) // Enable AVX
-CONFIG_INTEGER(EnableAVX2, W("EnableAVX2"), 1) // Enable AVX2
-CONFIG_INTEGER(EnableFMA, W("EnableFMA"), 1) // Enable FMA
-CONFIG_INTEGER(EnableAES, W("EnableAES"), 1) // Enable AES
-CONFIG_INTEGER(EnableBMI1, W("EnableBMI1"), 1) // Enable BMI1
-CONFIG_INTEGER(EnableBMI2, W("EnableBMI2"), 1) // Enable BMI2
-CONFIG_INTEGER(EnableLZCNT, W("EnableLZCNT"), 1) // Enable AES
-CONFIG_INTEGER(EnablePCLMULQDQ, W("EnablePCLMULQDQ"), 1) // Enable PCLMULQDQ
-CONFIG_INTEGER(EnablePOPCNT, W("EnablePOPCNT"), 1) // Enable POPCNT
-#else // !defined(_TARGET_AMD64_) && !defined(_TARGET_X86_)
+CONFIG_INTEGER(EnableHWIntrinsic, W("EnableHWIntrinsic"), 1) // Enable Base
+CONFIG_INTEGER(EnableSSE, W("EnableSSE"), 1) // Enable SSE
+CONFIG_INTEGER(EnableSSE2, W("EnableSSE2"), 1) // Enable SSE2
+CONFIG_INTEGER(EnableSSE3, W("EnableSSE3"), 1) // Enable SSE3
+CONFIG_INTEGER(EnableSSSE3, W("EnableSSSE3"), 1) // Enable SSSE3
+CONFIG_INTEGER(EnableSSE41, W("EnableSSE41"), 1) // Enable SSE41
+CONFIG_INTEGER(EnableSSE42, W("EnableSSE42"), 1) // Enable SSE42
+CONFIG_INTEGER(EnableAVX, W("EnableAVX"), 1) // Enable AVX
+CONFIG_INTEGER(EnableAVX2, W("EnableAVX2"), 1) // Enable AVX2
+CONFIG_INTEGER(EnableFMA, W("EnableFMA"), 1) // Enable FMA
+CONFIG_INTEGER(EnableAES, W("EnableAES"), 1) // Enable AES
+CONFIG_INTEGER(EnableBMI1, W("EnableBMI1"), 1) // Enable BMI1
+CONFIG_INTEGER(EnableBMI2, W("EnableBMI2"), 1) // Enable BMI2
+CONFIG_INTEGER(EnableLZCNT, W("EnableLZCNT"), 1) // Enable LZCNT
+CONFIG_INTEGER(EnablePCLMULQDQ, W("EnablePCLMULQDQ"), 1) // Enable PCLMULQDQ
+CONFIG_INTEGER(EnablePOPCNT, W("EnablePOPCNT"), 1) // Enable POPCNT
+#else // !defined(_TARGET_AMD64_) && !defined(_TARGET_X86_)
// Enable AVX instruction set for wide operations as default
CONFIG_INTEGER(EnableAVX, W("EnableAVX"), 0)
-#endif // !defined(_TARGET_AMD64_) && !defined(_TARGET_X86_)
+#endif // !defined(_TARGET_AMD64_) && !defined(_TARGET_X86_)
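Each `CONFIG_INTEGER` line declares a knob with a name, a wide-string key, and a default; these knobs are surfaced to users through `COMPlus_`-prefixed environment variables. Lists like this are conventionally expanded with X-macros. A simplified sketch of that pattern (not the JIT's actual macro machinery):

```cpp
#include <cassert>
#include <cstdlib>

// One list, expanded twice with different definitions of X.
#define CONFIG_LIST(X)             \
    X(EnableSSE,  "EnableSSE",  1) \
    X(EnableAVX2, "EnableAVX2", 1)

// Expansion 1: storage with the compiled-in defaults.
#define DECLARE(name, key, def) int g_##name = def;
CONFIG_LIST(DECLARE)
#undef DECLARE

// Expansion 2: apply environment overrides, e.g. COMPlus_EnableAVX2=0.
void ReadJitConfig()
{
#define READ(name, key, def)                         \
    if (const char* v = std::getenv("COMPlus_" key)) \
        g_##name = std::atoi(v);
    CONFIG_LIST(READ)
#undef READ
}
```

Adding a knob then means adding one line to the list, which is why the diff above can introduce `EnableHWIntrinsic` without touching any reader code.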
///
/// JIT
///
JITDUMP(requiresCopyBlock ? " this requires a CopyBlock.\n" : " using field by field assignments.\n");
- // Mark the dest/src structs as DoNotEnreg
- // when they are not reg-sized non-field-addressed structs and we are using a CopyBlock
- // or the struct is not promoted
+ // Mark the dest/src structs as DoNotEnreg when they are not being fully referenced as the same type.
//
if (!destDoFldAsg && (destLclVar != nullptr) && !destSingleLclVarAsg)
{
- if (!destLclVar->lvRegStruct)
+ if (!destLclVar->lvRegStruct || (destLclVar->lvType != dest->TypeGet()))
{
// Mark it as DoNotEnregister.
lvaSetVarDoNotEnregister(destLclNum DEBUGARG(DNER_BlockOp));
_AssignFields:
+ // We may have allocated a temp above, and that may have caused the lvaTable to be expanded.
+ // So, beyond this point we cannot rely on the old values of 'srcLclVar' and 'destLclVar'.
for (unsigned i = 0; i < fieldCnt; ++i)
{
FieldSeqNode* curFieldSeq = nullptr;
if (srcSingleLclVarAsg)
{
noway_assert(fieldCnt == 1);
- noway_assert(srcLclVar != nullptr);
+ noway_assert(srcLclNum != BAD_VAR_NUM);
noway_assert(addrSpill == nullptr);
- src = gtNewLclvNode(srcLclNum, srcLclVar->TypeGet());
+ src = gtNewLclvNode(srcLclNum, lvaGetDesc(srcLclNum)->TypeGet());
}
else
{
CORINFO_CLASS_HANDLE classHnd = lvaTable[destLclNum].lvVerTypeInfo.GetClassHandle();
CORINFO_FIELD_HANDLE fieldHnd =
info.compCompHnd->getFieldInClass(classHnd, lvaTable[fieldLclNum].lvFldOrdinal);
- curFieldSeq = GetFieldSeqStore()->CreateSingleton(fieldHnd);
+ curFieldSeq = GetFieldSeqStore()->CreateSingleton(fieldHnd);
+ var_types destType = lvaGetDesc(fieldLclNum)->lvType;
- src = gtNewOperNode(GT_ADD, TYP_BYREF, src,
- new (this, GT_CNS_INT)
- GenTreeIntCon(TYP_I_IMPL, lvaTable[fieldLclNum].lvFldOffset, curFieldSeq));
-
- src = gtNewIndir(lvaTable[fieldLclNum].TypeGet(), src);
+ bool done = false;
+ if (lvaGetDesc(fieldLclNum)->lvFldOffset == 0)
+ {
+ // If this is a full-width use of the src via a different type, we need to create a GT_LCL_FLD.
+ // (Note that if it was the same type, 'srcSingleLclVarAsg' would be true.)
+ if (srcLclNum != BAD_VAR_NUM)
+ {
+ noway_assert(srcLclVarTree != nullptr);
+ assert(destType != TYP_STRUCT);
+ unsigned destSize = genTypeSize(destType);
+ srcLclVar = lvaGetDesc(srcLclNum);
+ unsigned srcSize =
+ (srcLclVar->lvType == TYP_STRUCT) ? srcLclVar->lvExactSize : genTypeSize(srcLclVar);
+ if (destSize == srcSize)
+ {
+ srcLclVarTree->gtFlags |= GTF_VAR_CAST;
+ srcLclVarTree->ChangeOper(GT_LCL_FLD);
+ srcLclVarTree->gtType = destType;
+ srcLclVarTree->AsLclFld()->gtFieldSeq = curFieldSeq;
+ src = srcLclVarTree;
+ done = true;
+ }
+ }
+ }
+ else // if (lvaGetDesc(fieldLclNum)->lvFldOffset != 0)
+ {
+ src = gtNewOperNode(GT_ADD, TYP_BYREF, src,
+ new (this, GT_CNS_INT)
+ GenTreeIntCon(TYP_I_IMPL, lvaGetDesc(fieldLclNum)->lvFldOffset,
+ curFieldSeq));
+ }
+ if (!done)
+ {
+ src = gtNewIndir(destType, src);
+ }
}
}
// exposed. Neither liveness nor SSA are able to track this kind of indirect assignments.
if (addrSpill && !destDoFldAsg && destLclNum != BAD_VAR_NUM)
{
- noway_assert(lvaTable[destLclNum].lvAddrExposed);
+ noway_assert(lvaGetDesc(destLclNum)->lvAddrExposed);
}
#if LOCAL_ASSERTION_PROP
NI_Math_Round = 3,
NI_System_Collections_Generic_EqualityComparer_get_Default = 4,
NI_System_Buffers_Binary_BinaryPrimitives_ReverseEndianness = 5,
+
#ifdef FEATURE_HW_INTRINSICS
NI_HW_INTRINSIC_START,
#if defined(_TARGET_XARCH_)
#endif // !__has_builtin(_rotr)
PALIMPORT int __cdecl abs(int);
-#ifndef PAL_STDCPP_COMPAT
-PALIMPORT LONG __cdecl labs(LONG);
-#endif // !PAL_STDCPP_COMPAT
// clang complains if this is declared with __int64
PALIMPORT long long __cdecl llabs(long long);
+#ifndef PAL_STDCPP_COMPAT
+PALIMPORT LONG __cdecl labs(LONG);
PALIMPORT int __cdecl _signbit(double);
PALIMPORT int __cdecl _finite(double);
PALIMPORT float __cdecl sqrtf(float);
PALIMPORT float __cdecl tanf(float);
PALIMPORT float __cdecl tanhf(float);
+#endif // !PAL_STDCPP_COMPAT
#ifndef PAL_STDCPP_COMPAT
--- /dev/null
+// Licensed to the .NET Foundation under one or more agreements.
+// The .NET Foundation licenses this file to you under the MIT license.
+// See the LICENSE file in the project root for more information.
+
+/*++
+
+
+
+Module Name:
+
+ include/pal/cgroup.h
+
+Abstract:
+
+ Header file for the CGroup related functions.
+
+
+
+--*/
+
+#ifndef _PAL_CGROUP_H_
+#define _PAL_CGROUP_H_
+
+#ifdef __cplusplus
+extern "C"
+{
+#endif // __cplusplus
+
+void InitializeCGroup();
+void CleanupCGroup();
+
+#ifdef __cplusplus
+}
+#endif // __cplusplus
+
+#endif /* _PAL_CGROUP_H_ */
+
#include "pal/init.h"
#include "pal/numa.h"
#include "pal/stackstring.hpp"
+#include "pal/cgroup.h"
#if HAVE_MACH_EXCEPTIONS
#include "../exception/machexception.h"
goto done;
}
+ InitializeCGroup();
+
// Initialize the environment.
if (FALSE == EnvironInitialize())
{
CLEANUP1:
SHMCleanup();
CLEANUP0:
+ CleanupCGroup();
TLSCleanup();
ERROR("PAL_Initialize failed\n");
SetLastError(palError);
#include "pal/palinternal.h"
#include <sys/resource.h>
#include "pal/virtual.h"
+#include "pal/cgroup.h"
#include <algorithm>
#define PROC_MOUNTINFO_FILENAME "/proc/self/mountinfo"
#define CFS_PERIOD_FILENAME "/cpu.cfs_period_us"
class CGroup
{
- char *m_memory_cgroup_path;
- char *m_cpu_cgroup_path;
+ static char *s_memory_cgroup_path;
+ static char *s_cpu_cgroup_path;
public:
- CGroup()
+ static void Initialize()
{
- m_memory_cgroup_path = FindCgroupPath(&IsMemorySubsystem);
- m_cpu_cgroup_path = FindCgroupPath(&IsCpuSubsystem);
+ s_memory_cgroup_path = FindCgroupPath(&IsMemorySubsystem);
+ s_cpu_cgroup_path = FindCgroupPath(&IsCpuSubsystem);
}
- ~CGroup()
+ static void Cleanup()
{
- PAL_free(m_memory_cgroup_path);
- PAL_free(m_cpu_cgroup_path);
+ PAL_free(s_memory_cgroup_path);
+ PAL_free(s_cpu_cgroup_path);
}
- bool GetPhysicalMemoryLimit(size_t *val)
+ static bool GetPhysicalMemoryLimit(size_t *val)
{
char *mem_limit_filename = nullptr;
bool result = false;
- if (m_memory_cgroup_path == nullptr)
+ if (s_memory_cgroup_path == nullptr)
return result;
- size_t len = strlen(m_memory_cgroup_path);
+ size_t len = strlen(s_memory_cgroup_path);
len += strlen(MEM_LIMIT_FILENAME);
mem_limit_filename = (char*)PAL_malloc(len+1);
if (mem_limit_filename == nullptr)
return result;
- strcpy_s(mem_limit_filename, len+1, m_memory_cgroup_path);
+ strcpy_s(mem_limit_filename, len+1, s_memory_cgroup_path);
strcat_s(mem_limit_filename, len+1, MEM_LIMIT_FILENAME);
result = ReadMemoryValueFromFile(mem_limit_filename, val);
PAL_free(mem_limit_filename);
return result;
}
- bool GetPhysicalMemoryUsage(size_t *val)
+ static bool GetPhysicalMemoryUsage(size_t *val)
{
char *mem_usage_filename = nullptr;
bool result = false;
- if (m_memory_cgroup_path == nullptr)
+ if (s_memory_cgroup_path == nullptr)
return result;
- size_t len = strlen(m_memory_cgroup_path);
+ size_t len = strlen(s_memory_cgroup_path);
len += strlen(MEM_USAGE_FILENAME);
mem_usage_filename = (char*)malloc(len+1);
if (mem_usage_filename == nullptr)
return result;
- strcpy(mem_usage_filename, m_memory_cgroup_path);
+ strcpy(mem_usage_filename, s_memory_cgroup_path);
strcat(mem_usage_filename, MEM_USAGE_FILENAME);
result = ReadMemoryValueFromFile(mem_usage_filename, val);
free(mem_usage_filename);
return result;
}
- bool GetCpuLimit(UINT *val)
+ static bool GetCpuLimit(UINT *val)
{
long long quota;
long long period;
return cgroup_path;
}
- bool ReadMemoryValueFromFile(const char* filename, size_t* val)
+ static bool ReadMemoryValueFromFile(const char* filename, size_t* val)
{
return ::ReadMemoryValueFromFile(filename, val);
}
- long long ReadCpuCGroupValue(const char* subsystemFilename){
+    static long long ReadCpuCGroupValue(const char* subsystemFilename)
+    {
char *filename = nullptr;
bool result = false;
long long val;
size_t len;
- if (m_cpu_cgroup_path == nullptr)
+ if (s_cpu_cgroup_path == nullptr)
return -1;
- len = strlen(m_cpu_cgroup_path);
+ len = strlen(s_cpu_cgroup_path);
len += strlen(subsystemFilename);
filename = (char*)PAL_malloc(len + 1);
if (filename == nullptr)
return -1;
- strcpy_s(filename, len+1, m_cpu_cgroup_path);
+ strcpy_s(filename, len+1, s_cpu_cgroup_path);
strcat_s(filename, len+1, subsystemFilename);
result = ReadLongLongValueFromFile(filename, &val);
PAL_free(filename);
return val;
}
- bool ReadLongLongValueFromFile(const char* filename, long long* val)
+ static bool ReadLongLongValueFromFile(const char* filename, long long* val)
{
bool result = false;
char *line = nullptr;
}
};
+char *CGroup::s_memory_cgroup_path = nullptr;
+char *CGroup::s_cpu_cgroup_path = nullptr;
+
+void InitializeCGroup()
+{
+ CGroup::Initialize();
+}
+
+void CleanupCGroup()
+{
+ CGroup::Cleanup();
+}
+
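The helpers above boil down to concatenating a discovered cgroup path with a well-known control filename and parsing a single integer out of that file. A self-contained sketch of the read step, in the spirit of `ReadMemoryValueFromFile` (hypothetical name, no PAL allocators, simplified parsing):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>

// Open a cgroup control file (e.g. .../memory.limit_in_bytes) and parse
// one unsigned integer value from it. Returns false on any failure.
static bool ReadSizeFromFile(const char* filename, size_t* val)
{
    if (filename == nullptr || val == nullptr)
        return false;
    FILE* file = fopen(filename, "r");
    if (file == nullptr)
        return false;
    unsigned long long raw = 0;
    bool result = (fscanf(file, "%llu", &raw) == 1);
    fclose(file);
    if (result)
        *val = (size_t)raw;
    return result;
}
```

Keeping the parse separate from the path construction is what lets the class above reuse one reader for both the memory and cpu subsystems.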
size_t
PALAPI
PAL_GetRestrictedPhysicalMemoryLimit()
{
- CGroup cgroup;
size_t physical_memory_limit;
- if (!cgroup.GetPhysicalMemoryLimit(&physical_memory_limit))
+ if (!CGroup::GetPhysicalMemoryLimit(&physical_memory_limit))
physical_memory_limit = SIZE_T_MAX;
struct rlimit curr_rlimit;
BOOL result = false;
size_t linelen;
char* line = nullptr;
- CGroup cgroup;
if (val == nullptr)
return FALSE;
// Linux uses cgroup usage to trigger oom kills.
- if (cgroup.GetPhysicalMemoryUsage(val))
+ if (CGroup::GetPhysicalMemoryUsage(val))
return TRUE;
// process resident set size.
PALAPI
PAL_GetCpuLimit(UINT* val)
{
- CGroup cgroup;
-
if (val == nullptr)
return FALSE;
- return cgroup.GetCpuLimit(val);
+ return CGroup::GetCpuLimit(val);
}
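`GetCpuLimit` above declares `quota` and `period` locals whose use is elided here; under CFS semantics a limit is the quota divided by the period, rounded up, with a non-positive quota meaning the CPU is not limited. A hypothetical sketch of that computation (the elided body may differ in detail):

```cpp
#include <cassert>

// Hypothetical CFS math: limit = ceil(cfs_quota_us / cfs_period_us).
// A quota of -1 (or any non-positive value) means "no limit".
static bool ComputeCpuLimit(long long quota, long long period, unsigned int* val)
{
    if (quota <= 0 || period <= 0)
        return false;
    *val = (unsigned int)((quota + period - 1) / period); // ceiling division
    return true;
}
```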
}
#endif // defined(FEATURE_COMINTEROP_APARTMENT_SUPPORT) && !defined(CROSSGEN_COMPILE)
-// Looks in all the modules for the DefaultDomain attribute
-// The order is assembly and then the modules. It is first
-// come, first serve.
-BOOL SystemDomain::SetGlobalSharePolicyUsingAttribute(IMDInternalImport* pScope, mdMethodDef mdMethod)
-{
- STANDARD_VM_CONTRACT;
-
-
- return FALSE;
-}
-
// Helper function to load an assembly. This is called from LoadCOMClass.
/* static */
m_cRef=1;
- // Initialize Shared state. Assemblies are loaded
- // into each domain by default.
-#ifdef FEATURE_LOADER_OPTIMIZATION
- m_SharePolicy = SHARE_POLICY_UNSPECIFIED;
-#endif
-
m_pRootAssembly = NULL;
m_pwDynamicDir = NULL;
BaseDomain::Init();
- // Set up the IL stub cache
- m_ILStubCache.Init(GetLoaderAllocator()->GetHighFrequencyHeap());
-
// Set up the binding caches
m_AssemblyCache.Init(&m_DomainCacheCrst, GetHighFrequencyHeap());
m_UnmanagedCache.InitializeTable(this, &m_DomainCacheCrst);
RETURN pDomainFile;
}
-AppDomain::SharePolicy AppDomain::GetSharePolicy()
-{
- LIMITED_METHOD_CONTRACT;
-
- return SHARE_POLICY_NEVER;
-}
#endif // FEATURE_LOADER_OPTIMIZATION
// This is the value stored in the table
Assembly *pAssembly = (Assembly *) u2;
if (pLocator->GetType()==SharedAssemblyLocator::DOMAINASSEMBLY)
- {
- if (!pAssembly->GetManifestFile()->Equals(pLocator->GetDomainAssembly()->GetFile()))
- return FALSE;
-
- return pAssembly->CanBeShared(pLocator->GetDomainAssembly());
- }
+ return FALSE;
else
if (pLocator->GetType()==SharedAssemblyLocator::PEASSEMBLY)
return pAssembly->GetManifestFile()->Equals(pLocator->GetPEAssembly());
#include "domainfile.h"
#include "objectlist.h"
#include "fptrstubs.h"
-#include "ilstubcache.h"
#include "testhookmgr.h"
#include "gcheaputilities.h"
#include "gchandleutilities.h"
BOOL ContainsAssembly(Assembly * assem);
-#ifdef FEATURE_LOADER_OPTIMIZATION
- enum SharePolicy
- {
- // Attributes to control when to use domain neutral assemblies
- SHARE_POLICY_UNSPECIFIED, // Use the current default policy (LoaderOptimization.NotSpecified)
- SHARE_POLICY_NEVER, // Do not share anything, except the system assembly (LoaderOptimization.SingleDomain)
- SHARE_POLICY_ALWAYS, // Share everything possible (LoaderOptimization.MultiDomain)
- SHARE_POLICY_GAC, // Share only GAC-bound assemblies (LoaderOptimization.MultiDomainHost)
-
- SHARE_POLICY_COUNT,
- SHARE_POLICY_MASK = 0x3,
-
- // NOTE that previously defined was a bit 0x40 which might be set on this value
- // in custom attributes.
- SHARE_POLICY_DEFAULT = SHARE_POLICY_NEVER,
- };
-
- SharePolicy GetSharePolicy();
-#endif // FEATURE_LOADER_OPTIMIZATION
-
//****************************************************************************************
//
// Reference count. When an appdomain is first created the reference is bump
OBJECTHANDLE m_ExposedObject;
-#ifdef FEATURE_LOADER_OPTIMIZATION
- // Indicates where assemblies will be loaded for
- // this domain. By default all assemblies are loaded into the domain.
- // There are two additional settings, all
- // assemblies can be loaded into the shared domain or assemblies
- // that are strong named are loaded into the shared area.
- SharePolicy m_SharePolicy;
-#endif
-
IUnknown *m_pComIPForExposedObject;
// Hash table that maps a clsid to a type
DWORD m_TrackSpinLock;
#endif
-
- // IL stub cache with fabricated MethodTable parented by a random module in this AD.
- ILStubCache m_ILStubCache;
-
// The number of times we have entered this AD
ULONG m_dwThreadEnterCount;
// The number of threads that have entered this AD, for ADU only
BOOL IsBindingModelLocked();
BOOL LockBindingModel();
- ILStubCache* GetILStubCache()
- {
- LIMITED_METHOD_CONTRACT;
- return &m_ILStubCache;
- }
-
- static AppDomain* GetDomain(ILStubCache* pILStubCache)
- {
- return CONTAINING_RECORD(pILStubCache, AppDomain, m_ILStubCache);
- }
-
enum {
CONTEXT_INITIALIZED = 0x0001,
USER_CREATED_DOMAIN = 0x0002, // created by call to AppDomain.CreateDomain
static Thread::ApartmentState GetEntryPointThreadAptState(IMDInternalImport* pScope, mdMethodDef mdMethod);
static void SetThreadAptState(Thread::ApartmentState state);
#endif
- static BOOL SetGlobalSharePolicyUsingAttribute(IMDInternalImport* pScope, mdMethodDef mdMethod);
-
//****************************************************************************************
//
m_winMDStatus(WinMDStatus_Unknown),
m_pManifestWinMDImport(NULL),
#endif // FEATURE_COMINTEROP
- m_fIsDomainNeutral(pDomain == SharedDomain::GetDomain()),
m_debuggerFlags(debuggerFlags),
m_fTerminated(FALSE)
#ifdef FEATURE_COMINTEROP
return hr;
}// Used by the IMetadata APIs to access an assembly's metadata.
-#ifdef FEATURE_LOADER_OPTIMIZATION
-
-BOOL Assembly::CanBeShared(DomainAssembly *pDomainAssembly)
-{
- PTR_PEAssembly pFile=pDomainAssembly->GetFile();
-
- if(pFile == NULL)
- return FALSE;
-
- if(pFile->IsDynamic())
- return FALSE;
-
- if(IsSystem() && pFile->IsSystem())
- return TRUE;
-
- if ((pDomainAssembly->GetDebuggerInfoBits()&~(DACF_PDBS_COPIED|DACF_IGNORE_PDBS|DACF_OBSOLETE_TRACK_JIT_INFO))
- != (m_debuggerFlags&~(DACF_PDBS_COPIED|DACF_IGNORE_PDBS|DACF_OBSOLETE_TRACK_JIT_INFO)))
- {
- LOG((LF_CODESHARING,
- LL_INFO100,
- "We can't share it, desired debugging flags %x are different than %x\n",
- pDomainAssembly->GetDebuggerInfoBits(), (m_debuggerFlags&~(DACF_PDBS_COPIED|DACF_IGNORE_PDBS|DACF_OBSOLETE_TRACK_JIT_INFO))));
- STRESS_LOG2(LF_CODESHARING, LL_INFO100,"Flags diff= %08x [%08x/%08x]",pDomainAssembly->GetDebuggerInfoBits(),
- m_debuggerFlags);
- return FALSE;
- }
-
- return TRUE;
-}
-
-
-#endif // FEATURE_LOADER_OPTIMIZATION
-
void DECLSPEC_NORETURN Assembly::ThrowTypeLoadException(LPCUTF8 pszFullName, UINT resIDWhy)
{
WRAPPER_NO_CONTRACT;
OBJECTHANDLE GetLoaderAllocatorObjectHandle() { WRAPPER_NO_CONTRACT; return GetLoaderAllocator()->GetLoaderAllocatorObjectHandle(); }
#endif // FEATURE_COLLECTIBLE_TYPES
- BOOL CanBeShared(DomainAssembly *pAsAssembly);
-
- void SetDomainNeutral() { LIMITED_METHOD_CONTRACT; m_fIsDomainNeutral = TRUE; }
- BOOL IsDomainNeutral() { LIMITED_METHOD_DAC_CONTRACT; return m_fIsDomainNeutral; }
+ BOOL IsDomainNeutral() { LIMITED_METHOD_DAC_CONTRACT; return FALSE; }
BOOL IsSIMDVectorAssembly() { LIMITED_METHOD_DAC_CONTRACT; return m_fIsSIMDVectorAssembly; }
IWinMDImport *m_pManifestWinMDImport;
#endif // FEATURE_COMINTEROP
- BOOL m_fIsDomainNeutral;
-
DebuggerAssemblyControlFlags m_debuggerFlags;
BOOL m_fTerminated;
}
CONTRACTL_END;
- // Use per-AD cache for domain specific modules when not NGENing
+ // Use per-LoaderAllocator cache for modules when not NGENing
BaseDomain *pDomain = GetDomain();
- if (!pDomain->IsSharedDomain() && !pDomain->AsAppDomain()->IsCompilationDomain())
- return pDomain->AsAppDomain()->GetILStubCache();
+ if (!IsSystem() && !pDomain->IsSharedDomain() && !pDomain->AsAppDomain()->IsCompilationDomain())
+ return GetLoaderAllocator()->GetILStubCache();
if (m_pILStubCache == NULL)
{
m_fDebuggerUnloadStarted(FALSE),
m_fCollectible(pLoaderAllocator->IsCollectible()),
m_fHostAssemblyPublished(false),
- m_fCalculatedShouldLoadDomainNeutral(false),
- m_fShouldLoadDomainNeutral(false),
m_pLoaderAllocator(pLoaderAllocator),
m_NextDomainAssemblyInSameALC(NULL)
{
Module * pNativeModule = pNativeImage->GetLoadedLayout()->GetPersistedModuleImage();
EnsureWritablePages(pNativeModule);
PEFile ** ppNativeFile = (PEFile **) (PBYTE(pNativeModule) + Module::GetFileOffset());
- BOOL bExpectedToBeShared= ShouldLoadDomainNeutral();
- if (!bExpectedToBeShared)
- {
- GetFile()->SetNativeImageUsedExclusively();
- }
+ GetFile()->SetNativeImageUsedExclusively();
PEAssembly * pFile = (PEAssembly *)FastInterlockCompareExchangePointer((void **)ppNativeFile, (void *)GetFile(), (void *)NULL);
STRESS_LOG3(LF_ZAP,LL_INFO100,"Attempted to set new native file %p, old file was %p, location in the image=%p\n",GetFile(),pFile,ppNativeFile);
if (pFile!=NULL && !IsSystem() &&
- ( !bExpectedToBeShared ||
- pFile == PEFile::Dummy() ||
+ ( pFile == PEFile::Dummy() ||
pFile->IsNativeImageUsedExclusively() ||
!(GetFile()->GetPath().Equals(pFile->GetPath())))
}
#endif // FEATURE_PREJIT
-BOOL DomainAssembly::ShouldLoadDomainNeutral()
-{
- STANDARD_VM_CONTRACT;
-
- if (m_fCalculatedShouldLoadDomainNeutral)
- return m_fShouldLoadDomainNeutral;
-
- m_fShouldLoadDomainNeutral = !!ShouldLoadDomainNeutralHelper();
- m_fCalculatedShouldLoadDomainNeutral = true;
-
- return m_fShouldLoadDomainNeutral;
-}
-
-BOOL DomainAssembly::ShouldLoadDomainNeutralHelper()
-{
- STANDARD_VM_CONTRACT;
-
-#ifdef FEATURE_LOADER_OPTIMIZATION
-
-
- if (IsSystem())
- return TRUE;
-
- if (IsSingleAppDomain())
- return FALSE;
-
- if (GetFile()->IsDynamic())
- return FALSE;
-
-#ifdef FEATURE_COMINTEROP
- if (GetFile()->IsWindowsRuntime())
- return FALSE;
-#endif
-
- switch(this->GetAppDomain()->GetSharePolicy()) {
- case AppDomain::SHARE_POLICY_ALWAYS:
- return TRUE;
-
- case AppDomain::SHARE_POLICY_GAC:
- return IsSystem();
-
- case AppDomain::SHARE_POLICY_NEVER:
- return FALSE;
-
- case AppDomain::SHARE_POLICY_UNSPECIFIED:
- case AppDomain::SHARE_POLICY_COUNT:
- break;
- }
-
- return FALSE; // No meaning in doing costly closure walk for CoreCLR.
-
-
-#else // FEATURE_LOADER_OPTIMIZATION
- return IsSystem();
-#endif // FEATURE_LOADER_OPTIMIZATION
-}
-
// This is where the decision whether an assembly is DomainNeutral (shared) or not is made.
void DomainAssembly::Allocate()
{
//! If you decide to remove "if" do not remove this brace: order is important here - in the case of an exception,
//! the Assembly holder must destruct before the AllocMemTracker declared above.
- NewHolder<Assembly> assemblyHolder(NULL);
-
- // Determine whether we are supposed to load the assembly as a shared
- // assembly or into the app domain.
- if (ShouldLoadDomainNeutral())
- {
-
-#ifdef FEATURE_LOADER_OPTIMIZATION
-
-
- // Try to find an existing shared version of the assembly which
- // is compatible with our domain.
-
- SharedDomain * pSharedDomain = SharedDomain::GetDomain();
+ // We can now rely on the fact that our MDImport will not change so we can stop refcounting it.
+ GetFile()->MakeMDImportPersistent();
- SIZE_T nInitialShareableAssemblyCount = pSharedDomain->GetShareableAssemblyCount();
- DWORD dwSwitchCount = 0;
-
- SharedFileLockHolder pFileLock(pSharedDomain, GetFile(), FALSE);
-
- if (IsSystem())
- {
- pAssembly=SystemDomain::SystemAssembly();
- }
- else
- {
- SharedAssemblyLocator locator(this);
- pAssembly = pSharedDomain->FindShareableAssembly(&locator);
-
- if (pAssembly == NULL)
- {
- pFileLock.Acquire();
- pAssembly = pSharedDomain->FindShareableAssembly(&locator);
- }
- }
-
- if (pAssembly == NULL)
- {
-
- // We can now rely on the fact that our MDImport will not change so we can stop refcounting it.
- GetFile()->MakeMDImportPersistent();
-
- // Go ahead and create new shared version of the assembly if possible
- // <TODO> We will need to pass a valid OBJECREF* here in the future when we implement SCU </TODO>
- assemblyHolder = pAssembly = Assembly::Create(pSharedDomain, GetFile(), GetDebuggerInfoBits(), this->IsCollectible(), pamTracker, this->IsCollectible() ? this->GetLoaderAllocator() : NULL);
-
- // Compute the closure assembly dependencies
- // of the code & layout of given assembly.
- //
- // An assembly has direct dependencies listed in its manifest.
- //
- // We do not in general also have all of those dependencies' dependencies in the manifest.
- // After all, we may be only using a small portion of the assembly.
- //
- // However, since all dependent assemblies must also be shared (so that
- // the shared data in this assembly can refer to it), we are in
- // effect forced to behave as though we do have all of their dependencies.
- // This is because the resulting shared assembly that we will depend on
- // DOES have those dependencies, but we won't be able to validly share that
- // assembly unless we match all of ITS dependencies, too.
- // Sets the tenured bit atomically with the hash insert.
- pSharedDomain->AddShareableAssembly(pAssembly);
- }
-#else // FEATURE_LOADER_OPTIMIZATION
- _ASSERTE(IsSystem());
- if (SystemDomain::SystemAssembly())
- {
- pAssembly = SystemDomain::SystemAssembly();
- }
- else
- {
- // We can now rely on the fact that our MDImport will not change so we can stop refcounting it.
- GetFile()->MakeMDImportPersistent();
-
- // <TODO> We will need to pass a valid OBJECTREF* here in the future when we implement SCU </TODO>
- SharedDomain * pSharedDomain = SharedDomain::GetDomain();
- assemblyHolder = pAssembly = Assembly::Create(pSharedDomain, GetFile(), GetDebuggerInfoBits(), this->IsCollectible(), pamTracker, this->IsCollectible() ? this->GetLoaderAllocator() : NULL);
- pAssembly->SetIsTenured();
- }
-#endif // FEATURE_LOADER_OPTIMIZATION
- }
- else
- {
- // We can now rely on the fact that our MDImport will not change so we can stop refcounting it.
- GetFile()->MakeMDImportPersistent();
-
- // <TODO> We will need to pass a valid OBJECTREF* here in the future when we implement SCU </TODO>
- assemblyHolder = pAssembly = Assembly::Create(m_pDomain, GetFile(), GetDebuggerInfoBits(), this->IsCollectible(), pamTracker, this->IsCollectible() ? this->GetLoaderAllocator() : NULL);
- assemblyHolder->SetIsTenured();
- }
+ NewHolder<Assembly> assemblyHolder(NULL);
+ assemblyHolder = pAssembly = Assembly::Create(m_pDomain, GetFile(), GetDebuggerInfoBits(), this->IsCollectible(), pamTracker, this->IsCollectible() ? this->GetLoaderAllocator() : NULL);
+ assemblyHolder->SetIsTenured();
//@todo! This is too early to be calling SuppressRelease. The right place to call it is below after
// the CANNOTTHROWCOMPLUSEXCEPTION. Right now, we have to do this to unblock OOM injection testing quickly
public:
ULONG HashIdentity();
- private:
-
- BOOL ShouldLoadDomainNeutral();
- BOOL ShouldLoadDomainNeutralHelper();
-
// ------------------------------------------------------------
// Instance data
// ------------------------------------------------------------
BOOL m_fDebuggerUnloadStarted;
BOOL m_fCollectible;
Volatile<bool> m_fHostAssemblyPublished;
- Volatile<bool> m_fCalculatedShouldLoadDomainNeutral;
- Volatile<bool> m_fShouldLoadDomainNeutral;
PTR_LoaderAllocator m_pLoaderAllocator;
DomainAssembly* m_NextDomainAssemblyInSameALC;
BOOL bIsAppDomain = pBaseDomain->IsAppDomain();
BOOL bIsExecutable = bIsAppDomain ? !(pBaseDomain->AsAppDomain()->IsPassiveDomain()) : FALSE;
BOOL bIsSharedDomain = pBaseDomain->IsSharedDomain();
- UINT32 uSharingPolicy = bIsAppDomain?(pBaseDomain->AsAppDomain()->GetSharePolicy()):0;
+ UINT32 uSharingPolicy = 0;
ULONGLONG ullDomainId = (ULONGLONG)pBaseDomain;
ULONG ulDomainFlags = ((bIsDefaultDomain ? ETW::LoaderLog::LoaderStructs::DefaultDomain : 0) |
CONTRACT_END;
#ifdef _DEBUG
- if (pModule->GetDomain()->IsSharedDomain() || pModule->GetDomain()->AsAppDomain()->IsCompilationDomain())
+ if (pModule->IsSystem() || pModule->GetDomain()->IsSharedDomain() || pModule->GetDomain()->AsAppDomain()->IsCompilationDomain())
{
-        // in the shared domain and compilation AD we are associated with the module
+        // for the system module, in the shared domain, and in the compilation AD we are associated with the module
CONSISTENCY_CHECK(pModule->GetILStubCache() == this);
}
else
{
- // otherwise we are associated with the AD
- AppDomain* pStubCacheDomain = AppDomain::GetDomain(this);
- CONSISTENCY_CHECK(pStubCacheDomain == pModule->GetDomain()->AsAppDomain());
+ // otherwise we are associated with the LoaderAllocator
+ LoaderAllocator* pStubLoaderAllocator = LoaderAllocator::GetLoaderAllocator(this);
+ CONSISTENCY_CHECK(pStubLoaderAllocator == pModule->GetLoaderAllocator());
}
#endif // _DEBUG
#else
m_pPrecodeHeap = new (&m_PrecodeHeapInstance) CodeFragmentHeap(this, STUB_CODE_BLOCK_PRECODE);
#endif
+
+ // Set up the IL stub cache
+ m_ILStubCache.Init(m_pHighFrequencyHeap);
}
return TRUE;
}
-BOOL LoaderAllocator::IsDomainNeutral()
-{
- CONTRACTL {
- NOTHROW;
- GC_NOTRIGGER;
- MODE_ANY;
- SO_TOLERANT;
- } CONTRACTL_END;
-
- return GetDomain()->IsSharedDomain();
-}
-
DomainAssemblyIterator::DomainAssemblyIterator(DomainAssembly* pFirstAssembly)
{
pCurrentAssembly = pFirstAssembly;
class FuncPtrStubs;
#include "qcall.h"
+#include "ilstubcache.h"
#define VPTRU_LoaderAllocator 0x3200
// The cache is keyed by MethodDesc pointers.
UMEntryThunkCache * m_pUMEntryThunkCache;
+ // IL stub cache with fabricated MethodTable parented by a random module in this LoaderAllocator.
+ ILStubCache m_ILStubCache;
+
public:
BYTE *GetVSDHeapInitialBlock(DWORD *pSize);
BYTE *GetCodeHeapInitialBlock(const BYTE * loAddr, const BYTE * hiAddr, DWORD minimumSize, DWORD *pSize);
virtual ~LoaderAllocator();
BaseDomain *GetDomain() { LIMITED_METHOD_CONTRACT; return m_pDomain; }
virtual BOOL CanUnload() = 0;
- BOOL IsDomainNeutral();
+ BOOL IsDomainNeutral() { LIMITED_METHOD_DAC_CONTRACT; return FALSE; }
void Init(BaseDomain *pDomain, BYTE *pExecutableHeapMemory = NULL);
void Terminate();
virtual void ReleaseManagedAssemblyLoadContext() {}
UMEntryThunkCache *GetUMEntryThunkCache();
#endif
+
+ static LoaderAllocator* GetLoaderAllocator(ILStubCache* pILStubCache)
+ {
+ return CONTAINING_RECORD(pILStubCache, LoaderAllocator, m_ILStubCache);
+ }
+
+ ILStubCache* GetILStubCache()
+ {
+ LIMITED_METHOD_CONTRACT;
+ return &m_ILStubCache;
+ }
}; // class LoaderAllocator
typedef VPTR(LoaderAllocator) PTR_LoaderAllocator;
// and the loader allocator in the current domain for non-collectable types
LoaderAllocator * GetDomainSpecificLoaderAllocator();
- inline BOOL IsDomainNeutral();
+ BOOL IsDomainNeutral() { LIMITED_METHOD_DAC_CONTRACT; return FALSE; }
Module* GetLoaderModule();
return dac_cast<PTR_InstantiatedMethodDesc>(this);
}
-inline BOOL MethodDesc::IsDomainNeutral()
-{
- WRAPPER_NO_CONTRACT;
- return !IsLCGMethod() && GetDomain()->IsSharedDomain();
-}
-
inline BOOL MethodDesc::IsZapped()
{
WRAPPER_NO_CONTRACT;
}
//==========================================================================================
-BOOL MethodTable::IsDomainNeutral()
-{
- STATIC_CONTRACT_NOTHROW;
- STATIC_CONTRACT_GC_NOTRIGGER;
- STATIC_CONTRACT_SO_TOLERANT;
- STATIC_CONTRACT_FORBID_FAULT;
- STATIC_CONTRACT_SUPPORTS_DAC;
-
- BOOL ret = GetLoaderModule()->GetAssembly()->IsDomainNeutral();
-#ifndef DACCESS_COMPILE
- _ASSERTE(!ret == !GetLoaderAllocator()->IsDomainNeutral());
-#endif
-
- return ret;
-}
-
-//==========================================================================================
BOOL MethodTable::HasSameTypeDefAs(MethodTable *pMT)
{
LIMITED_METHOD_DAC_CONTRACT;
#endif
// Return whether the type lives in the shared domain.
- BOOL IsDomainNeutral();
+ BOOL IsDomainNeutral() { LIMITED_METHOD_DAC_CONTRACT; return FALSE; }
MethodTable *LoadEnclosingMethodTable(ClassLoadLevel targetLevel = CLASS_DEPENDENCIES_LOADED);
return GetLoaderModule();
}
-BOOL TypeDesc::IsDomainNeutral()
-{
- CONTRACTL
- {
- NOTHROW;
- GC_NOTRIGGER;
- FORBID_FAULT;
- }
- CONTRACTL_END
-
- return GetDomain()->IsSharedDomain();
-}
-
BOOL ParamTypeDesc::OwnsTemplateMethodTable()
{
CONTRACTL
// i.e. are domain-bound. If any of the parts are domain-bound
// then they will all belong to the same domain.
PTR_BaseDomain GetDomain();
- BOOL IsDomainNeutral();
+ BOOL IsDomainNeutral() { LIMITED_METHOD_DAC_CONTRACT; return FALSE; }
PTR_LoaderAllocator GetLoaderAllocator()
{
return AsMethodTable()->GetInternalCorElementType();
}
-BOOL TypeHandle::IsDomainNeutral() const
-{
- LIMITED_METHOD_CONTRACT;
-
- if (IsTypeDesc())
- return AsTypeDesc()->IsDomainNeutral();
- else
- return AsMethodTable()->IsDomainNeutral();
-}
-
BOOL TypeHandle::HasInstantiation() const
{
LIMITED_METHOD_DAC_CONTRACT;
PTR_LoaderAllocator GetLoaderAllocator() const;
- BOOL IsDomainNeutral() const;
+ BOOL IsDomainNeutral() { LIMITED_METHOD_DAC_CONTRACT; return FALSE; }
// Get the class token, assuming the type handle represents a named type,
// i.e. a class, a value type, a generic instantiation etc.
System.Management.Tests # https://github.com/dotnet/coreclr/issues/16001
System.Memory.Tests # https://github.com/dotnet/coreclr/issues/20958
System.Net.Http.Functional.Tests # https://github.com/dotnet/coreclr/issues/17739
+System.Net.NameResolution.Functional.Tests # https://github.com/dotnet/coreclr/issues/21224 -- JitStressRegs=1
System.Net.NameResolution.Pal.Tests # https://github.com/dotnet/coreclr/issues/17740
System.Numerics.Vectors.Tests # https://github.com/dotnet/coreclr/issues/19537
+System.Runtime.Tests # https://github.com/dotnet/coreclr/issues/21223 -- JitStress=2
+System.Text.Encodings.Web.Tests # https://github.com/dotnet/coreclr/issues/21113 -- minopts
+System.Text.Json.Tests # https://github.com/dotnet/coreclr/issues/21112
System.Text.RegularExpressions.Tests # https://github.com/dotnet/coreclr/issues/17754 -- timeout -- JitMinOpts only
<ExcludeList Include="$(XunitTestBinBase)/JIT/Regression/JitBlue/GitHub_11408/GitHub_11408/*">
<Issue>11408</Issue>
</ExcludeList>
+ <ExcludeList Include="$(XunitTestBinBase)/reflection/regression/dev10bugs/Dev10_630880/*">
+ <Issue>21173</Issue>
+ </ExcludeList>
<ExcludeList Include="$(XunitTestBinBase)/baseservices/exceptions/StackTracePreserve/StackTracePreserveTests/*">
<Issue>20322</Issue>
</ExcludeList>
<ExcludeList Include="$(XunitTestBinBase)/JIT/SIMD/Matrix4x4_ro/*">
<Issue>19537</Issue>
</ExcludeList>
- <ExcludeList Include="$(XunitTestBinBase)/JIT/HardwareIntrinsics/General/Vector64/Vector64_ro/*">
- <Issue>21064</Issue>
- </ExcludeList>
</ItemGroup>
<!-- Arm64 All OS -->
<ExcludeList Include="$(XunitTestBinBase)/readytorun/r2rdump/R2RDumpTest/*">
<Issue>19441</Issue>
</ExcludeList>
- <ExcludeList Include="$(XunitTestBinBase)/JIT/HardwareIntrinsics/General/Vector64_1/Vector64_1_r/*">
- <Issue>21064</Issue>
- </ExcludeList>
- <ExcludeList Include="$(XunitTestBinBase)/JIT/HardwareIntrinsics/General/Vector64_1/Vector64_1_ro/*">
- <Issue>21064</Issue>
- </ExcludeList>
- <ExcludeList Include="$(XunitTestBinBase)/JIT/HardwareIntrinsics/General/Vector128_1/Vector128_1_r/*">
- <Issue>21064</Issue>
- </ExcludeList>
- <ExcludeList Include="$(XunitTestBinBase)/JIT/HardwareIntrinsics/General/Vector128_1/Vector128_1_ro/*">
- <Issue>21064</Issue>
- </ExcludeList>
</ItemGroup>
<ItemGroup Condition="'$(XunitTestBinBase)' != ''">
<TestUnsupportedOutsideWindows>true</TestUnsupportedOutsideWindows>
<DisableProjectBuild Condition="'$(TargetsUnix)' == 'true'">true</DisableProjectBuild>
<DefineConstants>BLOCK_WINDOWS_NANO</DefineConstants>
+ <!-- Issue 21221, https://github.com/dotnet/coreclr/issues/21221 -->
+ <GCStressIncompatible>true</GCStressIncompatible>
</PropertyGroup>
<ItemGroup>
<Compile Include="$(InteropCommonDir)ExeLauncherProgram.cs" />
--- /dev/null
+// Licensed to the .NET Foundation under one or more agreements.
+// The .NET Foundation licenses this file to you under the MIT license.
+// See the LICENSE file in the project root for more information.
+
+using System;
+using System.Runtime.CompilerServices;
+
+interface IRT
+{
+ void WriteLine<T>(T val);
+}
+
+class CRT : IRT
+{
+ public static object line;
+ public void WriteLine<T>(T val) => line = val;
+}
+
+public class Program
+{
+ static IRT s_rt;
+ static byte[] s_1 = new byte[] { 0 };
+ static int s_3;
+ static short[] s_8 = new short[] { -1 };
+
+ public static int Main()
+ {
+ s_rt = new CRT();
+ M11(s_8, 0, 0, 0, true, s_1);
+ return ((int)CRT.line == -1) ? 100 : 1;
+ }
+
+ // Test case for a lvNormalizeOnLoad related issue in assertion propagation.
+ // A "normal" lclvar is substituted with a "normalize on load" lclvar (arg3),
+ // that results in load normalization being skipped.
+
+ [MethodImpl(MethodImplOptions.NoInlining)]
+ static ushort M11(short[] arg0, ushort arg1, short arg3, byte arg4, bool arg7, byte[] arg10)
+ {
+ if (arg7)
+ {
+ ulong var4 = (ulong)s_3;
+
+ // mov edi, gword ptr [classVar[0x2c44174]]
+ // cmp dword ptr [edi + 4], 0
+ // jbe SHORT G_M17557_IG06
+ // movsx edi, word ptr [edi + 8]
+ // mov word ptr [ebp + 14H], di ; word only store
+ arg3 = s_8[0];
+
+ short var5 = arg3;
+ s_rt.WriteLine(var4);
+
+ // call CORINFO_HELP_VIRTUAL_FUNC_PTR
+ // mov ecx, edi
+ // mov edx, dword ptr [ebp + 14H] ; dword load, no sign extension
+ // call eax
+ s_rt.WriteLine((int)var5);
+ }
+
+ if (!arg7)
+ {
+ var vr7 = arg0[0];
+ }
+
+ arg10[0] = arg4;
+ return arg1;
+ }
+}
--- /dev/null
+<?xml version="1.0" encoding="utf-8"?>
+<Project ToolsVersion="12.0" DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
+ <Import Project="$([MSBuild]::GetDirectoryNameOfFileAbove($(MSBuildThisFileDirectory), dir.props))\dir.props" />
+ <PropertyGroup>
+ <Configuration Condition=" '$(Configuration)' == '' ">Release</Configuration>
+ <Platform Condition=" '$(Platform)' == '' ">AnyCPU</Platform>
+ <AssemblyName>$(MSBuildProjectName)</AssemblyName>
+ <OutputType>Exe</OutputType>
+ <DebugType></DebugType>
+ <Optimize>True</Optimize>
+ </PropertyGroup>
+ <ItemGroup>
+ <Compile Include="$(MSBuildProjectName).cs" />
+ </ItemGroup>
+ <Import Project="$([MSBuild]::GetDirectoryNameOfFileAbove($(MSBuildThisFileDirectory), dir.targets))\dir.targets" />
+ <PropertyGroup Condition=" '$(MsBuildProjectDirOverride)' != '' "></PropertyGroup>
+</Project>
\ No newline at end of file