Some upcoming changes to reduce tiering overhead require that directly
invoked virtual methods be called indirectly via their slot, so that the
method body can be updated and callers patched up by patching the method
table slot.
Existing code for x64 implicitly assumes that a GT_JMP indirect target address
is near enough to the call site that a 32-bit RIP-relative displacement will
work. We can ensure this is true by always generating a reloc (and hence
potentially a jump stub) -- unless the target happens to fit in 32 bits and
so can be addressed absolutely.
Commit migrated from https://github.com/dotnet/coreclr/commit/8c6a9e003feffbd798d6cdbbb5f89b170d75e05d
}
#ifdef _TARGET_AMD64_
- // If code addr could be encoded as 32-bit offset relative to IP, we need to record a relocation.
- if (genCodeIndirAddrCanBeEncodedAsPCRelOffset(addr))
+ // See if the code indir addr can be encoded as 32-bit displacement relative to zero.
+ // We don't need a relocation in that case.
+ if (genCodeIndirAddrCanBeEncodedAsZeroRelOffset(addr))
{
- return true;
+ return false;
}
- // It could be possible that the code indir addr could be encoded as 32-bit displacement relative
- // to zero. But we don't need to emit a relocation in that case.
- return false;
+ // Else we need a relocation.
+ return true;
#else //_TARGET_X86_
- // On x86 there is need for recording relocations during jitting,
+ // On x86 there is no need to record or ask for relocations during jitting,
// because all addrs fit within 32-bits.
return false;
#endif //_TARGET_X86_