This matches other types of relocations, e.g. references to the constant
pool, and makes things more consistent with the PIC large code model.
Some users of the large code model may not place small data in the lower
2GB of the address space (e.g.
https://github.com/ClangBuiltLinux/linux/issues/2016), so just
unconditionally use 64-bit relocations in the large code model.
As a result, functions in a section not marked large will use 64-bit
relocations to reference everything when using the large code model.
This also fixes some lldb tests broken by #88172
(https://lab.llvm.org/buildbot/#/builders/68/builds/72458).
(cherry picked from commit 6cea7c491f4c4c68aa0494a9b18f36ff40c22c81)
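
For reference, a minimal LLVM IR sketch of the kind of test input behind the
updated checks further below; the exact type, size, and attributes of
@forced_small_data in the real test, as well as the RUN line, are assumed
here and may differ:

; Hypothetical RUN line (the real test may use different flags):
;   llc -mtriple=x86_64 -code-model=large -relocation-model=static < %s

; A data symbol explicitly forced into the small code model.
@forced_small_data = dso_local global [4 x i32] zeroinitializer, code_model "small"

define dso_local ptr @lea_forced_small_data() {
  ; Previously lowered to a 32-bit immediate (movl $forced_small_data, %eax);
  ; with this change it uses a 64-bit one (movabsq $forced_small_data, %rax).
  ret ptr @forced_small_data
}

define dso_local i32 @load_forced_small_data() {
  ; The +8 address is likewise materialized with movabsq before the load.
  %p = getelementptr inbounds i8, ptr @forced_small_data, i64 8
  %v = load i32, ptr %p
  ret i32 %v
}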
}
bool X86DAGToDAGISel::selectMOV64Imm32(SDValue N, SDValue &Imm) {
- // Cannot use 32 bit constants to reference objects in kernel code model.
- // Cannot use 32 bit constants to reference objects in large PIC mode since
- // GOTOFF is 64 bits.
+ // Cannot use 32 bit constants to reference objects in kernel/large code
+ // model.
if (TM.getCodeModel() == CodeModel::Kernel ||
- (TM.getCodeModel() == CodeModel::Large && TM.isPositionIndependent()))
+ TM.getCodeModel() == CodeModel::Large)
return false;
// In static codegen with small code model, we can get the address of a label
;
; LARGE-STATIC-LABEL: lea_forced_small_data:
; LARGE-STATIC: # %bb.0:
-; LARGE-STATIC-NEXT: movl $forced_small_data, %eax
+; LARGE-STATIC-NEXT: movabsq $forced_small_data, %rax
; LARGE-STATIC-NEXT: retq
;
; SMALL-PIC-LABEL: lea_forced_small_data:
;
; LARGE-STATIC-LABEL: load_forced_small_data:
; LARGE-STATIC: # %bb.0:
-; LARGE-STATIC-NEXT: movl $forced_small_data+8, %eax
+; LARGE-STATIC-NEXT: movabsq $forced_small_data+8, %rax
; LARGE-STATIC-NEXT: movl (%rax), %eax
; LARGE-STATIC-NEXT: retq
;