Author: Will Deacon <will.deacon@arm.com>

Date  : 07 September 2012

This document is based on the ARM booting document by Russell King and
is relevant to all public releases of the AArch64 Linux kernel.

The AArch64 exception model is made up of a number of exception levels
(EL0 - EL3), with EL0 and EL1 having a secure and a non-secure
counterpart.  EL2 is the hypervisor level and exists only in non-secure
mode.  EL3 is the highest priority level and exists only in secure mode.

For the purposes of this document, we will use the term `boot loader`
simply to define all software that executes on the CPU(s) before control
is passed to the Linux kernel.  This may include secure monitor and
hypervisor code, or it may just be a handful of instructions for
preparing a minimal boot environment.

Essentially, the boot loader should provide (as a minimum) the
following:

1. Setup and initialise the RAM
2. Setup the device tree
3. Decompress the kernel image
4. Call the kernel image

1. Setup and initialise RAM
---------------------------

Requirement: MANDATORY

The boot loader is expected to find and initialise all RAM that the
kernel will use for volatile data storage in the system.  It performs
this in a machine dependent manner.  (It may use internal algorithms
to automatically locate and size all RAM, or it may use knowledge of
the RAM in the machine, or any other method the boot loader designer
sees fit.)

2. Setup the device tree
-------------------------

Requirement: MANDATORY

The device tree blob (dtb) must be placed on an 8-byte boundary and must
not exceed 2 megabytes in size.  Since the dtb will be mapped cacheable
using blocks of up to 2 megabytes in size, it must not be placed within
any 2M region which must be mapped with any specific attributes.

NOTE: versions prior to v4.2 also require that the DTB be placed within
the 512 MB region starting at text_offset bytes below the kernel Image.
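
A minimal sketch, in C, of the checks a boot loader might apply before
handing over the blob; dtb_addr and dtb_size are assumed inputs, and the
2M-attribute rule is left to the platform's own memory map knowledge::

  #include <stdbool.h>
  #include <stdint.h>

  #define SZ_2M   (2ULL * 1024 * 1024)

  static bool dtb_placement_ok(uint64_t dtb_addr, uint64_t dtb_size)
  {
          if (dtb_addr & 0x7)     /* must sit on an 8-byte boundary */
                  return false;
          if (dtb_size > SZ_2M)   /* must not exceed 2 megabytes */
                  return false;
          /* Whether the blob shares a 2M block with memory that needs
           * specific mapping attributes cannot be checked generically. */
          return true;
  }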

3. Decompress the kernel image
------------------------------

Requirement: OPTIONAL

The AArch64 kernel does not currently provide a decompressor and
therefore requires decompression (gzip etc.) to be performed by the boot
loader if a compressed Image target (e.g. Image.gz) is used.  For
bootloaders that do not implement this requirement, the uncompressed
Image target is available instead.

4. Call the kernel image
------------------------

Requirement: MANDATORY

The decompressed kernel image contains a 64-byte header as follows::

  u32 code0;                    /* Executable code */
  u32 code1;                    /* Executable code */
  u64 text_offset;              /* Image load offset, little endian */
  u64 image_size;               /* Effective Image size, little endian */
  u64 flags;                    /* kernel flags, little endian */
  u64 res2      = 0;            /* reserved */
  u64 res3      = 0;            /* reserved */
  u64 res4      = 0;            /* reserved */
  u32 magic     = 0x644d5241;   /* Magic number, little endian, "ARM\x64" */
  u32 res5;                     /* reserved (used for PE COFF offset) */

Header notes:

- As of v3.17, all fields are little endian unless stated otherwise.

- code0/code1 are responsible for branching to stext.

- when booting through EFI, code0/code1 are initially skipped.
  res5 is an offset to the PE header and the PE header has the EFI
  entry point (efi_stub_entry).  When the stub has done its work, it
  jumps to code0 to resume the normal boot process.

- Prior to v3.17, the endianness of text_offset was not specified.  In
  these cases image_size is zero and text_offset is 0x80000 in the
  endianness of the kernel.  Where image_size is non-zero image_size is
  little-endian and must be respected.  Where image_size is zero,
  text_offset can be assumed to be 0x80000 (see the parsing sketch after
  these notes).

- The flags field (introduced in v3.17) is a little-endian 64-bit field
  composed as follows:

  ============= ===============================================================
  Bit 0         Kernel endianness.  1 if BE, 0 if LE.
  Bit 1-2       Kernel Page size.

                  * 0 - Unspecified.
                  * 1 - 4K
                  * 2 - 16K
                  * 3 - 64K
  Bit 3         Kernel physical placement

                  0
                    2MB aligned base should be as close as possible
                    to the base of DRAM, since memory below it is not
                    accessible via the linear mapping
                  1
                    2MB aligned base may be anywhere in physical
                    memory
  Bits 4-63     Reserved.
  ============= ===============================================================

- When image_size is zero, a bootloader should attempt to keep as much
  memory as possible free for use by the kernel immediately after the
  end of the kernel image.  The amount of space required will vary
  depending on selected features, and is effectively unbounded.
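
The notes above amount to a small amount of header parsing in the boot
loader.  The following is a minimal sketch in C under those rules; the
struct simply mirrors the fields listed above, and none of the names are
taken from an existing boot loader::

  #include <stdint.h>

  struct arm64_image_header {
          uint32_t code0;         /* Executable code */
          uint32_t code1;         /* Executable code */
          uint64_t text_offset;   /* Image load offset, little endian */
          uint64_t image_size;    /* Effective Image size, little endian */
          uint64_t flags;         /* kernel flags, little endian */
          uint64_t res2;
          uint64_t res3;
          uint64_t res4;
          uint32_t magic;         /* 0x644d5241, "ARM\x64", little endian */
          uint32_t res5;          /* reserved (PE/COFF offset) */
  };

  /* Read little-endian fields byte by byte so the sketch also behaves
   * on a big-endian boot loader. */
  static uint64_t read_le(const void *p, int bytes)
  {
          const uint8_t *b = p;
          uint64_t v = 0;

          for (int i = bytes - 1; i >= 0; i--)
                  v = (v << 8) | b[i];
          return v;
  }

  /* Returns 0 on success and yields the effective text_offset and
   * image_size, applying the pre-v3.17 defaults described above. */
  static int parse_image_header(const struct arm64_image_header *h,
                                uint64_t *text_offset, uint64_t *image_size)
  {
          if (read_le(&h->magic, 4) != 0x644d5241)
                  return -1;

          *image_size = read_le(&h->image_size, 8);
          if (*image_size == 0) {
                  /* Old Image: the endianness of text_offset is unknown,
                   * so fall back to the documented 0x80000 default. */
                  *text_offset = 0x80000;
          } else {
                  *text_offset = read_le(&h->text_offset, 8);
          }
          return 0;
  }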

The Image must be placed text_offset bytes from a 2MB aligned base
address anywhere in usable system RAM and called there.  The region
between the 2 MB aligned base address and the start of the image has no
special significance to the kernel, and may be used for other purposes.
At least image_size bytes from the start of the image must be free for
use by the kernel.

NOTE: versions prior to v4.6 cannot make use of memory below the
physical offset of the Image so it is recommended that the Image be
placed as close as possible to the start of system RAM.
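
As an illustration of the placement rule, the load address might be
derived as sketched below; ram_base is an assumed platform-specific
value and text_offset comes from the Image header::

  #include <stdint.h>

  #define SZ_2M   (2ULL * 1024 * 1024)

  /* Pick a 2MB aligned base close to the start of usable RAM (helpful
   * for kernels prior to v4.6) and place the Image text_offset bytes
   * above it.  The caller must still ensure image_size bytes are free
   * from the returned address. */
  static uint64_t image_load_address(uint64_t ram_base, uint64_t text_offset)
  {
          uint64_t base_2m = (ram_base + SZ_2M - 1) & ~(SZ_2M - 1);

          return base_2m + text_offset;
  }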

If an initrd/initramfs is passed to the kernel at boot, it must reside
entirely within a 1 GB aligned physical memory window of up to 32 GB in
size that fully covers the kernel Image as well.
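
This constraint can be checked as sketched below; all addresses are
physical and the helper name is purely illustrative::

  #include <stdbool.h>
  #include <stdint.h>

  #define SZ_1G   (1ULL << 30)
  #define SZ_32G  (32ULL << 30)

  /* True if one naturally aligned 1 GB window of at most 32 GB can
   * cover both the kernel Image and the initrd. */
  static bool initrd_placement_ok(uint64_t image_start, uint64_t image_end,
                                  uint64_t initrd_start, uint64_t initrd_end)
  {
          uint64_t lo = image_start < initrd_start ? image_start : initrd_start;
          uint64_t hi = image_end > initrd_end ? image_end : initrd_end;
          uint64_t window = lo & ~(SZ_1G - 1);  /* latest valid 1 GB aligned start */

          return hi <= window + SZ_32G;
  }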

Any memory described to the kernel (even that below the start of the
image) which is not marked as reserved from the kernel (e.g., with a
memreserve region in the device tree) will be considered as available to
the kernel.

Before jumping into the kernel, the following conditions must be met:

- Quiesce all DMA capable devices so that memory does not get
  corrupted by bogus network packets or disk data. This will save
  you many hours of debug.

- Primary CPU general-purpose register settings (a minimal C sketch of
  the final call appears further below):

  - x0 = physical address of device tree blob (dtb) in system RAM.
  - x1 = 0 (reserved for future use)
  - x2 = 0 (reserved for future use)
  - x3 = 0 (reserved for future use)

- CPU mode

  All forms of interrupts must be masked in PSTATE.DAIF (Debug, SError,
  IRQ and FIQ).

  The CPU must be in either EL2 (RECOMMENDED in order to have access to
  the virtualisation extensions) or non-secure EL1.

- Caches, MMUs

  The MMU must be off.

  The instruction cache may be on or off, and must not hold any stale
  entries corresponding to the loaded kernel image.

  The address range corresponding to the loaded kernel image must be
  cleaned to the PoC.  In the presence of a system cache or other
  coherent masters with caches enabled, this will typically require
  cache maintenance by VA rather than set/way operations (see the
  cache-maintenance sketch after this list of conditions).
  System caches which respect the architected cache maintenance by VA
  operations must be configured and may be enabled.
  System caches which do not respect architected cache maintenance by VA
  operations (not recommended) must be configured and disabled.

- Architected timers

  CNTFRQ must be programmed with the timer frequency and CNTVOFF must
  be programmed with a consistent value on all CPUs.  If entering the
  kernel at EL1, CNTHCTL_EL2 must have EL1PCTEN (bit 0) set where
  available.

- Coherency

  All CPUs to be booted by the kernel must be part of the same coherency
  domain on entry to the kernel.  This may require IMPLEMENTATION DEFINED
  initialisation to enable the receiving of maintenance operations on
  each CPU.

- System registers

  All writable architected system registers at or below the exception
  level where the kernel image will be entered must be initialised by
  software at a higher exception level to prevent execution in an UNKNOWN
  state.

  - SCR_EL3.FIQ must have the same value across all CPUs the kernel is
    executing on.
  - The value of SCR_EL3.FIQ must be the same as the one present at boot
    time whenever the kernel is executing.

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.HCE (bit 8) must be initialised to 0b1.

  For systems with a GICv3 interrupt controller to be used in v3 mode:

  - If EL3 is present:

    - ICC_SRE_EL3.Enable (bit 3) must be initialised to 0b1.
    - ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b1.
    - ICC_CTLR_EL3.PMHE (bit 6) must be set to the same value across
      all CPUs the kernel is executing on, and must stay constant
      for the lifetime of the kernel.

  - If the kernel is entered at EL1:

    - ICC_SRE_EL2.Enable (bit 3) must be initialised to 0b1.
    - ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1.

  - The DT or ACPI tables must describe a GICv3 interrupt controller.

  For systems with a GICv3 interrupt controller to be used in
  compatibility (v2) mode:

  - If EL3 is present:

    - ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b0.

  - If the kernel is entered at EL1:

    - ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0.

  - The DT or ACPI tables must describe a GICv2 interrupt controller.

  For CPUs with pointer authentication functionality:

  - If EL3 is present:

    - SCR_EL3.APK (bit 16) must be initialised to 0b1
    - SCR_EL3.API (bit 17) must be initialised to 0b1

  - If the kernel is entered at EL1:

    - HCR_EL2.APK (bit 40) must be initialised to 0b1
    - HCR_EL2.API (bit 41) must be initialised to 0b1

  For CPUs with Activity Monitors Unit v1 (AMUv1) extension present:

  - If EL3 is present:

    - CPTR_EL3.TAM (bit 30) must be initialised to 0b0
    - CPTR_EL2.TAM (bit 30) must be initialised to 0b0
    - AMCNTENSET0_EL0 must be initialised to 0b1111
    - AMCNTENSET1_EL0 must be initialised to a platform specific value
      having 0b1 set for the corresponding bit for each of the auxiliary
      counters present.

  - If the kernel is entered at EL1:

    - AMCNTENSET0_EL0 must be initialised to 0b1111
    - AMCNTENSET1_EL0 must be initialised to a platform specific value
      having 0b1 set for the corresponding bit for each of the auxiliary
      counters present.

  For CPUs with the Fine Grained Traps (FEAT_FGT) extension present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.

  For CPUs with support for HCRX_EL2 (FEAT_HCX) present:

  - If EL3 is present and the kernel is entered at EL2:

    - SCR_EL3.HXEn (bit 38) must be initialised to 0b1.

  For CPUs with Advanced SIMD and floating point support:

  - If EL3 is present:

    - CPTR_EL3.TFP (bit 10) must be initialised to 0b0.

  - If EL2 is present and the kernel is entered at EL1:

    - CPTR_EL2.TFP (bit 10) must be initialised to 0b0.

  For CPUs with the Scalable Vector Extension (FEAT_SVE) present:

  - If EL3 is present:

    - CPTR_EL3.EZ (bit 8) must be initialised to 0b1.

    - ZCR_EL3.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  - If the kernel is entered at EL1 and EL2 is present:

    - CPTR_EL2.TZ (bit 8) must be initialised to 0b0.

    - CPTR_EL2.ZEN (bits 17:16) must be initialised to 0b11.

    - ZCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  For CPUs with the Scalable Matrix Extension (FEAT_SME):

  - If EL3 is present:

    - CPTR_EL3.ESM (bit 12) must be initialised to 0b1.

    - SCR_EL3.EnTP2 (bit 41) must be initialised to 0b1.

    - SMCR_EL3.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.

  - If the kernel is entered at EL1 and EL2 is present:

    - CPTR_EL2.TSM (bit 12) must be initialised to 0b0.

    - CPTR_EL2.SMEN (bits 25:24) must be initialised to 0b11.

    - SCTLR_EL2.EnTP2 (bit 60) must be initialised to 0b1.

    - SMCR_EL2.LEN must be initialised to the same value for all CPUs the
      kernel will execute on.
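
The cache-maintenance requirement referenced above can be met, for
example, with a by-VA clean loop like the following sketch; it assumes
the MMU is off (so VA equals PA), GCC-style inline assembly, and names
that are illustrative only::

  #include <stdint.h>

  static inline uint64_t read_ctr_el0(void)
  {
          uint64_t ctr;

          __asm__ volatile("mrs %0, ctr_el0" : "=r" (ctr));
          return ctr;
  }

  /* Clean [start, end) to the Point of Coherency with "dc cvac" by VA.
   * CTR_EL0.DminLine (bits [19:16]) is log2 of the smallest data cache
   * line size in 4-byte words. */
  static void clean_image_to_poc(uintptr_t start, uintptr_t end)
  {
          uintptr_t line = 4u << ((read_ctr_el0() >> 16) & 0xf);
          uintptr_t addr;

          for (addr = start & ~(line - 1); addr < end; addr += line)
                  __asm__ volatile("dc cvac, %0" : : "r" (addr) : "memory");

          __asm__ volatile("dsb sy" : : : "memory");
  }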

The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs.  All CPUs must
enter the kernel in the same exception level.  Where the values documented
disable traps it is permissible for these traps to be enabled so long as
those traps are handled transparently by higher exception levels as though
the values documented were set.
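
Putting the register requirements together, the final call on the
primary CPU might look like the sketch below.  Under the AAPCS64
procedure call standard the first four integer arguments are passed in
x0-x3, which matches the required settings; image_entry and dtb_phys are
assumed inputs.  A real boot loader typically performs this last jump
from assembly once the MMU and data cache are off, but the register
assignment is the same::

  #include <stdint.h>

  typedef void (*kernel_entry_fn)(uint64_t dtb, uint64_t res1,
                                  uint64_t res2, uint64_t res3);

  /* image_entry is the address of the first instruction of the loaded
   * Image (code0); dtb_phys is the physical address of the device tree
   * blob.  Caches must already be cleaned and interrupts masked. */
  static void call_kernel(uint64_t image_entry, uint64_t dtb_phys)
  {
          kernel_entry_fn entry = (kernel_entry_fn)(uintptr_t)image_entry;

          entry(dtb_phys, 0, 0, 0);       /* x0 = dtb, x1-x3 = 0 */
  }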

The boot loader is expected to enter the kernel on each CPU in the
following manner:

- The primary CPU must jump directly to the first instruction of the
  kernel image.  The device tree blob passed by this CPU must contain
  an 'enable-method' property for each cpu node.  The supported
  enable-methods are described below.

  It is expected that the bootloader will generate these device tree
  properties and insert them into the blob prior to kernel entry.

- CPUs with a "spin-table" enable-method must have a 'cpu-release-addr'
  property in their cpu node.  This property identifies a
  naturally-aligned 64-bit zero-initialised memory location.

  These CPUs should spin outside of the kernel in a reserved area of
  memory (communicated to the kernel by a /memreserve/ region in the
  device tree) polling their cpu-release-addr location, which must be
  contained in the reserved region.  A wfe instruction may be inserted
  to reduce the overhead of the busy-loop and a sev will be issued by
  the primary CPU.  When a read of the location pointed to by the
  cpu-release-addr returns a non-zero value, the CPU must jump to this
  value.  The value will be written as a single 64-bit little-endian
  value, so CPUs must convert the read value to their native endianness
  before jumping to it.  A sketch of this handshake appears at the end
  of this section.

- CPUs with a "psci" enable method should remain outside of
  the kernel (i.e. outside of the regions of memory described to the
  kernel in the memory node, or in a reserved area of memory described
  to the kernel by a /memreserve/ region in the device tree).  The
  kernel will issue CPU_ON calls as described in ARM document number ARM
  DEN 0022A ("Power State Coordination Interface System Software on ARM
  processors") to bring CPUs into the kernel.

  The device tree should contain a 'psci' node, as described in
  Documentation/devicetree/bindings/arm/psci.yaml.

- Secondary CPU general-purpose register settings:

  - x0 = 0 (reserved for future use)
  - x1 = 0 (reserved for future use)
  - x2 = 0 (reserved for future use)
  - x3 = 0 (reserved for future use)
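
The spin-table handshake described above can be sketched as follows.  In
practice the secondary side is a handful of assembly instructions placed
in the reserved region; it is shown here as C with inline assembly purely
for illustration, and the names are not taken from any real boot loader::

  #include <stdint.h>

  /* Secondary CPU: park in the reserved region until the kernel writes
   * a non-zero entry point to the cpu-release-addr location. */
  static void secondary_spin(volatile uint64_t *cpu_release_addr)
  {
          uint64_t entry;

          while ((entry = *cpu_release_addr) == 0)
                  __asm__ volatile("wfe");

          /* The value is written little-endian; a big-endian CPU must
           * byte-swap it first.  Jump with x0-x3 zeroed, as required
           * for secondary CPUs. */
          ((void (*)(uint64_t, uint64_t, uint64_t, uint64_t))
                  (uintptr_t)entry)(0, 0, 0, 0);
  }

  /* Releasing side (done by the kernel on the primary CPU, shown for
   * completeness): publish the entry point, then wake the spinning
   * CPUs with an event. */
  static void release_secondaries(volatile uint64_t *cpu_release_addr,
                                  uint64_t entry_point)
  {
          *cpu_release_addr = entry_point;
          __asm__ volatile("dsb sy; sev" : : : "memory");
  }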