# General architecture dependent options

	tristate "OProfile system profiling"
	depends on HAVE_OPROFILE
	select RING_BUFFER_ALLOW_SWAP
	  OProfile is a profiling system capable of profiling the
	  whole system, including the kernel, kernel modules, libraries,
	  and applications.

config OPROFILE_EVENT_MULTIPLEX
	bool "OProfile multiplexing support (EXPERIMENTAL)"
	depends on OPROFILE && X86
	  The number of hardware counters is limited. The multiplexing
	  feature enables OProfile to gather more events than the
	  hardware provides counters for. This is realized by switching
	  between events at a user-specified time interval.

config OPROFILE_NMI_TIMER
	depends on PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !PPC64

	depends on HAVE_KPROBES
	  Kprobes allows you to trap at almost any kernel address and
	  execute a callback function. register_kprobe() establishes
	  a probepoint and specifies the callback. Kprobes is useful
	  for kernel debugging, non-intrusive instrumentation and testing.
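
# The help text above describes the register_kprobe() flow. As a rough,
# illustrative sketch (not part of this Kconfig), a minimal module that
# registers a pre-handler might look like the following; the probed symbol
# name and the handler body are assumptions chosen for the example.

#include <linux/module.h>
#include <linux/kprobes.h>

/* Pre-handler: runs just before the probed instruction executes. */
static int example_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("kprobe hit at %p\n", p->addr);
	return 0;	/* let the probed instruction run normally */
}

static struct kprobe example_kp = {
	.symbol_name	= "do_fork",	/* assumed probe point, purely illustrative */
	.pre_handler	= example_pre_handler,
};

static int __init example_init(void)
{
	/* Establish the probepoint and attach the callback. */
	return register_kprobe(&example_kp);
}

static void __exit example_exit(void)
{
	unregister_kprobe(&example_kp);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");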

	bool "Optimize very unlikely/likely branches"
	depends on HAVE_ARCH_JUMP_LABEL
	  This option enables a transparent branch optimization that
	  makes certain almost-always-true or almost-always-false branch
	  conditions even cheaper to execute within the kernel.

	  Certain performance-sensitive kernel code, such as trace points,
	  scheduler functionality, networking code and KVM have such
	  branches and include support for this optimization technique.

	  If it is detected that the compiler has support for "asm goto",
	  the kernel will compile such branches with just a nop
	  instruction. When the condition flag is toggled to true, the
	  nop will be converted to a jump instruction to execute the
	  conditional block of instructions.

	  This technique lowers overhead and stress on the branch prediction
	  of the processor and generally makes the kernel faster. Updating
	  the condition is slower, but such updates are rare.

	  ( On 32-bit x86, the necessary options added to the compiler
	    flags may increase the size of the kernel slightly. )
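
# As a hedged sketch of the mechanism described above (using the static_key
# API of this kernel generation; the key and function names below are
# invented for illustration), a rarely-enabled feature flag might be wired
# up like this:

#include <linux/kernel.h>
#include <linux/jump_label.h>

/* Starts out false: with "asm goto" the branch compiles down to a nop. */
static struct static_key example_feature_key = STATIC_KEY_INIT_FALSE;

void example_hot_path(void)
{
	if (static_key_false(&example_feature_key)) {
		/* Almost never taken; only reached after the key is enabled. */
		pr_info("rare feature path\n");
	}
	/* ... common fast-path work ... */
}

void example_enable_feature(void)
{
	/* Slow update: patches the nop into a jump at every branch site. */
	static_key_slow_inc(&example_feature_key);
}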

	depends on KPROBES && HAVE_OPTPROBES

config KPROBES_ON_FTRACE
	depends on KPROBES && HAVE_KPROBES_ON_FTRACE
	depends on DYNAMIC_FTRACE_WITH_REGS
	  If the function tracer is enabled and the arch supports full
	  passing of pt_regs to function tracing, then kprobes can
	  optimize on top of function tracing.

	  Uprobes is the user-space counterpart to kprobes: it enables
	  instrumentation applications (such as 'perf probe')
	  to establish unintrusive probes in user-space binaries and
	  libraries, by executing handler functions when the probes
	  are hit by user-space applications.

	  ( These probes come in the form of single-byte breakpoints,
	    managed by the kernel and kept transparent to the probed
	    application. )

config HAVE_64BIT_ALIGNED_ACCESS
	def_bool 64BIT && !HAVE_EFFICIENT_UNALIGNED_ACCESS
	  Some architectures require 64 bit accesses to be 64 bit
	  aligned, which also requires structs containing 64 bit values
	  to be 64 bit aligned too. This includes some 32 bit
	  architectures which can do 64 bit accesses, as well as 64 bit
	  architectures without unaligned access.

	  This symbol should be selected by an architecture if 64 bit
	  accesses are required to be 64 bit aligned in this way even
	  though it is not a 64 bit architecture.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.

config HAVE_EFFICIENT_UNALIGNED_ACCESS
	  Some architectures are unable to perform unaligned accesses
	  without the use of get_unaligned/put_unaligned. Others are
	  unable to perform such accesses efficiently (e.g. trap on
	  unaligned access and require fixing it up in the exception
	  handler).

	  This symbol should be selected by an architecture if it can
	  perform unaligned accesses efficiently to allow different
	  code paths to be selected for these cases. Some network
	  drivers, for example, could opt to not fix up alignment
	  problems with received packets if doing so would not help
	  them much.

	  See Documentation/unaligned-memory-access.txt for more
	  information on the topic of unaligned memory accesses.
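
# To illustrate the get_unaligned/put_unaligned helpers mentioned above,
# here is a small, hedged sketch of reading a field from a byte buffer at
# an arbitrary offset; the buffer layout and offset are assumptions made
# only for the example.

#include <linux/types.h>
#include <asm/unaligned.h>

/* Read a 32-bit little-endian length field that may sit at any byte offset. */
static u32 example_read_len(const u8 *buf)
{
	/*
	 * On architectures without efficient unaligned access this expands
	 * to byte-by-byte loads; on the others it becomes a plain load.
	 */
	return get_unaligned_le32(buf + 3);	/* offset 3 is illustrative */
}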

config ARCH_USE_BUILTIN_BSWAP
	  Modern versions of GCC (since 4.4) have builtin functions
	  for handling byte-swapping. Using these, instead of the old
	  inline assembler that the architecture code provides in the
	  __arch_bswapXX() macros, allows the compiler to see what's
	  happening and offers more opportunity for optimisation. In
	  particular, the compiler will be able to combine the byteswap
	  with a nearby load or store and use load-and-swap or
	  store-and-swap instructions if the architecture has them. It
	  should almost *never* result in code which is worse than the
	  hand-coded assembler in <asm/swab.h>. But just in case it
	  does, the use of the builtins is optional.

	  Any architecture with load-and-swap or store-and-swap
	  instructions should set this. And it shouldn't hurt to set it
	  on architectures that don't have such instructions.
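
# As a brief, hedged illustration of the builtin the text refers to (shown
# outside the __arch_bswapXX() plumbing, with an invented helper name):

#include <linux/types.h>

/* Convert a big-endian on-the-wire value to host order on a little-endian CPU. */
static inline u32 example_wire_to_host(u32 wire)
{
	/*
	 * With ARCH_USE_BUILTIN_BSWAP the swab helpers end up using
	 * __builtin_bswap32(), so the compiler may fuse the swap with a
	 * neighbouring load or store if the ISA has such instructions.
	 */
	return __builtin_bswap32(wire);
}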

	depends on KPROBES && HAVE_KRETPROBES

config USER_RETURN_NOTIFIER
	depends on HAVE_USER_RETURN_NOTIFIER
	  Provide a kernel-internal notification when a cpu is about to
	  switch to user mode.

config HAVE_IOREMAP_PROT

config HAVE_KRETPROBES

config HAVE_OPTPROBES

config HAVE_KPROBES_ON_FTRACE

config HAVE_NMI_WATCHDOG

# An arch should select this if it provides all these things:
# task_pt_regs() in asm/processor.h or asm/ptrace.h
# arch_has_single_step() if there is hardware single-step support
# arch_has_block_step() if there is hardware block-step support
# asm/syscall.h supplying asm-generic/syscall.h interface
# linux/regset.h user_regset interfaces
# CORE_DUMP_USE_REGSET #define'd in linux/elf.h
# TIF_SYSCALL_TRACE calls tracehook_report_syscall_{entry,exit}
# TIF_NOTIFY_RESUME calls tracehook_notify_resume()
# signal delivery calls tracehook_signal_handler()

config HAVE_ARCH_TRACEHOOK

config HAVE_DMA_ATTRS

config HAVE_DMA_CONTIGUOUS

config GENERIC_SMP_IDLE_THREAD

config GENERIC_IDLE_POLL_SETUP

# Select if arch init_task initializer is different to init/init_task.c
config ARCH_INIT_TASK

# Select if arch has its private alloc_task_struct() function
config ARCH_TASK_STRUCT_ALLOCATOR

# Select if arch has its private alloc_thread_info() function
config ARCH_THREAD_INFO_ALLOCATOR

config HAVE_REGS_AND_STACK_ACCESS_API
	  This symbol should be selected by an architecture if it supports
	  the API needed to access registers and stack entries from pt_regs,
	  declared in asm/ptrace.h.
	  For example, the kprobes-based event tracer needs this API.
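
# A hedged sketch of the pt_regs accessor API this symbol advertises (the
# register name and stack index below are illustrative; exact register names
# are per-architecture):

#include <linux/kernel.h>
#include <linux/ptrace.h>
#include <asm/ptrace.h>

/* Dump an argument register and a word from the kernel stack at a probe hit. */
static void example_dump_state(struct pt_regs *regs)
{
	unsigned long reg0, stack0;

	/* regs_query_register_offset() maps a register name to its pt_regs offset. */
	reg0 = regs_get_register(regs, regs_query_register_offset("di"));	/* "di" assumed (x86-64) */
	/* Nth entry on the kernel stack at the probed point. */
	stack0 = regs_get_kernel_stack_nth(regs, 0);

	pr_info("reg0=%lx stack0=%lx\n", reg0, stack0);
}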

	  The <linux/clk.h> calls support software clock gating and
	  thus are a key power management tool on many systems.

config HAVE_DMA_API_DEBUG

config HAVE_HW_BREAKPOINT
	depends on PERF_EVENTS

config HAVE_MIXED_BREAKPOINTS_REGS
	depends on HAVE_HW_BREAKPOINT
	  Depending on the arch implementation of hardware breakpoints,
	  some of them have separate registers for data and instruction
	  breakpoint addresses, others have mixed registers to store
	  them but define the access type in a control register.
	  Select this option if your arch implements breakpoints under the
	  second type.

config HAVE_USER_RETURN_NOTIFIER

config HAVE_PERF_EVENTS_NMI
	  System hardware can generate an NMI using the perf event
	  subsystem. It also supports calculating CPU cycle events
	  to determine how many clock cycles have elapsed in a given period.

config HAVE_PERF_REGS
	  Support selective register dumps for perf events. This includes
	  bit-mapping of each register and a unique architecture id.

config HAVE_PERF_USER_STACK_DUMP
	  Support user stack dumps for perf event samples. This needs
	  access to the user stack pointer which is not unified across
	  architectures.

config HAVE_ARCH_JUMP_LABEL

config HAVE_RCU_TABLE_FREE

config ARCH_HAVE_NMI_SAFE_CMPXCHG

config HAVE_ALIGNED_STRUCT_PAGE
	  This makes sure that struct pages are double word aligned and that
	  e.g. the SLUB allocator can perform double word atomic operations
	  on a struct page for better performance. However, selecting this
	  might increase the size of a struct page by a word.
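
# A hedged sketch of the kind of double-word atomic the help text alludes to,
# mirroring how SLUB updates two adjacent struct page words; the structure,
# field names and availability of cmpxchg_double() (an architecture that
# selects HAVE_CMPXCHG_DOUBLE) are assumptions made for the example.

#include <linux/types.h>
#include <linux/compiler.h>
#include <linux/atomic.h>

/* Two machine words that must be updated together, double-word aligned. */
struct example_pair {
	void *freelist;
	unsigned long counters;
} __aligned(2 * sizeof(void *));

static bool example_update(struct example_pair *p,
			   void *old_f, unsigned long old_c,
			   void *new_f, unsigned long new_c)
{
	/* Succeeds only if both words still hold the expected old values. */
	return cmpxchg_double(&p->freelist, &p->counters,
			      old_f, old_c, new_f, new_c);
}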

config HAVE_CMPXCHG_LOCAL

config HAVE_CMPXCHG_DOUBLE

config ARCH_WANT_IPC_PARSE_VERSION

config ARCH_WANT_COMPAT_IPC_PARSE_VERSION

config ARCH_WANT_OLD_COMPAT_IPC
	select ARCH_WANT_COMPAT_IPC_PARSE_VERSION

config HAVE_ARCH_SECCOMP_FILTER
	  An arch should select this symbol if it provides all of these things:
	  - syscall_get_arguments()
	  - syscall_set_return_value()
	  - SIGSYS siginfo_t support
	  - secure_computing is called from a ptrace_event()-safe context
	  - secure_computing return value is checked and a return value of -1
	    results in the system call being skipped immediately.
	  - seccomp syscall wired up

	  For best performance, an arch should use seccomp_phase1 and
	  seccomp_phase2 directly. It should call seccomp_phase1 for all
	  syscalls if TIF_SECCOMP is set, but seccomp_phase1 does not
	  need to be called from a ptrace-safe context. It must then
	  call seccomp_phase2 if seccomp_phase1 returns anything other
	  than SECCOMP_PHASE1_OK or SECCOMP_PHASE1_SKIP.

	  As an additional optimization, an arch may provide seccomp_data
	  directly to seccomp_phase1; this avoids multiple calls
	  to the syscall_xyz helpers for every syscall.
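
# A hedged sketch of the fast-path calling convention described above, as an
# arch's syscall-entry slow path might use it. The function and variable
# names are invented for illustration; seccomp_phase1()/seccomp_phase2() and
# the SECCOMP_PHASE1_* values are the helpers this help text refers to.

#include <linux/seccomp.h>
#include <linux/thread_info.h>
#include <linux/sched.h>

/* Returns the syscall number to run, or -1 if the syscall should be skipped. */
static long example_syscall_entry_seccomp(struct seccomp_data *sd, long syscall_nr)
{
	u32 phase1;

	if (!test_thread_flag(TIF_SECCOMP))
		return syscall_nr;

	/* Phase 1 may be called from a context that is not ptrace-safe. */
	phase1 = seccomp_phase1(sd);
	if (phase1 == SECCOMP_PHASE1_OK)
		return syscall_nr;
	if (phase1 == SECCOMP_PHASE1_SKIP)
		return -1;

	/* Anything else needs the ptrace-safe phase 2. */
	if (seccomp_phase2(phase1) == -1)
		return -1;	/* the filter asked for the syscall to be skipped */
	return syscall_nr;
}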

config SECCOMP_FILTER
	depends on HAVE_ARCH_SECCOMP_FILTER && SECCOMP && NET
	  Enable tasks to build secure computing environments defined
	  in terms of Berkeley Packet Filter programs which implement
	  task-defined system call filtering policies.

	  See Documentation/prctl/seccomp_filter.txt for details.
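
# For reference, a hedged user-space sketch of installing such a BPF filter
# via prctl(2); the filtered syscall and the policy are arbitrary examples,
# and a real filter would normally also check seccomp_data.arch.

#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void)
{
	struct sock_filter filter[] = {
		/* Load the syscall number from seccomp_data. */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
		/* Kill the task if it calls ptrace(2); allow everything else. */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_ptrace, 0, 1),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
	};
	struct sock_fprog prog = {
		.len = sizeof(filter) / sizeof(filter[0]),
		.filter = filter,
	};

	/* Required so an unprivileged task may install a filter. */
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
		return 1;
	if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog))
		return 1;

	printf("filter installed\n");
	return 0;
}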

config HAVE_CC_STACKPROTECTOR
	  An arch should select this symbol if:
	  - its compiler supports the -fstack-protector option
	  - it has implemented a stack canary (e.g. __stack_chk_guard)

config CC_STACKPROTECTOR
	  Set when a stack-protector mode is enabled, so that the build
	  can enable kernel-side support for the GCC feature.

	prompt "Stack Protector buffer overflow detection"
	depends on HAVE_CC_STACKPROTECTOR
	default CC_STACKPROTECTOR_NONE
	  This option turns on the "stack-protector" GCC feature. This
	  feature puts, at the beginning of functions, a canary value on
	  the stack just before the return address, and validates
	  the value just before actually returning. Stack-based buffer
	  overflows (that need to overwrite this return address) now also
	  overwrite the canary, which gets detected and the attack is then
	  neutralized via a kernel panic.

config CC_STACKPROTECTOR_NONE
	  Disable the "stack-protector" GCC feature.

config CC_STACKPROTECTOR_REGULAR
	select CC_STACKPROTECTOR
	  Functions will have the stack-protector canary logic added if they
	  have an 8-byte or larger character array on the stack.

	  This feature requires gcc version 4.2 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 3% of all kernel functions, which increases kernel code size

config CC_STACKPROTECTOR_STRONG
	select CC_STACKPROTECTOR
	  Functions will have the stack-protector canary logic added in any
	  of the following conditions (illustrated in the sketch below):

	  - local variable's address used as part of the right hand side of an
	    assignment or function argument
	  - local variable is an array (or union containing an array),
	    regardless of array type or length
	  - uses register local variables

	  This feature requires gcc version 4.9 or above, or a distribution
	  gcc with the feature backported ("-fstack-protector-strong").

	  On an x86 "defconfig" build, this feature adds canary checks to
	  about 20% of all kernel functions, which increases the kernel code
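
# A hedged sketch of the conditions listed above, as C functions that
# -fstack-protector-strong would typically instrument; the function names
# and bodies are invented purely to illustrate each trigger.

/* Hypothetical helper that receives the address of a local variable. */
static void fill(int *p)
{
	*p = 42;
}

/* 1. A local variable's address is used as a function argument. */
int takes_address(void)
{
	int value = 0;

	fill(&value);
	return value;
}

/* 2. A local array, regardless of element type or length. */
int has_array(void)
{
	int table[4] = { 1, 2, 3, 4 };

	return table[3];
}

/* 3. A register local variable. */
long uses_register_local(long x)
{
	register long tmp = x * 2;

	return tmp + 1;
}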

config HAVE_CONTEXT_TRACKING
	  Provide kernel/user boundary probes necessary for subsystems
	  that need it, such as userspace RCU extended quiescent state.
	  Syscalls need to be wrapped inside user_exit()-user_enter() through
	  the slow path using the TIF_NOHZ flag. Exception handlers must be
	  wrapped as well. Irqs are already protected inside
	  rcu_irq_enter/rcu_irq_exit() but preemption or signal handling on
	  irq exit still needs to be protected.
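
# A hedged sketch of the wrapping described above, as an arch's syscall slow
# path might perform it; everything except user_exit()/user_enter() is an
# invented name used only for illustration.

#include <linux/context_tracking.h>

/* Called on the syscall slow path when TIF_NOHZ forces the boxed entry/exit. */
static void example_syscall_slowpath(void)
{
	/* Tell context tracking we left user mode before doing kernel work. */
	user_exit();

	/* ... handle the syscall ... */

	/* Re-enter the user-mode (RCU extended quiescent) state on return. */
	user_enter();
}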

config HAVE_VIRT_CPU_ACCOUNTING

config HAVE_VIRT_CPU_ACCOUNTING_GEN
	  With VIRT_CPU_ACCOUNTING_GEN, cputime_t becomes 64-bit.
	  Before enabling this option, arch code must be audited
	  to ensure there are no races in concurrent read/write of
	  cputime_t. For example, reading/writing 64-bit cputime_t on
	  some 32-bit arches may require multiple accesses, so proper
	  locking is needed to protect against concurrent accesses.

config HAVE_IRQ_TIME_ACCOUNTING
	  Archs need to ensure they use a sufficiently high resolution clock to
	  support irq time accounting and then call enable_sched_clock_irqtime().
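
# A hedged sketch of the arch-side opt-in mentioned above; the init hook
# name and the "fine-grained clock" check are assumptions for the example,
# while enable_sched_clock_irqtime() is the scheduler call named in the
# help text.

#include <linux/init.h>
#include <linux/sched.h>

/* Hypothetical arch init hook: opt in once sched_clock() is known to be fine-grained. */
static void __init example_enable_irqtime(bool sched_clock_is_fine_grained)
{
	if (sched_clock_is_fine_grained)
		enable_sched_clock_irqtime();
}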

config HAVE_ARCH_TRANSPARENT_HUGEPAGE

config HAVE_ARCH_HUGE_VMAP

config HAVE_ARCH_SOFT_DIRTY

config HAVE_MOD_ARCH_SPECIFIC
	  The arch uses struct mod_arch_specific to store data. Many arches
	  just need a simple module loader without arch specific data - those
	  should not enable this.

config MODULES_USE_ELF_RELA
	  Modules only use ELF RELA relocations. Modules with ELF REL
	  relocations will give an error.

config MODULES_USE_ELF_REL
	  Modules only use ELF REL relocations. Modules with ELF RELA
	  relocations will give an error.

config HAVE_UNDERSCORE_SYMBOL_PREFIX
	  Some architectures generate an _ in front of C symbols; things like
	  module loading and assembly files need to know about this.

config HAVE_IRQ_EXIT_ON_IRQ_STACK
	  The architecture executes not only the irq handler on the irq stack
	  but also irq_exit(). This way we can process softirqs on this irq
	  stack instead of switching to a new one when we call __do_softirq()
	  at the end of a hardirq.
	  This spares a stack switch and improves cache usage on softirq
	  processing.

config PGTABLE_LEVELS

config ARCH_HAS_ELF_RANDOMIZE
	  An architecture supports choosing randomized locations for
	  stack, mmap, brk, and ET_DYN. Defined functions:
	  - arch_randomize_brk()

config CLONE_BACKWARDS
	  Architecture has tls passed as the 4th argument of clone(2),

config CLONE_BACKWARDS2
	  Architecture has the first two arguments of clone(2) swapped.

config CLONE_BACKWARDS3
	  Architecture has tls passed as the 3rd argument of clone(2),

config ODD_RT_SIGACTION
	  Architecture has unusual rt_sigaction(2) arguments

config OLD_SIGSUSPEND
	  Architecture has old sigsuspend(2) syscall, of one-argument variety

config OLD_SIGSUSPEND3
	  Even weirder antique ABI - three-argument sigsuspend(2)

	  Architecture has old sigaction(2) syscall. Nope, not the same
	  as OLD_SIGSUSPEND | OLD_SIGSUSPEND3 - alpha has sigsuspend(2),
	  but a fairly different variant of sigaction(2), thanks to OSF/1

config COMPAT_OLD_SIGACTION

source "kernel/gcov/Kconfig"