Greg Hackmann [Tue, 16 Sep 2014 18:13:31 +0000 (11:13 -0700)]
arm64: ranchu: enable ARMV7_COMPAT
Change-Id: Id6162ab5debc431d7577b65d9bc89e9b22d00e72
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Greg Hackmann [Tue, 16 Sep 2014 18:11:13 +0000 (11:11 -0700)]
Merge branch 'android-3.10' into android-goldfish-3.10
Conflicts:
arch/arm64/Kconfig
arch/arm64/include/asm/ptrace.h
arch/arm64/kernel/Makefile
arch/arm64/kernel/debug-monitors.c
arch/arm64/kernel/traps.c
Change-Id: Ifb43a8bad9636ee48ff52a29592af95e959a4399
Greg Hackmann [Tue, 5 Aug 2014 23:14:27 +0000 (16:14 -0700)]
arm64: restrict effects of ARMV7_COMPAT_CPUINFO to ARMv7 tasks
Since ARMV7_COMPAT_CPUINFO only exists to support existing ARMv7
binaries, restrict its effects to compat tasks
Bug: 16819658
Change-Id: I1092de596c7822d23f5f3f8a05b417a3cb49f593
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Alex Van Brunt [Thu, 20 Feb 2014 18:46:21 +0000 (10:46 -0800)]
arm64: report vfpv3 instead of vfpv3d16
vfpv3 is the correct version for an ARMv8 processor and it is the
version reported by an A15.
Change-Id: I486f3af21a352c27775888cca332a48d7e0c59ce
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/370076
Alex Van Brunt [Thu, 9 Jan 2014 20:51:05 +0000 (12:51 -0800)]
arm64: cpuinfo: ARMv7 compatible cpuinfo option
To be backwards compatible with the output of cpuinfo on an ARMv7,
print the features that were optional in ARMv7 but are required in
ARMv8.
Change-Id: Ic728f71be4a971adc79ef552f25cfbf95a4dac29
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/366095
Reviewed-by: Richard Wiley <rwiley@nvidia.com>
Tested-by: Oskari Jaaskelainen <oskarij@nvidia.com>
Rich Wiley [Wed, 4 Jun 2014 18:44:03 +0000 (11:44 -0700)]
arm64: enable deprecated SETEND instruction in SCTLR compat config
Change-Id: I703d4843f8aab2ec63324f04cc13aaabae88e163
Signed-off-by: Rich Wiley <rwiley@nvidia.com>
Reviewed-on: http://git-master/r/422174
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
Rich Wiley [Wed, 4 Jun 2014 18:41:53 +0000 (11:41 -0700)]
arm64: make SCTLR compat config depend on CONFIG_ARMV7_COMPAT
Conflicts:
arch/arm64/mm/proc.S
Change-Id: I76e0067839c96e3082b42c80d3fc670cf3d371b5
Signed-off-by: Rich Wiley <rwiley@nvidia.com>
Reviewed-on: http://git-master/r/422173
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
Reviewed-by: Bharat Nihalani <bnihalani@nvidia.com>
Tested-by: Bharat Nihalani <bnihalani@nvidia.com>
Alex Van Brunt [Tue, 28 Jan 2014 20:40:10 +0000 (12:40 -0800)]
arm64: optionally set CP15BEN in SCTLR
Setting CP15BEN allows legacy applications running in AArch32 mode
that use CP15 DMB and similar instructions to continue running.
Change-Id: If76d3c6ee12865ff8c4b4e7aed01146bead87773
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/366096
Reviewed-by: Richard Wiley <rwiley@nvidia.com>
Tested-by: Oskari Jaaskelainen <oskarij@nvidia.com>
Rich Wiley [Mon, 10 Mar 2014 21:01:06 +0000 (14:01 -0700)]
arm64: fix SWP instruction emulation
initial variable values may get overwritten
if they're listed as an output in ASM, even if
they're not explicitly written to.
Change-Id: I2a239e1819850a2a7005a46e83d82deac4ca303b
Signed-off-by: Rich Wiley <rwiley@nvidia.com>
Reviewed-on: http://git-master/r/379646
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Li Li (SW-TEGRA) <lli5@nvidia.com>
Tested-by: Li Li (SW-TEGRA) <lli5@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alexander Van Brunt <avanbrunt@nvidia.com>
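For illustration, a minimal user-space sketch of the constraint hazard described above, assuming GCC extended asm (this is not the patch itself): a write-only "=r" output operand lets the compiler discard the C initializer, whereas a read-write "+r" operand preserves it.

    #include <stdio.h>

    int main(void)
    {
        int discarded = 42;   /* "=r" marks the operand write-only: the 42 may never reach the register */
        int preserved = 42;   /* "+r" feeds the initial value in and reads the result back */

        asm volatile("" : "=r" (discarded));  /* value afterwards is indeterminate */
        asm volatile("" : "+r" (preserved));  /* value afterwards is still 42 */

        printf("discarded=%d preserved=%d\n", discarded, preserved);
        return 0;
    }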
Alex Van Brunt [Fri, 21 Feb 2014 02:18:53 +0000 (18:18 -0800)]
arm64: add fault handling to SWP emulation
Add an exception table and fixup for SWP/SWPB instruction emulation.
This prevents the kernel from panicking when emulating a SWP/SWPB
instruction that accesses unmapped memory.
Change-Id: I4a9ca34fa161a0f306cdb663827d9bee39cec733
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/370278
Alex Van Brunt [Wed, 19 Feb 2014 01:50:57 +0000 (17:50 -0800)]
arm64: fix a warning and a typo in SWP emulation
The store-release-exclusive is missing the "L" that makes it a
release rather than a normal store-exclusive.
Remove a variable that is not used and causes a compiler warning.
Change-Id: I91633a352b805ed9af450b632c9ee394235637c4
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/369076
Reviewed-by: Richard Wiley <rwiley@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Alex Van Brunt [Thu, 30 Jan 2014 23:10:39 +0000 (15:10 -0800)]
arm64: emulate the swp/swpb instruction
The swp and swpb instructions were deprecated in ARMv6 and obsoleted
in ARMv8. Despite this, many applications rely on
these instructions.
This patch starts with the version present in the arm architecture.
However, it uses the ldx*()/stx*() functions to implement the handler
in C code. It also removes a lot of code that is not needed.
Change-Id: I6882fbe5f71bfa8f9e9a75d067b2111188c6f2fa
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/366097
Reviewed-by: Richard Wiley <rwiley@nvidia.com>
Tested-by: Oskari Jaaskelainen <oskarij@nvidia.com>
Conflicts:
arch/arm64/Kconfig
arch/arm64/kernel/Makefile
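As a rough sketch of the exclusive-load/store loop such an emulation is built around (illustrative only: the function name is made up, and the in-tree handler also validates and translates the user address and handles faults):

    #include <linux/types.h>

    static inline void emulate_swp32(u32 *addr, u32 newval, u32 *oldval)
    {
        u32 prev;
        int res;

        do {
            asm volatile(
            "    ldxr    %w1, %2\n"        /* exclusive load of the old value */
            "    stlxr   %w0, %w3, %2\n"   /* store-release the new value; res == 0 on success */
            : "=&r" (res), "=&r" (prev), "+Q" (*addr)
            : "r" (newval)
            : "memory");
        } while (res);

        *oldval = prev;
    }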
Alex Van Brunt [Tue, 11 Feb 2014 18:08:51 +0000 (10:08 -0800)]
arm64: a backwards compatible config option
Create a config option that, when selected, configures the kernel to be
as backwards compatible as possible with kernels that ran on an ARMv7
processor.
Change-Id: I7cd67e6d4174335f9a67aba2a39dfd993f240c27
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/366094
Reviewed-by: Richard Wiley <rwiley@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Tested-by: Oskari Jaaskelainen <oskarij@nvidia.com>
Peng Du [Wed, 23 Jul 2014 18:40:33 +0000 (11:40 -0700)]
arm64: kernel: check mode for get_user in undefinstr
get_user() should be called only for an undefined instruction trapped in user mode.
Change-Id: Ia654783de0cf72abac6847ac9630236f9f0d6ebb
Signed-off-by: Peng Du <pdu@nvidia.com>
Reviewed-on: http://git-master/r/441348
Reviewed-by: Thomas Cherry <tcherry@nvidia.com>
Reviewed-by: Bo Yan <byan@nvidia.com>
Alex Van Brunt [Wed, 29 Jan 2014 21:41:01 +0000 (13:41 -0800)]
arm64: add undefined instruction handler hooks
Add undefined instruction handler hooks similar to the system in the
arm architecture. One difference is that hooks can only be added at
boot time and they can never be removed. This removes the need for
the spinlock in the handler.
Change-Id: I4684937f5209ca2a64ee63947bb2ab6411ae14f7
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/361736
Reviewed-on: http://git-master/r/365059
Reviewed-by: Richard Wiley <rwiley@nvidia.com>
Tested-by: Oskari Jaaskelainen <oskarij@nvidia.com>
Alex Van Brunt [Wed, 29 Jan 2014 21:45:20 +0000 (13:45 -0800)]
arm64: ptrace: add is_wide_instruction() macro
Add the is_wide_instruction() macro. This was copied from the arm
architecture.
Change-Id: I28f83b47f5c587fe778dc2846df77673f8dd918b
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/361737
Reviewed-by: Peng Du <pdu@nvidia.com>
Reviewed-on: http://git-master/r/365060
Reviewed-by: Richard Wiley <rwiley@nvidia.com>
Tested-by: Oskari Jaaskelainen <oskarij@nvidia.com>
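For reference, the arm definition being copied boils down to a single comparison on the first Thumb halfword; a small stand-alone demo (the macro body mirrors the arm header, and the opcodes shown are ordinary Thumb examples):

    #include <stdio.h>

    /* A Thumb instruction whose first halfword is >= 0xe800 uses a 32-bit
     * ("wide") Thumb-2 encoding; anything below that is a 16-bit encoding. */
    #define is_wide_instruction(instr)  ((unsigned)(instr) >= 0xe800)

    int main(void)
    {
        printf("0x4770 (bx lr) wide? %d\n", is_wide_instruction(0x4770));    /* 0 */
        printf("0xf000 (bl prefix) wide? %d\n", is_wide_instruction(0xf000)); /* 1 */
        return 0;
    }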
Alex Van Brunt [Thu, 30 Jan 2014 23:07:34 +0000 (15:07 -0800)]
arm64: copy conditional instruction tests from arm
Copy the code that is used to compute if a conditional instruction
would be executed.
This code is needed to support A32 instruction emulation in the
kernel.
Change-Id: I0bab7537efd8cc317bd20995cd36961cf95165aa
Signed-off-by: Alex Van Brunt <avanbrunt@nvidia.com>
Reviewed-on: http://git-master/r/362154
Reviewed-on: http://git-master/r/365061
Reviewed-by: Richard Wiley <rwiley@nvidia.com>
Tested-by: Oskari Jaaskelainen <oskarij@nvidia.com>
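The idea being copied can be illustrated with a simplified, stand-alone condition-code evaluator; the kernel code uses a 16-entry lookup table rather than a switch, so treat this only as a sketch of the semantics:

    #include <stdbool.h>
    #include <stdio.h>

    /* Evaluate an A32 condition field (instruction bits [31:28]) against
     * the N, Z, C, V flags of the saved PSR. */
    static bool condition_passes(unsigned cond, bool n, bool z, bool c, bool v)
    {
        switch (cond & 0xf) {
        case 0x0: return z;               /* EQ */
        case 0x1: return !z;              /* NE */
        case 0x2: return c;               /* CS/HS */
        case 0x3: return !c;              /* CC/LO */
        case 0x4: return n;               /* MI */
        case 0x5: return !n;              /* PL */
        case 0x6: return v;               /* VS */
        case 0x7: return !v;              /* VC */
        case 0x8: return c && !z;         /* HI */
        case 0x9: return !c || z;         /* LS */
        case 0xa: return n == v;          /* GE */
        case 0xb: return n != v;          /* LT */
        case 0xc: return !z && (n == v);  /* GT */
        case 0xd: return z || (n != v);   /* LE */
        default:  return true;            /* AL and the unconditional space */
        }
    }

    int main(void)
    {
        /* With Z set (e.g. after comparing equal values) EQ passes, NE does not. */
        printf("EQ=%d NE=%d\n",
               condition_passes(0x0, false, true, false, false),
               condition_passes(0x1, false, true, false, false));
        return 0;
    }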
Will Deacon [Sat, 16 Mar 2013 08:48:13 +0000 (08:48 +0000)]
arm64: debug: consolidate software breakpoint handlers
The software breakpoint handlers are hooked in directly from ptrace,
which makes it difficult to add additional handlers for things like
kprobes and kgdb.
This patch moves the handling code into debug-monitors.c, where we can
dispatch to different debug subsystems more easily.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Greg Hackmann [Thu, 11 Sep 2014 21:55:20 +0000 (14:55 -0700)]
Merge branch 'android-3.10' into android-goldfish-3.10
Conflicts:
arch/arm64/kernel/setup.c
Change-Id: Id0c3a53d893313581df8f34671a6d0ae43947652
Ard Biesheuvel [Mon, 3 Mar 2014 07:34:46 +0000 (07:34 +0000)]
arm64: advertise ARMv8 extensions to 32-bit compat ELF binaries
This adds support for advertising the presence of ARMv8 Crypto
Extensions in the AArch32 execution state to 32-bit ELF binaries
running in 32-bit compat mode under the arm64 kernel.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ard Biesheuvel [Mon, 3 Mar 2014 07:34:45 +0000 (07:34 +0000)]
arm64: add AT_HWCAP2 support for 32-bit compat
Add support for the ELF auxv entry AT_HWCAP2 when running 32-bit
ELF binaries in compat mode.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ard Biesheuvel [Mon, 3 Mar 2014 07:34:44 +0000 (07:34 +0000)]
binfmt_elf: add ELF_HWCAP2 to compat auxv entries
Add ELF_HWCAP2 to the set of auxv entries that is passed to
a 32-bit ELF program running in 32-bit compat mode under a
64-bit kernel.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Steve Capper [Mon, 16 Dec 2013 21:04:36 +0000 (21:04 +0000)]
arm64: Add hwcaps for crypto and CRC32 extensions.
Advertise the optional cryptographic and CRC32 instructions to
user space where present. Several hwcap bits [3-7] are allocated.
Signed-off-by: Steve Capper <steve.capper@linaro.org>
[bit 2 is taken now so use bits 3-7 instead]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
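User space can test for these bits through the aux vector; a minimal sketch (the HWCAP_* values mirror the bits 3-7 allocation mentioned above, but real code should take them from the kernel's exported <asm/hwcap.h> rather than redefining them):

    #include <stdio.h>
    #include <sys/auxv.h>

    #define HWCAP_AES    (1 << 3)
    #define HWCAP_PMULL  (1 << 4)
    #define HWCAP_SHA1   (1 << 5)
    #define HWCAP_SHA2   (1 << 6)
    #define HWCAP_CRC32  (1 << 7)

    int main(void)
    {
        unsigned long hwcap = getauxval(AT_HWCAP);

        printf("aes:%d pmull:%d sha1:%d sha2:%d crc32:%d\n",
               !!(hwcap & HWCAP_AES), !!(hwcap & HWCAP_PMULL),
               !!(hwcap & HWCAP_SHA1), !!(hwcap & HWCAP_SHA2),
               !!(hwcap & HWCAP_CRC32));
        return 0;
    }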
Steve Capper [Wed, 18 Sep 2013 15:14:28 +0000 (16:14 +0100)]
arm64: Widen hwcap to be 64 bit
Under arm64 elf_hwcap is a 32 bit quantity, but it is stored in
a 64 bit auxiliary ELF field and glibc reads hwcap as 64 bit.
This patch widens elf_hwcap to be 64 bit.
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Greg Hackmann [Tue, 9 Sep 2014 19:14:40 +0000 (12:14 -0700)]
arm64: add HWCAP_EVTSTRM and associated hwcap refactoring
Take the hwcaps changes from 46efe547aca8498d51b64460c02366ae4032ca32 to
facilitate cherry-picking later hwcaps changes, while skipping the timer
changes that actually enable event streams for now. The timer changes
depend on some non-trivial changes made after 3.10, and can safely be
dropped: the kernel will just continue reporting that HWCAP_EVTSTRM is
not available.
Bug: 17431179
Change-Id: I41548846f8cd7ae8147a2b115cc0f84708e29552
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Greg Hackmann [Wed, 10 Sep 2014 00:36:05 +0000 (17:36 -0700)]
arm64: process: dump memory around registers when displaying regs
A port of 8608d7c4418c75841c562a90cddd9beae5798a48 to ARM64. Both the
original code and this port are limited to dumping kernel addresses, so
don't bother if the registers are from a userspace process.
Change-Id: Idc76804c54efaaeb70311cbb500c54db6dac4525
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Todd Poynor [Sat, 6 Sep 2014 01:27:38 +0000 (18:27 -0700)]
cpufreq: interactive: make common_tunables static
From: Cylen Yao <cylen.yao@mediatek.com>
common_tunables should be static.
Change-Id: I502ee3062bece5082fea7861eff2f6237e25cede
Signed-off-by: Todd Poynor <toddpoynor@google.com>
David 'Digit' Turner [Wed, 3 Sep 2014 10:04:11 +0000 (12:04 +0200)]
Merge remote-tracking branch 'origin/android-3.10' into android-goldfish-3.10
Mostly to fix a bug where the kernel would not report the stack pointer
of 32-bit processes when running on an ARM64 kernel.
Conflicts:
arch/arm64/Kconfig
arch/arm64/include/asm/ptrace.h
arch/arm64/kernel/fpsimd.c
Signed-off-by: David 'Digit' Turner <digit@android.com>
JP Abgrall [Thu, 4 Sep 2014 00:36:44 +0000 (17:36 -0700)]
android: base-cfg: enforce the needed XFRM_MODE_TUNNEL (for VPN)
Change-Id: I587023d56877d32806079676790751155c768982
Signed-off-by: JP Abgrall <jpa@google.com>
Heiko Carstens [Wed, 2 Jul 2014 22:22:37 +0000 (15:22 -0700)]
fs/seq_file: fallback to vmalloc allocation
There are a couple of seq_files which use the single_open() interface.
This interface requires that the whole output must fit into a single
buffer.
E.g. for /proc/stat allocation failures have been observed because an
order-4 memory allocation failed due to memory fragmentation. In such
situations reading /proc/stat is not possible anymore.
Therefore change the seq_file code to fallback to vmalloc allocations
which will usually result in a couple of order-0 allocations and hence
also work if memory is fragmented.
For reference a call trace where reading from /proc/stat failed:
sadc: page allocation failure: order:4, mode:0x1040d0
CPU: 1 PID: 192063 Comm: sadc Not tainted 3.10.0-123.el7.s390x #1
[...]
Call Trace:
show_stack+0x6c/0xe8
warn_alloc_failed+0xd6/0x138
__alloc_pages_nodemask+0x9da/0xb68
__get_free_pages+0x2e/0x58
kmalloc_order_trace+0x44/0xc0
stat_open+0x5a/0xd8
proc_reg_open+0x8a/0x140
do_dentry_open+0x1bc/0x2c8
finish_open+0x46/0x60
do_last+0x382/0x10d0
path_openat+0xc8/0x4f8
do_filp_open+0x46/0xa8
do_sys_open+0x114/0x1f0
sysc_tracego+0x14/0x1a
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Tested-by: David Rientjes <rientjes@google.com>
Cc: Ian Kent <raven@themaw.net>
Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com>
Cc: Thorsten Diehl <thorsten.diehl@de.ibm.com>
Cc: Andrea Righi <andrea@betterlinux.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Stefan Bader <stefan.bader@canonical.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Conflicts:
fs/seq_file.c
Change-Id: I009080dd017b020ffd5e812e5b472bdb8349217a
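The fallback pattern described above amounts to a helper along these lines (a sketch, not the exact fs/seq_file.c hunk; __GFP_NOWARN suppresses the allocation-failure splat the kmalloc attempt would otherwise print):

    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    static void *seq_buf_alloc(unsigned long size)
    {
        void *buf;

        /* Try the cheap physically-contiguous allocation first ... */
        buf = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
        /* ... and fall back to vmalloc() when memory is fragmented. */
        if (!buf && size > PAGE_SIZE)
            buf = vmalloc(size);
        return buf;
    }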
Al Viro [Tue, 6 May 2014 18:02:53 +0000 (14:02 -0400)]
nick kvfree() from apparmor
too many places open-code it
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Conflicts:
mm/util.c
security/apparmor/include/apparmor.h
Change-Id: Ie8602e0199282dc462921cb7217158d1998853b0
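The helper being lifted out of apparmor is essentially the following (sketch; the point is that the caller no longer has to remember which allocator produced the pointer):

    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    void kvfree(const void *addr)
    {
        if (is_vmalloc_addr(addr))
            vfree(addr);
        else
            kfree(addr);
    }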
Al Viro [Mon, 6 May 2013 02:10:35 +0000 (03:10 +0100)]
apparmor: no need to delay vfree()
vfree() can be called from interrupt contexts now
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: John Johansen <john.johansen@canonical.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Leo Yan [Mon, 1 Sep 2014 03:09:51 +0000 (11:09 +0800)]
arm64: fix bug for reloading FPSIMD state after cpu power off
Now arm64 defers reloading FPSIMD state, but this optimization also
introduces the bug after cpu resume back from low power mode.
The reason is after the cpu has been powered off, s/w need set the
cpu's fpsimd_last_state to NULL so that it will force to reload
FPSIMD state for the thread, otherwise there has the chance to meet
the condition for both the task's fpsimd_state.cpu field contains the
id of the current cpu, and the cpu's fpsimd_last_state per-cpu variable
points to the task's fpsimd_state, so finally kernel will skip to reload
the context during it return back to userland.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Leo Yan <leoy@marvell.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Catalin Marinas [Fri, 29 Aug 2014 15:08:02 +0000 (16:08 +0100)]
arm64: Add brackets around user_stack_pointer()
Commit 5f888a1d33 (ARM64: perf: support dwarf unwinding in compat mode)
changes user_stack_pointer() to return the compat SP for 32-bit tasks
but without brackets around the whole definition, with possible issues
on the call sites (noticed with a subsequent fix for KSTK_ESP).
Fixes: 5f888a1d33c4 (ARM64: perf: support dwarf unwinding in compat mode)
Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
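The hazard is ordinary macro precedence; a stand-alone illustration (BAD_SP/GOOD_SP are made-up names, the real macro is a similar conditional expression):

    #include <stdio.h>

    #define BAD_SP(x)   (x) ? 100 : 200     /* no outer brackets */
    #define GOOD_SP(x)  ((x) ? 100 : 200)   /* whole definition bracketed */

    int main(void)
    {
        /* BAD_SP(1) + 8 parses as (1) ? 100 : (200 + 8) == 100,
         * while the intent was ((1) ? 100 : 200) + 8 == 108. */
        printf("%d vs %d\n", BAD_SP(1) + 8, GOOD_SP(1) + 8);
        return 0;
    }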
Will Deacon [Fri, 29 Aug 2014 15:11:10 +0000 (16:11 +0100)]
arm64: report correct stack pointer in KSTK_ESP for compat tasks
The KSTK_ESP macro is used to determine the user stack pointer for a
given task. In particular, this is used to report the '[stack]' VMA
in /proc/self/maps, which is used by Android to determine the stack
location for children of the main thread.
This patch fixes the macro to use user_stack_pointer instead of directly
returning sp. This means that we report w13 instead of sp, since the
former is used as the stack pointer when executing in AArch32 state.
Cc: <stable@vger.kernel.org>
Reported-by: Serban Constantinescu <Serban.Constantinescu@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Catalin Marinas [Thu, 10 Jul 2014 10:37:40 +0000 (11:37 +0100)]
arm64: Cast KSTK_(EIP|ESP) to unsigned long
This is for similarity with thread_saved_(pc|sp) and to avoid some
compiler warnings in the audit code.
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Jean Pihet [Mon, 3 Feb 2014 18:18:29 +0000 (19:18 +0100)]
ARM64: perf: support dwarf unwinding in compat mode
Add support for unwinding using the dwarf information in compat
mode. Using the correct user stack pointer allows perf to record
the frames correctly in the native and compat modes.
Note that although the dwarf frame unwinding works ok using
libunwind in native mode (on ARMv7 & ARMv8), some changes are
required to the libunwind code for the compat mode. Those changes
are posted separately on the libunwind mailing list.
Tested on ARMv8 platform with v8 and compat v7 binaries, the latter
are statically built.
Signed-off-by: Jean Pihet <jean.pihet@linaro.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Greg Hackmann [Thu, 28 Aug 2014 21:00:10 +0000 (14:00 -0700)]
arm64: check for upper PAGE_SHIFT bits in pfn_valid()
pfn_valid() returns a false positive when the lower (64 - PAGE_SHIFT)
bits match a valid pfn but some of the upper bits are set. This caused
a kernel panic in kpageflags_read() when a userspace utility parsed
/proc/*/pagemap, neglected to discard the upper flag bits, and tried to
lseek()+read() from the corresponding offset in /proc/kpageflags.
A valid pfn will never have the upper PAGE_SHIFT bits set, so simply
check for this before passing the pfn to memblock_is_memory().
Change-Id: Ief5d8cd4dd93cbecd545a634a8d5885865cb5970
Signed-off-by: Greg Hackmann <ghackmann@google.com>
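A sketch of the check being described (illustrative; the in-tree function may differ in detail):

    #include <linux/memblock.h>
    #include <linux/types.h>
    #include <asm/page.h>

    int pfn_valid(unsigned long pfn)
    {
        phys_addr_t addr = pfn << PAGE_SHIFT;

        /* If any of the upper PAGE_SHIFT bits of pfn were set, the shift
         * above dropped them and the round trip no longer matches. */
        if ((addr >> PAGE_SHIFT) != pfn)
            return 0;
        return memblock_is_memory(addr);
    }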
Ard Biesheuvel [Tue, 24 Sep 2013 07:28:03 +0000 (09:28 +0200)]
arm64: pull in <asm/simd.h> from asm-generic
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Ard Biesheuvel [Tue, 4 Mar 2014 01:10:04 +0000 (01:10 +0000)]
arm64: enable generic CPU feature modalias matching for this architecture
This enables support for the generic CPU feature modalias implementation that
wires up optional CPU features to udev based module autoprobing.
A file <asm/cpufeature.h> is provided that maps CPU feature numbers to
elf_hwcap bits, which is the standard way on arm64 to advertise optional CPU
features both internally and to user space.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[catalin.marinas@arm.com: removed unnecessary "!!"]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Conflicts:
arch/arm64/Kconfig
Change-Id: Ief16b3197cd0564d8cf8aa82e9614bcda6399fe5
Ard Biesheuvel [Tue, 4 Mar 2014 05:28:39 +0000 (13:28 +0800)]
crypto: allow blkcipher walks over AEAD data
This adds the function blkcipher_aead_walk_virt_block, which allows the caller
to use the blkcipher walk API to handle the input and output scatterlists.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Tue, 4 Mar 2014 05:28:38 +0000 (13:28 +0800)]
crypto: remove direct blkcipher_walk dependency on transform
In order to allow other uses of the blkcipher walk API than the blkcipher
algos themselves, this patch copies some of the transform data members to the
walk struct so the transform is only accessed at walk init time.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Mon, 24 Feb 2014 14:26:29 +0000 (15:26 +0100)]
arm64: add support for kernel mode NEON in interrupt context
This patch modifies kernel_neon_begin() and kernel_neon_end(), so
they may be called from any context. To address the case where only
a couple of registers are needed, kernel_neon_begin_partial(u32) is
introduced which takes as a parameter the number of bottom 'n' NEON
q-registers required. To mark the end of such a partial section, the
regular kernel_neon_end() should be used.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Conflicts:
arch/arm64/include/asm/neon.h
Change-Id: Ifc7c6aa77e2ab8dd98bb9975cccab54e09693ab7
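A hedged sketch of how a caller might use the partial variant described above (the NEON core routine is a placeholder, not a real kernel symbol):

    #include <asm/neon.h>

    void my_cipher_core_neon(void *dst, const void *src, int blocks);  /* assumed asm helper */

    void my_cipher_transform(void *dst, const void *src, int blocks)
    {
        kernel_neon_begin_partial(4);           /* only q0-q3 are needed */
        my_cipher_core_neon(dst, src, blocks);
        kernel_neon_end();
    }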
Ard Biesheuvel [Thu, 8 May 2014 09:20:23 +0000 (11:20 +0200)]
arm64: defer reloading a task's FPSIMD state to userland resume
If a task gets scheduled out and back in again and nothing has touched
its FPSIMD state in the mean time, there is really no reason to reload
it from memory. Similarly, repeated calls to kernel_neon_begin() and
kernel_neon_end() will preserve and restore the FPSIMD state every time.
This patch defers the FPSIMD state restore to the last possible moment,
i.e., right before the task returns to userland. If a task does not return to
userland at all (for any reason), the existing FPSIMD state is preserved
and may be reused by the owning task if it gets scheduled in again on the
same CPU.
This patch adds two more functions to abstract away from straight FPSIMD
register file saves and restores:
- fpsimd_restore_current_state -> ensure current's FPSIMD state is loaded
- fpsimd_flush_task_state -> invalidate live copies of a task's FPSIMD state
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Conflicts:
arch/arm64/kernel/fpsimd.c
Change-Id: Ib1c0d8d0afb3c248cd4d060eb35877530dd92fdc
Ard Biesheuvel [Mon, 24 Feb 2014 14:26:27 +0000 (15:26 +0100)]
arm64: add abstractions for FPSIMD state manipulation
There are two tacit assumptions in the FPSIMD handling code that will no longer
hold after the next patch that optimizes away some FPSIMD state restores:
- the FPSIMD registers of this CPU contain the userland FPSIMD state of
task 'current';
- when switching to a task, its FPSIMD state will always be restored from
memory.
This patch adds the following functions to abstract away from straight FPSIMD
register file saves and restores:
- fpsimd_preserve_current_state -> ensure current's FPSIMD state is saved
- fpsimd_update_current_state -> replace current's FPSIMD state
Where necessary, the signal handling and fork code are updated to use the above
wrappers instead of poking into the FPSIMD registers directly.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Conflicts:
arch/arm64/kernel/fpsimd.c
Change-Id: I53ae7082427cb1c5cc32e1f2ddbd4218115601ba
Ard Biesheuvel [Sat, 8 Feb 2014 12:34:09 +0000 (13:34 +0100)]
cpu: add generic support for CPU feature based module autoloading
This patch adds support for advertising optional CPU features over udev
using the modalias, and for declaring compatibility with/dependency upon
such a feature in a module.
The mapping between feature numbers and actual features should be provided
by the architecture in a file called <asm/cpufeature.h> which exports the
following functions/macros:
- cpu_feature(FEAT), a preprocessor macro that maps token FEAT to a
numeric index;
- bool cpu_have_feature(n), returning whether this CPU has support for
feature #n;
- MAX_CPU_FEATURES, an upper bound for 'n' in the previous function.
The feature can then be enabled by setting CONFIG_GENERIC_CPU_AUTOPROBE
for the architecture.
For instance, a module that registers its module init function using
module_cpu_feature_match(FEAT_X, module_init_function)
will be probed automatically when the CPU's support for the 'FEAT_X'
feature is advertised over udev, and will only allow the module to be
loaded by hand if the 'FEAT_X' feature is supported.
Change-Id: Icae8e3ff347235fc72a5b41279f0afdb34fb161a
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
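A minimal module sketch using the interface described above; FEAT_X is the placeholder token from the commit text, not a real feature name:

    #include <linux/module.h>
    #include <asm/cpufeature.h>

    static int __init featx_mod_init(void)
    {
        pr_info("FEAT_X present, module auto-loaded\n");
        return 0;
    }

    static void __exit featx_mod_exit(void)
    {
    }

    module_cpu_feature_match(FEAT_X, featx_mod_init);
    module_exit(featx_mod_exit);
    MODULE_LICENSE("GPL");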
kbuild test robot [Tue, 24 Sep 2013 00:21:29 +0000 (08:21 +0800)]
crypto: ablk_helper - Replace memcpy with struct assignment
tree: git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
head: 48e6dc1b2a1ad8186d48968d5018912bdacac744
commit: a62b01cd6cc1feb5e80d64d6937c291473ed82cb [20/24] crypto: create generic version of ablk_helper
coccinelle warnings: (new ones prefixed by >>)
>> crypto/ablk_helper.c:97:2-8: Replace memcpy with struct assignment
>> crypto/ablk_helper.c:78:2-8: Replace memcpy with struct assignment
Please consider folding the attached diff :-)
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Fri, 20 Sep 2013 07:55:40 +0000 (09:55 +0200)]
crypto: create generic version of ablk_helper
Create a generic version of ablk_helper so it can be reused
by other architectures.
Acked-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Mikulas Patocka [Fri, 25 Jul 2014 23:40:20 +0000 (19:40 -0400)]
crypto: arm64-aes - fix encryption of unaligned data
cryptsetup fails on arm64 when using kernel encryption via AF_ALG socket.
See https://bugzilla.redhat.com/show_bug.cgi?id=1122937
The bug is caused by incorrect handling of unaligned data in
arch/arm64/crypto/aes-glue.c. Cryptsetup creates a buffer that is aligned
on 8 bytes, but not on 16 bytes. It opens AF_ALG socket and uses the
socket to encrypt data in the buffer. The arm64 crypto accelerator causes
data corruption or crashes in the scatterwalk_pagedone.
This patch fixes the bug by passing the residue bytes that were not
processed as the last parameter to blkcipher_walk_done.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Andreas Schwab [Thu, 24 Jul 2014 16:03:26 +0000 (17:03 +0100)]
arm64/crypto: fix makefile rule for aes-glue-%.o
This fixes the following build failure when building with CONFIG_MODVERSIONS
enabled:
CC [M] arch/arm64/crypto/aes-glue-ce.o
ld: cannot find arch/arm64/crypto/aes-glue-ce.o: No such file or directory
make[1]: *** [arch/arm64/crypto/aes-ce-blk.o] Error 1
make: *** [arch/arm64/crypto] Error 2
The $(obj)/aes-glue-%.o rule only creates $(obj)/.tmp_aes-glue-ce.o; it
should use if_changed_rule instead of if_changed_dep.
Signed-off-by: Andreas Schwab <schwab@suse.de>
[ardb: mention CONFIG_MODVERSIONS in commit log]
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ard Biesheuvel [Mon, 16 Jun 2014 10:02:16 +0000 (11:02 +0100)]
arm64/crypto: improve performance of GHASH algorithm
This patch modifies the GHASH secure hash implementation to switch to a
faster, polynomial multiplication based reduction instead of one that uses
shifts and rotates.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ard Biesheuvel [Mon, 16 Jun 2014 10:02:15 +0000 (11:02 +0100)]
arm64/crypto: fix data corruption bug in GHASH algorithm
This fixes a bug in the GHASH algorithm that resulted in the calculated hash
being incorrect if the input is presented in chunks whose size is not a
multiple of 16 bytes.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Fixes: fdd2389457b2 ("arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions")
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Ard Biesheuvel [Fri, 21 Mar 2014 09:19:17 +0000 (10:19 +0100)]
arm64/crypto: AES-ECB/CBC/CTR/XTS using ARMv8 NEON and Crypto Extensions
This adds ARMv8 implementations of AES in ECB, CBC, CTR and XTS modes,
both for ARMv8 with Crypto Extensions and for plain ARMv8 NEON.
The Crypto Extensions version can only run on ARMv8 implementations that
have support for these optional extensions.
The plain NEON version is a table based yet time invariant implementation.
All S-box substitutions are performed in parallel, leveraging the wide range
of ARMv8's tbl/tbx instructions, and the huge NEON register file, which can
comfortably hold the entire S-box and still have room to spare for doing the
actual computations.
The key expansion routines were borrowed from aes_generic.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Mon, 10 Feb 2014 10:26:29 +0000 (11:26 +0100)]
arm64/crypto: AES in CCM mode using ARMv8 Crypto Extensions
This patch adds support for the AES-CCM encryption algorithm for CPUs that
have support for the AES part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Wed, 5 Feb 2014 17:13:38 +0000 (18:13 +0100)]
arm64/crypto: AES using ARMv8 Crypto Extensions
This patch adds support for the AES symmetric encryption algorithm for CPUs
that have support for the AES part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Wed, 26 Mar 2014 19:53:05 +0000 (20:53 +0100)]
arm64/crypto: GHASH secure hash using ARMv8 Crypto Extensions
This is a port to ARMv8 (Crypto Extensions) of the Intel implementation of the
GHASH Secure Hash (used in the Galois/Counter chaining mode). It relies on the
optional PMULL/PMULL2 instructions (polynomial multiply long, what Intel
calls carry-less multiplication).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Ard Biesheuvel [Thu, 20 Mar 2014 14:35:40 +0000 (15:35 +0100)]
arm64/crypto: SHA-224/SHA-256 using ARMv8 Crypto Extensions
This patch adds support for the SHA-224 and SHA-256 Secure Hash Algorithms
for CPUs that have support for the SHA-2 part of the ARM v8 Crypto Extensions.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
JP Abgrall [Thu, 28 Aug 2014 02:07:30 +0000 (19:07 -0700)]
arm64/crypto: SHA-1 using ARMv8 Crypto Extensions
This patch adds support for the SHA-1 Secure Hash Algorithm for CPUs that
have support for the SHA-1 part of the ARM v8 Crypto Extensions.
Change-Id: I29fafd308e17aff6e0d59938c106fae6ad7fe78e
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Conflicts:
arch/arm64/Makefile
JP Abgrall [Thu, 28 Aug 2014 03:30:29 +0000 (20:30 -0700)]
arm: fixup NR_syscalls to accommodate the new seccomp syscall
This belongs in commit 83f1ccba87b06575966b65352db565c363af7bcf
https://android-review.googlesource.com/#/c/104520
Change-Id: Id5037cbebac9b86c863da79c3b8729e627e65f8e
Signed-off-by: JP Abgrall <jpa@google.com>
Kees Cook [Thu, 5 Jun 2014 07:23:17 +0000 (00:23 -0700)]
seccomp: implement SECCOMP_FILTER_FLAG_TSYNC
Applying restrictive seccomp filter programs to large or diverse
codebases often requires handling threads which may be started early in
the process lifetime (e.g., by code that is linked in). While it is
possible to apply permissive programs prior to process start up, it is
difficult to further restrict the kernel ABI to those threads after that
point.
This change adds a new seccomp syscall flag to SECCOMP_SET_MODE_FILTER for
synchronizing thread group seccomp filters at filter installation time.
When calling seccomp(SECCOMP_SET_MODE_FILTER, SECCOMP_FILTER_FLAG_TSYNC,
filter) an attempt will be made to synchronize all threads in current's
threadgroup to its new seccomp filter program. This is possible iff all
threads are using a filter that is an ancestor to the filter current is
attempting to synchronize to. NULL filters (where the task is running as
SECCOMP_MODE_NONE) are also treated as ancestors allowing threads to be
transitioned into SECCOMP_MODE_FILTER. If prctl(PR_SET_NO_NEW_PRIVS,
...) has been set on the calling thread, no_new_privs will be set for
all synchronized threads too. On success, 0 is returned. On failure,
the pid of one of the failing threads will be returned and no filters
will have been applied.
The race conditions against another thread are:
- requesting TSYNC (already handled by sighand lock)
- performing a clone (already handled by sighand lock)
- changing its filter (already handled by sighand lock)
- calling exec (handled by cred_guard_mutex)
The clone case is assisted by the fact that new threads will have their
seccomp state duplicated from their parent before appearing on the tasklist.
Holding cred_guard_mutex means that seccomp filters cannot be assigned
while in the middle of another thread's exec (potentially bypassing
no_new_privs or similar). The call to de_thread() may kill threads waiting
for the mutex.
Changes across threads to the filter pointer includes a barrier.
Based on patches by Will Drewry.
Suggested-by: Julien Tinnes <jln@chromium.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
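A user-space sketch of the call described above, assuming uapi headers new enough to provide __NR_seccomp, SECCOMP_SET_MODE_FILTER and SECCOMP_FILTER_FLAG_TSYNC (the filter here is a trivial allow-everything program, only meant to show the syscall shape):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>

    int main(void)
    {
        struct sock_filter insns[] = {
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len = sizeof(insns) / sizeof(insns[0]),
            .filter = insns,
        };

        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
            perror("prctl(PR_SET_NO_NEW_PRIVS)");

        /* Returns 0 on success; with TSYNC, a failure reports the id of a
         * thread that could not be synchronized. */
        long ret = syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER,
                           SECCOMP_FILTER_FLAG_TSYNC, &prog);
        printf("seccomp(SET_MODE_FILTER, TSYNC) = %ld\n", ret);
        return 0;
    }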
Kees Cook [Fri, 27 Jun 2014 22:01:35 +0000 (15:01 -0700)]
seccomp: allow mode setting across threads
This changes the mode setting helper to allow threads to change the
seccomp mode from another thread. We must maintain barriers to keep
TIF_SECCOMP synchronized with the rest of the seccomp state.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Conflicts:
kernel/seccomp.c
Change-Id: I091ffa55d8f4e83ff02558a55e2b4dc76ac26905
Kees Cook [Fri, 27 Jun 2014 22:18:48 +0000 (15:18 -0700)]
seccomp: introduce writer locking
Normally, task_struct.seccomp.filter is only ever read or modified by
the task that owns it (current). This property aids in fast access
during system call filtering as read access is lockless.
Updating the pointer from another task, however, opens up race
conditions. To allow cross-thread filter pointer updates, writes to the
seccomp fields are now protected by the sighand spinlock (which is shared
by all threads in the thread group). Read access remains lockless because
pointer updates themselves are atomic. However, writes (or cloning)
often entail additional checking (like maximum instruction counts)
which require locking to perform safely.
In the case of cloning threads, the child is invisible to the system
until it enters the task list. To make sure a child can't be cloned from
a thread and left in a prior state, seccomp duplication is additionally
moved under the sighand lock. Then parent and child are certain to have
the same seccomp state when they exit the lock.
Based on patches by Will Drewry and David Drysdale.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Conflicts:
kernel/fork.c
Change-Id: Ie01ece43b610867013f7d0e0a2a7be0b9077630f
Kees Cook [Fri, 27 Jun 2014 22:16:33 +0000 (15:16 -0700)]
seccomp: split filter prep from check and apply
In preparation for adding seccomp locking, move filter creation away
from where it is checked and applied. This will allow for locking where
no memory allocation is happening. The validation, filter attachment,
and seccomp mode setting can all happen under the future locks.
For extreme defensiveness, I've added a BUG_ON check for the calculated
size of the buffer allocation in case BPF_MAXINSN ever changes, which
shouldn't ever happen. The compiler should actually optimize out this
check since the test above it makes it impossible.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Conflicts:
kernel/seccomp.c
Change-Id: I8d89f80a5b4f2826d90474dcea441c41f0af6594
Kees Cook [Tue, 10 Jun 2014 22:45:09 +0000 (15:45 -0700)]
MIPS: add seccomp syscall
Wires up the new seccomp syscall.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Conflicts:
arch/mips/include/uapi/asm/unistd.h
arch/mips/kernel/scall32-o32.S
arch/mips/kernel/scall64-64.S
arch/mips/kernel/scall64-n32.S
arch/mips/kernel/scall64-o32.S
Change-Id: I7031bdbec7c90292aeb7e255c73cb36e6ec43af2
Kees Cook [Tue, 10 Jun 2014 22:40:23 +0000 (15:40 -0700)]
ARM: add seccomp syscall
Wires up the new seccomp syscall.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Conflicts:
arch/arm/include/uapi/asm/unistd.h
arch/arm/kernel/calls.S
Change-Id: Ia519993681f70bd38699a73d8897ae9b44b3f0af
Kees Cook [Wed, 25 Jun 2014 23:08:24 +0000 (16:08 -0700)]
seccomp: add "seccomp" syscall
This adds the new "seccomp" syscall with both an "operation" and "flags"
parameter for future expansion. The third argument is a pointer value,
used with the SECCOMP_SET_MODE_FILTER operation. Currently, flags must
be 0. This is functionally equivalent to prctl(PR_SET_SECCOMP, ...).
In addition to the TSYNC flag later in this patch series, there is a
non-zero chance that this syscall could be used for configuring a fixed
argument area for seccomp-tracer-aware processes to pass syscall arguments
in the future. Hence, the use of "seccomp" not simply "seccomp_add_filter"
for this syscall. Additionally, this syscall uses operation, flags,
and user pointer for arguments because strictly passing arguments via
a user pointer would mean seccomp itself would be unable to trivially
filter the seccomp syscall itself.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Conflicts:
arch/x86/syscalls/syscall_32.tbl
arch/x86/syscalls/syscall_64.tbl
include/uapi/asm-generic/unistd.h
kernel/seccomp.c
Change-Id: Id7a365079829fd9164315dec75d6ee415c29b176
Kees Cook [Wed, 25 Jun 2014 22:55:25 +0000 (15:55 -0700)]
seccomp: split mode setting routines
Separates the two mode setting paths to make things more readable with
fewer #ifdefs within function bodies.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Kees Cook [Wed, 25 Jun 2014 22:38:02 +0000 (15:38 -0700)]
seccomp: extract check/assign mode helpers
To support splitting mode 1 from mode 2, extract the mode checking and
assignment logic into common functions.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Kees Cook [Wed, 21 May 2014 22:02:11 +0000 (15:02 -0700)]
seccomp: create internal mode-setting function
In preparation for having other callers of the seccomp mode setting
logic, split the prctl entry point away from the core logic that performs
seccomp mode setting.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Kees Cook [Fri, 18 Jul 2014 18:28:33 +0000 (11:28 -0700)]
MAINTAINERS: create seccomp entry
Add myself as seccomp maintainer.
Suggested-by: James Morris <jmorris@namei.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
Kees Cook [Wed, 16 Apr 2014 17:54:34 +0000 (10:54 -0700)]
seccomp: fix memory leak on filter attach
This sets the correct error code when final filter memory is unavailable,
and frees the raw filter no matter what.
unreferenced object 0xffff8800d6ea4000 (size 512):
comm "sshd", pid 278, jiffies
4294898315 (age 46.653s)
hex dump (first 32 bytes):
21 00 00 00 04 00 00 00 15 00 01 00 3e 00 00 c0 !...........>...
06 00 00 00 00 00 00 00 21 00 00 00 00 00 00 00 ........!.......
backtrace:
[<
ffffffff8151414e>] kmemleak_alloc+0x4e/0xb0
[<
ffffffff811a3a40>] __kmalloc+0x280/0x320
[<
ffffffff8110842e>] prctl_set_seccomp+0x11e/0x3b0
[<
ffffffff8107bb6b>] SyS_prctl+0x3bb/0x4a0
[<
ffffffff8152ef2d>] system_call_fastpath+0x1a/0x1f
[<
ffffffffffffffff>] 0xffffffffffffffff
Reported-by: Masami Ichikawa <masami256@gmail.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Masami Ichikawa <masami256@gmail.com>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflicts:
kernel/seccomp.c
Change-Id: Ide3c27bf378397f8faf4218e75c31e4b8bc43c4c
Kees Cook [Fri, 8 Nov 2013 23:51:56 +0000 (00:51 +0100)]
ARM: 7888/1: seccomp: not compatible with ARM OABI
Make sure that seccomp filter won't be built when ARM OABI is in use,
since there is work needed to distinguish calling conventions. Until
that is done (which is likely never since OABI is deprecated), make
sure seccomp filter is unavailable in the OABI world.
Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Will Drewry <wad@chromium.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
xerox_lin [Thu, 14 Aug 2014 06:48:44 +0000 (14:48 +0800)]
usb: Add support for rndis uplink aggregation
The RNDIS protocol supports data aggregation on the uplink, which helps
reduce MIPS by reducing the number of interrupts on the device.
Throughput also improves by 20-30%. Aggregation is disabled by setting
the aggregation packet size to 1. To improve UL throughput, UL
aggregation is set to 3 RNDIS packets by default. It can be configured
via the module parameter rndis_ul_max_pkt_per_xfer.
Change-Id: I0b62a21a5c3ceb6b04933d0d6da33301dbafe493
Signed-off-by: Vamsi Krishna <vskrishn@codeaurora.org>
Signed-off-by: Xerox Lin <xerox_lin@htc.com>
Dmitry Shmidt [Fri, 22 Aug 2014 21:40:18 +0000 (14:40 -0700)]
Add flags parameter to get_country_code template
Change-Id: Ic3f173db144a301ea104f544fc8ec723241c1d59
Signed-off-by: Dmitry Shmidt <dimitrysh@google.com>
Colin Cross [Tue, 5 Aug 2014 19:05:17 +0000 (12:05 -0700)]
mm: fix prctl_set_vma_anon_name
prctl_set_vma_anon_name could attempt to set the name across
two vmas at the same time due to a typo, which might corrupt
the vma list. Fix it to use tmp instead of end to limit
the name setting to a single vma at a time.
Change-Id: Ie32d8ddb0fd547efbeedd6528acdab5ca5b308b4
Reported-by: Jed Davis <jld@mozilla.com>
Signed-off-by: Colin Cross <ccross@android.com>
xerox_lin [Mon, 18 Aug 2014 13:54:23 +0000 (21:54 +0800)]
USB: rndis: Free the rndis response queue during REMOTE_NDIS_RESET_MSG
When an rndis data transfer is in progress, some Windows 7 host PCs do
not send the GET_ENCAPSULATED_RESPONSE command to receive the response
for the previously processed SEND_ENCAPSULATED_COMMAND.
The rndis function driver appends each response for a
SEND_ENCAPSULATED_COMMAND to a queue. When the above process gets
corrupted, the host sends a REMOTE_NDIS_RESET_MSG command to do a
soft-reset.
Because the rndis response queue is not freed, the previous response is
sent as part of this REMOTE_NDIS_RESET_MSG's reset response and the host
blocks any further rndis transfers.
Hence, free the rndis response queue as part of this soft-reset so that
the current response for REMOTE_NDIS_RESET_MSG is sent properly in the
response command.
Change-Id: I8eff3849db452fe01b7d1fe4140ef1f1ad3f4fd4
Signed-off-by: Rajkumar Raghupathy <raghup@codeaurora.org>
Signed-off-by: Xerox Lin <xerox_lin@htc.com>
Maxime Bizon [Thu, 29 Aug 2013 18:28:13 +0000 (20:28 +0200)]
firmware loader: fix pending_fw_head list corruption
Got the following oops just before reboot:
Unable to handle kernel NULL pointer dereference at virtual address 00000000
[<8028d300>] (__list_del_entry+0x44/0xac)
[<802e3320>] (__fw_load_abort.part.13+0x1c/0x50)
[<802e337c>] (fw_shutdown_notify+0x28/0x50)
[<80034f80>] (notifier_call_chain.isra.1+0x5c/0x9c)
[<800350ec>] (__blocking_notifier_call_chain+0x44/0x58)
[<80035114>] (blocking_notifier_call_chain+0x14/0x18)
[<80035d64>] (kernel_restart_prepare+0x14/0x38)
[<80035d94>] (kernel_restart+0xc/0x50)
The following race condition triggers here:
_request_firmware_load()
device_create_file(...)
kobject_uevent(...)
(schedule)
(resume)
firmware_loading_store(1)
firmware_loading_store(0)
list_del_init(&buf->pending_list)
(schedule)
(resume)
list_add(&buf->pending_list, &pending_fw_head);
wait_for_completion(&buf->completion);
causing an oops later when walking pending_list after the firmware has
been released.
The proposed fix is to move the list_add() before sysfs attribute
creation.
Signed-off-by: Maxime Bizon <mbizon@freebox.fr>
Acked-by: Ming Lei <ming.lei@canonical.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Wed, 22 May 2013 16:28:38 +0000 (18:28 +0200)]
firmware: Avoid deadlock of usermodehelper lock at shutdown
When a system goes to reboot/shutdown, it tries to disable the
usermode helper via usermodehelper_disable(). This might be blocked
when a driver tries to load a firmware beforehand and it's stuck by
some reason. For example, dell_rbu driver loads the firmware in
non-hotplug mode and waits for user-space clearing the loading sysfs
flag. If user-space doesn't clear the flag, it waits forever, thus
blocks the reboot, too.
As a workaround, in this patch, the firmware class driver registers a
reboot notifier so that it can abort all pending f/w bufs before
issuing usermodehelper_disable().
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Acked-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
drivers/base/firmware_class.c
Change-Id: I7ff6c198cd34090e55845b9d4035b1e5dc86226b
Laura Abbott [Fri, 24 Jan 2014 23:19:49 +0000 (15:19 -0800)]
staging: android: ashmem: Avoid deadlock with mmap/shrink
Both ashmem_mmap and ashmem_shrink take the ashmem_lock. It may
be possible for ashmem_mmap to invoke ashmem_shrink:
-000|mutex_lock(lock = 0x0)
-001|ashmem_shrink(?, sc = 0x0) <--- try to take ashmem_mutex again
-002|shrink_slab(shrink = 0xDA5F1CC0, nr_pages_scanned = 0, lru_pages = 124)
-003|try_to_free_pages(zonelist = 0x0, ?, ?, ?)
-004|__alloc_pages_nodemask(gfp_mask = 21200, order = 1, zonelist = 0xC11D0940,
-005|new_slab(s = 0xE4841E80, ?, node = -1)
-006|__slab_alloc.isra.43.constprop.50(s = 0xE4841E80, gfpflags = 2148925462, ad
-007|kmem_cache_alloc(s = 0xE4841E80, gfpflags = 208)
-008|shmem_alloc_inode(?)
-009|alloc_inode(sb = 0xE480E800)
-010|new_inode_pseudo(?)
-011|new_inode(?)
-012|shmem_get_inode(sb = 0xE480E800, dir = 0x0, ?, dev = 0, flags = 187)
-013|shmem_file_setup(?, ?, flags = 187)
-014|ashmem_mmap(?, vma = 0xC5D64210) <---- Acquire ashmem_mutex
-015|mmap_region(file = 0xDF8E2C00, addr = 1772974080, len = 233472, flags = 57,
-016|sys_mmap_pgoff(addr = 0, len = 230400, prot = 3, flags = 1, fd = 157, pgoff
-017|ret_fast_syscall(asm)
-->|exception
-018|NUR:0x40097508(asm)
---|end of frame
Avoid this deadlock by using mutex_trylock in ashmem_shrink; if the mutex
is already held, do not attempt to shrink.
Change-Id: I222bbf55856d5849da813b730de0636c80966c8e
Reported-by: Matt Wagantall <mattw@codeaurora.org>
Reported-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
Reported-by: Osvaldo Banuelos <osvaldob@codeaurora.org>
Reported-by: Subbaraman Narayanamurthy <subbaram@codeaurora.org>
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
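A sketch of the guard described above (the lock name follows the trace; the real shrinker signature and LRU bookkeeping are omitted):

    #include <linux/mutex.h>

    static DEFINE_MUTEX(ashmem_mutex);

    /* If the mutex is already held, e.g. because we were entered from
     * ashmem_mmap() via direct reclaim, skip shrinking instead of
     * deadlocking. */
    static int ashmem_shrink_sketch(unsigned long nr_to_scan)
    {
        if (!mutex_trylock(&ashmem_mutex))
            return -1;

        /* ... scan and free LRU ranges here ... */

        mutex_unlock(&ashmem_mutex);
        return 0;
    }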
Greg Hackmann [Tue, 29 Jul 2014 19:34:22 +0000 (12:34 -0700)]
platform: goldfish: pipe: don't log when dropping PIPE_ERROR_AGAIN
On PIPE_ERROR_AGAIN, just stopping in the middle of a transfer and
returning the number of bytes actually handled is the right behavior.
Other errors should be returned on the next read() or write() call.
Continue logging those until we confirm nothing actually relies on the
existing (wrong) behavior of dropping errors on the floor.
Change-Id: I578b17ef54cd00d6003bbce1ecf9438e3ae3657e
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Greg Hackmann [Fri, 25 Jul 2014 21:02:57 +0000 (14:02 -0700)]
platform: goldfish: pipe: add devicetree bindings
Change-Id: I25fbcbbe3201df71d2f84d2f338b86bc3934c641
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Greg Hackmann [Tue, 29 Jul 2014 17:22:21 +0000 (10:22 -0700)]
arm64: ranchu: regenerate minimal defconfig
Change-Id: Ib6ab21d07d9b21eda9bf6be7198228fdc5d94f24
Signed-off-by: Greg Hackmann <ghackmann@google.com>
Christoffer Dall [Thu, 12 Jun 2014 12:07:25 +0000 (14:07 +0200)]
arm64: Add ranchu defconfig
Add a working defconfig for the aarch64 Android emulator machine named
'ranchu'. This is a simple machine based on QEMU's -M virt board model,
but with some goldfish and android-specific devices added. Also adds
other generic kernel features (such as CONFIG_RTC and CONFIG_IPTABLES)
to allow a full Android system to boot.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Christoffer Dall [Thu, 19 Jun 2014 14:24:04 +0000 (16:24 +0200)]
goldfish_fb: Set pixclock = 0
User space Android code identifies pixclock == 0 as a sign for emulation
and will set the frame rate to 60 fps when reading this value, which is
the desired outcome.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Christoffer Dall [Mon, 23 Jun 2014 10:53:39 +0000 (12:53 +0200)]
android_pipe: Pin pages to memory while copying and other cleanups
The existing code had a troubling TODO statement concerning the fact
that it just did a check if the page that the QEMU backend was going to
read from / write to was there before the call to the QEMU backend and
then relying on the fact that the page stayed around, even in a
preemptible SMP kernel. Obviously the page could go away or be
reassigned, and strange things may happen.
Further, writes were not tracked, so any use of COW or KSM-like features
would break completely. Probably that was never used by adbd (the only
current active user of the pipe), but could prove much more dangerous
for the GPU passthrough mechanism.
Instead, use get_user_pages() as the comment suggested, clean up the
error path and add the set_page_dirty() call on a successful read
operation.
Also clarify the count used to return from successful read/write calls
and use Linux style commentary in various places of the file.
Note: The "just ignore error and return whatever we read so far" error
handling is really quite horrific. I cannot change it without a more
careful study of all user space ABIs reliance on this 'feature'.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Conflicts:
drivers/platform/goldfish/goldfish_pipe.c
Change-Id: Ib4e5dd6612781072d617f69f67f3991b946dae36
Alex Bennée [Thu, 19 Jun 2014 14:05:27 +0000 (15:05 +0100)]
android_pipe: don't be clever with #define offsets
You just make it harder to figure out when commands are being used.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Conflicts:
drivers/platform/goldfish/goldfish_pipe.c
Change-Id: I0ed4f1cba98116d0b0c3936e8b232e45d9beaa7a
Lorenzo Pieralisi [Fri, 24 Jan 2014 10:56:19 +0000 (10:56 +0000)]
arm64: kernel: fix per-cpu offset restore on resume
The introduction of percpu offset optimisation through tpidr_el1 in:
Commit id: 7158627686f02319c50c8d9d78f75d4c8
"arm64: percpu: implement optimised pcpu access using tpidr_el1"
requires cpu_{suspend/resume} to restore the tpidr_el1 register upon resume
so that percpu variables can be addressed correctly when a CPU comes out
of reset from warm-boot.
This patch fixes cpu_{suspend}/{resume} tpidr_el1 restoration on resume, by
calling the set_my_cpu_offset C API, as it is done on primary and secondary
CPUs on cold boot, so that, even if the register used to store the percpu
offset is changed, the save and restore of general purpose registers does not
have to be updated.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Lorenzo Pieralisi [Fri, 10 Jan 2014 13:15:05 +0000 (13:15 +0000)]
arm64: kernel: restore HW breakpoint registers in cpu_suspend
When a CPU resumes from low-power, it restores HW breakpoint and
watchpoint slots through a CPU PM notifier. Since we want to enable
debugging as early as possible in the resume path, the mdscr content
is restored along the general purpose registers in the cpu_suspend API
and debug exceptions are reenabled when cpu_suspend returns. Since the
CPU PM notifier is run after a CPU has been resumed, we cannot expect
HW breakpoint registers to contain sane values till the notifier is run,
since the HW breakpoints registers content is unknown at reset; this means
that the CPU might run with debug exceptions enabled, mdscr restored but HW
breakpoint registers containing junk values that can trigger spurious
debug exceptions.
This patch fixes current HW breakpoints restore by moving the HW breakpoints
registers restoration to the cpu_suspend API, before the debug exceptions are
enabled. This way, as soon as the cpu_suspend function returns the
kernel can resume debugging with sane values in HW breakpoint registers.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Alex Shi [Wed, 5 Mar 2014 06:49:35 +0000 (14:49 +0800)]
arm64: hw_breakpoint compile error fixing
When backporting the commit
"arm64: kernel: implement HW breakpoints CPU PM notifier"
we get the following compile error. This is because on 3.10 we still
need to use __get_cpu_var instead of this_cpu_ptr. This patch
fixes it.
arch/arm64/kernel/hw_breakpoint.c: In function ‘hw_breakpoint_reset’:
arch/arm64/kernel/hw_breakpoint.c:864:15: error: cast specifies array type
arch/arm64/kernel/hw_breakpoint.c:873:15: error: cast specifies array type
make[1]: *** [arch/arm64/kernel/hw_breakpoint.o] Error 1
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Lorenzo Pieralisi [Mon, 5 Aug 2013 14:24:27 +0000 (15:24 +0100)]
arm64: kernel: add MPIDR_EL1 accessors macros
In order to simplify access to different affinity levels within the
MPIDR_EL1 register values, this patch implements some preprocessor
macros that allow to retrieve the MPIDR_EL1 affinity level value according
to the level passed as input parameter.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
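Illustrative only (the in-tree macros also handle Aff3 and differ in naming): affinity levels 0-2 occupy bits [7:0], [15:8] and [23:16] of MPIDR_EL1, so extraction is a shift and mask.

    #include <stdio.h>

    #define MPIDR_LEVEL_BITS  8
    #define MPIDR_LEVEL_MASK  ((1UL << MPIDR_LEVEL_BITS) - 1)
    #define MPIDR_AFFINITY_LEVEL(mpidr, level) \
            (((mpidr) >> ((level) * MPIDR_LEVEL_BITS)) & MPIDR_LEVEL_MASK)

    int main(void)
    {
        unsigned long mpidr = 0x80000102UL;  /* example: Aff2=0, Aff1=1, Aff0=2 */

        printf("Aff0=%lu Aff1=%lu Aff2=%lu\n",
               MPIDR_AFFINITY_LEVEL(mpidr, 0),
               MPIDR_AFFINITY_LEVEL(mpidr, 1),
               MPIDR_AFFINITY_LEVEL(mpidr, 2));
        return 0;
    }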
Lorenzo Pieralisi [Wed, 17 Jul 2013 13:54:21 +0000 (14:54 +0100)]
arm64: add CPU power management menu/entries
This patch provides a menu for CPU power management options in the
arm64 Kconfig and adds an entry to enable the generic CPU idle configuration.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Lorenzo Pieralisi [Thu, 7 Nov 2013 18:37:14 +0000 (18:37 +0000)]
arm64: kernel: add PM build infrastructure
This patch adds the required makefile and kconfig entries to enable PM
for arm64 systems.
The kernel relies on the cpu_{suspend}/{resume} infrastructure to
properly save the context for a CPU and put it to sleep, hence this
patch adds the config option required to enable cpu_{suspend}/{resume}
API.
In order to rely on the CPU PM implementation for saving and restoring
of CPU subsystems like GIC and PMU, the arch Kconfig must be also
augmented to select the CONFIG_CPU_PM option when SUSPEND or CPU_IDLE
kernel implementations are selected.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Conflicts:
arch/arm64/Kconfig
arch/arm64/kernel/Makefile
Change-Id: I2329b804b5d335c7b585709df363918b85627dbd
Lorenzo Pieralisi [Wed, 17 Jul 2013 09:12:24 +0000 (10:12 +0100)]
arm64: kernel: add CPU idle call
When CPU idle is enabled, the architectural idle call should go through
the idle subsystem to allow CPUs to enter idle states defined
by the platform CPU idle back-end operations.
This patch, mirroring other archs behaviour, adds the CPU idle call to the
architectural arch_cpu_idle implementation for arm64.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Lorenzo Pieralisi [Wed, 4 Sep 2013 09:55:17 +0000 (10:55 +0100)]
arm64: enable generic clockevent broadcast
On platforms with power management capabilities, timers that are shut
down when a CPU enters deep C-states must be emulated using an always-on
timer and a timer IPI to relay the timer IRQ to target CPUs on an SMP
system.
This patch enables the generic clockevents broadcast infrastructure for
arm64, by providing the required Kconfig entries and adding the timer
IPI infrastructure.
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Conflicts:
arch/arm64/Kconfig
Change-Id: I620c6a7ef19b655109342c5ef28ffa2f062bd630
Lorenzo Pieralisi [Mon, 5 Aug 2013 14:20:35 +0000 (15:20 +0100)]
arm64: kernel: implement HW breakpoints CPU PM notifier
When a CPU is shutdown either through CPU idle or suspend to RAM, the
content of HW breakpoint registers must be reset or restored to proper
values when CPU resume from low power states. This patch adds debug register
restore operations to the HW breakpoint control function and implements a
CPU PM notifier that allows to restore the content of HW breakpoint registers
to allow proper suspend/resume operations.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Lorenzo Pieralisi [Tue, 13 Aug 2013 09:45:19 +0000 (10:45 +0100)]
arm64: kernel: refactor code to install/uninstall breakpoints
Most of the code executed to install and uninstall breakpoints is
common and can be factored out in a function that through a runtime
operations type provides the requested implementation.
This patch creates a common function that can be used to install/uninstall
breakpoints and defines the set of operations that can be carried out
through it.
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Conflicts:
arch/arm64/kernel/hw_breakpoint.c
Lorenzo Pieralisi [Mon, 5 Aug 2013 14:04:46 +0000 (15:04 +0100)]
arm: kvm: implement CPU PM notifier
Upon CPU shutdown and consequent warm-reboot, the hypervisor CPU state
must be re-initialized. This patch implements a CPU PM notifier that
upon warm-boot calls a KVM hook to reinitialize properly the hypervisor
state so that the CPU can be safely resumed.
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Lorenzo Pieralisi [Fri, 19 Jul 2013 16:48:08 +0000 (17:48 +0100)]
arm64: kernel: implement fpsimd CPU PM notifier
When a CPU enters a low power state, its FP register content is lost.
This patch adds a notifier to save the FP context on CPU shutdown
and restore it on CPU resume. The context is saved and restored only
if the suspending thread is not a kernel thread, mirroring the current
context switch behaviour.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Conflicts:
arch/arm64/kernel/fpsimd.c
Lorenzo Pieralisi [Mon, 22 Jul 2013 11:22:13 +0000 (12:22 +0100)]
arm64: kernel: cpu_{suspend/resume} implementation
Kernel subsystems like CPU idle and suspend to RAM require a generic
mechanism to suspend a processor, save its context and put it into
a quiescent state. The cpu_{suspend}/{resume} implementation provides
such a framework through a kernel interface allowing to save/restore
registers, flush the context to DRAM and suspend/resume to/from
low-power states where processor context may be lost.
The CPU suspend implementation relies on the suspend protocol registered
in CPU operations to carry out a suspend request after context is
saved and flushed to DRAM. The cpu_suspend interface:
int cpu_suspend(unsigned long arg);
allows to pass an opaque parameter that is handed over to the suspend CPU
operations back-end so that it can take action according to the
semantics attached to it. The arg parameter allows suspend to RAM and CPU
idle drivers to communicate to suspend protocol back-ends; it requires
standardization so that the interface can be reused seamlessly across
systems, paving the way for generic drivers.
Context memory is allocated on the stack, whose address is stashed in a
per-cpu variable to keep track of it and passed to core functions that
save/restore the registers required by the architecture.
Even though, upon successful execution, the cpu_suspend function shuts
down the suspending processor, the warm boot resume mechanism, based
on the cpu_resume function, makes the resume path operate as a
cpu_suspend function return, so that cpu_suspend can be treated as a C
function by the caller, which simplifies coding the PM drivers that rely
on the cpu_suspend API.
Upon context save, the minimal amount of memory is flushed to DRAM so
that it can be retrieved when the MMU is off and caches are not searched.
The suspend CPU operation, depending on the required operations (eg CPU vs
Cluster shutdown) is in charge of flushing the cache hierarchy either
implicitly (by calling firmware implementations like PSCI) or explicitly
by executing the required cache maintenance functions.
Debug exceptions are disabled during cpu_{suspend}/{resume} operations
so that debug registers can be saved and restored properly preventing
preemption from debug agents enabled in the kernel.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Conflicts:
arch/arm64/kernel/asm-offsets.c
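A hedged sketch of how a platform idle driver might sit on top of the interface described above (only cpu_suspend() comes from this patch set; the surrounding names are placeholders):

    extern int cpu_suspend(unsigned long arg);

    static int my_enter_deep_idle(unsigned long state_id)
    {
        /* On the resume path cpu_suspend() behaves like an ordinary C
         * function returning here, 0 on success or an error code. */
        return cpu_suspend(state_id);
    }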
Lorenzo Pieralisi [Wed, 17 Jul 2013 09:14:45 +0000 (10:14 +0100)]
arm64: kernel: suspend/resume registers save/restore
Power management software requires the kernel to save and restore
CPU registers while going through suspend and resume operations
triggered by kernel subsystems like CPU idle and suspend to RAM.
This patch implements code that provides save and restore mechanism
for the arm v8 implementation. Memory for the context is passed as
parameter to both cpu_do_suspend and cpu_do_resume functions, and allows
the callers to implement context allocation as they deem fit.
The registers that are saved and restored correspond to the registers set
actually required by the kernel to be up and running which represents a
subset of v8 ISA.
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>