Santosh Sivaraj [Tue, 27 Jun 2017 07:00:06 +0000 (12:30 +0530)]
powerpc/smp: Convert NR_CPUS to nr_cpu_ids
nr_cpu_ids can be limited by the nr_cpus boot parameter, whereas NR_CPUS is a
compile-time constant, which shouldn't be compared against during a CPU kick.
Signed-off-by: Santosh Sivaraj <santosh@fossix.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Santosh Sivaraj [Tue, 27 Jun 2017 07:00:05 +0000 (12:30 +0530)]
powerpc/smp: Do not BUG_ON if invalid CPU during kick
During secondary start, we do not need to BUG_ON if an invalid CPU number
is passed. We already print an error if the secondary cannot be started, so
just return an error instead.
Signed-off-by: Santosh Sivaraj <santosh@fossix.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Javier Martinez Canillas [Thu, 15 Jun 2017 18:54:18 +0000 (20:54 +0200)]
powerpc/44x: Add generic compatible string for I2C EEPROM
The at24 driver allows registering I2C EEPROM chips using different vendor
and device combinations, but the I2C subsystem does not take the vendor into
account when matching with the I2C table, since it only has device entries.
When matching with an OF table, however, both the vendor and the device have
to be taken into account, so the driver defines only a set of compatible
strings using the "atmel" vendor as a generic fallback for compatible I2C
devices.
So add this generic fallback to the device node's compatible string to make
the device match the driver via the OF device ID table.
Signed-off-by: Javier Martinez Canillas <javier@dowhile0.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
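For illustration, a minimal C sketch of the matching described above; the table entries and the example vendor string are illustrative, not the driver's actual list:

#include <linux/mod_devicetable.h>

/* The at24 OF match table only carries generic "atmel,<part>" entries: */
static const struct of_device_id at24_of_match_sketch[] = {
	{ .compatible = "atmel,24c02" },
	{ .compatible = "atmel,24c32" },
	{ /* sentinel */ }
};

/* A device node then lists its real vendor first and the generic
 * fallback second, e.g.:
 *   compatible = "renesas,r1ex24002", "atmel,24c02";
 * and OF matching succeeds via the fallback entry. */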
Javier Martinez Canillas [Thu, 15 Jun 2017 18:54:17 +0000 (20:54 +0200)]
powerpc/83xx: Add generic compatible string for I2C EEPROM
The at24 driver allows registering I2C EEPROM chips using different vendor
and device combinations, but the I2C subsystem does not take the vendor into
account when matching with the I2C table, since it only has device entries.
When matching with an OF table, however, both the vendor and the device have
to be taken into account, so the driver defines only a set of compatible
strings using the "atmel" vendor as a generic fallback for compatible I2C
devices.
So add this generic fallback to the device node's compatible string to make
the device match the driver via the OF device ID table.
Signed-off-by: Javier Martinez Canillas <javier@dowhile0.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Javier Martinez Canillas [Thu, 15 Jun 2017 18:54:16 +0000 (20:54 +0200)]
powerpc/512x: Add generic compatible string for I2C EEPROM
The at24 driver allows registering I2C EEPROM chips using different vendor
and device combinations, but the I2C subsystem does not take the vendor into
account when matching with the I2C table, since it only has device entries.
When matching with an OF table, however, both the vendor and the device have
to be taken into account, so the driver defines only a set of compatible
strings using the "atmel" vendor as a generic fallback for compatible I2C
devices.
So add this generic fallback to the device node's compatible string to make
the device match the driver via the OF device ID table.
Signed-off-by: Javier Martinez Canillas <javier@dowhile0.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Javier Martinez Canillas [Thu, 15 Jun 2017 18:54:15 +0000 (20:54 +0200)]
powerpc/fsl: Add generic compatible string for I2C EEPROM
The at24 driver allows registering I2C EEPROM chips using different vendor
and device combinations, but the I2C subsystem does not take the vendor into
account when matching with the I2C table, since it only has device entries.
When matching with an OF table, however, both the vendor and the device have
to be taken into account, so the driver defines only a set of compatible
strings using the "atmel" vendor as a generic fallback for compatible I2C
devices.
So add this generic fallback to the device node's compatible string to make
the device match the driver via the OF device ID table.
Signed-off-by: Javier Martinez Canillas <javier@dowhile0.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Javier Martinez Canillas [Thu, 15 Jun 2017 18:54:14 +0000 (20:54 +0200)]
powerpc/5200: Add generic compatible string for I2C EEPROM
The at24 driver allows registering I2C EEPROM chips using different vendor
and device combinations, but the I2C subsystem does not take the vendor into
account when matching with the I2C table, since it only has device entries.
When matching with an OF table, however, both the vendor and the device have
to be taken into account, so the driver defines only a set of compatible
strings using the "atmel" vendor as a generic fallback for compatible I2C
devices.
So add this generic fallback to the device node's compatible string to make
the device match the driver via the OF device ID table.
Signed-off-by: Javier Martinez Canillas <javier@dowhile0.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 14 Jun 2017 13:02:41 +0000 (23:02 +1000)]
cpuidle: powerpc: no memory barrier after break from idle
A memory barrier is not required after the task wakes up,
only if we clear the polling flag before waking. The case
where we have work to do is the important one, so optimise
for it.
Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Wed, 14 Jun 2017 13:02:40 +0000 (23:02 +1000)]
cpuidle: powerpc: read mostly for common globals
Ensure these don't get put into bouncing cachelines.
Reviewed-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
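As a hedged sketch of the annotation in question (the variable name is illustrative, not one of the actual globals):

#include <linux/cache.h>

/* __read_mostly places the variable in .data..read_mostly, keeping
 * frequently-read globals off cachelines that bounce with writes: */
static bool snooze_enabled_sketch __read_mostly = true;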
Nicholas Piggin [Wed, 14 Jun 2017 13:02:39 +0000 (23:02 +1000)]
cpuidle: powerpc: cpuidle set polling before enabling irqs
local_irq_enable() can cause interrupts to be taken, which could take a
significant amount of processing time. The idle process should set its
polling flag before this, so another process that wakes it during this
time will not have to send an IPI.
Expand the TIF_POLLING_NRFLAG coverage to as large as possible.
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
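A minimal sketch of the intended ordering, assuming the generic polling helpers; this is illustrative, not the exact pseries/powernv idle loop:

#include <linux/sched/idle.h>

static void idle_poll_sketch(void)
{
	current_set_polling_and_test();	/* set TIF_POLLING_NRFLAG first */
	local_irq_enable();		/* interrupts may now run for a while,
					 * but wakers see the polling flag
					 * and skip the IPI */
	while (!need_resched())
		cpu_relax();
	current_clr_polling();
}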
Hari Bathini [Thu, 1 Jun 2017 19:40:10 +0000 (01:10 +0530)]
powerpc/fadump: add reschedule point while releasing memory
Around 95% of memory is reserved by fadump/capture kernel. All this
memory is freed, one page at a time, on writing '1' to the node
/sys/kernel/fadump_release_mem. On systems with large memory, this
can take a long time to complete, leading to soft lockup warning
messages. To avoid this, add reschedule points at regular intervals.
Also, while memblock_reserve() implicitly takes care of holes in the
given memory range while reserving memory, those holes need to be
taken care of while releasing memory as memory is freed one page at
a time. Add support to skip holes while releasing memory.
Suggested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
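A hedged sketch of the shape of such a release loop; the helper names and the reschedule interval here are illustrative, not the exact fadump code:

#include <linux/memblock.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/sizes.h>

static void release_mem_sketch(unsigned long begin, unsigned long end)
{
	unsigned long addr;

	for (addr = begin; addr < end; addr += PAGE_SIZE) {
		/* skip holes: free only addresses backed by real memory */
		if (!memblock_is_memory(addr))
			continue;
		free_reserved_page(pfn_to_page(addr >> PAGE_SHIFT));
		/* reschedule point at a regular interval to avoid
		 * soft lockup warnings on large-memory systems */
		if (!((addr - begin) % SZ_16M))
			cond_resched();
	}
}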
Hari Bathini [Thu, 1 Jun 2017 17:22:10 +0000 (22:52 +0530)]
powerpc/fadump: provide a helpful error message
fadump fails to register when there are holes in the boot memory area.
Provide a helpful error message to the user in such a case.
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Hari Bathini [Thu, 1 Jun 2017 17:21:26 +0000 (22:51 +0530)]
powerpc/fadump: avoid holes in boot memory area when fadump is registered
To register fadump, the boot memory area - the size of the low memory chunk
that is required for a kernel to boot successfully when booted with
restricted memory - is assumed to have no holes. But this memory area is
currently not protected from hot-remove operations. So fadump could fail to
re-register after a memory hot-remove operation if memory is removed from
the boot memory area. To avoid this, ensure that memory from the boot
memory area is not hot-removed when fadump is registered.
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Reviewed-by: Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Hari Bathini [Thu, 1 Jun 2017 17:20:38 +0000 (22:50 +0530)]
powerpc/fadump: avoid duplicates in crash memory ranges
fadump sets up crash memory ranges to be used for creating PT_LOAD
program headers in the elfcore header. The memory chunk from RMA_START
through the boot memory area size is added as the first memory range,
because firmware, at the time of crash, moves this memory chunk to a
different location specified during fadump registration, making it
necessary to create a separate program header for it with the correct
offset. This memory chunk is skipped while setting up the remaining
memory ranges. But currently there is a possibility that some of this
memory may have duplicate entries, for example when it is hot-removed
and added again. Ensure that no two memory ranges represent the same
memory.
When 5 LMBs are hot-removed and then hot-plugged before registering
fadump, here is how the program headers in /proc/vmcore exported by
fadump look
without this change:
Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  NOTE           0x0000000000010000 0x0000000000000000 0x0000000000000000
                 0x0000000000001894 0x0000000000001894         0
  LOAD           0x0000000000021020 0xc000000000000000 0x0000000000000000
                 0x0000000040000000 0x0000000040000000  RWE    0
  LOAD           0x0000000040031020 0xc000000000000000 0x0000000000000000
                 0x0000000010000000 0x0000000010000000  RWE    0
  LOAD           0x0000000050040000 0xc000000010000000 0x0000000010000000
                 0x0000000050000000 0x0000000050000000  RWE    0
  LOAD           0x00000000a0040000 0xc000000060000000 0x0000000060000000
                 0x000000019ffe0000 0x000000019ffe0000  RWE    0
and with this change:
Program Headers:
  Type           Offset             VirtAddr           PhysAddr
                 FileSiz            MemSiz              Flags  Align
  NOTE           0x0000000000010000 0x0000000000000000 0x0000000000000000
                 0x0000000000001894 0x0000000000001894         0
  LOAD           0x0000000000021020 0xc000000000000000 0x0000000000000000
                 0x0000000040000000 0x0000000040000000  RWE    0
  LOAD           0x0000000040030000 0xc000000040000000 0x0000000040000000
                 0x0000000020000000 0x0000000020000000  RWE    0
  LOAD           0x0000000060030000 0xc000000060000000 0x0000000060000000
                 0x000000019ffe0000 0x000000019ffe0000  RWE    0
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Reviewed-by: Mahesh J Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Madhavan Srinivasan [Sun, 25 Jun 2017 15:34:46 +0000 (21:04 +0530)]
powerpc/perf: Fix branch event code for power9
Correct "branch" event code of Power9 is "r4d05e". Replace the current
"branch" event code with "r4d05e" and add a hack to use "r10012" as
event code for Power9 DD1.
Fixes:
d89f473ff6f8 ("powerpc/perf: Fix PM_BRU_CMPL event code for power9")
Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
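For context, a hedged userspace sketch of counting a raw event via perf_event_open(2); the event code is the one named above, the rest is standard boilerplate:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <string.h>
#include <unistd.h>

static int open_branch_counter(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.type = PERF_TYPE_RAW;
	attr.size = sizeof(attr);
	attr.config = 0x4d05e;	/* Power9 "branch" event code (r4d05e) */

	/* pid = 0, cpu = -1: count this process on any CPU */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}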
Benjamin Herrenschmidt [Sat, 24 Jun 2017 19:57:27 +0000 (14:57 -0500)]
powerpc/xive: Silence message about VP block allocation
There is no reason for that message to be pr_info(); it will be printed
every time we start a KVM guest.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Sat, 24 Jun 2017 17:29:01 +0000 (12:29 -0500)]
powerpc/64s: Invalidate ERAT on powersave wakeup for POWER9
On POWER9 the ERAT may be incorrect on wakeup from some stop states
that lose state. This causes random segvs and illegal instructions
when these stop states are enabled.
This patch invalidates the ERAT on wakeup on POWER9 to prevent this
from causing a problem.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Merge comment change with upstream changes]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Benjamin Herrenschmidt [Sun, 25 Jun 2017 20:08:46 +0000 (15:08 -0500)]
powerpc: Only do ERAT invalidate on radix context switch on P9 DD1
From: Michael Neuling <mikey@neuling.org>
On P9 (Nimbus) DD2 and later, in radix mode, the move to the PID
register will implicitly invalidate the user space ERAT entries
and leave the kernel ones alone. Thus the only thing needed is
an isync() to synchronize this with subsequent uaccess's
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Russell Currey [Wed, 21 Jun 2017 07:18:04 +0000 (17:18 +1000)]
powerpc/powernv/pci: Enable 64-bit devices to access >4GB DMA space
On PHB3/POWER8 systems, devices can select between two different sections
of address space, TVE#0 and TVE#1. TVE#0 is intended for 32-bit devices
that aren't capable of addressing more than 4GB. Selecting TVE#1 instead,
with the capability of addressing over 4GB, is performed by setting bit 59
of a PCI address.
However, some devices aren't capable of addressing at least 59 bits, but
still want more than 4GB of DMA space. In order to enable this, reconfigure
TVE#0 to be suitable for 64-bit devices by allocating memory past the
initial 4GB that is inaccessible by 64-bit DMAs.
This bypass mode is only enabled if a device requests 4GB or more of DMA
address space, if the system has PHB3 (POWER8 systems), and if the device
does not share a PE with any devices from different vendors.
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Russell Currey [Wed, 21 Jun 2017 07:18:03 +0000 (17:18 +1000)]
powerpc/powernv/pci: Add helper to check if a PE has a single vendor
Add a helper that determines if all the devices contained in a given PE
are all from the same vendor or not. This can be useful in determining
if it's okay to make PE-wide changes that may be suitable for some
devices but not for others.
This is used later in the series.
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
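A hedged sketch of what such a check can look like; the function name and traversal are assumptions rather than the actual powernv helper:

#include <linux/pci.h>

static bool pe_single_vendor_sketch(struct pci_bus *bus)
{
	struct pci_dev *pdev;
	u16 vendor = 0;

	list_for_each_entry(pdev, &bus->devices, bus_list) {
		if (!vendor)
			vendor = pdev->vendor;	/* first device seen */
		else if (pdev->vendor != vendor)
			return false;		/* mixed vendors in this PE */
	}
	return true;
}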
Russell Currey [Wed, 14 Jun 2017 04:20:00 +0000 (14:20 +1000)]
powerpc/powernv/pci: Add support for PHB4 diagnostics
As with P7IOC and PHB3, add kernel-side support for decoding and printing
diagnostic data for PHB4.
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Russell Currey [Wed, 14 Jun 2017 04:19:59 +0000 (14:19 +1000)]
powerpc/powernv/pci: Dynamically allocate PHB diag data
Diagnostic data for PHBs currently works by allocating a fixed-size buffer.
This is simple, but either wastes memory (though only a few kilobytes) or,
in the case of PHB4, isn't enough to fit the whole data blob.
For machines that don't describe the diagnostic data size in the device
tree, use the hardcoded buffer size as before. For those that do, only
allocate exactly what's needed.
In the special case of P7IOC (which has two types of diag data), the larger
should be specified in the device tree.
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Russell Currey [Wed, 14 Jun 2017 04:19:58 +0000 (14:19 +1000)]
powerpc/powernv/pci: Reduce spam when dumping PEST
Dumping the PE State Tables (PEST) can be highly verbose if a number of PEs
are affected, especially in the case where the whole PHB is frozen and 512
lines get printed. Check for duplicates when dumping the PEST to reduce
useless output.
For example:
PE[0f8] A/B: 9700002600000000 80000080d00000f8
PE[0f9] A/B: 8000000000000000 0000000000000000
PE[..0fe] A/B: as above
PE[0ff] A/B: 8440002b00000000 0000000000000000
instead of:
PE[0f8] A/B: 9700002600000000 80000080d00000f8
PE[0f9] A/B: 8000000000000000 0000000000000000
PE[0fa] A/B: 8000000000000000 0000000000000000
PE[0fb] A/B: 8000000000000000 0000000000000000
PE[0fc] A/B: 8000000000000000 0000000000000000
PE[0fd] A/B: 8000000000000000 0000000000000000
PE[0fe] A/B: 8000000000000000 0000000000000000
PE[0ff] A/B: 8440002b00000000 0000000000000000
and you can imagine how much worse it can get for 512 PEs.
Signed-off-by: Russell Currey <ruscur@russell.cc>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Neuling [Mon, 8 May 2017 06:18:31 +0000 (16:18 +1000)]
powerpc/tm: Fix comment
Update to real function name.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Neuling [Mon, 8 May 2017 06:23:31 +0000 (16:23 +1000)]
powerpc: Fix asm offsets to point to actual FP and VMX regs
The asm code assumes the FP regs are at the start of fp_state. While
this is true now, it may not always be the case and there is nothing
enforcing it.
This fixes the asm-offsets to point to the actual FP registers inside
the fp_state. Similarly for VMX.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Neuling [Thu, 15 Jun 2017 01:53:16 +0000 (11:53 +1000)]
powerpc: Fix /proc/cpuinfo revision for POWER9 DD2
The P9 PVR bits 12-15 don't indicate a revision but instead different
chip configurations. From BookIV we have:
Bits Configuration
0 : Scale out 12 cores
1 : Scale out 24 cores
2 : Scale up 12 cores
3 : Scale up 24 cores
DD1 doesn't use this but DD2 does. Linux will mostly use the "Scale
out 24 core" configuration (ie. SMT4 not SMT8) which results in a PVR
of 0x004e1200. The revision in /proc/cpuinfo is hence incorrectly
reported as "18.0".
This patch fixes this to mask off only the relevant bits for the major
revision (ie. bits 8-11) for POWER9.
Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
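A hedged sketch of the fix's effect on the revision calculation (masks per the commit text; not the exact cputable code):

static void p9_revision_sketch(void)
{
	unsigned int pvr = mfspr(SPRN_PVR);

	/* On POWER9, bits 12-15 of the revision half encode chip
	 * configuration, not revision, so the major revision is only
	 * bits 8-11: */
	unsigned int major = (pvr >> 8) & 0x0f;	/* was & 0xff -> "18" */
	unsigned int minor = pvr & 0xff;

	pr_info("revision       : %u.%u\n", major, minor);
}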
Balbir Singh [Tue, 11 Apr 2017 05:23:25 +0000 (15:23 +1000)]
powerpc/mm: Trace tlbie(l) instructions
Add a trace point for tlbie(l) (Translation Lookaside Buffer Invalidate
Entry (Local)) instructions.
The tlbie instruction has changed over the years, so not all versions
accept the same operands. Use the ISA v3 field operands because they are
the most verbose; we may change them in future.
Example output:
qemu-system-ppc-5371 [016] 1412.369519: tlbie: tlbie with lpid 0, local 1, rb=67bd8900174c11c1, rs=0, ric=0 prs=0 r=0
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
[mpe: Add some missing trace_tlbie()s, reword change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
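A hedged sketch of a call site; trace_tlbie() takes the ISA v3 operand fields, passing zeroes for fields a given tlbie variant doesn't use (illustrative, not an exact hunk from the patch):

static void local_flush_sketch(unsigned long rb)
{
	asm volatile("ptesync" : : : "memory");
	asm volatile("tlbiel %0" : : "r" (rb) : "memory");
	asm volatile("ptesync" : : : "memory");
	trace_tlbie(0 /* lpid */, 1 /* local */, rb, 0 /* rs */,
		    0 /* ric */, 0 /* prs */, 0 /* r */);
}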
Paul Mackerras [Sat, 27 May 2017 08:04:52 +0000 (18:04 +1000)]
powerpc: Convert VDSO update function to use new update_vsyscall interface
This converts the powerpc VDSO time update function to use the new
interface introduced in commit 576094b7f0aa ("time: Introduce new
GENERIC_TIME_VSYSCALL", 2012-09-11). Where the old interface gave
us the time as of the last update in seconds and whole nanoseconds,
with the new interface we get the nanoseconds part effectively in
a binary fixed-point format with tk->tkr_mono.shift bits to the
right of the binary point.
With the old interface, the fractional nanoseconds got truncated,
meaning that the value returned by the VDSO clock_gettime function
would have about 1ns of jitter in it compared to the value computed
by the generic timekeeping code in the kernel.
The powerpc VDSO time functions (clock_gettime and gettimeofday)
already work in units of 2^-32 seconds, or 0.23283 ns, because that
makes it simple to split the result into seconds and fractional
seconds, and represent the fractional seconds in either microseconds
or nanoseconds. This is good enough accuracy for now, so this patch
avoids changing how the VDSO works or the interface in the VDSO data
page.
This patch converts the powerpc update_vsyscall_old to be called
update_vsyscall and use the new interface. We convert the fractional
second to units of 2^-32 seconds without truncating to whole nanoseconds.
(There is still a conversion to whole nanoseconds for any legacy users
of the vdso_data/systemcfg stamp_xtime field.)
In addition, this improves the accuracy of the computation of tb_to_xs
for those systems with high-frequency timebase clocks (>= 268.5 MHz)
by doing the right shift in two parts, one before the multiplication and
one after, rather than doing the right shift before the multiplication.
(We can't do all of the right shift after the multiplication unless we
use 128-bit arithmetic.)
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Acked-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
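To make the two-part shift concrete, a hedged arithmetic sketch (names illustrative):

static u64 scale_sketch(u64 x, u64 mult, int shift, int n)
{
	/* The old form, (x >> shift) * mult, throws away 'shift' low bits
	 * before multiplying. Doing part of the shift after the multiply
	 * keeps n extra bits of precision, with no 128-bit product needed
	 * as long as the intermediate fits in 64 bits: */
	return ((x >> (shift - n)) * mult) >> n;
}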
Santosh Sivaraj [Tue, 20 Jun 2017 07:44:47 +0000 (13:14 +0530)]
powerpc/time: Fix tracing in time.c
Since trace_clock is in a different file and already marked with notrace,
enable tracing in time.c by removing it from the disabled list in Makefile.
Also annotate clocksource read functions and sched_clock with notrace.
Testing: Timer and ftrace selftests run with different trace clocks.
Acked-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Santosh Sivaraj <santosh@fossix.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
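A hedged sketch of the annotation; the function body here is illustrative rather than copied from time.c:

/* notrace keeps ftrace from instrumenting the function that the
 * tracer itself calls to take timestamps: */
static notrace u64 timebase_read_sketch(struct clocksource *cs)
{
	return (u64)get_tb();
}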
Michael Ellerman [Mon, 19 Jun 2017 11:57:33 +0000 (21:57 +1000)]
powerpc/64s: Rename slb_allocate_realmode() to slb_allocate()
As for slb_miss_realmode(), rename slb_allocate_realmode() to avoid
confusion over whether it runs in real or virtual mode - it runs in
both.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Michael Ellerman [Mon, 19 Jun 2017 11:52:03 +0000 (21:52 +1000)]
powerpc/64s: Rename slb_miss_realmode() to slb_miss_common()
slb_miss_realmode() doesn't always run in real mode, which is what the
name implies. So rename it to avoid confusing people.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Michael Ellerman [Mon, 19 Jun 2017 11:47:11 +0000 (21:47 +1000)]
powerpc/64s: Use BRANCH_TO_COMMON() for slb_miss_realmode
All the callers of slb_miss_realmode currently open code the #ifndef
CONFIG_RELOCATABLE check and the branch via CTR in the RELOCATABLE case.
We have a macro to do this, BRANCH_TO_COMMON(), so use it.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Nicholas Piggin [Sun, 21 May 2017 13:15:50 +0000 (23:15 +1000)]
powerpc/64s/paca: EX_CTR is not used with RELOCATABLE=n, remove it
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 21 May 2017 13:15:49 +0000 (23:15 +1000)]
powerpc/64s/paca: EX_R3 can be merged with EX_DAR
EX_R3 is used only for a small section of the bad stack handler.
Merge it with EX_DAR.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 21 May 2017 13:15:48 +0000 (23:15 +1000)]
powerpc/64s/paca: EX_LR can be merged with EX_DAR
EX_LR is used only for a small section of the SLB miss handler.
Merge it with EX_DAR.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 21 May 2017 13:15:47 +0000 (23:15 +1000)]
powerpc/64s/paca: EX_SRR0 is unused, remove it
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 21 May 2017 13:15:46 +0000 (23:15 +1000)]
powerpc/64s: Add EX_SIZE definition for paca exception save areas
Rather than open-coding it 4 times.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Move __ASSEMBLY__ guards into head-64.h where they're really needed]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 21 May 2017 13:15:45 +0000 (23:15 +1000)]
powerpc/64s: Avoid r3 save/restore in SLB miss handler
The SLB miss handler uses r3 for the faulting address but r12 is
mostly able to be freed up to save r3 in. It just requires SRR1
be reloaded again on error.
It would be more conventional to use r12 for SRR1 (and use r11 to
save r3), but slb_allocate_realmode clobbers r11 and not r12.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 21 May 2017 13:15:44 +0000 (23:15 +1000)]
powerpc/64s: SLB miss already has CTR saved for relocatable kernel
The EXCEPTION_PROLOG_1 used by SLB miss already saves CTR when the
kernel is built with CONFIG_RELOCATABLE. So it does not have to be
saved and reloaded when branching to slb_miss_realmode. It can be
restored from the PACA as usual.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 21 May 2017 13:15:43 +0000 (23:15 +1000)]
powerpc/64s: Avoid saving faulting address into EX_DAR in SLB miss
The EX_DAR save area is only used in exceptional cases. With r3 no
longer clobbered by slb_allocate_realmode, saving faulting address to
EX_DAR can be deferred to those cases.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 21 May 2017 13:15:42 +0000 (23:15 +1000)]
powerpc/64s: Preserve r3 in slb_allocate_realmode()
One fewer registers clobbered by this function means the SLB miss
handler can save one fewer.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 13 Jun 2017 13:05:57 +0000 (23:05 +1000)]
powerpc/64s/idle: Run latch switch is done with MSR[EE]=0
In the idle sleep/wake code we know that MSR[EE] is clear, so we can
avoid 2 x mfmsr and 2 x mtmsr by calling the double-underscore
versions of the run latch routines which assume interrupts are already
disabled.
Acked-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 13 Jun 2017 13:05:52 +0000 (23:05 +1000)]
powerpc/64s/idle: Predict HMI wakeup as unlikely
In a busy system, idle wakeups can be expected from IPIs and device
interrupts.
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 13 Jun 2017 13:05:51 +0000 (23:05 +1000)]
powerpc/64s/idle: Avoid SRR usage in idle sleep/wake paths
Idle code now always runs at the 0xc... effective address whether
in real or virtual mode. This means rfid can be ditched, along
with a lot of SRR manipulations.
In the wakeup path, carry SRR1 around in r12. Use mtmsrd to change
MSR states as required.
This also balances the return prediction for the idle call, by
doing blr rather than rfid to return to the idle caller.
On POWER9, 2-process context switch on different cores, with snooze
disabled, increases performance by 2%.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Incorporate v2 fixes from Nick]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 13 Jun 2017 13:05:50 +0000 (23:05 +1000)]
powerpc/64s/idle: Branch to handler with virtual mode offset
Have the system reset idle wakeup handlers branched to in real mode
with the 0xc... kernel address applied. This allows simplifications of
avoiding rfid when switching to virtual mode in the wakeup handler.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 13 Jun 2017 13:05:49 +0000 (23:05 +1000)]
powerpc/64s: Don't unbalance the return branch predictor in __replay_interrupt()
The __replay_interrupt() code is branched to with bl, but the caller is
returned to directly with rfid from the interrupt.
Instead, rfid to a stub that returns to the caller with blr, which
should keep the return branch predictor balanced.
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 13 Jun 2017 13:05:48 +0000 (23:05 +1000)]
powerpc/64s: msgclr when handling doorbell exceptions from system reset
msgsnd doorbell exceptions are cleared when the doorbell interrupt is
taken. However if a doorbell exception causes a system reset interrupt
wake from power saving state, the message is not cleared. Processing
the doorbell from the system reset interrupt requires msgclr to avoid
taking the exception again.
Testing this plus the previous wakeup-direct patch gives:

                               original   wakeup direct   msgclr
Different threads, same core:  315k/s     264k/s          345k/s
Different cores:               235k/s     242k/s          242k/s

Net speedup is +10% for same core, and +3% for different core.
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 13 Jun 2017 13:05:47 +0000 (23:05 +1000)]
powerpc/64s/idle: Process interrupts from system reset wakeup
When the CPU wakes from low power state, it begins at the system reset
interrupt with the exception that caused the wakeup encoded in SRR1.
Today, powernv idle wakeup ignores the wakeup reason (except a special
case for HMI), and the regular interrupt corresponding to the
exception will fire after the idle wakeup exits.
Change this to replay the interrupt from the idle wakeup before
interrupts are hard-enabled.
Test on POWER8 of context_switch selftests benchmark with polling idle
disabled (e.g., always nap, giving cross-CPU IPIs) gives the following
results:
                               original   wakeup direct
Different threads, same core:  315k/s     264k/s
Different cores:               235k/s     242k/s
There is a slowdown for doorbell IPI (same core) case because system
reset wakeup does not clear the message and the doorbell interrupt
fires again needlessly.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 13 Jun 2017 13:05:46 +0000 (23:05 +1000)]
powerpc/powernv: Simplify lazy IRQ handling in CPU offline
Rather than concern ourselves with any soft-mask logic in the CPU
hotplug handler, just hard disable interrupts. This ensures there
are no lazy irqs pending, which means we can call directly to the
idle instruction in order to sleep.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Tue, 13 Jun 2017 13:05:45 +0000 (23:05 +1000)]
powerpc/64s/idle: Move soft interrupt mask logic into C code
This simplifies the asm and fixes irq-off tracing over sleep
instructions.
Also move powersave_nap check for POWER8 into C code, and move
PSSCR register value calculation for POWER9 into C.
Reviewed-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Murilo Opsfelder Araujo [Mon, 29 May 2017 13:26:28 +0000 (10:26 -0300)]
drivers/watchdog/Kconfig: Update CONFIG_WATCHDOG_RTAS dependencies
drivers/watchdog/wdrtas.c uses symbols defined in arch/powerpc/kernel/rtas.c,
which are exported iff CONFIG_PPC_RTAS is selected. Building wdrtas.c without
setting CONFIG_PPC_RTAS throws the following errors:
ERROR: ".rtas_token" [drivers/watchdog/wdrtas.ko] undefined!
ERROR: "rtas_data_buf" [drivers/watchdog/wdrtas.ko] undefined!
ERROR: "rtas_data_buf_lock" [drivers/watchdog/wdrtas.ko] undefined!
ERROR: ".rtas_get_sensor" [drivers/watchdog/wdrtas.ko] undefined!
ERROR: ".rtas_call" [drivers/watchdog/wdrtas.ko] undefined!
This was identified during a randconfig build where CONFIG_WATCHDOG_RTAS=m and
CONFIG_PPC_RTAS was not set. Logs are here:
http://kisskb.ellerman.id.au/kisskb/buildresult/12982152/
This patch fixes the issue by updating CONFIG_WATCHDOG_RTAS to depend on just
CONFIG_PPC_RTAS, removing COMPILE_TEST entirely.
Signed-off-by: Murilo Opsfelder Araujo <mopsfelder@gmail.com>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
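Sketched in Kconfig form, the resulting dependency looks like this (the surrounding lines are illustrative):

config WATCHDOG_RTAS
	tristate "RTAS watchdog"
	depends on PPC_RTAS
	help
	  This driver adds watchdog support for the RTAS watchdog.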
Nicholas Piggin [Thu, 8 Jun 2017 15:36:09 +0000 (01:36 +1000)]
powerpc/64s: Avoid cpabort in context switch when possible
The ISA v3.0B copy-paste facility only requires cpabort when switching
to a process that has foreign real addresses mapped (direct access to
accelerators), to clear a potential copy buffer filled by a previous
thread. There is no accelerator driver implemented yet, so cpabort can
be removed. It can be re-added when a driver is implemented.
POWER9 DD1 requires the copy buffer to always be cleared on context
switch, but if accelerators are not in use, then an unpaired copy from
a dummy region is sufficient to clear data out of the copy buffer.
This increases context switch performance by about 5% on POWER9.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 8 Jun 2017 15:36:08 +0000 (01:36 +1000)]
powerpc/64: Drop explicit hwsync in context switch
The sync (aka. hwsync, aka. heavyweight sync) in the context switch
code to prevent MMIO access being reordered from the point of view of
a single process if it gets migrated to a different CPU is not
required because there is an hwsync performed earlier in the context
switch path.
Comment this so it's clear enough if anything changes on the scheduler
or the powerpc sides. Remove the hwsync from _switch.
This improves context switch performance by 2-3% on POWER8.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 8 Jun 2017 15:36:07 +0000 (01:36 +1000)]
powerpc/64: Drop reservation-clearing ldarx in context switch
There is no need to explicitly break the reservation in _switch,
because we are guaranteed that the context switch path will include a
larx/stcx.
Comment the guarantee and remove the reservation clear from _switch.
This is worth 1-2% in context switch performance.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 8 Jun 2017 15:36:06 +0000 (01:36 +1000)]
powerpc/64s: Leave interrupts hard enabled in context switch for radix
Commit 4387e9ff25 ("[POWERPC] Fix PMU + soft interrupt disable bug")
hard disabled interrupts over the low level context switch, because
the SLB management can't cope with a PMU interrupt accessing the stack
in that window.
Radix based kernel mapping does not use the SLB so it does not require
interrupts hard disabled here.
This is worth 1-2% in context switch performance on POWER9.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 8 Jun 2017 15:35:05 +0000 (01:35 +1000)]
powerpc/64: Avoid restore_math call if possible in syscall exit
The syscall exit code that branches to restore_math is quite heavy on
Book3S, consisting of 2 mtmsr instructions. Threads that don't use both
FP and vector can get caught here if the kernel ever uses FP or vector.
Lazy-FP/vec context switching also trips this case.
So check for lazy FP and vector before switching RI for restore_math.
Move most of this case out of line.
For threads that do want to restore math registers, the MSR switches are
still suboptimal. A future direction may be to use a soft-RI bit to avoid
MSR switches in the kernel (similar to soft-EE), but for now at least the
no-restore POWER9 context switch rate increases by about 5% due to
sched_yield(2) return performance. I haven't constructed a test to measure
the syscall cost.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 8 Jun 2017 15:35:04 +0000 (01:35 +1000)]
powerpc/64s: Optimize hypercall/syscall entry
After bc3551257a ("powerpc/64: Allow for relocation-on interrupts from
guest to host"), a getppid() system call goes from 307 cycles to 358
cycles (+17%) on POWER8. This is due significantly to the scratch SPR
used by the hypercall check.
It turns out there are some volatile registers common to both system
call and hypercall (in particular, r12, cr0, ctr), which can be used to
avoid the SPR and some other overheads. This brings getppid down to 320
cycles (+4%).
Testing hcall entry performance by running "sc 1" in guest userspace:
before this patch it takes 854 cycles, afterwards 826. Also a small win
there.
POWER9 syscall is improved by about the same amount, hcall not tested.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michael Ellerman [Tue, 6 Jun 2017 05:48:57 +0000 (15:48 +1000)]
powerpc/mm/radix: Only add X for pages overlapping kernel text
Currently we map the whole linear mapping with PAGE_KERNEL_X. Instead we
should check if the page overlaps the kernel text and only then add
PAGE_KERNEL_X.
Note that we still use 1G pages if they're available, so this will
typically still result in a 1G executable page at KERNELBASE. So this fix is
primarily useful for catching stray branches to high linear mapping addresses.
Without this patch, we can execute at 1G in xmon using:
0:mon> m c000000040000000
c000000040000000  00 l
c000000040000000  00000000 01006038
c000000040000004  00000000 2000804e
c000000040000008  00000000 x
0:mon> di c000000040000000
c000000040000000  38600001   li      r3,1
c000000040000004  4e800020   blr
0:mon> p c000000040000000
return value is 0x1
After the patch we get a 400 as expected:
0:mon> p c000000040000000
*** 400 exception occurred
Fixes: 2bfd65e45e87 ("powerpc/mm/radix: Add radix callbacks for early init routines")
Cc: stable@vger.kernel.org # v4.7+
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Balbir Singh <bsingharora@gmail.com>
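A hedged sketch of the per-mapping decision; overlaps_kernel_text() is a real helper in asm/sections.h, the surrounding code is illustrative:

static pgprot_t linear_map_prot_sketch(unsigned long vaddr,
				       unsigned long mapping_size)
{
	/* only pages that overlap kernel text need execute permission */
	if (overlaps_kernel_text(vaddr, vaddr + mapping_size))
		return PAGE_KERNEL_X;
	return PAGE_KERNEL;
}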
Michael Ellerman [Thu, 15 Jun 2017 06:20:46 +0000 (16:20 +1000)]
Revert "powerpc: Handle simultaneous interrupts at once"
This reverts commit 45cb08f4791ce6a15c54598b4cb73db4b4b8294f.
For some reason this is causing IRQ problems on Freescale Book3E
machines, e.g. on my p5020ds:
irq 25: nobody cared (try booting with the "irqpoll" option)
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.12.0-rc3-gcc-6.3.1-00037-g45cb08f4791c #624
Call Trace:
[c0000000fffdbb10] [c00000000049962c] .dump_stack+0xa8/0xe8 (unreliable)
[c0000000fffdbba0] [c0000000000babf4] .__report_bad_irq+0x54/0x140
[c0000000fffdbc40] [c0000000000bb11c] .note_interrupt+0x324/0x380
[c0000000fffdbd00] [c0000000000b7110] .handle_irq_event_percpu+0x68/0x88
[c0000000fffdbd90] [c0000000000b718c] .handle_irq_event+0x5c/0xa8
[c0000000fffdbe10] [c0000000000bc01c] .handle_fasteoi_irq+0xe4/0x298
[c0000000fffdbe90] [c0000000000b59c4] .generic_handle_irq+0x50/0x74
[c0000000fffdbf10] [c0000000000075d8] .__do_irq+0x74/0x1f0
[c0000000fffdbf90] [c0000000000189f8] .call_do_irq+0x14/0x24
[c0000000f7173060] [c0000000000077e4] .do_IRQ+0x90/0x120
[c0000000f7173100] [c00000000001d93c] exc_0x500_common+0xfc/0x100
--- interrupt: 501 at .prepare_to_wait_event+0xc/0x14c
    LR = .fsl_elbc_run_command+0xc8/0x23c
[c0000000f71734d0] [c00000000065f418] .nand_reset+0xb8/0x168
[c0000000f7173560] [c00000000065fec4] .nand_scan_ident+0x2b0/0x1638
[c0000000f7173650] [c000000000666cd8] .fsl_elbc_nand_probe+0x34c/0x5f0
ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[c0000000f7173750] [c0000000005a3c60] .platform_drv_probe+0x64/0xb0
[c0000000f71737d0] [c0000000005a12e0] .really_probe+0x290/0x334
[c0000000f7173870] [c0000000005a14a0] .__driver_attach+0x11c/0x120
[c0000000f7173900] [c00000000059e6a0] .bus_for_each_dev+0x98/0xfc
[c0000000f71739a0] [c0000000005a0b3c] .driver_attach+0x34/0x4c
[c0000000f7173a20] [c0000000005a04b0] .bus_add_driver+0x1ac/0x2e0
[c0000000f7173ac0] [c0000000005a2170] .driver_register+0x94/0x160
[c0000000f7173b40] [c0000000005a3be0] .__platform_driver_register+0x60/0x7c
[c0000000f7173bc0] [c000000000d6aab4] .fsl_elbc_nand_driver_init+0x24/0x38
[c0000000f7173c30] [c000000000001934] .do_one_initcall+0x68/0x1b8
[c0000000f7173d00] [c000000000d210f8] .kernel_init_freeable+0x260/0x338
[c0000000f7173db0] [c0000000000021b0] .kernel_init+0x20/0xe70
[c0000000f7173e30] [c0000000000009bc] .ret_from_kernel_thread+0x58/0x9c
handlers:
[<c000000000ed85c8>] .fsl_lbc_ctrl_irq
Disabling IRQ #25
Ben also had concerns with the implementation being potentially slow on
some PICs, so revert it for now.
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 29 May 2017 06:26:44 +0000 (16:26 +1000)]
powerpc/64s: Machine check handle ifetch from foreign real address for POWER9
The i-side 0111b machine check, which is "Instruction Fetch to foreign
address space", was missed by 7b9f71f974 ("powerpc/64s: POWER9 machine
check handler").
check handler").
The POWER9 processor core considers host real addresses with a
nonzero value in RA(8:12) as foreign address space, accessible only
by the copy and paste instructions. The copy and paste instruction
pair can be used to invoke the Nest accelerators via the Virtual
Accelerator Switchboard (VAS).
It is an error for any regular load/store or ifetch to go to a foreign
address. When relocation is on, this causes an MMU exception; when
relocation is off, a machine check exception. It is possible to trigger
this machine check by branching to a foreign address with MSR[IR]=0.
Fixes: 7b9f71f974a1 ("powerpc/64s: POWER9 machine check handler")
Reported-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Dan Carpenter [Fri, 5 May 2017 05:34:58 +0000 (08:34 +0300)]
cxl: Unlock on error in probe
We should unlock if get_cxl_adapter() fails.
Fixes: 594ff7d067ca ("cxl: Support to flash a new image on the adapter from a guest")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Mon, 29 May 2017 15:31:56 +0000 (17:31 +0200)]
powerpc/mm: Rename map_page() to map_kernel_page() on 32-bit
These two functions implement the same semantics, so unify their naming so we
can share code that calls them. The longer name is more descriptive so use it.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Acked-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir Singh [Tue, 2 May 2017 05:17:06 +0000 (15:17 +1000)]
powerpc/mm/hugetlb: Add support for page accounting
Add __GFP_ACCOUNT to __hugepte_alloc()
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir Singh [Tue, 2 May 2017 05:17:05 +0000 (15:17 +1000)]
powerpc/mm/book(e)(3s)/32: Add page table accounting
Add support in pte_alloc_one() and pgd_alloc() by
passing __GFP_ACCOUNT in the flags
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir Singh [Tue, 2 May 2017 05:17:04 +0000 (15:17 +1000)]
powerpc/mm/book(e)(3s)/64: Add page table accounting
Introduce a helper pgtable_gfp_flags() which just returns the current
gfp flags and adds __GFP_ACCOUNT to account for page table allocation.
The generic helper is added to include/asm/pgalloc.h and has two
variants - WARNING ugly bits ahead:
1. If the header is included from a module, no check for
   mm == &init_mm is done, since init_mm is not exported.
2. For kernel includes, the check is done and required, see
   (3e79ec7 "arch: x86: charge page tables to kmemcg").
The fundamental assumption is that no module should be doing
pgd/pud/pmd and pte allocs on behalf of init_mm directly.
NOTE: This adds an overhead to pmd/pud/pgd allocations
similar to x86. The other alternative was to implement
pmd_alloc_kernel/pud_alloc_kernel and pgd_alloc_kernel
with their offset variants.
For 4k page size, pte_alloc_one no longer calls
pte_alloc_one_kernel.
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
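A sketch of the kernel-side variant of the helper as described above (the module-side variant omits the init_mm check since init_mm is not exported):

static inline gfp_t pgtable_gfp_flags(struct mm_struct *mm, gfp_t gfp)
{
	if (unlikely(mm == &init_mm))
		return gfp;		/* don't charge kernel page tables */
	return gfp | __GFP_ACCOUNT;	/* charge to the process's memcg */
}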
Balbir Singh [Thu, 25 May 2017 07:28:32 +0000 (17:28 +1000)]
powerpc/mm/hash: Do a local flush if possible when no batch is active
Currently in hpte_need_flush() if there is no batch pending we always do a
global TLB flush, which is inefficient if the mm has never run on another
thread.
Instead do the same check that __flush_tlb_pending() does and check if a local
flush is sufficient when batch->active is false. Instead of open-coding it we
use mm_is_thread_local().
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
[mpe: Don't use a local, just inline mm_is_thread_local()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Yang Li [Fri, 2 Jun 2017 23:20:23 +0000 (18:20 -0500)]
MAINTAINERS: Update my email address from freescale to nxp
Signed-off-by: Li Yang <leoyang.li@nxp.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Yang Li [Fri, 2 Jun 2017 23:20:22 +0000 (18:20 -0500)]
MAINTAINERS: Update entry for Freescale SoC drivers
Add myself as the maintainer for drivers/fsl/soc/ and fix the scope for
device tree bindings.
Signed-off-by: Li Yang <leoyang.li@nxp.com>
Acked-by: Scott Wood <oss@buserror.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Sun, 4 Jun 2017 12:06:22 +0000 (22:06 +1000)]
selftests/powerpc: context_switch use private futexes with threads
This reduces the overhead of mutex locking and significantly increases
the context switch rate (which helps to measure and profile the context
switch path).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Colin Ian King [Mon, 5 Jun 2017 06:49:12 +0000 (16:49 +1000)]
powerpc: Fix some spelling mistakes
Collation of some spelling fixes from Colin.
Attemping -> Attempting
intialized -> initialized
missmanaged -> mismanaged
Signed-off-by: Colin Ian King <colin.king@canonical.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Matt Brown [Tue, 23 May 2017 23:45:59 +0000 (09:45 +1000)]
powerpc/lib/xor_vmx: Ensure no altivec code executes before enable_kernel_altivec()
The xor_vmx.c file is used for the RAID5 xor operations. In these functions
altivec is enabled to run the operation and then disabled.
The code uses enable_kernel_altivec() around the core of the algorithm;
however, the whole file is built with -maltivec, so the compiler is within
its rights to generate altivec code anywhere. This has been seen at least
once in the wild:
0:mon> di $xor_altivec_2
c0000000000b97d0 3c4c01d9 addis r2,r12,473
c0000000000b97d4 3842db30 addi r2,r2,-9424
c0000000000b97d8 7c0802a6 mflr r0
c0000000000b97dc f8010010 std r0,16(r1)
c0000000000b97e0 60000000 nop
c0000000000b97e4 7c0802a6 mflr r0
c0000000000b97e8 faa1ffa8 std r21,-88(r1)
...
c0000000000b981c f821ff41 stdu r1,-192(r1)
c0000000000b9820 7f8101ce stvx v28,r1,r0 <-- POP
c0000000000b9824 38000030 li r0,48
c0000000000b9828 7fa101ce stvx v29,r1,r0
...
c0000000000b984c 4bf6a06d bl c0000000000238b8 # enable_kernel_altivec
This patch splits the non-altivec code into xor_vmx_glue.c which calls the
altivec functions in xor_vmx.c. By compiling xor_vmx_glue.c without
-maltivec we can guarantee that altivec instructions will not be executed
outside of the enable/disable block.
Signed-off-by: Matt Brown <matthew.brown.dev@gmail.com>
[mpe: Rework change log and include disassembly]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
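The glue shape, sketched from the commit description (the double-underscore inner name is an assumption):

/* xor_vmx_glue.c: built WITHOUT -maltivec, so the compiler cannot
 * emit stray vector code outside the enable/disable window. */
void xor_altivec_2(unsigned long bytes, unsigned long *v1_in,
		   unsigned long *v2_in)
{
	preempt_disable();
	enable_kernel_altivec();
	__xor_altivec_2(bytes, v1_in, v2_in);	/* in xor_vmx.c, -maltivec */
	disable_kernel_altivec();
	preempt_enable();
}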
Hari Bathini [Fri, 2 Jun 2017 07:30:27 +0000 (13:00 +0530)]
powerpc/fadump: Set an upper limit for boot memory size
By default, 5% of system RAM is reserved for preserving boot memory.
Alternatively, a user can specify the amount of memory to reserve.
See Documentation/powerpc/firmware-assisted-dump.txt for details. In
addition to the memory reserved for preserving boot memory, some more
memory is reserved, to save HPTE region, CPU state data and ELF core
headers.
Memory reservation during the first kernel looks like below:

  Low memory                                        Top of memory
  0      boot memory size                                       |
  |           |                |<--Reserved dump area -->|      |
  V           V                |  Permanent Reservation  V      |
  +-----------+----------/ /---+---+----+-----------+----+------+
  |           |                |CPU|HPTE|   DUMP    |ELF |      |
  +-----------+----------/ /---+---+----+-----------+----+------+
        |                                            ^
        |                                            |
        \                                            /
         --------------------------------------------
          Boot memory content gets transferred to
          reserved area by firmware at the time of
          crash
This implicitly means that the sum of the sizes of boot memory, CPU
state data, HPTE region, DUMP preserving area and ELF core headers
can't be greater than the total memory size. But currently, a user is
allowed to specify any value as the boot memory size. So the above rule
is violated when a boot memory size of around 50% of the total available
memory is specified. As the kernel is not handling this currently, it
may lead to undefined behavior. Fix it by setting an upper limit for the
boot memory size at 25% of the total available memory. Also, instead of
using memblock_end_of_DRAM(), which doesn't take holes in the memory
layout into account, use memblock_phys_mem_size() to calculate the
percentage of total available memory.
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
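A hedged sketch of the clamp (names illustrative; fw_dump is the real fadump state struct but its use here is sketched):

static unsigned long clamp_boot_mem_sketch(unsigned long requested)
{
	/* memblock_phys_mem_size() ignores holes in the layout, unlike
	 * memblock_end_of_DRAM(), so base the percentage on it: */
	unsigned long limit = memblock_phys_mem_size() / 4;	/* 25% */

	return min(requested, limit);
}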
Hari Bathini [Mon, 22 May 2017 09:34:47 +0000 (15:04 +0530)]
powerpc/fadump: Update comment about offset where fadump is reserved
With commit f6e6bedb7731 ("powerpc/fadump: Reserve memory at an offset
closer to bottom of RAM"), memory for fadump is no longer reserved at
the top of RAM. But there are still a few places which say so. Change
them appropriately.
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Hari Bathini [Mon, 22 May 2017 09:34:23 +0000 (15:04 +0530)]
powerpc/fadump: Add a warning when 'fadump_reserve_mem=' is used
With commit 11550dc0a00b ("powerpc/fadump: reuse crashkernel parameter
for fadump memory reservation"), the 'fadump_reserve_mem=' parameter is
deprecated in favor of the 'crashkernel=' parameter. Add a warning if
'fadump_reserve_mem=' is still used.
Fixes: 11550dc0a00b ("powerpc/fadump: reuse crashkernel parameter for fadump memory reservation")
Suggested-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
[mpe: Unsplit long printk strings]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Michal Suchanek [Sat, 27 May 2017 15:46:15 +0000 (17:46 +0200)]
powerpc/fadump: Return error when fadump registration fails
- log an error message when registration fails and no error code listed
in the switch is returned
- translate the hv error code to posix error code and return it from
fw_register
- return the posix error code from fw_register to the process writing
to sysfs
- return EEXIST on re-registration
- return success on deregistration when fadump is not registered
- return ENODEV when no memory is reserved for fadump
Signed-off-by: Michal Suchanek <msuchanek@suse.de>
Tested-by: Hari Bathini <hbathini@linux.vnet.ibm.com>
[mpe: Use pr_err() to shrink the error print]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 21 Apr 2017 11:18:52 +0000 (13:18 +0200)]
powerpc: Remove __ilog2()s and use generic ones
With the __ilog2() function as defined in
arch/powerpc/include/asm/bitops.h, GCC will not optimise the code in the
case of a constant parameter. The generic ilog2() function in
include/linux/log2.h is written to handle the case of a constant
parameter.
This patch discards the three __ilog2() functions and defines __ilog2()
as ilog2().
For non-constant calls, the generated code is the same:
int test__ilog2(unsigned long x)
{
	return __ilog2(x);
}

int test__ilog2_u32(u32 n)
{
	return __ilog2_u32(n);
}

int test__ilog2_u64(u64 n)
{
	return __ilog2_u64(n);
}
On PPC32 before the patch:
00000000 <test__ilog2>:
0: 7c 63 00 34 cntlzw r3,r3
4: 20 63 00 1f subfic r3,r3,31
8: 4e 80 00 20 blr
0000000c <test__ilog2_u32>:
c: 7c 63 00 34 cntlzw r3,r3
10: 20 63 00 1f subfic r3,r3,31
14: 4e 80 00 20 blr
On PPC32 after the patch:
00000000 <test__ilog2>:
0: 7c 63 00 34 cntlzw r3,r3
4: 20 63 00 1f subfic r3,r3,31
8: 4e 80 00 20 blr
0000000c <test__ilog2_u32>:
c: 7c 63 00 34 cntlzw r3,r3
10: 20 63 00 1f subfic r3,r3,31
14: 4e 80 00 20 blr
On PPC64 before the patch:
0000000000000000 <.test__ilog2>:
0: 7c 63 00 74 cntlzd r3,r3
4: 20 63 00 3f subfic r3,r3,63
8: 7c 63 07 b4 extsw r3,r3
c: 4e 80 00 20 blr
0000000000000010 <.test__ilog2_u32>:
10: 7c 63 00 34 cntlzw r3,r3
14: 20 63 00 1f subfic r3,r3,31
18: 7c 63 07 b4 extsw r3,r3
1c: 4e 80 00 20 blr
0000000000000020 <.test__ilog2_u64>:
20: 7c 63 00 74 cntlzd r3,r3
24: 20 63 00 3f subfic r3,r3,63
28: 7c 63 07 b4 extsw r3,r3
2c: 4e 80 00 20 blr
On PPC64 after the patch:
0000000000000000 <.test__ilog2>:
0: 7c 63 00 74 cntlzd r3,r3
4: 20 63 00 3f subfic r3,r3,63
8: 7c 63 07 b4 extsw r3,r3
c: 4e 80 00 20 blr
0000000000000010 <.test__ilog2_u32>:
10: 7c 63 00 34 cntlzw r3,r3
14: 20 63 00 1f subfic r3,r3,31
18: 7c 63 07 b4 extsw r3,r3
1c: 4e 80 00 20 blr
0000000000000020 <.test__ilog2_u64>:
20: 7c 63 00 74 cntlzd r3,r3
24: 20 63 00 3f subfic r3,r3,63
28: 7c 63 07 b4 extsw r3,r3
2c: 4e 80 00 20 blr
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
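Per the commit text, the redefinition amounts to (sketch):

#include <linux/log2.h>

/* The generic ilog2() constant-folds when its argument is a
 * compile-time constant, so simply alias the arch name to it: */
#define __ilog2(x) ilog2(x)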
Christophe Leroy [Fri, 21 Apr 2017 11:18:50 +0000 (13:18 +0200)]
powerpc: Replace ffz() by equivalent generic function
With the ffz() function as defined in arch/powerpc/include/asm/bitops.h,
GCC will not optimise the code in the case of a constant parameter.
This patch replaces ffz() by the generic function.
The generic ffz(x) expects to never be called with ~x == 0, as noted in
the comment in include/asm-generic/bitops/ffz.h. The only user of ffz()
within arch/powerpc/ is platforms/512x/mpc5121_ads_cpld.c, which checks
that x is not 0xff.
For non-constant calls, the generated code is the same:
unsigned long testffz(unsigned long x)
{
	return ffz(x);
}
On PPC32, before the patch:
00000018 <testffz>:
18: 7c 63 18 f9 not. r3,r3
1c: 40 82 00 0c bne 28 <testffz+0x10>
20: 38 60 00 20 li r3,32
24: 4e 80 00 20 blr
28: 7d 23 00 d0 neg r9,r3
2c: 7d 23 18 38 and r3,r9,r3
30: 7c 63 00 34 cntlzw r3,r3
34: 20 63 00 1f subfic r3,r3,31
38: 4e 80 00 20 blr
On PPC32, after the patch:
00000018 <testffz>:
18: 39 23 00 01 addi r9,r3,1
1c: 7d 23 18 78 andc r3,r9,r3
20: 7c 63 00 34 cntlzw r3,r3
24: 20 63 00 1f subfic r3,r3,31
28: 4e 80 00 20 blr
On PPC64, before the patch:
0000000000000030 <.testffz>:
30: 7c 60 18 f9 not. r0,r3
34: 38 60 00 40 li r3,64
38: 4d 82 00 20 beqlr
3c: 7c 60 00 d0 neg r3,r0
40: 7c 63 00 38 and r3,r3,r0
44: 7c 63 00 74 cntlzd r3,r3
48: 20 63 00 3f subfic r3,r3,63
4c: 7c 63 07 b4 extsw r3,r3
50: 4e 80 00 20 blr
On PPC64, after the patch:
0000000000000030 <.testffz>:
30: 38 03 00 01 addi r0,r3,1
34: 7c 03 18 78 andc r3,r0,r3
38: 7c 63 00 74 cntlzd r3,r3
3c: 20 63 00 3f subfic r3,r3,63
40: 4e 80 00 20 blr
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 21 Apr 2017 11:18:48 +0000 (13:18 +0200)]
powerpc: Use builtin functions for fls()/__fls()/fls64()
With the fls() functions as defined in arch/powerpc/include/asm/bitops.h,
GCC will not optimise the code in the case of a constant parameter.
This patch replaces __fls() by the builtin function, and modifies
fls() and fls64() to use builtins instead of inline assembly.
For non-constant calls, the generated code is the same:
int testfls(unsigned int x)
{
	return fls(x);
}

unsigned long test__fls(unsigned long x)
{
	return __fls(x);
}

int testfls64(__u64 x)
{
	return fls64(x);
}
On PPC32, before the patch:
00000064 <testfls>:
64: 7c 63 00 34 cntlzw r3,r3
68: 20 63 00 20 subfic r3,r3,32
6c: 4e 80 00 20 blr
00000070 <test__fls>:
70: 7c 63 00 34 cntlzw r3,r3
74: 20 63 00 1f subfic r3,r3,31
78: 4e 80 00 20 blr
0000007c <testfls64>:
7c: 2c 03 00 00 cmpwi r3,0
80: 40 82 00 10 bne 90 <testfls64+0x14>
84: 7c 83 00 34 cntlzw r3,r4
88: 20 63 00 20 subfic r3,r3,32
8c: 4e 80 00 20 blr
90: 7c 63 00 34 cntlzw r3,r3
94: 20 63 00 40 subfic r3,r3,64
98: 4e 80 00 20 blr
On PPC32, after the patch:
00000054 <testfls>:
54: 7c 63 00 34 cntlzw r3,r3
58: 20 63 00 20 subfic r3,r3,32
5c: 4e 80 00 20 blr
00000060 <test__fls>:
60: 7c 63 00 34 cntlzw r3,r3
64: 20 63 00 1f subfic r3,r3,31
68: 4e 80 00 20 blr
0000006c <testfls64>:
6c: 2c 03 00 00 cmpwi r3,0
70: 41 82 00 10 beq 80 <testfls64+0x14>
74: 7c 63 00 34 cntlzw r3,r3
78: 20 63 00 40 subfic r3,r3,64
7c: 4e 80 00 20 blr
80: 7c 83 00 34 cntlzw r3,r4
84: 20 63 00 40 subfic r3,r3,32
88: 4e 80 00 20 blr
On PPC64, before the patch:
00000000000000a0 <.testfls>:
a0: 7c 63 00 34 cntlzw r3,r3
a4: 20 63 00 20 subfic r3,r3,32
a8: 7c 63 07 b4 extsw r3,r3
ac: 4e 80 00 20 blr
00000000000000b0 <.test__fls>:
b0: 7c 63 00 74 cntlzd r3,r3
b4: 20 63 00 3f subfic r3,r3,63
b8: 7c 63 07 b4 extsw r3,r3
bc: 4e 80 00 20 blr
00000000000000c0 <.testfls64>:
c0: 7c 63 00 74 cntlzd r3,r3
c4: 20 63 00 40 subfic r3,r3,64
c8: 7c 63 07 b4 extsw r3,r3
cc: 4e 80 00 20 blr
On PPC64, after the patch:
0000000000000090 <.testfls>:
90: 7c 63 00 34 cntlzw r3,r3
94: 20 63 00 20 subfic r3,r3,32
98: 7c 63 07 b4 extsw r3,r3
9c: 4e 80 00 20 blr
00000000000000a0 <.test__fls>:
a0: 7c 63 00 74 cntlzd r3,r3
a4: 20 63 00 3f subfic r3,r3,63
a8: 4e 80 00 20 blr
ac: 60 00 00 00 nop
00000000000000b0 <.testfls64>:
b0: 7c 63 00 74 cntlzd r3,r3
b4: 20 63 00 40 subfic r3,r3,64
b8: 7c 63 07 b4 extsw r3,r3
bc: 4e 80 00 20 blr
Those builtins have been in GCC since at least 3.4.6 (see
https://gcc.gnu.org/onlinedocs/gcc-3.4.6/gcc/Other-Builtins.html )
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 21 Apr 2017 11:18:46 +0000 (13:18 +0200)]
powerpc: Discard ffs()/__ffs() function and use builtin functions instead
With ffs() as defined in arch/powerpc/include/asm/bitops.h, GCC will
not optimise the code when the parameter is constant, as shown by the
small example below.
int ffs_test(void)
{
return 4 << ffs(31);
}
c0012334 <ffs_test>:
c0012334: 39 20 00 01 li r9,1
c0012338: 38 60 00 04 li r3,4
c001233c: 7d 29 00 34 cntlzw r9,r9
c0012340: 21 29 00 20 subfic r9,r9,32
c0012344: 7c 63 48 30 slw r3,r3,r9
c0012348: 4e 80 00 20 blr
With this patch, the same function will compile as follows:
c0012334 <ffs_test>:
c0012334: 38 60 00 08 li r3,8
c0012338: 4e 80 00 20 blr
The same happens with __ffs().
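The builtin-based definitions are essentially one-liners (a sketch of
the intended shape, not the exact patch):
/*
 * Sketch: __builtin_ffs() returns one plus the index of the least
 * significant set bit, or 0 if no bit is set, which matches the
 * kernel's ffs() convention. __builtin_ctzl() counts trailing zeros,
 * matching __ffs() (still undefined for x == 0).
 */
static inline int ffs(int x)
{
	return __builtin_ffs(x);
}

static inline unsigned long __ffs(unsigned long x)
{
	return __builtin_ctzl(x);
}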
For non-constant calls, the generated code does the same thing,
although it is slightly different on 64-bit for ffs():
unsigned long test__ffs(unsigned long x)
{
return __ffs(x);
}
int testffs(int x)
{
return ffs(x);
}
On PPC32, before the patch:
0000003c <test__ffs>:
3c: 7d 23 00 d0 neg r9,r3
40: 7d 23 18 38 and r3,r9,r3
44: 7c 63 00 34 cntlzw r3,r3
48: 20 63 00 1f subfic r3,r3,31
4c: 4e 80 00 20 blr
00000050 <testffs>:
50: 7d 23 00 d0 neg r9,r3
54: 7d 23 18 38 and r3,r9,r3
58: 7c 63 00 34 cntlzw r3,r3
5c: 20 63 00 20 subfic r3,r3,32
60: 4e 80 00 20 blr
On PPC32, after the patch:
0000002c <test__ffs>:
2c: 7d 23 00 d0 neg r9,r3
30: 7d 23 18 38 and r3,r9,r3
34: 7c 63 00 34 cntlzw r3,r3
38: 20 63 00 1f subfic r3,r3,31
3c: 4e 80 00 20 blr
00000040 <testffs>:
40: 7d 23 00 d0 neg r9,r3
44: 7d 23 18 38 and r3,r9,r3
48: 7c 63 00 34 cntlzw r3,r3
4c: 20 63 00 20 subfic r3,r3,32
50: 4e 80 00 20 blr
On PPC64, before the patch:
0000000000000060 <.test__ffs>:
60: 7c 03 00 d0 neg r0,r3
64: 7c 03 18 38 and r3,r0,r3
68: 7c 63 00 74 cntlzd r3,r3
6c: 20 63 00 3f subfic r3,r3,63
70: 7c 63 07 b4 extsw r3,r3
74: 4e 80 00 20 blr
0000000000000080 <.testffs>:
80: 7c 03 00 d0 neg r0,r3
84: 7c 03 18 38 and r3,r0,r3
88: 7c 63 00 74 cntlzd r3,r3
8c: 20 63 00 40 subfic r3,r3,64
90: 7c 63 07 b4 extsw r3,r3
94: 4e 80 00 20 blr
On PPC64, after the patch:
0000000000000050 <.test__ffs>:
50: 7c 03 00 d0 neg r0,r3
54: 7c 03 18 38 and r3,r0,r3
58: 7c 63 00 74 cntlzd r3,r3
5c: 20 63 00 3f subfic r3,r3,63
60: 4e 80 00 20 blr
0000000000000070 <.testffs>:
70: 7c 03 00 d0 neg r0,r3
74: 7c 03 18 38 and r3,r0,r3
78: 7c 63 00 34 cntlzw r3,r3
7c: 20 63 00 20 subfic r3,r3,32
80: 7c 63 07 b4 extsw r3,r3
84: 4e 80 00 20 blr
(ffs() operates on an int so cntlzw is equivalent to cntlzd)
In addition, when reading the generated vmlinux, we can observe
that with the builtin functions, GCC sometimes spreads the
instructions efficiently within the generated functions, whereas the
inline assembly forces them to stay grouped together.
__builtin_ffs() is already used in arch/powerpc/include/asm/page_32.h
Those builtins have been in GCC since at least 3.4.6 (see
https://gcc.gnu.org/onlinedocs/gcc-3.4.6/gcc/Other-Builtins.html )
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Thu, 16 Mar 2017 08:55:45 +0000 (09:55 +0100)]
powerpc: Handle simultaneous interrupts at once
Simultaneous interrupts often happen, for instance with a double
Ethernet attachment. With the current implementation, we pay the
cost of a kernel entry/exit for each interrupt.
This patch introduces a loop in __do_irq() to handle all interrupts
at once before returning.
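The shape of the loop is roughly the following (a hypothetical
sketch, not the exact patch):
/*
 * Hypothetical sketch: keep asking the PIC for a pending interrupt
 * and handle each one, so back-to-back interrupts cost a single
 * kernel entry/exit. A get_irq() return of 0 means nothing pending.
 */
static void irq_loop_sketch(void)
{
	unsigned int irq;

	while ((irq = ppc_md.get_irq()) != 0)
		generic_handle_irq(irq);
}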
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 10 Mar 2017 10:37:01 +0000 (11:37 +0100)]
powerpc/8xx: fix mpc8xx_get_irq() return on no irq
IRQ 0 is a valid HW interrupt, so get_irq() should return 0 when
there is no irq, instead of returning irq_linear_revmap(... ,0).
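The shape of the fix is roughly (a hypothetical sketch; the helper
and sentinel names are made up):
/*
 * Hypothetical sketch: when nothing is pending, return 0 ("no irq")
 * directly instead of irq_linear_revmap(host, 0), which would
 * wrongly translate hwirq 0 -- a valid hardware interrupt.
 */
static unsigned int get_irq_sketch(void)
{
	int hwirq = read_pending_hwirq();	/* hypothetical helper */

	if (hwirq == NO_IRQ_PENDING)		/* hypothetical sentinel */
		return 0;
	return irq_linear_revmap(mpc8xx_pic_host, hwirq);
}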
Fixes: f2a0bd3753dad ("[POWERPC] 8xx: powerpc port of core CPM PIC")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Fri, 5 Aug 2016 11:28:05 +0000 (13:28 +0200)]
powerpc/40x: Clear MSR_DR in one insn instead of two
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 19 Apr 2017 12:56:32 +0000 (14:56 +0200)]
powerpc/mm: The 8xx doesn't call do_page_fault() for breakpoints
The 8xx has a dedicated exception for breakpoints, which directly
calls do_break().
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 19 Apr 2017 12:56:30 +0000 (14:56 +0200)]
powerpc/mm: Evaluate user_mode(regs) only once in do_page_fault()
Analysis of the assembly code shows that when user_mode(regs) is
used directly, at least the 'andi.' is redone every time, and the
'lwz ,132(r31)' most of the time. With the new form, 'is_user' is
mapped to cr4, and every further use of is_user compiles to just
things like 'beq cr4,218 <do_page_fault+0x218>'.
Without the patch:
50: 81 1e 00 84 lwz r8,132(r30)
54: 71 09 40 00 andi. r9,r8,16384
58: 40 82 00 0c bne 64 <do_page_fault+0x64>
84: 81 3e 00 84 lwz r9,132(r30)
8c: 71 2a 40 00 andi. r10,r9,16384
90: 41 a2 01 64 beq 1f4 <do_page_fault+0x1f4>
d4: 81 3e 00 84 lwz r9,132(r30)
dc: 71 28 40 00 andi. r8,r9,16384
e0: 41 82 02 08 beq 2e8 <do_page_fault+0x2e8>
108: 81 3e 00 84 lwz r9,132(r30)
110: 71 28 40 00 andi. r8,r9,16384
118: 41 82 02 28 beq 340 <do_page_fault+0x340>
1e4: 81 3e 00 84 lwz r9,132(r30)
1e8: 71 2a 40 00 andi. r10,r9,16384
1ec: 40 82 01 68 bne 354 <do_page_fault+0x354>
228: 81 3e 00 84 lwz r9,132(r30)
22c: 71 28 40 00 andi. r8,r9,16384
230: 41 82 ff c4 beq 1f4 <do_page_fault+0x1f4>
288: 71 2a 40 00 andi. r10,r9,16384
294: 41 a2 fe 60 beq f4 <do_page_fault+0xf4>
50c: 81 3e 00 84 lwz r9,132(r30)
514: 71 2a 40 00 andi. r10,r9,16384
518: 40 a2 fc e0 bne 1f8 <do_page_fault+0x1f8>
534: 81 3e 00 84 lwz r9,132(r30)
53c: 71 2a 40 00 andi. r10,r9,16384
540: 41 82 fc b8 beq 1f8 <do_page_fault+0x1f8>
This patch creates a local variable called 'is_user' which holds the
result of user_mode(regs).
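In C, the change amounts to the following (a hypothetical sketch; the
checks around is_user are made up):
/*
 * Hypothetical sketch: user_mode(regs) is evaluated once into a
 * local, so GCC keeps the comparison result in cr4 instead of
 * reloading regs->msr and redoing the 'andi.' at every use.
 */
static int page_fault_sketch(struct pt_regs *regs, unsigned long addr)
{
	bool is_user = user_mode(regs);		/* evaluated exactly once */

	if (is_user && addr >= TASK_SIZE)	/* reuses cr4 */
		return SIGSEGV;
	if (!is_user && !search_exception_tables(regs->nip))
		return SIGSEGV;			/* reuses cr4 */
	return 0;
}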
With the patch:
20: 81 03 00 84 lwz r8,132(r3)
48: 55 09 97 fe rlwinm r9,r8,18,31,31
58: 2e 09 00 00 cmpwi cr4,r9,0
5c: 40 92 00 0c bne cr4,68 <do_page_fault+0x68>
88: 41 b2 01 90 beq cr4,218 <do_page_fault+0x218>
d4: 40 92 01 d0 bne cr4,2a4 <do_page_fault+0x2a4>
120: 41 b2 00 f8 beq cr4,218 <do_page_fault+0x218>
138: 41 b2 ff a0 beq cr4,d8 <do_page_fault+0xd8>
1d4: 40 92 00 e0 bne cr4,2b4 <do_page_fault+0x2b4>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 19 Apr 2017 12:56:28 +0000 (14:56 +0200)]
powerpc/mm: Remove a redundant test in do_page_fault()
The result of (trap == 0x400) is already in is_exec.
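A hypothetical illustration of the simplification:
/*
 * Hypothetical illustration: trap 0x400 is the instruction storage
 * interrupt, and is_exec already captures that test, so the later
 * comparison can simply reuse it.
 */
bool is_exec = (trap == 0x400);

if (is_exec)				/* was: if (trap == 0x400) */
	access |= _PAGE_EXEC;		/* hypothetical use */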
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Wed, 19 Apr 2017 12:56:24 +0000 (14:56 +0200)]
powerpc/mm: Only call store_updates_sp() on stores in do_page_fault()
Function store_updates_sp() checks whether the faulting
instruction is a store updating r1. Therefore we can limit its calls
to store exceptions.
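Conceptually the gating looks like this (a hypothetical sketch; the
surrounding locals are assumed):
/*
 * Hypothetical sketch: reading the faulting instruction to check for
 * a store-with-update of r1 is itself costly (it can fault and take
 * a DTLB miss), so only do it when the fault is a store.
 */
bool store_update_sp = false;

if (is_user && is_write)		/* was: if (is_user) */
	store_update_sp = store_updates_sp(regs);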
This patch is an improvement of commit a7a9dcd882a67 ("powerpc:
Avoid taking a data miss on every userspace instruction miss").
With the same microbenchmark app, run with 500 as argument, on an
MPC885 we get:
Before this patch: 152000 DTLB misses
After this patch: 147000 DTLB misses
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Christophe Leroy [Mon, 29 May 2017 15:32:06 +0000 (17:32 +0200)]
powerpc/mm: Remove __this_fixmap_does_not_exist()
This function has not been used since commit 9494a1e8428ea
("powerpc: use generic fixmap.h").
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Balbir Singh [Thu, 25 May 2017 03:36:50 +0000 (13:36 +1000)]
powerpc/mm/ptdump: Dump the first entry of the linear mapping as well
The check in hpte_find() should be < and not <= against PAGE_OFFSET,
otherwise the first entry of the linear mapping is skipped.
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 11 May 2017 17:40:40 +0000 (03:40 +1000)]
powerpc: Link warning for orphan sections
Add --orphan-handling=warn to the final link flags. This ensures we
handle all sections explicitly. It would have caught subtle breakage
such as commit 7de3b27bac47da9de08409df1d69664acbb72197 at build time.
Also bring existing orphan sections into the fold:
- .text.hot and .text.unlikely are compiler generated sections.
- .sdata2, .dynsbss, .plt are used by PPC32.
- We previously did not specify DWARF_DEBUG or STABS_DEBUG.
- DWARF_DEBUG did not include all DWARF sections that can be emitted.
- A number of sections are unused and can be discarded.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 11 May 2017 17:40:39 +0000 (03:40 +1000)]
powerpc/64: Tool to check head sections location sanity
Use a tool to check that the "fixed sections" are located where we
expect them to be. This catches cases the linker script can't (such
as stubs being added to the start of the .text section), and ends up
being neater.
Sample output:
ERROR: start_text address is c000000000008100, should be c000000000008000
ERROR: see comments in arch/powerpc/tools/head_check.sh
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Fold in fix from Nick for 4.6 era toolchains]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Mon, 29 May 2017 07:39:40 +0000 (17:39 +1000)]
powerpc/64: Handle linker stubs in low .text code
Very large kernels may require linker stubs for branches from HEAD
text code. The linker may place these stubs before the HEAD text
sections, which breaks the assumption that HEAD text is located at 0
(or that the .text section is located at 0x7000/0x8000 on Book3S
kernels).
Provide an option to create a small section just before the .text
section with an empty 256 - 4 bytes, and adjust the start of the .text
section to match. The linker will tend to put stubs in that section
and not break our relative-to-absolute offset assumptions.
This causes a small waste of space on common kernels, but allows large
kernels to build and boot. For now, it is an EXPERT config option,
defaulting to =n, but a reference is provided for it in the build-time
check for such breakage. This is good enough for allyesconfig and
custom users/hackers.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 11 May 2017 17:40:38 +0000 (03:40 +1000)]
powerpc/64s: Tool to flag direct branches from unrelocated interrupt vectors
Direct branches from code below __end_interrupts to code above
__end_interrupts are disallowed when the kernel is built with
CONFIG_RELOCATABLE, because they break when the kernel is not
located at 0.
Sample output:
WARNING: Unrelocated relative branches
c000000000000118 bl-> 0xc000000000038fb8 <pnv_restore_hyp_resource>
c00000000000013c b-> 0xc0000000001068a4 <kvm_start_guest>
c000000000000148 b-> 0xc00000000003919c <pnv_wakeup_loss>
c00000000000014c b-> 0xc00000000003923c <pnv_wakeup_noloss>
c0000000000005a4 b-> 0xc000000000106ffc <kvmppc_interrupt_hv>
c000000000001af0 b-> 0xc000000000106ffc <kvmppc_interrupt_hv>
c000000000001b24 b-> 0xc000000000106ffc <kvmppc_interrupt_hv>
c000000000001b58 b-> 0xc000000000106ffc <kvmppc_interrupt_hv>
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 11 May 2017 15:56:52 +0000 (01:56 +1000)]
powerpc/64: Linker on-demand sfpr functions for modules
For the final link, the powerpc64 linker generates fpr save/restore
functions on demand, placing them in the .sfpr section. Starting with
binutils 2.25, these can be provided for non-final links with
--save-restore-funcs. Use that where possible for module links.
This saves about 200 bytes per module (~60kB total) on a powernv
defconfig build.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 11 May 2017 15:56:51 +0000 (01:56 +1000)]
powerpc/64: Do not create new section for save/restore functions
There is no need to create a new section for these. Consolidate with
32-bit and just use .text.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 11 May 2017 15:56:50 +0000 (01:56 +1000)]
powerpc/64: Do not link crtsaveres.o in boot
crtsaveres.S is already empty for 64-bit builds, so just don't
build and link it, matching the vmlinux build.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Use CONFIG_PPC64_BOOT_WRAPPER not CONFIG_PPC32 to fix BE build]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 11 May 2017 15:56:49 +0000 (01:56 +1000)]
powerpc/64: Do not link crtsavres.o in vmlinux
The 64-bit linker creates save/restore functions on demand with final
links, so vmlinux does not require crtsavres.o.
Make crtsavres.o extra-y on 64-bit (it is still required by modules).
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Nicholas Piggin [Thu, 11 May 2017 15:56:48 +0000 (01:56 +1000)]
powerpc/64: Place sfpr section explicitly with the linker script
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Stephen Rothwell [Fri, 26 May 2017 08:32:29 +0000 (18:32 +1000)]
powerpc: Use uapi/asm-generic/sockios.h
The arch version is identical except for comments and white space.
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Stephen Rothwell [Fri, 26 May 2017 06:19:46 +0000 (16:19 +1000)]
powerpc: Use the asm-generic versions of some uapi includes
These are completely obvious, as all they do is include the
asm-generic versions.
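For example, such a header is just a one-line include (a hypothetical
example; the exact list of files is in the patch):
/* Hypothetical example: arch/powerpc/include/uapi/asm/param.h */
#include <asm-generic/param.h>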
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Ivan Mikhaylov [Fri, 19 May 2017 15:47:05 +0000 (18:47 +0300)]
powerpc/[booke|4xx]: Don't clobber TCR[WP] when setting TCR[DIE]
Prevent a kernel panic caused by unintentionally clearing TCR watchdog
bits. At this point in the kernel boot, the watchdog may have already
been enabled by u-boot. The original code's attempt to write to the TCR
register results in an inadvertent clearing of the watchdog
configuration bits, causing the 476 to reset.
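The fix boils down to a read-modify-write of TCR (a sketch, assuming
the standard mfspr()/mtspr() accessors and TCR bit macros):
/*
 * Sketch: OR in TCR[DIE] instead of overwriting the whole register,
 * preserving watchdog bits (e.g. TCR[WP]) that u-boot may already
 * have configured.
 */
mtspr(SPRN_TCR, mfspr(SPRN_TCR) | TCR_DIE);
/* was, roughly: mtspr(SPRN_TCR, TCR_DIE); */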
Signed-off-by: Ivan Mikhaylov <ivan@de.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>