S: Canada K2P 0X8
N: Mikael Pettersson
-E: mikpe@it.uu.se
-W: http://user.it.uu.se/~mikpe/linux/
+E: mikpelinux@gmail.com
D: Miscellaneous fixes
N: Reed H. Petty
that the USB device has been connected to the machine. This
file is read-only.
Users:
- PowerTOP <power@bughost.org>
- http://www.lesswatts.org/projects/powertop/
+ PowerTOP <powertop@lists.01.org>
+ https://01.org/powertop/
What: /sys/bus/usb/device/.../power/active_duration
Date: January 2008
will give an integer percentage. Note that this does not
account for counter wrap.
Users:
- PowerTOP <power@bughost.org>
- http://www.lesswatts.org/projects/powertop/
+ PowerTOP <powertop@lists.01.org>
+ https://01.org/powertop/
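		For reference, a minimal user-space sketch (plain C, not part of the
		patch; the device path is hypothetical and the connected_duration
		attribute is the one described above) that computes this percentage:

		#include <stdio.h>

		int main(void)
		{
			const char *base = "/sys/bus/usb/devices/1-1/power/";
			long long active = 0, connected = 0;
			char path[128];
			FILE *f;

			snprintf(path, sizeof(path), "%sactive_duration", base);
			f = fopen(path, "r");
			if (f) { fscanf(f, "%lld", &active); fclose(f); }

			snprintf(path, sizeof(path), "%sconnected_duration", base);
			f = fopen(path, "r");
			if (f) { fscanf(f, "%lld", &connected); fclose(f); }

			if (connected > 0)	/* integer percentage, ignoring counter wrap */
				printf("%lld%% active\n", active * 100 / connected);
			return 0;
		}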
What: /sys/bus/usb/devices/<busnum>-<port[.port]>...:<config num>-<interface num>/supports_autosuspend
Date: January 2008
What: /sys/devices/.../power/
Date: January 2009
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../power directory contains attributes
allowing the user space to check and modify some power
What: /sys/devices/.../power/wakeup
Date: January 2009
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../power/wakeup attribute allows the user
space to check if the device is enabled to wake up the system
What: /sys/devices/.../power/control
Date: January 2009
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../power/control attribute allows the user
space to control the run-time power management of the device.
What: /sys/devices/.../power/async
Date: January 2009
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../async attribute allows the user space to
enable or disable the device's suspend and resume callbacks to
What: /sys/devices/.../power/wakeup_count
Date: September 2010
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../wakeup_count attribute contains the number
of signaled wakeup events associated with the device. This
What: /sys/devices/.../power/wakeup_active_count
Date: September 2010
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../wakeup_active_count attribute contains the
number of times the processing of wakeup events associated with
What: /sys/devices/.../power/wakeup_abort_count
Date: February 2012
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../wakeup_abort_count attribute contains the
number of times the processing of a wakeup event associated with
What: /sys/devices/.../power/wakeup_expire_count
Date: February 2012
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../wakeup_expire_count attribute contains the
number of times a wakeup event associated with the device has
What: /sys/devices/.../power/wakeup_active
Date: September 2010
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../wakeup_active attribute contains either 1,
or 0, depending on whether or not a wakeup event associated with
What: /sys/devices/.../power/wakeup_total_time_ms
Date: September 2010
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../wakeup_total_time_ms attribute contains
the total time of processing wakeup events associated with the
What: /sys/devices/.../power/wakeup_max_time_ms
Date: September 2010
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../wakeup_max_time_ms attribute contains
the maximum time of processing a single wakeup event associated
What: /sys/devices/.../power/wakeup_last_time_ms
Date: September 2010
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../wakeup_last_time_ms attribute contains
the value of the monotonic clock corresponding to the time of
What: /sys/devices/.../power/wakeup_prevent_sleep_time_ms
Date: February 2012
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../wakeup_prevent_sleep_time_ms attribute
contains the total time the device has been preventing
What: /sys/devices/.../power/pm_qos_latency_us
Date: March 2012
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../power/pm_qos_resume_latency_us attribute
contains the PM QoS resume latency limit for the given device,
What: /sys/devices/.../power/pm_qos_no_power_off
Date: September 2012
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../power/pm_qos_no_power_off attribute
is used for manipulating the PM QoS "no power off" flag. If
What: /sys/devices/.../power/pm_qos_remote_wakeup
Date: September 2012
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/devices/.../power/pm_qos_remote_wakeup attribute
is used for manipulating the PM QoS "remote wakeup required"
What: /sys/power/
Date: August 2006
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power directory will contain files that will
provide a unified interface to the power management
What: /sys/power/state
Date: August 2006
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/state file controls the system power state.
Reading from this file returns what states are supported,
What: /sys/power/disk
Date: September 2006
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/disk file controls the operating mode of the
suspend-to-disk mechanism. Reading from this file returns
What: /sys/power/image_size
Date: August 2006
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/image_size file controls the size of the image
created by the suspend-to-disk mechanism. It can be written a
What: /sys/power/pm_trace
Date: August 2006
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/pm_trace file controls the code which saves the
last PM event point in the RTC across reboots, so that you can
What: /sys/power/pm_async
Date: January 2009
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/pm_async file controls the switch allowing the
user space to enable or disable asynchronous suspend and resume
What: /sys/power/wakeup_count
Date: July 2010
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/wakeup_count file allows user space to put the
system into a sleep state while taking into account the
What: /sys/power/reserved_size
Date: May 2011
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/reserved_size file allows user space to control
the amount of memory reserved for allocations made by device
What: /sys/power/autosleep
Date: April 2012
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/autosleep file can be written one of the strings
returned by reads from /sys/power/state. If that happens, a
What: /sys/power/wake_lock
Date: February 2012
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/wake_lock file allows user space to create
wakeup source objects and activate them on demand (if one of
What: /sys/power/wake_unlock
Date: February 2012
-Contact: Rafael J. Wysocki <rjw@sisk.pl>
+Contact: Rafael J. Wysocki <rjw@rjwysocki.net>
Description:
The /sys/power/wake_unlock file allows user space to deactivate
wakeup sources created with the help of /sys/power/wake_lock.
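		For illustration (the wakeup source name "mylock" is invented),
		user space could activate and later deactivate a wakeup source
		like this:

		# echo mylock 5000000000 > /sys/power/wake_lock    (activate "mylock" with a 5 s timeout, given in nanoseconds)
		# echo mylock > /sys/power/wake_unlock              (deactivate "mylock")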
When to use this method is described in detail on the
Linux/ACPI home page:
-http://www.lesswatts.org/projects/acpi/overridingDSDT.php
+https://01.org/linux-acpi/documentation/overriding-dsdt
parameters containing user virtual addresses *must* have
their top byte cleared before trapping to the kernel.
- (2) Tags are not guaranteed to be preserved when delivering
- signals. This means that signal handlers in applications
- making use of tags cannot rely on the tag information for
- user virtual addresses being maintained for fields inside
- siginfo_t. One exception to this rule is for signals raised
- in response to debug exceptions, where the tag information
+ (2) Non-zero tags are not preserved when delivering signals.
+ This means that signal handlers in applications making use
+ of tags cannot rely on the tag information for user virtual
+ addresses being maintained for fields inside siginfo_t.
+ One exception to this rule is for signals raised in response
+ to watchpoint debug exceptions, where the tag information
will be preserved.
(3) Special care should be taken when using tagged pointers,
since it is likely that C compilers will not hazard two
- addresses differing only in the upper bits.
+ virtual addresses differing only in the upper byte.
The architecture prevents the use of a tagged PC, so the upper byte will
be set to a sign-extension of bit 55 on exception return.
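As a hedged illustration (not part of the patch), the following user-space C
fragment shows two pointers that address the same memory but differ only in
the top byte, which a compiler is unlikely to treat as aliases:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int x = 42;
	int *plain  = &x;
	int *tagged = (int *)((uintptr_t)plain | (0x5bULL << 56)); /* arbitrary tag */

	/* With top byte ignore (TBI) the hardware ignores the upper byte on
	 * loads and stores, so both dereferences hit the same object; the tag
	 * must still be cleared before the address is passed to the kernel. */
	printf("%d %d\n", *plain, *tagged);
	return 0;
}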
- Generic Block Device Capability (/sys/block/<device>/capability)
cfq-iosched.txt
- CFQ IO scheduler tunables
+cmdline-partition.txt
+ - how to specify block device partitions on kernel command line
data-integrity.txt
- Block data integrity
deadline-iosched.txt
-Embedded device command line partition
+Embedded device command line partition parsing
=====================================================================
-Read block device partition table from command line.
-The partition used for fixed block device (eMMC) embedded device.
-It is no MBR, save storage space. Bootloader can be easily accessed
+Support for reading the block device partition table from the command line.
+It is typically used for fixed block (eMMC) embedded devices.
+It has no MBR, so it saves storage space. The bootloader can be easily accessed
by absolute address of data on the block device.
Users can easily change the partition.
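Purely as an illustration (the device name, sizes and partition names below are
invented), a command line using this feature has roughly the following shape:

  blkdevparts=mmcblk0:1G(kernel),1G(rootfs),-(data)

where each entry gives a partition size ('-' for the remaining space) followed
by a partition name.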
nlh->nlmsg_seq = seq++;
nlh->nlmsg_pid = getpid();
nlh->nlmsg_type = NLMSG_DONE;
- nlh->nlmsg_len = NLMSG_LENGTH(size - sizeof(*nlh));
+ nlh->nlmsg_len = size;
nlh->nlmsg_flags = 0;
m = NLMSG_DATA(nlh);
+++ /dev/null
-*** Memory binding ***
-
-The /memory node provides basic information about the address and size
-of the physical memory. This node is usually filled or updated by the
-bootloader, depending on the actual memory configuration of the given
-hardware.
-
-The memory layout is described by the following node:
-
-/ {
- #address-cells = <(n)>;
- #size-cells = <(m)>;
- memory {
- device_type = "memory";
- reg = <(baseaddr1) (size1)
- (baseaddr2) (size2)
- ...
- (baseaddrN) (sizeN)>;
- };
- ...
-};
-
-A memory node follows the typical device tree rules for "reg" property:
-n: number of cells used to store base address value
-m: number of cells used to store size value
-baseaddrX: defines a base address of the defined memory bank
-sizeX: the size of the defined memory bank
-
-
-More than one memory bank can be defined.
-
-
-*** Reserved memory regions ***
-
-In /memory/reserved-memory node one can create child nodes describing
-particular reserved (excluded from normal use) memory regions. Such
-memory regions are usually designed for the special usage by various
-device drivers. A good example are contiguous memory allocations or
-memory sharing with other operating system on the same hardware board.
-Those special memory regions might depend on the board configuration and
-devices used on the target system.
-
-Parameters for each memory region can be encoded into the device tree
-with the following convention:
-
-[(label):] (name) {
- compatible = "linux,contiguous-memory-region", "reserved-memory-region";
- reg = <(address) (size)>;
- (linux,default-contiguous-region);
-};
-
-compatible: one or more of:
- - "linux,contiguous-memory-region" - enables binding of this
- region to Contiguous Memory Allocator (special region for
- contiguous memory allocations, shared with movable system
- memory, Linux kernel-specific).
- - "reserved-memory-region" - compatibility is defined, given
- region is assigned for exclusive usage for by the respective
- devices.
-
-reg: standard property defining the base address and size of
- the memory region
-
-linux,default-contiguous-region: property indicating that the region
- is the default region for all contiguous memory
- allocations, Linux specific (optional)
-
-It is optional to specify the base address, so if one wants to use
-autoconfiguration of the base address, '0' can be specified as a base
-address in the 'reg' property.
-
-The /memory/reserved-memory node must contain the same #address-cells
-and #size-cells value as the root node.
-
-
-*** Device node's properties ***
-
-Once regions in the /memory/reserved-memory node have been defined, they
-may be referenced by other device nodes. Bindings that wish to reference
-memory regions should explicitly document their use of the following
-property:
-
-memory-region = <&phandle_to_defined_region>;
-
-This property indicates that the device driver should use the memory
-region pointed by the given phandle.
-
-
-*** Example ***
-
-This example defines a memory consisting of 4 memory banks. 3 contiguous
-regions are defined for Linux kernel, one default of all device drivers
-(named contig_mem, placed at 0x72000000, 64MiB), one dedicated to the
-framebuffer device (labelled display_mem, placed at 0x78000000, 8MiB)
-and one for multimedia processing (labelled multimedia_mem, placed at
-0x77000000, 64MiB). 'display_mem' region is then assigned to fb@12300000
-device for DMA memory allocations (Linux kernel drivers will use CMA is
-available or dma-exclusive usage otherwise). 'multimedia_mem' is
-assigned to scaler@12500000 and codec@12600000 devices for contiguous
-memory allocations when CMA driver is enabled.
-
-The reason for creating a separate region for framebuffer device is to
-match the framebuffer base address to the one configured by bootloader,
-so once Linux kernel drivers starts no glitches on the displayed boot
-logo appears. Scaller and codec drivers should share the memory
-allocations.
-
-/ {
- #address-cells = <1>;
- #size-cells = <1>;
-
- /* ... */
-
- memory {
- reg = <0x40000000 0x10000000
- 0x50000000 0x10000000
- 0x60000000 0x10000000
- 0x70000000 0x10000000>;
-
- reserved-memory {
- #address-cells = <1>;
- #size-cells = <1>;
-
- /*
- * global autoconfigured region for contiguous allocations
- * (used only with Contiguous Memory Allocator)
- */
- contig_region@0 {
- compatible = "linux,contiguous-memory-region";
- reg = <0x0 0x4000000>;
- linux,default-contiguous-region;
- };
-
- /*
- * special region for framebuffer
- */
- display_region: region@78000000 {
- compatible = "linux,contiguous-memory-region", "reserved-memory-region";
- reg = <0x78000000 0x800000>;
- };
-
- /*
- * special region for multimedia processing devices
- */
- multimedia_region: region@77000000 {
- compatible = "linux,contiguous-memory-region";
- reg = <0x77000000 0x4000000>;
- };
- };
- };
-
- /* ... */
-
- fb0: fb@12300000 {
- status = "okay";
- memory-region = <&display_region>;
- };
-
- scaler: scaler@12500000 {
- status = "okay";
- memory-region = <&multimedia_region>;
- };
-
- codec: codec@12600000 {
- status = "okay";
- memory-region = <&multimedia_region>;
- };
-};
-* Samsung Exynos specific extensions to the Synopsis Designware Mobile
+* Samsung Exynos specific extensions to the Synopsys Designware Mobile
Storage Host Controller
-The Synopsis designware mobile storage host controller is used to interface
+The Synopsys designware mobile storage host controller is used to interface
a SoC with storage medium such as eMMC or SD/MMC cards. This file documents
-differences between the core Synopsis dw mshc controller properties described
-by synopsis-dw-mshc.txt and the properties used by the Samsung Exynos specific
-extensions to the Synopsis Designware Mobile Storage Host Controller.
+differences between the core Synopsys dw mshc controller properties described
+by synopsys-dw-mshc.txt and the properties used by the Samsung Exynos specific
+extensions to the Synopsys Designware Mobile Storage Host Controller.
Required Properties:
-* Rockchip specific extensions to the Synopsis Designware Mobile
+* Rockchip specific extensions to the Synopsys Designware Mobile
Storage Host Controller
-The Synopsis designware mobile storage host controller is used to interface
+The Synopsys designware mobile storage host controller is used to interface
a SoC with storage medium such as eMMC or SD/MMC cards. This file documents
-differences between the core Synopsis dw mshc controller properties described
-by synopsis-dw-mshc.txt and the properties used by the Rockchip specific
-extensions to the Synopsis Designware Mobile Storage Host Controller.
+differences between the core Synopsys dw mshc controller properties described
+by synopsys-dw-mshc.txt and the properties used by the Rockchip specific
+extensions to the Synopsys Designware Mobile Storage Host Controller.
Required Properties:
-* Synopsis Designware Mobile Storage Host Controller
+* Synopsys Designware Mobile Storage Host Controller
-The Synopsis designware mobile storage host controller is used to interface
+The Synopsys designware mobile storage host controller is used to interface
a SoC with storage medium such as eMMC or SD/MMC cards. This file documents
differences between the core mmc properties described by mmc.txt and the
-properties used by the Synopsis Designware Mobile Storage Host Controller.
+properties used by the Synopsys Designware Mobile Storage Host Controller.
Required Properties:
* compatible: should be
- - snps,dw-mshc: for controllers compliant with synopsis dw-mshc.
+ - snps,dw-mshc: for controllers compliant with synopsys dw-mshc.
* #address-cells: should be 1.
* #size-cells: should be 0.
described in mmc.txt, can be used. Additionally the following tmio_mmc-specific
optional bindings can be used.
+Required properties:
+- compatible: "renesas,sdhi-shmobile" - a generic sh-mobile SDHI unit
+ "renesas,sdhi-sh7372" - SDHI IP on SH7372 SoC
+ "renesas,sdhi-sh73a0" - SDHI IP on SH73A0 SoC
+ "renesas,sdhi-r8a73a4" - SDHI IP on R8A73A4 SoC
+ "renesas,sdhi-r8a7740" - SDHI IP on R8A7740 SoC
+ "renesas,sdhi-r8a7778" - SDHI IP on R8A7778 SoC
+ "renesas,sdhi-r8a7779" - SDHI IP on R8A7779 SoC
+ "renesas,sdhi-r8a7790" - SDHI IP on R8A7790 SoC
+
Optional properties:
- toshiba,mmc-wrprotect-disable: write-protect detection is unavailable
-
-When used with Renesas SDHI hardware, the following compatibility strings
-configure various model-specific properties:
-
-"renesas,sh7372-sdhi": (default) compatible with SH7372
-"renesas,r8a7740-sdhi": compatible with R8A7740: certain MMC/SD commands have to
- wait for the interface to become idle.
Clock Properties:
+ - fsl,cksel Timer reference clock source.
- fsl,tclk-period Timer reference clock period in nanoseconds.
- fsl,tmr-prsc Prescaler, divides the output clock.
- fsl,tmr-add Frequency compensation value.
clock. You must choose these carefully for the clock to work right.
Here is how to figure good values:
- TimerOsc = system clock MHz
+ TimerOsc = selected reference clock MHz
tclk_period = desired clock period nanoseconds
NominalFreq = 1000 / tclk_period MHz
  FreqDivRatio = TimerOsc / NominalFreq (must be greater than 1.0)
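  As a purely illustrative calculation (the numbers are invented), assume a
  200 MHz reference clock and a desired clock period of 8 ns:

    NominalFreq  = 1000 / 8   = 125 MHz
    FreqDivRatio = 200 / 125  = 1.6   (greater than 1.0, as required)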
Pulse Per Second (PPS) signal, since this will be offered to the PPS
subsystem to synchronize the Linux clock.
+  The reference clock source is determined by the value held in the CKSEL
+  bits of the TMR_CTRL register. The "fsl,cksel" property holds the value
+  that is written directly to those bits; according to the reference
+  manual, the following clock sources can be used:
+
+ <0> - external high precision timer reference clock (TSEC_TMR_CLK
+ input is used for this purpose);
+ <1> - eTSEC system clock;
+ <2> - eTSEC1 transmit clock;
+ <3> - RTC clock input.
+
+  When this property is not present, the eTSEC system clock serves as the
+  IEEE 1588 timer reference clock.
+
Example:
ptp_clock@24E00 {
reg = <0x24E00 0xB0>;
interrupts = <12 0x8 13 0x8>;
interrupt-parent = < &ipic >;
+ fsl,cksel = <1>;
fsl,tclk-period = <10>;
fsl,tmr-prsc = <100>;
fsl,tmr-add = <0x999999A4>;
-* Synopsis Designware PCIe interface
+* Synopsys Designware PCIe interface
Required properties:
- compatible: should contain "snps,dw-pcie" to identify the
--- /dev/null
+CS42L73 audio CODEC
+
+Required properties:
+
+ - compatible : "cirrus,cs42l73"
+
+ - reg : the I2C address of the device for I2C
+
+Optional properties:
+
+ - reset_gpio : a GPIO spec for the reset pin.
+ - chgfreq : Charge Pump Frequency values 0x00-0x0F
+
+
+Example:
+
+codec: cs42l73@4a {
+ compatible = "cirrus,cs42l73";
+ reg = <0x4a>;
+ reset_gpio = <&gpio 10 0>;
+ chgfreq = <0x05>;
+};
\ No newline at end of file
--- /dev/null
+* Texas Instruments SoC audio setups with TLV320AIC3X Codec
+
+Required properties:
+- compatible : "ti,da830-evm-audio" : for DM365/DA8xx/OMAPL1x/AM33xx
+- ti,model : The user-visible name of this sound complex.
+- ti,audio-codec : The phandle of the TLV320AIC3x audio codec
+- ti,mcasp-controller : The phandle of the McASP controller
+- ti,codec-clock-rate : The Codec Clock rate (in Hz) applied to the Codec
+- ti,audio-routing : A list of the connections between audio components.
+ Each entry is a pair of strings, the first being the connection's sink,
+ the second being the connection's source. Valid names for sources and
+ sinks are the codec's pins, and the jacks on the board:
+
+ Board connectors:
+
+ * Headphone Jack
+ * Line Out
+ * Mic Jack
+ * Line In
+
+
+Example:
+
+sound {
+ compatible = "ti,da830-evm-audio";
+ ti,model = "DA830 EVM";
+ ti,audio-codec = <&tlv320aic3x>;
+ ti,mcasp-controller = <&mcasp1>;
+ ti,codec-clock-rate = <12000000>;
+ ti,audio-routing =
+ "Headphone Jack", "HPLOUT",
+ "Headphone Jack", "HPROUT",
+ "Line Out", "LLOUT",
+ "Line Out", "RLOUT",
+ "MIC3L", "Mic Bias 2V",
+ "MIC3R", "Mic Bias 2V",
+ "Mic Bias 2V", "Mic Jack",
+ "LINE1L", "Line In",
+ "LINE2L", "Line In",
+ "LINE1R", "Line In",
+ "LINE2R", "Line In";
+};
- compatible :
"ti,dm646x-mcasp-audio" : for DM646x platforms
"ti,da830-mcasp-audio" : for both DA830 & DA850 platforms
- "ti,omap2-mcasp-audio" : for OMAP2 platforms (TI81xx, AM33xx)
-
-- reg : Should contain McASP registers offset and length
-- interrupts : Interrupt number for McASP
-- op-mode : I2S/DIT ops mode.
-- tdm-slots : Slots for TDM operation.
-- num-serializer : Serializers used by McASP.
-- serial-dir : A list of serializer pin mode. The list number should be equal
- to "num-serializer" parameter. Each entry is a number indication
- serializer pin direction. (0 - INACTIVE, 1 - TX, 2 - RX)
+ "ti,am33xx-mcasp-audio" : for AM33xx platforms (AM33xx, TI81xx)
+- reg : Should contain reg specifiers for the entries in the reg-names property.
+- reg-names : Should contain:
+ * "mpu" for the main registers (required). For compatibility with
+ existing software, it is recommended this is the first entry.
+ * "dat" for separate data port register access (optional).
+- op-mode : I2S/DIT ops mode. 0 for I2S mode. 1 for DIT mode used for S/PDIF,
+ IEC60958-1, and AES-3 formats.
+- tdm-slots : Slots for TDM operation. Indicates number of channels transmitted
+ or received over one serializer.
+- serial-dir : A list of serializer configurations. Each entry is a number
+  indicating the serializer pin direction.
+ (0 - INACTIVE, 1 - TX, 2 - RX)
+- dmas: two element list of DMA controller phandles and DMA request line
+ ordered pairs.
+- dma-names: identifier string for each DMA request line in the dmas property.
+ These strings correspond 1:1 with the ordered pairs in dmas. The dma
+ identifiers must be "rx" and "tx".
Optional properties:
- rx-num-evt : FIFO levels.
- sram-size-playback : size of sram to be allocated during playback
- sram-size-capture : size of sram to be allocated during capture
+- interrupts : Interrupt numbers for McASP, currently not used by the driver
+- interrupt-names : Known interrupt names are "tx" and "rx"
+- pinctrl-0: Should specify pin control group used for this controller.
+- pinctrl-names: Should contain only one value - "default", for more details
+ please refer to pinctrl-bindings.txt
+
Example:
mcasp0: mcasp0@1d00000 {
compatible = "ti,da830-mcasp-audio";
- #address-cells = <1>;
- #size-cells = <0>;
reg = <0x100000 0x3000>;
- interrupts = <82 83>;
+	reg-names = "mpu";
+ interrupts = <82>, <83>;
+	interrupt-names = "tx", "rx";
op-mode = <0>; /* MCASP_IIS_MODE */
tdm-slots = <2>;
- num-serializer = <16>;
serial-dir = <
0 0 0 0 /* 0: INACTIVE, 1: TX, 2: RX */
0 0 0 0
ssize_t (*listxattr) (struct dentry *, char *, size_t);
int (*removexattr) (struct dentry *, const char *);
void (*update_time)(struct inode *, struct timespec *, int);
- int (*atomic_open)(struct inode *, struct dentry *,
+ int (*atomic_open)(struct inode *, struct dentry *, struct file *,
+ unsigned open_flag, umode_t create_mode, int *opened);
int (*tmpfile) (struct inode *, struct dentry *, umode_t);
-} ____cacheline_aligned;
- struct file *, unsigned open_flag,
- umode_t create_mode, int *opened);
};
Again, all methods are called without any locks being held, unless
method the filesystem can look up, possibly create and open the file in
one atomic operation. If it cannot perform this (e.g. the file type
turned out to be wrong) it may signal this by returning 1 instead of
- usual 0 or -ve . This method is only called if the last
- component is negative or needs lookup. Cached positive dentries are
- still handled by f_op->open().
+ usual 0 or -ve . This method is only called if the last component is
+ negative or needs lookup. Cached positive dentries are still handled by
+ f_op->open(). If the file was created, the FILE_CREATED flag should be
+ set in "opened". In case of O_EXCL the method must only succeed if the
+ file didn't exist and hence FILE_CREATED shall always be set on success.
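	A rough sketch of the shape such a method can take (the "examplefs"
	helpers are hypothetical; only the VFS calls and flags named above
	are real):

	static int examplefs_atomic_open(struct inode *dir, struct dentry *dentry,
					 struct file *file, unsigned open_flag,
					 umode_t create_mode, int *opened)
	{
		struct inode *inode;

		inode = examplefs_lookup_inode(dir, dentry);	/* hypothetical helper */
		if (inode && (open_flag & O_CREAT) && (open_flag & O_EXCL))
			return -EEXIST;			/* O_EXCL: must not exist yet */
		if (!inode && (open_flag & O_CREAT)) {
			inode = examplefs_new_inode(dir, dentry, create_mode); /* hypothetical */
			if (IS_ERR(inode))
				return PTR_ERR(inode);
			*opened |= FILE_CREATED;	/* report the creation to the VFS */
		}
		if (!inode)
			return finish_no_open(file, NULL);	/* the "return 1" case above */

		d_instantiate(dentry, inode);
		return finish_open(file, dentry, NULL, opened);
	}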
tmpfile: called in the end of O_TMPFILE open(). Optional, equivalent to
atomically creating, opening and unlinking a file in given directory.
Format: <io>,<irq>,<mode>
See header of drivers/net/hamradio/baycom_ser_hdx.c.
+ blkdevparts= Manual partition parsing of block device(s) for
+ embedded devices based on command line input.
+ See Documentation/block/cmdline-partition.txt
+
boot_delay= Milliseconds to delay each printk during boot.
Values larger than 10 seconds (10000) are changed to
no delay (0).
pages. In the event, a node is too small to have both
kernelcore and Movable pages, kernelcore pages will
take priority and other nodes will have a larger number
- of kernelcore pages. The Movable zone is used for the
+ of Movable pages. The Movable zone is used for the
allocation of pages that may be reclaimed or moved
by the page migration subsystem. This means that
HugeTLB pages may not be allocated from this zone.
the unplug protocol
never -- do not unplug even if version check succeeds
+ xen_nopvspin [X86,XEN]
+ Disables the ticketlock slowpath using Xen PV
+ optimizations.
+
xirc2ps_cs= [NET,PCMCIA]
Format:
<irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
To remove an ARP target:
# echo -192.168.0.100 > /sys/class/net/bond0/bonding/arp_ip_target
+To configure the interval between learning packet transmits:
+# echo 12 > /sys/class/net/bond0/bonding/lp_interval
+	NOTE: the lp_interval is the number of seconds between instances where
+the bonding driver sends learning packets to each slave's peer switch. The
+default interval is 1 second.
+
Example Configuration
---------------------
We begin with the same example that is shown in section 3.3,
runqueue.
CFS maintains a time-ordered rbtree, where all runnable tasks are sorted by the
-p->se.vruntime key (there is a subtraction using rq->cfs.min_vruntime to
-account for possible wraparounds). CFS picks the "leftmost" task from this
-tree and sticks to it.
+p->se.vruntime key. CFS picks the "leftmost" task from this tree and sticks to it.
As the system progresses forwards, the executed tasks are put into the tree
more and more to the right --- slowly but surely giving a chance for every task
to become the "leftmost task" and thus get on the CPU within a deterministic
alc269-dmic Enable ALC269(VA) digital mic workaround
alc271-dmic Enable ALC271X digital mic workaround
inv-dmic Inverted internal mic workaround
+ headset-mic Indicates a combined headset (headphone+mic) jack
lenovo-dock Enables docking station I/O for some Lenovos
dell-headset-multi Headset jack, which can also be used as mic-in
dell-headset-dock Headset jack (without mic-in), and also dock I/O
imac27 IMac 27 Inch
auto BIOS setup (default)
+Cirrus Logic CS4208
+===================
+ mba6 MacBook Air 6,1 and 6,2
+ gpio0 Enable GPIO 0 amp
+ auto BIOS setup (default)
+
VIA VT17xx/VT18xx/VT20xx
========================
auto BIOS setup (default)
--- /dev/null
+Dynamic PCM
+===========
+
+1. Description
+==============
+
+Dynamic PCM allows an ALSA PCM device to digitally route its PCM audio to
+various digital endpoints during the PCM stream runtime. e.g. PCM0 can route
+digital audio to I2S DAI0, I2S DAI1 or PDM DAI2. This is useful for on-SoC DSP
+drivers that expose several ALSA PCMs and can route to multiple DAIs.
+
+The DPCM runtime routing is determined by the ALSA mixer settings in the same
+way as the analog signal is routed in an ASoC codec driver. DPCM uses a DAPM
+graph representing the DSP internal audio paths and uses the mixer settings to
+determine the path used by each ALSA PCM.
+
+DPCM re-uses all the existing component codec, platform and DAI drivers without
+any modifications.
+
+
+Phone Audio System with SoC based DSP
+-------------------------------------
+
+Consider the following phone audio subsystem. This will be used in this
+document for all examples :-
+
+| Front End PCMs | SoC DSP | Back End DAIs | Audio devices |
+
+ *************
+PCM0 <------------> * * <----DAI0-----> Codec Headset
+ * *
+PCM1 <------------> * * <----DAI1-----> Codec Speakers
+ * DSP *
+PCM2 <------------> * * <----DAI2-----> MODEM
+ * *
+PCM3 <------------> * * <----DAI3-----> BT
+ * *
+ * * <----DAI4-----> DMIC
+ * *
+ * * <----DAI5-----> FM
+ *************
+
+This diagram shows a simple smart phone audio subsystem. It supports Bluetooth,
+FM digital radio, Speakers, Headset Jack, digital microphones and cellular
+modem. This sound card exposes 4 DSP front end (FE) ALSA PCM devices and
+supports 6 back end (BE) DAIs. Each FE PCM can digitally route audio data to any
+of the BE DAIs. The FE PCM devices can also route audio to more than 1 BE DAI.
+
+
+
+Example - DPCM Switching playback from DAI0 to DAI1
+---------------------------------------------------
+
+Audio is being played to the Headset. After a while the user removes the headset
+and audio continues playing on the speakers.
+
+Playback on PCM0 to Headset would look like :-
+
+ *************
+PCM0 <============> * * <====DAI0=====> Codec Headset
+ * *
+PCM1 <------------> * * <----DAI1-----> Codec Speakers
+ * DSP *
+PCM2 <------------> * * <----DAI2-----> MODEM
+ * *
+PCM3 <------------> * * <----DAI3-----> BT
+ * *
+ * * <----DAI4-----> DMIC
+ * *
+ * * <----DAI5-----> FM
+ *************
+
+The headset is removed from the jack by user so the speakers must now be used :-
+
+ *************
+PCM0 <============> * * <----DAI0-----> Codec Headset
+ * *
+PCM1 <------------> * * <====DAI1=====> Codec Speakers
+ * DSP *
+PCM2 <------------> * * <----DAI2-----> MODEM
+ * *
+PCM3 <------------> * * <----DAI3-----> BT
+ * *
+ * * <----DAI4-----> DMIC
+ * *
+ * * <----DAI5-----> FM
+ *************
+
+The audio driver processes this as follows :-
+
+ 1) Machine driver receives Jack removal event.
+
+ 2) Machine driver OR audio HAL disables the Headset path.
+
+ 3) DPCM runs the PCM trigger(stop), hw_free(), shutdown() operations on DAI0
+ for headset since the path is now disabled.
+
+ 4) Machine driver or audio HAL enables the speaker path.
+
+ 5) DPCM runs the PCM ops for startup(), hw_params(), prepare() and
+ trigger(start) for DAI1 Speakers since the path is enabled.
+
+In this example, the machine driver or userspace audio HAL can alter the routing
+and then DPCM will take care of managing the DAI PCM operations to either bring
+the link up or down. Audio playback does not stop during this transition.
+
+
+
+DPCM machine driver
+===================
+
+The DPCM enabled ASoC machine driver is similar to normal machine drivers
+except that we also have to :-
+
+ 1) Define the FE and BE DAI links.
+
+ 2) Define any FE/BE PCM operations.
+
+ 3) Define widget graph connections.
+
+
+1 FE and BE DAI links
+---------------------
+
+| Front End PCMs | SoC DSP | Back End DAIs | Audio devices |
+
+ *************
+PCM0 <------------> * * <----DAI0-----> Codec Headset
+ * *
+PCM1 <------------> * * <----DAI1-----> Codec Speakers
+ * DSP *
+PCM2 <------------> * * <----DAI2-----> MODEM
+ * *
+PCM3 <------------> * * <----DAI3-----> BT
+ * *
+ * * <----DAI4-----> DMIC
+ * *
+ * * <----DAI5-----> FM
+ *************
+
+For the example above we have to define 4 FE DAI links and 6 BE DAI links. The
+FE DAI links are defined as follows :-
+
+static struct snd_soc_dai_link machine_dais[] = {
+ {
+ .name = "PCM0 System",
+ .stream_name = "System Playback",
+ .cpu_dai_name = "System Pin",
+ .platform_name = "dsp-audio",
+ .codec_name = "snd-soc-dummy",
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .dynamic = 1,
+ .trigger = {SND_SOC_DPCM_TRIGGER_POST, SND_SOC_DPCM_TRIGGER_POST},
+ .dpcm_playback = 1,
+ },
+ .....< other FE and BE DAI links here >
+};
+
+This FE DAI link is pretty similar to a regular DAI link except that we also
+set the DAI link to a DPCM FE with the "dynamic = 1". The supported FE stream
+directions should also be set with the "dpcm_playback" and "dpcm_capture"
+flags. There is also an option to specify the ordering of the trigger call for
+each FE. This allows the ASoC core to trigger the DSP before or after the other
+components (as some DSPs have strong requirements for the ordering DAI/DSP
+start and stop sequences).
+
+The FE DAI above sets the codec and codec DAI to dummy devices since the BE is
+dynamic and will change depending on runtime config.
+
+The BE DAIs are configured as follows :-
+
+static struct snd_soc_dai_link machine_dais[] = {
+ .....< FE DAI links here >
+ {
+ .name = "Codec Headset",
+ .cpu_dai_name = "ssp-dai.0",
+ .platform_name = "snd-soc-dummy",
+ .no_pcm = 1,
+ .codec_name = "rt5640.0-001c",
+ .codec_dai_name = "rt5640-aif1",
+ .ignore_suspend = 1,
+ .ignore_pmdown_time = 1,
+ .be_hw_params_fixup = hswult_ssp0_fixup,
+ .ops = &haswell_ops,
+ .dpcm_playback = 1,
+ .dpcm_capture = 1,
+ },
+ .....< other BE DAI links here >
+};
+
+This BE DAI link connects DAI0 to the codec (in this case RT5640 AIF1). It sets
+the "no_pcm" flag to mark it as a BE and sets flags for supported stream
+directions using "dpcm_playback" and "dpcm_capture" above.
+
+The BE also has flags set for ignoring suspend and PM down time. This allows
+the BE to work in a hostless mode where the host CPU is not transferring data
+like a BT phone call :-
+
+ *************
+PCM0 <------------> * * <----DAI0-----> Codec Headset
+ * *
+PCM1 <------------> * * <----DAI1-----> Codec Speakers
+ * DSP *
+PCM2 <------------> * * <====DAI2=====> MODEM
+ * *
+PCM3 <------------> * * <====DAI3=====> BT
+ * *
+ * * <----DAI4-----> DMIC
+ * *
+ * * <----DAI5-----> FM
+ *************
+
+This allows the host CPU to sleep whilst the DSP, MODEM DAI and the BT DAI are
+still in operation.
+
+A BE DAI link can also set the codec to a dummy device if the codec is a device
+that is managed externally.
+
+Likewise a BE DAI can also set a dummy cpu DAI if the CPU DAI is managed by the
+DSP firmware.
+
+
+2 FE/BE PCM operations
+----------------------
+
+The BE above also exports some PCM operations and a "fixup" callback. The fixup
+callback is used by the machine driver to (re)configure the DAI based upon the
+FE hw params. i.e. the DSP may perform SRC or ASRC from the FE to BE.
+
+e.g. DSP converts all FE hw params to run at fixed rate of 48k, 16bit, stereo for
+DAI0. This means all FE hw_params have to be fixed in the machine driver for
+DAI0 so that the DAI is running at desired configuration regardless of the FE
+configuration.
+
+static int dai0_fixup(struct snd_soc_pcm_runtime *rtd,
+ struct snd_pcm_hw_params *params)
+{
+ struct snd_interval *rate = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_RATE);
+ struct snd_interval *channels = hw_param_interval(params,
+ SNDRV_PCM_HW_PARAM_CHANNELS);
+
+	/* The DSP will convert the FE rate to 48k, stereo */
+ rate->min = rate->max = 48000;
+ channels->min = channels->max = 2;
+
+ /* set DAI0 to 16 bit */
+	snd_mask_set(&params->masks[SNDRV_PCM_HW_PARAM_FORMAT -
+ SNDRV_PCM_HW_PARAM_FIRST_MASK],
+ SNDRV_PCM_FORMAT_S16_LE);
+ return 0;
+}
+
+The other PCM operations are the same as for regular DAI links. Use as necessary.
+
+
+3 Widget graph connections
+--------------------------
+
+The BE DAI links will normally be connected to the graph at initialisation time
+by the ASoC DAPM core. However, if the BE codec or BE DAI is a dummy then this
+has to be set explicitly in the driver :-
+
+/* BE for codec Headset - DAI0 is dummy and managed by DSP FW */
+{"DAI0 CODEC IN", NULL, "AIF1 Capture"},
+{"AIF1 Playback", NULL, "DAI0 CODEC OUT"},
+
+
+Writing a DPCM DSP driver
+=========================
+
+The DPCM DSP driver looks much like a standard platform class ASoC driver
+combined with elements from a codec class driver. A DSP platform driver must
+implement :-
+
+ 1) Front End PCM DAIs - i.e. struct snd_soc_dai_driver.
+
+ 2) DAPM graph showing DSP audio routing from FE DAIs to BEs.
+
+ 3) DAPM widgets from DSP graph.
+
+ 4) Mixers for gains, routing, etc.
+
+ 5) DMA configuration.
+
+ 6) BE AIF widgets.
+
+Item 6 is important for routing the audio outside of the DSP. AIFs need to be
+defined for each BE and each stream direction. e.g. for BE DAI0 above we would
+have :-
+
+SND_SOC_DAPM_AIF_IN("DAI0 RX", NULL, 0, SND_SOC_NOPM, 0, 0),
+SND_SOC_DAPM_AIF_OUT("DAI0 TX", NULL, 0, SND_SOC_NOPM, 0, 0),
+
+The BE AIF are used to connect the DSP graph to the graphs for the other
+component drivers (e.g. codec graph).
+
+
+Hostless PCM streams
+====================
+
+A hostless PCM stream is a stream that is not routed through the host CPU. An
+example of this would be a phone call from handset to modem.
+
+
+ *************
+PCM0 <------------> * * <----DAI0-----> Codec Headset
+ * *
+PCM1 <------------> * * <====DAI1=====> Codec Speakers/Mic
+ * DSP *
+PCM2 <------------> * * <====DAI2=====> MODEM
+ * *
+PCM3 <------------> * * <----DAI3-----> BT
+ * *
+ * * <----DAI4-----> DMIC
+ * *
+ * * <----DAI5-----> FM
+ *************
+
+In this case the PCM data is routed via the DSP. The host CPU in this use case
+is only used for control and can sleep during the runtime of the stream.
+
+The host can control the hostless link either by :-
+
+ 1) Configuring the link as a CODEC <-> CODEC style link. In this case the link
+ is enabled or disabled by the state of the DAPM graph. This usually means
+ there is a mixer control that can be used to connect or disconnect the path
+ between both DAIs.
+
+ 2) Hostless FE. This FE has a virtual connection to the BE DAI links on the DAPM
+ graph. Control is then carried out by the FE as regualar PCM operations.
+ This method gives more control over the DAI links, but requires much more
+ userspace code to control the link. Its recommended to use CODEC<->CODEC
+ unless your HW needs more fine grained sequencing of the PCM ops.
+
+
+CODEC <-> CODEC link
+--------------------
+
+This DAI link is enabled when DAPM detects a valid path within the DAPM graph.
+The machine driver sets some additional parameters to the DAI link i.e.
+
+static const struct snd_soc_pcm_stream dai_params = {
+ .formats = SNDRV_PCM_FMTBIT_S32_LE,
+ .rate_min = 8000,
+ .rate_max = 8000,
+ .channels_min = 2,
+ .channels_max = 2,
+};
+
+static struct snd_soc_dai_link dais[] = {
+ < ... more DAI links above ... >
+ {
+ .name = "MODEM",
+ .stream_name = "MODEM",
+ .cpu_dai_name = "dai2",
+ .codec_dai_name = "modem-aif1",
+ .codec_name = "modem",
+ .dai_fmt = SND_SOC_DAIFMT_I2S | SND_SOC_DAIFMT_NB_NF
+ | SND_SOC_DAIFMT_CBM_CFM,
+ .params = &dai_params,
+ }
+ < ... more DAI links here ... >
+
+These parameters are used to configure the DAI hw_params() when DAPM detects a
+valid path and then calls the PCM operations to start the link. DAPM will also
+call the appropriate PCM operations to disable the DAI when the path is no
+longer valid.
+
+
+Hostless FE
+-----------
+
+The DAI link(s) are enabled by a FE that does not read or write any PCM data.
+This means creating a new FE that is connected with a virtual path to both
+DAI links. The DAI links will be started when the FE PCM is started and stopped
+when the FE PCM is stopped. Note that the FE PCM cannot read or write data in
+this configuration.
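As a rough illustration (all names here are hypothetical), such a hostless FE
can be declared like the other FE links above, just without any expectation of
data transfer:

static struct snd_soc_dai_link hostless_fe = {
	.name = "Hostless MODEM",
	.stream_name = "MODEM Hostless",
	.cpu_dai_name = "Hostless Pin",		/* FE DAI provided by the DSP driver */
	.platform_name = "dsp-audio",
	.codec_name = "snd-soc-dummy",
	.codec_dai_name = "snd-soc-dummy-dai",
	.dynamic = 1,
	.trigger = {SND_SOC_DPCM_TRIGGER_POST, SND_SOC_DPCM_TRIGGER_POST},
	.dpcm_playback = 1,
	.dpcm_capture = 1,
};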
+
+
-ASoC Codec Driver
-=================
+ASoC Codec Class Driver
+=======================
-The codec driver is generic and hardware independent code that configures the
-codec to provide audio capture and playback. It should contain no code that is
-specific to the target platform or machine. All platform and machine specific
-code should be added to the platform and machine drivers respectively.
+The codec class driver is generic and hardware independent code that configures
+the codec, FM, MODEM, BT or external DSP to provide audio capture and playback.
+It should contain no code that is specific to the target platform or machine.
+All platform and machine specific code should be added to the platform and
+machine drivers respectively.
-Each codec driver *must* provide the following features:-
+Each codec class driver *must* provide the following features:-
1) Codec DAI and PCM configuration
- 2) Codec control IO - using I2C, 3 Wire(SPI) or both APIs
+ 2) Codec control IO - using RegMap API
3) Mixers and audio controls
4) Codec audio operations
+ 5) DAPM description.
+ 6) DAPM event handler.
Optionally, codec drivers can also provide:-
- 5) DAPM description.
- 6) DAPM event handler.
7) DAC Digital mute control.
Its probably best to use this guide in conjunction with the existing codec
2 - Codec control IO
--------------------
The codec can usually be controlled via an I2C or SPI style interface
-(AC97 combines control with data in the DAI). The codec drivers provide
-functions to read and write the codec registers along with supplying a
-register cache:-
-
- /* IO control data and register cache */
- void *control_data; /* codec control (i2c/3wire) data */
- void *reg_cache;
-
-Codec read/write should do any data formatting and call the hardware
-read write below to perform the IO. These functions are called by the
-core and ALSA when performing DAPM or changing the mixer:-
-
- unsigned int (*read)(struct snd_soc_codec *, unsigned int);
- int (*write)(struct snd_soc_codec *, unsigned int, unsigned int);
-
-Codec hardware IO functions - usually points to either the I2C, SPI or AC97
-read/write:-
-
- hw_write_t hw_write;
- hw_read_t hw_read;
+(AC97 combines control with data in the DAI). The codec driver should use the
+Regmap API for all codec IO. Please see include/linux/regmap.h and existing
+codec drivers for example regmap usage.
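A minimal sketch of what this usually looks like (the register layout below is
invented; only the regmap calls themselves are real API):

static const struct regmap_config example_regmap = {
	.reg_bits = 8,			/* 8-bit register addresses */
	.val_bits = 8,			/* 8-bit register values */
	.max_register = 0x7f,		/* hypothetical last register */
	.cache_type = REGCACHE_RBTREE,
};

static int example_i2c_probe(struct i2c_client *i2c,
			     const struct i2c_device_id *id)
{
	struct regmap *regmap = devm_regmap_init_i2c(i2c, &example_regmap);

	if (IS_ERR(regmap))
		return PTR_ERR(regmap);

	/* All further register IO goes through the regmap core, e.g.: */
	return regmap_update_bits(regmap, 0x02, 0x01, 0x01); /* hypothetical reg/bit */
}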
3 - Mixers and audio controls
4 - Codec Audio Operations
--------------------------
-The codec driver also supports the following ALSA operations:-
+The codec driver also supports the following ALSA PCM operations:-
/* SoC audio ops */
struct snd_soc_ops {
There are 4 power domains within DAPM
- 1. Codec domain - VREF, VMID (core codec and audio power)
+ 1. Codec bias domain - VREF, VMID (core codec and audio power)
Usually controlled at codec probe/remove and suspend/resume, although
can be set at stream time if power is not needed for sidetone, etc.
o Line - Line Input/Output (and optional Jack)
o Speaker - Speaker
o Supply - Power or clock supply widget used by other widgets.
+ o Regulator - External regulator that supplies power to audio components.
+ o Clock       - External clock that supplies clock to audio components.
+ o AIF IN - Audio Interface Input (with TDM slot mask).
+ o AIF OUT - Audio Interface Output (with TDM slot mask).
+ o Siggen - Signal Generator.
+ o DAI IN - Digital Audio Interface Input.
+ o DAI OUT - Digital Audio Interface Output.
+ o DAI Link    - DAI Link between two DAI structures
o Pre - Special PRE widget (exec before all others)
o Post - Special POST widget (exec after all others)
(Widgets are defined in include/sound/soc-dapm.h)
-Widgets are usually added in the codec driver and the machine driver. There are
-convenience macros defined in soc-dapm.h that can be used to quickly build a
-list of widgets of the codecs and machines DAPM widgets.
+Widgets can be added to the sound card by any of the component driver types.
+There are convenience macros defined in soc-dapm.h that can be used to quickly
+build a list of the codec's and machine's DAPM widgets.
Most widgets have a name, register, shift and invert. Some widgets have extra
parameters for stream name and kcontrols.
-------------------------
Stream Widgets relate to the stream power domain and only consist of ADCs
-(analog to digital converters) and DACs (digital to analog converters).
+(analog to digital converters), DACs (digital to analog converters),
+AIF IN and AIF OUT.
Stream widgets have the following format:-
SND_SOC_DAPM_DAC(name, stream name, reg, shift, invert),
+SND_SOC_DAPM_AIF_IN(name, stream, slot, reg, shift, invert)
NOTE: the stream name must match the corresponding stream name in your codec
snd_soc_codec_dai.
SND_SOC_DAPM_DAC("HiFi DAC", "HiFi Playback", REG, 3, 1),
SND_SOC_DAPM_ADC("HiFi ADC", "HiFi Capture", REG, 2, 1),
+e.g. stream widgets for AIF
+
+SND_SOC_DAPM_AIF_IN("AIF1RX", "AIF1 Playback", 0, SND_SOC_NOPM, 0, 0),
+SND_SOC_DAPM_AIF_OUT("AIF1TX", "AIF1 Capture", 0, SND_SOC_NOPM, 0, 0),
+
2.2 Path Domain Widgets
-----------------------
you can use SND_SOC_DAPM_MIXER_NAMED_CTL instead. the parameters are the same
as for SND_SOC_DAPM_MIXER.
-2.3 Platform/Machine domain Widgets
------------------------------------
+
+2.3 Machine domain Widgets
+--------------------------
Machine widgets are different from codec widgets in that they don't have a
codec register bit associated with them. A machine widget is assigned to each
-machine audio component (non codec) that can be independently powered. e.g.
+machine audio component (non codec or DSP) that can be independently
+powered. e.g.
o Speaker Amp
o Microphone Bias
SND_SOC_DAPM_MIC("Mic Jack", spitz_mic_bias),
-2.4 Codec Domain
-----------------
+2.4 Codec (BIAS) Domain
+-----------------------
-The codec power domain has no widgets and is handled by the codecs DAPM event
-handler. This handler is called when the codec powerstate is changed wrt to any
-stream event or by kernel PM events.
+The codec bias power domain has no widgets and is handled by the codec's DAPM
+event handler. This handler is called when the codec power state is changed
+with respect to any stream event or by kernel PM events.
2.5 Virtual Widgets
subsystem individually with a call to snd_soc_dapm_new_control().
-3. Codec Widget Interconnections
-================================
+3. Codec/DSP Widget Interconnections
+====================================
-Widgets are connected to each other within the codec and machine by audio paths
-(called interconnections). Each interconnection must be defined in order to
-create a map of all audio paths between widgets.
+Widgets are connected to each other within the codec, platform and machine by
+audio paths (called interconnections). Each interconnection must be defined in
+order to create a map of all audio paths between widgets.
-This is easiest with a diagram of the codec (and schematic of the machine audio
-system), as it requires joining widgets together via their audio signal paths.
+This is easiest with a diagram of the codec or DSP (and schematic of the machine
+audio system), as it requires joining widgets together via their audio signal
+paths.
e.g., from the WM8731 output mixer (wm8731.c)
o Mic Jack
o Codec Pins
-When a codec pin is NC it can be marked as not used with a call to
-
-snd_soc_dapm_set_endpoint(codec, "Widget Name", 0);
-
-The last argument is 0 for inactive and 1 for active. This way the pin and its
-input widget will never be powered up and consume power.
-
-This also applies to machine widgets. e.g. if a headphone is connected to a
-jack then the jack can be marked active. If the headphone is removed, then
-the headphone jack can be marked inactive.
+Endpoints are added to the DAPM graph so that their usage can be determined in
+order to save power. e.g. NC codec pins will be switched OFF, unconnected
+jacks can also be switched OFF.
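For example (the widget names are made up), a machine driver typically uses
the DAPM pin API for this:

snd_soc_dapm_nc_pin(dapm, "LINPUT3");			/* codec pin not wired on this board */
snd_soc_dapm_disable_pin(dapm, "Headphone Jack");	/* jack currently unplugged */
snd_soc_dapm_enable_pin(dapm, "Headphone Jack");	/* jack plugged back in */
snd_soc_dapm_sync(dapm);				/* re-run the power calculation */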
5 DAPM Widget Events
ASoC Machine Driver
===================
-The ASoC machine (or board) driver is the code that glues together the platform
-and codec drivers.
+The ASoC machine (or board) driver is the code that glues together all the
+component drivers (e.g. codecs, platforms and DAIs). It also describes the
+relationships between each component, which include audio paths, GPIOs,
+interrupts, clocking, jacks and voltage regulators.
The machine driver can contain codec and platform specific code. It registers
the audio subsystem with the kernel as a platform device and is represented by
ASoC Platform Driver
====================
-An ASoC platform driver can be divided into audio DMA and SoC DAI configuration
-and control. The platform drivers only target the SoC CPU and must have no board
-specific code.
+An ASoC platform driver class can be divided into audio DMA drivers, SoC DAI
+drivers and DSP drivers. The platform drivers only target the SoC CPU and must
+have no board specific code.
Audio DMA
=========
5) Suspend and resume (optional)
Please see codec.txt for a description of items 1 - 4.
+
+
+SoC DSP Drivers
+===============
+
+Each SoC DSP driver usually supplies the following features :-
+
+ 1) DAPM graph
+ 2) Mixer controls
+ 3) DMA IO to/from DSP buffers (if applicable)
+ 4) Definition of DSP front end (FE) PCM devices.
+
+Please see DPCM.txt for a description of item 4.
ACPI
M: Len Brown <lenb@kernel.org>
-M: Rafael J. Wysocki <rjw@sisk.pl>
+M: Rafael J. Wysocki <rjw@rjwysocki.net>
L: linux-acpi@vger.kernel.org
-W: http://www.lesswatts.org/projects/acpi/
-Q: http://patchwork.kernel.org/project/linux-acpi/list/
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux
+W: https://01.org/linux-acpi
+Q: https://patchwork.kernel.org/project/linux-acpi/list/
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
S: Supported
F: drivers/acpi/
F: drivers/pnp/pnpacpi/
ACPI FAN DRIVER
M: Zhang Rui <rui.zhang@intel.com>
L: linux-acpi@vger.kernel.org
-W: http://www.lesswatts.org/projects/acpi/
+W: https://01.org/linux-acpi
S: Supported
F: drivers/acpi/fan.c
ACPI THERMAL DRIVER
M: Zhang Rui <rui.zhang@intel.com>
L: linux-acpi@vger.kernel.org
-W: http://www.lesswatts.org/projects/acpi/
+W: https://01.org/linux-acpi
S: Supported
F: drivers/acpi/*thermal*
ACPI VIDEO DRIVER
M: Zhang Rui <rui.zhang@intel.com>
L: linux-acpi@vger.kernel.org
-W: http://www.lesswatts.org/projects/acpi/
+W: https://01.org/linux-acpi
S: Supported
F: drivers/acpi/video.c
F: arch/arm/mach-gemini/
ARM/CSR SIRFPRIMA2 MACHINE SUPPORT
-M: Barry Song <baohua.song@csr.com>
+M: Barry Song <baohua@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/baohua/linux.git
S: Maintained
F: arch/arm/mach-prima2/
+F: drivers/clk/clk-prima2.c
+F: drivers/clocksource/timer-prima2.c
+F: drivers/clocksource/timer-marco.c
F: drivers/dma/sirf-dma.c
F: drivers/i2c/busses/i2c-sirf.c
+F: drivers/input/misc/sirfsoc-onkey.c
+F: drivers/irqchip/irq-sirfsoc.c
F: drivers/mmc/host/sdhci-sirf.c
F: drivers/pinctrl/sirf/
+F: drivers/rtc/rtc-sirfsoc.c
F: drivers/spi/spi-sirf.c
ARM/EBSA110 MACHINE SUPPORT
M: Jason Cooper <jason@lakedaemon.net>
M: Andrew Lunn <andrew@lunn.ch>
M: Gregory Clement <gregory.clement@free-electrons.com>
+M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-mvebu/
ARM/Marvell Dove/Kirkwood/MV78xx0/Orion SOC support
M: Jason Cooper <jason@lakedaemon.net>
M: Andrew Lunn <andrew@lunn.ch>
+M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-dove/
F: drivers/net/ethernet/seeq/ether3*
F: drivers/scsi/arm/
+ARM/Rockchip SoC support
+M: Heiko Stuebner <heiko@sntech.de>
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: arch/arm/mach-rockchip/
+F: drivers/*/*rockchip*
+
ARM/SHARK MACHINE SUPPORT
M: Alexander Schulz <alex@shark-linux.de>
W: http://www.shark-linux.de/shark.html
BONDING DRIVER
M: Jay Vosburgh <fubar@us.ibm.com>
+M: Veaceslav Falico <vfalico@redhat.com>
M: Andy Gospodarek <andy@greyhouse.net>
L: netdev@vger.kernel.org
W: http://sourceforge.net/projects/bonding/
F: drivers/net/ethernet/broadcom/bnx2x/
BROADCOM BCM281XX/BCM11XXX ARM ARCHITECTURE
-M: Christian Daudt <csd@broadcom.com>
+M: Christian Daudt <bcm@fixthebug.org>
+L: bcm-kernel-feedback-list@broadcom.com
T: git git://git.github.com/broadcom/bcm11351
S: Maintained
F: arch/arm/mach-bcm/
F: drivers/net/ethernet/ti/cpmac.c
CPU FREQUENCY DRIVERS
-M: Rafael J. Wysocki <rjw@sisk.pl>
+M: Rafael J. Wysocki <rjw@rjwysocki.net>
M: Viresh Kumar <viresh.kumar@linaro.org>
L: cpufreq@vger.kernel.org
L: linux-pm@vger.kernel.org
F: drivers/cpuidle/cpuidle-big_little.c
CPUIDLE DRIVERS
-M: Rafael J. Wysocki <rjw@sisk.pl>
+M: Rafael J. Wysocki <rjw@rjwysocki.net>
M: Daniel Lezcano <daniel.lezcano@linaro.org>
L: linux-pm@vger.kernel.org
S: Maintained
F: include/linux/dm-*.h
F: include/uapi/linux/dm-*.h
+DIGI NEO AND CLASSIC PCI PRODUCTS
+M: Lidza Louina <lidza.louina@gmail.com>
+L: driverdev-devel@linuxdriverproject.org
+S: Maintained
+F: drivers/staging/dgnc/
+
+DIGI EPCA PCI PRODUCTS
+M: Lidza Louina <lidza.louina@gmail.com>
+L: driverdev-devel@linuxdriverproject.org
+S: Maintained
+F: drivers/staging/dgap/
+
DIOLAN U2C-12 I2C DRIVER
M: Guenter Roeck <linux@roeck-us.net>
L: linux-i2c@vger.kernel.org
DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
M: Vinod Koul <vinod.koul@intel.com>
M: Dan Williams <dan.j.williams@intel.com>
+L: dmaengine@vger.kernel.org
+Q: https://patchwork.kernel.org/project/linux-dmaengine/list/
S: Supported
F: drivers/dma/
F: include/linux/dma*
L: dri-devel@lists.freedesktop.org
L: linux-tegra@vger.kernel.org
T: git git://anongit.freedesktop.org/tegra/linux.git
-S: Maintained
+S: Supported
F: drivers/gpu/host1x/
F: include/uapi/drm/tegra_drm.h
F: Documentation/devicetree/bindings/gpu/nvidia,tegra20-host1x.txt
FREEZER
M: Pavel Machek <pavel@ucw.cz>
-M: "Rafael J. Wysocki" <rjw@sisk.pl>
+M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
L: linux-pm@vger.kernel.org
S: Supported
F: Documentation/power/freezing-of-tasks.txt
S: Odd Fixes (e.g., new signatures)
F: drivers/scsi/fdomain.*
+GCOV BASED KERNEL PROFILING
+M: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
+S: Maintained
+F: kernel/gcov/
+F: Documentation/gcov.txt
+
GDT SCSI DISK ARRAY CONTROLLER DRIVER
M: Achim Leubner <achim_leubner@adaptec.com>
L: linux-scsi@vger.kernel.org
HIBERNATION (aka Software Suspend, aka swsusp)
M: Pavel Machek <pavel@ucw.cz>
-M: "Rafael J. Wysocki" <rjw@sisk.pl>
+M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
L: linux-pm@vger.kernel.org
S: Supported
F: arch/x86/power/
INTEL MENLOW THERMAL DRIVER
M: Sujith Thomas <sujith.thomas@intel.com>
L: platform-driver-x86@vger.kernel.org
-W: http://www.lesswatts.org/projects/acpi/
+W: https://01.org/linux-acpi
S: Supported
F: drivers/platform/x86/intel_menlow.c
INTEL I/OAT DMA DRIVER
M: Dan Williams <dan.j.williams@intel.com>
-S: Maintained
+M: Dave Jiang <dave.jiang@intel.com>
+L: dmaengine@vger.kernel.org
+Q: https://patchwork.kernel.org/project/linux-dmaengine/list/
+S: Supported
F: drivers/dma/ioat*
INTEL IOMMU (VT-d)
S: Maintained
F: drivers/tty/serial/ioc3_serial.c
+IOMMU DRIVERS
+M: Joerg Roedel <joro@8bytes.org>
+L: iommu@lists.linux-foundation.org
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
+S: Maintained
+F: drivers/iommu/
+
IP MASQUERADING
M: Juanjo Ciarlante <jjciarla@raiz.uncu.edu.ar>
S: Maintained
F: drivers/net/wireless/prism54/
PROMISE SATA TX2/TX4 CONTROLLER LIBATA DRIVER
-M: Mikael Pettersson <mikpe@it.uu.se>
+M: Mikael Pettersson <mikpelinux@gmail.com>
L: linux-ide@vger.kernel.org
S: Maintained
F: drivers/ata/sata_promise.*
F: include/uapi/linux/sched.h
SCORE ARCHITECTURE
-M: Chen Liqin <liqin.chen@sunplusct.com>
+M: Chen Liqin <liqin.linux@gmail.com>
M: Lennox Wu <lennox.wu@gmail.com>
-W: http://www.sunplusct.com
+W: http://www.sunplus.com
S: Supported
F: arch/score/
F: sound/soc/
F: include/sound/soc*
+SOUND - DMAENGINE HELPERS
+M: Lars-Peter Clausen <lars@metafoo.de>
+S: Supported
+F: include/sound/dmaengine_pcm.h
+F: sound/core/pcm_dmaengine.c
+F: sound/soc/soc-generic-dmaengine-pcm.c
+
SPARC + UltraSPARC (sparc/sparc64)
M: "David S. Miller" <davem@davemloft.net>
L: sparclinux@vger.kernel.org
SUSPEND TO RAM
M: Len Brown <len.brown@intel.com>
M: Pavel Machek <pavel@ucw.cz>
-M: "Rafael J. Wysocki" <rjw@sisk.pl>
+M: "Rafael J. Wysocki" <rjw@rjwysocki.net>
L: linux-pm@vger.kernel.org
S: Supported
F: Documentation/power/
S: Maintained
F: drivers/media/rc/ttusbir.c
-TEGRA SUPPORT
+TEGRA ARCHITECTURE SUPPORT
M: Stephen Warren <swarren@wwwdotorg.org>
+M: Thierry Reding <thierry.reding@gmail.com>
L: linux-tegra@vger.kernel.org
Q: http://patchwork.ozlabs.org/project/linux-tegra/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/swarren/linux-tegra.git
S: Supported
N: [^a-z]tegra
+TEGRA ASOC DRIVER
+M: Stephen Warren <swarren@wwwdotorg.org>
+S: Supported
+F: sound/soc/tegra/
+
+TEGRA CLOCK DRIVER
+M: Peter De Schrijver <pdeschrijver@nvidia.com>
+M: Prashant Gaikwad <pgaikwad@nvidia.com>
+S: Supported
+F: drivers/clk/tegra/
+
+TEGRA DMA DRIVER
+M: Laxman Dewangan <ldewangan@nvidia.com>
+S: Supported
+F: drivers/dma/tegra20-apb-dma.c
+
+TEGRA GPIO DRIVER
+M: Stephen Warren <swarren@wwwdotorg.org>
+S: Supported
+F: drivers/gpio/gpio-tegra.c
+
+TEGRA I2C DRIVER
+M: Laxman Dewangan <ldewangan@nvidia.com>
+S: Supported
+F: drivers/i2c/busses/i2c-tegra.c
+
+TEGRA IOMMU DRIVERS
+M: Hiroshi Doyu <hdoyu@nvidia.com>
+S: Supported
+F: drivers/iommu/tegra*
+
+TEGRA KBC DRIVER
+M: Rakesh Iyer <riyer@nvidia.com>
+M: Laxman Dewangan <ldewangan@nvidia.com>
+S: Supported
+F: drivers/input/keyboard/tegra-kbc.c
+
+TEGRA PINCTRL DRIVER
+M: Stephen Warren <swarren@wwwdotorg.org>
+S: Supported
+F: drivers/pinctrl/pinctrl-tegra*
+
+TEGRA PWM DRIVER
+M: Thierry Reding <thierry.reding@gmail.com>
+S: Supported
+F: drivers/pwm/pwm-tegra.c
+
+TEGRA SERIAL DRIVER
+M: Laxman Dewangan <ldewangan@nvidia.com>
+S: Supported
+F: drivers/tty/serial/serial-tegra.c
+
+TEGRA SPI DRIVER
+M: Laxman Dewangan <ldewangan@nvidia.com>
+S: Supported
+F: drivers/spi/spi-tegra*
+
TEHUTI ETHERNET DRIVER
M: Andy Gospodarek <andy@greyhouse.net>
L: netdev@vger.kernel.org
F: drivers/hid/usbhid/
USB/IP DRIVERS
-M: Matt Mooney <mfm@muteddisk.com>
L: linux-usb@vger.kernel.org
-S: Maintained
+S: Orphan
F: drivers/staging/usbip/
USB ISP116X DRIVER
S: Maintained
F: drivers/net/usb/rtl8150.c
-USB SERIAL BELKIN F5U103 DRIVER
-M: William Greathouse <wgreathouse@smva.com>
-L: linux-usb@vger.kernel.org
-S: Maintained
-F: drivers/usb/serial/belkin_sa.*
-
-USB SERIAL CYPRESS M8 DRIVER
-M: Lonnie Mendez <dignome@gmail.com>
-L: linux-usb@vger.kernel.org
-S: Maintained
-W: http://geocities.com/i0xox0i
-W: http://firstlight.net/cvs
-F: drivers/usb/serial/cypress_m8.*
-
-USB SERIAL CYBERJACK DRIVER
-M: Matthias Bruestle and Harald Welte <support@reiner-sct.com>
-W: http://www.reiner-sct.de/support/treiber_cyberjack.php
-S: Maintained
-F: drivers/usb/serial/cyberjack.c
-
-USB SERIAL DIGI ACCELEPORT DRIVER
-M: Peter Berger <pberger@brimson.com>
-M: Al Borchers <alborchers@steinerpoint.com>
+USB SERIAL SUBSYSTEM
+M: Johan Hovold <jhovold@gmail.com>
L: linux-usb@vger.kernel.org
S: Maintained
-F: drivers/usb/serial/digi_acceleport.c
-
-USB SERIAL DRIVER
-M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-L: linux-usb@vger.kernel.org
-S: Supported
F: Documentation/usb/usb-serial.txt
-F: drivers/usb/serial/generic.c
-F: drivers/usb/serial/usb-serial.c
+F: drivers/usb/serial/
F: include/linux/usb/serial.h
-USB SERIAL EMPEG EMPEG-CAR MARK I/II DRIVER
-M: Gary Brubaker <xavyer@ix.netcom.com>
-L: linux-usb@vger.kernel.org
-S: Maintained
-F: drivers/usb/serial/empeg.c
-
-USB SERIAL KEYSPAN DRIVER
-M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-L: linux-usb@vger.kernel.org
-S: Maintained
-F: drivers/usb/serial/*keyspan*
-
-USB SERIAL WHITEHEAT DRIVER
-M: Support Department <support@connecttech.com>
-L: linux-usb@vger.kernel.org
-W: http://www.connecttech.com
-S: Supported
-F: drivers/usb/serial/whiteheat*
-
USB SMSC75XX ETHERNET DRIVER
M: Steve Glendinning <steve.glendinning@shawell.net>
L: netdev@vger.kernel.org
XEN NETWORK BACKEND DRIVER
M: Ian Campbell <ian.campbell@citrix.com>
+M: Wei Liu <wei.liu2@citrix.com>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
L: netdev@vger.kernel.org
S: Supported
VERSION = 3
PATCHLEVEL = 12
SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION =
NAME = One Giant Leap for Frogkind
# *DOCUMENTATION*
config HAVE_ARCH_JUMP_LABEL
bool
-config HAVE_ARCH_MUTEX_CPU_RELAX
- bool
-
config HAVE_RCU_TABLE_FREE
bool
static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
- lock->slock = __ARCH_SPIN_LOCK_UNLOCKED__;
+ unsigned int tmp = __ARCH_SPIN_LOCK_UNLOCKED__;
+
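+	/* atomically exchange the unlocked value into the lock word */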
+ __asm__ __volatile__(
+ " ex %0, [%1] \n"
+ : "+r" (tmp)
+ : "r"(&(lock->slock))
+ : "memory");
+
smp_mb();
}
* Because it essentially checks if buffer end is within limit and @len is
* non-negative, which implies that buffer start will be within limit too.
*
- * The reason for rewriting being, for majorit yof cases, @len is generally
+ * The reason for rewriting being, for majority of cases, @len is generally
* compile time constant, causing first sub-expression to be compile time
* subsumed.
*
*
*/
#define __user_ok(addr, sz) (((sz) <= TASK_SIZE) && \
- (((addr)+(sz)) <= get_fs()))
+ ((addr) <= (get_fs() - (sz))))
#define __access_ok(addr, sz) (unlikely(__kernel_ok) || \
likely(__user_ok((addr), (sz))))
REG_IGNORE_ONE(pad2);
REG_IN_CHUNK(callee, efa, cregs); /* callee_regs[r25..r13] */
REG_IGNORE_ONE(efa); /* efa update invalid */
- REG_IN_ONE(stop_pc, &ptregs->ret); /* stop_pc: PC update */
+ REG_IGNORE_ONE(stop_pc); /* PC updated via @ret */
return ret;
}
{
struct rt_sigframe __user *sf;
unsigned int magic;
- int err;
struct pt_regs *regs = current_pt_regs();
/* Always make any pending restarted system calls return -EINTR */
if (!access_ok(VERIFY_READ, sf, sizeof(*sf)))
goto badframe;
- err = restore_usr_regs(regs, sf);
- err |= __get_user(magic, &sf->sigret_magic);
- if (err)
+ if (__get_user(magic, &sf->sigret_magic))
goto badframe;
if (unlikely(is_do_ss_needed(magic)))
if (restore_altstack(&sf->uc.uc_stack))
goto badframe;
+ if (restore_usr_regs(regs, sf))
+ goto badframe;
+
/* Don't restart from sigreturn */
syscall_wont_restart(regs);
return 1;
/*
+ * w/o SA_SIGINFO, struct ucontext is partially populated (only
+ * uc_mcontext/uc_sigmask) for kernel's normal user state preservation
+ * during signal handler execution. This works for SA_SIGINFO as well
+ * although the semantics are now overloaded (the same reg state can be
+ * inspected by userland: but are they allowed to fiddle with it ?
+ */
+ err |= stash_usr_regs(sf, regs, set);
+
+ /*
* SA_SIGINFO requires 3 args to signal handler:
* #1: sig-no (common to any handler)
* #2: struct siginfo
magic = MAGIC_SIGALTSTK;
}
- /*
- * w/o SA_SIGINFO, struct ucontext is partially populated (only
- * uc_mcontext/uc_sigmask) for kernel's normal user state preservation
- * during signal handler execution. This works for SA_SIGINFO as well
- * although the semantics are now overloaded (the same reg state can be
- * inspected by userland: but are they allowed to fiddle with it ?
- */
- err |= stash_usr_regs(sf, regs, set);
err |= __put_user(magic, &sf->sigret_magic);
if (err)
return err;
{
struct clock_event_device *clk = &per_cpu(arc_clockevent_device, cpu);
- clockevents_calc_mult_shift(clk, arc_get_core_freq(), 5);
-
- clk->max_delta_ns = clockevent_delta2ns(ARC_TIMER_MAX, clk);
clk->cpumask = cpumask_of(cpu);
-
- clockevents_register_device(clk);
+ clockevents_config_and_register(clk, arc_get_core_freq(),
+ 0, ARC_TIMER_MAX);
/*
* setup the per-cpu timer IRQ handler - for all cpus
regs->status32 &= ~STATUS_DE_MASK;
} else {
regs->ret += state.instr_len;
+
+ /* handle zero-overhead-loop */
+ if ((regs->ret == regs->lp_end) && (regs->lp_count)) {
+ regs->ret = regs->lp_start;
+ regs->lp_count--;
+ }
}
return 0;
#include <asm/pgalloc.h>
#include <asm/mmu.h>
-static int handle_vmalloc_fault(struct mm_struct *mm, unsigned long address)
+static int handle_vmalloc_fault(unsigned long address)
{
/*
* Synchronize this task's top level page-table
pud_t *pud, *pud_k;
pmd_t *pmd, *pmd_k;
- pgd = pgd_offset_fast(mm, address);
+ pgd = pgd_offset_fast(current->active_mm, address);
pgd_k = pgd_offset_k(address);
if (!pgd_present(*pgd_k))
* nothing more.
*/
if (address >= VMALLOC_START && address <= VMALLOC_END) {
- ret = handle_vmalloc_fault(mm, address);
+ ret = handle_vmalloc_fault(address);
if (unlikely(ret))
goto bad_area_nosemaphore;
else
config KERNEL_MODE_NEON
bool "Support for NEON in kernel mode"
- default n
- depends on NEON
+ depends on NEON && AEABI
help
Say Y to include support for NEON in kernel mode.
# Convert bzImage to zImage
bzImage: zImage
-zImage Image xipImage bootpImage uImage: vmlinux
+BOOT_TARGETS = zImage Image xipImage bootpImage uImage
+INSTALL_TARGETS = zinstall uinstall install
+
+PHONY += bzImage $(BOOT_TARGETS) $(INSTALL_TARGETS)
+
+$(BOOT_TARGETS): vmlinux
$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $(boot)/$@
-zinstall uinstall install: vmlinux
+$(INSTALL_TARGETS):
$(Q)$(MAKE) $(build)=$(boot) MACHINE=$(MACHINE) $@
%.dtb: | scripts
@test "$(INITRD)" != "" || \
(echo You must specify INITRD; exit -1)
-install: $(obj)/Image
- $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+install:
+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
$(obj)/Image System.map "$(INSTALL_PATH)"
-zinstall: $(obj)/zImage
- $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+zinstall:
+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
$(obj)/zImage System.map "$(INSTALL_PATH)"
-uinstall: $(obj)/uImage
- $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+uinstall:
+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
$(obj)/uImage System.map "$(INSTALL_PATH)"
zi:
- $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
$(obj)/zImage System.map "$(INSTALL_PATH)"
i:
- $(CONFIG_SHELL) $(srctree)/$(src)/install.sh $(KERNELRELEASE) \
+ $(CONFIG_SHELL) $(srctree)/$(src)/install.sh "$(KERNELRELEASE)" \
$(obj)/Image System.map "$(INSTALL_PATH)"
subdir- := bootp compressed dts
dtb-$(CONFIG_ARCH_AT91) += sama5d34ek.dtb
dtb-$(CONFIG_ARCH_AT91) += sama5d35ek.dtb
+dtb-$(CONFIG_ARCH_ATLAS6) += atlas6-evb.dtb
+
dtb-$(CONFIG_ARCH_BCM2835) += bcm2835-rpi-b.dtb
dtb-$(CONFIG_ARCH_BCM) += bcm11351-brt.dtb \
bcm28155-ap.dtb
am335x-evm.dtb \
am335x-evmsk.dtb \
am335x-bone.dtb \
+ am335x-boneblack.dtb \
am3517-evm.dtb \
am3517_mt_ventoux.dtb \
am43x-epos-evm.dtb
--- /dev/null
+/*
+ * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/ {
+ model = "TI AM335x BeagleBone";
+ compatible = "ti,am335x-bone", "ti,am33xx";
+
+ cpus {
+ cpu@0 {
+ cpu0-supply = <&dcdc2_reg>;
+ };
+ };
+
+ memory {
+ device_type = "memory";
+ reg = <0x80000000 0x10000000>; /* 256 MB */
+ };
+
+ am33xx_pinmux: pinmux@44e10800 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&clkout2_pin>;
+
+ user_leds_s0: user_leds_s0 {
+ pinctrl-single,pins = <
+ 0x54 (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_a5.gpio1_21 */
+ 0x58 (PIN_OUTPUT_PULLUP | MUX_MODE7) /* gpmc_a6.gpio1_22 */
+ 0x5c (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_a7.gpio1_23 */
+ 0x60 (PIN_OUTPUT_PULLUP | MUX_MODE7) /* gpmc_a8.gpio1_24 */
+ >;
+ };
+
+ i2c0_pins: pinmux_i2c0_pins {
+ pinctrl-single,pins = <
+ 0x188 (PIN_INPUT_PULLUP | MUX_MODE0) /* i2c0_sda.i2c0_sda */
+ 0x18c (PIN_INPUT_PULLUP | MUX_MODE0) /* i2c0_scl.i2c0_scl */
+ >;
+ };
+
+ uart0_pins: pinmux_uart0_pins {
+ pinctrl-single,pins = <
+ 0x170 (PIN_INPUT_PULLUP | MUX_MODE0) /* uart0_rxd.uart0_rxd */
+ 0x174 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart0_txd.uart0_txd */
+ >;
+ };
+
+ clkout2_pin: pinmux_clkout2_pin {
+ pinctrl-single,pins = <
+ 0x1b4 (PIN_OUTPUT_PULLDOWN | MUX_MODE3) /* xdma_event_intr1.clkout2 */
+ >;
+ };
+
+ cpsw_default: cpsw_default {
+ pinctrl-single,pins = <
+ /* Slave 1 */
+ 0x110 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxerr.mii1_rxerr */
+ 0x114 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txen.mii1_txen */
+ 0x118 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxdv.mii1_rxdv */
+ 0x11c (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txd3.mii1_txd3 */
+ 0x120 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txd2.mii1_txd2 */
+ 0x124 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txd1.mii1_txd1 */
+ 0x128 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txd0.mii1_txd0 */
+ 0x12c (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_txclk.mii1_txclk */
+ 0x130 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxclk.mii1_rxclk */
+ 0x134 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxd3.mii1_rxd3 */
+ 0x138 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxd2.mii1_rxd2 */
+ 0x13c (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxd1.mii1_rxd1 */
+ 0x140 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxd0.mii1_rxd0 */
+ >;
+ };
+
+ cpsw_sleep: cpsw_sleep {
+ pinctrl-single,pins = <
+ /* Slave 1 reset value */
+ 0x110 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x114 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x118 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x11c (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x120 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x124 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x128 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x12c (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x130 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x134 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x138 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x13c (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x140 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ >;
+ };
+
+ davinci_mdio_default: davinci_mdio_default {
+ pinctrl-single,pins = <
+ /* MDIO */
+ 0x148 (PIN_INPUT_PULLUP | SLEWCTRL_FAST | MUX_MODE0) /* mdio_data.mdio_data */
+ 0x14c (PIN_OUTPUT_PULLUP | MUX_MODE0) /* mdio_clk.mdio_clk */
+ >;
+ };
+
+ davinci_mdio_sleep: davinci_mdio_sleep {
+ pinctrl-single,pins = <
+ /* MDIO reset value */
+ 0x148 (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ 0x14c (PIN_INPUT_PULLDOWN | MUX_MODE7)
+ >;
+ };
+ };
+
+ ocp {
+ uart0: serial@44e09000 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_pins>;
+
+ status = "okay";
+ };
+
+ musb: usb@47400000 {
+ status = "okay";
+
+ control@44e10000 {
+ status = "okay";
+ };
+
+ usb-phy@47401300 {
+ status = "okay";
+ };
+
+ usb-phy@47401b00 {
+ status = "okay";
+ };
+
+ usb@47401000 {
+ status = "okay";
+ };
+
+ usb@47401800 {
+ status = "okay";
+ dr_mode = "host";
+ };
+
+ dma-controller@07402000 {
+ status = "okay";
+ };
+ };
+
+ i2c0: i2c@44e0b000 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c0_pins>;
+
+ status = "okay";
+ clock-frequency = <400000>;
+
+ tps: tps@24 {
+ reg = <0x24>;
+ };
+
+ };
+ };
+
+ leds {
+ pinctrl-names = "default";
+ pinctrl-0 = <&user_leds_s0>;
+
+ compatible = "gpio-leds";
+
+ led@2 {
+ label = "beaglebone:green:heartbeat";
+ gpios = <&gpio1 21 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "heartbeat";
+ default-state = "off";
+ };
+
+ led@3 {
+ label = "beaglebone:green:mmc0";
+ gpios = <&gpio1 22 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "mmc0";
+ default-state = "off";
+ };
+
+ led@4 {
+ label = "beaglebone:green:usr2";
+ gpios = <&gpio1 23 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@5 {
+ label = "beaglebone:green:usr3";
+ gpios = <&gpio1 24 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+ };
+};
+
+/include/ "tps65217.dtsi"
+
+&tps {
+ regulators {
+ dcdc1_reg: regulator@0 {
+ regulator-always-on;
+ };
+
+ dcdc2_reg: regulator@1 {
+ /* VDD_MPU voltage limits 0.95V - 1.26V with +/-4% tolerance */
+ regulator-name = "vdd_mpu";
+ regulator-min-microvolt = <925000>;
+ regulator-max-microvolt = <1325000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ dcdc3_reg: regulator@2 {
+ /* VDD_CORE voltage limits 0.95V - 1.1V with +/-4% tolerance */
+ regulator-name = "vdd_core";
+ regulator-min-microvolt = <925000>;
+ regulator-max-microvolt = <1150000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ ldo1_reg: regulator@3 {
+ regulator-always-on;
+ };
+
+ ldo2_reg: regulator@4 {
+ regulator-always-on;
+ };
+
+ ldo3_reg: regulator@5 {
+ regulator-always-on;
+ };
+
+ ldo4_reg: regulator@6 {
+ regulator-always-on;
+ };
+ };
+};
+
+&cpsw_emac0 {
+ phy_id = <&davinci_mdio>, <0>;
+ phy-mode = "mii";
+};
+
+&cpsw_emac1 {
+ phy_id = <&davinci_mdio>, <1>;
+ phy-mode = "mii";
+};
+
+&mac {
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&cpsw_default>;
+ pinctrl-1 = <&cpsw_sleep>;
+
+};
+
+&davinci_mdio {
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&davinci_mdio_default>;
+ pinctrl-1 = <&davinci_mdio_sleep>;
+};
/dts-v1/;
#include "am33xx.dtsi"
-
-/ {
- model = "TI AM335x BeagleBone";
- compatible = "ti,am335x-bone", "ti,am33xx";
-
- cpus {
- cpu@0 {
- cpu0-supply = <&dcdc2_reg>;
- };
- };
-
- memory {
- device_type = "memory";
- reg = <0x80000000 0x10000000>; /* 256 MB */
- };
-
- am33xx_pinmux: pinmux@44e10800 {
- pinctrl-names = "default";
- pinctrl-0 = <&clkout2_pin>;
-
- user_leds_s0: user_leds_s0 {
- pinctrl-single,pins = <
- 0x54 (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_a5.gpio1_21 */
- 0x58 (PIN_OUTPUT_PULLUP | MUX_MODE7) /* gpmc_a6.gpio1_22 */
- 0x5c (PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_a7.gpio1_23 */
- 0x60 (PIN_OUTPUT_PULLUP | MUX_MODE7) /* gpmc_a8.gpio1_24 */
- >;
- };
-
- i2c0_pins: pinmux_i2c0_pins {
- pinctrl-single,pins = <
- 0x188 (PIN_INPUT_PULLUP | MUX_MODE0) /* i2c0_sda.i2c0_sda */
- 0x18c (PIN_INPUT_PULLUP | MUX_MODE0) /* i2c0_scl.i2c0_scl */
- >;
- };
-
- uart0_pins: pinmux_uart0_pins {
- pinctrl-single,pins = <
- 0x170 (PIN_INPUT_PULLUP | MUX_MODE0) /* uart0_rxd.uart0_rxd */
- 0x174 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart0_txd.uart0_txd */
- >;
- };
-
- clkout2_pin: pinmux_clkout2_pin {
- pinctrl-single,pins = <
- 0x1b4 (PIN_OUTPUT_PULLDOWN | MUX_MODE3) /* xdma_event_intr1.clkout2 */
- >;
- };
-
- cpsw_default: cpsw_default {
- pinctrl-single,pins = <
- /* Slave 1 */
- 0x110 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxerr.mii1_rxerr */
- 0x114 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txen.mii1_txen */
- 0x118 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxdv.mii1_rxdv */
- 0x11c (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txd3.mii1_txd3 */
- 0x120 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txd2.mii1_txd2 */
- 0x124 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txd1.mii1_txd1 */
- 0x128 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mii1_txd0.mii1_txd0 */
- 0x12c (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_txclk.mii1_txclk */
- 0x130 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxclk.mii1_rxclk */
- 0x134 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxd3.mii1_rxd3 */
- 0x138 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxd2.mii1_rxd2 */
- 0x13c (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxd1.mii1_rxd1 */
- 0x140 (PIN_INPUT_PULLUP | MUX_MODE0) /* mii1_rxd0.mii1_rxd0 */
- >;
- };
-
- cpsw_sleep: cpsw_sleep {
- pinctrl-single,pins = <
- /* Slave 1 reset value */
- 0x110 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x114 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x118 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x11c (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x120 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x124 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x128 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x12c (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x130 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x134 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x138 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x13c (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x140 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- >;
- };
-
- davinci_mdio_default: davinci_mdio_default {
- pinctrl-single,pins = <
- /* MDIO */
- 0x148 (PIN_INPUT_PULLUP | SLEWCTRL_FAST | MUX_MODE0) /* mdio_data.mdio_data */
- 0x14c (PIN_OUTPUT_PULLUP | MUX_MODE0) /* mdio_clk.mdio_clk */
- >;
- };
-
- davinci_mdio_sleep: davinci_mdio_sleep {
- pinctrl-single,pins = <
- /* MDIO reset value */
- 0x148 (PIN_INPUT_PULLDOWN | MUX_MODE7)
- 0x14c (PIN_INPUT_PULLDOWN | MUX_MODE7)
- >;
- };
- };
-
- ocp {
- uart0: serial@44e09000 {
- pinctrl-names = "default";
- pinctrl-0 = <&uart0_pins>;
-
- status = "okay";
- };
-
- musb: usb@47400000 {
- status = "okay";
-
- control@44e10000 {
- status = "okay";
- };
-
- usb-phy@47401300 {
- status = "okay";
- };
-
- usb-phy@47401b00 {
- status = "okay";
- };
-
- usb@47401000 {
- status = "okay";
- };
-
- usb@47401800 {
- status = "okay";
- dr_mode = "host";
- };
-
- dma-controller@07402000 {
- status = "okay";
- };
- };
-
- i2c0: i2c@44e0b000 {
- pinctrl-names = "default";
- pinctrl-0 = <&i2c0_pins>;
-
- status = "okay";
- clock-frequency = <400000>;
-
- tps: tps@24 {
- reg = <0x24>;
- };
-
- };
- };
-
- leds {
- pinctrl-names = "default";
- pinctrl-0 = <&user_leds_s0>;
-
- compatible = "gpio-leds";
-
- led@2 {
- label = "beaglebone:green:heartbeat";
- gpios = <&gpio1 21 GPIO_ACTIVE_HIGH>;
- linux,default-trigger = "heartbeat";
- default-state = "off";
- };
-
- led@3 {
- label = "beaglebone:green:mmc0";
- gpios = <&gpio1 22 GPIO_ACTIVE_HIGH>;
- linux,default-trigger = "mmc0";
- default-state = "off";
- };
-
- led@4 {
- label = "beaglebone:green:usr2";
- gpios = <&gpio1 23 GPIO_ACTIVE_HIGH>;
- default-state = "off";
- };
-
- led@5 {
- label = "beaglebone:green:usr3";
- gpios = <&gpio1 24 GPIO_ACTIVE_HIGH>;
- default-state = "off";
- };
- };
-};
-
-/include/ "tps65217.dtsi"
-
-&tps {
- regulators {
- dcdc1_reg: regulator@0 {
- regulator-always-on;
- };
-
- dcdc2_reg: regulator@1 {
- /* VDD_MPU voltage limits 0.95V - 1.26V with +/-4% tolerance */
- regulator-name = "vdd_mpu";
- regulator-min-microvolt = <925000>;
- regulator-max-microvolt = <1325000>;
- regulator-boot-on;
- regulator-always-on;
- };
-
- dcdc3_reg: regulator@2 {
- /* VDD_CORE voltage limits 0.95V - 1.1V with +/-4% tolerance */
- regulator-name = "vdd_core";
- regulator-min-microvolt = <925000>;
- regulator-max-microvolt = <1150000>;
- regulator-boot-on;
- regulator-always-on;
- };
-
- ldo1_reg: regulator@3 {
- regulator-always-on;
- };
-
- ldo2_reg: regulator@4 {
- regulator-always-on;
- };
-
- ldo3_reg: regulator@5 {
- regulator-always-on;
- };
-
- ldo4_reg: regulator@6 {
- regulator-always-on;
- };
- };
-};
-
-&cpsw_emac0 {
- phy_id = <&davinci_mdio>, <0>;
- phy-mode = "mii";
-};
-
-&cpsw_emac1 {
- phy_id = <&davinci_mdio>, <1>;
- phy-mode = "mii";
-};
-
-&mac {
- pinctrl-names = "default", "sleep";
- pinctrl-0 = <&cpsw_default>;
- pinctrl-1 = <&cpsw_sleep>;
-
-};
-
-&davinci_mdio {
- pinctrl-names = "default", "sleep";
- pinctrl-0 = <&davinci_mdio_default>;
- pinctrl-1 = <&davinci_mdio_sleep>;
-};
+#include "am335x-bone-common.dtsi"
--- /dev/null
+/*
+ * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+/dts-v1/;
+
+#include "am33xx.dtsi"
+#include "am335x-bone-common.dtsi"
+
+&ldo3_reg {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+};
};
soc {
+ ranges = <MBUS_ID(0xf0, 0x01) 0 0xd0000000 0x100000
+ MBUS_ID(0x01, 0xe0) 0 0xfff00000 0x100000>;
+
+ pcie-controller {
+ status = "okay";
+
+ /* Connected to Marvell SATA controller */
+ pcie@1,0 {
+ /* Port 0, Lane 0 */
+ status = "okay";
+ };
+
+ /* Connected to FL1009 USB 3.0 controller */
+ pcie@2,0 {
+ /* Port 1, Lane 0 */
+ status = "okay";
+ };
+ };
+
internal-regs {
serial@12000 {
clock-frequency = <200000000>;
marvell,pins = "mpp56";
marvell,function = "gpio";
};
+
+ poweroff: poweroff {
+ marvell,pins = "mpp8";
+ marvell,function = "gpio";
+ };
};
mdio {
pwm_polarity = <0>;
};
};
-
- pcie-controller {
- status = "okay";
-
- /* Connected to Marvell SATA controller */
- pcie@1,0 {
- /* Port 0, Lane 0 */
- status = "okay";
- };
-
- /* Connected to FL1009 USB 3.0 controller */
- pcie@2,0 {
- /* Port 1, Lane 0 */
- status = "okay";
- };
- };
};
};
button@1 {
label = "Power Button";
linux,code = <116>; /* KEY_POWER */
- gpios = <&gpio1 30 1>;
+ gpios = <&gpio1 30 0>;
};
button@2 {
};
};
+ gpio_poweroff {
+ compatible = "gpio-poweroff";
+ pinctrl-0 = <&poweroff>;
+ pinctrl-names = "default";
+ gpios = <&gpio0 8 1>;
+ };
+
};
timer@20300 {
compatible = "marvell,armada-xp-timer";
+ clocks = <&coreclk 2>, <&refclk>;
+ clock-names = "nbclk", "fixed";
};
coreclk: mvebu-sar@18230 {
};
};
};
+
+ clocks {
+ /* 25 MHz reference crystal */
+ refclk: oscillator {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <25000000>;
+ };
+ };
};
AT91_PIOA 8 AT91_PERIPH_A AT91_PINCTRL_NONE>; /* PA8 periph A */
};
- pinctrl_uart2_rts: uart2_rts-0 {
+ pinctrl_usart2_rts: usart2_rts-0 {
atmel,pins =
<AT91_PIOB 0 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PB0 periph B */
};
- pinctrl_uart2_cts: uart2_cts-0 {
+ pinctrl_usart2_cts: usart2_cts-0 {
atmel,pins =
<AT91_PIOB 1 AT91_PERIPH_B AT91_PINCTRL_NONE>; /* PB1 periph B */
};
interrupts = <12 IRQ_TYPE_LEVEL_HIGH 0>;
dmas = <&dma0 1 AT91_DMA_CFG_PER_ID(0)>;
dma-names = "rxtx";
+ pinctrl-names = "default";
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
interrupts = <26 IRQ_TYPE_LEVEL_HIGH 0>;
dmas = <&dma1 1 AT91_DMA_CFG_PER_ID(0)>;
dma-names = "rxtx";
+ pinctrl-names = "default";
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
interrupts = <17>;
fifosize = <128>;
clocks = <&clks 13>;
+ sirf,uart-dma-rx-channel = <21>;
+ sirf,uart-dma-tx-channel = <2>;
};
uart1: uart@b0060000 {
interrupts = <19>;
fifosize = <128>;
clocks = <&clks 15>;
+ sirf,uart-dma-rx-channel = <6>;
+ sirf,uart-dma-tx-channel = <7>;
};
usp0: usp@b0080000 {
compatible = "sirf,prima2-usp";
reg = <0xb0080000 0x10000>;
interrupts = <20>;
+ fifosize = <128>;
clocks = <&clks 28>;
+ sirf,usp-dma-rx-channel = <17>;
+ sirf,usp-dma-tx-channel = <18>;
};
usp1: usp@b0090000 {
compatible = "sirf,prima2-usp";
reg = <0xb0090000 0x10000>;
interrupts = <21>;
+ fifosize = <128>;
clocks = <&clks 29>;
+ sirf,usp-dma-rx-channel = <14>;
+ sirf,usp-dma-tx-channel = <15>;
};
dmac0: dma-controller@b00b0000 {
compatible = "sirf,prima2-vip";
reg = <0xb00C0000 0x10000>;
clocks = <&clks 31>;
+ interrupts = <14>;
+ sirf,vip-dma-rx-channel = <16>;
};
spi0: spi@b00d0000 {
<1 14 0xf08>,
<1 11 0xf08>,
<1 10 0xf08>;
+ /* Unfortunately we need this since some versions of U-Boot
+ * on Exynos don't set the CNTFRQ register, so we need the
+ * value from DT.
+ */
+ clock-frequency = <24000000>;
};
mct@101C0000 {
compatible = "fsl,imx27-cspi";
reg = <0x1000e000 0x1000>;
interrupts = <16>;
- clocks = <&clks 53>, <&clks 53>;
+ clocks = <&clks 53>, <&clks 60>;
clock-names = "ipg", "per";
status = "disabled";
};
compatible = "fsl,imx27-cspi";
reg = <0x1000f000 0x1000>;
interrupts = <15>;
- clocks = <&clks 52>, <&clks 52>;
+ clocks = <&clks 52>, <&clks 60>;
clock-names = "ipg", "per";
status = "disabled";
};
compatible = "fsl,imx27-cspi";
reg = <0x10017000 0x1000>;
interrupts = <6>;
- clocks = <&clks 51>, <&clks 51>;
+ clocks = <&clks 51>, <&clks 60>;
clock-names = "ipg", "per";
status = "disabled";
};
compatible = "fsl,imx51-pata", "fsl,imx27-pata";
reg = <0x83fe0000 0x4000>;
interrupts = <70>;
- clocks = <&clks 161>;
+ clocks = <&clks 172>;
status = "disabled";
};
#define MX6QDL_PAD_EIM_D29__ECSPI4_SS0 0x0c8 0x3dc 0x824 0x2 0x1
#define MX6QDL_PAD_EIM_D29__UART2_RTS_B 0x0c8 0x3dc 0x924 0x4 0x1
#define MX6QDL_PAD_EIM_D29__UART2_CTS_B 0x0c8 0x3dc 0x000 0x4 0x0
-#define MX6QDL_PAD_EIM_D29__UART2_DTE_RTS_B 0x0c4 0x3dc 0x000 0x4 0x0
-#define MX6QDL_PAD_EIM_D29__UART2_DTE_CTS_B 0x0c4 0x3dc 0x924 0x4 0x1
+#define MX6QDL_PAD_EIM_D29__UART2_DTE_RTS_B 0x0c8 0x3dc 0x000 0x4 0x0
+#define MX6QDL_PAD_EIM_D29__UART2_DTE_CTS_B 0x0c8 0x3dc 0x924 0x4 0x1
#define MX6QDL_PAD_EIM_D29__GPIO3_IO29 0x0c8 0x3dc 0x000 0x5 0x0
#define MX6QDL_PAD_EIM_D29__IPU2_CSI1_VSYNC 0x0c8 0x3dc 0x8e4 0x6 0x0
#define MX6QDL_PAD_EIM_D29__IPU1_DI0_PIN14 0x0c8 0x3dc 0x000 0x7 0x0
model = "ARM Integrator/CP";
compatible = "arm,integrator-cp";
- aliases {
- arm,timer-primary = &timer2;
- arm,timer-secondary = &timer1;
- };
-
chosen {
bootargs = "root=/dev/ram0 console=ttyAMA0,38400n8 earlyprintk";
};
};
timer0: timer@13000000 {
+ /* TIMER0 runs @ 25MHz */
compatible = "arm,integrator-cp-timer";
+ status = "disabled";
};
timer1: timer@13000100 {
+ /* TIMER1 runs @ 1MHz */
compatible = "arm,integrator-cp-timer";
};
timer2: timer@13000200 {
+ /* TIMER2 runs @ 1MHz */
compatible = "arm,integrator-cp-timer";
};
cpu@0 {
device_type = "cpu";
compatible = "marvell,feroceon";
+ reg = <0>;
clocks = <&core_clk 1>, <&core_clk 3>, <&gate_clk 11>;
clock-names = "cpu_clk", "ddrclk", "powersave";
};
xor@60900 {
compatible = "marvell,orion-xor";
reg = <0x60900 0x100
- 0xd0B00 0x100>;
+ 0x60B00 0x100>;
status = "okay";
clocks = <&gate_clk 16>;
/ {
model = "TI OMAP3 BeagleBoard xM";
- compatible = "ti,omap3-beagle-xm, ti,omap3-beagle", "ti,omap3";
+ compatible = "ti,omap3-beagle-xm", "ti,omap36xx", "ti,omap3";
cpus {
cpu@0 {
>;
};
+ mcbsp2_pins: pinmux_mcbsp2_pins {
+ pinctrl-single,pins = <
+ 0x10c (PIN_INPUT | MUX_MODE0) /* mcbsp2_fsx.mcbsp2_fsx */
+ 0x10e (PIN_INPUT | MUX_MODE0) /* mcbsp2_clkx.mcbsp2_clkx */
+ 0x110 (PIN_INPUT | MUX_MODE0) /* mcbsp2_dr.mcbsp2.dr */
+ 0x112 (PIN_OUTPUT | MUX_MODE0) /* mcbsp2_dx.mcbsp2_dx */
+ >;
+ };
+
mmc1_pins: pinmux_mmc1_pins {
pinctrl-single,pins = <
0x114 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc1_clk.sdmmc1_clk */
clock-frequency = <400000>;
};
+&mcbsp2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mcbsp2_pins>;
+};
+
&mmc1 {
pinctrl-names = "default";
pinctrl-0 = <&mmc1_pins>;
#address-cells = <1>;
#size-cells = <0>;
pinctrl-single,register-width = <16>;
- pinctrl-single,function-mask = <0x7f1f>;
+ pinctrl-single,function-mask = <0xff1f>;
};
omap3_pmx_wkup: pinmux@0x48002a00 {
#address-cells = <1>;
#size-cells = <0>;
pinctrl-single,register-width = <16>;
- pinctrl-single,function-mask = <0x7f1f>;
+ pinctrl-single,function-mask = <0xff1f>;
};
gpio1: gpio@48310000 {
*/
clock-frequency = <19200000>;
};
+
+ /* regulator for wl12xx on sdio5 */
+ wl12xx_vmmc: wl12xx_vmmc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&wl12xx_gpio>;
+ compatible = "regulator-fixed";
+ regulator-name = "vwl1271";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ gpio = <&gpio2 11 0>;
+ startup-delay-us = <70000>;
+ enable-active-high;
+ };
};
&omap4_pmx_wkup {
0x1c (PIN_OUTPUT | MUX_MODE3) /* gpio_wk8 */
>;
};
+
+ /*
+ * wl12xx GPIO outputs for WLAN_EN, BT_EN, FM_EN, BT_WAKEUP
+ * REVISIT: Are the pull-ups needed for GPIO 48 and 49?
+ */
+ wl12xx_gpio: pinmux_wl12xx_gpio {
+ pinctrl-single,pins = <
+ 0x26 (PIN_OUTPUT | MUX_MODE3) /* gpmc_a19.gpio_43 */
+ 0x2c (PIN_OUTPUT | MUX_MODE3) /* gpmc_a22.gpio_46 */
+ 0x30 (PIN_OUTPUT_PULLUP | MUX_MODE3) /* gpmc_a24.gpio_48 */
+ 0x32 (PIN_OUTPUT_PULLUP | MUX_MODE3) /* gpmc_a25.gpio_49 */
+ >;
+ };
+
+ /* wl12xx GPIO inputs and SDIO pins */
+ wl12xx_pins: pinmux_wl12xx_pins {
+ pinctrl-single,pins = <
+ 0x38 (PIN_INPUT | MUX_MODE3) /* gpmc_ncs2.gpio_52 */
+ 0x3a (PIN_INPUT | MUX_MODE3) /* gpmc_ncs3.gpio_53 */
+ 0x108 (PIN_OUTPUT | MUX_MODE0) /* sdmmc5_clk.sdmmc5_clk */
+ 0x10a (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_cmd.sdmmc5_cmd */
+ 0x10c (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat0.sdmmc5_dat0 */
+ 0x10e (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat1.sdmmc5_dat1 */
+ 0x110 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat2.sdmmc5_dat2 */
+ 0x112 (PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc5_dat3.sdmmc5_dat3 */
+ >;
+ };
};
&i2c1 {
};
&mmc5 {
- ti,non-removable;
+ pinctrl-names = "default";
+ pinctrl-0 = <&wl12xx_pins>;
+ vmmc-supply = <&wl12xx_vmmc>;
+ non-removable;
bus-width = <4>;
+ cap-power-off-card;
};
&emif1 {
"DMic", "Digital Mic",
"Digital Mic", "Digital Mic1 Bias";
};
+
+ /* regulator for wl12xx on sdio5 */
+ wl12xx_vmmc: wl12xx_vmmc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&wl12xx_gpio>;
+ compatible = "regulator-fixed";
+ regulator-name = "vwl1271";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ gpio = <&gpio2 22 0>;
+ startup-delay-us = <70000>;
+ enable-active-high;
+ };
};
&omap4_pmx_wkup {
0xf0 (PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_sda */
>;
};
+
+ /* wl12xx GPIO output for WLAN_EN */
+ wl12xx_gpio: pinmux_wl12xx_gpio {
+ pinctrl-single,pins = <
+ 0x3c (PIN_OUTPUT | MUX_MODE3) /* gpmc_nwp.gpio_54 */
+ >;
+ };
+
+ /* wl12xx GPIO inputs and SDIO pins */
+ wl12xx_pins: pinmux_wl12xx_pins {
+ pinctrl-single,pins = <
+ 0x3a (PIN_INPUT | MUX_MODE3) /* gpmc_ncs3.gpio_53 */
+ 0x108 (PIN_OUTPUT | MUX_MODE3) /* sdmmc5_clk.sdmmc5_clk */
+ 0x10a (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_cmd.sdmmc5_cmd */
+ 0x10c (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat0.sdmmc5_dat0 */
+ 0x10e (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat1.sdmmc5_dat1 */
+ 0x110 (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat2.sdmmc5_dat2 */
+ 0x112 (PIN_INPUT_PULLUP | MUX_MODE3) /* sdmmc5_dat3.sdmmc5_dat3 */
+ >;
+ };
};
&i2c1 {
};
&mmc5 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&wl12xx_pins>;
+ vmmc-supply = <&wl12xx_vmmc>;
+ non-removable;
bus-width = <4>;
- ti,non-removable;
+ cap-power-off-card;
};
&emif1 {
omap_dwc3@4a020000 {
compatible = "ti,dwc3";
ti,hwmods = "usb_otg_ss";
- reg = <0x4a020000 0x1000>;
+ reg = <0x4a020000 0x10000>;
interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <1>;
ranges;
dwc3@4a030000 {
compatible = "snps,dwc3";
- reg = <0x4a030000 0x1000>;
+ reg = <0x4a030000 0x10000>;
interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_HIGH>;
usb-phy = <&usb2_phy>, <&usb3_phy>;
tx-fifo-resize;
};
};
- ocp2scp {
+ ocp2scp@4a080000 {
compatible = "ti,omap-ocp2scp";
#address-cells = <1>;
#size-cells = <1>;
+ reg = <0x4a080000 0x20>;
ranges;
ti,hwmods = "ocp2scp1";
usb2_phy: usb2phy@4a084000 {
compatible = "simple-bus";
#address-cells = <1>;
#size-cells = <1>;
- ranges = <0xb0000000 0xb0000000 0x180000>;
+ ranges = <0xb0000000 0xb0000000 0x180000>,
+ <0x56000000 0x56000000 0x1b00000>;
timer@b0020000 {
compatible = "sirf,prima2-tick";
uart0: uart@b0050000 {
cell-index = <0>;
compatible = "sirf,prima2-uart";
- reg = <0xb0050000 0x10000>;
+ reg = <0xb0050000 0x1000>;
interrupts = <17>;
+ fifosize = <128>;
clocks = <&clks 13>;
+ sirf,uart-dma-rx-channel = <21>;
+ sirf,uart-dma-tx-channel = <2>;
};
uart1: uart@b0060000 {
cell-index = <1>;
compatible = "sirf,prima2-uart";
- reg = <0xb0060000 0x10000>;
+ reg = <0xb0060000 0x1000>;
interrupts = <18>;
+ fifosize = <32>;
clocks = <&clks 14>;
};
uart2: uart@b0070000 {
cell-index = <2>;
compatible = "sirf,prima2-uart";
- reg = <0xb0070000 0x10000>;
+ reg = <0xb0070000 0x1000>;
interrupts = <19>;
+ fifosize = <128>;
clocks = <&clks 15>;
+ sirf,uart-dma-rx-channel = <6>;
+ sirf,uart-dma-tx-channel = <7>;
};
usp0: usp@b0080000 {
compatible = "sirf,prima2-usp";
reg = <0xb0080000 0x10000>;
interrupts = <20>;
+ fifosize = <128>;
clocks = <&clks 28>;
+ sirf,usp-dma-rx-channel = <17>;
+ sirf,usp-dma-tx-channel = <18>;
};
usp1: usp@b0090000 {
compatible = "sirf,prima2-usp";
reg = <0xb0090000 0x10000>;
interrupts = <21>;
+ fifosize = <128>;
clocks = <&clks 29>;
+ sirf,usp-dma-rx-channel = <14>;
+ sirf,usp-dma-tx-channel = <15>;
};
usp2: usp@b00a0000 {
compatible = "sirf,prima2-usp";
reg = <0xb00a0000 0x10000>;
interrupts = <22>;
+ fifosize = <128>;
clocks = <&clks 30>;
+ sirf,usp-dma-rx-channel = <10>;
+ sirf,usp-dma-tx-channel = <11>;
};
dmac0: dma-controller@b00b0000 {
compatible = "sirf,prima2-vip";
reg = <0xb00C0000 0x10000>;
clocks = <&clks 31>;
+ interrupts = <14>;
+ sirf,vip-dma-rx-channel = <16>;
};
spi0: spi@b00d0000 {
};
sdhi0: sdhi@ee100000 {
- compatible = "renesas,r8a73a4-sdhi";
+ compatible = "renesas,sdhi-r8a73a4";
reg = <0 0xee100000 0 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 165 4>;
};
sdhi1: sdhi@ee120000 {
- compatible = "renesas,r8a73a4-sdhi";
+ compatible = "renesas,sdhi-r8a73a4";
reg = <0 0xee120000 0 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 166 4>;
};
sdhi2: sdhi@ee140000 {
- compatible = "renesas,r8a73a4-sdhi";
+ compatible = "renesas,sdhi-r8a73a4";
reg = <0 0xee140000 0 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 167 4>;
pfc: pfc@fffc0000 {
compatible = "renesas,pfc-r8a7778";
reg = <0xfffc000 0x118>;
- #gpio-range-cells = <3>;
};
};
pfc: pfc@fffc0000 {
compatible = "renesas,pfc-r8a7779";
reg = <0xfffc0000 0x23c>;
- #gpio-range-cells = <3>;
};
thermal@ffc48000 {
pfc: pfc@e6060000 {
compatible = "renesas,pfc-r8a7790";
reg = <0 0xe6060000 0 0x250>;
- #gpio-range-cells = <3>;
};
sdhi0: sdhi@ee100000 {
- compatible = "renesas,r8a7790-sdhi";
+ compatible = "renesas,sdhi-r8a7790";
reg = <0 0xee100000 0 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 165 4>;
};
sdhi1: sdhi@ee120000 {
- compatible = "renesas,r8a7790-sdhi";
+ compatible = "renesas,sdhi-r8a7790";
reg = <0 0xee120000 0 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 166 4>;
};
sdhi2: sdhi@ee140000 {
- compatible = "renesas,r8a7790-sdhi";
+ compatible = "renesas,sdhi-r8a7790";
reg = <0 0xee140000 0 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 167 4>;
};
sdhi3: sdhi@ee160000 {
- compatible = "renesas,r8a7790-sdhi";
+ compatible = "renesas,sdhi-r8a7790";
reg = <0 0xee160000 0 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 168 4>;
};
sdhi0: sdhi@ee100000 {
- compatible = "renesas,r8a7740-sdhi";
+ compatible = "renesas,sdhi-r8a7740";
reg = <0xee100000 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 83 4
/* SDHI1 and SDHI2 have no CD pins, no need for CD IRQ */
sdhi1: sdhi@ee120000 {
- compatible = "renesas,r8a7740-sdhi";
+ compatible = "renesas,sdhi-r8a7740";
reg = <0xee120000 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 88 4
};
sdhi2: sdhi@ee140000 {
- compatible = "renesas,r8a7740-sdhi";
+ compatible = "renesas,sdhi-r8a7740";
reg = <0xee140000 0x100>;
interrupt-parent = <&gic>;
interrupts = <0 104 4
# $4 - default install path (blank if root directory)
#
+verify () {
+ if [ ! -f "$1" ]; then
+ echo "" 1>&2
+ echo " *** Missing file: $1" 1>&2
+ echo ' *** You need to run "make" before "make install".' 1>&2
+ echo "" 1>&2
+ exit 1
+ fi
+}
+
+# Make sure the files actually exist
+verify "$2"
+verify "$3"
+
# User may have a custom install script
if [ -x ~/bin/${INSTALLKERNEL} ]; then exec ~/bin/${INSTALLKERNEL} "$@"; fi
if [ -x /sbin/${INSTALLKERNEL} ]; then exec /sbin/${INSTALLKERNEL} "$@"; fi
.ccnt = 1,
};
+static const struct of_device_id edma_of_ids[] = {
+ { .compatible = "ti,edma3", },
+ {}
+};
+
/*****************************************************************************/
static void map_dmach_queue(unsigned ctlr, unsigned ch_no,
static int prepare_unused_channel_list(struct device *dev, void *data)
{
struct platform_device *pdev = to_platform_device(dev);
- int i, ctlr;
+ int i, count, ctlr;
+ struct of_phandle_args dma_spec;
+ if (dev->of_node) {
+ count = of_property_count_strings(dev->of_node, "dma-names");
+ if (count < 0)
+ return 0;
+ for (i = 0; i < count; i++) {
+ if (of_parse_phandle_with_args(dev->of_node, "dmas",
+ "#dma-cells", i,
+ &dma_spec))
+ continue;
+
+ if (!of_match_node(edma_of_ids, dma_spec.np)) {
+ of_node_put(dma_spec.np);
+ continue;
+ }
+
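+			/* channel referenced by this device's DT "dmas": mark it as in use */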
+ clear_bit(EDMA_CHAN_SLOT(dma_spec.args[0]),
+ edma_cc[0]->edma_unused);
+ of_node_put(dma_spec.np);
+ }
+ return 0;
+ }
+
+ /* For non-OF case */
for (i = 0; i < pdev->num_resources; i++) {
if ((pdev->resource[i].flags & IORESOURCE_DMA) &&
(int)pdev->resource[i].start >= 0) {
ctlr = EDMA_CTLR(pdev->resource[i].start);
clear_bit(EDMA_CHAN_SLOT(pdev->resource[i].start),
- edma_cc[ctlr]->edma_unused);
+ edma_cc[ctlr]->edma_unused);
}
}
return 0;
}
-static const struct of_device_id edma_of_ids[] = {
- { .compatible = "ti,edma3", },
- {}
-};
-
static struct platform_driver edma_driver = {
.driver = {
.name = "edma",
{
phys_reset_t phys_reset;
- BUG_ON(!platform_ops);
+ if (WARN_ON_ONCE(!platform_ops || !platform_ops->power_down))
+ return;
BUG_ON(!irqs_disabled());
/*
{
phys_reset_t phys_reset;
- BUG_ON(!platform_ops);
+ if (WARN_ON_ONCE(!platform_ops || !platform_ops->suspend))
+ return;
BUG_ON(!irqs_disabled());
/* Very similar to mcpm_cpu_power_down() */
#include <linux/module.h>
#include <linux/string.h>
#include <asm/mach/sharpsl_param.h>
+#include <asm/memory.h>
/*
* Certain hardware parameters determined at the time of device manufacture,
*/
#ifdef CONFIG_ARCH_SA1100
#define PARAM_BASE 0xe8ffc000
+#define param_start(x) (void *)(x)
#else
#define PARAM_BASE 0xa0000a00
+#define param_start(x) __va(x)
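+/* here PARAM_BASE is a physical address; map it through the linear mapping */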
#endif
#define MAGIC_CHG(a,b,c,d) ( ( d << 24 ) | ( c << 16 ) | ( b << 8 ) | a )
void sharpsl_save_param(void)
{
- memcpy(&sharpsl_param, (void *)PARAM_BASE, sizeof(struct sharpsl_param_info));
+ memcpy(&sharpsl_param, param_start(PARAM_BASE), sizeof(struct sharpsl_param_info));
if (sharpsl_param.comadj_keyword != COMADJ_MAGIC)
sharpsl_param.comadj=-1;
CONFIG_TEGRA_PCI=y
CONFIG_TEGRA_EMC_SCALING_ENABLE=y
CONFIG_ARCH_U8500=y
+CONFIG_MACH_HREFV60=y
CONFIG_MACH_SNOWBALL=y
CONFIG_MACH_UX500_DT=y
CONFIG_ARCH_VEXPRESS=y
CONFIG_SMP=y
CONFIG_HIGHPTE=y
CONFIG_ARM_APPENDED_DTB=y
+CONFIG_ARM_ATAG_DTB_COMPAT=y
CONFIG_NET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_MMC_ARMMMCI=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SDHCI_ESDHC_IMX=y
CONFIG_MMC_SDHCI_TEGRA=y
CONFIG_MMC_SDHCI_SPEAR=y
CONFIG_MMC_OMAP=y
@ const AES_KEY *key) {
.align 5
ENTRY(AES_encrypt)
- sub r3,pc,#8 @ AES_encrypt
+ adr r3,AES_encrypt
stmdb sp!,{r1,r4-r12,lr}
mov r12,r0 @ inp
mov r11,r2
.align 5
ENTRY(private_AES_set_encrypt_key)
_armv4_AES_set_encrypt_key:
- sub r3,pc,#8 @ AES_set_encrypt_key
+ adr r3,_armv4_AES_set_encrypt_key
teq r0,#0
moveq r0,#-1
beq .Labrt
@ const AES_KEY *key) {
.align 5
ENTRY(AES_decrypt)
- sub r3,pc,#8 @ AES_decrypt
+ adr r3,AES_decrypt
stmdb sp!,{r1,r4-r12,lr}
mov r12,r0 @ inp
mov r11,r2
generic-y += termios.h
generic-y += timex.h
generic-y += trace_clock.h
-generic-y += types.h
generic-y += unaligned.h
static __always_inline bool arch_static_branch(struct static_key *key)
{
- asm goto("1:\n\t"
+ asm_volatile_goto("1:\n\t"
JUMP_LABEL_NOP "\n\t"
".pushsection __jump_table, \"aw\"\n\t"
".word 1b, %l[l_yes], %c0\n\t"
*
* This must be called with interrupts disabled.
*
- * This does not return. Re-entry in the kernel is expected via
- * mcpm_entry_point.
+ * On success this does not return. Re-entry in the kernel is expected
+ * via mcpm_entry_point.
+ *
+ * This will return if mcpm_platform_register() has not been called
+ * previously, in which case the caller should take appropriate action.
*/
void mcpm_cpu_power_down(void);
*
* This must be called with interrupts disabled.
*
- * This does not return. Re-entry in the kernel is expected via
- * mcpm_entry_point.
+ * On success this does not return. Re-entry in the kernel is expected
+ * via mcpm_entry_point.
+ *
+ * This will return if mcpm_platform_register() has not been called
+ * previously, in which case the caller should take appropriate action.
*/
void mcpm_cpu_suspend(u64 expected_residency);
unsigned int i, unsigned int n,
unsigned long *args)
{
+ if (n == 0)
+ return;
+
if (i + n > SYSCALL_MAX_ARGS) {
unsigned long *args_bad = args + SYSCALL_MAX_ARGS - i;
unsigned int n_bad = n + i - SYSCALL_MAX_ARGS;
unsigned int i, unsigned int n,
const unsigned long *args)
{
+ if (n == 0)
+ return;
+
if (i + n > SYSCALL_MAX_ARGS) {
pr_warning("%s called with max args %d, handling only %d\n",
__func__, i + n, SYSCALL_MAX_ARGS);
#include <asm/unified.h>
#include <asm/compiler.h>
+#if __LINUX_ARM_ARCH__ < 6
+#include <asm-generic/uaccess-unaligned.h>
+#else
+#define __get_user_unaligned __get_user
+#define __put_user_unaligned __put_user
+#endif
+
#define VERIFY_READ 0
#define VERIFY_WRITE 1
ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine
add r1, sp, #S_OFF
- cmp scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE)
+2: cmp scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE)
eor r0, scno, #__NR_SYSCALL_BASE @ put OS number back
bcs arm_syscall
-2: mov why, #0 @ no longer a real syscall
+ mov why, #0 @ no longer a real syscall
b sys_ni_syscall @ not private func
#if defined(CONFIG_OABI_COMPAT) || !defined(CONFIG_AEABI)
#ifdef CONFIG_CONTEXT_TRACKING
.if \save
stmdb sp!, {r0-r3, ip, lr}
- bl user_exit
+ bl context_tracking_user_exit
ldmia sp!, {r0-r3, ip, lr}
.else
- bl user_exit
+ bl context_tracking_user_exit
.endif
#endif
.endm
#ifdef CONFIG_CONTEXT_TRACKING
.if \save
stmdb sp!, {r0-r3, ip, lr}
- bl user_enter
+ bl context_tracking_user_enter
ldmia sp!, {r0-r3, ip, lr}
.else
- bl user_enter
+ bl context_tracking_user_enter
.endif
#endif
.endm
mrc p15, 0, r0, c0, c0, 5 @ read MPIDR
and r0, r0, #0xc0000000 @ multiprocessing extensions and
teq r0, #0x80000000 @ not part of a uniprocessor system?
- moveq pc, lr @ yes, assume SMP
+ bne __fixup_smp_on_up @ no, assume UP
+
+ @ Core indicates it is SMP. Check for Aegis SOC where a single
+ @ Cortex-A9 CPU is present but SMP operations fault.
+ mov r4, #0x41000000
+ orr r4, r4, #0x0000c000
+ orr r4, r4, #0x00000090
+ teq r3, r4 @ Check for ARM Cortex-A9
+ movne pc, lr @ Not ARM Cortex-A9,
+
+ @ If a future SoC *does* use 0x0 as the PERIPH_BASE, then the
+ @ below address check will need to be #ifdef'd or equivalent
+ @ for the Aegis platform.
+ mrc p15, 4, r0, c15, c0 @ get SCU base address
+ teq r0, #0x0 @ '0' on actual UP A9 hardware
+ beq __fixup_smp_on_up @ So it's an A9 UP
+ ldr r0, [r0, #4] @ read SCU Config
+ and r0, r0, #0x3 @ number of CPUs
+ teq r0, #0x0 @ is 1?
+ movne pc, lr
__fixup_smp_on_up:
adr r0, 1f
*/
int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
{
- struct kvm_regs *cpu_reset;
+ struct kvm_regs *reset_regs;
const struct kvm_irq_level *cpu_vtimer_irq;
switch (vcpu->arch.target) {
case KVM_ARM_TARGET_CORTEX_A15:
if (vcpu->vcpu_id > a15_max_cpu_idx)
return -EINVAL;
- cpu_reset = &a15_regs_reset;
+ reset_regs = &a15_regs_reset;
vcpu->arch.midr = read_cpuid_id();
cpu_vtimer_irq = &a15_vtimer_irq;
break;
}
/* Reset core registers */
- memcpy(&vcpu->arch.regs, cpu_reset, sizeof(vcpu->arch.regs));
+ memcpy(&vcpu->arch.regs, reset_regs, sizeof(vcpu->arch.regs));
/* Reset CP15 registers */
kvm_reset_coprocs(vcpu);
static struct irqaction at91rm9200_timer_irq = {
.name = "at91_tick",
- .flags = IRQF_SHARED | IRQF_DISABLED | IRQF_TIMER | IRQF_IRQPOLL,
+ .flags = IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL,
.handler = at91rm9200_timer_interrupt,
.irq = NR_IRQS_LEGACY + AT91_ID_SYS,
};
static struct irqaction at91sam926x_pit_irq = {
.name = "at91_tick",
- .flags = IRQF_SHARED | IRQF_DISABLED | IRQF_TIMER | IRQF_IRQPOLL,
+ .flags = IRQF_SHARED | IRQF_TIMER | IRQF_IRQPOLL,
.handler = at91sam926x_pit_interrupt,
.irq = NR_IRQS_LEGACY + AT91_ID_SYS,
};
#include "at91_rstc.h"
.arm
+/*
+ * at91_ramc_base is an array of void*
+ * initialized to NULL if only one DDR controller is present in the DT
+ */
.globl at91sam9g45_restart
at91sam9g45_restart:
ldr r5, =at91_ramc_base @ preload constants
ldr r0, [r5]
+ ldr r5, [r5, #4] @ ddr1
+ cmp r5, #0
ldr r4, =at91_rstc_base
ldr r1, [r4]
.balign 32 @ align to cache line
+ strne r2, [r5, #AT91_DDRSDRC_RTR] @ disable DDR1 access
+ strne r3, [r5, #AT91_DDRSDRC_LPR] @ power down DDR1
str r2, [r0, #AT91_DDRSDRC_RTR] @ disable DDR0 access
str r3, [r0, #AT91_DDRSDRC_LPR] @ power down DDR0
str r4, [r1, #AT91_RSTC_CR] @ reset processor
static struct irqaction at91x40_timer_irq = {
.name = "at91_tick",
- .flags = IRQF_DISABLED | IRQF_TIMER,
+ .flags = IRQF_TIMER,
.handler = at91x40_timer_interrupt
};
.context = (void *)0x7f00,
};
-static struct snd_platform_data dm365_evm_snd_data = {
+static struct snd_platform_data dm365_evm_snd_data __maybe_unused = {
.asp_chan_q = EVENTQ_3,
};
#include <mach/hardware.h>
-#include <linux/platform_device.h>
-
#define DAVINCI_UART0_BASE (IO_PHYS + 0x20000)
#define DAVINCI_UART1_BASE (IO_PHYS + 0x20400)
#define DAVINCI_UART2_BASE (IO_PHYS + 0x20800)
#define UART_DM646X_SCR_TX_WATERMARK 0x08
#ifndef __ASSEMBLY__
+#include <linux/platform_device.h>
+
extern int davinci_serial_init(struct platform_device *);
#endif
init.ops = &clk_fixup_mux_ops;
init.parent_names = parents;
init.num_parents = num_parents;
+ init.flags = 0;
fixup_mux->mux.reg = reg;
fixup_mux->mux.shift = shift;
clk_register_clkdev(clk[ata_ahb_gate], "ata", NULL);
clk_register_clkdev(clk[rtc_ipg_gate], NULL, "imx21-rtc");
clk_register_clkdev(clk[scc_ipg_gate], "scc", NULL);
- clk_register_clkdev(clk[cpu_div], NULL, "cpufreq-cpu0.0");
+ clk_register_clkdev(clk[cpu_div], NULL, "cpu0");
clk_register_clkdev(clk[emi_ahb_gate], "emi_ahb" , NULL);
mxc_timer_init(MX27_IO_ADDRESS(MX27_GPT1_BASE_ADDR), MX27_INT_GPT1);
clk_register_clkdev(clk[ssi2_ipg_gate], NULL, "imx-ssi.1");
clk_register_clkdev(clk[ssi3_ipg_gate], NULL, "imx-ssi.2");
clk_register_clkdev(clk[sdma_gate], NULL, "imx35-sdma");
- clk_register_clkdev(clk[cpu_podf], NULL, "cpufreq-cpu0.0");
+ clk_register_clkdev(clk[cpu_podf], NULL, "cpu0");
clk_register_clkdev(clk[iim_gate], "iim", NULL);
clk_register_clkdev(clk[dummy], NULL, "imx2-wdt.0");
clk_register_clkdev(clk[dummy], NULL, "imx2-wdt.1");
mx51_spdif_xtal_sel, ARRAY_SIZE(mx51_spdif_xtal_sel));
clk[spdif1_sel] = imx_clk_mux("spdif1_sel", MXC_CCM_CSCMR2, 2, 2,
spdif_sel, ARRAY_SIZE(spdif_sel));
- clk[spdif1_pred] = imx_clk_divider("spdif1_podf", "spdif1_sel", MXC_CCM_CDCDR, 16, 3);
+ clk[spdif1_pred] = imx_clk_divider("spdif1_pred", "spdif1_sel", MXC_CCM_CDCDR, 16, 3);
clk[spdif1_podf] = imx_clk_divider("spdif1_podf", "spdif1_pred", MXC_CCM_CDCDR, 9, 6);
clk[spdif1_com_sel] = imx_clk_mux("spdif1_com_sel", MXC_CCM_CSCMR2, 5, 1,
mx51_spdif1_com_sel, ARRAY_SIZE(mx51_spdif1_com_sel));
of_node_put(np);
}
-static void __init imx6q_opp_init(struct device *cpu_dev)
+static void __init imx6q_opp_init(void)
{
struct device_node *np;
+ struct device *cpu_dev = get_cpu_device(0);
+ if (!cpu_dev) {
+ pr_warn("failed to get cpu0 device\n");
+ return;
+ }
np = of_node_get(cpu_dev->of_node);
if (!np) {
pr_warn("failed to find cpu0 node\n");
imx6q_cpuidle_init();
if (IS_ENABLED(CONFIG_ARM_IMX6Q_CPUFREQ)) {
- imx6q_opp_init(&imx6q_cpufreq_pdev.dev);
+ imx6q_opp_init();
platform_device_register(&imx6q_cpufreq_pdev);
}
}
/* Configure the L2 PREFETCH and POWER registers */
val = readl_relaxed(l2x0_base + L2X0_PREFETCH_CTRL);
val |= 0x70800000;
+ /*
+ * The L2 cache controller(PL310) version on the i.MX6D/Q is r3p1-50rel0
+ * The L2 cache controller(PL310) version on the i.MX6DL/SOLO/SL is r3p2
+ * But according to ARM PL310 errata: 752271
+ * ID: 752271: Double linefill feature can cause data corruption
+ * Fault Status: Present in: r3p0, r3p1, r3p1-50rel0. Fixed in r3p2
+ * Workaround: The only workaround to this erratum is to disable the
+ * double linefill feature. This is the default behavior.
+ */
+ if (cpu_is_imx6q())
+ val &= ~(1 << 30 | 1 << 23);
writel_relaxed(val, l2x0_base + L2X0_PREFETCH_CTRL);
val = L2X0_DYNAMIC_CLK_GATING_EN | L2X0_STNDBY_MODE_EN;
writel_relaxed(val, l2x0_base + L2X0_POWER_CTRL);
/* Simple oneliner include to the PCIv3 early init */
+#ifdef CONFIG_PCI
extern int pci_v3_early_init(void);
+#else
+static inline int pci_v3_early_init(void)
+{
+ return 0;
+}
+#endif
coherency_base = of_iomap(np, 0);
coherency_cpu_base = of_iomap(np, 1);
set_cpu_coherent(cpu_logical_map(smp_processor_id()), 0);
+ of_node_put(np);
}
return 0;
static int __init coherency_late_init(void)
{
- if (of_find_matching_node(NULL, of_coherency_table))
+ struct device_node *np;
+
+ np = of_find_matching_node(NULL, of_coherency_table);
+ if (np) {
bus_register_notifier(&platform_bus_type,
&mvebu_hwcc_platform_nb);
+ of_node_put(np);
+ }
return 0;
}
pr_info("Initializing Power Management Service Unit\n");
pmsu_mp_base = of_iomap(np, 0);
pmsu_reset_base = of_iomap(np, 1);
+ of_node_put(np);
}
return 0;
BUG_ON(!match);
system_controller_base = of_iomap(np, 0);
mvebu_sc = (struct mvebu_system_controller *)match->data;
+ of_node_put(np);
}
return 0;
.restart = omap3xxx_restart,
MACHINE_END
+static const char *omap36xx_boards_compat[] __initdata = {
+ "ti,omap36xx",
+ NULL,
+};
+
+DT_MACHINE_START(OMAP36XX_DT, "Generic OMAP36xx (Flattened Device Tree)")
+ .reserve = omap_reserve,
+ .map_io = omap3_map_io,
+ .init_early = omap3630_init_early,
+ .init_irq = omap_intc_of_init,
+ .handle_irq = omap3_intc_handle_irq,
+ .init_machine = omap_generic_init,
+ .init_late = omap3_init_late,
+ .init_time = omap3_sync32k_timer_init,
+ .dt_compat = omap36xx_boards_compat,
+ .restart = omap3xxx_restart,
+MACHINE_END
+
static const char *omap3_gp_boards_compat[] __initdata = {
"ti,omap3-beagle",
"timll,omap3-devkit8000",
.name = "lp5523:kb1",
.chan_nr = 0,
.led_current = 50,
+ .max_current = 100,
}, {
.name = "lp5523:kb2",
.chan_nr = 1,
.led_current = 50,
+ .max_current = 100,
}, {
.name = "lp5523:kb3",
.chan_nr = 2,
.led_current = 50,
+ .max_current = 100,
}, {
.name = "lp5523:kb4",
.chan_nr = 3,
.led_current = 50,
+ .max_current = 100,
}, {
.name = "lp5523:b",
.chan_nr = 4,
.led_current = 50,
+ .max_current = 100,
}, {
.name = "lp5523:g",
.chan_nr = 5,
.led_current = 50,
+ .max_current = 100,
}, {
.name = "lp5523:r",
.chan_nr = 6,
.led_current = 50,
+ .max_current = 100,
}, {
.name = "lp5523:kb5",
.chan_nr = 7,
.led_current = 50,
+ .max_current = 100,
}, {
.name = "lp5523:kb6",
.chan_nr = 8,
.led_current = 50,
+ .max_current = 100,
}
};
CLK(NULL, "auxclk5_src_ck", &auxclk5_src_ck),
CLK(NULL, "auxclk5_ck", &auxclk5_ck),
CLK(NULL, "auxclkreq5_ck", &auxclkreq5_ck),
- CLK("omap-gpmc", "fck", &dummy_ck),
+ CLK("50000000.gpmc", "fck", &dummy_ck),
CLK("omap_i2c.1", "ick", &dummy_ck),
CLK("omap_i2c.2", "ick", &dummy_ck),
CLK("omap_i2c.3", "ick", &dummy_ck),
* Call idle CPU cluster PM exit notifier chain
* to restore GIC and wakeupgen context.
*/
- if ((cx->mpu_state == PWRDM_POWER_RET) &&
+ if (dev->cpu == 0 && (cx->mpu_state == PWRDM_POWER_RET) &&
(cx->mpu_logic_state == PWRDM_POWER_OFF))
cpu_cluster_pm_exit();
struct gpmc_timings t;
int ret;
- if (gpmc_onenand_data->of_node)
+ if (gpmc_onenand_data->of_node) {
gpmc_read_settings_dt(gpmc_onenand_data->of_node,
&onenand_async);
+ if (onenand_async.sync_read || onenand_async.sync_write) {
+ if (onenand_async.sync_write)
+ gpmc_onenand_data->flags |=
+ ONENAND_SYNC_READWRITE;
+ else
+ gpmc_onenand_data->flags |= ONENAND_SYNC_READ;
+ onenand_async.sync_read = false;
+ onenand_async.sync_write = false;
+ }
+ }
omap2_onenand_set_async_mode(onenand_base);
*/
ret = gpmc_cs_remap(cs, res.start);
if (ret < 0) {
- dev_err(&pdev->dev, "cannot remap GPMC CS %d to 0x%x\n",
- cs, res.start);
+ dev_err(&pdev->dev, "cannot remap GPMC CS %d to %pa\n",
+ cs, &res.start);
goto err;
}
#define OMAP_PULL_UP (1 << 4)
#define OMAP_ALTELECTRICALSEL (1 << 5)
-/* 34xx specific mux bit defines */
+/* omap3/4/5 specific mux bit defines */
#define OMAP_INPUT_EN (1 << 8)
#define OMAP_OFF_EN (1 << 9)
#define OMAP_OFFOUT_EN (1 << 10)
#define OMAP_OFF_PULL_EN (1 << 12)
#define OMAP_OFF_PULL_UP (1 << 13)
#define OMAP_WAKEUP_EN (1 << 14)
-
-/* 44xx specific mux bit defines */
#define OMAP_WAKEUP_EVENT (1 << 15)
/* Active pin states */
"uart1_rts", "ssi1_flag_tx", NULL, NULL,
"gpio_149", NULL, NULL, "safe_mode"),
_OMAP3_MUXENTRY(UART1_RX, 151,
- "uart1_rx", "ss1_wake_tx", "mcbsp1_clkr", "mcspi4_clk",
+ "uart1_rx", "ssi1_wake_tx", "mcbsp1_clkr", "mcspi4_clk",
"gpio_151", NULL, NULL, "safe_mode"),
_OMAP3_MUXENTRY(UART1_TX, 148,
"uart1_tx", "ssi1_dat_tx", NULL, NULL,
/*
- * OMAP4 SMP source file. It contains platform specific fucntions
+ * OMAP4 SMP source file. It contains platform specific functions
* needed for the linux smp kernel.
*
* Copyright (C) 2009 Texas Instruments, Inc.
}
od = omap_device_alloc(pdev, hwmods, oh_cnt);
- if (!od) {
+ if (IS_ERR(od)) {
dev_err(&pdev->dev, "Cannot allocate omap_device for :%s\n",
oh_name);
ret = PTR_ERR(od);
#endif /* CONFIG_HAVE_ARM_TWD */
#endif /* CONFIG_ARCH_OMAP4 */
-#ifdef CONFIG_SOC_OMAP5
+#if defined(CONFIG_SOC_OMAP5) || defined(CONFIG_SOC_DRA7XX)
void __init omap5_realtime_timer_init(void)
{
omap4_sync32k_timer_init();
clocksource_of_init();
}
-#endif /* CONFIG_SOC_OMAP5 */
+#endif /* CONFIG_SOC_OMAP5 || CONFIG_SOC_DRA7XX */
/**
* omap_timer_init - build and register timer device with an
}
static struct flash_platform_data collie_flash_data = {
- .map_name = "cfi_probe",
+ .map_name = "jedec_probe",
.init = collie_flash_init,
.set_vpp = collie_set_vpp,
.exit = collie_flash_exit,
PIN_MAP_MUX_GROUP_DEFAULT("asoc-simple-card.1", "pfc-r8a7740",
"fsib_mclk_in", "fsib"),
/* GETHER */
- PIN_MAP_MUX_GROUP_DEFAULT("sh-eth", "pfc-r8a7740",
+ PIN_MAP_MUX_GROUP_DEFAULT("r8a7740-gether", "pfc-r8a7740",
"gether_mii", "gether"),
- PIN_MAP_MUX_GROUP_DEFAULT("sh-eth", "pfc-r8a7740",
+ PIN_MAP_MUX_GROUP_DEFAULT("r8a7740-gether", "pfc-r8a7740",
"gether_int", "gether"),
/* HDMI */
PIN_MAP_MUX_GROUP_DEFAULT("sh-mobile-hdmi", "pfc-r8a7740",
#include <linux/pinctrl/machine.h>
#include <linux/platform_data/gpio-rcar.h>
#include <linux/platform_device.h>
+#include <linux/phy.h>
#include <linux/regulator/fixed.h>
#include <linux/regulator/machine.h>
#include <linux/sh_eth.h>
ðer_pdata, sizeof(ether_pdata));
}
+/*
+ * Ether LEDs on the Lager board are named LINK and ACTIVE, which corresponds
+ * to the non-default 01 setting of the Micrel KSZ8041 PHY control register 1 bits
+ * 14-15. We have to set them back to 01 from the default 00 value each time
+ * the PHY is reset. It's also important because the PHY's LED0 signal is
+ * connected to SoC's ETH_LINK signal and in the PHY's default mode it will
+ * bounce on and off after each packet, which we apparently want to avoid.
+ */
+static int lager_ksz8041_fixup(struct phy_device *phydev)
+{
+ u16 phyctrl1 = phy_read(phydev, 0x1e);
+
+ phyctrl1 &= ~0xc000;
+ phyctrl1 |= 0x4000;
+ return phy_write(phydev, 0x1e, phyctrl1);
+}
+
+static void __init lager_init(void)
+{
+ lager_add_standard_devices();
+
+ phy_register_fixup_for_id("r8a7790-ether-ff:01", lager_ksz8041_fixup);
+}
+
static const char *lager_boards_compat_dt[] __initdata = {
"renesas,lager",
NULL,
DT_MACHINE_START(LAGER_DT, "lager")
.init_early = r8a7790_init_delay,
.init_time = r8a7790_timer_init,
- .init_machine = lager_add_standard_devices,
+ .init_machine = lager_init,
.dt_compat = lager_boards_compat_dt,
MACHINE_END
CLKDEV_CON_ID("pll2h", &pll2h_clk),
/* CPU clock */
- CLKDEV_DEV_ID("cpufreq-cpu0", &z_clk),
+ CLKDEV_DEV_ID("cpu0", &z_clk),
/* DIV6 */
CLKDEV_CON_ID("zb", &div6_clks[DIV6_ZB]),
CLKDEV_DEV_ID("smp_twd", &twd_clk), /* smp_twd */
/* DIV4 clocks */
- CLKDEV_DEV_ID("cpufreq-cpu0", &div4_clks[DIV4_Z]),
+ CLKDEV_DEV_ID("cpu0", &div4_clks[DIV4_Z]),
/* DIV6 clocks */
CLKDEV_CON_ID("vck1_clk", &div6_clks[DIV6_VCK1]),
-menu "ST-Ericsson AB U300/U335 Platform"
-
-comment "ST-Ericsson Mobile Platform Products"
-
config ARCH_U300
bool "ST-Ericsson U300 Series" if ARCH_MULTI_V5
depends on MMU
help
Support for ST-Ericsson U300 series mobile platforms.
-comment "ST-Ericsson U300/U335 Feature Selections"
+if ARCH_U300
+
+menu "ST-Ericsson AB U300/U335 Platform"
config MACH_U300
depends on ARCH_U300
SPI framework and ARM PL022 support.
endmenu
+
+endif
* some SMI service available.
*/
outer_cache.disable = NULL;
+ outer_cache.set_debug = NULL;
return 0;
}
} else
BUG();
+ /*
+ * If the CPU is committed to power down, make sure
+ * the power controller will be in charge of waking it
+ * up upon IRQ, ie IRQ lines are cut from GIC CPU IF
+ * to the CPU by disabling the GIC CPU IF to prevent wfi
+ * from completing execution behind power controller back
+ */
+ if (!skip_wfi)
+ gic_cpu_if_down();
+
if (last_man && __mcpm_outbound_enter_critical(cpu, cluster)) {
arch_spin_unlock(&tc2_pm_lock);
cpu = MPIDR_AFFINITY_LEVEL(mpidr, 0);
cluster = MPIDR_AFFINITY_LEVEL(mpidr, 1);
ve_spc_set_resume_addr(cluster, cpu, virt_to_phys(mcpm_entry_point));
- gic_cpu_if_down();
tc2_pm_down(residency);
}
break;
len = (j - i) << PAGE_SHIFT;
- ret = iommu_map(mapping->domain, iova, phys, len, 0);
+ ret = iommu_map(mapping->domain, iova, phys, len,
+ IOMMU_READ|IOMMU_WRITE);
if (ret < 0)
goto fail;
iova += len;
GFP_KERNEL);
}
+static int __dma_direction_to_prot(enum dma_data_direction dir)
+{
+ int prot;
+
+ switch (dir) {
+ case DMA_BIDIRECTIONAL:
+ prot = IOMMU_READ | IOMMU_WRITE;
+ break;
+ case DMA_TO_DEVICE:
+ prot = IOMMU_READ;
+ break;
+ case DMA_FROM_DEVICE:
+ prot = IOMMU_WRITE;
+ break;
+ default:
+ prot = 0;
+ }
+
+ return prot;
+}
+
/*
* Map a part of the scatter-gather list into contiguous io address space
*/
int ret = 0;
unsigned int count;
struct scatterlist *s;
+ int prot;
size = PAGE_ALIGN(size);
*handle = DMA_ERROR_CODE;
!dma_get_attr(DMA_ATTR_SKIP_CPU_SYNC, attrs))
__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
- ret = iommu_map(mapping->domain, iova, phys, len, 0);
+ prot = __dma_direction_to_prot(dir);
+
+ ret = iommu_map(mapping->domain, iova, phys, len, prot);
if (ret < 0)
goto fail;
count += len >> PAGE_SHIFT;
if (dma_addr == DMA_ERROR_CODE)
return dma_addr;
- switch (dir) {
- case DMA_BIDIRECTIONAL:
- prot = IOMMU_READ | IOMMU_WRITE;
- break;
- case DMA_TO_DEVICE:
- prot = IOMMU_READ;
- break;
- case DMA_FROM_DEVICE:
- prot = IOMMU_WRITE;
- break;
- default:
- prot = 0;
- }
+ prot = __dma_direction_to_prot(dir);
ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len, prot);
if (ret < 0)
#include <linux/nodemask.h>
#include <linux/initrd.h>
#include <linux/of_fdt.h>
-#include <linux/of_reserved_mem.h>
#include <linux/highmem.h>
#include <linux/gfp.h>
#include <linux/memblock.h>
if (mdesc->reserve)
mdesc->reserve();
- early_init_dt_scan_reserved_mem();
-
/*
* reserve memory for DMA contiguous allocations,
* must come from DMA area inside low memory
{
if (fp->bpf_func != sk_run_filter)
module_free(NULL, fp->bpf_func);
+ kfree(fp);
}
bool
default y
-config DEBUG_STACK_USAGE
- bool "Enable stack utilization instrumentation"
- depends on DEBUG_KERNEL
- help
- Enables the display of the minimum amount of free stack which each
- task has ever had available in the sysrq-T output.
-
config EARLY_PRINTK
bool "Early printk support"
default y
# CONFIG_WIRELESS is not set
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
-# CONFIG_BLK_DEV is not set
+CONFIG_BLK_DEV=y
CONFIG_SCSI=y
# CONFIG_SCSI_PROC_FS is not set
CONFIG_BLK_DEV_SD=y
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
+CONFIG_EXT4_FS=y
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
# CONFIG_EXT3_FS_XATTR is not set
CONFIG_FUSE_FS=y
CONFIG_DEBUG_INFO=y
# CONFIG_FTRACE is not set
CONFIG_ATOMIC64_SELFTEST=y
+CONFIG_VIRTIO_MMIO=y
+CONFIG_VIRTIO_BLK=y
COMPAT_HWCAP_VFPv3|COMPAT_HWCAP_VFPv4|\
COMPAT_HWCAP_NEON|COMPAT_HWCAP_IDIV)
-extern unsigned int elf_hwcap;
+extern unsigned long elf_hwcap;
#endif
#endif
#define get_user(x, ptr) \
({ \
+ __typeof__(*(ptr)) __user *__p = (ptr); \
might_fault(); \
- access_ok(VERIFY_READ, (ptr), sizeof(*(ptr))) ? \
- __get_user((x), (ptr)) : \
+ access_ok(VERIFY_READ, __p, sizeof(*__p)) ? \
+ __get_user((x), __p) : \
((x) = 0, -EFAULT); \
})
#define put_user(x, ptr) \
({ \
+ __typeof__(*(ptr)) __user *__p = (ptr); \
might_fault(); \
- access_ok(VERIFY_WRITE, (ptr), sizeof(*(ptr))) ? \
- __put_user((x), (ptr)) : \
+ access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ? \
+ __put_user((x), __p) : \
-EFAULT; \
})
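The point of binding ptr to the local __p above is that get_user()/put_user() now evaluate their pointer argument exactly once instead of once in access_ok() and again in __get_user()/__put_user(). A minimal sketch of a caller that relies on this (hypothetical, not part of the patch; read_pair() and its arguments are made up here, and it assumes the arm64 <asm/uaccess.h> macros shown above):

	/* Hypothetical example: "uptr++" has a side effect, so the macro must
	 * not expand it into multiple evaluations (as the old definition
	 * would have done via access_ok() and __get_user()). */
	static int read_pair(int __user *uptr, int *a, int *b)
	{
		if (get_user(*a, uptr++))	/* uptr advanced exactly once */
			return -EFAULT;
		return get_user(*b, uptr);
	}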
void fpsimd_flush_thread(void)
{
+ preempt_disable();
memset(¤t->thread.fpsimd_state, 0, sizeof(struct fpsimd_state));
fpsimd_load_state(¤t->thread.fpsimd_state);
+ preempt_enable();
}
#ifdef CONFIG_KERNEL_MODE_NEON
void __show_regs(struct pt_regs *regs)
{
- int i;
+ int i, top_reg;
+ u64 lr, sp;
+
+ if (compat_user_mode(regs)) {
+ lr = regs->compat_lr;
+ sp = regs->compat_sp;
+ top_reg = 12;
+ } else {
+ lr = regs->regs[30];
+ sp = regs->sp;
+ top_reg = 29;
+ }
show_regs_print_info(KERN_DEFAULT);
print_symbol("PC is at %s\n", instruction_pointer(regs));
- print_symbol("LR is at %s\n", regs->regs[30]);
+ print_symbol("LR is at %s\n", lr);
printk("pc : [<%016llx>] lr : [<%016llx>] pstate: %08llx\n",
- regs->pc, regs->regs[30], regs->pstate);
- printk("sp : %016llx\n", regs->sp);
- for (i = 29; i >= 0; i--) {
+ regs->pc, lr, regs->pstate);
+ printk("sp : %016llx\n", sp);
+ for (i = top_reg; i >= 0; i--) {
printk("x%-2d: %016llx ", i, regs->regs[i]);
if (i % 2 == 0)
printk("\n");
unsigned int processor_id;
EXPORT_SYMBOL(processor_id);
-unsigned int elf_hwcap __read_mostly;
+unsigned long elf_hwcap __read_mostly;
EXPORT_SYMBOL_GPL(elf_hwcap);
static const char *cpu_name;
force_sig_info(sig, &si, tsk);
}
-void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *regs)
+static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *regs)
{
struct task_struct *tsk = current;
struct mm_struct *mm = tsk->active_mm;
*/
ENTRY(__cpu_flush_user_tlb_range)
vma_vm_mm x3, x2 // get vma->vm_mm
- mmid x3, x3 // get vm_mm->context.id
+ mmid w3, x3 // get vm_mm->context.id
dsb sy
lsr x0, x0, #12 // align address
lsr x1, x1, #12
generic-y += clkdev.h
+generic-y += cputime.h
+generic-y += delay.h
+generic-y += device.h
+generic-y += div64.h
+generic-y += emergency-restart.h
generic-y += exec.h
-generic-y += trace_clock.h
+generic-y += futex.h
+generic-y += irq_regs.h
generic-y += param.h
+generic-y += local.h
+generic-y += local64.h
+generic-y += percpu.h
+generic-y += scatterlist.h
+generic-y += sections.h
+generic-y += topology.h
+generic-y += trace_clock.h
+generic-y += xor.h
+++ /dev/null
-#ifndef __ASM_AVR32_CPUTIME_H
-#define __ASM_AVR32_CPUTIME_H
-
-#include <asm-generic/cputime.h>
-
-#endif /* __ASM_AVR32_CPUTIME_H */
+++ /dev/null
-#include <asm-generic/delay.h>
+++ /dev/null
-/*
- * Arch specific extensions to struct device
- *
- * This file is released under the GPLv2
- */
-#include <asm-generic/device.h>
-
+++ /dev/null
-#ifndef __ASM_AVR32_DIV64_H
-#define __ASM_AVR32_DIV64_H
-
-#include <asm-generic/div64.h>
-
-#endif /* __ASM_AVR32_DIV64_H */
+++ /dev/null
-#ifndef __ASM_AVR32_EMERGENCY_RESTART_H
-#define __ASM_AVR32_EMERGENCY_RESTART_H
-
-#include <asm-generic/emergency-restart.h>
-
-#endif /* __ASM_AVR32_EMERGENCY_RESTART_H */
+++ /dev/null
-#ifndef __ASM_AVR32_FUTEX_H
-#define __ASM_AVR32_FUTEX_H
-
-#include <asm-generic/futex.h>
-
-#endif /* __ASM_AVR32_FUTEX_H */
+++ /dev/null
-#include <asm-generic/irq_regs.h>
+++ /dev/null
-#ifndef __ASM_AVR32_LOCAL_H
-#define __ASM_AVR32_LOCAL_H
-
-#include <asm-generic/local.h>
-
-#endif /* __ASM_AVR32_LOCAL_H */
+++ /dev/null
-#include <asm-generic/local64.h>
+++ /dev/null
-#ifndef __ASM_AVR32_PERCPU_H
-#define __ASM_AVR32_PERCPU_H
-
-#include <asm-generic/percpu.h>
-
-#endif /* __ASM_AVR32_PERCPU_H */
+++ /dev/null
-#ifndef __ASM_AVR32_SCATTERLIST_H
-#define __ASM_AVR32_SCATTERLIST_H
-
-#include <asm-generic/scatterlist.h>
-
-#endif /* __ASM_AVR32_SCATTERLIST_H */
+++ /dev/null
-#ifndef __ASM_AVR32_SECTIONS_H
-#define __ASM_AVR32_SECTIONS_H
-
-#include <asm-generic/sections.h>
-
-#endif /* __ASM_AVR32_SECTIONS_H */
+++ /dev/null
-#ifndef __ASM_AVR32_TOPOLOGY_H
-#define __ASM_AVR32_TOPOLOGY_H
-
-#include <asm-generic/topology.h>
-
-#endif /* __ASM_AVR32_TOPOLOGY_H */
+++ /dev/null
-#ifndef _ASM_XOR_H
-#define _ASM_XOR_H
-
-#include <asm-generic/xor.h>
-
-#endif
memset(childregs, 0, sizeof(struct pt_regs));
p->thread.cpu_context.r0 = arg;
p->thread.cpu_context.r1 = usp; /* fn */
- p->thread.cpu_context.r2 = syscall_return;
+ p->thread.cpu_context.r2 = (unsigned long)syscall_return;
p->thread.cpu_context.pc = (unsigned long)ret_from_kernel_thread;
childregs->sr = MODE_SUPERVISOR;
} else {
case CLOCK_EVT_MODE_SHUTDOWN:
sysreg_write(COMPARE, 0);
pr_debug("%s: stop\n", evdev->name);
- cpu_idle_poll_ctrl(false);
+ if (evdev->mode == CLOCK_EVT_MODE_ONESHOT ||
+ evdev->mode == CLOCK_EVT_MODE_RESUME) {
+ /*
+ * Only disable idle poll if we have forced that
+ * in a previous call.
+ */
+ cpu_idle_poll_ctrl(false);
+ }
break;
default:
BUG();
vmlinux.32: vmlinux
$(OBJCOPY) -O $(32bit-bfd) $(OBJCOPYFLAGS) $< $@
-
-#obj-$(CONFIG_KPROBES) += kprobes.o
-
#
# The 64-bit ELF tools are pretty broken so at this time we generate 64-bit
# ELF files from 32-bit files by conversion.
.resource = alchemy_pci_host_res,
};
-static struct __initdata platform_device * mtx1_devs[] = {
+static struct platform_device *mtx1_devs[] __initdata = {
&mtx1_pci_host,
&mtx1_gpio_leds,
&mtx1_wdt,
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/syscore_ops.h>
+#include <asm/cpu.h>
#include <asm/mach-au1x00/au1000.h>
/* control register offsets */
{
#if defined(CONFIG_DMA_COHERENT)
/* Au1200 AB USB does not support coherent memory */
- if (!(read_c0_prid() & 0xff)) {
+ if (!(read_c0_prid() & PRID_REV_MASK)) {
printk(KERN_INFO "Au1200 USB: this is chip revision AB !!\n");
printk(KERN_INFO "Au1200 USB: update your board or re-configure"
" the kernel\n");
switch (c->cputype) {
case CPU_BMIPS3300:
- if ((read_c0_prid() & 0xff00) != PRID_IMP_BMIPS3300_ALT)
+ if ((read_c0_prid() & PRID_IMP_MASK) != PRID_IMP_BMIPS3300_ALT)
__cpu_name[cpu] = "Broadcom BCM6338";
/* fall-through */
case CPU_BMIPS32:
chipid_reg = BCM_6345_PERF_BASE;
break;
case CPU_BMIPS4350:
- switch ((read_c0_prid() & 0xff)) {
+ switch ((read_c0_prid() & PRID_REV_MASK)) {
case 0x04:
chipid_reg = BCM_3368_PERF_BASE;
break;
-../../../../../include/dt-bindings
+../../../../../include/dt-bindings
\ No newline at end of file
#include <linux/smp.h>
#include <asm/cpu-info.h>
+#include <asm/cpu-type.h>
#include <asm/time.h>
#include <asm/octeon/octeon.h>
#include <asm/bootinfo.h>
#include <asm/cpu.h>
+#include <asm/cpu-type.h>
#include <asm/processor.h>
#include <asm/dec/prom.h>
#include <asm/cpu-info.h>
#include <cpu-feature-overrides.h>
-#ifndef current_cpu_type
-#define current_cpu_type() current_cpu_data.cputype
-#endif
-
-#define boot_cpu_type() cpu_data[0].cputype
-
/*
* SMP assumption: Options of CPU 0 are a superset of all processors.
* This is true for all known MIPS systems.
/*
* MIPS32, MIPS64, VR5500, IDT32332, IDT32334 and maybe a few other
- * pre-MIPS32/MIPS53 processors have CLO, CLZ. The IDT RC64574 is 64-bit and
+ * pre-MIPS32/MIPS64 processors have CLO, CLZ. The IDT RC64574 is 64-bit and
* has CLO and CLZ but not DCLO nor DCLZ. For 64-bit kernels
* cpu_has_clo_clz also indicates the availability of DCLO and DCLZ.
*/
extern struct cpuinfo_mips cpu_data[];
#define current_cpu_data cpu_data[smp_processor_id()]
#define raw_current_cpu_data cpu_data[raw_smp_processor_id()]
+#define boot_cpu_data cpu_data[0]
extern void cpu_probe(void);
extern void cpu_report(void);
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2003, 2004 Ralf Baechle
+ * Copyright (C) 2004 Maciej W. Rozycki
+ */
+#ifndef __ASM_CPU_TYPE_H
+#define __ASM_CPU_TYPE_H
+
+#include <linux/smp.h>
+#include <linux/compiler.h>
+
+static inline int __pure __get_cpu_type(const int cpu_type)
+{
+ switch (cpu_type) {
+#if defined(CONFIG_SYS_HAS_CPU_LOONGSON2E) || \
+ defined(CONFIG_SYS_HAS_CPU_LOONGSON2F)
+ case CPU_LOONGSON2:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_LOONGSON1B
+ case CPU_LOONGSON1:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_MIPS32_R1
+ case CPU_4KC:
+ case CPU_ALCHEMY:
+ case CPU_BMIPS3300:
+ case CPU_BMIPS4350:
+ case CPU_PR4450:
+ case CPU_BMIPS32:
+ case CPU_JZRISC:
+#endif
+
+#if defined(CONFIG_SYS_HAS_CPU_MIPS32_R1) || \
+ defined(CONFIG_SYS_HAS_CPU_MIPS32_R2)
+ case CPU_4KEC:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_MIPS32_R2
+ case CPU_4KSC:
+ case CPU_24K:
+ case CPU_34K:
+ case CPU_1004K:
+ case CPU_74K:
+ case CPU_M14KC:
+ case CPU_M14KEC:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_MIPS64_R1
+ case CPU_5KC:
+ case CPU_5KE:
+ case CPU_20KC:
+ case CPU_25KF:
+ case CPU_SB1:
+ case CPU_SB1A:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_MIPS64_R2
+ /*
+ * All MIPS64 R2 processors have their own special symbols. That is,
+ * there currently is no pure R2 core.
+ */
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_R3000
+ case CPU_R2000:
+ case CPU_R3000:
+ case CPU_R3000A:
+ case CPU_R3041:
+ case CPU_R3051:
+ case CPU_R3052:
+ case CPU_R3081:
+ case CPU_R3081E:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_TX39XX
+ case CPU_TX3912:
+ case CPU_TX3922:
+ case CPU_TX3927:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_VR41XX
+ case CPU_VR41XX:
+ case CPU_VR4111:
+ case CPU_VR4121:
+ case CPU_VR4122:
+ case CPU_VR4131:
+ case CPU_VR4133:
+ case CPU_VR4181:
+ case CPU_VR4181A:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_R4300
+ case CPU_R4300:
+ case CPU_R4310:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_R4X00
+ case CPU_R4000PC:
+ case CPU_R4000SC:
+ case CPU_R4000MC:
+ case CPU_R4200:
+ case CPU_R4400PC:
+ case CPU_R4400SC:
+ case CPU_R4400MC:
+ case CPU_R4600:
+ case CPU_R4700:
+ case CPU_R4640:
+ case CPU_R4650:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_TX49XX
+ case CPU_TX49XX:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_R5000
+ case CPU_R5000:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_R5432
+ case CPU_R5432:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_R5500
+ case CPU_R5500:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_R6000
+ case CPU_R6000:
+ case CPU_R6000A:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_NEVADA
+ case CPU_NEVADA:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_R8000
+ case CPU_R8000:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_R10000
+ case CPU_R10000:
+ case CPU_R12000:
+ case CPU_R14000:
+#endif
+#ifdef CONFIG_SYS_HAS_CPU_RM7000
+ case CPU_RM7000:
+ case CPU_SR71000:
+#endif
+#ifdef CONFIG_SYS_HAS_CPU_RM9000
+ case CPU_RM9000:
+#endif
+#ifdef CONFIG_SYS_HAS_CPU_SB1
+ case CPU_SB1:
+ case CPU_SB1A:
+#endif
+#ifdef CONFIG_SYS_HAS_CPU_CAVIUM_OCTEON
+ case CPU_CAVIUM_OCTEON:
+ case CPU_CAVIUM_OCTEON_PLUS:
+ case CPU_CAVIUM_OCTEON2:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_BMIPS4380
+ case CPU_BMIPS4380:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_BMIPS5000
+ case CPU_BMIPS5000:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_XLP
+ case CPU_XLP:
+#endif
+
+#ifdef CONFIG_SYS_HAS_CPU_XLR
+ case CPU_XLR:
+#endif
+ break;
+ default:
+ unreachable();
+ }
+
+ return cpu_type;
+}
+
+static inline int __pure current_cpu_type(void)
+{
+ const int cpu_type = current_cpu_data.cputype;
+
+ return __get_cpu_type(cpu_type);
+}
+
+static inline int __pure boot_cpu_type(void)
+{
+ const int cpu_type = cpu_data[0].cputype;
+
+ return __get_cpu_type(cpu_type);
+}
+
+#endif /* __ASM_CPU_TYPE_H */
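The reason the helper is one large switch that falls through to unreachable() rather than a plain return: every case not enabled by a CONFIG_SYS_HAS_CPU_* option is declared impossible, and since the function is __pure the compiler may fold current_cpu_type() comparisons and drop branches for CPU types the kernel was not built to support. A rough, hypothetical caller (illustrative only, not part of the patch; assumes <asm/cpu-type.h> and <asm/cpu.h> from this series plus <linux/printk.h>):

	static void hypothetical_cpu_quirk(void)
	{
		switch (current_cpu_type()) {
		case CPU_R4000SC:
			/* On a kernel built without CONFIG_SYS_HAS_CPU_R4X00
			 * this branch is provably dead and can be dropped. */
			pr_info("applying R4000 workaround\n");
			break;
		default:
			break;
		}
	}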
* various MIPS cpu types.
*
* Copyright (C) 1996 David S. Miller (davem@davemloft.net)
- * Copyright (C) 2004 Maciej W. Rozycki
+ * Copyright (C) 2004, 2013 Maciej W. Rozycki
*/
#ifndef _ASM_CPU_H
#define _ASM_CPU_H
-/* Assigned Company values for bits 23:16 of the PRId Register
- (CP0 register 15, select 0). As of the MIPS32 and MIPS64 specs from
- MTI, the PRId register is defined in this (backwards compatible)
- way:
+/*
+ As of the MIPS32 and MIPS64 specs from MTI, the PRId register (CP0
+ register 15, select 0) is defined in this (backwards compatible) way:
+----------------+----------------+----------------+----------------+
| Company Options| Company ID | Processor ID | Revision |
spec.
*/
+#define PRID_OPT_MASK 0xff000000
+
+/*
+ * Assigned Company values for bits 23:16 of the PRId register.
+ */
+
+#define PRID_COMP_MASK 0xff0000
+
#define PRID_COMP_LEGACY 0x000000
#define PRID_COMP_MIPS 0x010000
#define PRID_COMP_BROADCOM 0x020000
#define PRID_COMP_INGENIC 0xd00000
/*
- * Assigned values for the product ID register. In order to detect a
- * certain CPU type exactly eventually additional registers may need to
- * be examined. These are valid when 23:16 == PRID_COMP_LEGACY
+ * Assigned Processor ID (implementation) values for bits 15:8 of the PRId
+ * register. In order to detect a certain CPU type exactly, additional
+ * registers may eventually need to be examined.
*/
+
+#define PRID_IMP_MASK 0xff00
+
+/*
+ * These are valid when 23:16 == PRID_COMP_LEGACY
+ */
+
#define PRID_IMP_R2000 0x0100
#define PRID_IMP_AU1_REV1 0x0100
#define PRID_IMP_AU1_REV2 0x0200
#define PRID_IMP_NETLOGIC_XLP2XX 0x1200
/*
- * Definitions for 7:0 on legacy processors
+ * Particular Revision values for bits 7:0 of the PRId register.
*/
#define PRID_REV_MASK 0x00ff
+/*
+ * Definitions for 7:0 on legacy processors
+ */
+
#define PRID_REV_TX4927 0x0022
#define PRID_REV_TX4937 0x0030
#define PRID_REV_R4400 0x0040
* 31 16 15 8 7 0
*/
+#define FPIR_IMP_MASK 0xff00
+
#define FPIR_IMP_NONE 0x0000
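Taken together, the new masks give names to the architected PRId fields that the rest of this series stops open-coding (0xff0000, 0xff00, 0xff and so on). A minimal sketch of the decomposition (illustrative only, not part of the patch; assumes the PRID_*_MASK definitions added above):

	/* Illustrative only: split a PRId value into its architected fields.
	 * The masks keep each field in place, matching how the probe code
	 * compares against PRID_IMP_* and PRID_REV_* constants. */
	static inline void prid_fields(unsigned int prid, unsigned int *comp,
				       unsigned int *imp, unsigned int *rev)
	{
		*comp = prid & PRID_COMP_MASK;	/* company ID, bits 23:16 */
		*imp  = prid & PRID_IMP_MASK;	/* processor ID, bits 15:8 */
		*rev  = prid & PRID_REV_MASK;	/* revision, bits 7:0 */
	}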
enum cpu_type_enum {
static __always_inline bool arch_static_branch(struct static_key *key)
{
- asm goto("1:\tnop\n\t"
+ asm_volatile_goto("1:\tnop\n\t"
"nop\n\t"
".pushsection __jump_table, \"aw\"\n\t"
WORD_INSN " 1b, %l[l_yes], %0\n\t"
#include <linux/io.h>
#include <linux/irq.h>
+#include <asm/cpu.h>
+
/* cpu pipeline flush */
void static inline au_sync(void)
{
static inline int alchemy_get_cputype(void)
{
- switch (read_c0_prid() & 0xffff0000) {
+ switch (read_c0_prid() & (PRID_OPT_MASK | PRID_COMP_MASK)) {
case 0x00030000:
return ALCHEMY_CPU_AU1000;
break;
#ifndef __ASM_MACH_IP22_CPU_FEATURE_OVERRIDES_H
#define __ASM_MACH_IP22_CPU_FEATURE_OVERRIDES_H
+#include <asm/cpu.h>
+
/*
* IP22 with a variety of processors so we can't use defaults for everything.
*/
#ifndef __ASM_MACH_IP27_CPU_FEATURE_OVERRIDES_H
#define __ASM_MACH_IP27_CPU_FEATURE_OVERRIDES_H
+#include <asm/cpu.h>
+
/*
* IP27 only comes with R10000 family processors all using the same config
*/
#ifndef __ASM_MACH_IP28_CPU_FEATURE_OVERRIDES_H
#define __ASM_MACH_IP28_CPU_FEATURE_OVERRIDES_H
+#include <asm/cpu.h>
+
/*
* IP28 only comes with R10000 family processors all using the same config
*/
#define MIPS_CONF4_MMUEXTDEF (_ULCAST_(3) << 14)
#define MIPS_CONF4_MMUEXTDEF_MMUSIZEEXT (_ULCAST_(1) << 14)
+#define MIPS_CONF5_NF (_ULCAST_(1) << 0)
+#define MIPS_CONF5_UFR (_ULCAST_(1) << 2)
+#define MIPS_CONF5_MSAEN (_ULCAST_(1) << 27)
+#define MIPS_CONF5_EVA (_ULCAST_(1) << 28)
+#define MIPS_CONF5_CV (_ULCAST_(1) << 29)
+#define MIPS_CONF5_K (_ULCAST_(1) << 30)
+
#define MIPS_CONF6_SYND (_ULCAST_(1) << 13)
#define MIPS_CONF7_WII (_ULCAST_(1) << 31)
extern int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
enum pci_mmap_state mmap_state, int write_combine);
+#define HAVE_ARCH_PCI_RESOURCE_TO_USER
+
+static inline void pci_resource_to_user(const struct pci_dev *dev, int bar,
+ const struct resource *rsrc, resource_size_t *start,
+ resource_size_t *end)
+{
+ phys_t size = resource_size(rsrc);
+
+ *start = fixup_bigphys_addr(rsrc->start, size);
+ *end = rsrc->start + size;
+}
+
/*
* Dynamic DMA mapping stuff.
* MIPS has everything mapped statically.
#ifdef __KERNEL__
+#include <asm/cpu-features.h>
#include <asm/mipsregs.h>
+#include <asm/cpu-type.h>
/*
* This is the clock rate of the i8253 PIT. A MIPS system may not have
typedef unsigned int cycles_t;
+/*
+ * On R4000/R4400 before version 5.0 an erratum exists such that if the
+ * cycle counter is read at the exact moment that it matches the
+ * compare register, no interrupt will be generated.
+ *
+ * There is a suggested workaround and also the erratum can't strike if
+ * the compare interrupt isn't being used as the clock source device.
+ * However, for now the implementation of this function doesn't get these
+ * fine details right.
+ */
static inline cycles_t get_cycles(void)
{
- return 0;
+ switch (boot_cpu_type()) {
+ case CPU_R4400PC:
+ case CPU_R4400SC:
+ case CPU_R4400MC:
+ if ((read_c0_prid() & 0xff) >= 0x0050)
+ return read_c0_count();
+ break;
+
+ case CPU_R4000PC:
+ case CPU_R4000SC:
+ case CPU_R4000MC:
+ break;
+
+ default:
+ if (cpu_has_counter)
+ return read_c0_count();
+ break;
+ }
+
+ return 0; /* no usable counter */
}
#endif /* __KERNEL__ */
#ifndef _ASM_VGA_H
#define _ASM_VGA_H
+#include <asm/addrspace.h>
#include <asm/byteorder.h>
/*
* access the videoram directly without any black magic.
*/
-#define VGA_MAP_MEM(x, s) (0xb0000000L + (unsigned long)(x))
+#define VGA_MAP_MEM(x, s) CKSEG1ADDR(0x10000000L + (unsigned long)(x))
#define vga_readb(x) (*(x))
#define vga_writeb(x, y) (*(y) = (x))
#include <asm/bugs.h>
#include <asm/cpu.h>
+#include <asm/cpu-type.h>
#include <asm/fpu.h>
#include <asm/mipsregs.h>
#include <asm/watch.h>
{
struct cpuinfo_mips *c = ¤t_cpu_data;
- switch (c->cputype) {
+ switch (current_cpu_type()) {
case CPU_34K:
/*
* Erratum "RPS May Cause Incorrect Instruction Execution"
*/
static inline int __cpu_has_fpu(void)
{
- return ((cpu_get_fpu_id() & 0xff00) != FPIR_IMP_NONE);
+ return ((cpu_get_fpu_id() & FPIR_IMP_MASK) != FPIR_IMP_NONE);
}
static inline void cpu_probe_vmbits(struct cpuinfo_mips *c)
return config4 & MIPS_CONF_M;
}
+static inline unsigned int decode_config5(struct cpuinfo_mips *c)
+{
+ unsigned int config5;
+
+ config5 = read_c0_config5();
+ config5 &= ~MIPS_CONF5_UFR;
+ write_c0_config5(config5);
+
+ return config5 & MIPS_CONF_M;
+}
+
static void decode_configs(struct cpuinfo_mips *c)
{
int ok;
ok = decode_config3(c);
if (ok)
ok = decode_config4(c);
+ if (ok)
+ ok = decode_config5(c);
mips_probe_watch_registers(c);
static inline void cpu_probe_legacy(struct cpuinfo_mips *c, unsigned int cpu)
{
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_R2000:
c->cputype = CPU_R2000;
__cpu_name[cpu] = "R2000";
c->tlbsize = 64;
break;
case PRID_IMP_R3000:
- if ((c->processor_id & 0xff) == PRID_REV_R3000A) {
+ if ((c->processor_id & PRID_REV_MASK) == PRID_REV_R3000A) {
if (cpu_has_confreg()) {
c->cputype = CPU_R3081E;
__cpu_name[cpu] = "R3081";
break;
case PRID_IMP_R4000:
if (read_c0_config() & CONF_SC) {
- if ((c->processor_id & 0xff) >= PRID_REV_R4400) {
+ if ((c->processor_id & PRID_REV_MASK) >=
+ PRID_REV_R4400) {
c->cputype = CPU_R4400PC;
__cpu_name[cpu] = "R4400PC";
} else {
__cpu_name[cpu] = "R4000PC";
}
} else {
- if ((c->processor_id & 0xff) >= PRID_REV_R4400) {
+ if ((c->processor_id & PRID_REV_MASK) >=
+ PRID_REV_R4400) {
c->cputype = CPU_R4400SC;
__cpu_name[cpu] = "R4400SC";
} else {
__cpu_name[cpu] = "TX3927";
c->tlbsize = 64;
} else {
- switch (c->processor_id & 0xff) {
+ switch (c->processor_id & PRID_REV_MASK) {
case PRID_REV_TX3912:
c->cputype = CPU_TX3912;
__cpu_name[cpu] = "TX3912";
static inline void cpu_probe_mips(struct cpuinfo_mips *c, unsigned int cpu)
{
decode_configs(c);
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_4KC:
c->cputype = CPU_4KC;
__cpu_name[cpu] = "MIPS 4Kc";
static inline void cpu_probe_alchemy(struct cpuinfo_mips *c, unsigned int cpu)
{
decode_configs(c);
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_AU1_REV1:
case PRID_IMP_AU1_REV2:
c->cputype = CPU_ALCHEMY;
break;
case 4:
__cpu_name[cpu] = "Au1200";
- if ((c->processor_id & 0xff) == 2)
+ if ((c->processor_id & PRID_REV_MASK) == 2)
__cpu_name[cpu] = "Au1250";
break;
case 5:
{
decode_configs(c);
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_SB1:
c->cputype = CPU_SB1;
__cpu_name[cpu] = "SiByte SB1";
/* FPU in pass1 is known to have issues. */
- if ((c->processor_id & 0xff) < 0x02)
+ if ((c->processor_id & PRID_REV_MASK) < 0x02)
c->options &= ~(MIPS_CPU_FPU | MIPS_CPU_32FPR);
break;
case PRID_IMP_SB1A:
static inline void cpu_probe_sandcraft(struct cpuinfo_mips *c, unsigned int cpu)
{
decode_configs(c);
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_SR71000:
c->cputype = CPU_SR71000;
__cpu_name[cpu] = "Sandcraft SR71000";
static inline void cpu_probe_nxp(struct cpuinfo_mips *c, unsigned int cpu)
{
decode_configs(c);
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_PR4450:
c->cputype = CPU_PR4450;
__cpu_name[cpu] = "Philips PR4450";
static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
{
decode_configs(c);
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_BMIPS32_REV4:
case PRID_IMP_BMIPS32_REV8:
c->cputype = CPU_BMIPS32;
set_elf_platform(cpu, "bmips3300");
break;
case PRID_IMP_BMIPS43XX: {
- int rev = c->processor_id & 0xff;
+ int rev = c->processor_id & PRID_REV_MASK;
if (rev >= PRID_REV_BMIPS4380_LO &&
rev <= PRID_REV_BMIPS4380_HI) {
static inline void cpu_probe_cavium(struct cpuinfo_mips *c, unsigned int cpu)
{
decode_configs(c);
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_CAVIUM_CN38XX:
case PRID_IMP_CAVIUM_CN31XX:
case PRID_IMP_CAVIUM_CN30XX:
decode_configs(c);
/* JZRISC does not implement the CP0 counter. */
c->options &= ~MIPS_CPU_COUNTER;
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_JZRISC:
c->cputype = CPU_JZRISC;
__cpu_name[cpu] = "Ingenic JZRISC";
{
decode_configs(c);
- if ((c->processor_id & 0xff00) == PRID_IMP_NETLOGIC_AU13XX) {
+ if ((c->processor_id & PRID_IMP_MASK) == PRID_IMP_NETLOGIC_AU13XX) {
c->cputype = CPU_ALCHEMY;
__cpu_name[cpu] = "Au1300";
/* following stuff is not for Alchemy */
MIPS_CPU_EJTAG |
MIPS_CPU_LLSC);
- switch (c->processor_id & 0xff00) {
+ switch (c->processor_id & PRID_IMP_MASK) {
case PRID_IMP_NETLOGIC_XLP2XX:
c->cputype = CPU_XLP;
__cpu_name[cpu] = "Broadcom XLPII";
c->cputype = CPU_UNKNOWN;
c->processor_id = read_c0_prid();
- switch (c->processor_id & 0xff0000) {
+ switch (c->processor_id & PRID_COMP_MASK) {
case PRID_COMP_LEGACY:
cpu_probe_legacy(c, cpu);
break;
#include <linux/sched.h>
#include <asm/cpu.h>
#include <asm/cpu-info.h>
+#include <asm/cpu-type.h>
#include <asm/idle.h>
#include <asm/mipsregs.h>
return;
}
- switch (c->cputype) {
+ switch (current_cpu_type()) {
case CPU_R3081:
case CPU_R3081E:
cpu_wait = r3081_wait;
3:
#if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP)
- PTR_L t8, __stack_chk_guard
+ PTR_LA t8, __stack_chk_guard
LONG_L t9, TASK_STACK_CANARY(a1)
LONG_S t9, 0(t8)
#endif
[C(LL)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = { 0x1c, CNTR_ODD, P },
- [C(RESULT_MISS)] = { 0x1d, CNTR_EVEN | CNTR_ODD, P },
+ [C(RESULT_MISS)] = { 0x1d, CNTR_EVEN, P },
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = { 0x1c, CNTR_ODD, P },
- [C(RESULT_MISS)] = { 0x1d, CNTR_EVEN | CNTR_ODD, P },
+ [C(RESULT_MISS)] = { 0x1d, CNTR_EVEN, P },
},
},
[C(ITLB)] = {
1:
#if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP)
- PTR_L t8, __stack_chk_guard
+ PTR_LA t8, __stack_chk_guard
LONG_L t9, TASK_STACK_CANARY(a1)
LONG_S t9, 0(t8)
#endif
1:
#if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP)
- PTR_L t8, __stack_chk_guard
+ PTR_LA t8, __stack_chk_guard
LONG_L t9, TASK_STACK_CANARY(a1)
LONG_S t9, 0(t8)
#endif
#include <linux/export.h>
#include <asm/cpu-features.h>
+#include <asm/cpu-type.h>
#include <asm/div64.h>
#include <asm/smtc_ipi.h>
#include <asm/time.h>
#include <asm/break.h>
#include <asm/cop2.h>
#include <asm/cpu.h>
+#include <asm/cpu-type.h>
#include <asm/dsp.h>
#include <asm/fpu.h>
#include <asm/fpu_emulator.h>
regs->regs[rt] = read_c0_count();
return 0;
case 3: /* Count register resolution */
- switch (current_cpu_data.cputype) {
+ switch (current_cpu_type()) {
case CPU_20KC:
case CPU_25KF:
regs->regs[rt] = 1;
#include <asm/bootinfo.h>
#include <asm/cacheops.h>
#include <asm/cpu-features.h>
+#include <asm/cpu-type.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/r4kcache.h>
unsigned long dcache_size;
unsigned int config1;
struct cpuinfo_mips *c = ¤t_cpu_data;
+ int cputype = current_cpu_type();
config1 = read_c0_config1();
- switch (c->cputype) {
+ switch (cputype) {
case CPU_CAVIUM_OCTEON:
case CPU_CAVIUM_OCTEON_PLUS:
c->icache.linesz = 2 << ((config1 >> 19) & 7);
c->icache.sets * c->icache.ways * c->icache.linesz;
c->icache.waybit = ffs(icache_size / c->icache.ways) - 1;
c->dcache.linesz = 128;
- if (c->cputype == CPU_CAVIUM_OCTEON_PLUS)
+ if (cputype == CPU_CAVIUM_OCTEON_PLUS)
c->dcache.sets = 2; /* CN5XXX has two Dcache sets */
else
c->dcache.sets = 1; /* CN3XXX has one Dcache set */
#include <linux/highmem.h>
#include <linux/kernel.h>
#include <linux/linkage.h>
+#include <linux/preempt.h>
#include <linux/sched.h>
#include <linux/smp.h>
#include <linux/mm.h>
#include <asm/cacheops.h>
#include <asm/cpu.h>
#include <asm/cpu-features.h>
+#include <asm/cpu-type.h>
#include <asm/io.h>
#include <asm/page.h>
#include <asm/pgtable.h>
/* Catch bad driver code */
BUG_ON(size == 0);
+ preempt_disable();
if (cpu_has_inclusive_pcaches) {
if (size >= scache_size)
r4k_blast_scache();
else
blast_scache_range(addr, addr + size);
+ preempt_enable();
__sync();
return;
}
R4600_HIT_CACHEOP_WAR_IMPL;
blast_dcache_range(addr, addr + size);
}
+ preempt_enable();
bc_wback_inv(addr, size);
__sync();
/* Catch bad driver code */
BUG_ON(size == 0);
+ preempt_disable();
if (cpu_has_inclusive_pcaches) {
if (size >= scache_size)
r4k_blast_scache();
*/
blast_inv_scache_range(addr, addr + size);
}
+ preempt_enable();
__sync();
return;
}
R4600_HIT_CACHEOP_WAR_IMPL;
blast_inv_dcache_range(addr, addr + size);
}
+ preempt_enable();
bc_inv(addr, size);
__sync();
static inline void alias_74k_erratum(struct cpuinfo_mips *c)
{
+ unsigned int imp = c->processor_id & PRID_IMP_MASK;
+ unsigned int rev = c->processor_id & PRID_REV_MASK;
+
/*
* Early versions of the 74K do not update the cache tags on a
* vtag miss/ptag hit which can occur in the case of KSEG0/KUSEG
* aliases. In this case it is better to treat the cache as always
* having aliases.
*/
- if ((c->processor_id & 0xff) <= PRID_REV_ENCODE_332(2, 4, 0))
- c->dcache.flags |= MIPS_CACHE_VTAG;
- if ((c->processor_id & 0xff) == PRID_REV_ENCODE_332(2, 4, 0))
- write_c0_config6(read_c0_config6() | MIPS_CONF6_SYND);
- if (((c->processor_id & 0xff00) == PRID_IMP_1074K) &&
- ((c->processor_id & 0xff) <= PRID_REV_ENCODE_332(1, 1, 0))) {
- c->dcache.flags |= MIPS_CACHE_VTAG;
- write_c0_config6(read_c0_config6() | MIPS_CONF6_SYND);
+ switch (imp) {
+ case PRID_IMP_74K:
+ if (rev <= PRID_REV_ENCODE_332(2, 4, 0))
+ c->dcache.flags |= MIPS_CACHE_VTAG;
+ if (rev == PRID_REV_ENCODE_332(2, 4, 0))
+ write_c0_config6(read_c0_config6() | MIPS_CONF6_SYND);
+ break;
+ case PRID_IMP_1074K:
+ if (rev <= PRID_REV_ENCODE_332(1, 1, 0)) {
+ c->dcache.flags |= MIPS_CACHE_VTAG;
+ write_c0_config6(read_c0_config6() | MIPS_CONF6_SYND);
+ }
+ break;
+ default:
+ BUG();
}
}
unsigned long config1;
unsigned int lsize;
- switch (c->cputype) {
+ switch (current_cpu_type()) {
case CPU_R4600: /* QED style two way caches? */
case CPU_R4700:
case CPU_R5000:
* presumably no vendor is shipping his hardware in the "bad"
* configuration.
*/
- if ((prid & 0xff00) == PRID_IMP_R4000 && (prid & 0xff) < 0x40 &&
+ if ((prid & PRID_IMP_MASK) == PRID_IMP_R4000 &&
+ (prid & PRID_REV_MASK) < PRID_REV_R4400 &&
!(config & CONF_SC) && c->icache.linesz != 16 &&
PAGE_SIZE <= 0x8000)
panic("Improper R4000SC processor configuration detected");
* normally they'd suffer from aliases but magic in the hardware deals
* with that for us so we don't need to take care ourselves.
*/
- switch (c->cputype) {
+ switch (current_cpu_type()) {
case CPU_20KC:
case CPU_25KF:
case CPU_SB1:
case CPU_34K:
case CPU_74K:
case CPU_1004K:
- if (c->cputype == CPU_74K)
+ if (current_cpu_type() == CPU_74K)
alias_74k_erratum(c);
if ((read_c0_config7() & (1 << 16))) {
/* effectively physically indexed dcache,
c->dcache.flags |= MIPS_CACHE_ALIASES;
}
- switch (c->cputype) {
+ switch (current_cpu_type()) {
case CPU_20KC:
/*
* Some older 20Kc chips doesn't have the 'VI' bit in
* processors don't have a S-cache that would be relevant to the
* Linux memory management.
*/
- switch (c->cputype) {
+ switch (current_cpu_type()) {
case CPU_R4000SC:
case CPU_R4000MC:
case CPU_R4400SC:
{
extern char __weak except_vec2_generic;
extern char __weak except_vec2_sb1;
- struct cpuinfo_mips *c = ¤t_cpu_data;
- switch (c->cputype) {
+ switch (current_cpu_type()) {
case CPU_SB1:
case CPU_SB1A:
set_uncached_handler(0x100, &except_vec2_sb1, 0x80);
#include <linux/highmem.h>
#include <asm/cache.h>
+#include <asm/cpu-type.h>
#include <asm/io.h>
#include <dma-coherence.h>
{
int i;
- /* Make sure that gcc doesn't leave the empty loop body. */
- for (i = 0; i < nelems; i++, sg++) {
- if (cpu_needs_post_dma_flush(dev))
+ if (cpu_needs_post_dma_flush(dev))
+ for (i = 0; i < nelems; i++, sg++)
__dma_sync(sg_page(sg), sg->offset, sg->length,
direction);
- }
}
static void mips_dma_sync_sg_for_device(struct device *dev,
{
int i;
- /* Make sure that gcc doesn't leave the empty loop body. */
- for (i = 0; i < nelems; i++, sg++) {
- if (!plat_device_is_coherent(dev))
+ if (!plat_device_is_coherent(dev))
+ for (i = 0; i < nelems; i++, sg++)
__dma_sync(sg_page(sg), sg->offset, sg->length,
direction);
- }
}
int mips_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
#include <asm/bugs.h>
#include <asm/cacheops.h>
+#include <asm/cpu-type.h>
#include <asm/inst.h>
#include <asm/io.h>
#include <asm/page.h>
#include <linux/sched.h>
#include <linux/mm.h>
+#include <asm/cpu-type.h>
#include <asm/mipsregs.h>
#include <asm/bcache.h>
#include <asm/cacheops.h>
unsigned int tmp;
/* Check the bypass bit (L2B) */
- switch (c->cputype) {
+ switch (current_cpu_type()) {
case CPU_34K:
case CPU_74K:
case CPU_1004K:
#include <linux/module.h>
#include <asm/cpu.h>
+#include <asm/cpu-type.h>
#include <asm/bootinfo.h>
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
#include <linux/cache.h>
#include <asm/cacheflush.h>
+#include <asm/cpu-type.h>
#include <asm/pgtable.h>
#include <asm/war.h>
#include <asm/uasm.h>
{
int cpu;
- for (cpu = 0; cpu < NR_CPUS; cpu++) {
+ for (cpu = 0; cpu < nr_cpu_ids; cpu++) {
fill_ipi_map1(gic_resched_int_base, cpu, GIC_CPU_INT1);
fill_ipi_map1(gic_call_int_base, cpu, GIC_CPU_INT2);
}
/* FIXME */
int i;
#if defined(CONFIG_MIPS_MT_SMP)
- gic_call_int_base = GIC_NUM_INTRS - NR_CPUS;
- gic_resched_int_base = gic_call_int_base - NR_CPUS;
+ gic_call_int_base = GIC_NUM_INTRS -
+ (NR_CPUS - nr_cpu_ids) * 2 - nr_cpu_ids;
+ gic_resched_int_base = gic_call_int_base - nr_cpu_ids;
fill_ipi_map();
#endif
gic_init(GIC_BASE_ADDR, GIC_ADDRSPACE_SZ, gic_intr_map,
printk("CPU%d: status register now %08x\n", smp_processor_id(), read_c0_status());
write_c0_status(0x1100dc00);
printk("CPU%d: status register frc %08x\n", smp_processor_id(), read_c0_status());
- for (i = 0; i < NR_CPUS; i++) {
+ for (i = 0; i < nr_cpu_ids; i++) {
arch_init_ipiirq(MIPS_GIC_IRQ_BASE +
GIC_RESCHED_INT(i), &irq_resched);
arch_init_ipiirq(MIPS_GIC_IRQ_BASE +
#include <linux/timex.h>
#include <linux/mc146818rtc.h>
+#include <asm/cpu.h>
#include <asm/mipsregs.h>
#include <asm/mipsmtregs.h>
#include <asm/hardirq.h>
#endif
#if defined (CONFIG_KVM_GUEST) && defined (CONFIG_KVM_HOST_FREQ)
- unsigned int prid = read_c0_prid() & 0xffff00;
+ unsigned int prid = read_c0_prid() & (PRID_COMP_MASK | PRID_IMP_MASK);
/*
* XXXKYMA: hardwire the CPU frequency to Host Freq/4
void __init plat_time_init(void)
{
- unsigned int prid = read_c0_prid() & 0xffff00;
+ unsigned int prid = read_c0_prid() & (PRID_COMP_MASK | PRID_IMP_MASK);
unsigned int freq;
estimate_frequencies();
*/
#include <linux/init.h>
+#include <asm/cpu.h>
#include <asm/setup.h>
#include <asm/time.h>
#include <asm/irq.h>
*/
static unsigned int __init estimate_cpu_frequency(void)
{
- unsigned int prid = read_c0_prid() & 0xffff00;
+ unsigned int prid = read_c0_prid() & (PRID_COMP_MASK | PRID_IMP_MASK);
unsigned int tick = 0;
unsigned int freq;
unsigned int orig;
#include <linux/irq.h>
#include <linux/interrupt.h>
+#include <asm/cpu.h>
#include <asm/mipsregs.h>
#include <asm/netlogic/xlr/fmn.h>
#include <asm/netlogic/xlr/xlr.h>
int processor_id, num_core;
num_core = hweight32(nlm_current_node()->coremask);
- processor_id = read_c0_prid() & 0xff00;
+ processor_id = read_c0_prid() & PRID_IMP_MASK;
setup_cpu_fmninfo(cpu, num_core);
switch (processor_id) {
#include <linux/oprofile.h>
#include <linux/smp.h>
#include <asm/cpu-info.h>
+#include <asm/cpu-type.h>
#include "op_impl.h"
#include <linux/mm.h>
#include <linux/console.h>
#include <linux/tty.h>
+#include <linux/vt.h>
#include <asm/sibyte/bcm1480_regs.h>
#include <asm/sibyte/bcm1480_scd.h>
return -ENOENT;
}
- rt->membase = devm_request_and_ioremap(&pdev->dev, res);
+ rt->membase = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(rt->membase))
return PTR_ERR(rt->membase);
#include <linux/string.h>
#include <asm/bootinfo.h>
+#include <asm/cpu.h>
#include <asm/mipsregs.h>
#include <asm/io.h>
#include <asm/sibyte/sb1250.h>
uint64_t sys_rev;
int plldiv;
- sb1_pass = read_c0_prid() & 0xff;
+ sb1_pass = read_c0_prid() & PRID_REV_MASK;
sys_rev = __raw_readq(IOADDR(A_SCD_SYSTEM_REVISION));
soc_type = SYS_SOC_TYPE(sys_rev);
part_type = G_SYS_PART(sys_rev);
#include <linux/string.h>
#include <asm/bootinfo.h>
+#include <asm/cpu.h>
#include <asm/mipsregs.h>
#include <asm/io.h>
#include <asm/sibyte/sb1250.h>
int plldiv;
int bad_config = 0;
- sb1_pass = read_c0_prid() & 0xff;
+ sb1_pass = read_c0_prid() & PRID_REV_MASK;
sys_rev = __raw_readq(IOADDR(A_SCD_SYSTEM_REVISION));
soc_type = SYS_SOC_TYPE(sys_rev);
soc_pass = G_SYS_REVISION(sys_rev);
#endif
#include <asm/bootinfo.h>
+#include <asm/cpu.h>
#include <asm/io.h>
#include <asm/reboot.h>
#include <asm/sni.h>
system_type = "RM300-Cxx";
break;
case SNI_BRD_PCI_DESKTOP:
- switch (read_c0_prid() & 0xff00) {
+ switch (read_c0_prid() & PRID_IMP_MASK) {
case PRID_IMP_R4600:
case PRID_IMP_R4700:
system_type = "RM200-C20";
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
*/
-
-#include <linux/of.h> /* linux/of.h gets to determine #include ordering */
-
#ifndef _ASM_OPENRISC_PROM_H
#define _ASM_OPENRISC_PROM_H
-#ifdef __KERNEL__
-#ifndef __ASSEMBLY__
-#include <linux/types.h>
-#include <asm/irq.h>
-#include <linux/irqdomain.h>
-#include <linux/atomic.h>
-#include <linux/of_irq.h>
-#include <linux/of_fdt.h>
-#include <linux/of_address.h>
-#include <linux/proc_fs.h>
-#include <linux/platform_device.h>
#define HAVE_ARCH_DEVTREE_FIXUPS
-/* Other Prototypes */
-extern int early_uartlite_console(void);
-
-/* Parse the ibm,dma-window property of an OF node into the busno, phys and
- * size parameters.
- */
-void of_parse_dma_window(struct device_node *dn, const void *dma_window_prop,
- unsigned long *busno, unsigned long *phys, unsigned long *size);
-
-extern void kdump_move_device_tree(void);
-
-/* Get the MAC address */
-extern const void *of_get_mac_address(struct device_node *np);
-
-/**
- * of_irq_map_pci - Resolve the interrupt for a PCI device
- * @pdev: the device whose interrupt is to be resolved
- * @out_irq: structure of_irq filled by this function
- *
- * This function resolves the PCI interrupt for a given PCI device. If a
- * device-node exists for a given pci_dev, it will use normal OF tree
- * walking. If not, it will implement standard swizzling and walk up the
- * PCI tree until an device-node is found, at which point it will finish
- * resolving using the OF tree walking.
- */
-struct pci_dev;
-extern int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq);
-
-#endif /* __ASSEMBLY__ */
-#endif /* __KERNEL__ */
#endif /* _ASM_OPENRISC_PROM_H */
CONFIG_LLC2=m
CONFIG_NET_PKTGEN=m
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
CONFIG_PARPORT=y
CONFIG_LLC2=m
CONFIG_NET_PKTGEN=m
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
CONFIG_BLK_DEV_UMEM=m
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_SYSFS_DEPRECATED_V2=y
+CONFIG_BLK_DEV_INITRD=y
CONFIG_SLAB=y
CONFIG_MODULES=y
CONFIG_MODVERSIONS=y
# CONFIG_INET_LRO is not set
CONFIG_IPV6=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
CONFIG_PARPORT=y
CONFIG_PARPORT_PC=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_SYSFS_DEPRECATED_V2=y
+CONFIG_BLK_DEV_INITRD=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_EXPERT=y
CONFIG_KALLSYMS_ALL=y
CONFIG_IP_NF_QUEUE=m
CONFIG_NET_PKTGEN=m
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
CONFIG_BLK_DEV_UMEM=m
CONFIG_LLC2=m
CONFIG_DNS_RESOLVER=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
CONFIG_PARPORT=y
CONFIG_PARPORT_PC=y
CONFIG_INET6_IPCOMP=y
CONFIG_LLC2=m
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
CONFIG_PARPORT=y
/* traps.c */
void parisc_terminate(char *msg, struct pt_regs *regs,
- int code, unsigned long offset);
+ int code, unsigned long offset) __noreturn __cold;
/* mm/fault.c */
void do_page_fault(struct pt_regs *regs, unsigned long code,
ldw MEM_PDC_HI(%r0),%r6
depd %r6, 31, 32, %r3 /* move to upper word */
+ mfctl %cr30,%r6 /* PCX-W2 firmware bug */
+
ldo PDC_PSW(%r0),%arg0 /* 21 */
ldo PDC_PSW_SET_DEFAULTS(%r0),%arg1 /* 2 */
ldo PDC_PSW_WIDE_BIT(%r0),%arg2 /* 2 */
copy %r0,%arg3
stext_pdc_ret:
+ mtctl %r6,%cr30 /* restore task thread info */
+
/* restore rfi target address*/
ldd TI_TASK-THREAD_SZ_ALGN(%sp), %r10
tophys_r1 %r10
IPI_NOP=0,
IPI_RESCHEDULE=1,
IPI_CALL_FUNC,
- IPI_CALL_FUNC_SINGLE,
IPI_CPU_START,
IPI_CPU_STOP,
IPI_CPU_TEST
generic_smp_call_function_interrupt();
break;
- case IPI_CALL_FUNC_SINGLE:
- smp_debug(100, KERN_DEBUG "CPU%d IPI_CALL_FUNC_SINGLE\n", this_cpu);
- generic_smp_call_function_single_interrupt();
- break;
-
case IPI_CPU_START:
smp_debug(100, KERN_DEBUG "CPU%d IPI_CPU_START\n", this_cpu);
break;
void arch_send_call_function_single_ipi(int cpu)
{
- send_IPI_single(cpu, IPI_CALL_FUNC_SINGLE);
+ send_IPI_single(cpu, IPI_CALL_FUNC);
}
/*
do_exit(SIGSEGV);
}
-int syscall_ipi(int (*syscall) (struct pt_regs *), struct pt_regs *regs)
-{
- return syscall(regs);
-}
-
/* gdb uses break 4,8 */
#define GDB_BREAK_INSN 0x10004
static void handle_gdb_break(struct pt_regs *regs, int wot)
else {
/*
- * The kernel should never fault on its own address space.
+ * The kernel should never fault on its own address space,
+ * unless pagefault_disable() was called before.
*/
- if (fault_space == 0)
+ if (fault_space == 0 && !in_atomic())
{
pdc_chassis_send_status(PDC_CHASSIS_DIRECT_PANIC);
parisc_terminate("Kernel Fault", regs, code, fault_address);
-
}
}
#ifdef __KERNEL__
#include <linux/module.h>
#include <linux/compiler.h>
-#include <asm/uaccess.h>
+#include <linux/uaccess.h>
#define s_space "%%sr1"
#define d_space "%%sr2"
#else
EXPORT_SYMBOL(copy_from_user);
EXPORT_SYMBOL(copy_in_user);
EXPORT_SYMBOL(memcpy);
+
+long probe_kernel_read(void *dst, const void *src, size_t size)
+{
+ unsigned long addr = (unsigned long)src;
+
+ if (size < 0 || addr < PAGE_SIZE)
+ return -EFAULT;
+
+ /* check for I/O space F_EXTEND(0xfff00000) access as well? */
+
+ return __probe_kernel_read(dst, src, size);
+}
+
#endif
unsigned long address)
{
struct vm_area_struct *vma, *prev_vma;
- struct task_struct *tsk = current;
- struct mm_struct *mm = tsk->mm;
+ struct task_struct *tsk;
+ struct mm_struct *mm;
unsigned long acc_type;
int fault;
- unsigned int flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+ unsigned int flags;
- if (in_atomic() || !mm)
+ if (in_atomic())
goto no_context;
+ tsk = current;
+ mm = tsk->mm;
+ if (!mm)
+ goto no_context;
+
+ flags = FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
if (user_mode(regs))
flags |= FAULT_FLAG_USER;
+
+ acc_type = parisc_acctyp(code, regs->iir);
if (acc_type & VM_WRITE)
flags |= FAULT_FLAG_WRITE;
retry:
good_area:
- acc_type = parisc_acctyp(code,regs->iir);
-
if ((vma->vm_flags & acc_type) != acc_type)
goto bad_area;
src-wlib-$(CONFIG_PPC_82xx) += pq2.c fsl-soc.c planetcore.c
src-wlib-$(CONFIG_EMBEDDED6xx) += mv64x60.c mv64x60_i2c.c ugecon.c
-src-plat-y := of.c
+src-plat-y := of.c epapr.c
src-plat-$(CONFIG_40x) += fixed-head.S ep405.c cuboot-hotfoot.c \
treeboot-walnut.c cuboot-acadia.c \
cuboot-kilauea.c simpleboot.c \
prpmc2800.c
src-plat-$(CONFIG_AMIGAONE) += cuboot-amigaone.c
src-plat-$(CONFIG_PPC_PS3) += ps3-head.S ps3-hvcall.S ps3.c
-src-plat-$(CONFIG_EPAPR_BOOT) += epapr.c
+src-plat-$(CONFIG_EPAPR_BOOT) += epapr.c epapr-wrapper.c
src-wlib := $(sort $(src-wlib-y))
src-plat := $(sort $(src-plat-y))
--- /dev/null
+extern void epapr_platform_init(unsigned long r3, unsigned long r4,
+ unsigned long r5, unsigned long r6,
+ unsigned long r7);
+
+void platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ epapr_platform_init(r3, r4, r5, r6, r7);
+}
fdt_addr, fdt_totalsize((void *)fdt_addr), ima_size);
}
-void platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
- unsigned long r6, unsigned long r7)
+void epapr_platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
{
epapr_magic = r6;
ima_size = r7;
static unsigned long claim_base;
+void epapr_platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7);
+
static void *of_try_claim(unsigned long size)
{
unsigned long addr = 0;
}
}
-void platform_init(unsigned long a1, unsigned long a2, void *promptr)
+static void of_platform_init(unsigned long a1, unsigned long a2, void *promptr)
{
platform_ops.image_hdr = of_image_hdr;
platform_ops.malloc = of_try_claim;
loader_info.initrd_size = a2;
}
}
+
+void platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* Detect OF vs. ePAPR boot */
+ if (r5)
+ of_platform_init(r3, r4, (void *)r5);
+ else
+ epapr_platform_init(r3, r4, r5, r6, r7);
+}
+
case "$platform" in
pseries)
- platformo=$object/of.o
+ platformo="$object/of.o $object/epapr.o"
link_address='0x4000000'
;;
maple)
- platformo=$object/of.o
+ platformo="$object/of.o $object/epapr.o"
link_address='0x400000'
;;
pmac|chrp)
- platformo=$object/of.o
+ platformo="$object/of.o $object/epapr.o"
;;
coff)
- platformo="$object/crt0.o $object/of.o"
+ platformo="$object/crt0.o $object/of.o $object/epapr.o"
lds=$object/zImage.coff.lds
link_address='0x500000'
pie=
platformo="$object/treeboot-iss4xx.o"
;;
epapr)
+ platformo="$object/epapr.o $object/epapr-wrapper.o"
link_address='0x20000000'
pie=-pie
;;
extern void irq_ctx_init(void);
extern void call_do_softirq(struct thread_info *tp);
-extern int call_handle_irq(int irq, void *p1,
- struct thread_info *tp, void *func);
+extern void call_do_irq(struct pt_regs *regs, struct thread_info *tp);
extern void do_IRQ(struct pt_regs *regs);
+extern void __do_irq(struct pt_regs *regs);
int irq_choose_cpu(const struct cpumask *mask);
static __always_inline bool arch_static_branch(struct static_key *key)
{
- asm goto("1:\n\t"
+ asm_volatile_goto("1:\n\t"
"nop\n\t"
".pushsection __jump_table, \"aw\"\n\t"
JUMP_ENTRY_TYPE "1b, %l[l_yes], %c0\n\t"
struct thread_struct {
unsigned long ksp; /* Kernel stack pointer */
- unsigned long ksp_limit; /* if ksp <= ksp_limit stack overflow */
-
#ifdef CONFIG_PPC64
unsigned long ksp_vsid;
#endif
#endif
#ifdef CONFIG_PPC32
void *pgdir; /* root of page-table tree */
+ unsigned long ksp_limit; /* if ksp <= ksp_limit stack overflow */
#endif
#ifdef CONFIG_PPC_ADV_DEBUG_REGS
/*
#else
#define INIT_THREAD { \
.ksp = INIT_SP, \
- .ksp_limit = INIT_SP_LIMIT, \
.regs = (struct pt_regs *)INIT_SP - 1, /* XXX bogus, I think */ \
.fs = KERNEL_DS, \
.fpr = {{0}}, \
DEFINE(TASKTHREADPPR, offsetof(struct task_struct, thread.ppr));
#else
DEFINE(THREAD_INFO, offsetof(struct task_struct, stack));
+ DEFINE(THREAD_INFO_GAP, _ALIGN_UP(sizeof(struct thread_info), 16));
+ DEFINE(KSP_LIMIT, offsetof(struct thread_struct, ksp_limit));
#endif /* CONFIG_PPC64 */
DEFINE(KSP, offsetof(struct thread_struct, ksp));
- DEFINE(KSP_LIMIT, offsetof(struct thread_struct, ksp_limit));
DEFINE(PT_REGS, offsetof(struct thread_struct, regs));
#ifdef CONFIG_BOOKE
DEFINE(THREAD_NORMSAVES, offsetof(struct thread_struct, normsave[0]));
/* number of bytes needed for the bitmap */
sz = BITS_TO_LONGS(tbl->it_size) * sizeof(unsigned long);
- page = alloc_pages_node(nid, GFP_ATOMIC, get_order(sz));
+ page = alloc_pages_node(nid, GFP_KERNEL, get_order(sz));
if (!page)
panic("iommu_init_table: Can't allocate %ld bytes\n", sz);
tbl->it_map = page_address(page);
}
#endif
-static inline void handle_one_irq(unsigned int irq)
-{
- struct thread_info *curtp, *irqtp;
- unsigned long saved_sp_limit;
- struct irq_desc *desc;
-
- desc = irq_to_desc(irq);
- if (!desc)
- return;
-
- /* Switch to the irq stack to handle this */
- curtp = current_thread_info();
- irqtp = hardirq_ctx[smp_processor_id()];
-
- if (curtp == irqtp) {
- /* We're already on the irq stack, just handle it */
- desc->handle_irq(irq, desc);
- return;
- }
-
- saved_sp_limit = current->thread.ksp_limit;
-
- irqtp->task = curtp->task;
- irqtp->flags = 0;
-
- /* Copy the softirq bits in preempt_count so that the
- * softirq checks work in the hardirq context. */
- irqtp->preempt_count = (irqtp->preempt_count & ~SOFTIRQ_MASK) |
- (curtp->preempt_count & SOFTIRQ_MASK);
-
- current->thread.ksp_limit = (unsigned long)irqtp +
- _ALIGN_UP(sizeof(struct thread_info), 16);
-
- call_handle_irq(irq, desc, irqtp, desc->handle_irq);
- current->thread.ksp_limit = saved_sp_limit;
- irqtp->task = NULL;
-
- /* Set any flag that may have been set on the
- * alternate stack
- */
- if (irqtp->flags)
- set_bits(irqtp->flags, &curtp->flags);
-}
-
static inline void check_stack_overflow(void)
{
#ifdef CONFIG_DEBUG_STACKOVERFLOW
#endif
}
-void do_IRQ(struct pt_regs *regs)
+void __do_irq(struct pt_regs *regs)
{
- struct pt_regs *old_regs = set_irq_regs(regs);
+ struct irq_desc *desc;
unsigned int irq;
irq_enter();
*/
irq = ppc_md.get_irq();
- /* We can hard enable interrupts now */
+ /* We can hard enable interrupts now to allow perf interrupts */
may_hard_irq_enable();
/* And finally process it */
- if (irq != NO_IRQ)
- handle_one_irq(irq);
- else
+ if (unlikely(irq == NO_IRQ))
__get_cpu_var(irq_stat).spurious_irqs++;
+ else {
+ desc = irq_to_desc(irq);
+ if (likely(desc))
+ desc->handle_irq(irq, desc);
+ }
trace_irq_exit(regs);
irq_exit();
+}
+
+void do_IRQ(struct pt_regs *regs)
+{
+ struct pt_regs *old_regs = set_irq_regs(regs);
+ struct thread_info *curtp, *irqtp, *sirqtp;
+
+ /* Switch to the irq stack to handle this */
+ curtp = current_thread_info();
+ irqtp = hardirq_ctx[raw_smp_processor_id()];
+ sirqtp = softirq_ctx[raw_smp_processor_id()];
+
+ /* Already there ? */
+ if (unlikely(curtp == irqtp || curtp == sirqtp)) {
+ __do_irq(regs);
+ set_irq_regs(old_regs);
+ return;
+ }
+
+ /* Prepare the thread_info in the irq stack */
+ irqtp->task = curtp->task;
+ irqtp->flags = 0;
+
+ /* Copy the preempt_count so that the [soft]irq checks work. */
+ irqtp->preempt_count = curtp->preempt_count;
+
+ /* Switch stack and call */
+ call_do_irq(regs, irqtp);
+
+ /* Restore stack limit */
+ irqtp->task = NULL;
+
+ /* Copy back updates to the thread_info */
+ if (irqtp->flags)
+ set_bits(irqtp->flags, &curtp->flags);
+
set_irq_regs(old_regs);
}
memset((void *)softirq_ctx[i], 0, THREAD_SIZE);
tp = softirq_ctx[i];
tp->cpu = i;
- tp->preempt_count = 0;
memset((void *)hardirq_ctx[i], 0, THREAD_SIZE);
tp = hardirq_ctx[i];
tp->cpu = i;
- tp->preempt_count = HARDIRQ_OFFSET;
}
}
static inline void do_softirq_onstack(void)
{
struct thread_info *curtp, *irqtp;
- unsigned long saved_sp_limit = current->thread.ksp_limit;
curtp = current_thread_info();
irqtp = softirq_ctx[smp_processor_id()];
irqtp->task = curtp->task;
irqtp->flags = 0;
- current->thread.ksp_limit = (unsigned long)irqtp +
- _ALIGN_UP(sizeof(struct thread_info), 16);
call_do_softirq(irqtp);
- current->thread.ksp_limit = saved_sp_limit;
irqtp->task = NULL;
/* Set any flag that may have been set on the
.text
+/*
+ * We store the saved ksp_limit in the unused part
+ * of the STACK_FRAME_OVERHEAD
+ */
_GLOBAL(call_do_softirq)
mflr r0
stw r0,4(r1)
+ lwz r10,THREAD+KSP_LIMIT(r2)
+ addi r11,r3,THREAD_INFO_GAP
stwu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r3)
mr r1,r3
+ stw r10,8(r1)
+ stw r11,THREAD+KSP_LIMIT(r2)
bl __do_softirq
+ lwz r10,8(r1)
lwz r1,0(r1)
lwz r0,4(r1)
+ stw r10,THREAD+KSP_LIMIT(r2)
mtlr r0
blr
-_GLOBAL(call_handle_irq)
+_GLOBAL(call_do_irq)
mflr r0
stw r0,4(r1)
- mtctr r6
- stwu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r5)
- mr r1,r5
- bctrl
+ lwz r10,THREAD+KSP_LIMIT(r2)
+ addi r11,r3,THREAD_INFO_GAP
+ stwu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r4)
+ mr r1,r4
+ stw r10,8(r1)
+ stw r11,THREAD+KSP_LIMIT(r2)
+ bl __do_irq
+ lwz r10,8(r1)
lwz r1,0(r1)
lwz r0,4(r1)
+ stw r10,THREAD+KSP_LIMIT(r2)
mtlr r0
blr
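What the two 32-bit assembly stubs above do, expressed as rough C pseudocode (illustrative only, not part of the patch; the actual switch of r1 cannot be written in C, and THREAD_INFO_GAP is the asm-offsets constant added earlier in this series, not a C-visible macro):

	static void call_do_irq_sketch(struct pt_regs *regs,
				       struct thread_info *irqtp)
	{
		/* Save the old ksp_limit; the assembly stashes it in the
		 * otherwise unused word at offset 8 of the new stack frame. */
		unsigned long saved_limit = current->thread.ksp_limit;

		current->thread.ksp_limit = (unsigned long)irqtp + THREAD_INFO_GAP;
		/* ... stack pointer (r1) switched to the top of the irq stack ... */
		__do_irq(regs);			/* __do_softirq() in the softirq stub */
		/* ... stack pointer switched back to the caller's stack ... */
		current->thread.ksp_limit = saved_limit;
	}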
mtlr r0
blr
-_GLOBAL(call_handle_irq)
- ld r8,0(r6)
+_GLOBAL(call_do_irq)
mflr r0
std r0,16(r1)
- mtctr r8
- stdu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r5)
- mr r1,r5
- bctrl
+ stdu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r4)
+ mr r1,r4
+ bl .__do_irq
ld r1,0(r1)
ld r0,16(r1)
mtlr r0
kregs = (struct pt_regs *) sp;
sp -= STACK_FRAME_OVERHEAD;
p->thread.ksp = sp;
+#ifdef CONFIG_PPC32
p->thread.ksp_limit = (unsigned long)task_stack_page(p) +
_ALIGN_UP(sizeof(struct thread_info), 16);
-
+#endif
#ifdef CONFIG_HAVE_HW_BREAKPOINT
p->thread.ptrace_bps[0] = NULL;
#endif
static cell_t __initdata regbuf[1024];
+static bool rtas_has_query_cpu_stopped;
+
/*
* Error results ... some OF calls will return "-1" on error, some
prom_setprop(rtas_node, "/rtas", "linux,rtas-entry",
&val, sizeof(val));
+ /* Check if it supports "query-cpu-stopped-state" */
+ if (prom_getprop(rtas_node, "query-cpu-stopped-state",
+ &val, sizeof(val)) != PROM_ERROR)
+ rtas_has_query_cpu_stopped = true;
+
#if defined(CONFIG_PPC_POWERNV) && defined(__BIG_ENDIAN__)
/* PowerVN takeover hack */
prom_rtas_data = base;
= (void *) LOW_ADDR(__secondary_hold_acknowledge);
unsigned long secondary_hold = LOW_ADDR(__secondary_hold);
+ /*
+ * On pseries, if RTAS supports "query-cpu-stopped-state",
+ * we skip this stage; the CPUs will be started by the
+ * kernel using RTAS.
+ */
+ if ((of_platform == PLATFORM_PSERIES ||
+ of_platform == PLATFORM_PSERIES_LPAR) &&
+ rtas_has_query_cpu_stopped) {
+ prom_printf("prom_hold_cpus: skipped\n");
+ return;
+ }
+
prom_debug("prom_hold_cpus: start...\n");
prom_debug(" 1) spinloop = 0x%x\n", (unsigned long)spinloop);
prom_debug(" 1) *spinloop = 0x%x\n", *spinloop);
* On non-powermacs, put all CPUs in spin-loops.
*
* PowerMacs use a different mechanism to spin CPUs
+ *
+ * (This must be done after instantiating RTAS)
*/
if (of_platform != PLATFORM_POWERMAC &&
of_platform != PLATFORM_OPAL)
#include <asm/machdep.h>
#include <asm/smp.h>
#include <asm/pmc.h>
+#include <asm/firmware.h>
#include "cacheinfo.h"
SYSFS_PMCSETUP(dscr, SPRN_DSCR);
SYSFS_PMCSETUP(pir, SPRN_PIR);
+/*
+ Let's only enable read for phyp resources and
+ enable write when needed with a separate function.
+ Let's be conservative and default to pseries.
+*/
static DEVICE_ATTR(mmcra, 0600, show_mmcra, store_mmcra);
static DEVICE_ATTR(spurr, 0400, show_spurr, NULL);
static DEVICE_ATTR(dscr, 0600, show_dscr, store_dscr);
-static DEVICE_ATTR(purr, 0600, show_purr, store_purr);
+static DEVICE_ATTR(purr, 0400, show_purr, store_purr);
static DEVICE_ATTR(pir, 0400, show_pir, NULL);
unsigned long dscr_default = 0;
EXPORT_SYMBOL(dscr_default);
+static void add_write_permission_dev_attr(struct device_attribute *attr)
+{
+ attr->attr.mode |= 0200;
+}
+
static ssize_t show_dscr_default(struct device *dev,
struct device_attribute *attr, char *buf)
{
if (cpu_has_feature(CPU_FTR_MMCRA))
device_create_file(s, &dev_attr_mmcra);
- if (cpu_has_feature(CPU_FTR_PURR))
+ if (cpu_has_feature(CPU_FTR_PURR)) {
+ if (!firmware_has_feature(FW_FEATURE_LPAR))
+ add_write_permission_dev_attr(&dev_attr_purr);
device_create_file(s, &dev_attr_purr);
+ }
if (cpu_has_feature(CPU_FTR_SPURR))
device_create_file(s, &dev_attr_spurr);
TABORT(R3)
blr
+ .section ".toc","aw"
+DSCR_DEFAULT:
+ .tc dscr_default[TC],dscr_default
+
+ .section ".text"
/* void tm_reclaim(struct thread_struct *thread,
* unsigned long orig_msr,
mr r15, r14
ori r15, r15, MSR_FP
li r16, MSR_RI
+ ori r16, r16, MSR_EE /* IRQs hard off */
andc r15, r15, r16
oris r15, r15, MSR_VEC@h
#ifdef CONFIG_VSX
std r1, PACATMSCRATCH(r13)
ld r1, PACAR1(r13)
+ /* Store the PPR in r11 and reset to decent value */
+ std r11, GPR11(r1) /* Temporary stash */
+ mfspr r11, SPRN_PPR
+ HMT_MEDIUM
+
/* Now get some more GPRS free */
std r7, GPR7(r1) /* Temporary stash */
std r12, GPR12(r1) /* '' '' '' */
ld r12, STACK_PARAM(0)(r1) /* Param 0, thread_struct * */
+ std r11, THREAD_TM_PPR(r12) /* Store PPR and free r11 */
+
addi r7, r12, PT_CKPT_REGS /* Thread's ckpt_regs */
/* Make r7 look like an exception frame so that we
SAVE_GPR(0, r7) /* user r0 */
SAVE_GPR(2, r7) /* user r2 */
SAVE_4GPRS(3, r7) /* user r3-r6 */
- SAVE_4GPRS(8, r7) /* user r8-r11 */
+ SAVE_GPR(8, r7) /* user r8 */
+ SAVE_GPR(9, r7) /* user r9 */
+ SAVE_GPR(10, r7) /* user r10 */
ld r3, PACATMSCRATCH(r13) /* user r1 */
ld r4, GPR7(r1) /* user r7 */
- ld r5, GPR12(r1) /* user r12 */
- GET_SCRATCH0(6) /* user r13 */
+ ld r5, GPR11(r1) /* user r11 */
+ ld r6, GPR12(r1) /* user r12 */
+ GET_SCRATCH0(8) /* user r13 */
std r3, GPR1(r7)
std r4, GPR7(r7)
- std r5, GPR12(r7)
- std r6, GPR13(r7)
+ std r5, GPR11(r7)
+ std r6, GPR12(r7)
+ std r8, GPR13(r7)
SAVE_NVGPRS(r7) /* user r14-r31 */
std r6, _XER(r7)
- /* ******************** TAR, PPR, DSCR ********** */
+ /* ******************** TAR, DSCR ********** */
mfspr r3, SPRN_TAR
- mfspr r4, SPRN_PPR
- mfspr r5, SPRN_DSCR
+ mfspr r4, SPRN_DSCR
std r3, THREAD_TM_TAR(r12)
- std r4, THREAD_TM_PPR(r12)
- std r5, THREAD_TM_DSCR(r12)
+ std r4, THREAD_TM_DSCR(r12)
/* MSR and flags: We don't change CRs, and we don't need to alter
* MSR.
std r3, THREAD_TM_TFHAR(r12)
std r4, THREAD_TM_TFIAR(r12)
- /* AMR and PPR are checkpointed too, but are unsupported by Linux. */
+ /* AMR is checkpointed too, but is unsupported by Linux. */
/* Restore original MSR/IRQ state & clear TM mode */
ld r14, TM_FRAME_L0(r1) /* Orig MSR */
mtcr r4
mtlr r0
ld r2, 40(r1)
+
+ /* Load system default DSCR */
+ ld r4, DSCR_DEFAULT@toc(r2)
+ ld r0, 0(r4)
+ mtspr SPRN_DSCR, r0
+
blr
restore_gprs:
- /* ******************** TAR, PPR, DSCR ********** */
- ld r4, THREAD_TM_TAR(r3)
- ld r5, THREAD_TM_PPR(r3)
- ld r6, THREAD_TM_DSCR(r3)
+ /* ******************** CR,LR,CCR,MSR ********** */
+ ld r4, _CTR(r7)
+ ld r5, _LINK(r7)
+ ld r6, _CCR(r7)
+ ld r8, _XER(r7)
- mtspr SPRN_TAR, r4
- mtspr SPRN_PPR, r5
- mtspr SPRN_DSCR, r6
+ mtctr r4
+ mtlr r5
+ mtcr r6
+ mtxer r8
- /* ******************** CR,LR,CCR,MSR ********** */
- ld r3, _CTR(r7)
- ld r4, _LINK(r7)
- ld r5, _CCR(r7)
- ld r6, _XER(r7)
+ /* ******************** TAR ******************** */
+ ld r4, THREAD_TM_TAR(r3)
+ mtspr SPRN_TAR, r4
- mtctr r3
- mtlr r4
- mtcr r5
- mtxer r6
+ /* Load up the PPR and DSCR in GPRs only at this stage */
+ ld r5, THREAD_TM_DSCR(r3)
+ ld r6, THREAD_TM_PPR(r3)
/* Clear the MSR RI since we are about to change R1. EE is already off
*/
mtmsrd r4, 1
REST_4GPRS(0, r7) /* GPR0-3 */
- REST_GPR(4, r7) /* GPR4-6 */
- REST_GPR(5, r7)
- REST_GPR(6, r7)
+ REST_GPR(4, r7) /* GPR4 */
REST_4GPRS(8, r7) /* GPR8-11 */
REST_2GPRS(12, r7) /* GPR12-13 */
REST_NVGPRS(r7) /* GPR14-31 */
- ld r7, GPR7(r7) /* GPR7 */
+ /* Load up PPR and DSCR here so we don't run with user values for long
+ */
+ mtspr SPRN_DSCR, r5
+ mtspr SPRN_PPR, r6
+
+ REST_GPR(5, r7) /* GPR5-7 */
+ REST_GPR(6, r7)
+ ld r7, GPR7(r7)
/* Commit register state as checkpointed state: */
TRECHKPT
+ HMT_MEDIUM
+
/* Our transactional state has now changed.
*
* Now just get out of here. Transactional (current) state will be
mtcr r4
mtlr r0
ld r2, 40(r1)
+
+ /* Load system default DSCR */
+ ld r4, DSCR_DEFAULT@toc(r2)
+ ld r0, 0(r4)
+ mtspr SPRN_DSCR, r0
+
blr
/* ****************************************************************** */
const char *cp;
dn = dev->of_node;
- if (!dn)
- return -ENODEV;
+ if (!dn) {
+ strcat(buf, "\n");
+ return strlen(buf);
+ }
cp = of_get_property(dn, "compatible", NULL);
- if (!cp)
- return -ENODEV;
+ if (!cp) {
+ strcat(buf, "\n");
+ return strlen(buf);
+ }
return sprintf(buf, "vio:T%sS%s\n", vio_dev->type, cp);
}
BEGIN_FTR_SECTION
mfspr r8, SPRN_DSCR
ld r7, HSTATE_DSCR(r13)
- std r8, VCPU_DSCR(r7)
+ std r8, VCPU_DSCR(r9)
mtspr SPRN_DSCR, r7
END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
unsigned long hva;
int pfnmap = 0;
int tsize = BOOK3E_PAGESZ_4K;
+ int ret = 0;
+ unsigned long mmu_seq;
+ struct kvm *kvm = vcpu_e500->vcpu.kvm;
+
+ /* used to check for invalidations in progress */
+ mmu_seq = kvm->mmu_notifier_seq;
+ smp_rmb();
/*
* Translate guest physical to true physical, acquiring
gvaddr &= ~((tsize_pages << PAGE_SHIFT) - 1);
}
+ spin_lock(&kvm->mmu_lock);
+ if (mmu_notifier_retry(kvm, mmu_seq)) {
+ ret = -EAGAIN;
+ goto out;
+ }
+
kvmppc_e500_ref_setup(ref, gtlbe, pfn);
kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
/* Clear i-cache for new pages */
kvmppc_mmu_flush_icache(pfn);
+out:
+ spin_unlock(&kvm->mmu_lock);
+
/* Drop refcount on page, so that mmu notifiers can clear it */
kvm_release_pfn_clean(pfn);
- return 0;
+ return ret;
}
/* XXX only map the one-one case, for now use TLB0 */
blr
- .macro source
+ .macro srcnr
100:
.section __ex_table,"a"
.align 3
- .llong 100b,.Lsrc_error
+ .llong 100b,.Lsrc_error_nr
.previous
.endm
- .macro dest
+ .macro source
+150:
+ .section __ex_table,"a"
+ .align 3
+ .llong 150b,.Lsrc_error
+ .previous
+ .endm
+
+ .macro dstnr
200:
.section __ex_table,"a"
.align 3
- .llong 200b,.Ldest_error
+ .llong 200b,.Ldest_error_nr
+ .previous
+ .endm
+
+ .macro dest
+250:
+ .section __ex_table,"a"
+ .align 3
+ .llong 250b,.Ldest_error
.previous
.endm
rldicl. r6,r3,64-1,64-2 /* r6 = (r3 & 0x3) >> 1 */
beq .Lcopy_aligned
- li r7,4
- sub r6,r7,r6
+ li r9,4
+ sub r6,r9,r6
mtctr r6
1:
-source; lhz r6,0(r3) /* align to doubleword */
+srcnr; lhz r6,0(r3) /* align to doubleword */
subi r5,r5,2
addi r3,r3,2
adde r0,r0,r6
-dest; sth r6,0(r4)
+dstnr; sth r6,0(r4)
addi r4,r4,2
bdnz 1b
mtctr r6
3:
-source; ld r6,0(r3)
+srcnr; ld r6,0(r3)
addi r3,r3,8
adde r0,r0,r6
-dest; std r6,0(r4)
+dstnr; std r6,0(r4)
addi r4,r4,8
bdnz 3b
srdi. r6,r5,2
beq .Lcopy_tail_halfword
-source; lwz r6,0(r3)
+srcnr; lwz r6,0(r3)
addi r3,r3,4
adde r0,r0,r6
-dest; stw r6,0(r4)
+dstnr; stw r6,0(r4)
addi r4,r4,4
subi r5,r5,4
srdi. r6,r5,1
beq .Lcopy_tail_byte
-source; lhz r6,0(r3)
+srcnr; lhz r6,0(r3)
addi r3,r3,2
adde r0,r0,r6
-dest; sth r6,0(r4)
+dstnr; sth r6,0(r4)
addi r4,r4,2
subi r5,r5,2
andi. r6,r5,1
beq .Lcopy_finish
-source; lbz r6,0(r3)
+srcnr; lbz r6,0(r3)
sldi r9,r6,8 /* Pad the byte out to 16 bits */
adde r0,r0,r9
-dest; stb r6,0(r4)
+dstnr; stb r6,0(r4)
.Lcopy_finish:
addze r0,r0 /* add in final carry */
blr
.Lsrc_error:
+ ld r14,STK_REG(R14)(r1)
+ ld r15,STK_REG(R15)(r1)
+ ld r16,STK_REG(R16)(r1)
+ addi r1,r1,STACKFRAMESIZE
+.Lsrc_error_nr:
cmpdi 0,r7,0
beqlr
li r6,-EFAULT
blr
.Ldest_error:
+ ld r14,STK_REG(R14)(r1)
+ ld r15,STK_REG(R15)(r1)
+ ld r16,STK_REG(R16)(r1)
+ addi r1,r1,STACKFRAMESIZE
+.Ldest_error_nr:
cmpdi 0,r8,0
beqlr
li r6,-EFAULT
*/
if ((ra == 1) && !(regs->msr & MSR_PR) \
&& (val3 >= (regs->gpr[1] - STACK_INT_FRAME_SIZE))) {
+#ifdef CONFIG_PPC32
/*
* Check if we would overflow the kernel stack
*/
err = -EINVAL;
break;
}
-
+#endif /* CONFIG_PPC32 */
/*
* Check if we already set since that means we'll
* lose the previous value.
{
}
+void register_page_bootmem_memmap(unsigned long section_nr,
+ struct page *start_page, unsigned long size)
+{
+}
#endif /* CONFIG_SPARSEMEM_VMEMMAP */
}
#endif /* ! CONFIG_NEED_MULTIPLE_NODES */
+static void __init register_page_bootmem_info(void)
+{
+ int i;
+
+ for_each_online_node(i)
+ register_page_bootmem_info_node(NODE_DATA(i));
+}
+
void __init mem_init(void)
{
#ifdef CONFIG_SWIOTLB
swiotlb_init(0);
#endif
+ register_page_bootmem_info();
high_memory = (void *) __va(max_low_pfn * PAGE_SIZE);
set_max_mapnr(max_pfn);
free_all_bootmem();
{
if (fp->bpf_func != sk_run_filter)
module_free(NULL, fp->bpf_func);
+ kfree(fp);
}
#define MMCR1_UNIT_SHIFT(pmc) (60 - (4 * ((pmc) - 1)))
#define MMCR1_COMBINE_SHIFT(pmc) (35 - ((pmc) - 1))
#define MMCR1_PMCSEL_SHIFT(pmc) (24 - (((pmc) - 1)) * 8)
+#define MMCR1_FAB_SHIFT 36
#define MMCR1_DC_QUAL_SHIFT 47
#define MMCR1_IC_QUAL_SHIFT 46
* the threshold bits are used for the match value.
*/
if (event_is_fab_match(event[i])) {
- mmcr1 |= (event[i] >> EVENT_THR_CTL_SHIFT) &
- EVENT_THR_CTL_MASK;
+ mmcr1 |= ((event[i] >> EVENT_THR_CTL_SHIFT) &
+ EVENT_THR_CTL_MASK) << MMCR1_FAB_SHIFT;
} else {
val = (event[i] >> EVENT_THR_CTL_SHIFT) & EVENT_THR_CTL_MASK;
mmcra |= val << MMCRA_THR_CTL_SHIFT;
alloc_bootmem_cpumask_var(&of_spin_mask);
- /* Mark threads which are still spinning in hold loops. */
- if (cpu_has_feature(CPU_FTR_SMT)) {
- for_each_present_cpu(i) {
- if (cpu_thread_in_core(i) == 0)
- cpumask_set_cpu(i, of_spin_mask);
- }
- } else {
- cpumask_copy(of_spin_mask, cpu_present_mask);
+ /*
+ * Mark threads which are still spinning in hold loops
+ *
+ * We know prom_init will not have started them if RTAS supports
+ * query-cpu-stopped-state.
+ */
+ if (rtas_token("query-cpu-stopped-state") == RTAS_UNKNOWN_SERVICE) {
+ if (cpu_has_feature(CPU_FTR_SMT)) {
+ for_each_present_cpu(i) {
+ if (cpu_thread_in_core(i) == 0)
+ cpumask_set_cpu(i, of_spin_mask);
+ }
+ } else
+ cpumask_copy(of_spin_mask, cpu_present_mask);
+
+ cpumask_clear_cpu(boot_cpuid, of_spin_mask);
}
- cpumask_clear_cpu(boot_cpuid, of_spin_mask);
-
/* Non-lpar has additional take/give timebase */
if (rtas_token("freeze-time-base") != RTAS_UNKNOWN_SERVICE) {
smp_ops->give_timebase = rtas_give_timebase;
select ARCH_INLINE_WRITE_UNLOCK_IRQ
select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE
select ARCH_SAVE_PAGE_KEYS if HIBERNATION
+ select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS2
select GENERIC_TIME_VSYSCALL_OLD
select HAVE_ALIGNED_STRUCT_PAGE if SLUB
select HAVE_ARCH_JUMP_LABEL if !MARCH_G5
- select HAVE_ARCH_MUTEX_CPU_RELAX
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE if 64BIT
static __always_inline bool arch_static_branch(struct static_key *key)
{
- asm goto("0: brcl 0,0\n"
+ asm_volatile_goto("0: brcl 0,0\n"
".pushsection __jump_table, \"aw\"\n"
ASM_ALIGN "\n"
ASM_PTR " 0b, %l[label], %0\n"
*/
#include <asm-generic/mutex-dec.h>
-
-#define arch_mutex_cpu_relax() barrier()
static inline void pgste_set_pte(pte_t *ptep, pte_t entry)
{
- if (!MACHINE_HAS_ESOP && (pte_val(entry) & _PAGE_WRITE)) {
+ if (!MACHINE_HAS_ESOP &&
+ (pte_val(entry) & _PAGE_PRESENT) &&
+ (pte_val(entry) & _PAGE_WRITE)) {
/*
* Without enhanced suppression-on-protection force
* the dirty bit on for all writable ptes.
barrier();
}
+#define arch_mutex_cpu_relax() barrier()
+
static inline void psw_set_key(unsigned int key)
{
asm volatile("spka 0(%0)" : : "d" (key));
extern int arch_spin_trylock_retry(arch_spinlock_t *);
extern void arch_spin_relax(arch_spinlock_t *lock);
+static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+ return lock.owner_cpu == 0;
+}
+
static inline void arch_spin_lock(arch_spinlock_t *lp)
{
int old;
typedef unsigned long long cycles_t;
-static inline unsigned long long get_tod_clock(void)
-{
- unsigned long long clk;
-
-#ifdef CONFIG_HAVE_MARCH_Z9_109_FEATURES
- asm volatile(".insn s,0xb27c0000,%0" : "=Q" (clk) : : "cc");
-#else
- asm volatile("stck %0" : "=Q" (clk) : : "cc");
-#endif
- return clk;
-}
-
static inline void get_tod_clock_ext(char *clk)
{
asm volatile("stcke %0" : "=Q" (*clk) : : "cc");
}
-static inline unsigned long long get_tod_clock_xt(void)
+static inline unsigned long long get_tod_clock(void)
{
unsigned char clk[16];
get_tod_clock_ext(clk);
return *((unsigned long long *)&clk[1]);
}
+static inline unsigned long long get_tod_clock_fast(void)
+{
+#ifdef CONFIG_HAVE_MARCH_Z9_109_FEATURES
+ unsigned long long clk;
+
+ asm volatile("stckf %0" : "=Q" (clk) : : "cc");
+ return clk;
+#else
+ return get_tod_clock();
+#endif
+}
+
static inline cycles_t get_cycles(void)
{
return (cycles_t) get_tod_clock() >> 2;
*/
static inline unsigned long long get_tod_clock_monotonic(void)
{
- return get_tod_clock_xt() - sched_clock_base_cc;
+ return get_tod_clock() - sched_clock_base_cc;
}
/**
break;
}
}
- return err;
+ return err ? -EFAULT : 0;
}
int copy_siginfo_from_user32(siginfo_t *to, compat_siginfo_t __user *from)
break;
}
}
- return err;
+ return err ? -EFAULT : 0;
}
static int save_sigregs32(struct pt_regs *regs, _sigregs32 __user *sregs)
}
/*
- * Copy up to one page to vmalloc or real memory
+ * Copy from real memory to virtual or real memory
*/
-static ssize_t copy_page_real(void *buf, void *src, size_t csize)
+static int copy_from_realmem(void *dest, void *src, size_t count)
{
- size_t size;
+ unsigned long size;
+ int rc;
- if (is_vmalloc_addr(buf)) {
- BUG_ON(csize >= PAGE_SIZE);
- /* If buf is not page aligned, copy first part */
- size = min(roundup(__pa(buf), PAGE_SIZE) - __pa(buf), csize);
- if (size) {
- if (memcpy_real(load_real_addr(buf), src, size))
- return -EFAULT;
- buf += size;
- src += size;
- }
- /* Copy second part */
- size = csize - size;
- return (size) ? memcpy_real(load_real_addr(buf), src, size) : 0;
- } else {
- return memcpy_real(buf, src, csize);
- }
+ if (!count)
+ return 0;
+ if (!is_vmalloc_or_module_addr(dest))
+ return memcpy_real(dest, src, count);
+ do {
+ size = min(count, PAGE_SIZE - (__pa(dest) & ~PAGE_MASK));
+ if (memcpy_real(load_real_addr(dest), src, size))
+ return -EFAULT;
+ count -= size;
+ dest += size;
+ src += size;
+ } while (count);
+ return 0;
}
/*
rc = copy_to_user_real((void __force __user *) buf,
(void *) src, csize);
else
- rc = copy_page_real(buf, (void *) src, csize);
+ rc = copy_from_realmem(buf, (void *) src, csize);
return (rc == 0) ? rc : csize;
}
if (OLDMEM_BASE) {
if ((unsigned long) src < OLDMEM_SIZE) {
copied = min(count, OLDMEM_SIZE - (unsigned long) src);
- rc = memcpy_real(dest, src + OLDMEM_BASE, copied);
+ rc = copy_from_realmem(dest, src + OLDMEM_BASE, copied);
if (rc)
return rc;
}
return rc;
}
}
- return memcpy_real(dest + copied, src + copied, count - copied);
+ return copy_from_realmem(dest + copied, src + copied, count - copied);
}
/*
debug_finish_entry(debug_info_t * id, debug_entry_t* active, int level,
int exception)
{
- active->id.stck = get_tod_clock();
+ active->id.stck = get_tod_clock_fast();
active->id.fields.cpuid = smp_processor_id();
active->caller = __builtin_return_address(0);
active->id.fields.exception = exception;
tm __TI_flags+3(%r12),_TIF_SYSCALL
jno sysc_return
lm %r2,%r7,__PT_R2(%r11) # load svc arguments
+ l %r10,__TI_sysc_table(%r12) # 31 bit system call table
xr %r8,%r8 # svc 0 returns -ENOSYS
clc __PT_INT_CODE+2(2,%r11),BASED(.Lnr_syscalls+2)
jnl sysc_nr_ok # invalid svc number -> do svc 0
tm __TI_flags+7(%r12),_TIF_SYSCALL
jno sysc_return
lmg %r2,%r7,__PT_R2(%r11) # load svc arguments
+ lg %r10,__TI_sysc_table(%r12) # address of system call table
lghi %r8,0 # svc 0 returns -ENOSYS
llgh %r1,__PT_INT_CODE+2(%r11) # load new svc number
cghi %r1,NR_syscalls
case 0xac: /* stnsm */
case 0xad: /* stosm */
return -EINVAL;
+ case 0xc6:
+ switch (insn[0] & 0x0f) {
+ case 0x00: /* exrl */
+ return -EINVAL;
+ }
}
switch (insn[0]) {
case 0x0101: /* pr */
break;
case 0xc6:
switch (insn[0] & 0x0f) {
- case 0x00: /* exrl */
case 0x02: /* pfdrl */
case 0x04: /* cghrl */
case 0x05: /* chrl */
}
if ((!rc) && (vcpu->arch.sie_block->ckc <
- get_tod_clock() + vcpu->arch.sie_block->epoch)) {
+ get_tod_clock_fast() + vcpu->arch.sie_block->epoch)) {
if ((!psw_extint_disabled(vcpu)) &&
(vcpu->arch.sie_block->gcr[0] & 0x800ul))
rc = 1;
goto no_timer;
}
- now = get_tod_clock() + vcpu->arch.sie_block->epoch;
+ now = get_tod_clock_fast() + vcpu->arch.sie_block->epoch;
if (vcpu->arch.sie_block->ckc < now) {
__unset_cpu_idle(vcpu);
return 0;
}
if ((vcpu->arch.sie_block->ckc <
- get_tod_clock() + vcpu->arch.sie_block->epoch))
+ get_tod_clock_fast() + vcpu->arch.sie_block->epoch))
__try_deliver_ckc_interrupt(vcpu);
if (atomic_read(&fi->active)) {
do {
set_clock_comparator(end);
vtime_stop_cpu();
- } while (get_tod_clock() < end);
+ } while (get_tod_clock_fast() < end);
lockdep_on();
__ctl_load(cr0, 0, 0);
__ctl_load(cr6, 6, 6);
{
u64 clock_saved, end;
- end = get_tod_clock() + (usecs << 12);
+ end = get_tod_clock_fast() + (usecs << 12);
do {
clock_saved = 0;
if (end < S390_lowcore.clock_comparator) {
vtime_stop_cpu();
if (clock_saved)
local_tick_enable(clock_saved);
- } while (get_tod_clock() < end);
+ } while (get_tod_clock_fast() < end);
}
/*
{
u64 end;
- end = get_tod_clock() + (usecs << 12);
- while (get_tod_clock() < end)
+ end = get_tod_clock_fast() + (usecs << 12);
+ while (get_tod_clock_fast() < end)
cpu_relax();
}
nsecs <<= 9;
do_div(nsecs, 125);
- end = get_tod_clock() + nsecs;
+ end = get_tod_clock_fast() + nsecs;
if (nsecs & ~0xfffUL)
__udelay(nsecs >> 12);
- while (get_tod_clock() < end)
+ while (get_tod_clock_fast() < end)
barrier();
}
EXPORT_SYMBOL(__ndelay);
struct bpf_binary_header *header = (void *)addr;
if (fp->bpf_func == sk_run_filter)
- return;
+ goto free_filter;
set_memory_rw(addr, header->pages);
module_free(NULL, header);
+free_filter:
+ kfree(fp);
}
config SCORE
def_bool y
+ select HAVE_GENERIC_HARDIRQS
select GENERIC_IRQ_SHOW
select GENERIC_IOMAP
select GENERIC_ATOMIC64
source "crypto/Kconfig"
source "lib/Kconfig"
+
+config NO_IOMEM
+ def_bool y
#
KBUILD_AFLAGS += $(cflags-y)
KBUILD_CFLAGS += $(cflags-y)
-KBUILD_AFLAGS_MODULE += -mlong-calls
-KBUILD_CFLAGS_MODULE += -mlong-calls
+KBUILD_AFLAGS_MODULE +=
+KBUILD_CFLAGS_MODULE +=
LDFLAGS += --oformat elf32-littlescore
LDFLAGS_vmlinux += -G0 -static -nostdlib
__wsum sum)
{
__asm__ __volatile__(
- ".set\tnoreorder\t\t\t# csum_ipv6_magic\n\t"
- ".set\tnoat\n\t"
- "addu\t%0, %5\t\t\t# proto (long in network byte order)\n\t"
- "sltu\t$1, %0, %5\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %6\t\t\t# csum\n\t"
- "sltu\t$1, %0, %6\n\t"
- "lw\t%1, 0(%2)\t\t\t# four words source address\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 4(%2)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 8(%2)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 12(%2)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 0(%3)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 4(%3)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 8(%3)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "lw\t%1, 12(%3)\n\t"
- "addu\t%0, $1\n\t"
- "addu\t%0, %1\n\t"
- "sltu\t$1, %0, %1\n\t"
- "addu\t%0, $1\t\t\t# Add final carry\n\t"
- ".set\tnoat\n\t"
- ".set\tnoreorder"
+ ".set\tvolatile\t\t\t# csum_ipv6_magic\n\t"
+ "add\t%0, %0, %5\t\t\t# proto (long in network byte order)\n\t"
+ "cmp.c\t%5, %0\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %6\t\t\t# csum\n\t"
+ "cmp.c\t%6, %0\n\t"
+ "lw\t%1, [%2, 0]\t\t\t# four words source address\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "1:lw\t%1, [%2, 4]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%2,8]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%2, 12]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0,%1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%3, 0]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%3, 4]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%3, 8]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "lw\t%1, [%3, 12]\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:add\t%0, %0, %1\n\t"
+ "cmp.c\t%1, %0\n\t"
+ "bleu 1f\n\t"
+ "addi\t%0, 0x1\n\t"
+ "1:\n\t"
+ ".set\toptimize"
: "=r" (sum), "=r" (proto)
: "r" (saddr), "r" (daddr),
"0" (htonl(len)), "1" (htonl(proto)), "r" (sum));
#define virt_to_bus virt_to_phys
#define bus_to_virt phys_to_virt
-
#endif /* _ASM_SCORE_IO_H */
#define _ASM_SCORE_PGALLOC_H
#include <linux/mm.h>
-
+#include <linux/highmem.h>
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
{
disable_irq
lw r8, [r28, TI_PRE_COUNT]
cmpz.c r8
- bne r8, restore_all
+ bne restore_all
need_resched:
lw r8, [r28, TI_FLAGS]
andri.c r9, r8, _TIF_NEED_RESCHED
sw r9, [r0, PT_EPC]
cmpi.c r27, __NR_syscalls # check syscall number
- bgeu illegal_syscall
+ bcs illegal_syscall
slli r8, r27, 2 # get syscall routine
la r11, sys_call_table
p->thread.reg0 = (unsigned long) childregs;
if (unlikely(p->flags & PF_KTHREAD)) {
memset(childregs, 0, sizeof(struct pt_regs));
- p->thread->reg12 = usp;
- p->thread->reg13 = arg;
+ p->thread.reg12 = usp;
+ p->thread.reg13 = arg;
p->thread.reg3 = (unsigned long) ret_from_kernel_thread;
} else {
*childregs = *current_pt_regs();
Only choose N if you know in advance that you will not need to modify
OpenPROM settings on the running system.
-# Makefile helper
+# Makefile helpers
config SPARC64_PCI
bool
default y
depends on SPARC64 && PCI
+config SPARC64_PCI_MSI
+ bool
+ default y
+ depends on SPARC64_PCI && PCI_MSI
+
endmenu
menu "Executable file formats"
once = 1;
error = request_irq(FLOPPY_IRQ, sparc_floppy_irq,
- IRQF_DISABLED, "floppy", NULL);
+ 0, "floppy", NULL);
return ((error == 0) ? 0 : -1);
}
static __always_inline bool arch_static_branch(struct static_key *key)
{
- asm goto("1:\n\t"
+ asm_volatile_goto("1:\n\t"
"nop\n\t"
"nop\n\t"
".pushsection __jump_table, \"aw\"\n\t"
+
#
# Makefile for the linux kernel.
#
obj-$(CONFIG_SPARC64_PCI) += pci.o pci_common.o psycho_common.o
obj-$(CONFIG_SPARC64_PCI) += pci_psycho.o pci_sabre.o pci_schizo.o
obj-$(CONFIG_SPARC64_PCI) += pci_sun4v.o pci_sun4v_asm.o pci_fire.o
-obj-$(CONFIG_PCI_MSI) += pci_msi.o
+obj-$(CONFIG_SPARC64_PCI_MSI) += pci_msi.o
obj-$(CONFIG_COMPAT) += sys32.o sys_sparc32.o signal32.o
if (boot_command && strlen(boot_command)) {
unsigned long len;
- strcpy(full_boot_str, "boot ");
- strlcpy(full_boot_str + strlen("boot "), boot_command,
- sizeof(full_boot_str + strlen("boot ")));
+ snprintf(full_boot_str, sizeof(full_boot_str), "boot %s",
+ boot_command);
len = strlen(full_boot_str);
if (reboot_data_supported) {
snprintf(lp->rx_irq_name, LDC_IRQ_NAME_MAX, "%s RX", name);
snprintf(lp->tx_irq_name, LDC_IRQ_NAME_MAX, "%s TX", name);
- err = request_irq(lp->cfg.rx_irq, ldc_rx, IRQF_DISABLED,
+ err = request_irq(lp->cfg.rx_irq, ldc_rx, 0,
lp->rx_irq_name, lp);
if (err)
return err;
- err = request_irq(lp->cfg.tx_irq, ldc_tx, IRQF_DISABLED,
+ err = request_irq(lp->cfg.tx_irq, ldc_tx, 0,
lp->tx_irq_name, lp);
if (err) {
free_irq(lp->cfg.rx_irq, lp);
{
if (fp->bpf_func != sk_run_filter)
module_free(NULL, fp->bpf_func);
+ kfree(fp);
}
config VMALLOC_RESERVE
hex
- default 0x1000000
+ default 0x2000000
config HARDWALL
bool "Hardwall support to allow access to user dynamic network"
unsigned int flags;
};
-int gxio_mpipe_alloc_buffer_stacks(gxio_mpipe_context_t * context,
+int gxio_mpipe_alloc_buffer_stacks(gxio_mpipe_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags)
{
unsigned int buffer_size_enum;
};
-int gxio_mpipe_init_buffer_stack_aux(gxio_mpipe_context_t * context,
+int gxio_mpipe_init_buffer_stack_aux(gxio_mpipe_context_t *context,
void *mem_va, size_t mem_size,
unsigned int mem_flags, unsigned int stack,
unsigned int buffer_size_enum)
unsigned int flags;
};
-int gxio_mpipe_alloc_notif_rings(gxio_mpipe_context_t * context,
+int gxio_mpipe_alloc_notif_rings(gxio_mpipe_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags)
{
unsigned int ring;
};
-int gxio_mpipe_init_notif_ring_aux(gxio_mpipe_context_t * context, void *mem_va,
+int gxio_mpipe_init_notif_ring_aux(gxio_mpipe_context_t *context, void *mem_va,
size_t mem_size, unsigned int mem_flags,
unsigned int ring)
{
unsigned int ring;
};
-int gxio_mpipe_request_notif_ring_interrupt(gxio_mpipe_context_t * context,
+int gxio_mpipe_request_notif_ring_interrupt(gxio_mpipe_context_t *context,
int inter_x, int inter_y,
int inter_ipi, int inter_event,
unsigned int ring)
unsigned int ring;
};
-int gxio_mpipe_enable_notif_ring_interrupt(gxio_mpipe_context_t * context,
+int gxio_mpipe_enable_notif_ring_interrupt(gxio_mpipe_context_t *context,
unsigned int ring)
{
struct enable_notif_ring_interrupt_param temp;
unsigned int flags;
};
-int gxio_mpipe_alloc_notif_groups(gxio_mpipe_context_t * context,
+int gxio_mpipe_alloc_notif_groups(gxio_mpipe_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags)
{
gxio_mpipe_notif_group_bits_t bits;
};
-int gxio_mpipe_init_notif_group(gxio_mpipe_context_t * context,
+int gxio_mpipe_init_notif_group(gxio_mpipe_context_t *context,
unsigned int group,
gxio_mpipe_notif_group_bits_t bits)
{
unsigned int flags;
};
-int gxio_mpipe_alloc_buckets(gxio_mpipe_context_t * context, unsigned int count,
+int gxio_mpipe_alloc_buckets(gxio_mpipe_context_t *context, unsigned int count,
unsigned int first, unsigned int flags)
{
struct alloc_buckets_param temp;
MPIPE_LBL_INIT_DAT_BSTS_TBL_t bucket_info;
};
-int gxio_mpipe_init_bucket(gxio_mpipe_context_t * context, unsigned int bucket,
+int gxio_mpipe_init_bucket(gxio_mpipe_context_t *context, unsigned int bucket,
MPIPE_LBL_INIT_DAT_BSTS_TBL_t bucket_info)
{
struct init_bucket_param temp;
unsigned int flags;
};
-int gxio_mpipe_alloc_edma_rings(gxio_mpipe_context_t * context,
+int gxio_mpipe_alloc_edma_rings(gxio_mpipe_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags)
{
unsigned int channel;
};
-int gxio_mpipe_init_edma_ring_aux(gxio_mpipe_context_t * context, void *mem_va,
+int gxio_mpipe_init_edma_ring_aux(gxio_mpipe_context_t *context, void *mem_va,
size_t mem_size, unsigned int mem_flags,
unsigned int ring, unsigned int channel)
{
EXPORT_SYMBOL(gxio_mpipe_init_edma_ring_aux);
-int gxio_mpipe_commit_rules(gxio_mpipe_context_t * context, const void *blob,
+int gxio_mpipe_commit_rules(gxio_mpipe_context_t *context, const void *blob,
size_t blob_size)
{
const void *params = blob;
unsigned int flags;
};
-int gxio_mpipe_register_client_memory(gxio_mpipe_context_t * context,
+int gxio_mpipe_register_client_memory(gxio_mpipe_context_t *context,
unsigned int iotlb, HV_PTE pte,
unsigned int flags)
{
unsigned int flags;
};
-int gxio_mpipe_link_open_aux(gxio_mpipe_context_t * context,
+int gxio_mpipe_link_open_aux(gxio_mpipe_context_t *context,
_gxio_mpipe_link_name_t name, unsigned int flags)
{
struct link_open_aux_param temp;
int mac;
};
-int gxio_mpipe_link_close_aux(gxio_mpipe_context_t * context, int mac)
+int gxio_mpipe_link_close_aux(gxio_mpipe_context_t *context, int mac)
{
struct link_close_aux_param temp;
struct link_close_aux_param *params = &temp;
int64_t val;
};
-int gxio_mpipe_link_set_attr_aux(gxio_mpipe_context_t * context, int mac,
+int gxio_mpipe_link_set_attr_aux(gxio_mpipe_context_t *context, int mac,
uint32_t attr, int64_t val)
{
struct link_set_attr_aux_param temp;
uint64_t cycles;
};
-int gxio_mpipe_get_timestamp_aux(gxio_mpipe_context_t * context, uint64_t * sec,
- uint64_t * nsec, uint64_t * cycles)
+int gxio_mpipe_get_timestamp_aux(gxio_mpipe_context_t *context, uint64_t *sec,
+ uint64_t *nsec, uint64_t *cycles)
{
int __result;
struct get_timestamp_aux_param temp;
uint64_t cycles;
};
-int gxio_mpipe_set_timestamp_aux(gxio_mpipe_context_t * context, uint64_t sec,
+int gxio_mpipe_set_timestamp_aux(gxio_mpipe_context_t *context, uint64_t sec,
uint64_t nsec, uint64_t cycles)
{
struct set_timestamp_aux_param temp;
int64_t nsec;
};
-int gxio_mpipe_adjust_timestamp_aux(gxio_mpipe_context_t * context,
- int64_t nsec)
+int gxio_mpipe_adjust_timestamp_aux(gxio_mpipe_context_t *context, int64_t nsec)
{
struct adjust_timestamp_aux_param temp;
struct adjust_timestamp_aux_param *params = &temp;
EXPORT_SYMBOL(gxio_mpipe_adjust_timestamp_aux);
-struct adjust_timestamp_freq_param {
- int32_t ppb;
-};
-
-int gxio_mpipe_adjust_timestamp_freq(gxio_mpipe_context_t * context,
- int32_t ppb)
-{
- struct adjust_timestamp_freq_param temp;
- struct adjust_timestamp_freq_param *params = &temp;
-
- params->ppb = ppb;
-
- return hv_dev_pwrite(context->fd, 0, (HV_VirtAddr) params,
- sizeof(*params),
- GXIO_MPIPE_OP_ADJUST_TIMESTAMP_FREQ);
-}
-
-EXPORT_SYMBOL(gxio_mpipe_adjust_timestamp_freq);
-
struct config_edma_ring_blks_param {
unsigned int ering;
unsigned int max_blks;
unsigned int db;
};
-int gxio_mpipe_config_edma_ring_blks(gxio_mpipe_context_t * context,
+int gxio_mpipe_config_edma_ring_blks(gxio_mpipe_context_t *context,
unsigned int ering, unsigned int max_blks,
unsigned int min_snf_blks, unsigned int db)
{
EXPORT_SYMBOL(gxio_mpipe_config_edma_ring_blks);
+struct adjust_timestamp_freq_param {
+ int32_t ppb;
+};
+
+int gxio_mpipe_adjust_timestamp_freq(gxio_mpipe_context_t *context, int32_t ppb)
+{
+ struct adjust_timestamp_freq_param temp;
+ struct adjust_timestamp_freq_param *params = &temp;
+
+ params->ppb = ppb;
+
+ return hv_dev_pwrite(context->fd, 0, (HV_VirtAddr) params,
+ sizeof(*params),
+ GXIO_MPIPE_OP_ADJUST_TIMESTAMP_FREQ);
+}
+
+EXPORT_SYMBOL(gxio_mpipe_adjust_timestamp_freq);
+
struct arm_pollfd_param {
union iorpc_pollfd pollfd;
};
-int gxio_mpipe_arm_pollfd(gxio_mpipe_context_t * context, int pollfd_cookie)
+int gxio_mpipe_arm_pollfd(gxio_mpipe_context_t *context, int pollfd_cookie)
{
struct arm_pollfd_param temp;
struct arm_pollfd_param *params = &temp;
union iorpc_pollfd pollfd;
};
-int gxio_mpipe_close_pollfd(gxio_mpipe_context_t * context, int pollfd_cookie)
+int gxio_mpipe_close_pollfd(gxio_mpipe_context_t *context, int pollfd_cookie)
{
struct close_pollfd_param temp;
struct close_pollfd_param *params = &temp;
HV_PTE base;
};
-int gxio_mpipe_get_mmio_base(gxio_mpipe_context_t * context, HV_PTE *base)
+int gxio_mpipe_get_mmio_base(gxio_mpipe_context_t *context, HV_PTE *base)
{
int __result;
struct get_mmio_base_param temp;
unsigned long size;
};
-int gxio_mpipe_check_mmio_offset(gxio_mpipe_context_t * context,
+int gxio_mpipe_check_mmio_offset(gxio_mpipe_context_t *context,
unsigned long offset, unsigned long size)
{
struct check_mmio_offset_param temp;
/* This file is machine-generated; DO NOT EDIT! */
#include "gxio/iorpc_mpipe_info.h"
-
struct instance_aux_param {
_gxio_mpipe_link_name_t name;
};
-int gxio_mpipe_info_instance_aux(gxio_mpipe_info_context_t * context,
+int gxio_mpipe_info_instance_aux(gxio_mpipe_info_context_t *context,
_gxio_mpipe_link_name_t name)
{
struct instance_aux_param temp;
_gxio_mpipe_link_mac_t mac;
};
-int gxio_mpipe_info_enumerate_aux(gxio_mpipe_info_context_t * context,
+int gxio_mpipe_info_enumerate_aux(gxio_mpipe_info_context_t *context,
unsigned int idx,
- _gxio_mpipe_link_name_t * name,
- _gxio_mpipe_link_mac_t * mac)
+ _gxio_mpipe_link_name_t *name,
+ _gxio_mpipe_link_mac_t *mac)
{
int __result;
struct enumerate_aux_param temp;
__result =
hv_dev_pread(context->fd, 0, (HV_VirtAddr) params, sizeof(*params),
- (((uint64_t) idx << 32) |
+ (((uint64_t)idx << 32) |
GXIO_MPIPE_INFO_OP_ENUMERATE_AUX));
*name = params->name;
*mac = params->mac;
HV_PTE base;
};
-int gxio_mpipe_info_get_mmio_base(gxio_mpipe_info_context_t * context,
+int gxio_mpipe_info_get_mmio_base(gxio_mpipe_info_context_t *context,
HV_PTE *base)
{
int __result;
unsigned long size;
};
-int gxio_mpipe_info_check_mmio_offset(gxio_mpipe_info_context_t * context,
+int gxio_mpipe_info_check_mmio_offset(gxio_mpipe_info_context_t *context,
unsigned long offset, unsigned long size)
{
struct check_mmio_offset_param temp;
unsigned int flags;
};
-int gxio_trio_alloc_asids(gxio_trio_context_t * context, unsigned int count,
+int gxio_trio_alloc_asids(gxio_trio_context_t *context, unsigned int count,
unsigned int first, unsigned int flags)
{
struct alloc_asids_param temp;
unsigned int flags;
};
-int gxio_trio_alloc_memory_maps(gxio_trio_context_t * context,
+int gxio_trio_alloc_memory_maps(gxio_trio_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags)
{
unsigned int flags;
};
-int gxio_trio_alloc_scatter_queues(gxio_trio_context_t * context,
+int gxio_trio_alloc_scatter_queues(gxio_trio_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags)
{
unsigned int flags;
};
-int gxio_trio_alloc_pio_regions(gxio_trio_context_t * context,
+int gxio_trio_alloc_pio_regions(gxio_trio_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags)
{
unsigned int flags;
};
-int gxio_trio_init_pio_region_aux(gxio_trio_context_t * context,
+int gxio_trio_init_pio_region_aux(gxio_trio_context_t *context,
unsigned int pio_region, unsigned int mac,
uint32_t bus_address_hi, unsigned int flags)
{
unsigned int order_mode;
};
-int gxio_trio_init_memory_map_mmu_aux(gxio_trio_context_t * context,
+int gxio_trio_init_memory_map_mmu_aux(gxio_trio_context_t *context,
unsigned int map, unsigned long va,
uint64_t size, unsigned int asid,
unsigned int mac, uint64_t bus_address,
struct pcie_trio_ports_property trio_ports;
};
-int gxio_trio_get_port_property(gxio_trio_context_t * context,
+int gxio_trio_get_port_property(gxio_trio_context_t *context,
struct pcie_trio_ports_property *trio_ports)
{
int __result;
unsigned int intx;
};
-int gxio_trio_config_legacy_intr(gxio_trio_context_t * context, int inter_x,
+int gxio_trio_config_legacy_intr(gxio_trio_context_t *context, int inter_x,
int inter_y, int inter_ipi, int inter_event,
unsigned int mac, unsigned int intx)
{
unsigned int asid;
};
-int gxio_trio_config_msi_intr(gxio_trio_context_t * context, int inter_x,
+int gxio_trio_config_msi_intr(gxio_trio_context_t *context, int inter_x,
int inter_y, int inter_ipi, int inter_event,
unsigned int mac, unsigned int mem_map,
uint64_t mem_map_base, uint64_t mem_map_limit,
unsigned int mac;
};
-int gxio_trio_set_mps_mrs(gxio_trio_context_t * context, uint16_t mps,
+int gxio_trio_set_mps_mrs(gxio_trio_context_t *context, uint16_t mps,
uint16_t mrs, unsigned int mac)
{
struct set_mps_mrs_param temp;
unsigned int mac;
};
-int gxio_trio_force_rc_link_up(gxio_trio_context_t * context, unsigned int mac)
+int gxio_trio_force_rc_link_up(gxio_trio_context_t *context, unsigned int mac)
{
struct force_rc_link_up_param temp;
struct force_rc_link_up_param *params = &temp;
unsigned int mac;
};
-int gxio_trio_force_ep_link_up(gxio_trio_context_t * context, unsigned int mac)
+int gxio_trio_force_ep_link_up(gxio_trio_context_t *context, unsigned int mac)
{
struct force_ep_link_up_param temp;
struct force_ep_link_up_param *params = &temp;
HV_PTE base;
};
-int gxio_trio_get_mmio_base(gxio_trio_context_t * context, HV_PTE *base)
+int gxio_trio_get_mmio_base(gxio_trio_context_t *context, HV_PTE *base)
{
int __result;
struct get_mmio_base_param temp;
unsigned long size;
};
-int gxio_trio_check_mmio_offset(gxio_trio_context_t * context,
+int gxio_trio_check_mmio_offset(gxio_trio_context_t *context,
unsigned long offset, unsigned long size)
{
struct check_mmio_offset_param temp;
union iorpc_interrupt interrupt;
};
-int gxio_usb_host_cfg_interrupt(gxio_usb_host_context_t * context, int inter_x,
+int gxio_usb_host_cfg_interrupt(gxio_usb_host_context_t *context, int inter_x,
int inter_y, int inter_ipi, int inter_event)
{
struct cfg_interrupt_param temp;
unsigned int flags;
};
-int gxio_usb_host_register_client_memory(gxio_usb_host_context_t * context,
+int gxio_usb_host_register_client_memory(gxio_usb_host_context_t *context,
HV_PTE pte, unsigned int flags)
{
struct register_client_memory_param temp;
HV_PTE base;
};
-int gxio_usb_host_get_mmio_base(gxio_usb_host_context_t * context, HV_PTE *base)
+int gxio_usb_host_get_mmio_base(gxio_usb_host_context_t *context, HV_PTE *base)
{
int __result;
struct get_mmio_base_param temp;
unsigned long size;
};
-int gxio_usb_host_check_mmio_offset(gxio_usb_host_context_t * context,
+int gxio_usb_host_check_mmio_offset(gxio_usb_host_context_t *context,
unsigned long offset, unsigned long size)
{
struct check_mmio_offset_param temp;
#include <gxio/kiorpc.h>
#include <gxio/usb_host.h>
-int gxio_usb_host_init(gxio_usb_host_context_t * context, int usb_index,
+int gxio_usb_host_init(gxio_usb_host_context_t *context, int usb_index,
int is_ehci)
{
char file[32];
EXPORT_SYMBOL_GPL(gxio_usb_host_init);
-int gxio_usb_host_destroy(gxio_usb_host_context_t * context)
+int gxio_usb_host_destroy(gxio_usb_host_context_t *context)
{
iounmap((void __force __iomem *)(context->mmio_base));
hv_dev_close(context->fd);
EXPORT_SYMBOL_GPL(gxio_usb_host_destroy);
-void *gxio_usb_host_get_reg_start(gxio_usb_host_context_t * context)
+void *gxio_usb_host_get_reg_start(gxio_usb_host_context_t *context)
{
return context->mmio_base;
}
EXPORT_SYMBOL_GPL(gxio_usb_host_get_reg_start);
-size_t gxio_usb_host_get_reg_len(gxio_usb_host_context_t * context)
+size_t gxio_usb_host_get_reg_len(gxio_usb_host_context_t *context)
{
return HV_USB_HOST_MMIO_SIZE;
}
*/
uint_reg_t stack_idx : 5;
/* Reserved. */
- uint_reg_t __reserved_2 : 5;
+ uint_reg_t __reserved_2 : 3;
+ /*
+ * Instance ID. For devices that support automatic buffer return between
+ * mPIPE instances, this field indicates the buffer owner. If the INST
+ * field does not match the mPIPE's instance number when a packet is
+ * egressed, buffers with HWB set will be returned to the other mPIPE
+ * instance. Note that not all devices support multi-mPIPE buffer
+ * return. The MPIPE_EDMA_INFO.REMOTE_BUFF_RTN_SUPPORT bit indicates
+ * whether the INST field in the buffer descriptor is populated by iDMA
+ * hardware. This field is ignored on writes.
+ */
+ uint_reg_t inst : 2;
/*
* Reads as one to indicate that this is a hardware managed buffer.
* Ignored on writes since all buffers on a given stack are the same size.
uint_reg_t c : 2;
uint_reg_t size : 3;
uint_reg_t hwb : 1;
- uint_reg_t __reserved_2 : 5;
+ uint_reg_t inst : 2;
+ uint_reg_t __reserved_2 : 3;
uint_reg_t stack_idx : 5;
uint_reg_t __reserved_1 : 6;
int_reg_t va : 35;
/* Reserved. */
uint_reg_t __reserved_0 : 3;
/* eDMA ring being accessed */
- uint_reg_t ring : 5;
+ uint_reg_t ring : 6;
/* Reserved. */
- uint_reg_t __reserved_1 : 18;
+ uint_reg_t __reserved_1 : 17;
/*
* This field of the address selects the region (address space) to be
* accessed. For the egress DMA post region, this field must be 5.
uint_reg_t svc_dom : 5;
uint_reg_t __reserved_2 : 6;
uint_reg_t region : 3;
- uint_reg_t __reserved_1 : 18;
- uint_reg_t ring : 5;
+ uint_reg_t __reserved_1 : 17;
+ uint_reg_t ring : 6;
uint_reg_t __reserved_0 : 3;
#endif
};
#ifndef __ARCH_MPIPE_CONSTANTS_H__
#define __ARCH_MPIPE_CONSTANTS_H__
-#define MPIPE_NUM_CLASSIFIERS 10
+#define MPIPE_NUM_CLASSIFIERS 16
#define MPIPE_CLS_MHZ 1200
-#define MPIPE_NUM_EDMA_RINGS 32
+#define MPIPE_NUM_EDMA_RINGS 64
#define MPIPE_NUM_SGMII_MACS 16
-#define MPIPE_NUM_XAUI_MACS 4
+#define MPIPE_NUM_XAUI_MACS 16
#define MPIPE_NUM_LOOPBACK_CHANNELS 4
#define MPIPE_NUM_NON_LB_CHANNELS 28
* descriptors toggles each time the ring tail pointer wraps.
*/
uint_reg_t gen : 1;
+ /**
+ * For devices with EDMA reorder support, this field allows the
+ * descriptor to select the egress FIFO. The associated DMA ring must
+ * have ALLOW_EFIFO_SEL enabled.
+ */
+ uint_reg_t efifo_sel : 6;
/** Reserved. Must be zero. */
- uint_reg_t r0 : 7;
+ uint_reg_t r0 : 1;
/** Checksum generation enabled for this transfer. */
uint_reg_t csum : 1;
/**
uint_reg_t notif : 1;
uint_reg_t ns : 1;
uint_reg_t csum : 1;
- uint_reg_t r0 : 7;
+ uint_reg_t r0 : 1;
+ uint_reg_t efifo_sel : 6;
uint_reg_t gen : 1;
#endif
/** Reserved. */
uint_reg_t __reserved_1 : 3;
/**
- * Instance ID. For devices that support more than one mPIPE instance,
- * this field indicates the buffer owner. If the INST field does not
- * match the mPIPE's instance number when a packet is egressed, buffers
- * with HWB set will be returned to the other mPIPE instance.
+ * Instance ID. For devices that support automatic buffer return between
+ * mPIPE instances, this field indicates the buffer owner. If the INST
+ * field does not match the mPIPE's instance number when a packet is
+ * egressed, buffers with HWB set will be returned to the other mPIPE
+ * instance. Note that not all devices support multi-mPIPE buffer
+ * return. The MPIPE_EDMA_INFO.REMOTE_BUFF_RTN_SUPPORT bit indicates
+ * whether the INST field in the buffer descriptor is populated by iDMA
+ * hardware.
*/
- uint_reg_t inst : 1;
- /** Reserved. */
- uint_reg_t __reserved_2 : 1;
+ uint_reg_t inst : 2;
/**
* Always set to one by hardware in iDMA packet descriptors. For eDMA,
* indicates whether the buffer will be released to the buffer stack
uint_reg_t c : 2;
uint_reg_t size : 3;
uint_reg_t hwb : 1;
- uint_reg_t __reserved_2 : 1;
- uint_reg_t inst : 1;
+ uint_reg_t inst : 2;
uint_reg_t __reserved_1 : 3;
uint_reg_t stack_idx : 5;
uint_reg_t __reserved_0 : 6;
/**
* Sequence number applied when packet is distributed. Classifier
* selects which sequence number is to be applied by writing the 13-bit
- * SQN-selector into this field.
+ * SQN-selector into this field. For devices that support EXT_SQN (as
+ * indicated in IDMA_INFO.EXT_SQN_SUPPORT), the GP_SQN can be extended to
+ * 32-bits via the IDMA_CTL.EXT_SQN register. In this case the
+ * PACKET_SQN will be reduced to 32 bits.
*/
uint_reg_t gp_sqn : 16;
/**
/** Reserved. */
uint_reg_t __reserved_5 : 3;
/**
- * Instance ID. For devices that support more than one mPIPE instance,
- * this field indicates the buffer owner. If the INST field does not
- * match the mPIPE's instance number when a packet is egressed, buffers
- * with HWB set will be returned to the other mPIPE instance.
+ * Instance ID. For devices that support automatic buffer return between
+ * mPIPE instances, this field indicates the buffer owner. If the INST
+ * field does not match the mPIPE's instance number when a packet is
+ * egressed, buffers with HWB set will be returned to the other mPIPE
+ * instance. Note that not all devices support multi-mPIPE buffer
+ * return. The MPIPE_EDMA_INFO.REMOTE_BUFF_RTN_SUPPORT bit indicates
+ * whether the INST field in the buffer descriptor is populated by iDMA
+ * hardware.
*/
- uint_reg_t inst : 1;
- /** Reserved. */
- uint_reg_t __reserved_6 : 1;
+ uint_reg_t inst : 2;
/**
* Always set to one by hardware in iDMA packet descriptors. For eDMA,
* indicates whether the buffer will be released to the buffer stack
uint_reg_t c : 2;
uint_reg_t size : 3;
uint_reg_t hwb : 1;
- uint_reg_t __reserved_6 : 1;
- uint_reg_t inst : 1;
+ uint_reg_t inst : 2;
uint_reg_t __reserved_5 : 3;
uint_reg_t stack_idx : 5;
uint_reg_t __reserved_4 : 6;
#ifndef __ARCH_TRIO_CONSTANTS_H__
#define __ARCH_TRIO_CONSTANTS_H__
-#define TRIO_NUM_ASIDS 16
+#define TRIO_NUM_ASIDS 32
#define TRIO_NUM_TLBS_PER_ASID 16
#define TRIO_NUM_TPIO_REGIONS 8
#define TRIO_LOG2_NUM_TPIO_REGIONS 3
-#define TRIO_NUM_MAP_MEM_REGIONS 16
-#define TRIO_LOG2_NUM_MAP_MEM_REGIONS 4
+#define TRIO_NUM_MAP_MEM_REGIONS 32
+#define TRIO_LOG2_NUM_MAP_MEM_REGIONS 5
#define TRIO_NUM_MAP_SQ_REGIONS 8
#define TRIO_LOG2_NUM_MAP_SQ_REGIONS 3
#define TRIO_LOG2_NUM_SQ_FIFO_ENTRIES 6
-#define TRIO_NUM_PUSH_DMA_RINGS 32
+#define TRIO_NUM_PUSH_DMA_RINGS 64
-#define TRIO_NUM_PULL_DMA_RINGS 32
+#define TRIO_NUM_PULL_DMA_RINGS 64
#endif /* __ARCH_TRIO_CONSTANTS_H__ */
*
* Atomically sets @v to @i and returns old @v
*/
-static inline u64 atomic64_xchg(atomic64_t *v, u64 n)
+static inline long long atomic64_xchg(atomic64_t *v, long long n)
{
return xchg64(&v->counter, n);
}
* Atomically checks if @v holds @o and replaces it with @n if so.
* Returns the old value at @v.
*/
-static inline u64 atomic64_cmpxchg(atomic64_t *v, u64 o, u64 n)
+static inline long long atomic64_cmpxchg(atomic64_t *v, long long o,
+ long long n)
{
return cmpxchg64(&v->counter, o, n);
}
/* A 64bit atomic type */
typedef struct {
- u64 __aligned(8) counter;
+ long long counter;
} atomic64_t;
#define ATOMIC64_INIT(val) { (val) }
*
* Atomically reads the value of @v.
*/
-static inline u64 atomic64_read(const atomic64_t *v)
+static inline long long atomic64_read(const atomic64_t *v)
{
/*
* Requires an atomic op to read both 32-bit parts consistently.
* Casting away const is safe since the atomic support routines
* do not write to memory if the value has not been modified.
*/
- return _atomic64_xchg_add((u64 *)&v->counter, 0);
+ return _atomic64_xchg_add((long long *)&v->counter, 0);
}
/**
*
* Atomically adds @i to @v.
*/
-static inline void atomic64_add(u64 i, atomic64_t *v)
+static inline void atomic64_add(long long i, atomic64_t *v)
{
_atomic64_xchg_add(&v->counter, i);
}
*
* Atomically adds @i to @v and returns @i + @v
*/
-static inline u64 atomic64_add_return(u64 i, atomic64_t *v)
+static inline long long atomic64_add_return(long long i, atomic64_t *v)
{
smp_mb(); /* barrier for proper semantics */
return _atomic64_xchg_add(&v->counter, i) + i;
* Atomically adds @a to @v, so long as @v was not already @u.
* Returns non-zero if @v was not @u, and zero otherwise.
*/
-static inline u64 atomic64_add_unless(atomic64_t *v, u64 a, u64 u)
+static inline long long atomic64_add_unless(atomic64_t *v, long long a,
+ long long u)
{
smp_mb(); /* barrier for proper semantics */
return _atomic64_xchg_add_unless(&v->counter, a, u) != u;
* atomic64_set() can't be just a raw store, since it would be lost if it
* fell between the load and store of one of the other atomic ops.
*/
-static inline void atomic64_set(atomic64_t *v, u64 n)
+static inline void atomic64_set(atomic64_t *v, long long n)
{
_atomic64_xchg(&v->counter, n);
}
extern struct __get_user __atomic_or(volatile int *p, int *lock, int n);
extern struct __get_user __atomic_andn(volatile int *p, int *lock, int n);
extern struct __get_user __atomic_xor(volatile int *p, int *lock, int n);
-extern u64 __atomic64_cmpxchg(volatile u64 *p, int *lock, u64 o, u64 n);
-extern u64 __atomic64_xchg(volatile u64 *p, int *lock, u64 n);
-extern u64 __atomic64_xchg_add(volatile u64 *p, int *lock, u64 n);
-extern u64 __atomic64_xchg_add_unless(volatile u64 *p,
- int *lock, u64 o, u64 n);
+extern long long __atomic64_cmpxchg(volatile long long *p, int *lock,
+ long long o, long long n);
+extern long long __atomic64_xchg(volatile long long *p, int *lock, long long n);
+extern long long __atomic64_xchg_add(volatile long long *p, int *lock,
+ long long n);
+extern long long __atomic64_xchg_add_unless(volatile long long *p,
+ int *lock, long long o, long long n);
/* Return failure from the atomic wrappers. */
struct __get_user __atomic_bad_address(int __user *addr);
int _atomic_xchg_add(int *v, int i);
int _atomic_xchg_add_unless(int *v, int a, int u);
int _atomic_cmpxchg(int *ptr, int o, int n);
-u64 _atomic64_xchg(u64 *v, u64 n);
-u64 _atomic64_xchg_add(u64 *v, u64 i);
-u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u);
-u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n);
+long long _atomic64_xchg(long long *v, long long n);
+long long _atomic64_xchg_add(long long *v, long long i);
+long long _atomic64_xchg_add_unless(long long *v, long long a, long long u);
+long long _atomic64_cmpxchg(long long *v, long long o, long long n);
#define xchg(ptr, n) \
({ \
if (sizeof(*(ptr)) != 4) \
__cmpxchg_called_with_bad_pointer(); \
smp_mb(); \
- (typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, (int)n); \
+ (typeof(*(ptr)))_atomic_cmpxchg((int *)ptr, (int)o, \
+ (int)n); \
})
#define xchg64(ptr, n) \
if (sizeof(*(ptr)) != 8) \
__xchg_called_with_bad_pointer(); \
smp_mb(); \
- (typeof(*(ptr)))_atomic64_xchg((u64 *)(ptr), (u64)(n)); \
+ (typeof(*(ptr)))_atomic64_xchg((long long *)(ptr), \
+ (long long)(n)); \
})
#define cmpxchg64(ptr, o, n) \
if (sizeof(*(ptr)) != 8) \
__cmpxchg_called_with_bad_pointer(); \
smp_mb(); \
- (typeof(*(ptr)))_atomic64_cmpxchg((u64 *)ptr, (u64)o, (u64)n); \
+ (typeof(*(ptr)))_atomic64_cmpxchg((long long *)ptr, \
+ (long long)o, (long long)n); \
})
#else
switch (sizeof(*(ptr))) { \
case 4: \
__x = (typeof(__x))(unsigned long) \
- __insn_exch4((ptr), (u32)(unsigned long)(n)); \
+ __insn_exch4((ptr), \
+ (u32)(unsigned long)(n)); \
break; \
case 8: \
- __x = (typeof(__x)) \
+ __x = (typeof(__x)) \
__insn_exch((ptr), (unsigned long)(n)); \
break; \
default: \
switch (sizeof(*(ptr))) { \
case 4: \
__x = (typeof(__x))(unsigned long) \
- __insn_cmpexch4((ptr), (u32)(unsigned long)(n)); \
+ __insn_cmpexch4((ptr), \
+ (u32)(unsigned long)(n)); \
break; \
case 8: \
- __x = (typeof(__x))__insn_cmpexch((ptr), (u64)(n)); \
+ __x = (typeof(__x))__insn_cmpexch((ptr), \
+ (long long)(n)); \
break; \
default: \
__cmpxchg_called_with_bad_pointer(); \
#define PAGE_OFFSET (-(_AC(1, UL) << (MAX_VA_WIDTH - 1)))
#define KERNEL_HIGH_VADDR _AC(0xfffffff800000000, UL) /* high 32GB */
-#define FIXADDR_BASE (KERNEL_HIGH_VADDR - 0x400000000) /* 4 GB */
-#define FIXADDR_TOP (KERNEL_HIGH_VADDR - 0x300000000) /* 4 GB */
+#define FIXADDR_BASE (KERNEL_HIGH_VADDR - 0x300000000) /* 4 GB */
+#define FIXADDR_TOP (KERNEL_HIGH_VADDR - 0x200000000) /* 4 GB */
#define _VMALLOC_START FIXADDR_TOP
-#define HUGE_VMAP_BASE (KERNEL_HIGH_VADDR - 0x200000000) /* 4 GB */
#define MEM_SV_START (KERNEL_HIGH_VADDR - 0x100000000) /* 256 MB */
#define MEM_MODULE_START (MEM_SV_START + (256*1024*1024)) /* 256 MB */
#define MEM_MODULE_END (MEM_MODULE_START + (256*1024*1024))
#ifndef _ASM_TILE_PERCPU_H
#define _ASM_TILE_PERCPU_H
-register unsigned long __my_cpu_offset __asm__("tp");
-#define __my_cpu_offset __my_cpu_offset
-#define set_my_cpu_offset(tp) (__my_cpu_offset = (tp))
+register unsigned long my_cpu_offset_reg asm("tp");
+
+#ifdef CONFIG_PREEMPT
+/*
+ * For full preemption, we can't just use the register variable
+ * directly, since we need barrier() to hazard against it, causing the
+ * compiler to reload anything computed from a previous "tp" value.
+ * But we also don't want to use volatile asm, since we'd like the
+ * compiler to be able to cache the value across multiple percpu reads.
+ * So we use a fake stack read as a hazard against barrier().
+ * The 'U' constraint is like 'm' but disallows postincrement.
+ */
+static inline unsigned long __my_cpu_offset(void)
+{
+ unsigned long tp;
+ register unsigned long *sp asm("sp");
+ asm("move %0, tp" : "=r" (tp) : "U" (*sp));
+ return tp;
+}
+#define __my_cpu_offset __my_cpu_offset()
+#else
+/*
+ * We don't need to hazard against barrier() since "tp" doesn't ever
+ * change with PREEMPT_NONE, and with PREEMPT_VOLUNTARY it only
+ * changes at function call points, at which we are already re-reading
+ * the value of "tp" due to "my_cpu_offset_reg" being a global variable.
+ */
+#define __my_cpu_offset my_cpu_offset_reg
+#endif
+
+#define set_my_cpu_offset(tp) (my_cpu_offset_reg = (tp))
#include <asm-generic/percpu.h>
#define PKMAP_BASE ((FIXADDR_BOOT_START - PAGE_SIZE*LAST_PKMAP) & PGDIR_MASK)
#ifdef CONFIG_HIGHMEM
-# define __VMAPPING_END (PKMAP_BASE & ~(HPAGE_SIZE-1))
+# define _VMALLOC_END (PKMAP_BASE & ~(HPAGE_SIZE-1))
#else
-# define __VMAPPING_END (FIXADDR_START & ~(HPAGE_SIZE-1))
-#endif
-
-#ifdef CONFIG_HUGEVMAP
-#define HUGE_VMAP_END __VMAPPING_END
-#define HUGE_VMAP_BASE (HUGE_VMAP_END - CONFIG_NR_HUGE_VMAPS * HPAGE_SIZE)
-#define _VMALLOC_END HUGE_VMAP_BASE
-#else
-#define _VMALLOC_END __VMAPPING_END
+# define _VMALLOC_END (FIXADDR_START & ~(HPAGE_SIZE-1))
#endif
/*
* memory allocation code). The vmalloc code puts in an internal
* guard page between each allocation.
*/
-#define _VMALLOC_END HUGE_VMAP_BASE
+#define _VMALLOC_END MEM_SV_START
#define VMALLOC_END _VMALLOC_END
#define VMALLOC_START _VMALLOC_START
-#define HUGE_VMAP_END (HUGE_VMAP_BASE + PGDIR_SIZE)
-
#ifndef __ASSEMBLY__
/* We have no pud since we are a three-level page table. */
#define GXIO_MPIPE_OP_GET_MMIO_BASE IORPC_OPCODE(IORPC_FORMAT_NONE_NOUSER, 0x8000)
#define GXIO_MPIPE_OP_CHECK_MMIO_OFFSET IORPC_OPCODE(IORPC_FORMAT_NONE_NOUSER, 0x8001)
-int gxio_mpipe_alloc_buffer_stacks(gxio_mpipe_context_t * context,
+int gxio_mpipe_alloc_buffer_stacks(gxio_mpipe_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags);
-int gxio_mpipe_init_buffer_stack_aux(gxio_mpipe_context_t * context,
+int gxio_mpipe_init_buffer_stack_aux(gxio_mpipe_context_t *context,
void *mem_va, size_t mem_size,
unsigned int mem_flags, unsigned int stack,
unsigned int buffer_size_enum);
-int gxio_mpipe_alloc_notif_rings(gxio_mpipe_context_t * context,
+int gxio_mpipe_alloc_notif_rings(gxio_mpipe_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags);
-int gxio_mpipe_init_notif_ring_aux(gxio_mpipe_context_t * context, void *mem_va,
+int gxio_mpipe_init_notif_ring_aux(gxio_mpipe_context_t *context, void *mem_va,
size_t mem_size, unsigned int mem_flags,
unsigned int ring);
-int gxio_mpipe_request_notif_ring_interrupt(gxio_mpipe_context_t * context,
+int gxio_mpipe_request_notif_ring_interrupt(gxio_mpipe_context_t *context,
int inter_x, int inter_y,
int inter_ipi, int inter_event,
unsigned int ring);
-int gxio_mpipe_enable_notif_ring_interrupt(gxio_mpipe_context_t * context,
+int gxio_mpipe_enable_notif_ring_interrupt(gxio_mpipe_context_t *context,
unsigned int ring);
-int gxio_mpipe_alloc_notif_groups(gxio_mpipe_context_t * context,
+int gxio_mpipe_alloc_notif_groups(gxio_mpipe_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags);
-int gxio_mpipe_init_notif_group(gxio_mpipe_context_t * context,
+int gxio_mpipe_init_notif_group(gxio_mpipe_context_t *context,
unsigned int group,
gxio_mpipe_notif_group_bits_t bits);
-int gxio_mpipe_alloc_buckets(gxio_mpipe_context_t * context, unsigned int count,
+int gxio_mpipe_alloc_buckets(gxio_mpipe_context_t *context, unsigned int count,
unsigned int first, unsigned int flags);
-int gxio_mpipe_init_bucket(gxio_mpipe_context_t * context, unsigned int bucket,
+int gxio_mpipe_init_bucket(gxio_mpipe_context_t *context, unsigned int bucket,
MPIPE_LBL_INIT_DAT_BSTS_TBL_t bucket_info);
-int gxio_mpipe_alloc_edma_rings(gxio_mpipe_context_t * context,
+int gxio_mpipe_alloc_edma_rings(gxio_mpipe_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags);
-int gxio_mpipe_init_edma_ring_aux(gxio_mpipe_context_t * context, void *mem_va,
+int gxio_mpipe_init_edma_ring_aux(gxio_mpipe_context_t *context, void *mem_va,
size_t mem_size, unsigned int mem_flags,
unsigned int ring, unsigned int channel);
-int gxio_mpipe_commit_rules(gxio_mpipe_context_t * context, const void *blob,
+int gxio_mpipe_commit_rules(gxio_mpipe_context_t *context, const void *blob,
size_t blob_size);
-int gxio_mpipe_register_client_memory(gxio_mpipe_context_t * context,
+int gxio_mpipe_register_client_memory(gxio_mpipe_context_t *context,
unsigned int iotlb, HV_PTE pte,
unsigned int flags);
-int gxio_mpipe_link_open_aux(gxio_mpipe_context_t * context,
+int gxio_mpipe_link_open_aux(gxio_mpipe_context_t *context,
_gxio_mpipe_link_name_t name, unsigned int flags);
-int gxio_mpipe_link_close_aux(gxio_mpipe_context_t * context, int mac);
+int gxio_mpipe_link_close_aux(gxio_mpipe_context_t *context, int mac);
-int gxio_mpipe_link_set_attr_aux(gxio_mpipe_context_t * context, int mac,
+int gxio_mpipe_link_set_attr_aux(gxio_mpipe_context_t *context, int mac,
uint32_t attr, int64_t val);
-int gxio_mpipe_get_timestamp_aux(gxio_mpipe_context_t * context, uint64_t * sec,
- uint64_t * nsec, uint64_t * cycles);
+int gxio_mpipe_get_timestamp_aux(gxio_mpipe_context_t *context, uint64_t *sec,
+ uint64_t *nsec, uint64_t *cycles);
-int gxio_mpipe_set_timestamp_aux(gxio_mpipe_context_t * context, uint64_t sec,
+int gxio_mpipe_set_timestamp_aux(gxio_mpipe_context_t *context, uint64_t sec,
uint64_t nsec, uint64_t cycles);
-int gxio_mpipe_adjust_timestamp_aux(gxio_mpipe_context_t * context,
+int gxio_mpipe_adjust_timestamp_aux(gxio_mpipe_context_t *context,
int64_t nsec);
-int gxio_mpipe_adjust_timestamp_freq(gxio_mpipe_context_t * context,
+int gxio_mpipe_adjust_timestamp_freq(gxio_mpipe_context_t *context,
int32_t ppb);
-int gxio_mpipe_arm_pollfd(gxio_mpipe_context_t * context, int pollfd_cookie);
+int gxio_mpipe_arm_pollfd(gxio_mpipe_context_t *context, int pollfd_cookie);
-int gxio_mpipe_close_pollfd(gxio_mpipe_context_t * context, int pollfd_cookie);
+int gxio_mpipe_close_pollfd(gxio_mpipe_context_t *context, int pollfd_cookie);
-int gxio_mpipe_get_mmio_base(gxio_mpipe_context_t * context, HV_PTE *base);
+int gxio_mpipe_get_mmio_base(gxio_mpipe_context_t *context, HV_PTE *base);
-int gxio_mpipe_check_mmio_offset(gxio_mpipe_context_t * context,
+int gxio_mpipe_check_mmio_offset(gxio_mpipe_context_t *context,
unsigned long offset, unsigned long size);
#endif /* !__GXIO_MPIPE_LINUX_RPC_H__ */
#define GXIO_MPIPE_INFO_OP_CHECK_MMIO_OFFSET IORPC_OPCODE(IORPC_FORMAT_NONE_NOUSER, 0x8001)
-int gxio_mpipe_info_instance_aux(gxio_mpipe_info_context_t * context,
+int gxio_mpipe_info_instance_aux(gxio_mpipe_info_context_t *context,
_gxio_mpipe_link_name_t name);
-int gxio_mpipe_info_enumerate_aux(gxio_mpipe_info_context_t * context,
+int gxio_mpipe_info_enumerate_aux(gxio_mpipe_info_context_t *context,
unsigned int idx,
- _gxio_mpipe_link_name_t * name,
- _gxio_mpipe_link_mac_t * mac);
+ _gxio_mpipe_link_name_t *name,
+ _gxio_mpipe_link_mac_t *mac);
-int gxio_mpipe_info_get_mmio_base(gxio_mpipe_info_context_t * context,
+int gxio_mpipe_info_get_mmio_base(gxio_mpipe_info_context_t *context,
HV_PTE *base);
-int gxio_mpipe_info_check_mmio_offset(gxio_mpipe_info_context_t * context,
+int gxio_mpipe_info_check_mmio_offset(gxio_mpipe_info_context_t *context,
unsigned long offset, unsigned long size);
#endif /* !__GXIO_MPIPE_INFO_LINUX_RPC_H__ */
#define GXIO_TRIO_OP_GET_MMIO_BASE IORPC_OPCODE(IORPC_FORMAT_NONE_NOUSER, 0x8000)
#define GXIO_TRIO_OP_CHECK_MMIO_OFFSET IORPC_OPCODE(IORPC_FORMAT_NONE_NOUSER, 0x8001)
-int gxio_trio_alloc_asids(gxio_trio_context_t * context, unsigned int count,
+int gxio_trio_alloc_asids(gxio_trio_context_t *context, unsigned int count,
unsigned int first, unsigned int flags);
-int gxio_trio_alloc_memory_maps(gxio_trio_context_t * context,
+int gxio_trio_alloc_memory_maps(gxio_trio_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags);
-int gxio_trio_alloc_scatter_queues(gxio_trio_context_t * context,
+int gxio_trio_alloc_scatter_queues(gxio_trio_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags);
-int gxio_trio_alloc_pio_regions(gxio_trio_context_t * context,
+int gxio_trio_alloc_pio_regions(gxio_trio_context_t *context,
unsigned int count, unsigned int first,
unsigned int flags);
-int gxio_trio_init_pio_region_aux(gxio_trio_context_t * context,
+int gxio_trio_init_pio_region_aux(gxio_trio_context_t *context,
unsigned int pio_region, unsigned int mac,
uint32_t bus_address_hi, unsigned int flags);
-int gxio_trio_init_memory_map_mmu_aux(gxio_trio_context_t * context,
+int gxio_trio_init_memory_map_mmu_aux(gxio_trio_context_t *context,
unsigned int map, unsigned long va,
uint64_t size, unsigned int asid,
unsigned int mac, uint64_t bus_address,
unsigned int node,
unsigned int order_mode);
-int gxio_trio_get_port_property(gxio_trio_context_t * context,
+int gxio_trio_get_port_property(gxio_trio_context_t *context,
struct pcie_trio_ports_property *trio_ports);
-int gxio_trio_config_legacy_intr(gxio_trio_context_t * context, int inter_x,
+int gxio_trio_config_legacy_intr(gxio_trio_context_t *context, int inter_x,
int inter_y, int inter_ipi, int inter_event,
unsigned int mac, unsigned int intx);
-int gxio_trio_config_msi_intr(gxio_trio_context_t * context, int inter_x,
+int gxio_trio_config_msi_intr(gxio_trio_context_t *context, int inter_x,
int inter_y, int inter_ipi, int inter_event,
unsigned int mac, unsigned int mem_map,
uint64_t mem_map_base, uint64_t mem_map_limit,
unsigned int asid);
-int gxio_trio_set_mps_mrs(gxio_trio_context_t * context, uint16_t mps,
+int gxio_trio_set_mps_mrs(gxio_trio_context_t *context, uint16_t mps,
uint16_t mrs, unsigned int mac);
-int gxio_trio_force_rc_link_up(gxio_trio_context_t * context, unsigned int mac);
+int gxio_trio_force_rc_link_up(gxio_trio_context_t *context, unsigned int mac);
-int gxio_trio_force_ep_link_up(gxio_trio_context_t * context, unsigned int mac);
+int gxio_trio_force_ep_link_up(gxio_trio_context_t *context, unsigned int mac);
-int gxio_trio_get_mmio_base(gxio_trio_context_t * context, HV_PTE *base);
+int gxio_trio_get_mmio_base(gxio_trio_context_t *context, HV_PTE *base);
-int gxio_trio_check_mmio_offset(gxio_trio_context_t * context,
+int gxio_trio_check_mmio_offset(gxio_trio_context_t *context,
unsigned long offset, unsigned long size);
#endif /* !__GXIO_TRIO_LINUX_RPC_H__ */
#define GXIO_USB_HOST_OP_GET_MMIO_BASE IORPC_OPCODE(IORPC_FORMAT_NONE_NOUSER, 0x8000)
#define GXIO_USB_HOST_OP_CHECK_MMIO_OFFSET IORPC_OPCODE(IORPC_FORMAT_NONE_NOUSER, 0x8001)
-int gxio_usb_host_cfg_interrupt(gxio_usb_host_context_t * context, int inter_x,
+int gxio_usb_host_cfg_interrupt(gxio_usb_host_context_t *context, int inter_x,
int inter_y, int inter_ipi, int inter_event);
-int gxio_usb_host_register_client_memory(gxio_usb_host_context_t * context,
+int gxio_usb_host_register_client_memory(gxio_usb_host_context_t *context,
HV_PTE pte, unsigned int flags);
-int gxio_usb_host_get_mmio_base(gxio_usb_host_context_t * context,
+int gxio_usb_host_get_mmio_base(gxio_usb_host_context_t *context,
HV_PTE *base);
-int gxio_usb_host_check_mmio_offset(gxio_usb_host_context_t * context,
+int gxio_usb_host_check_mmio_offset(gxio_usb_host_context_t *context,
unsigned long offset, unsigned long size);
#endif /* !__GXIO_USB_HOST_LINUX_RPC_H__ */
* @return Zero if the context was successfully initialized, else a
* GXIO_ERR_xxx error code.
*/
-extern int gxio_usb_host_init(gxio_usb_host_context_t * context, int usb_index,
+extern int gxio_usb_host_init(gxio_usb_host_context_t *context, int usb_index,
int is_ehci);
/* Destroy a USB context.
* @return Zero if the context was successfully destroyed, else a
* GXIO_ERR_xxx error code.
*/
-extern int gxio_usb_host_destroy(gxio_usb_host_context_t * context);
+extern int gxio_usb_host_destroy(gxio_usb_host_context_t *context);
/* Retrieve the address of the shim's MMIO registers.
*
* @param context Pointer to a properly initialized gxio_usb_host_context_t.
* @return The address of the shim's MMIO registers.
*/
-extern void *gxio_usb_host_get_reg_start(gxio_usb_host_context_t * context);
+extern void *gxio_usb_host_get_reg_start(gxio_usb_host_context_t *context);
/* Retrieve the length of the shim's MMIO registers.
*
* @param context Pointer to a properly initialized gxio_usb_host_context_t.
* @return The length of the shim's MMIO registers.
*/
-extern size_t gxio_usb_host_get_reg_len(gxio_usb_host_context_t * context);
+extern size_t gxio_usb_host_get_reg_len(gxio_usb_host_context_t *context);
#endif /* _GXIO_USB_H_ */
{
return sys_llseek(fd, offset_high, offset_low, result, origin);
}
-
+
/* Provide the compat syscall number to call mapping. */
#undef __SYSCALL
#define __SYSCALL(nr, call) [nr] = (call),
+++ /dev/null
-/*
- * Copyright 2011 Tilera Corporation. All Rights Reserved.
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation, version 2.
- *
- * This program is distributed in the hope that it will be useful, but
- * WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or
- * NON INFRINGEMENT. See the GNU General Public License for
- * more details.
- *
- * Atomically access user memory, but use MMU to avoid propagating
- * kernel exceptions.
- */
-
-#include <linux/linkage.h>
-#include <asm/errno.h>
-#include <asm/futex.h>
-#include <asm/page.h>
-#include <asm/processor.h>
-
-/*
- * Provide a set of atomic memory operations supporting <asm/futex.h>.
- *
- * r0: user address to manipulate
- * r1: new value to write, or for cmpxchg, old value to compare against
- * r2: (cmpxchg only) new value to write
- *
- * Return __get_user struct, r0 with value, r1 with error.
- */
-#define FUTEX_OP(name, ...) \
-STD_ENTRY(futex_##name) \
- __VA_ARGS__; \
- { \
- move r1, zero; \
- jrp lr \
- }; \
- STD_ENDPROC(futex_##name); \
- .pushsection __ex_table,"a"; \
- .quad 1b, get_user_fault; \
- .popsection
-
- .pushsection .fixup,"ax"
-get_user_fault:
- { movei r1, -EFAULT; jrp lr }
- ENDPROC(get_user_fault)
- .popsection
-
-FUTEX_OP(cmpxchg, mtspr CMPEXCH_VALUE, r1; 1: cmpexch4 r0, r0, r2)
-FUTEX_OP(set, 1: exch4 r0, r0, r1)
-FUTEX_OP(add, 1: fetchadd4 r0, r0, r1)
-FUTEX_OP(or, 1: fetchor4 r0, r0, r1)
-FUTEX_OP(andn, nor r1, r1, zero; 1: fetchand4 r0, r0, r1)
0,
"udn",
LIST_HEAD_INIT(hardwall_types[HARDWALL_UDN].list),
- __SPIN_LOCK_INITIALIZER(hardwall_types[HARDWALL_UDN].lock),
+ __SPIN_LOCK_UNLOCKED(hardwall_types[HARDWALL_UDN].lock),
NULL
},
#ifndef __tilepro__
1, /* disabled pending hypervisor support */
"idn",
LIST_HEAD_INIT(hardwall_types[HARDWALL_IDN].list),
- __SPIN_LOCK_INITIALIZER(hardwall_types[HARDWALL_IDN].lock),
+ __SPIN_LOCK_UNLOCKED(hardwall_types[HARDWALL_IDN].lock),
NULL
},
{ /* access to user-space IPI */
0,
"ipi",
LIST_HEAD_INIT(hardwall_types[HARDWALL_IPI].list),
- __SPIN_LOCK_INITIALIZER(hardwall_types[HARDWALL_IPI].lock),
+ __SPIN_LOCK_UNLOCKED(hardwall_types[HARDWALL_IPI].lock),
NULL
},
#endif
}
bzt r28, 1f
bnz r29, 1f
+ /* Disable interrupts explicitly for preemption. */
+ IRQ_DISABLE(r20,r21)
+ TRACE_IRQS_OFF
jal preempt_schedule_irq
FEEDBACK_REENTER(interrupt_return)
1:
}
beqzt r28, 1f
bnez r29, 1f
+ /* Disable interrupts explicitly for preemption. */
+ IRQ_DISABLE(r20,r21)
+ TRACE_IRQS_OFF
jal preempt_schedule_irq
FEEDBACK_REENTER(interrupt_return)
1:
if ((long)VMALLOC_START >= 0)
early_panic(
"Linux VMALLOC region below the 2GB line (%#lx)!\n"
- "Reconfigure the kernel with fewer NR_HUGE_VMAPS\n"
- "or smaller VMALLOC_RESERVE.\n",
+ "Reconfigure the kernel with smaller VMALLOC_RESERVE.\n",
VMALLOC_START);
#endif
}
#include <linux/mmzone.h>
#include <linux/dcache.h>
#include <linux/fs.h>
+#include <linux/string.h>
#include <asm/backtrace.h>
#include <asm/page.h>
#include <asm/ucontext.h>
}
if (vma->vm_file) {
- char *s;
p = d_path(&vma->vm_file->f_path, buf, bufsize);
if (IS_ERR(p))
p = "?";
- s = strrchr(p, '/');
- if (s)
- p = s+1;
+ name = kbasename(p);
} else {
- p = "anon";
+ name = "anon";
}
/* Generate a string description of the vma info. */
- namelen = strlen(p);
+ namelen = strlen(name);
remaining = (bufsize - 1) - namelen;
- memmove(buf, p, namelen);
+ memmove(buf, name, namelen);
snprintf(buf + namelen, remaining, "[%lx+%lx] ",
vma->vm_start, vma->vm_end - vma->vm_start);
}
/*
* This function generates unalign fixup JIT.
*
- * We fist find unalign load/store instruction's destination, source
- * reguisters: ra, rb and rd. and 3 scratch registers by calling
+ * We first find the unaligned load/store instruction's destination and source
+ * registers: ra, rb and rd, and 3 scratch registers by calling
* find_regs(...). 3 scratch clobbers should not alias with any register
* used in the fault bundle. Then analyze the fault bundle to determine
* if it's a load or store, operand width, branch or address increment etc.
EXPORT_SYMBOL(_atomic_xor);
-u64 _atomic64_xchg(u64 *v, u64 n)
+long long _atomic64_xchg(long long *v, long long n)
{
return __atomic64_xchg(v, __atomic_setup(v), n);
}
EXPORT_SYMBOL(_atomic64_xchg);
-u64 _atomic64_xchg_add(u64 *v, u64 i)
+long long _atomic64_xchg_add(long long *v, long long i)
{
return __atomic64_xchg_add(v, __atomic_setup(v), i);
}
EXPORT_SYMBOL(_atomic64_xchg_add);
-u64 _atomic64_xchg_add_unless(u64 *v, u64 a, u64 u)
+long long _atomic64_xchg_add_unless(long long *v, long long a, long long u)
{
/*
* Note: argument order is switched here since it is easier
}
EXPORT_SYMBOL(_atomic64_xchg_add_unless);
-u64 _atomic64_cmpxchg(u64 *v, u64 o, u64 n)
+long long _atomic64_cmpxchg(long long *v, long long o, long long n)
{
return __atomic64_cmpxchg(v, __atomic_setup(v), o, n);
}
pmd_k = vmalloc_sync_one(pgd, address);
if (!pmd_k)
return -1;
- if (pmd_huge(*pmd_k))
- return 0; /* support TILE huge_vmap() API */
pte_k = pte_offset_kernel(pmd_k, address);
if (!pte_present(*pte_k))
return -1;
printk(KERN_DEBUG " PKMAP %#lx - %#lx\n",
PKMAP_BASE, PKMAP_ADDR(LAST_PKMAP) - 1);
#endif
-#ifdef CONFIG_HUGEVMAP
- printk(KERN_DEBUG " HUGEMAP %#lx - %#lx\n",
- HUGE_VMAP_BASE, HUGE_VMAP_END - 1);
-#endif
printk(KERN_DEBUG " VMALLOC %#lx - %#lx\n",
_VMALLOC_START, _VMALLOC_END - 1);
#ifdef __tilegx__
}
/* Shatter the huge page into the preallocated L2 page table. */
- pmd_populate_kernel(&init_mm, pmd,
- get_prealloc_pte(pte_pfn(*(pte_t *)pmd)));
+ pmd_populate_kernel(&init_mm, pmd, get_prealloc_pte(pmd_pfn(*pmd)));
#ifdef __PAGETABLE_PMD_FOLDED
/* Walk every pgd on the system and update the pmd there. */
const char __user *buffer, size_t count, loff_t *pos)
{
char *end, buf[sizeof("nnnnn\0")];
+ size_t size;
int tmp;
- if (copy_from_user(buf, buffer, count))
+ size = min(count, sizeof(buf));
+ if (copy_from_user(buf, buffer, size))
return -EFAULT;
tmp = simple_strtol(buf, &end, 0);
bool "Intel Low Power Subsystem Support"
depends on ACPI
select COMMON_CLK
+ select PINCTRL
---help---
Select to build support for Intel Low Power Subsystem such as
found on Intel Lynxpoint PCH. Selecting this option enables
- things like clock tree (common clock framework) which are needed
- by the LPSS peripheral drivers.
+ things like clock tree (common clock framework) and pincontrol
+ which are needed by the LPSS peripheral drivers.
config X86_RDC321X
bool "RDC R-321x SoC"
config X86_UP_APIC
bool "Local APIC support on uniprocessors"
- depends on X86_32 && !SMP && !X86_32_NON_STANDARD
+ depends on X86_32 && !SMP && !X86_32_NON_STANDARD && !PCI_MSI
---help---
A local APIC (Advanced Programmable Interrupt Controller) is an
integrated interrupt controller in the CPU. If you have a single-CPU
config X86_LOCAL_APIC
def_bool y
- depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC
+ depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_APIC || PCI_MSI
config X86_IO_APIC
def_bool y
- depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC
+ depends on X86_64 || SMP || X86_32_NON_STANDARD || X86_UP_IOAPIC || PCI_MSI
config X86_VISWS_APIC
def_bool y
config MICROCODE
tristate "CPU microcode loading support"
+ depends on CPU_SUP_AMD || CPU_SUP_INTEL
select FW_LOADER
---help---
* Catch too early usage of this before alternatives
* have run.
*/
- asm goto("1: jmp %l[t_warn]\n"
+ asm_volatile_goto("1: jmp %l[t_warn]\n"
"2:\n"
".section .altinstructions,\"a\"\n"
" .long 1b - .\n"
#endif
- asm goto("1: jmp %l[t_no]\n"
+ asm_volatile_goto("1: jmp %l[t_no]\n"
"2:\n"
".section .altinstructions,\"a\"\n"
" .long 1b - .\n"
* have. Thus, we force the jump to the widest, 4-byte, signed relative
* offset even though the last would often fit in less bytes.
*/
- asm goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n"
+ asm_volatile_goto("1: .byte 0xe9\n .long %l[t_dynamic] - 2f\n"
"2:\n"
".section .altinstructions,\"a\"\n"
" .long 1b - .\n" /* src offset */
static __always_inline bool arch_static_branch(struct static_key *key)
{
- asm goto("1:"
+ asm_volatile_goto("1:"
".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t"
".pushsection __jump_table, \"aw\" \n\t"
_ASM_ALIGN "\n\t"
static inline void __mutex_fastpath_lock(atomic_t *v,
void (*fail_fn)(atomic_t *))
{
- asm volatile goto(LOCK_PREFIX " decl %0\n"
+ asm_volatile_goto(LOCK_PREFIX " decl %0\n"
" jns %l[exit]\n"
: : "m" (v->counter)
: "memory", "cc"
static inline void __mutex_fastpath_unlock(atomic_t *v,
void (*fail_fn)(atomic_t *))
{
- asm volatile goto(LOCK_PREFIX " incl %0\n"
+ asm_volatile_goto(LOCK_PREFIX " incl %0\n"
" jg %l[exit]\n"
: : "m" (v->counter)
: "memory", "cc"
do { \
typedef typeof(var) pao_T__; \
const int pao_ID__ = (__builtin_constant_p(val) && \
- ((val) == 1 || (val) == -1)) ? (val) : 0; \
+ ((val) == 1 || (val) == -1)) ? \
+ (int)(val) : 0; \
if (0) { \
pao_T__ pao_tmp__; \
pao_tmp__ = (val); \
return get_phys_to_machine(pfn) != INVALID_P2M_ENTRY;
}
-static inline unsigned long mfn_to_pfn(unsigned long mfn)
+static inline unsigned long mfn_to_pfn_no_overrides(unsigned long mfn)
{
unsigned long pfn;
- int ret = 0;
+ int ret;
if (xen_feature(XENFEAT_auto_translated_physmap))
return mfn;
- if (unlikely(mfn >= machine_to_phys_nr)) {
- pfn = ~0;
- goto try_override;
- }
- pfn = 0;
+ if (unlikely(mfn >= machine_to_phys_nr))
+ return ~0;
+
/*
* The array access can fail (e.g., device space beyond end of RAM).
* In such cases it doesn't matter what we return (we return garbage),
* but we must handle the fault without crashing!
*/
ret = __get_user(pfn, &machine_to_phys_mapping[mfn]);
-try_override:
- /* ret might be < 0 if there are no entries in the m2p for mfn */
if (ret < 0)
- pfn = ~0;
- else if (get_phys_to_machine(pfn) != mfn)
+ return ~0;
+
+ return pfn;
+}
+
+static inline unsigned long mfn_to_pfn(unsigned long mfn)
+{
+ unsigned long pfn;
+
+ if (xen_feature(XENFEAT_auto_translated_physmap))
+ return mfn;
+
+ pfn = mfn_to_pfn_no_overrides(mfn);
+ if (get_phys_to_machine(pfn) != mfn) {
/*
* If this appears to be a foreign mfn (because the pfn
* doesn't map back to the mfn), then check the local override
* m2p_find_override_pfn returns ~0 if it doesn't find anything.
*/
pfn = m2p_find_override_pfn(mfn, ~0);
+ }
/*
* pfn is ~0 if there are no entries in the m2p for mfn or if the
break;
case UV3_HUB_PART_NUMBER:
case UV3_HUB_PART_NUMBER_X:
- uv_min_hub_revision_id += UV3_HUB_REVISION_BASE - 1;
+ uv_min_hub_revision_id += UV3_HUB_REVISION_BASE;
break;
}
static int __kprobes
perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
{
- int ret;
u64 start_clock;
u64 finish_clock;
+ int ret;
if (!atomic_read(&active_events))
return NMI_DONE;
- start_clock = local_clock();
+ start_clock = sched_clock();
ret = x86_pmu.handle_irq(regs);
- finish_clock = local_clock();
+ finish_clock = sched_clock();
perf_sample_event_took(finish_clock - start_clock);
err = amd_pmu_init();
break;
default:
- return 0;
+ err = -ENOTSUPP;
}
if (err != 0) {
pr_cont("no PMU driver, software events only.\n");
void arch_perf_update_userpage(struct perf_event_mmap_page *userpg, u64 now)
{
- userpg->cap_usr_time = 0;
- userpg->cap_usr_time_zero = 0;
- userpg->cap_usr_rdpmc = x86_pmu.attr_rdpmc;
+ userpg->cap_user_time = 0;
+ userpg->cap_user_time_zero = 0;
+ userpg->cap_user_rdpmc = x86_pmu.attr_rdpmc;
userpg->pmc_width = x86_pmu.cntval_bits;
- if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
+ if (!sched_clock_stable)
return;
- if (!boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
- return;
-
- userpg->cap_usr_time = 1;
+ userpg->cap_user_time = 1;
userpg->time_mult = this_cpu_read(cyc2ns);
userpg->time_shift = CYC2NS_SCALE_FACTOR;
userpg->time_offset = this_cpu_read(cyc2ns_offset) - now;
- if (sched_clock_stable && !check_tsc_disabled()) {
- userpg->cap_usr_time_zero = 1;
- userpg->time_zero = this_cpu_read(cyc2ns_offset);
- }
+ userpg->cap_user_time_zero = 1;
+ userpg->time_zero = this_cpu_read(cyc2ns_offset);
}
/*
static struct extra_reg intel_slm_extra_regs[] __read_mostly =
{
/* must define OFFCORE_RSP_X first, see intel_fixup_er() */
- INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x768005ffff, RSP_0),
- INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0x768005ffff, RSP_1),
+ INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x768005ffffull, RSP_0),
+ INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0x768005ffffull, RSP_1),
EVENT_EXTRA_END
};
break;
case 55: /* Atom 22nm "Silvermont" */
+ case 77: /* Avoton "Silvermont" */
memcpy(hw_cache_event_ids, slm_hw_cache_event_ids,
sizeof(hw_cache_event_ids));
memcpy(hw_cache_extra_regs, slm_hw_cache_extra_regs,
INTEL_EVENT_CONSTRAINT(0xd0, 0xf), /* MEM_UOP_RETIRED.* */
INTEL_EVENT_CONSTRAINT(0xd1, 0xf), /* MEM_LOAD_UOPS_RETIRED.* */
INTEL_EVENT_CONSTRAINT(0xd2, 0xf), /* MEM_LOAD_UOPS_LLC_HIT_RETIRED.* */
+ INTEL_EVENT_CONSTRAINT(0xd3, 0xf), /* MEM_LOAD_UOPS_LLC_MISS_RETIRED.* */
INTEL_UEVENT_CONSTRAINT(0x02d4, 0xf), /* MEM_LOAD_UOPS_MISC_RETIRED.LLC_MISS */
EVENT_CONSTRAINT_END
};
box->hrtimer.function = uncore_pmu_hrtimer;
}
-struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type, int cpu)
+static struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type, int node)
{
struct intel_uncore_box *box;
int i, size;
size = sizeof(*box) + type->num_shared_regs * sizeof(struct intel_uncore_extra_reg);
- box = kzalloc_node(size, GFP_KERNEL, cpu_to_node(cpu));
+ box = kzalloc_node(size, GFP_KERNEL, node);
if (!box)
return NULL;
struct intel_uncore_box *fake_box;
int ret = -EINVAL, n;
- fake_box = uncore_alloc_box(pmu->type, smp_processor_id());
+ fake_box = uncore_alloc_box(pmu->type, NUMA_NO_NODE);
if (!fake_box)
return -ENOMEM;
}
type = pci_uncores[UNCORE_PCI_DEV_TYPE(id->driver_data)];
- box = uncore_alloc_box(type, 0);
+ box = uncore_alloc_box(type, NUMA_NO_NODE);
if (!box)
return -ENOMEM;
if (pmu->func_id < 0)
pmu->func_id = j;
- box = uncore_alloc_box(type, cpu);
+ box = uncore_alloc_box(type, cpu_to_node(cpu));
if (!box)
return -ENOMEM;
TRACE_IRQS_OFF
.endm
-ENTRY(save_rest)
- PARTIAL_FRAME 1 (REST_SKIP+8)
- movq 5*8+16(%rsp), %r11 /* save return address */
- movq_cfi rbx, RBX+16
- movq_cfi rbp, RBP+16
- movq_cfi r12, R12+16
- movq_cfi r13, R13+16
- movq_cfi r14, R14+16
- movq_cfi r15, R15+16
- movq %r11, 8(%rsp) /* return address */
- FIXUP_TOP_OF_STACK %r11, 16
- ret
- CFI_ENDPROC
-END(save_rest)
-
/* save complete stack frame */
.pushsection .kprobes.text, "ax"
ENTRY(save_paranoid)
struct dentry *kvm_init_debugfs(void)
{
- d_kvm_debug = debugfs_create_dir("kvm", NULL);
+ d_kvm_debug = debugfs_create_dir("kvm-guest", NULL);
if (!d_kvm_debug)
printk(KERN_WARNING "Could not create 'kvm' debugfs directory\n");
if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
return;
- printk(KERN_INFO "KVM setup paravirtual spinlock\n");
+ pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
+ pv_lock_ops.unlock_kick = kvm_unlock_kick;
+}
+
+static __init int kvm_spinlock_init_jump(void)
+{
+ if (!kvm_para_available())
+ return 0;
+ if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
+ return 0;
	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+ printk(KERN_INFO "KVM setup paravirtual spinlock\n");
- pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
- pv_lock_ops.unlock_kick = kvm_unlock_kick;
+ return 0;
}
+early_initcall(kvm_spinlock_init_jump);
+
#endif /* CONFIG_PARAVIRT_SPINLOCKS */
/* need to apply patch? */
if (rev >= mc_amd->hdr.patch_id) {
c->microcode = rev;
+ uci->cpu_sig.rev = rev;
return 0;
}
u64 before, delta, whole_msecs;
int remainder_ns, decimal_msecs, thishandled;
- before = local_clock();
+ before = sched_clock();
thishandled = a->handler(type, regs);
handled += thishandled;
- delta = local_clock() - before;
+ delta = sched_clock() - before;
trace_nmi_handler(a->handler, (int)delta, thishandled);
if (delta < nmi_longest_ns)
DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E6320"),
},
},
+ { /* Handle problems with rebooting on the Latitude E5410. */
+ .callback = set_pci_reboot,
+ .ident = "Dell Latitude E5410",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Latitude E5410"),
+ },
+ },
{ /* Handle problems with rebooting on the Latitude E5420. */
.callback = set_pci_reboot,
.ident = "Dell Latitude E5420",
},
{ /* Handle problems with rebooting on the Precision M6600. */
.callback = set_pci_reboot,
- .ident = "Dell OptiPlex 990",
+ .ident = "Dell Precision M6600",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
DMI_MATCH(DMI_PRODUCT_NAME, "Precision M6600"),
},
},
+ { /* Handle problems with rebooting on the Dell PowerEdge C6100. */
+ .callback = set_pci_reboot,
+ .ident = "Dell PowerEdge C6100",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "C6100"),
+ },
+ },
+ { /* Some C6100 machines were shipped with vendor being 'Dell'. */
+ .callback = set_pci_reboot,
+ .ident = "Dell PowerEdge C6100",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "C6100"),
+ },
+ },
{ }
};
{
static int current_node = -1;
int node = early_cpu_to_node(cpu);
+ int max_cpu_present = find_last_bit(cpumask_bits(cpu_present_mask), NR_CPUS);
if (system_state == SYSTEM_BOOTING) {
if (node != current_node) {
current_node = node;
pr_info("Booting Node %3d, Processors ", node);
}
- pr_cont(" #%d%s", cpu, cpu == (nr_cpu_ids - 1) ? " OK\n" : "");
+ pr_cont(" #%4d%s", cpu, cpu == max_cpu_present ? " OK\n" : "");
return;
} else
pr_info("Booting Node %d Processor %d APIC 0x%x\n",
* the part that is occupied by the framebuffer */
len = mode->height * mode->stride;
len = PAGE_ALIGN(len);
- if (len > si->lfb_size << 16) {
+ if (len > (u64)si->lfb_size << 16) {
printk(KERN_WARNING "sysfb: VRAM smaller than advertised\n");
return -EINVAL;
}
/* setup IORESOURCE_MEM as framebuffer memory */
memset(&res, 0, sizeof(res));
- res.flags = IORESOURCE_MEM;
+ res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
res.name = simplefb_resname;
res.start = si->lfb_base;
res.end = si->lfb_base + len - 1;
return rc;
}
+static int em_ret_far_imm(struct x86_emulate_ctxt *ctxt)
+{
+ int rc;
+
+ rc = em_ret_far(ctxt);
+ if (rc != X86EMUL_CONTINUE)
+ return rc;
+ rsp_increment(ctxt, ctxt->src.val);
+ return X86EMUL_CONTINUE;
+}
+
static int em_cmpxchg(struct x86_emulate_ctxt *ctxt)
{
/* Save real source value, then compare EAX against destination. */
G(ByteOp, group11), G(0, group11),
/* 0xC8 - 0xCF */
I(Stack | SrcImmU16 | Src2ImmByte, em_enter), I(Stack, em_leave),
- N, I(ImplicitOps | Stack, em_ret_far),
+ I(ImplicitOps | Stack | SrcImmU16, em_ret_far_imm),
+ I(ImplicitOps | Stack, em_ret_far),
D(ImplicitOps), DI(SrcImmByte, intn),
D(ImplicitOps | No64), II(ImplicitOps, em_iret, iret),
/* 0xD0 - 0xD7 */
pt_element_t prefetch_ptes[PTE_PREFETCH_NUM];
gpa_t pte_gpa[PT_MAX_FULL_LEVELS];
pt_element_t __user *ptep_user[PT_MAX_FULL_LEVELS];
+ bool pte_writable[PT_MAX_FULL_LEVELS];
unsigned pt_access;
unsigned pte_access;
gfn_t gfn;
if (pte == orig_pte)
continue;
+ /*
+ * If the slot is read-only, simply do not process the accessed
+ * and dirty bits. This is the correct thing to do if the slot
+ * is ROM, and page tables in read-as-ROM/write-as-MMIO slots
+ * are only supported if the accessed and dirty bits are already
+ * set in the ROM (so that MMIO writes are never needed).
+ *
+ * Note that NPT does not allow this at all and faults, since
+ * it always wants nested page table entries for the guest
+ * page tables to be writable. And EPT works but will simply
+ * overwrite the read-only memory to set the accessed and dirty
+ * bits.
+ */
+ if (unlikely(!walker->pte_writable[level - 1]))
+ continue;
+
ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index, orig_pte, pte);
if (ret)
return ret;
goto error;
real_gfn = gpa_to_gfn(real_gfn);
- host_addr = gfn_to_hva(vcpu->kvm, real_gfn);
+ host_addr = gfn_to_hva_prot(vcpu->kvm, real_gfn,
+ &walker->pte_writable[walker->level - 1]);
if (unlikely(kvm_is_error_hva(host_addr)))
goto error;
static void ept_load_pdptrs(struct kvm_vcpu *vcpu)
{
+ struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
+
if (!test_bit(VCPU_EXREG_PDPTR,
(unsigned long *)&vcpu->arch.regs_dirty))
return;
if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) {
- vmcs_write64(GUEST_PDPTR0, vcpu->arch.mmu.pdptrs[0]);
- vmcs_write64(GUEST_PDPTR1, vcpu->arch.mmu.pdptrs[1]);
- vmcs_write64(GUEST_PDPTR2, vcpu->arch.mmu.pdptrs[2]);
- vmcs_write64(GUEST_PDPTR3, vcpu->arch.mmu.pdptrs[3]);
+ vmcs_write64(GUEST_PDPTR0, mmu->pdptrs[0]);
+ vmcs_write64(GUEST_PDPTR1, mmu->pdptrs[1]);
+ vmcs_write64(GUEST_PDPTR2, mmu->pdptrs[2]);
+ vmcs_write64(GUEST_PDPTR3, mmu->pdptrs[3]);
}
}
static void ept_save_pdptrs(struct kvm_vcpu *vcpu)
{
+ struct kvm_mmu *mmu = vcpu->arch.walk_mmu;
+
if (is_paging(vcpu) && is_pae(vcpu) && !is_long_mode(vcpu)) {
- vcpu->arch.mmu.pdptrs[0] = vmcs_read64(GUEST_PDPTR0);
- vcpu->arch.mmu.pdptrs[1] = vmcs_read64(GUEST_PDPTR1);
- vcpu->arch.mmu.pdptrs[2] = vmcs_read64(GUEST_PDPTR2);
- vcpu->arch.mmu.pdptrs[3] = vmcs_read64(GUEST_PDPTR3);
+ mmu->pdptrs[0] = vmcs_read64(GUEST_PDPTR0);
+ mmu->pdptrs[1] = vmcs_read64(GUEST_PDPTR1);
+ mmu->pdptrs[2] = vmcs_read64(GUEST_PDPTR2);
+ mmu->pdptrs[3] = vmcs_read64(GUEST_PDPTR3);
}
__set_bit(VCPU_EXREG_PDPTR,
return 0;
}
+ /*
+ * EPT violation happened while executing iret from NMI,
+ * "blocked by NMI" bit has to be set before next VM entry.
+ * There are errata that may cause this bit to not be set:
+ * AAK134, BY25.
+ */
+ if (!(to_vmx(vcpu)->idt_vectoring_info & VECTORING_INFO_VALID_MASK) &&
+ cpu_has_virtual_nmis() &&
+ (exit_qualification & INTR_INFO_UNBLOCK_NMI))
+ vmcs_set_bits(GUEST_INTERRUPTIBILITY_INFO, GUEST_INTR_STATE_NMI);
+
gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS);
trace_kvm_page_fault(gpa, exit_qualification);
return;
}
+static void bpf_jit_free_deferred(struct work_struct *work)
+{
+ struct sk_filter *fp = container_of(work, struct sk_filter, work);
+ unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
+ struct bpf_binary_header *header = (void *)addr;
+
+ set_memory_rw(addr, header->pages);
+ module_free(NULL, header);
+ kfree(fp);
+}
+
void bpf_jit_free(struct sk_filter *fp)
{
if (fp->bpf_func != sk_run_filter) {
- unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
- struct bpf_binary_header *header = (void *)addr;
-
- set_memory_rw(addr, header->pages);
- module_free(NULL, header);
+ INIT_WORK(&fp->work, bpf_jit_free_deferred);
+ schedule_work(&fp->work);
}
}
if (!(pci_probe & PCI_PROBE_MMCONF) || pci_mmcfg_arch_init_failed)
return -ENODEV;
- if (start > end || !addr)
+ if (start > end)
return -EINVAL;
mutex_lock(&pci_mmcfg_lock);
return -EEXIST;
}
+ if (!addr) {
+ mutex_unlock(&pci_mmcfg_lock);
+ return -EINVAL;
+ }
+
rc = -EBUSY;
cfg = pci_mmconfig_alloc(seg, start, end, addr);
if (cfg == NULL) {
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
md = p;
- if (!(md->attribute & EFI_MEMORY_RUNTIME) &&
- md->type != EFI_BOOT_SERVICES_CODE &&
- md->type != EFI_BOOT_SERVICES_DATA)
- continue;
+ if (!(md->attribute & EFI_MEMORY_RUNTIME)) {
+#ifdef CONFIG_X86_64
+ if (md->type != EFI_BOOT_SERVICES_CODE &&
+ md->type != EFI_BOOT_SERVICES_DATA)
+#endif
+ continue;
+ }
size = md->num_pages << EFI_PAGE_SHIFT;
end = md->phys_addr + size;
unsigned long uninitialized_var(address);
unsigned level;
pte_t *ptep = NULL;
- int ret = 0;
pfn = page_to_pfn(page);
if (!PageHighMem(page)) {
* frontend pages while they are being shared with the backend,
* because mfn_to_pfn (that ends up being called by GUPF) will
* return the backend pfn rather than the frontend pfn. */
- ret = __get_user(pfn, &machine_to_phys_mapping[mfn]);
- if (ret == 0 && get_phys_to_machine(pfn) == mfn)
+ pfn = mfn_to_pfn_no_overrides(mfn);
+ if (get_phys_to_machine(pfn) == mfn)
set_phys_to_machine(pfn, FOREIGN_FRAME(mfn));
return 0;
unsigned long uninitialized_var(address);
unsigned level;
pte_t *ptep = NULL;
- int ret = 0;
pfn = page_to_pfn(page);
mfn = get_phys_to_machine(pfn);
* the original pfn causes mfn_to_pfn(mfn) to return the frontend
* pfn again. */
mfn &= ~FOREIGN_FRAME_BIT;
- ret = __get_user(pfn, &machine_to_phys_mapping[mfn]);
- if (ret == 0 && get_phys_to_machine(pfn) == FOREIGN_FRAME(mfn) &&
+ pfn = mfn_to_pfn_no_overrides(mfn);
+ if (get_phys_to_machine(pfn) == FOREIGN_FRAME(mfn) &&
m2p_find_override(mfn) == NULL)
set_phys_to_machine(pfn, mfn);
old memory can be recycled */
make_lowmem_page_readwrite(xen_initial_gdt);
+#ifdef CONFIG_X86_32
+ /*
+ * Xen starts us with XEN_FLAT_RING1_DS, but linux code
+ * expects __USER_DS
+ */
+ loadsegment(ds, __USER_DS);
+ loadsegment(es, __USER_DS);
+#endif
+
xen_filter_cpu_maps();
xen_setup_vcpu_info_placement();
}
}
+/*
+ * Our init of PV spinlocks is split into two init functions because we use
+ * both paravirt patching and jump label patching, and all of this has to be
+ * done before SMP code is invoked.
+ *
+ * The paravirt patching needs to be done _before_ the alternative asm code
+ * is started, otherwise we would not patch the core kernel code.
+ */
void __init xen_init_spinlocks(void)
{
return;
}
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
-
pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
pv_lock_ops.unlock_kick = xen_unlock_kick;
}
+/*
+ * The jump_label init code, by contrast, needs to happen _after_ the jump
+ * labels are enabled and before SMP is started. Hence we use pre-SMP initcall level
+ * init. We cannot do it in xen_init_spinlocks as that is done before
+ * jump labels are activated.
+ */
+static __init int xen_init_spinlocks_jump(void)
+{
+ if (!xen_pvspin)
+ return 0;
+
+	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+ return 0;
+}
+early_initcall(xen_init_spinlocks_jump);
+
static __init int xen_parse_nopvspin(char *arg)
{
xen_pvspin = false;
* a3: exctable, original value in excsave1
*/
-fast_syscall_spill_registers_fixup:
+ENTRY(fast_syscall_spill_registers_fixup)
rsr a2, windowbase # get current windowbase (a2 is saved)
xsr a0, depc # restore depc and a0
*/
xsr a3, excsave1 # get spill-mask
- slli a2, a3, 1 # shift left by one
+ slli a3, a3, 1 # shift left by one
- slli a3, a2, 32-WSBITS
- src a2, a2, a3 # a1 = xxwww1yyxxxwww1yy......
+ slli a2, a3, 32-WSBITS
+ src a2, a3, a2 # a2 = xxwww1yyxxxwww1yy......
wsr a2, windowstart # set corrected windowstart
- rsr a3, excsave1
- l32i a2, a3, EXC_TABLE_DOUBLE_SAVE # restore a2
- l32i a3, a3, EXC_TABLE_PARAM # original WB (in user task)
+ srli a3, a3, 1
+ rsr a2, excsave1
+ l32i a2, a2, EXC_TABLE_DOUBLE_SAVE # restore a2
+ xsr a2, excsave1
+ s32i a3, a2, EXC_TABLE_DOUBLE_SAVE # save a3
+ l32i a3, a2, EXC_TABLE_PARAM # original WB (in user task)
+ xsr a2, excsave1
/* Return to the original (user task) WINDOWBASE.
* We leave the following frame behind:
* a0, a1, a2 same
- * a3: trashed (saved in excsave_1)
+ * a3: trashed (saved in EXC_TABLE_DOUBLE_SAVE)
* depc: depc (we have to return to that address)
- * excsave_1: a3
+ * excsave_1: exctable
*/
wsr a3, windowbase
* a0: return address
* a1: used, stack pointer
* a2: kernel stack pointer
- * a3: available, saved in EXCSAVE_1
+ * a3: available
* depc: exception address
- * excsave: a3
+ * excsave: exctable
* Note: This frame might be the same as above.
*/
rsr a0, exccause
addx4 a0, a0, a3 # find entry in table
l32i a0, a0, EXC_TABLE_FAST_USER # load handler
+ l32i a3, a3, EXC_TABLE_DOUBLE_SAVE
jx a0
-fast_syscall_spill_registers_fixup_return:
+ENDPROC(fast_syscall_spill_registers_fixup)
+
+ENTRY(fast_syscall_spill_registers_fixup_return)
/* When we return here, all registers have been restored (a2: DEPC) */
/* Restore fixup handler. */
- xsr a3, excsave1
- movi a2, fast_syscall_spill_registers_fixup
- s32i a2, a3, EXC_TABLE_FIXUP
- s32i a0, a3, EXC_TABLE_DOUBLE_SAVE
- rsr a2, windowbase
- s32i a2, a3, EXC_TABLE_PARAM
- l32i a2, a3, EXC_TABLE_KSTK
+ rsr a2, excsave1
+ s32i a3, a2, EXC_TABLE_DOUBLE_SAVE
+ movi a3, fast_syscall_spill_registers_fixup
+ s32i a3, a2, EXC_TABLE_FIXUP
+ rsr a3, windowbase
+ s32i a3, a2, EXC_TABLE_PARAM
+ l32i a2, a2, EXC_TABLE_KSTK
/* Load WB at the time the exception occurred. */
wsr a3, windowbase
rsync
+ rsr a3, excsave1
+ l32i a3, a3, EXC_TABLE_DOUBLE_SAVE
+
rfde
+ENDPROC(fast_syscall_spill_registers_fixup_return)
/*
* spill all registers.
sp = regs->areg[1];
- if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && ! on_sig_stack(sp)) {
+ if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && sas_ss_flags(sp) == 0) {
sp = current->sas_ss_sp + current->sas_ss_size;
}
return 1;
}
- if ((new = alloc_bootmem(sizeof new)) == NULL) {
+ new = alloc_bootmem(sizeof(*new));
+ if (new == NULL) {
printk("Alloc_bootmem failed\n");
return 1;
}
See Documentation/cgroups/blkio-controller.txt for more information.
-config CMDLINE_PARSER
+config BLK_CMDLINE_PARSER
bool "Block device command line partition parser"
default n
---help---
- Parsing command line, get the partitions information.
+ Enabling this option allows you to specify the partition layout from
+ the kernel boot args. This is typically of use for embedded devices
+ which don't otherwise have any standardized method for listing the
+ partitions on a block device.
+
+ See Documentation/block/cmdline-partition.txt for more information.
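	  For example, an illustrative eMMC layout could be passed on the kernel
	  command line as blkdevparts=mmcblk0:1m(boot),512k(env),-(data).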
menu "Partition Types"
obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o
obj-$(CONFIG_BLK_DEV_INTEGRITY) += blk-integrity.o
-obj-$(CONFIG_CMDLINE_PARSER) += cmdline-parser.o
+obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o
blkg->online = true;
spin_unlock(&blkcg->lock);
- if (!ret)
+ if (!ret) {
+ if (blkcg == &blkcg_root) {
+ q->root_blkg = blkg;
+ q->root_rl.blkg = blkg;
+ }
return blkg;
+ }
/* @blkg failed fully initialized, use the usual release path */
blkg_put(blkg);
rcu_assign_pointer(blkcg->blkg_hint, NULL);
/*
+	 * If root blkg is destroyed, just clear the pointer since root_rl
+ * does not take reference on root blkg.
+ */
+ if (blkcg == &blkcg_root) {
+ blkg->q->root_blkg = NULL;
+ blkg->q->root_rl.blkg = NULL;
+ }
+
+ /*
* Put the reference taken at the time of creation so that when all
* queues are gone, group can be destroyed.
*/
blkg_destroy(blkg);
spin_unlock(&blkcg->lock);
}
-
- /*
- * root blkg is destroyed. Just clear the pointer since
- * root_rl does not take reference on root blkg.
- */
- q->root_blkg = NULL;
- q->root_rl.blkg = NULL;
}
/*
ret = PTR_ERR(blkg);
goto out_unlock;
}
- q->root_blkg = blkg;
- q->root_rl.blkg = blkg;
list_for_each_entry(blkg, &q->blkg_list, q_node)
cnt++;
if (plug) {
/*
* If this is the first request added after a plug, fire
- * of a plug trace. If others have been added before, check
- * if we have multiple devices in this plug. If so, make a
- * note to sort the list before dispatch.
+		 * off a plug trace.
*/
- if (list_empty(&plug->list))
+ if (!request_count)
trace_block_plug(q);
else {
if (request_count >= BLK_MAX_REQUEST_COUNT) {
spin_lock_irq(q->queue_lock);
if (unlikely(blk_queue_dying(q))) {
+ rq->cmd_flags |= REQ_QUIET;
rq->errors = -ENXIO;
- if (rq->end_io)
- rq->end_io(rq, rq->errors);
+ __blk_end_request_all(rq, rq->errors);
spin_unlock_irq(q->queue_lock);
return;
}
if (samples) {
v = blkg_stat_read(&cfqg->stats.avg_queue_size_sum);
- do_div(v, samples);
+ v = div64_u64(v, samples);
}
__blkg_prfill_u64(sf, pd, v);
return 0;
if (!eq)
return -ENOMEM;
- cfqd = kmalloc_node(sizeof(*cfqd), GFP_KERNEL | __GFP_ZERO, q->node);
+ cfqd = kzalloc_node(sizeof(*cfqd), GFP_KERNEL, q->node);
if (!cfqd) {
kobject_put(&eq->kobj);
return -ENOMEM;
if (!eq)
return -ENOMEM;
- dd = kmalloc_node(sizeof(*dd), GFP_KERNEL | __GFP_ZERO, q->node);
+ dd = kzalloc_node(sizeof(*dd), GFP_KERNEL, q->node);
if (!dd) {
kobject_put(&eq->kobj);
return -ENOMEM;
{
struct elevator_queue *eq;
- eq = kmalloc_node(sizeof(*eq), GFP_KERNEL | __GFP_ZERO, q->node);
+ eq = kzalloc_node(sizeof(*eq), GFP_KERNEL, q->node);
if (unlikely(!eq))
goto err;
{
struct gendisk *disk;
- disk = kmalloc_node(sizeof(struct gendisk),
- GFP_KERNEL | __GFP_ZERO, node_id);
+ disk = kzalloc_node(sizeof(struct gendisk), GFP_KERNEL, node_id);
if (disk) {
if (!init_part_stats(&disk->part0)) {
kfree(disk);
config CMDLINE_PARTITION
bool "Command line partition support" if PARTITION_ADVANCED
- select CMDLINE_PARSER
+ select BLK_CMDLINE_PARSER
help
- Say Y here if you would read the partitions table from bootargs.
+ Say Y here if you want to read the partition table from bootargs.
The format for the command line is just like mtdparts.
* Copyright (C) 2013 HUAWEI
* Author: Cai Zhiyong <caizhiyong@huawei.com>
*
- * Read block device partition table from command line.
- * The partition used for fixed block device (eMMC) embedded device.
- * It is no MBR, save storage space. Bootloader can be easily accessed
+ * Read block device partition table from the command line.
+ * Typically used for fixed block (eMMC) embedded devices.
+ * It has no MBR, so saves storage space. Bootloader can be easily accessed
* by absolute address of data on the block device.
* Users can easily change the partition.
*
* The format for the command line is just like mtdparts.
*
- * Verbose config please reference "Documentation/block/cmdline-partition.txt"
+ * For further information, see "Documentation/block/cmdline-partition.txt"
*
*/
* the disk size.
*
* Hybrid MBRs do not necessarily comply with this.
+ *
+ * Consider a bad value here to be a warning to support dd'ing
+ * an image from a smaller disk to a larger disk.
*/
if (ret == GPT_MBR_PROTECTIVE) {
sz = le32_to_cpu(mbr->partition_record[part].size_in_lba);
if (sz != (uint32_t) total_sectors - 1 && sz != 0xFFFFFFFF)
- ret = 0;
+ pr_debug("GPT: mbr size in lba (%u) different than whole disk (%u).\n",
+ sz, min_t(uint32_t,
+ total_sectors - 1, 0xFFFFFFFF));
}
done:
return ret;
are configured, ACPI is used.
The project home page for the Linux ACPI subsystem is here:
- <http://www.lesswatts.org/projects/acpi/>
+ <https://01.org/linux-acpi>
Linux support for ACPI is based on Intel Corporation's ACPI
Component Architecture (ACPI CA). For more information on the
default y
help
This driver handles events on the power, sleep, and lid buttons.
- A daemon reads /proc/acpi/event and perform user-defined actions
- such as shutting down the system. This is necessary for
- software-controlled poweroff.
+ A daemon reads events from input devices or via netlink and
+ performs user-defined actions such as shutting down the system.
+ This is necessary for software-controlled poweroff.
To compile this driver as a module, choose M here:
the module will be called button.
#include <linux/ipmi.h>
#include <linux/device.h>
#include <linux/pnp.h>
+#include <linux/spinlock.h>
MODULE_AUTHOR("Zhao Yakui");
MODULE_DESCRIPTION("ACPI IPMI Opregion driver");
struct list_head head;
/* the IPMI request message list */
struct list_head tx_msg_list;
- struct mutex tx_msg_lock;
+ spinlock_t tx_msg_lock;
acpi_handle handle;
struct pnp_dev *pnp_dev;
ipmi_user_t user_interface;
struct kernel_ipmi_msg *msg;
struct acpi_ipmi_buffer *buffer;
struct acpi_ipmi_device *device;
+ unsigned long flags;
msg = &tx_msg->tx_message;
/*
/* Get the msgid */
device = tx_msg->device;
- mutex_lock(&device->tx_msg_lock);
+ spin_lock_irqsave(&device->tx_msg_lock, flags);
device->curr_msgid++;
tx_msg->tx_msgid = device->curr_msgid;
- mutex_unlock(&device->tx_msg_lock);
+ spin_unlock_irqrestore(&device->tx_msg_lock, flags);
}
static void acpi_format_ipmi_response(struct acpi_ipmi_msg *msg,
int msg_found = 0;
struct acpi_ipmi_msg *tx_msg;
struct pnp_dev *pnp_dev = ipmi_device->pnp_dev;
+ unsigned long flags;
if (msg->user != ipmi_device->user_interface) {
dev_warn(&pnp_dev->dev, "Unexpected response is returned. "
ipmi_free_recv_msg(msg);
return;
}
- mutex_lock(&ipmi_device->tx_msg_lock);
+ spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags);
list_for_each_entry(tx_msg, &ipmi_device->tx_msg_list, head) {
if (msg->msgid == tx_msg->tx_msgid) {
msg_found = 1;
}
}
- mutex_unlock(&ipmi_device->tx_msg_lock);
+ spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags);
if (!msg_found) {
dev_warn(&pnp_dev->dev, "Unexpected response (msg id %ld) is "
"returned.\n", msg->msgid);
struct acpi_ipmi_device *ipmi_device = handler_context;
int err, rem_time;
acpi_status status;
+ unsigned long flags;
/*
* IPMI opregion message.
* IPMI message is firstly written to the BMC and system software
return AE_NO_MEMORY;
acpi_format_ipmi_msg(tx_msg, address, value);
- mutex_lock(&ipmi_device->tx_msg_lock);
+ spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags);
list_add_tail(&tx_msg->head, &ipmi_device->tx_msg_list);
- mutex_unlock(&ipmi_device->tx_msg_lock);
+ spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags);
err = ipmi_request_settime(ipmi_device->user_interface,
&tx_msg->addr,
tx_msg->tx_msgid,
status = AE_OK;
end_label:
- mutex_lock(&ipmi_device->tx_msg_lock);
+ spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags);
list_del(&tx_msg->head);
- mutex_unlock(&ipmi_device->tx_msg_lock);
+ spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags);
kfree(tx_msg);
return status;
}
INIT_LIST_HEAD(&ipmi_device->head);
- mutex_init(&ipmi_device->tx_msg_lock);
+ spin_lock_init(&ipmi_device->tx_msg_lock);
INIT_LIST_HEAD(&ipmi_device->tx_msg_list);
ipmi_install_space_handler(ipmi_device);
}
}
EXPORT_SYMBOL_GPL(acpi_dev_pm_detach);
-
-/**
- * acpi_dev_pm_add_dependent - Add physical device depending for PM.
- * @handle: Handle of ACPI device node.
- * @depdev: Device depending on that node for PM.
- */
-void acpi_dev_pm_add_dependent(acpi_handle handle, struct device *depdev)
-{
- struct acpi_device_physical_node *dep;
- struct acpi_device *adev;
-
- if (!depdev || acpi_bus_get_device(handle, &adev))
- return;
-
- mutex_lock(&adev->physical_node_lock);
-
- list_for_each_entry(dep, &adev->power_dependent, node)
- if (dep->dev == depdev)
- goto out;
-
- dep = kzalloc(sizeof(*dep), GFP_KERNEL);
- if (dep) {
- dep->dev = depdev;
- list_add_tail(&dep->node, &adev->power_dependent);
- }
-
- out:
- mutex_unlock(&adev->physical_node_lock);
-}
-EXPORT_SYMBOL_GPL(acpi_dev_pm_add_dependent);
-
-/**
- * acpi_dev_pm_remove_dependent - Remove physical device depending for PM.
- * @handle: Handle of ACPI device node.
- * @depdev: Device depending on that node for PM.
- */
-void acpi_dev_pm_remove_dependent(acpi_handle handle, struct device *depdev)
-{
- struct acpi_device_physical_node *dep;
- struct acpi_device *adev;
-
- if (!depdev || acpi_bus_get_device(handle, &adev))
- return;
-
- mutex_lock(&adev->physical_node_lock);
-
- list_for_each_entry(dep, &adev->power_dependent, node)
- if (dep->dev == depdev) {
- list_del(&dep->node);
- kfree(dep);
- break;
- }
-
- mutex_unlock(&adev->physical_node_lock);
-}
-EXPORT_SYMBOL_GPL(acpi_dev_pm_remove_dependent);
#endif /* CONFIG_PM */
#define ACPI_POWER_RESOURCE_STATE_ON 0x01
#define ACPI_POWER_RESOURCE_STATE_UNKNOWN 0xFF
-struct acpi_power_dependent_device {
- struct list_head node;
- struct acpi_device *adev;
- struct work_struct work;
-};
-
struct acpi_power_resource {
struct acpi_device device;
struct list_head list_node;
- struct list_head dependent;
char *name;
u32 system_level;
u32 order;
return 0;
}
-static void acpi_power_resume_dependent(struct work_struct *work)
-{
- struct acpi_power_dependent_device *dep;
- struct acpi_device_physical_node *pn;
- struct acpi_device *adev;
- int state;
-
- dep = container_of(work, struct acpi_power_dependent_device, work);
- adev = dep->adev;
- if (acpi_power_get_inferred_state(adev, &state))
- return;
-
- if (state > ACPI_STATE_D0)
- return;
-
- mutex_lock(&adev->physical_node_lock);
-
- list_for_each_entry(pn, &adev->physical_node_list, node)
- pm_request_resume(pn->dev);
-
- list_for_each_entry(pn, &adev->power_dependent, node)
- pm_request_resume(pn->dev);
-
- mutex_unlock(&adev->physical_node_lock);
-}
-
static int __acpi_power_on(struct acpi_power_resource *resource)
{
acpi_status status = AE_OK;
resource->name));
} else {
result = __acpi_power_on(resource);
- if (result) {
+ if (result)
resource->ref_count--;
- } else {
- struct acpi_power_dependent_device *dep;
-
- list_for_each_entry(dep, &resource->dependent, node)
- schedule_work(&dep->work);
- }
}
return result;
}
return result;
}
-static void acpi_power_add_dependent(struct acpi_power_resource *resource,
- struct acpi_device *adev)
-{
- struct acpi_power_dependent_device *dep;
-
- mutex_lock(&resource->resource_lock);
-
- list_for_each_entry(dep, &resource->dependent, node)
- if (dep->adev == adev)
- goto out;
-
- dep = kzalloc(sizeof(*dep), GFP_KERNEL);
- if (!dep)
- goto out;
-
- dep->adev = adev;
- INIT_WORK(&dep->work, acpi_power_resume_dependent);
- list_add_tail(&dep->node, &resource->dependent);
-
- out:
- mutex_unlock(&resource->resource_lock);
-}
-
-static void acpi_power_remove_dependent(struct acpi_power_resource *resource,
- struct acpi_device *adev)
-{
- struct acpi_power_dependent_device *dep;
- struct work_struct *work = NULL;
-
- mutex_lock(&resource->resource_lock);
-
- list_for_each_entry(dep, &resource->dependent, node)
- if (dep->adev == adev) {
- list_del(&dep->node);
- work = &dep->work;
- break;
- }
-
- mutex_unlock(&resource->resource_lock);
-
- if (work) {
- cancel_work_sync(work);
- kfree(dep);
- }
-}
-
static struct attribute *attrs[] = {
NULL,
};
void acpi_power_add_remove_device(struct acpi_device *adev, bool add)
{
- struct acpi_device_power_state *ps;
- struct acpi_power_resource_entry *entry;
int state;
if (adev->wakeup.flags.valid)
if (!adev->power.flags.power_resources)
return;
- ps = &adev->power.states[ACPI_STATE_D0];
- list_for_each_entry(entry, &ps->resources, node) {
- struct acpi_power_resource *resource = entry->resource;
-
- if (add)
- acpi_power_add_dependent(resource, adev);
- else
- acpi_power_remove_dependent(resource, adev);
- }
-
for (state = ACPI_STATE_D0; state <= ACPI_STATE_D3_HOT; state++)
acpi_power_expose_hide(adev,
&adev->power.states[state].resources,
acpi_init_device_object(device, handle, ACPI_BUS_TYPE_POWER,
ACPI_STA_DEFAULT);
mutex_init(&resource->resource_lock);
- INIT_LIST_HEAD(&resource->dependent);
INIT_LIST_HEAD(&resource->list_node);
resource->name = device->pnp.bus_id;
strcpy(acpi_device_name(device), ACPI_POWER_DEVICE_NAME);
mutex_lock(&resource->resource_lock);
result = acpi_power_get_state(resource->device.handle, &state);
- if (result)
+ if (result) {
+ mutex_unlock(&resource->resource_lock);
continue;
+ }
if (state == ACPI_POWER_RESOURCE_STATE_OFF
&& resource->ref_count) {
}
return 0;
}
-EXPORT_SYMBOL_GPL(acpi_bus_get_device);
+EXPORT_SYMBOL(acpi_bus_get_device);
int acpi_device_add(struct acpi_device *device,
void (*release)(struct device *))
INIT_LIST_HEAD(&device->wakeup_list);
INIT_LIST_HEAD(&device->physical_node_list);
mutex_init(&device->physical_node_lock);
- INIT_LIST_HEAD(&device->power_dependent);
new_bus_id = kzalloc(sizeof(struct acpi_device_bus_id), GFP_KERNEL);
if (!new_bus_id) {
EXPORT_SYMBOL(acpi_bus_register_driver);
/**
- * acpi_bus_unregister_driver - unregisters a driver with the APIC bus
+ * acpi_bus_unregister_driver - unregisters a driver with the ACPI bus
* @driver: driver to unregister
*
* Unregisters a driver with the ACPI bus. Searches the namespace for all
if (!(hpriv->cap & HOST_CAP_SSS) || ahci_ignore_sss)
host->flags |= ATA_HOST_PARALLEL_SCAN;
else
- printk(KERN_INFO "ahci: SSS flag set, parallel bus scan disabled\n");
+ dev_info(&pdev->dev, "SSS flag set, parallel bus scan disabled\n");
if (pi.flags & ATA_FLAG_EM)
ahci_reset_em(host);
if (!(hpriv->cap & HOST_CAP_SSS) || ahci_ignore_sss)
host->flags |= ATA_HOST_PARALLEL_SCAN;
else
- printk(KERN_INFO "ahci: SSS flag set, parallel bus scan disabled\n");
+ dev_info(dev, "SSS flag set, parallel bus scan disabled\n");
if (pi.flags & ATA_FLAG_EM)
ahci_reset_em(host);
rc = ap->ops->transmit_led_message(ap,
emp->led_state,
4);
+ /*
+ * If busy, give a breather but do not
+ * release EH ownership by using msleep()
+ * instead of ata_msleep(). EM Transmit
+ * bit is busy for the whole host and
+ * releasing ownership will cause other
+ * ports to fail the same way.
+ */
if (rc == -EBUSY)
- ata_msleep(ap, 1);
+ msleep(1);
else
break;
}
{
ata_acpi_clear_gtf(dev);
}
-
-void ata_scsi_acpi_bind(struct ata_device *dev)
-{
- acpi_handle handle = ata_dev_acpi_handle(dev);
- if (handle)
- acpi_dev_pm_add_dependent(handle, &dev->sdev->sdev_gendev);
-}
-
-void ata_scsi_acpi_unbind(struct ata_device *dev)
-{
- acpi_handle handle = ata_dev_acpi_handle(dev);
- if (handle)
- acpi_dev_pm_remove_dependent(handle, &dev->sdev->sdev_gendev);
-}
* should be retried. To be used from EH.
*
* SCSI midlayer limits the number of retries to scmd->allowed.
- * scmd->retries is decremented for commands which get retried
+ * scmd->allowed is incremented for commands which get retried
* due to unrelated failures (qc->err_mask is zero).
*/
void ata_eh_qc_retry(struct ata_queued_cmd *qc)
{
struct scsi_cmnd *scmd = qc->scsicmd;
- if (!qc->err_mask && scmd->retries)
- scmd->retries--;
+ if (!qc->err_mask)
+ scmd->allowed++;
__ata_eh_qc_complete(qc);
}
if (!IS_ERR(sdev)) {
dev->sdev = sdev;
scsi_device_put(sdev);
- ata_scsi_acpi_bind(dev);
} else {
dev->sdev = NULL;
}
struct scsi_device *sdev;
unsigned long flags;
- ata_scsi_acpi_unbind(dev);
-
/* Alas, we need to grab scan_mutex to ensure SCSI device
* state doesn't change underneath us and thus
* scsi_device_get() always succeeds. The mutex locking can
extern void ata_acpi_bind_port(struct ata_port *ap);
extern void ata_acpi_bind_dev(struct ata_device *dev);
extern acpi_handle ata_dev_acpi_handle(struct ata_device *dev);
-extern void ata_scsi_acpi_bind(struct ata_device *dev);
-extern void ata_scsi_acpi_unbind(struct ata_device *dev);
#else
static inline void ata_acpi_dissociate(struct ata_host *host) { }
static inline int ata_acpi_on_suspend(struct ata_port *ap) { return 0; }
pm_message_t state) { }
static inline void ata_acpi_bind_port(struct ata_port *ap) {}
static inline void ata_acpi_bind_dev(struct ata_device *dev) {}
-static inline void ata_scsi_acpi_bind(struct ata_device *dev) {}
-static inline void ata_scsi_acpi_unbind(struct ata_device *dev) {}
#endif
/* libata-scsi.c */
ap->ioaddr.cmd_addr = cmd_addr;
- if (pnp_port_valid(idev, 1) == 0) {
+ if (pnp_port_valid(idev, 1)) {
ctl_addr = devm_ioport_map(&idev->dev,
pnp_port_start(idev, 1), 1);
ap->ioaddr.altstatus_addr = ctl_addr;
* sata_promise.c - Promise SATA
*
* Maintained by: Tejun Heo <tj@kernel.org>
- * Mikael Pettersson <mikpe@it.uu.se>
+ * Mikael Pettersson
* Please ALWAYS copy linux-ide@vger.kernel.org
* on emails.
*
.id_table = he_pci_tbl,
};
-static int __init he_init(void)
-{
- return pci_register_driver(&he_driver);
-}
-
-static void __exit he_cleanup(void)
-{
- pci_unregister_driver(&he_driver);
-}
-
-module_init(he_init);
-module_exit(he_cleanup);
+module_pci_driver(he_driver);
return error;
}
- if (mac[i] == NULL || mac_pton(mac[i], card->atmdev->esi)) {
+ if (mac[i] == NULL || !mac_pton(mac[i], card->atmdev->esi)) {
nicstar_read_eprom(card->membase, NICSTAR_EPROM_MAC_ADDR_OFFSET,
card->atmdev->esi, 6);
if (memcmp(card->atmdev->esi, "\x00\x00\x00\x00\x00\x00", 6) ==
*/
void device_shutdown(void)
{
- struct device *dev;
+ struct device *dev, *parent;
spin_lock(&devices_kset->list_lock);
/*
* prevent it from being freed because parent's
* lock is to be held
*/
- get_device(dev->parent);
+ parent = get_device(dev->parent);
get_device(dev);
/*
* Make sure the device is off the kset list, in the
spin_unlock(&devices_kset->list_lock);
/* hold lock to avoid race with probe/release */
- if (dev->parent)
- device_lock(dev->parent);
+ if (parent)
+ device_lock(parent);
device_lock(dev);
/* Don't allow any more runtime suspends */
}
device_unlock(dev);
- if (dev->parent)
- device_unlock(dev->parent);
+ if (parent)
+ device_unlock(parent);
put_device(dev);
- put_device(dev->parent);
+ put_device(parent);
spin_lock(&devices_kset->list_lock);
}
online_type = ONLINE_KEEP;
else if (!strncmp(buf, "offline", min_t(int, count, 7)))
online_type = -1;
- else
- return -EINVAL;
+ else {
+ ret = -EINVAL;
+ goto err;
+ }
switch (online_type) {
case ONLINE_KERNEL:
ret = -EINVAL; /* should never happen */
}
+err:
unlock_device_hotplug();
if (ret)
/* lsb */
unsigned int shift;
unsigned int reg;
+
+ unsigned int id_size;
+ unsigned int id_offset;
};
#ifdef CONFIG_DEBUG_FS
rm_field->reg = reg_field.reg;
rm_field->shift = reg_field.lsb;
rm_field->mask = ((BIT(field_bits) - 1) << reg_field.lsb);
+ rm_field->id_size = reg_field.id_size;
+ rm_field->id_offset = reg_field.id_offset;
}
/**
}
EXPORT_SYMBOL_GPL(regmap_field_write);
+/**
+ * regmap_field_update_bits(): Perform a read/modify/write cycle
+ * on the register field
+ *
+ * @field: Register field to write to
+ * @mask: Bitmask to change
+ * @val: Value to be written
+ *
+ * A value of zero will be returned on success, a negative errno will
+ * be returned in error cases.
+ */
+int regmap_field_update_bits(struct regmap_field *field, unsigned int mask, unsigned int val)
+{
+ mask = (mask << field->shift) & field->mask;
+
+ return regmap_update_bits(field->regmap, field->reg,
+ mask, val << field->shift);
+}
+EXPORT_SYMBOL_GPL(regmap_field_update_bits);
+
+/**
+ * regmap_fields_write(): Write a value to a single register field with port ID
+ *
+ * @field: Register field to write to
+ * @id: port ID
+ * @val: Value to be written
+ *
+ * A value of zero will be returned on success, a negative errno will
+ * be returned in error cases.
+ */
+int regmap_fields_write(struct regmap_field *field, unsigned int id,
+ unsigned int val)
+{
+ if (id >= field->id_size)
+ return -EINVAL;
+
+ return regmap_update_bits(field->regmap,
+ field->reg + (field->id_offset * id),
+ field->mask, val << field->shift);
+}
+EXPORT_SYMBOL_GPL(regmap_fields_write);
+
+/**
+ * regmap_fields_update_bits(): Perform a read/modify/write cycle
+ * on the register field
+ *
+ * @field: Register field to write to
+ * @id: port ID
+ * @mask: Bitmask to change
+ * @val: Value to be written
+ *
+ * A value of zero will be returned on success, a negative errno will
+ * be returned in error cases.
+ */
+int regmap_fields_update_bits(struct regmap_field *field, unsigned int id,
+ unsigned int mask, unsigned int val)
+{
+ if (id >= field->id_size)
+ return -EINVAL;
+
+ mask = (mask << field->shift) & field->mask;
+
+ return regmap_update_bits(field->regmap,
+ field->reg + (field->id_offset * id),
+ mask, val << field->shift);
+}
+EXPORT_SYMBOL_GPL(regmap_fields_update_bits);
+
/*
* regmap_bulk_write(): Write multiple registers to the device
*
EXPORT_SYMBOL_GPL(regmap_field_read);
/**
+ * regmap_fields_read(): Read a value from a single register field with port ID
+ *
+ * @field: Register field to read from
+ * @id: port ID
+ * @val: Pointer to store read value
+ *
+ * A value of zero will be returned on success, a negative errno will
+ * be returned in error cases.
+ */
+int regmap_fields_read(struct regmap_field *field, unsigned int id,
+ unsigned int *val)
+{
+ int ret;
+ unsigned int reg_val;
+
+ if (id >= field->id_size)
+ return -EINVAL;
+
+ ret = regmap_read(field->regmap,
+ field->reg + (field->id_offset * id),
+ &reg_val);
+ if (ret != 0)
+ return ret;
+
+ reg_val &= field->mask;
+ reg_val >>= field->shift;
+ *val = reg_val;
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(regmap_fields_read);
+
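The regmap_fields_*() helpers added above extend regmap_field to fields that are replicated once per port: id_size is the number of instances and id_offset the register stride between them, so instance "id" lives at field->reg + id_offset * id. Below is a minimal usage sketch, illustrative only and not part of the patch; it assumes struct reg_field in regmap.h gains matching id_size/id_offset members (as in the corresponding header change) and uses an invented register layout.

    #include <linux/err.h>
    #include <linux/regmap.h>

    /* Hypothetical device: four ports, per-port control registers 0x10 apart,
     * each with an "enable" field in bits [3:0]. */
    static int example_set_port_enable(struct regmap *map, unsigned int port,
                                       unsigned int enable)
    {
            const struct reg_field enable_field = {
                    .reg       = 0x100,     /* register of port 0 */
                    .lsb       = 0,
                    .msb       = 3,
                    .id_size   = 4,         /* number of port instances */
                    .id_offset = 0x10,      /* stride between per-port registers */
            };
            struct regmap_field *f;
            unsigned int readback;
            int ret;

            f = regmap_field_alloc(map, enable_field);
            if (IS_ERR(f))
                    return PTR_ERR(f);

            /* Touches only bits [3:0] of register 0x100 + 0x10 * port */
            ret = regmap_fields_write(f, port, enable);
            if (!ret)
                    ret = regmap_fields_read(f, port, &readback);

            regmap_field_free(f);
            return ret;
    }

regmap_field_update_bits(), also added above, behaves like regmap_update_bits() for a single-instance field, with the mask expressed relative to the field's LSB.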
+/**
* regmap_bulk_read(): Read multiple registers from the device
*
* @map: Register map to write to
}
}
-static void bcma_core_pci_power_save(struct bcma_drv_pci *pc, bool up)
-{
- u16 data;
-
- if (pc->core->id.rev >= 15 && pc->core->id.rev <= 20) {
- data = up ? 0x74 : 0x7C;
- bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
- BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7F64);
- bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
- BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
- } else if (pc->core->id.rev >= 21 && pc->core->id.rev <= 22) {
- data = up ? 0x75 : 0x7D;
- bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
- BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7E65);
- bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
- BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
- }
-}
-
/**************************************************
* Init.
**************************************************/
bcma_core_pci_clientmode_init(pc);
}
+void bcma_core_pci_power_save(struct bcma_bus *bus, bool up)
+{
+ struct bcma_drv_pci *pc;
+ u16 data;
+
+ if (bus->hosttype != BCMA_HOSTTYPE_PCI)
+ return;
+
+ pc = &bus->drv_pci[0];
+
+ if (pc->core->id.rev >= 15 && pc->core->id.rev <= 20) {
+ data = up ? 0x74 : 0x7C;
+ bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
+ BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7F64);
+ bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
+ BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
+ } else if (pc->core->id.rev >= 21 && pc->core->id.rev <= 22) {
+ data = up ? 0x75 : 0x7D;
+ bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
+ BCMA_CORE_PCI_MDIO_BLK1_MGMT1, 0x7E65);
+ bcma_pcie_mdio_writeread(pc, BCMA_CORE_PCI_MDIO_BLK1,
+ BCMA_CORE_PCI_MDIO_BLK1_MGMT3, data);
+ }
+}
+EXPORT_SYMBOL_GPL(bcma_core_pci_power_save);
+
int bcma_core_pci_irq_ctl(struct bcma_drv_pci *pc, struct bcma_device *core,
bool enable)
{
pc = &bus->drv_pci[0];
- bcma_core_pci_power_save(pc, true);
-
bcma_core_pci_extend_L1timer(pc, true);
}
EXPORT_SYMBOL_GPL(bcma_core_pci_up);
pc = &bus->drv_pci[0];
bcma_core_pci_extend_L1timer(pc, false);
-
- bcma_core_pci_power_save(pc, false);
}
EXPORT_SYMBOL_GPL(bcma_core_pci_down);
return NULL;
}
+#define IS_ERR_VALUE_U32(x) ((x) >= (u32)-MAX_ERRNO)
+
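IS_ERR_VALUE_U32() is needed because bcma_erom_get_addr_desc() hands back errors in a u32: on a 64-bit kernel the generic IS_ERR_VALUE() compares against (unsigned long)-MAX_ERRNO, and a 32-bit error value zero-extends to a small unsigned long, so the test never fires. A standalone sketch of the difference (illustrative only; the macros approximate the kernel definitions, with MAX_ERRNO = 4095):

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_ERRNO               4095
    #define IS_ERR_VALUE(x)         ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)
    #define IS_ERR_VALUE_U32(x)     ((x) >= (uint32_t)-MAX_ERRNO)

    int main(void)
    {
            uint32_t err = (uint32_t)-84;   /* e.g. -EILSEQ stored in a u32 */

            /* On LP64, err zero-extends to 0x00000000ffffffac, far below
             * (unsigned long)-MAX_ERRNO, so the generic check misses it. */
            printf("generic IS_ERR_VALUE: %d\n", (int)IS_ERR_VALUE(err));
            printf("IS_ERR_VALUE_U32:     %d\n", (int)IS_ERR_VALUE_U32(err));
            return 0;
    }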
static int bcma_get_next_core(struct bcma_bus *bus, u32 __iomem **eromptr,
struct bcma_device_id *match, int core_num,
struct bcma_device *core)
* the main register space for the core
*/
tmp = bcma_erom_get_addr_desc(bus, eromptr, SCAN_ADDR_TYPE_SLAVE, 0);
- if (tmp == 0 || IS_ERR_VALUE(tmp)) {
+ if (tmp == 0 || IS_ERR_VALUE_U32(tmp)) {
/* Try again to see if it is a bridge */
tmp = bcma_erom_get_addr_desc(bus, eromptr,
SCAN_ADDR_TYPE_BRIDGE, 0);
- if (tmp == 0 || IS_ERR_VALUE(tmp)) {
+ if (tmp == 0 || IS_ERR_VALUE_U32(tmp)) {
return -EILSEQ;
} else {
bcma_info(bus, "Bridge found\n");
for (j = 0; ; j++) {
tmp = bcma_erom_get_addr_desc(bus, eromptr,
SCAN_ADDR_TYPE_SLAVE, i);
- if (IS_ERR_VALUE(tmp)) {
+ if (IS_ERR_VALUE_U32(tmp)) {
/* no more entries for port _i_ */
/* pr_debug("erom: slave port %d "
* "has %d descriptors\n", i, j); */
for (j = 0; ; j++) {
tmp = bcma_erom_get_addr_desc(bus, eromptr,
SCAN_ADDR_TYPE_MWRAP, i);
- if (IS_ERR_VALUE(tmp)) {
+ if (IS_ERR_VALUE_U32(tmp)) {
/* no more entries for port _i_ */
/* pr_debug("erom: master wrapper %d "
* "has %d descriptors\n", i, j); */
for (j = 0; ; j++) {
tmp = bcma_erom_get_addr_desc(bus, eromptr,
SCAN_ADDR_TYPE_SWRAP, i + hack);
- if (IS_ERR_VALUE(tmp)) {
+ if (IS_ERR_VALUE_U32(tmp)) {
/* no more entries for port _i_ */
/* pr_debug("erom: master wrapper %d "
* has %d descriptors\n", i, j); */
int err;
u32 cp;
+ memset(&arg64, 0, sizeof(arg64));
err = 0;
err |=
copy_from_user(&arg64.LUN_info, &arg32->LUN_info,
ida_pci_info_struct pciinfo;
if (!arg) return -EINVAL;
+ memset(&pciinfo, 0, sizeof(pciinfo));
pciinfo.bus = host->pci_dev->bus->number;
pciinfo.dev_fn = host->pci_dev->devfn;
pciinfo.board_id = host->board_id;
u64 snap_id)
{
u32 which;
+ const char *snap_name;
which = rbd_dev_snap_index(rbd_dev, snap_id);
if (which == BAD_SNAP_INDEX)
- return NULL;
+ return ERR_PTR(-ENOENT);
- return _rbd_dev_v1_snap_name(rbd_dev, which);
+ snap_name = _rbd_dev_v1_snap_name(rbd_dev, which);
+ return snap_name ? snap_name : ERR_PTR(-ENOMEM);
}
static const char *rbd_snap_name(struct rbd_device *rbd_dev, u64 snap_id)
obj_request_done_set(obj_request);
}
-static int rbd_obj_notify_ack(struct rbd_device *rbd_dev, u64 notify_id)
+static int rbd_obj_notify_ack_sync(struct rbd_device *rbd_dev, u64 notify_id)
{
struct rbd_obj_request *obj_request;
struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
obj_request->osd_req = rbd_osd_req_create(rbd_dev, false, obj_request);
if (!obj_request->osd_req)
goto out;
- obj_request->callback = rbd_obj_request_put;
osd_req_op_watch_init(obj_request->osd_req, 0, CEPH_OSD_OP_NOTIFY_ACK,
notify_id, 0, 0);
rbd_osd_req_format_read(obj_request);
ret = rbd_obj_request_submit(osdc, obj_request);
-out:
if (ret)
- rbd_obj_request_put(obj_request);
+ goto out;
+ ret = rbd_obj_request_wait(obj_request);
+out:
+ rbd_obj_request_put(obj_request);
return ret;
}
if (ret)
rbd_warn(rbd_dev, "header refresh error (%d)\n", ret);
- rbd_obj_notify_ack(rbd_dev, notify_id);
+ rbd_obj_notify_ack_sync(rbd_dev, notify_id);
}
/*
clear_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags);
}
+static void rbd_dev_update_size(struct rbd_device *rbd_dev)
+{
+ sector_t size;
+ bool removing;
+
+ /*
+ * Don't hold the lock while doing disk operations,
+ * or lock ordering will conflict with the bdev mutex via:
+ * rbd_add() -> blkdev_get() -> rbd_open()
+ */
+ spin_lock_irq(&rbd_dev->lock);
+ removing = test_bit(RBD_DEV_FLAG_REMOVING, &rbd_dev->flags);
+ spin_unlock_irq(&rbd_dev->lock);
+ /*
+ * If the device is being removed, rbd_dev->disk has
+ * been destroyed, so don't try to update its size
+ */
+ if (!removing) {
+ size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
+ dout("setting size to %llu sectors", (unsigned long long)size);
+ set_capacity(rbd_dev->disk, size);
+ revalidate_disk(rbd_dev->disk);
+ }
+}
+
static int rbd_dev_refresh(struct rbd_device *rbd_dev)
{
u64 mapping_size;
up_write(&rbd_dev->header_rwsem);
if (mapping_size != rbd_dev->mapping.size) {
- sector_t size;
-
- size = (sector_t)rbd_dev->mapping.size / SECTOR_SIZE;
- dout("setting size to %llu sectors", (unsigned long long)size);
- set_capacity(rbd_dev->disk, size);
- revalidate_disk(rbd_dev->disk);
+ rbd_dev_update_size(rbd_dev);
}
return ret;
snap_id = snapc->snaps[which];
snap_name = rbd_dev_v2_snap_name(rbd_dev, snap_id);
- if (IS_ERR(snap_name))
- break;
+ if (IS_ERR(snap_name)) {
+ /* ignore no-longer existing snapshots */
+ if (PTR_ERR(snap_name) == -ENOENT)
+ continue;
+ else
+ break;
+ }
found = !strcmp(name, snap_name);
kfree(snap_name);
}
/* Look up the snapshot name, and make a copy */
snap_name = rbd_snap_name(rbd_dev, spec->snap_id);
- if (!snap_name) {
- ret = -ENOMEM;
+ if (IS_ERR(snap_name)) {
+ ret = PTR_ERR(snap_name);
goto out_err;
}
if (ret < 0 || already)
return ret;
- rbd_bus_del_dev(rbd_dev);
ret = rbd_dev_header_watch_sync(rbd_dev, false);
if (ret)
rbd_warn(rbd_dev, "failed to cancel watch event (%d)\n", ret);
+
+ /*
+ * flush remaining watch callbacks - these must be complete
+ * before the osd_client is shutdown
+ */
+ dout("%s: flushing notifies", __func__);
+ ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
+ /*
+ * Don't free anything from rbd_dev->disk until after all
+ * notifies are completely processed. Otherwise
+ * rbd_bus_del_dev() will race with rbd_watch_cb(), resulting
+ * in a potential use after free of rbd_dev->disk or rbd_dev.
+ */
+ rbd_bus_del_dev(rbd_dev);
rbd_dev_image_release(rbd_dev);
module_put(THIS_MODULE);
{ USB_DEVICE(0x04CA, 0x3008) },
{ USB_DEVICE(0x13d3, 0x3362) },
{ USB_DEVICE(0x0CF3, 0xE004) },
+ { USB_DEVICE(0x0CF3, 0xE005) },
{ USB_DEVICE(0x0930, 0x0219) },
{ USB_DEVICE(0x0489, 0xe057) },
{ USB_DEVICE(0x13d3, 0x3393) },
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
/* Broadcom BCM20702A0 */
{ USB_DEVICE(0x0b05, 0x17b5) },
+ { USB_DEVICE(0x0b05, 0x17cb) },
{ USB_DEVICE(0x04ca, 0x2003) },
{ USB_DEVICE(0x0489, 0xe042) },
{ USB_DEVICE(0x413c, 0x8197) },
/*Broadcom devices with vendor specific id */
{ USB_VENDOR_AND_INTERFACE_INFO(0x0a5c, 0xff, 0x01, 0x01) },
+ /* Belkin F8065bf - Broadcom based */
+ { USB_VENDOR_AND_INTERFACE_INFO(0x050d, 0xff, 0x01, 0x01) },
+
{ } /* Terminating entry */
};
{ USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe005), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
phys_addr_t sdramwins_phys_base,
size_t sdramwins_size)
{
+ struct device_node *np;
int win;
mbus->mbuswins_base = ioremap(mbuswins_phys_base, mbuswins_size);
return -ENOMEM;
}
- if (of_find_compatible_node(NULL, NULL, "marvell,coherency-fabric"))
+ np = of_find_compatible_node(NULL, NULL, "marvell,coherency-fabric");
+ if (np) {
mbus->hw_io_coherency = 1;
+ of_node_put(np);
+ }
for (win = 0; win < mbus->soc->num_wins; win++)
mvebu_mbus_disable_window(mbus, win);
int ret;
/*
- * These are optional, so we clear them and they'll
- * be zero if they are missing from the DT.
+ * These are optional, so we make sure that resource_size(x) will
+ * return 0: resource_size() is end - start + 1, so clearing the
+ * struct and then setting end to -1 describes an empty resource.
*/
memset(mem, 0, sizeof(struct resource));
+ mem->end = -1;
memset(io, 0, sizeof(struct resource));
+ io->end = -1;
ret = of_property_read_u32_array(np, "pcie-mem-aperture", reg, ARRAY_SIZE(reg));
if (!ret) {
*/
void add_device_randomness(const void *buf, unsigned int size)
{
- unsigned long time = get_cycles() ^ jiffies;
+ unsigned long time = random_get_entropy() ^ jiffies;
mix_pool_bytes(&input_pool, buf, size, NULL);
mix_pool_bytes(&input_pool, &time, sizeof(time), NULL);
goto out;
sample.jiffies = jiffies;
- sample.cycles = get_cycles();
+ sample.cycles = random_get_entropy();
sample.num = num;
mix_pool_bytes(&input_pool, &sample, sizeof(sample), NULL);
struct fast_pool *fast_pool = &__get_cpu_var(irq_randomness);
struct pt_regs *regs = get_irq_regs();
unsigned long now = jiffies;
- __u32 input[4], cycles = get_cycles();
+ __u32 input[4], cycles = random_get_entropy();
input[0] = cycles ^ jiffies;
input[1] = irq;
static u32 random_int_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned;
-static int __init random_int_secret_init(void)
+int random_int_secret_init(void)
{
get_random_bytes(random_int_secret, sizeof(random_int_secret));
return 0;
}
-late_initcall(random_int_secret_init);
/*
* Get a random word for internal kernel use only. Similar to urandom but
hash = get_cpu_var(get_random_int_hash);
- hash[0] += current->pid + jiffies + get_cycles();
+ hash[0] += current->pid + jiffies + random_get_entropy();
md5_transform(hash, random_int_secret);
ret = hash[0];
put_cpu_var(get_random_int_hash);
#include <linux/errno.h>
#include <linux/err.h>
#include <linux/interrupt.h>
+#include <xen/xen.h>
#include <xen/events.h>
#include <xen/interface/io/tpmif.h>
#include <xen/grant_table.h>
return length;
}
-ssize_t tpm_show_locality(struct device *dev, struct device_attribute *attr,
- char *buf)
-{
- struct tpm_chip *chip = dev_get_drvdata(dev);
- struct tpm_private *priv = TPM_VPRIV(chip);
- u8 locality = priv->shr->locality;
-
- return sprintf(buf, "%d\n", locality);
-}
-
-ssize_t tpm_store_locality(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t len)
-{
- struct tpm_chip *chip = dev_get_drvdata(dev);
- struct tpm_private *priv = TPM_VPRIV(chip);
- u8 val;
-
- int rv = kstrtou8(buf, 0, &val);
- if (rv)
- return rv;
-
- priv->shr->locality = val;
-
- return len;
-}
-
static const struct file_operations vtpm_ops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel);
static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL);
static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL);
-static DEVICE_ATTR(locality, S_IRUGO | S_IWUSR, tpm_show_locality,
- tpm_store_locality);
static struct attribute *vtpm_attrs[] = {
&dev_attr_pubek.attr,
&dev_attr_cancel.attr,
&dev_attr_durations.attr,
&dev_attr_timeouts.attr,
- &dev_attr_locality.attr,
NULL,
};
.attrs = vtpm_attrs,
};
-#define TPM_LONG_TIMEOUT (10 * 60 * HZ)
-
static const struct tpm_vendor_specific tpm_vtpm = {
.status = vtpm_status,
.recv = vtpm_recv,
.miscdev = {
.fops = &vtpm_ops,
},
- .duration = {
- TPM_LONG_TIMEOUT,
- TPM_LONG_TIMEOUT,
- TPM_LONG_TIMEOUT,
- },
};
static irqreturn_t tpmif_interrupt(int dummy, void *dev_id)
*/
#define SRC_CR 0x00U
+#define SRC_CR_T0_ENSEL BIT(15)
+#define SRC_CR_T1_ENSEL BIT(17)
+#define SRC_CR_T2_ENSEL BIT(19)
+#define SRC_CR_T3_ENSEL BIT(21)
+#define SRC_CR_T4_ENSEL BIT(23)
+#define SRC_CR_T5_ENSEL BIT(25)
+#define SRC_CR_T6_ENSEL BIT(27)
+#define SRC_CR_T7_ENSEL BIT(29)
#define SRC_XTALCR 0x0CU
#define SRC_XTALCR_XTALTIMEN BIT(20)
#define SRC_XTALCR_SXTALDIS BIT(19)
__func__, np->name);
return;
}
+
+ /* Set all timers to use the 2.4 MHz TIMCLK */
+ val = readl(src_base + SRC_CR);
+ val |= SRC_CR_T0_ENSEL;
+ val |= SRC_CR_T1_ENSEL;
+ val |= SRC_CR_T2_ENSEL;
+ val |= SRC_CR_T3_ENSEL;
+ val |= SRC_CR_T4_ENSEL;
+ val |= SRC_CR_T5_ENSEL;
+ val |= SRC_CR_T6_ENSEL;
+ val |= SRC_CR_T7_ENSEL;
+ writel(val, src_base + SRC_CR);
+
val = readl(src_base + SRC_XTALCR);
pr_info("SXTALO is %s\n",
(val & SRC_XTALCR_SXTALDIS) ? "disabled" : "enabled");
};
static const u32 a370_tclk_freqs[] __initconst = {
- 16600000,
- 20000000,
+ 166000000,
+ 200000000,
};
static u32 __init a370_get_tclk_freq(void __iomem *sar)
#define SOCFPGA_L4_SP_CLK "l4_sp_clk"
#define SOCFPGA_NAND_CLK "nand_clk"
#define SOCFPGA_NAND_X_CLK "nand_x_clk"
-#define SOCFPGA_MMC_CLK "mmc_clk"
+#define SOCFPGA_MMC_CLK "sdmmc_clk"
#define SOCFPGA_DB_CLK "gpio_db_clk"
#define div_mask(width) ((1 << (width)) - 1)
vco = icst_hz_to_vco(icst->params, rate);
icst->rate = icst_hz(icst->params, vco);
- vco_set(icst->vcoreg, icst->lockreg, vco);
+ vco_set(icst->lockreg, icst->vcoreg, vco);
return 0;
}
config ARMADA_370_XP_TIMER
bool
+ select CLKSRC_OF
config ORION_TIMER
select CLKSRC_OF
clocksource_of_init_fn init_func;
for_each_matching_node_and_match(np, __clksrc_of_table, &match) {
+ if (!of_device_is_available(np))
+ continue;
+
init_func = match->data;
init_func(np);
}
ced->name = dev_name(&p->pdev->dev);
ced->features = CLOCK_EVT_FEAT_ONESHOT;
ced->rating = 200;
- ced->cpumask = cpumask_of(0);
+ ced->cpumask = cpu_possible_mask;
ced->set_next_event = em_sti_clock_event_next;
ced->set_mode = em_sti_clock_event_mode;
evt->irq);
return -EIO;
}
- irq_set_affinity(evt->irq, cpumask_of(cpu));
} else {
enable_percpu_irq(mct_irqs[MCT_L0_IRQ], 0);
}
unsigned long action, void *hcpu)
{
struct mct_clock_event_device *mevt;
+ unsigned int cpu;
/*
* Grab cpu pointer in each case to avoid spurious
mevt = this_cpu_ptr(&percpu_mct_tick);
exynos4_local_timer_setup(&mevt->evt);
break;
+ case CPU_ONLINE:
+ cpu = (unsigned long)hcpu;
+ if (mct_int_type == MCT_INT_SPI)
+ irq_set_affinity(mct_irqs[MCT_L0_IRQ + cpu],
+ cpumask_of(cpu));
+ break;
case CPU_DYING:
mevt = this_cpu_ptr(&percpu_mct_tick);
exynos4_local_timer_stop(&mevt->evt);
&percpu_mct_tick);
WARN(err, "MCT: can't request IRQ %d (%d)\n",
mct_irqs[MCT_L0_IRQ], err);
+ } else {
+ irq_set_affinity(mct_irqs[MCT_L0_IRQ], cpumask_of(0));
}
err = register_cpu_notifier(&exynos4_mct_cpu_nb);
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
+ memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
msg->len = sizeof(*ev);
+ msg->flags = 0; /* not used */
/* If cn_netlink_send() failed, the data is not sent */
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
+ memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
msg->len = sizeof(*ev);
+ msg->flags = 0; /* not used */
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
+ memset(&ev->event_data, 0, sizeof(ev->event_data));
ev->what = which_id;
ev->event_data.id.process_pid = task->pid;
ev->event_data.id.process_tgid = task->tgid;
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
msg->len = sizeof(*ev);
+ msg->flags = 0; /* not used */
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
+ memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
msg->len = sizeof(*ev);
+ msg->flags = 0; /* not used */
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
+ memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
msg->len = sizeof(*ev);
+ msg->flags = 0; /* not used */
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
+ memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
msg->len = sizeof(*ev);
+ msg->flags = 0; /* not used */
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
+ memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
msg->len = sizeof(*ev);
+ msg->flags = 0; /* not used */
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
+ memset(&ev->event_data, 0, sizeof(ev->event_data));
get_seq(&msg->seq, &ev->cpu);
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = 0; /* not used */
msg->len = sizeof(*ev);
+ msg->flags = 0; /* not used */
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
msg = (struct cn_msg *)buffer;
ev = (struct proc_event *)msg->data;
+ memset(&ev->event_data, 0, sizeof(ev->event_data));
msg->seq = rcvd_seq;
ktime_get_ts(&ts); /* get high res monotonic timestamp */
put_unaligned(timespec_to_ns(&ts), (__u64 *)&ev->timestamp_ns);
memcpy(&msg->id, &cn_proc_event_id, sizeof(msg->id));
msg->ack = rcvd_ack + 1;
msg->len = sizeof(*ev);
+ msg->flags = 0; /* not used */
cn_netlink_send(msg, CN_IDX_PROC, GFP_KERNEL);
}
data = nlmsg_data(nlh);
- memcpy(data, msg, sizeof(*data) + msg->len);
+ memcpy(data, msg, size);
NETLINK_CB(skb).dst_group = group;
static void cn_rx_skb(struct sk_buff *__skb)
{
struct nlmsghdr *nlh;
- int err;
struct sk_buff *skb;
+ int len, err;
skb = skb_get(__skb);
if (skb->len >= NLMSG_HDRLEN) {
nlh = nlmsg_hdr(skb);
+ len = nlmsg_len(nlh);
- if (nlh->nlmsg_len < sizeof(struct cn_msg) ||
+ if (len < (int)sizeof(struct cn_msg) ||
skb->len < nlh->nlmsg_len ||
- nlh->nlmsg_len > CONNECTOR_MAX_MSG_SIZE) {
+ len > CONNECTOR_MAX_MSG_SIZE) {
kfree_skb(skb);
return;
}
int ret;
if (acpi_disabled)
- return 0;
+ return -ENODEV;
+
+ /* don't keep reloading if cpufreq_driver exists */
+ if (cpufreq_get_current_driver())
+ return -EEXIST;
pr_debug("acpi_cpufreq_init\n");
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/clk.h>
+#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/err.h>
#include <linux/module.h>
struct device_node *np;
int ret;
- cpu_dev = &pdev->dev;
+ cpu_dev = get_cpu_device(0);
+ if (!cpu_dev) {
+ pr_err("failed to get cpu0 device\n");
+ return -ENODEV;
+ }
np = of_node_get(cpu_dev->of_node);
if (!np) {
if (of_property_read_u32(np, "clock-latency", &transition_latency))
transition_latency = CPUFREQ_ETERNAL;
- if (cpu_reg) {
+ if (!IS_ERR(cpu_reg)) {
struct opp *opp;
unsigned long min_uV, max_uV;
int i;
if (cpu == policy->cpu)
return;
+ /*
+ * Take direct locks as lock_policy_rwsem_write wouldn't work here.
+ * Also, taking the lock for the last cpu is enough here, as contention
+ * can only happen after policy->cpu is changed; once it has changed,
+ * other threads will try to acquire the lock for the new cpu, and the
+ * policy has already been updated by then.
+ */
+ down_write(&per_cpu(cpu_policy_rwsem, policy->cpu));
+
policy->last_cpu = policy->cpu;
policy->cpu = cpu;
+ up_write(&per_cpu(cpu_policy_rwsem, policy->last_cpu));
+
#ifdef CONFIG_CPU_FREQ_TABLE
cpufreq_frequency_table_update_policy_cpu(policy);
#endif
int ret;
/* first sibling now owns the new sysfs dir */
- cpu_dev = get_cpu_device(cpumask_first(policy->cpus));
+ cpu_dev = get_cpu_device(cpumask_any_but(policy->cpus, old_cpu));
/* Don't touch sysfs files during light-weight tear-down */
if (frozen)
policy->governor->name, CPUFREQ_NAME_LEN);
#endif
- WARN_ON(lock_policy_rwsem_write(cpu));
+ lock_policy_rwsem_read(cpu);
cpus = cpumask_weight(policy->cpus);
-
- if (cpus > 1)
- cpumask_clear_cpu(cpu, policy->cpus);
- unlock_policy_rwsem_write(cpu);
+ unlock_policy_rwsem_read(cpu);
if (cpu != policy->cpu) {
if (!frozen)
new_cpu = cpufreq_nominate_new_policy_cpu(policy, cpu, frozen);
if (new_cpu >= 0) {
- WARN_ON(lock_policy_rwsem_write(cpu));
update_policy_cpu(policy, new_cpu);
- unlock_policy_rwsem_write(cpu);
if (!frozen) {
pr_debug("%s: policy Kobject moved to cpu: %d "
return -EINVAL;
}
- lock_policy_rwsem_read(cpu);
+ WARN_ON(lock_policy_rwsem_write(cpu));
cpus = cpumask_weight(policy->cpus);
- unlock_policy_rwsem_read(cpu);
+
+ if (cpus > 1)
+ cpumask_clear_cpu(cpu, policy->cpus);
+ unlock_policy_rwsem_write(cpu);
/* If cpu is last user of policy, free policy */
if (cpus == 1) {
{
unsigned int ret_freq = 0;
+ if (cpufreq_disabled() || !cpufreq_driver)
+ return -ENOENT;
+
if (!down_read_trylock(&cpufreq_rwsem))
return 0;
write_lock_irqsave(&cpufreq_driver_lock, flags);
if (cpufreq_driver) {
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
- return -EBUSY;
+ return -EEXIST;
}
cpufreq_driver = driver_data;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table);
err_put_node:
of_node_put(np);
- dev_err(dvfs_info->dev, "%s: failed initialization\n", __func__);
+ dev_err(&pdev->dev, "%s: failed initialization\n", __func__);
return ret;
}
*/
#include <linux/clk.h>
+#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/delay.h>
#include <linux/err.h>
unsigned long min_volt, max_volt;
int num, ret;
- cpu_dev = &pdev->dev;
+ cpu_dev = get_cpu_device(0);
+ if (!cpu_dev) {
+ pr_err("failed to get cpu0 device\n");
+ return -ENODEV;
+ }
np = of_node_get(cpu_dev->of_node);
if (!np) {
}
struct sample {
- int core_pct_busy;
+ int32_t core_pct_busy;
u64 aperf;
u64 mperf;
int freq;
int32_t i_gain;
int32_t d_gain;
int deadband;
- int last_err;
+ int32_t last_err;
};
struct cpudata {
pid->d_gain = div_fp(int_tofp(percent), int_tofp(100));
}
-static signed int pid_calc(struct _pid *pid, int busy)
+static signed int pid_calc(struct _pid *pid, int32_t busy)
{
- signed int err, result;
+ signed int result;
int32_t pterm, dterm, fp_error;
int32_t integral_limit;
- err = pid->setpoint - busy;
- fp_error = int_tofp(err);
+ fp_error = int_tofp(pid->setpoint) - busy;
- if (abs(err) <= pid->deadband)
+ if (abs(fp_error) <= int_tofp(pid->deadband))
return 0;
pterm = mul_fp(pid->p_gain, fp_error);
if (pid->integral < -integral_limit)
pid->integral = -integral_limit;
- dterm = mul_fp(pid->d_gain, (err - pid->last_err));
- pid->last_err = err;
+ dterm = mul_fp(pid->d_gain, fp_error - pid->last_err);
+ pid->last_err = fp_error;
result = pterm + mul_fp(pid->integral, pid->i_gain) + dterm;
static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max)
{
int max_perf = cpu->pstate.turbo_pstate;
+ int max_perf_adj;
int min_perf;
if (limits.no_turbo)
max_perf = cpu->pstate.max_pstate;
- max_perf = fp_toint(mul_fp(int_tofp(max_perf), limits.max_perf));
- *max = clamp_t(int, max_perf,
+ max_perf_adj = fp_toint(mul_fp(int_tofp(max_perf), limits.max_perf));
+ *max = clamp_t(int, max_perf_adj,
cpu->pstate.min_pstate, cpu->pstate.turbo_pstate);
min_perf = fp_toint(mul_fp(int_tofp(max_perf), limits.min_perf));
static void intel_pstate_set_pstate(struct cpudata *cpu, int pstate)
{
int max_perf, min_perf;
+ u64 val;
intel_pstate_get_min_max(cpu, &min_perf, &max_perf);
trace_cpu_frequency(pstate * 100000, cpu->cpu);
cpu->pstate.current_pstate = pstate;
- wrmsrl(MSR_IA32_PERF_CTL, pstate << 8);
+ val = pstate << 8;
+ if (limits.no_turbo)
+ val |= (u64)1 << 32;
+ wrmsrl(MSR_IA32_PERF_CTL, val);
}
static inline void intel_pstate_pstate_increase(struct cpudata *cpu, int steps)
struct sample *sample)
{
u64 core_pct;
- core_pct = div64_u64(sample->aperf * 100, sample->mperf);
- sample->freq = cpu->pstate.max_pstate * core_pct * 1000;
+ core_pct = div64_u64(int_tofp(sample->aperf * 100),
+ sample->mperf);
+ sample->freq = fp_toint(cpu->pstate.max_pstate * core_pct * 1000);
sample->core_pct_busy = core_pct;
}
mod_timer_pinned(&cpu->timer, jiffies + delay);
}
-static inline int intel_pstate_get_scaled_busy(struct cpudata *cpu)
+static inline int32_t intel_pstate_get_scaled_busy(struct cpudata *cpu)
{
- int32_t busy_scaled;
int32_t core_busy, max_pstate, current_pstate;
- core_busy = int_tofp(cpu->samples[cpu->sample_ptr].core_pct_busy);
+ core_busy = cpu->samples[cpu->sample_ptr].core_pct_busy;
max_pstate = int_tofp(cpu->pstate.max_pstate);
current_pstate = int_tofp(cpu->pstate.current_pstate);
- busy_scaled = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
-
- return fp_toint(busy_scaled);
+ return mul_fp(core_busy, div_fp(max_pstate, current_pstate));
}
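These intel_pstate changes keep core_pct_busy and the PID error in the driver's 24.8 fixed-point representation rather than truncating to whole integers, so sub-percent busy deltas still reach pid_calc(). A userspace model of the scaled-busy arithmetic follows, illustrative only: the helpers approximate the driver's int_tofp/fp_toint/mul_fp/div_fp (assuming FRAC_BITS of 8, as in the driver) and the sample numbers are invented.

    #include <stdint.h>
    #include <stdio.h>

    #define FRAC_BITS 8
    #define int_tofp(x)  ((int64_t)(x) << FRAC_BITS)
    #define fp_toint(x)  ((x) >> FRAC_BITS)

    static int32_t mul_fp(int32_t x, int32_t y)
    {
            return ((int64_t)x * (int64_t)y) >> FRAC_BITS;
    }

    static int32_t div_fp(int32_t x, int32_t y)
    {
            return ((int64_t)x << FRAC_BITS) / y;
    }

    int main(void)
    {
            /* 61.5% busy survives as a fixed-point value instead of
             * being rounded down to 61 before scaling. */
            int32_t core_busy = int_tofp(123) / 2;          /* 61.5 */
            int32_t max_pstate = int_tofp(34);
            int32_t current_pstate = int_tofp(28);
            int32_t busy_scaled = mul_fp(core_busy,
                                         div_fp(max_pstate, current_pstate));

            printf("busy_scaled = %d.%02d\n", fp_toint(busy_scaled),
                   fp_toint((busy_scaled & 0xff) * 100));
            return 0;
    }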
static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu)
{
- int busy_scaled;
+ int32_t busy_scaled;
struct _pid *pid;
signed int ctl = 0;
int steps;
static int intel_pstate_cpu_init(struct cpufreq_policy *policy)
{
- int rc, min_pstate, max_pstate;
struct cpudata *cpu;
+ int rc;
rc = intel_pstate_init_cpu(policy->cpu);
if (rc)
else
policy->policy = CPUFREQ_POLICY_POWERSAVE;
- intel_pstate_get_min_max(cpu, &min_pstate, &max_pstate);
- policy->min = min_pstate * 100000;
- policy->max = max_pstate * 100000;
+ policy->min = cpu->pstate.min_pstate * 100000;
+ policy->max = cpu->pstate.turbo_pstate * 100000;
/* cpuinfo and default policy values */
policy->cpuinfo.min_freq = cpu->pstate.min_pstate * 100000;
if (freq->frequency == CPUFREQ_ENTRY_INVALID)
continue;
- dvfs = &s3c64xx_dvfs_table[freq->index];
+ dvfs = &s3c64xx_dvfs_table[freq->driver_data];
found = 0;
for (i = 0; i < count; i++) {
unsigned int target_freq, unsigned int relation)
{
struct cpufreq_freqs freqs;
- unsigned long newfreq;
+ long newfreq;
struct clk *srcclk;
int index, ret, mult = 1;
depends on ARCH_DAVINCI || ARCH_OMAP
select DMA_ENGINE
select DMA_VIRTUAL_CHANNELS
+ select TI_PRIV_EDMA
default n
help
Enable support for the TI EDMA controller. This DMA
edma_alloc_slot(EDMA_CTLR(echan->ch_num),
EDMA_SLOT_ANY);
if (echan->slot[i] < 0) {
+ kfree(edesc);
dev_err(dev, "Failed to allocate slot\n");
return NULL;
}
}
ccnt = sg_dma_len(sg) / (acnt * bcnt);
if (ccnt > (SZ_64K - 1)) {
dev_err(dev, "Exceeded max SG segment size\n");
+ kfree(edesc);
return NULL;
}
cidx = acnt * bcnt;
}
module_exit(edma_exit);
-MODULE_AUTHOR("Matt Porter <mporter@ti.com>");
+MODULE_AUTHOR("Matt Porter <matt.porter@linaro.org>");
MODULE_DESCRIPTION("TI EDMA DMA engine driver");
MODULE_LICENSE("GPL v2");
struct imxdma_engine *imxdma = imxdmac->imxdma;
int chno = imxdmac->channel;
struct imxdma_desc *desc;
+ unsigned long flags;
- spin_lock(&imxdma->lock);
+ spin_lock_irqsave(&imxdma->lock, flags);
if (list_empty(&imxdmac->ld_active)) {
- spin_unlock(&imxdma->lock);
+ spin_unlock_irqrestore(&imxdma->lock, flags);
goto out;
}
desc = list_first_entry(&imxdmac->ld_active,
struct imxdma_desc,
node);
- spin_unlock(&imxdma->lock);
+ spin_unlock_irqrestore(&imxdma->lock, flags);
if (desc->sg) {
u32 tmp;
{
struct imxdma_channel *imxdmac = to_imxdma_chan(d->desc.chan);
struct imxdma_engine *imxdma = imxdmac->imxdma;
- unsigned long flags;
int slot = -1;
int i;
switch (d->type) {
case IMXDMA_DESC_INTERLEAVED:
/* Try to get a free 2D slot */
- spin_lock_irqsave(&imxdma->lock, flags);
for (i = 0; i < IMX_DMA_2D_SLOTS; i++) {
if ((imxdma->slots_2d[i].count > 0) &&
((imxdma->slots_2d[i].xsr != d->x) ||
slot = i;
break;
}
- if (slot < 0) {
- spin_unlock_irqrestore(&imxdma->lock, flags);
+ if (slot < 0)
return -EBUSY;
- }
imxdma->slots_2d[slot].xsr = d->x;
imxdma->slots_2d[slot].ysr = d->y;
imxdmac->slot_2d = slot;
imxdmac->enabled_2d = true;
- spin_unlock_irqrestore(&imxdma->lock, flags);
if (slot == IMX_DMA_2D_SLOT_A) {
d->config_mem &= ~CCR_MSEL_B;
struct imxdma_channel *imxdmac = (void *)data;
struct imxdma_engine *imxdma = imxdmac->imxdma;
struct imxdma_desc *desc;
+ unsigned long flags;
- spin_lock(&imxdma->lock);
+ spin_lock_irqsave(&imxdma->lock, flags);
if (list_empty(&imxdmac->ld_active)) {
/* Someone might have called terminate all */
- goto out;
+ spin_unlock_irqrestore(&imxdma->lock, flags);
+ return;
}
desc = list_first_entry(&imxdmac->ld_active, struct imxdma_desc, node);
- if (desc->desc.callback)
- desc->desc.callback(desc->desc.callback_param);
-
/* If we are dealing with a cyclic descriptor, keep it on ld_active
* and don't mark the descriptor as complete.
* Only in non-cyclic cases would it be marked as complete
__func__, imxdmac->channel);
}
out:
- spin_unlock(&imxdma->lock);
+ spin_unlock_irqrestore(&imxdma->lock, flags);
+
+ if (desc->desc.callback)
+ desc->desc.callback(desc->desc.callback_param);
+
}
static int imxdma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd,
kfree(imxdmac->sg_list);
imxdmac->sg_list = kcalloc(periods + 1,
- sizeof(struct scatterlist), GFP_KERNEL);
+ sizeof(struct scatterlist), GFP_ATOMIC);
if (!imxdmac->sg_list)
return NULL;
void __iomem *base;
const struct hpb_dmae_slave_config *cfg;
char dev_id[16]; /* unique name per DMAC of channel */
+ dma_addr_t slave_addr;
};
struct hpb_dmae_device {
hpb_chan->xfer_mode = XFER_DOUBLE;
} else {
dev_err(hpb_chan->shdma_chan.dev, "DCR setting error");
- shdma_free_irq(&hpb_chan->shdma_chan);
return -EINVAL;
}
return 0;
}
-static int hpb_dmae_set_slave(struct shdma_chan *schan, int slave_id, bool try)
+static int hpb_dmae_set_slave(struct shdma_chan *schan, int slave_id,
+ dma_addr_t slave_addr, bool try)
{
struct hpb_dmae_chan *chan = to_chan(schan);
const struct hpb_dmae_slave_config *sc =
if (try)
return 0;
chan->cfg = sc;
+ chan->slave_addr = slave_addr ? : sc->addr;
return hpb_dmae_alloc_chan_resources(chan, sc);
}
{
struct hpb_dmae_chan *chan = to_chan(schan);
- return chan->cfg->addr;
+ return chan->slave_addr;
}
static struct shdma_desc *hpb_dmae_embedded_desc(void *buf, int i)
shdma_for_each_chan(schan, &hpbdev->shdma_dev, i) {
BUG_ON(!schan);
- shdma_free_irq(schan);
shdma_chan_remove(schan);
}
dma_dev->chancnt = 0;
struct lp_gpio *lg = irq_data_get_irq_handler_data(data);
struct irq_chip *chip = irq_data_get_irq_chip(data);
u32 base, pin, mask;
- unsigned long reg, pending;
+ unsigned long reg, ena, pending;
unsigned virq;
/* check from GPIO controller which pin triggered the interrupt */
for (base = 0; base < lg->chip.ngpio; base += 32) {
reg = lp_gpio_reg(&lg->chip, base, LP_INT_STAT);
+ ena = lp_gpio_reg(&lg->chip, base, LP_INT_ENABLE);
- while ((pending = inl(reg))) {
+ while ((pending = (inl(reg) & inl(ena)))) {
pin = __ffs(pending);
mask = BIT(pin);
/* Clear before handling so we don't lose an edge */
struct gpio_chip chip;
struct clk *dbck;
u32 mod_usage;
+ u32 irq_usage;
u32 dbck_enable_mask;
bool dbck_enabled;
struct device *dev;
#define GPIO_BIT(bank, gpio) (1 << GPIO_INDEX(bank, gpio))
#define GPIO_MOD_CTRL_BIT BIT(0)
+#define BANK_USED(bank) (bank->mod_usage || bank->irq_usage)
+#define LINE_USED(line, offset) (line & (1 << offset))
+
static int irq_to_gpio(struct gpio_bank *bank, unsigned int gpio_irq)
{
return bank->chip.base + gpio_irq;
return 0;
}
+static void _enable_gpio_module(struct gpio_bank *bank, unsigned offset)
+{
+ if (bank->regs->pinctrl) {
+ void __iomem *reg = bank->base + bank->regs->pinctrl;
+
+ /* Claim the pin for MPU */
+ __raw_writel(__raw_readl(reg) | (1 << offset), reg);
+ }
+
+ if (bank->regs->ctrl && !BANK_USED(bank)) {
+ void __iomem *reg = bank->base + bank->regs->ctrl;
+ u32 ctrl;
+
+ ctrl = __raw_readl(reg);
+ /* Module is enabled, clocks are not gated */
+ ctrl &= ~GPIO_MOD_CTRL_BIT;
+ __raw_writel(ctrl, reg);
+ bank->context.ctrl = ctrl;
+ }
+}
+
+static void _disable_gpio_module(struct gpio_bank *bank, unsigned offset)
+{
+ void __iomem *base = bank->base;
+
+ if (bank->regs->wkup_en &&
+ !LINE_USED(bank->mod_usage, offset) &&
+ !LINE_USED(bank->irq_usage, offset)) {
+ /* Disable wake-up during idle for dynamic tick */
+ _gpio_rmw(base, bank->regs->wkup_en, 1 << offset, 0);
+ bank->context.wake_en =
+ __raw_readl(bank->base + bank->regs->wkup_en);
+ }
+
+ if (bank->regs->ctrl && !BANK_USED(bank)) {
+ void __iomem *reg = bank->base + bank->regs->ctrl;
+ u32 ctrl;
+
+ ctrl = __raw_readl(reg);
+ /* Module is disabled, clocks are gated */
+ ctrl |= GPIO_MOD_CTRL_BIT;
+ __raw_writel(ctrl, reg);
+ bank->context.ctrl = ctrl;
+ }
+}
+
+static int gpio_is_input(struct gpio_bank *bank, int mask)
+{
+ void __iomem *reg = bank->base + bank->regs->direction;
+
+ return __raw_readl(reg) & mask;
+}
+
static int gpio_irq_type(struct irq_data *d, unsigned type)
{
struct gpio_bank *bank = irq_data_get_irq_chip_data(d);
unsigned gpio = 0;
int retval;
unsigned long flags;
+ unsigned offset;
- if (WARN_ON(!bank->mod_usage))
- return -EINVAL;
+ if (!BANK_USED(bank))
+ pm_runtime_get_sync(bank->dev);
#ifdef CONFIG_ARCH_OMAP1
if (d->irq > IH_MPUIO_BASE)
return -EINVAL;
spin_lock_irqsave(&bank->lock, flags);
- retval = _set_gpio_triggering(bank, GPIO_INDEX(bank, gpio), type);
+ offset = GPIO_INDEX(bank, gpio);
+ retval = _set_gpio_triggering(bank, offset, type);
+ if (!LINE_USED(bank->mod_usage, offset)) {
+ _enable_gpio_module(bank, offset);
+ _set_gpio_direction(bank, offset, 1);
+ } else if (!gpio_is_input(bank, 1 << offset)) {
+ spin_unlock_irqrestore(&bank->lock, flags);
+ return -EINVAL;
+ }
+
+ bank->irq_usage |= 1 << GPIO_INDEX(bank, gpio);
spin_unlock_irqrestore(&bank->lock, flags);
if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
* If this is the first gpio_request for the bank,
* enable the bank module.
*/
- if (!bank->mod_usage)
+ if (!BANK_USED(bank))
pm_runtime_get_sync(bank->dev);
spin_lock_irqsave(&bank->lock, flags);
/* Set trigger to none. You need to enable the desired trigger with
- * request_irq() or set_irq_type().
+ * request_irq() or set_irq_type(). Only do this if the IRQ line has
+ * not already been requested.
*/
- _set_gpio_triggering(bank, offset, IRQ_TYPE_NONE);
-
- if (bank->regs->pinctrl) {
- void __iomem *reg = bank->base + bank->regs->pinctrl;
-
- /* Claim the pin for MPU */
- __raw_writel(__raw_readl(reg) | (1 << offset), reg);
- }
-
- if (bank->regs->ctrl && !bank->mod_usage) {
- void __iomem *reg = bank->base + bank->regs->ctrl;
- u32 ctrl;
-
- ctrl = __raw_readl(reg);
- /* Module is enabled, clocks are not gated */
- ctrl &= ~GPIO_MOD_CTRL_BIT;
- __raw_writel(ctrl, reg);
- bank->context.ctrl = ctrl;
+ if (!LINE_USED(bank->irq_usage, offset)) {
+ _set_gpio_triggering(bank, offset, IRQ_TYPE_NONE);
+ _enable_gpio_module(bank, offset);
}
-
bank->mod_usage |= 1 << offset;
-
spin_unlock_irqrestore(&bank->lock, flags);
return 0;
static void omap_gpio_free(struct gpio_chip *chip, unsigned offset)
{
struct gpio_bank *bank = container_of(chip, struct gpio_bank, chip);
- void __iomem *base = bank->base;
unsigned long flags;
spin_lock_irqsave(&bank->lock, flags);
-
- if (bank->regs->wkup_en) {
- /* Disable wake-up during idle for dynamic tick */
- _gpio_rmw(base, bank->regs->wkup_en, 1 << offset, 0);
- bank->context.wake_en =
- __raw_readl(bank->base + bank->regs->wkup_en);
- }
-
bank->mod_usage &= ~(1 << offset);
-
- if (bank->regs->ctrl && !bank->mod_usage) {
- void __iomem *reg = bank->base + bank->regs->ctrl;
- u32 ctrl;
-
- ctrl = __raw_readl(reg);
- /* Module is disabled, clocks are gated */
- ctrl |= GPIO_MOD_CTRL_BIT;
- __raw_writel(ctrl, reg);
- bank->context.ctrl = ctrl;
- }
-
+ _disable_gpio_module(bank, offset);
_reset_gpio(bank, bank->chip.base + offset);
spin_unlock_irqrestore(&bank->lock, flags);
* If this is the last gpio to be freed in the bank,
* disable the bank module.
*/
- if (!bank->mod_usage)
+ if (!BANK_USED(bank))
pm_runtime_put(bank->dev);
}
struct gpio_bank *bank = irq_data_get_irq_chip_data(d);
unsigned int gpio = irq_to_gpio(bank, d->hwirq);
unsigned long flags;
+ unsigned offset = GPIO_INDEX(bank, gpio);
spin_lock_irqsave(&bank->lock, flags);
+ bank->irq_usage &= ~(1 << offset);
+ _disable_gpio_module(bank, offset);
_reset_gpio(bank, gpio);
spin_unlock_irqrestore(&bank->lock, flags);
+
+ /*
+ * If this is the last IRQ to be freed in the bank,
+ * disable the bank module.
+ */
+ if (!BANK_USED(bank))
+ pm_runtime_put(bank->dev);
}
static void gpio_ack_irq(struct irq_data *d)
return 0;
}
-static int gpio_is_input(struct gpio_bank *bank, int mask)
-{
- void __iomem *reg = bank->base + bank->regs->direction;
-
- return __raw_readl(reg) & mask;
-}
-
static int gpio_get(struct gpio_chip *chip, unsigned offset)
{
struct gpio_bank *bank;
{
struct gpio_bank *bank;
unsigned long flags;
+ int retval = 0;
bank = container_of(chip, struct gpio_bank, chip);
spin_lock_irqsave(&bank->lock, flags);
+
+ if (LINE_USED(bank->irq_usage, offset)) {
+ retval = -EINVAL;
+ goto exit;
+ }
+
bank->set_dataout(bank, offset, value);
_set_gpio_direction(bank, offset, 0);
+
+exit:
spin_unlock_irqrestore(&bank->lock, flags);
- return 0;
+ return retval;
}
static int gpio_debounce(struct gpio_chip *chip, unsigned offset,
struct gpio_bank *bank;
list_for_each_entry(bank, &omap_gpio_list, node) {
- if (!bank->mod_usage || !bank->loses_context)
+ if (!BANK_USED(bank) || !bank->loses_context)
continue;
bank->power_mode = pwr_mode;
struct gpio_bank *bank;
list_for_each_entry(bank, &omap_gpio_list, node) {
- if (!bank->mod_usage || !bank->loses_context)
+ if (!BANK_USED(bank) || !bank->loses_context)
continue;
pm_runtime_get_sync(bank->dev);
if (pdata) {
p->config = *pdata;
} else if (IS_ENABLED(CONFIG_OF) && np) {
- ret = of_parse_phandle_with_args(np, "gpio-ranges",
- "#gpio-range-cells", 0, &args);
- p->config.number_of_pins = ret == 0 && args.args_count == 3
- ? args.args[2]
+ ret = of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 0,
+ &args);
+ p->config.number_of_pins = ret == 0 ? args.args[2]
: RCAR_MAX_GPIO_PER_BANK;
p->config.gpio_base = -1;
}
*/
static int desc_to_gpio(const struct gpio_desc *desc)
{
- return desc->chip->base + gpio_chip_hwgpio(desc);
+ return desc - &gpio_desc[0];
}
int status = -EPROBE_DEFER;
unsigned long flags;
- if (!desc || !desc->chip) {
+ if (!desc) {
pr_warn("%s: invalid GPIO\n", __func__);
return -EINVAL;
}
spin_lock_irqsave(&gpio_lock, flags);
chip = desc->chip;
+ if (chip == NULL)
+ goto done;
if (!try_module_get(chip->owner))
goto done;
static inline void ast_open_key(struct ast_private *ast)
{
- ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xA1, 0xFF, 0x04);
+ ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x80, 0xA8);
}
#define AST_VIDMEM_SIZE_8M 0x00800000
#include <drm/drmP.h>
+/******************************************************************/
+/** \name Context bitmap support */
+/*@{*/
+
/**
* Free a handle from the context bitmap.
*
* in drm_device::ctx_idr, while holding the drm_device::struct_mutex
* lock.
*/
-static void drm_ctxbitmap_free(struct drm_device * dev, int ctx_handle)
+void drm_ctxbitmap_free(struct drm_device * dev, int ctx_handle)
{
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return;
-
mutex_lock(&dev->struct_mutex);
idr_remove(&dev->ctx_idr, ctx_handle);
mutex_unlock(&dev->struct_mutex);
}
-/******************************************************************/
-/** \name Context bitmap support */
-/*@{*/
-
-void drm_legacy_ctxbitmap_release(struct drm_device *dev,
- struct drm_file *file_priv)
-{
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return;
-
- mutex_lock(&dev->ctxlist_mutex);
- if (!list_empty(&dev->ctxlist)) {
- struct drm_ctx_list *pos, *n;
-
- list_for_each_entry_safe(pos, n, &dev->ctxlist, head) {
- if (pos->tag == file_priv &&
- pos->handle != DRM_KERNEL_CONTEXT) {
- if (dev->driver->context_dtor)
- dev->driver->context_dtor(dev,
- pos->handle);
-
- drm_ctxbitmap_free(dev, pos->handle);
-
- list_del(&pos->head);
- kfree(pos);
- --dev->ctx_count;
- }
- }
- }
- mutex_unlock(&dev->ctxlist_mutex);
-}
-
/**
* Context bitmap allocation.
*
*
* Initialise the drm_device::ctx_idr
*/
-void drm_legacy_ctxbitmap_init(struct drm_device * dev)
+int drm_ctxbitmap_init(struct drm_device * dev)
{
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return;
-
idr_init(&dev->ctx_idr);
+ return 0;
}
/**
* Free all idr members using drm_ctx_sarea_free helper function
* while holding the drm_device::struct_mutex lock.
*/
-void drm_legacy_ctxbitmap_cleanup(struct drm_device * dev)
+void drm_ctxbitmap_cleanup(struct drm_device * dev)
{
mutex_lock(&dev->struct_mutex);
idr_destroy(&dev->ctx_idr);
struct drm_local_map *map;
struct drm_map_list *_entry;
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return -EINVAL;
-
mutex_lock(&dev->struct_mutex);
map = idr_find(&dev->ctx_idr, request->ctx_id);
struct drm_local_map *map = NULL;
struct drm_map_list *r_list = NULL;
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return -EINVAL;
-
mutex_lock(&dev->struct_mutex);
list_for_each_entry(r_list, &dev->maplist, head) {
if (r_list->map
struct drm_ctx ctx;
int i;
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return -EINVAL;
-
if (res->count >= DRM_RESERVED_CONTEXTS) {
memset(&ctx, 0, sizeof(ctx));
for (i = 0; i < DRM_RESERVED_CONTEXTS; i++) {
struct drm_ctx_list *ctx_entry;
struct drm_ctx *ctx = data;
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return -EINVAL;
-
ctx->handle = drm_ctxbitmap_next(dev);
if (ctx->handle == DRM_KERNEL_CONTEXT) {
/* Skip kernel's context and get a new one. */
{
struct drm_ctx *ctx = data;
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return -EINVAL;
-
/* This is 0, because we don't handle any context flags */
ctx->flags = 0;
{
struct drm_ctx *ctx = data;
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return -EINVAL;
-
DRM_DEBUG("%d\n", ctx->handle);
return drm_context_switch(dev, dev->last_context, ctx->handle);
}
{
struct drm_ctx *ctx = data;
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return -EINVAL;
-
DRM_DEBUG("%d\n", ctx->handle);
drm_context_switch_complete(dev, file_priv, ctx->handle);
{
struct drm_ctx *ctx = data;
- if (drm_core_check_feature(dev, DRIVER_MODESET))
- return -EINVAL;
-
DRM_DEBUG("%d\n", ctx->handle);
if (ctx->handle != DRM_KERNEL_CONTEXT) {
if (dev->driver->context_dtor)
/** Ioctl table */
static const struct drm_ioctl_desc drm_ioctls[] = {
- DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_UNLOCKED),
+ DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, 0),
DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, 0),
DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_irq_by_busid, DRM_MASTER|DRM_ROOT_ONLY),
cmd = ioctl->cmd_drv;
}
else if ((nr >= DRM_COMMAND_END) || (nr < DRM_COMMAND_BASE)) {
+ u32 drv_size;
+
ioctl = &drm_ioctls[nr];
- cmd = ioctl->cmd;
+
+ drv_size = _IOC_SIZE(ioctl->cmd);
usize = asize = _IOC_SIZE(cmd);
+ if (drv_size > asize)
+ asize = drv_size;
+
+ cmd = ioctl->cmd;
} else
goto err_i1;
/* Speaker Allocation Data Block */
if (dbl == 3) {
*sadb = kmalloc(dbl, GFP_KERNEL);
+ if (!*sadb)
+ return -ENOMEM;
memcpy(*sadb, &db[1], dbl);
count = dbl;
break;
if (dev->driver->driver_features & DRIVER_GEM)
drm_gem_release(dev, file_priv);
- drm_legacy_ctxbitmap_release(dev, file_priv);
+ mutex_lock(&dev->ctxlist_mutex);
+ if (!list_empty(&dev->ctxlist)) {
+ struct drm_ctx_list *pos, *n;
+
+ list_for_each_entry_safe(pos, n, &dev->ctxlist, head) {
+ if (pos->tag == file_priv &&
+ pos->handle != DRM_KERNEL_CONTEXT) {
+ if (dev->driver->context_dtor)
+ dev->driver->context_dtor(dev,
+ pos->handle);
+
+ drm_ctxbitmap_free(dev, pos->handle);
+
+ list_del(&pos->head);
+ kfree(pos);
+ --dev->ctx_count;
+ }
+ }
+ }
+ mutex_unlock(&dev->ctxlist_mutex);
mutex_lock(&dev->struct_mutex);
goto error_out_unreg;
}
- drm_legacy_ctxbitmap_init(dev);
+
+
+ retcode = drm_ctxbitmap_init(dev);
+ if (retcode) {
+ DRM_ERROR("Cannot allocate memory for context bitmap.\n");
+ goto error_out_unreg;
+ }
if (driver->driver_features & DRIVER_GEM) {
retcode = drm_gem_init(dev);
drm_rmmap(dev, r_list->map);
drm_ht_remove(&dev->map_hash);
- drm_legacy_ctxbitmap_cleanup(dev);
+ drm_ctxbitmap_cleanup(dev);
if (drm_core_check_feature(dev, DRIVER_MODESET))
drm_put_minor(&dev->control);
config DRM_EXYNOS_FIMC
bool "Exynos DRM FIMC"
- depends on DRM_EXYNOS_IPP && MFD_SYSCON && OF
+ depends on DRM_EXYNOS_IPP && MFD_SYSCON
help
Choose this option if you want to use Exynos FIMC for DRM.
return -ENOMEM;
}
- buf->kvaddr = dma_alloc_attrs(dev->dev, buf->size,
+ buf->kvaddr = (void __iomem *)dma_alloc_attrs(dev->dev,
+ buf->size,
&buf->dma_addr, GFP_KERNEL,
&buf->dma_attrs);
if (!buf->kvaddr) {
}
buf->sgt = drm_prime_pages_to_sg(buf->pages, nr_pages);
- if (!buf->sgt) {
+ if (IS_ERR(buf->sgt)) {
DRM_ERROR("failed to get sg table.\n");
- ret = -ENOMEM;
+ ret = PTR_ERR(buf->sgt);
goto err_free_attrs;
}
if (is_drm_iommu_supported(dev)) {
unsigned int nr_pages = buffer->size >> PAGE_SHIFT;
- buffer->kvaddr = vmap(buffer->pages, nr_pages, VM_MAP,
+ buffer->kvaddr = (void __iomem *) vmap(buffer->pages,
+ nr_pages, VM_MAP,
pgprot_writecombine(PAGE_KERNEL));
} else {
phys_addr_t dma_addr = buffer->dma_addr;
if (dma_addr)
- buffer->kvaddr = phys_to_virt(dma_addr);
+ buffer->kvaddr = (void __iomem *)phys_to_virt(dma_addr);
else
buffer->kvaddr = (void __iomem *)NULL;
}
if (IS_ERR(pages))
return PTR_ERR(pages);
+ gt->npage = gt->gem.size / PAGE_SIZE;
gt->pages = pages;
return 0;
reg_write(encoder, REG_VIP_CNTRL_2, priv->vip_cntrl_2);
break;
case DRM_MODE_DPMS_OFF:
- /* disable audio and video ports */
- reg_write(encoder, REG_ENA_AP, 0x00);
+ /* disable video ports */
reg_write(encoder, REG_ENA_VP_0, 0x00);
reg_write(encoder, REG_ENA_VP_1, 0x00);
reg_write(encoder, REG_ENA_VP_2, 0x00);
* then we do not take part in VGA arbitration and the
* vga_client_register() fails with -ENODEV.
*/
- if (!HAS_PCH_SPLIT(dev)) {
- ret = vga_client_register(dev->pdev, dev, NULL,
- i915_vga_set_decode);
- if (ret && ret != -ENODEV)
- goto out;
- }
+ ret = vga_client_register(dev->pdev, dev, NULL, i915_vga_set_decode);
+ if (ret && ret != -ENODEV)
+ goto out;
intel_register_dsm_handler();
*/
intel_fbdev_initial_config(dev);
- /*
- * Must do this after fbcon init so that
- * vgacon_save_screen() works during the handover.
- */
- i915_disable_vga_mem(dev);
-
/* Only enable hotplug handling once the fbdev is fully set up. */
dev_priv->enable_hotplug_processing = true;
intel_modeset_suspend_hw(dev);
}
+ i915_gem_suspend_gtt_mappings(dev);
+
i915_save_state(dev);
intel_opregion_fini(dev);
mutex_lock(&dev->struct_mutex);
i915_gem_restore_gtt_mappings(dev);
mutex_unlock(&dev->struct_mutex);
- }
+ } else if (drm_core_check_feature(dev, DRIVER_MODESET))
+ i915_check_and_clear_faults(dev);
__i915_drm_thaw(dev);
/* FIXME: Need a more generic return type */
gen6_gtt_pte_t (*pte_encode)(dma_addr_t addr,
- enum i915_cache_level level);
+ enum i915_cache_level level,
+ bool valid); /* Create a valid PTE */
void (*clear_range)(struct i915_address_space *vm,
unsigned int first_entry,
- unsigned int num_entries);
+ unsigned int num_entries,
+ bool use_scratch);
void (*insert_entries)(struct i915_address_space *vm,
struct sg_table *st,
unsigned int first_entry,
void i915_ppgtt_unbind_object(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_gem_object *obj);
+void i915_check_and_clear_faults(struct drm_device *dev);
+void i915_gem_suspend_gtt_mappings(struct drm_device *dev);
void i915_gem_restore_gtt_mappings(struct drm_device *dev);
int __must_check i915_gem_gtt_prepare_object(struct drm_i915_gem_object *obj);
void i915_gem_gtt_bind_object(struct drm_i915_gem_object *obj,
if (i915_terminally_wedged(&dev_priv->gpu_error))
return VM_FAULT_SIGBUS;
case -EAGAIN:
- /* Give the error handler a chance to run and move the
- * objects off the GPU active list. Next time we service the
- * fault, we should be able to transition the page into the
- * GTT without touching the GPU (and so avoid further
- * EIO/EGAIN). If the GPU is wedged, then there is no issue
- * with coherency, just lost writes.
+ /*
+ * EAGAIN means the gpu is hung and we'll wait for the error
+ * handler to reset everything when re-faulting in
+ * i915_mutex_lock_interruptible.
*/
- set_need_resched();
case 0:
case -ERESTARTSYS:
case -EINTR:
if (!mutex_trylock(&dev->struct_mutex)) {
if (!mutex_is_locked_by(&dev->struct_mutex, current))
- return SHRINK_STOP;
+ return 0;
if (dev_priv->mm.shrinker_no_lock_stealing)
- return SHRINK_STOP;
+ return 0;
unlock = false;
}
if (!mutex_trylock(&dev->struct_mutex)) {
if (!mutex_is_locked_by(&dev->struct_mutex, current))
- return 0;
+ return SHRINK_STOP;
if (dev_priv->mm.shrinker_no_lock_stealing)
- return 0;
+ return SHRINK_STOP;
unlock = false;
}
#define HSW_WT_ELLC_LLC_AGE0 HSW_CACHEABILITY_CONTROL(0x6)
static gen6_gtt_pte_t snb_pte_encode(dma_addr_t addr,
- enum i915_cache_level level)
+ enum i915_cache_level level,
+ bool valid)
{
- gen6_gtt_pte_t pte = GEN6_PTE_VALID;
+ gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
switch (level) {
}
static gen6_gtt_pte_t ivb_pte_encode(dma_addr_t addr,
- enum i915_cache_level level)
+ enum i915_cache_level level,
+ bool valid)
{
- gen6_gtt_pte_t pte = GEN6_PTE_VALID;
+ gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
switch (level) {
#define BYT_PTE_SNOOPED_BY_CPU_CACHES (1 << 2)
static gen6_gtt_pte_t byt_pte_encode(dma_addr_t addr,
- enum i915_cache_level level)
+ enum i915_cache_level level,
+ bool valid)
{
- gen6_gtt_pte_t pte = GEN6_PTE_VALID;
+ gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= GEN6_PTE_ADDR_ENCODE(addr);
/* Mark the page as writeable. Other platforms don't have a
}
static gen6_gtt_pte_t hsw_pte_encode(dma_addr_t addr,
- enum i915_cache_level level)
+ enum i915_cache_level level,
+ bool valid)
{
- gen6_gtt_pte_t pte = GEN6_PTE_VALID;
+ gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= HSW_PTE_ADDR_ENCODE(addr);
if (level != I915_CACHE_NONE)
}
static gen6_gtt_pte_t iris_pte_encode(dma_addr_t addr,
- enum i915_cache_level level)
+ enum i915_cache_level level,
+ bool valid)
{
- gen6_gtt_pte_t pte = GEN6_PTE_VALID;
+ gen6_gtt_pte_t pte = valid ? GEN6_PTE_VALID : 0;
pte |= HSW_PTE_ADDR_ENCODE(addr);
switch (level) {
/* PPGTT support for Sandybridge/Gen6 and later */
static void gen6_ppgtt_clear_range(struct i915_address_space *vm,
unsigned first_entry,
- unsigned num_entries)
+ unsigned num_entries,
+ bool use_scratch)
{
struct i915_hw_ppgtt *ppgtt =
container_of(vm, struct i915_hw_ppgtt, base);
unsigned first_pte = first_entry % I915_PPGTT_PT_ENTRIES;
unsigned last_pte, i;
- scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC);
+ scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC, true);
while (num_entries) {
last_pte = first_pte + num_entries;
dma_addr_t page_addr;
page_addr = sg_page_iter_dma_address(&sg_iter);
- pt_vaddr[act_pte] = vm->pte_encode(page_addr, cache_level);
+ pt_vaddr[act_pte] = vm->pte_encode(page_addr, cache_level, true);
if (++act_pte == I915_PPGTT_PT_ENTRIES) {
kunmap_atomic(pt_vaddr);
act_pt++;
}
ppgtt->base.clear_range(&ppgtt->base, 0,
- ppgtt->num_pd_entries * I915_PPGTT_PT_ENTRIES);
+ ppgtt->num_pd_entries * I915_PPGTT_PT_ENTRIES, true);
ppgtt->pd_offset = first_pd_entry_in_global_pt * sizeof(gen6_gtt_pte_t);
{
ppgtt->base.clear_range(&ppgtt->base,
i915_gem_obj_ggtt_offset(obj) >> PAGE_SHIFT,
- obj->base.size >> PAGE_SHIFT);
+ obj->base.size >> PAGE_SHIFT,
+ true);
}
extern int intel_iommu_gfx_mapped;
dev_priv->mm.interruptible = interruptible;
}
+void i915_check_and_clear_faults(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_ring_buffer *ring;
+ int i;
+
+ if (INTEL_INFO(dev)->gen < 6)
+ return;
+
+ for_each_ring(ring, dev_priv, i) {
+ u32 fault_reg;
+ fault_reg = I915_READ(RING_FAULT_REG(ring));
+ if (fault_reg & RING_FAULT_VALID) {
+ DRM_DEBUG_DRIVER("Unexpected fault\n"
+ "\tAddr: 0x%08lx\\n"
+ "\tAddress space: %s\n"
+ "\tSource ID: %d\n"
+ "\tType: %d\n",
+ fault_reg & PAGE_MASK,
+ fault_reg & RING_FAULT_GTTSEL_MASK ? "GGTT" : "PPGTT",
+ RING_FAULT_SRCID(fault_reg),
+ RING_FAULT_FAULT_TYPE(fault_reg));
+ I915_WRITE(RING_FAULT_REG(ring),
+ fault_reg & ~RING_FAULT_VALID);
+ }
+ }
+ POSTING_READ(RING_FAULT_REG(&dev_priv->ring[RCS]));
+}
+
+void i915_gem_suspend_gtt_mappings(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ /* Don't bother messing with faults pre GEN6 as we have little
+ * documentation supporting that it's a good idea.
+ */
+ if (INTEL_INFO(dev)->gen < 6)
+ return;
+
+ i915_check_and_clear_faults(dev);
+
+ dev_priv->gtt.base.clear_range(&dev_priv->gtt.base,
+ dev_priv->gtt.base.start / PAGE_SIZE,
+ dev_priv->gtt.base.total / PAGE_SIZE,
+ false);
+}
+
void i915_gem_restore_gtt_mappings(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_object *obj;
+ i915_check_and_clear_faults(dev);
+
/* First fill our portion of the GTT with scratch pages */
dev_priv->gtt.base.clear_range(&dev_priv->gtt.base,
dev_priv->gtt.base.start / PAGE_SIZE,
- dev_priv->gtt.base.total / PAGE_SIZE);
+ dev_priv->gtt.base.total / PAGE_SIZE,
+ true);
list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
i915_gem_clflush_object(obj, obj->pin_display);
for_each_sg_page(st->sgl, &sg_iter, st->nents, 0) {
addr = sg_page_iter_dma_address(&sg_iter);
- iowrite32(vm->pte_encode(addr, level), &gtt_entries[i]);
+ iowrite32(vm->pte_encode(addr, level, true), &gtt_entries[i]);
i++;
}
*/
if (i != 0)
WARN_ON(readl(&gtt_entries[i-1]) !=
- vm->pte_encode(addr, level));
+ vm->pte_encode(addr, level, true));
/* This next bit makes the above posting read even more important. We
* want to flush the TLBs only after we're certain all the PTE updates
static void gen6_ggtt_clear_range(struct i915_address_space *vm,
unsigned int first_entry,
- unsigned int num_entries)
+ unsigned int num_entries,
+ bool use_scratch)
{
struct drm_i915_private *dev_priv = vm->dev->dev_private;
gen6_gtt_pte_t scratch_pte, __iomem *gtt_base =
first_entry, num_entries, max_entries))
num_entries = max_entries;
- scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC);
+ scratch_pte = vm->pte_encode(vm->scratch.addr, I915_CACHE_LLC, use_scratch);
+
for (i = 0; i < num_entries; i++)
iowrite32(scratch_pte, &gtt_base[i]);
readl(gtt_base);
static void i915_ggtt_clear_range(struct i915_address_space *vm,
unsigned int first_entry,
- unsigned int num_entries)
+ unsigned int num_entries,
+ bool unused)
{
intel_gtt_clear_range(first_entry, num_entries);
}
dev_priv->gtt.base.clear_range(&dev_priv->gtt.base,
entry,
- obj->base.size >> PAGE_SHIFT);
+ obj->base.size >> PAGE_SHIFT,
+ true);
obj->has_global_gtt_mapping = 0;
}
const unsigned long count = (hole_end - hole_start) / PAGE_SIZE;
DRM_DEBUG_KMS("clearing unused GTT space: [%lx, %lx]\n",
hole_start, hole_end);
- ggtt_vm->clear_range(ggtt_vm, hole_start / PAGE_SIZE, count);
+ ggtt_vm->clear_range(ggtt_vm, hole_start / PAGE_SIZE, count, true);
}
/* And finally clear the reserved guard page */
- ggtt_vm->clear_range(ggtt_vm, end / PAGE_SIZE - 1, 1);
+ ggtt_vm->clear_range(ggtt_vm, end / PAGE_SIZE - 1, 1, true);
}
static bool
/* Seek the first printf which hits the start position */
if (e->pos < e->start) {
- len = vsnprintf(NULL, 0, f, args);
- if (!__i915_error_seek(e, len))
+ va_list tmp;
+
+ va_copy(tmp, args);
+ if (!__i915_error_seek(e, vsnprintf(NULL, 0, f, tmp)))
return;
}
return ret;
}
+static void i915_error_wake_up(struct drm_i915_private *dev_priv,
+ bool reset_completed)
+{
+ struct intel_ring_buffer *ring;
+ int i;
+
+ /*
+ * Notify all waiters for GPU completion events that reset state has
+ * been changed, and that they need to restart their wait after
+ * checking for potential errors (and bail out to drop locks if there is
+ * a gpu reset pending so that i915_error_work_func can acquire them).
+ */
+
+ /* Wake up __wait_seqno, potentially holding dev->struct_mutex. */
+ for_each_ring(ring, dev_priv, i)
+ wake_up_all(&ring->irq_queue);
+
+ /* Wake up intel_crtc_wait_for_pending_flips, holding crtc->mutex. */
+ wake_up_all(&dev_priv->pending_flip_queue);
+
+ /*
+ * Signal tasks blocked in i915_gem_wait_for_error that the pending
+ * reset state is cleared.
+ */
+ if (reset_completed)
+ wake_up_all(&dev_priv->gpu_error.reset_queue);
+}
+
/**
* i915_error_work_func - do process context error handling work
* @work: work struct
drm_i915_private_t *dev_priv = container_of(error, drm_i915_private_t,
gpu_error);
struct drm_device *dev = dev_priv->dev;
- struct intel_ring_buffer *ring;
char *error_event[] = { I915_ERROR_UEVENT "=1", NULL };
char *reset_event[] = { I915_RESET_UEVENT "=1", NULL };
char *reset_done_event[] = { I915_ERROR_UEVENT "=0", NULL };
- int i, ret;
+ int ret;
kobject_uevent_env(&dev->primary->kdev.kobj, KOBJ_CHANGE, error_event);
kobject_uevent_env(&dev->primary->kdev.kobj, KOBJ_CHANGE,
reset_event);
+ /*
+ * All state reset _must_ be completed before we update the
+ * reset counter, for otherwise waiters might miss the reset
+ * pending state and not properly drop locks, resulting in
+ * deadlocks with the reset work.
+ */
ret = i915_reset(dev);
+ intel_display_handle_reset(dev);
+
if (ret == 0) {
/*
* After all the gem state is reset, increment the reset
atomic_set(&error->reset_counter, I915_WEDGED);
}
- for_each_ring(ring, dev_priv, i)
- wake_up_all(&ring->irq_queue);
-
- intel_display_handle_reset(dev);
-
- wake_up_all(&dev_priv->gpu_error.reset_queue);
+ /*
+ * Note: The wake_up also serves as a memory barrier so that
+ * waiters see the updated value of the reset counter atomic_t.
+ */
+ i915_error_wake_up(dev_priv, true);
}
}
void i915_handle_error(struct drm_device *dev, bool wedged)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_ring_buffer *ring;
- int i;
i915_capture_error_state(dev);
i915_report_and_clear_eir(dev);
&dev_priv->gpu_error.reset_counter);
/*
- * Wakeup waiting processes so that the reset work item
- * doesn't deadlock trying to grab various locks.
+ * Wakeup waiting processes so that the reset work function
+ * i915_error_work_func doesn't deadlock trying to grab various
+ * locks. By bumping the reset counter first, the woken
+ * processes will see a reset in progress and back off,
+ * releasing their locks and then waiting for the reset to complete.
+ * We must do this for _all_ gpu waiters that might hold locks
+ * that the reset work needs to acquire.
+ *
+ * Note: The wake_up serves as the required memory barrier to
+ * ensure that the waiters see the updated value of the reset
+ * counter atomic_t.
*/
- for_each_ring(ring, dev_priv, i)
- wake_up_all(&ring->irq_queue);
+ i915_error_wake_up(dev_priv, false);
}
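
The two comments above describe one ordering rule: publish the pending-reset state (bump the counter) before waking any waiter, so every woken waiter observes it and backs off instead of re-taking locks the reset work needs. Below is a minimal userspace sketch of that pattern using pthreads and C11 atomics instead of the kernel primitives; all names are illustrative, this is not i915 code.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_uint reset_counter;                 /* stands in for gpu_error.reset_counter */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wake = PTHREAD_COND_INITIALIZER;

static void signal_reset_pending(void)
{
        atomic_fetch_add(&reset_counter, 1);      /* 1. publish "reset in progress" first */
        pthread_mutex_lock(&lock);
        pthread_cond_broadcast(&wake);            /* 2. then wake every waiter */
        pthread_mutex_unlock(&lock);
}

static int wait_or_back_off(unsigned int seen)
{
        pthread_mutex_lock(&lock);
        while (atomic_load(&reset_counter) == seen)
                pthread_cond_wait(&wake, &lock);  /* a woken waiter re-checks the counter... */
        pthread_mutex_unlock(&lock);
        return 1;                                 /* ...sees the bump and backs off */
}

int main(void)
{
        /* trivial single-threaded walk through the waiter path */
        unsigned int seen = atomic_load(&reset_counter);
        signal_reset_pending();
        printf("waiter backed off: %d\n", wait_or_back_off(seen));
        return 0;
}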
/*
#define ARB_MODE_SWIZZLE_IVB (1<<5)
#define RENDER_HWS_PGA_GEN7 (0x04080)
#define RING_FAULT_REG(ring) (0x4094 + 0x100*(ring)->id)
+#define RING_FAULT_GTTSEL_MASK (1<<11)
+#define RING_FAULT_SRCID(x) ((x >> 3) & 0xff)
+#define RING_FAULT_FAULT_TYPE(x) ((x >> 1) & 0x3)
+#define RING_FAULT_VALID (1<<0)
#define DONE_REG 0x40b0
#define BSD_HWS_PGA_GEN7 (0x04180)
#define BLT_HWS_PGA_GEN7 (0x04280)
#define GEN7_SQ_CHICKEN_MBCUNIT_CONFIG 0x9030
#define GEN7_SQ_CHICKEN_MBCUNIT_SQINTMOB (1<<11)
+#define HSW_SCRATCH1 0xb038
+#define HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE (1<<27)
+
#define HSW_FUSE_STRAP 0x42014
#define HSW_CDCLK_LIMIT (1 << 24)
#define FDI_RX_CHICKEN(pipe) _PIPE(pipe, _FDI_RXA_CHICKEN, _FDI_RXB_CHICKEN)
#define SOUTH_DSPCLK_GATE_D 0xc2020
+#define PCH_DPLUNIT_CLOCK_GATE_DISABLE (1<<30)
#define PCH_DPLSUNIT_CLOCK_GATE_DISABLE (1<<29)
+#define PCH_CPUNIT_CLOCK_GATE_DISABLE (1<<14)
#define PCH_LP_PARTITION_LEVEL_DISABLE (1<<12)
/* CPU: FDI_TX */
#define GEN7_ROW_CHICKEN2_GT2 0xf4f4
#define DOP_CLOCK_GATING_DISABLE (1<<0)
+#define HSW_ROW_CHICKEN3 0xe49c
+#define HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE (1 << 6)
+
#define G4X_AUD_VID_DID (dev_priv->info->display_mmio_offset + 0x62020)
#define INTEL_AUDIO_DEVCL 0x808629FB
#define INTEL_AUDIO_DEVBLC 0x80862801
return true;
}
-static void intel_crt_get_config(struct intel_encoder *encoder,
- struct intel_crtc_config *pipe_config)
+static unsigned int intel_crt_get_flags(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct intel_crt *crt = intel_encoder_to_crt(encoder);
else
flags |= DRM_MODE_FLAG_NVSYNC;
- pipe_config->adjusted_mode.flags |= flags;
+ return flags;
+}
+
+static void intel_crt_get_config(struct intel_encoder *encoder,
+ struct intel_crtc_config *pipe_config)
+{
+ pipe_config->adjusted_mode.flags |= intel_crt_get_flags(encoder);
+}
+
+static void hsw_crt_get_config(struct intel_encoder *encoder,
+ struct intel_crtc_config *pipe_config)
+{
+ intel_ddi_get_config(encoder, pipe_config);
+
+ pipe_config->adjusted_mode.flags &= ~(DRM_MODE_FLAG_PHSYNC |
+ DRM_MODE_FLAG_NHSYNC |
+ DRM_MODE_FLAG_PVSYNC |
+ DRM_MODE_FLAG_NVSYNC);
+ pipe_config->adjusted_mode.flags |= intel_crt_get_flags(encoder);
}
/* Note: The caller is required to filter out dpms modes not supported by the
crt->base.mode_set = intel_crt_mode_set;
crt->base.disable = intel_disable_crt;
crt->base.enable = intel_enable_crt;
- crt->base.get_config = intel_crt_get_config;
+ if (IS_HASWELL(dev))
+ crt->base.get_config = hsw_crt_get_config;
+ else
+ crt->base.get_config = intel_crt_get_config;
if (I915_HAS_HOTPLUG(dev))
crt->base.hpd_pin = HPD_CRT;
if (HAS_DDI(dev))
/* Can only use the always-on power well for eDP when
* not using the panel fitter, and when not using motion
* blur mitigation (which we don't support). */
- if (intel_crtc->config.pch_pfit.size)
+ if (intel_crtc->config.pch_pfit.enabled)
temp |= TRANS_DDI_EDP_INPUT_A_ONOFF;
else
temp |= TRANS_DDI_EDP_INPUT_A_ON;
intel_dp_check_link_status(intel_dp);
}
-static void intel_ddi_get_config(struct intel_encoder *encoder,
- struct intel_crtc_config *pipe_config)
+void intel_ddi_get_config(struct intel_encoder *encoder,
+ struct intel_crtc_config *pipe_config)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
flags |= DRM_MODE_FLAG_NVSYNC;
pipe_config->adjusted_mode.flags |= flags;
+
+ switch (temp & TRANS_DDI_BPC_MASK) {
+ case TRANS_DDI_BPC_6:
+ pipe_config->pipe_bpp = 18;
+ break;
+ case TRANS_DDI_BPC_8:
+ pipe_config->pipe_bpp = 24;
+ break;
+ case TRANS_DDI_BPC_10:
+ pipe_config->pipe_bpp = 30;
+ break;
+ case TRANS_DDI_BPC_12:
+ pipe_config->pipe_bpp = 36;
+ break;
+ default:
+ break;
+ }
}
static void intel_ddi_destroy(struct drm_encoder *encoder)
I915_WRITE(PIPESRC(intel_crtc->pipe),
((crtc->mode.hdisplay - 1) << 16) |
(crtc->mode.vdisplay - 1));
- if (!intel_crtc->config.pch_pfit.size &&
+ if (!intel_crtc->config.pch_pfit.enabled &&
(intel_pipe_has_type(crtc, INTEL_OUTPUT_LVDS) ||
intel_pipe_has_type(crtc, INTEL_OUTPUT_EDP))) {
I915_WRITE(PF_CTL(intel_crtc->pipe), 0);
FDI_FE_ERRC_ENABLE);
}
-static bool pipe_has_enabled_pch(struct intel_crtc *intel_crtc)
+static bool pipe_has_enabled_pch(struct intel_crtc *crtc)
{
- return intel_crtc->base.enabled && intel_crtc->config.has_pch_encoder;
+ return crtc->base.enabled && crtc->active &&
+ crtc->config.has_pch_encoder;
}
static void ivb_modeset_global_resources(struct drm_device *dev)
I915_READ(VSYNCSHIFT(cpu_transcoder)));
}
+static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ uint32_t temp;
+
+ temp = I915_READ(SOUTH_CHICKEN1);
+ if (temp & FDI_BC_BIFURCATION_SELECT)
+ return;
+
+ WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
+ WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
+
+ temp |= FDI_BC_BIFURCATION_SELECT;
+ DRM_DEBUG_KMS("enabling fdi C rx\n");
+ I915_WRITE(SOUTH_CHICKEN1, temp);
+ POSTING_READ(SOUTH_CHICKEN1);
+}
+
+static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc)
+{
+ struct drm_device *dev = intel_crtc->base.dev;
+ struct drm_i915_private *dev_priv = dev->dev_private;
+
+ switch (intel_crtc->pipe) {
+ case PIPE_A:
+ break;
+ case PIPE_B:
+ if (intel_crtc->config.fdi_lanes > 2)
+ WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT);
+ else
+ cpt_enable_fdi_bc_bifurcation(dev);
+
+ break;
+ case PIPE_C:
+ cpt_enable_fdi_bc_bifurcation(dev);
+
+ break;
+ default:
+ BUG();
+ }
+}
+
/*
* Enable PCH resources required for PCH ports:
* - PCH PLLs
assert_pch_transcoder_disabled(dev_priv, pipe);
+ if (IS_IVYBRIDGE(dev))
+ ivybridge_update_fdi_bc_bifurcation(intel_crtc);
+
/* Write the TU size bits before fdi link training, so that error
* detection works. */
I915_WRITE(FDI_RX_TUSIZE1(pipe),
struct drm_i915_private *dev_priv = dev->dev_private;
int pipe = crtc->pipe;
- if (crtc->config.pch_pfit.size) {
+ if (crtc->config.pch_pfit.enabled) {
/* Force use of hard-coded filter coefficients
* as some pre-programmed values are broken,
* e.g. x201.
/* To avoid upsetting the power well on haswell only disable the pfit if
* it's in use. The hw state code will make sure we get this right. */
- if (crtc->config.pch_pfit.size) {
+ if (crtc->config.pch_pfit.enabled) {
I915_WRITE(PF_CTL(pipe), 0);
I915_WRITE(PF_WIN_POS(pipe), 0);
I915_WRITE(PF_WIN_SZ(pipe), 0);
* consider. */
void intel_connector_dpms(struct drm_connector *connector, int mode)
{
- struct intel_encoder *encoder = intel_attached_encoder(connector);
-
/* All the simple cases only support two dpms states. */
if (mode != DRM_MODE_DPMS_ON)
mode = DRM_MODE_DPMS_OFF;
connector->dpms = mode;
/* Only need to change hw state when actually enabled */
- if (encoder->base.crtc)
- intel_encoder_dpms(encoder, mode);
- else
- WARN_ON(encoder->connectors_active != false);
+ if (connector->encoder)
+ intel_encoder_dpms(to_intel_encoder(connector->encoder), mode);
intel_modeset_check_state(connector->dev);
}
pipeconf = 0;
+ if (dev_priv->quirks & QUIRK_PIPEA_FORCE &&
+ I915_READ(PIPECONF(intel_crtc->pipe)) & PIPECONF_ENABLE)
+ pipeconf |= PIPECONF_ENABLE;
+
if (intel_crtc->pipe == 0 && INTEL_INFO(dev)->gen < 4) {
/* Enable pixel doubling when the dot clock is > 90% of the (display)
* core speed.
return -EINVAL;
}
- /* Ensure that the cursor is valid for the new mode before changing... */
- intel_crtc_update_cursor(crtc, true);
-
if (is_lvds && dev_priv->lvds_downclock_avail) {
/*
* Ensure we match the reduced clock's P to the target clock.
if (!(tmp & PIPECONF_ENABLE))
return false;
+ if (IS_G4X(dev) || IS_VALLEYVIEW(dev)) {
+ switch (tmp & PIPECONF_BPC_MASK) {
+ case PIPECONF_6BPC:
+ pipe_config->pipe_bpp = 18;
+ break;
+ case PIPECONF_8BPC:
+ pipe_config->pipe_bpp = 24;
+ break;
+ case PIPECONF_10BPC:
+ pipe_config->pipe_bpp = 30;
+ break;
+ default:
+ break;
+ }
+ }
+
intel_get_pipe_timings(crtc, pipe_config);
i9xx_get_pfit_config(crtc, pipe_config);
return true;
}
-static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- uint32_t temp;
-
- temp = I915_READ(SOUTH_CHICKEN1);
- if (temp & FDI_BC_BIFURCATION_SELECT)
- return;
-
- WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
- WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
-
- temp |= FDI_BC_BIFURCATION_SELECT;
- DRM_DEBUG_KMS("enabling fdi C rx\n");
- I915_WRITE(SOUTH_CHICKEN1, temp);
- POSTING_READ(SOUTH_CHICKEN1);
-}
-
-static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc)
-{
- struct drm_device *dev = intel_crtc->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- switch (intel_crtc->pipe) {
- case PIPE_A:
- break;
- case PIPE_B:
- if (intel_crtc->config.fdi_lanes > 2)
- WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT);
- else
- cpt_enable_fdi_bc_bifurcation(dev);
-
- break;
- case PIPE_C:
- cpt_enable_fdi_bc_bifurcation(dev);
-
- break;
- default:
- BUG();
- }
-}
-
int ironlake_get_lanes_required(int target_clock, int link_bw, int bpp)
{
/*
intel_crtc->config.dpll.p2 = clock.p2;
}
- /* Ensure that the cursor is valid for the new mode before changing... */
- intel_crtc_update_cursor(crtc, true);
-
/* CPU eDP is the only output that doesn't need a PCH PLL of its own. */
if (intel_crtc->config.has_pch_encoder) {
fp = i9xx_dpll_compute_fp(&intel_crtc->config.dpll);
&intel_crtc->config.fdi_m_n);
}
- if (IS_IVYBRIDGE(dev))
- ivybridge_update_fdi_bc_bifurcation(intel_crtc);
-
ironlake_set_pipeconf(crtc);
/* Set up the display plane register */
tmp = I915_READ(PF_CTL(crtc->pipe));
if (tmp & PF_ENABLE) {
+ pipe_config->pch_pfit.enabled = true;
pipe_config->pch_pfit.pos = I915_READ(PF_WIN_POS(crtc->pipe));
pipe_config->pch_pfit.size = I915_READ(PF_WIN_SZ(crtc->pipe));
if (!(tmp & PIPECONF_ENABLE))
return false;
+ switch (tmp & PIPECONF_BPC_MASK) {
+ case PIPECONF_6BPC:
+ pipe_config->pipe_bpp = 18;
+ break;
+ case PIPECONF_8BPC:
+ pipe_config->pipe_bpp = 24;
+ break;
+ case PIPECONF_10BPC:
+ pipe_config->pipe_bpp = 30;
+ break;
+ case PIPECONF_12BPC:
+ pipe_config->pipe_bpp = 36;
+ break;
+ default:
+ break;
+ }
+
if (I915_READ(PCH_TRANSCONF(crtc->pipe)) & TRANS_ENABLE) {
struct intel_shared_dpll *pll;
if (!crtc->base.enabled)
continue;
- if (crtc->pipe != PIPE_A || crtc->config.pch_pfit.size ||
+ if (crtc->pipe != PIPE_A || crtc->config.pch_pfit.enabled ||
crtc->config.cpu_transcoder != TRANSCODER_EDP)
enable = true;
}
if (!intel_ddi_pll_mode_set(crtc))
return -EINVAL;
- /* Ensure that the cursor is valid for the new mode before changing... */
- intel_crtc_update_cursor(crtc, true);
-
if (intel_crtc->config.has_dp_encoder)
intel_dp_set_m_n(intel_crtc);
/* Set ELD valid state */
tmp = I915_READ(aud_cntrl_st2);
- DRM_DEBUG_DRIVER("HDMI audio: pin eld vld status=0x%8x\n", tmp);
+ DRM_DEBUG_DRIVER("HDMI audio: pin eld vld status=0x%08x\n", tmp);
tmp |= (AUDIO_ELD_VALID_A << (pipe * 4));
I915_WRITE(aud_cntrl_st2, tmp);
tmp = I915_READ(aud_cntrl_st2);
- DRM_DEBUG_DRIVER("HDMI audio: eld vld status=0x%8x\n", tmp);
+ DRM_DEBUG_DRIVER("HDMI audio: eld vld status=0x%08x\n", tmp);
/* Enable HDMI mode */
tmp = I915_READ(aud_config);
- DRM_DEBUG_DRIVER("HDMI audio: audio conf: 0x%8x\n", tmp);
+ DRM_DEBUG_DRIVER("HDMI audio: audio conf: 0x%08x\n", tmp);
/* clear N_programing_enable and N_value_index */
tmp &= ~(AUD_CONFIG_N_VALUE_INDEX | AUD_CONFIG_N_PROG_ENABLE);
I915_WRITE(aud_config, tmp);
intel_crtc->cursor_width = width;
intel_crtc->cursor_height = height;
- intel_crtc_update_cursor(crtc, intel_crtc->cursor_bo != NULL);
+ if (intel_crtc->active)
+ intel_crtc_update_cursor(crtc, intel_crtc->cursor_bo != NULL);
return 0;
fail_unpin:
intel_crtc->cursor_x = x;
intel_crtc->cursor_y = y;
- intel_crtc_update_cursor(crtc, intel_crtc->cursor_bo != NULL);
+ if (intel_crtc->active)
+ intel_crtc_update_cursor(crtc, intel_crtc->cursor_bo != NULL);
return 0;
}
pipe_config->gmch_pfit.control,
pipe_config->gmch_pfit.pgm_ratios,
pipe_config->gmch_pfit.lvds_border_bits);
- DRM_DEBUG_KMS("pch pfit: pos: 0x%08x, size: 0x%08x\n",
+ DRM_DEBUG_KMS("pch pfit: pos: 0x%08x, size: 0x%08x, %s\n",
pipe_config->pch_pfit.pos,
- pipe_config->pch_pfit.size);
+ pipe_config->pch_pfit.size,
+ pipe_config->pch_pfit.enabled ? "enabled" : "disabled");
DRM_DEBUG_KMS("ips: %i\n", pipe_config->ips_enabled);
}
if (INTEL_INFO(dev)->gen < 4)
PIPE_CONF_CHECK_I(gmch_pfit.pgm_ratios);
PIPE_CONF_CHECK_I(gmch_pfit.lvds_border_bits);
- PIPE_CONF_CHECK_I(pch_pfit.pos);
- PIPE_CONF_CHECK_I(pch_pfit.size);
+ PIPE_CONF_CHECK_I(pch_pfit.enabled);
+ if (current_config->pch_pfit.enabled) {
+ PIPE_CONF_CHECK_I(pch_pfit.pos);
+ PIPE_CONF_CHECK_I(pch_pfit.size);
+ }
PIPE_CONF_CHECK_I(ips_enabled);
PIPE_CONF_CHECK_X(dpll_hw_state.fp0);
PIPE_CONF_CHECK_X(dpll_hw_state.fp1);
+ if (IS_G4X(dev) || INTEL_INFO(dev)->gen >= 5)
+ PIPE_CONF_CHECK_I(pipe_bpp);
+
#undef PIPE_CONF_CHECK_X
#undef PIPE_CONF_CHECK_I
#undef PIPE_CONF_CHECK_FLAGS
POSTING_READ(vga_reg);
}
-static void i915_enable_vga_mem(struct drm_device *dev)
-{
- /* Enable VGA memory on Intel HD */
- if (HAS_PCH_SPLIT(dev)) {
- vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO);
- outb(inb(VGA_MSR_READ) | VGA_MSR_MEM_EN, VGA_MSR_WRITE);
- vga_set_legacy_decoding(dev->pdev, VGA_RSRC_LEGACY_IO |
- VGA_RSRC_LEGACY_MEM |
- VGA_RSRC_NORMAL_IO |
- VGA_RSRC_NORMAL_MEM);
- vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
- }
-}
-
-void i915_disable_vga_mem(struct drm_device *dev)
-{
- /* Disable VGA memory on Intel HD */
- if (HAS_PCH_SPLIT(dev)) {
- vga_get_uninterruptible(dev->pdev, VGA_RSRC_LEGACY_IO);
- outb(inb(VGA_MSR_READ) & ~VGA_MSR_MEM_EN, VGA_MSR_WRITE);
- vga_set_legacy_decoding(dev->pdev, VGA_RSRC_LEGACY_IO |
- VGA_RSRC_NORMAL_IO |
- VGA_RSRC_NORMAL_MEM);
- vga_put(dev->pdev, VGA_RSRC_LEGACY_IO);
- }
-}
-
void intel_modeset_init_hw(struct drm_device *dev)
{
intel_init_power_well(dev);
if (I915_READ(vga_reg) != VGA_DISP_DISABLE) {
DRM_DEBUG_KMS("Something enabled VGA plane, disabling it\n");
i915_disable_vga(dev);
- i915_disable_vga_mem(dev);
}
}
intel_disable_fbc(dev);
- i915_enable_vga_mem(dev);
-
intel_disable_gt_powersave(dev);
ironlake_teardown_rc6(dev);
DRM_DEBUG_KMS("aux_ch native nack\n");
return -EREMOTEIO;
case AUX_NATIVE_REPLY_DEFER:
- udelay(100);
+ /*
+ * For now, just give more slack to branch devices. We
+ * could check the DPCD for I2C bit rate capabilities,
+ * and if available, adjust the interval. We could also
+ * be more careful with DP-to-Legacy adapters where a
+ * long legacy cable may force very low I2C bit rates.
+ */
+ if (intel_dp->dpcd[DP_DOWNSTREAMPORT_PRESENT] &
+ DP_DWN_STRM_PORT_PRESENT)
+ usleep_range(500, 600);
+ else
+ usleep_range(300, 400);
continue;
default:
DRM_ERROR("aux_ch invalid native reply 0x%02x\n",
else
pipe_config->port_clock = 270000;
}
+
+ if (is_edp(intel_dp) && dev_priv->vbt.edp_bpp &&
+ pipe_config->pipe_bpp > dev_priv->vbt.edp_bpp) {
+ /*
+ * This is a big fat ugly hack.
+ *
+ * Some machines in UEFI boot mode provide us a VBT that has 18
+ * bpp and 1.62 GHz link bandwidth for eDP, which for reasons
+ * unknown we fail to light up. Yet the same BIOS boots up with
+ * 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
+ * max, not what it tells us to use.
+ *
+ * Note: This will still be broken if the eDP panel is not lit
+ * up by the BIOS, and thus we can't get the mode at module
+ * load.
+ */
+ DRM_DEBUG_KMS("pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
+ pipe_config->pipe_bpp, dev_priv->vbt.edp_bpp);
+ dev_priv->vbt.edp_bpp = pipe_config->pipe_bpp;
+ }
}
static bool is_edp_psr(struct intel_dp *intel_dp)
/* Avoid continuous PSR exit by masking memup and hpd */
I915_WRITE(EDP_PSR_DEBUG_CTL, EDP_PSR_DEBUG_MASK_MEMUP |
- EDP_PSR_DEBUG_MASK_HPD);
+ EDP_PSR_DEBUG_MASK_HPD | EDP_PSR_DEBUG_MASK_LPSP);
intel_dp->psr_setup_done = true;
}
struct {
u32 pos;
u32 size;
+ bool enabled;
} pch_pfit;
/* FDI configuration, only valid if has_pch_encoder is set. */
extern bool
intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector);
extern void intel_ddi_fdi_disable(struct drm_crtc *crtc);
+extern void intel_ddi_get_config(struct intel_encoder *encoder,
+ struct intel_crtc_config *pipe_config);
extern void intel_display_handle_reset(struct drm_device *dev);
extern bool intel_set_cpu_fifo_underrun_reporting(struct drm_device *dev,
extern void hsw_pc8_restore_interrupts(struct drm_device *dev);
extern void intel_aux_display_runtime_get(struct drm_i915_private *dev_priv);
extern void intel_aux_display_runtime_put(struct drm_i915_private *dev_priv);
-extern void i915_disable_vga_mem(struct drm_device *dev);
#endif /* __INTEL_DRV_H__ */
C(vtotal);
C(clock);
#undef C
+
+ drm_mode_set_crtcinfo(adjusted_mode, 0);
}
if (intel_dvo->dev.dev_ops->mode_fixup)
},
{
.callback = intel_no_lvds_dmi_callback,
+ .ident = "Intel D410PT",
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "Intel"),
+ DMI_MATCH(DMI_BOARD_NAME, "D410PT"),
+ },
+ },
+ {
+ .callback = intel_no_lvds_dmi_callback,
+ .ident = "Intel D425KT",
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "Intel"),
+ DMI_EXACT_MATCH(DMI_BOARD_NAME, "D425KT"),
+ },
+ },
+ {
+ .callback = intel_no_lvds_dmi_callback,
.ident = "Intel D510MO",
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "Intel"),
done:
pipe_config->pch_pfit.pos = (x << 16) | y;
pipe_config->pch_pfit.size = (width << 16) | height;
+ pipe_config->pch_pfit.enabled = pipe_config->pch_pfit.size != 0;
}
static void
struct drm_crtc *crtc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
- uint32_t pixel_rate, pfit_size;
+ uint32_t pixel_rate;
pixel_rate = intel_crtc->config.adjusted_mode.clock;
/* We only use IF-ID interlacing. If we ever use PF-ID we'll need to
* adjust the pixel_rate here. */
- pfit_size = intel_crtc->config.pch_pfit.size;
- if (pfit_size) {
+ if (intel_crtc->config.pch_pfit.enabled) {
uint64_t pipe_w, pipe_h, pfit_w, pfit_h;
+ uint32_t pfit_size = intel_crtc->config.pch_pfit.size;
pipe_w = intel_crtc->config.requested_mode.hdisplay;
pipe_h = intel_crtc->config.requested_mode.vdisplay;
dev_priv->rps.rpe_delay),
dev_priv->rps.rpe_delay);
- INIT_DELAYED_WORK(&dev_priv->rps.vlv_work, vlv_rps_timer_work);
-
valleyview_set_rps(dev_priv->dev, dev_priv->rps.rpe_delay);
gen6_enable_rps_interrupts(dev);
* gating for the panel power sequencer or it will fail to
* start up when no ports are active.
*/
- I915_WRITE(SOUTH_DSPCLK_GATE_D, PCH_DPLSUNIT_CLOCK_GATE_DISABLE);
+ I915_WRITE(SOUTH_DSPCLK_GATE_D, PCH_DPLSUNIT_CLOCK_GATE_DISABLE |
+ PCH_DPLUNIT_CLOCK_GATE_DISABLE |
+ PCH_CPUNIT_CLOCK_GATE_DISABLE);
I915_WRITE(SOUTH_CHICKEN2, I915_READ(SOUTH_CHICKEN2) |
DPLS_EDP_PPS_FIX_DIS);
/* The below fixes the weird display corruption, a few pixels shifted
I915_WRITE(GEN7_L3_CHICKEN_MODE_REGISTER,
GEN7_WA_L3_CHICKEN_MODE);
+ /* L3 caching of data atomics doesn't work -- disable it. */
+ I915_WRITE(HSW_SCRATCH1, HSW_SCRATCH1_L3_DATA_ATOMICS_DISABLE);
+ I915_WRITE(HSW_ROW_CHICKEN3,
+ _MASKED_BIT_ENABLE(HSW_ROW_CHICKEN3_L3_GLOBAL_ATOMICS_DISABLE));
+
/* This is required by WaCatErrorRejectionIssue:hsw */
I915_WRITE(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG,
I915_READ(GEN7_SQ_CHICKEN_MBCUNIT_CONFIG) |
INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work,
intel_gen6_powersave_work);
+
+ INIT_DELAYED_WORK(&dev_priv->rps.vlv_work, vlv_rps_timer_work);
}
uint16_t h_sync_offset, v_sync_offset;
int mode_clock;
+ memset(dtd, 0, sizeof(*dtd));
+
width = mode->hdisplay;
height = mode->vdisplay;
if (mode->flags & DRM_MODE_FLAG_PVSYNC)
dtd->part2.dtd_flags |= DTD_FLAG_VSYNC_POSITIVE;
- dtd->part2.sdvo_flags = 0;
dtd->part2.v_sync_off_high = v_sync_offset & 0xc0;
- dtd->part2.reserved = 0;
}
-static void intel_sdvo_get_mode_from_dtd(struct drm_display_mode * mode,
+static void intel_sdvo_get_mode_from_dtd(struct drm_display_mode *pmode,
const struct intel_sdvo_dtd *dtd)
{
- mode->hdisplay = dtd->part1.h_active;
- mode->hdisplay += ((dtd->part1.h_high >> 4) & 0x0f) << 8;
- mode->hsync_start = mode->hdisplay + dtd->part2.h_sync_off;
- mode->hsync_start += (dtd->part2.sync_off_width_high & 0xc0) << 2;
- mode->hsync_end = mode->hsync_start + dtd->part2.h_sync_width;
- mode->hsync_end += (dtd->part2.sync_off_width_high & 0x30) << 4;
- mode->htotal = mode->hdisplay + dtd->part1.h_blank;
- mode->htotal += (dtd->part1.h_high & 0xf) << 8;
-
- mode->vdisplay = dtd->part1.v_active;
- mode->vdisplay += ((dtd->part1.v_high >> 4) & 0x0f) << 8;
- mode->vsync_start = mode->vdisplay;
- mode->vsync_start += (dtd->part2.v_sync_off_width >> 4) & 0xf;
- mode->vsync_start += (dtd->part2.sync_off_width_high & 0x0c) << 2;
- mode->vsync_start += dtd->part2.v_sync_off_high & 0xc0;
- mode->vsync_end = mode->vsync_start +
+ struct drm_display_mode mode = {};
+
+ mode.hdisplay = dtd->part1.h_active;
+ mode.hdisplay += ((dtd->part1.h_high >> 4) & 0x0f) << 8;
+ mode.hsync_start = mode.hdisplay + dtd->part2.h_sync_off;
+ mode.hsync_start += (dtd->part2.sync_off_width_high & 0xc0) << 2;
+ mode.hsync_end = mode.hsync_start + dtd->part2.h_sync_width;
+ mode.hsync_end += (dtd->part2.sync_off_width_high & 0x30) << 4;
+ mode.htotal = mode.hdisplay + dtd->part1.h_blank;
+ mode.htotal += (dtd->part1.h_high & 0xf) << 8;
+
+ mode.vdisplay = dtd->part1.v_active;
+ mode.vdisplay += ((dtd->part1.v_high >> 4) & 0x0f) << 8;
+ mode.vsync_start = mode.vdisplay;
+ mode.vsync_start += (dtd->part2.v_sync_off_width >> 4) & 0xf;
+ mode.vsync_start += (dtd->part2.sync_off_width_high & 0x0c) << 2;
+ mode.vsync_start += dtd->part2.v_sync_off_high & 0xc0;
+ mode.vsync_end = mode.vsync_start +
(dtd->part2.v_sync_off_width & 0xf);
- mode->vsync_end += (dtd->part2.sync_off_width_high & 0x3) << 4;
- mode->vtotal = mode->vdisplay + dtd->part1.v_blank;
- mode->vtotal += (dtd->part1.v_high & 0xf) << 8;
+ mode.vsync_end += (dtd->part2.sync_off_width_high & 0x3) << 4;
+ mode.vtotal = mode.vdisplay + dtd->part1.v_blank;
+ mode.vtotal += (dtd->part1.v_high & 0xf) << 8;
- mode->clock = dtd->part1.clock * 10;
+ mode.clock = dtd->part1.clock * 10;
- mode->flags &= ~(DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC);
if (dtd->part2.dtd_flags & DTD_FLAG_INTERLACE)
- mode->flags |= DRM_MODE_FLAG_INTERLACE;
+ mode.flags |= DRM_MODE_FLAG_INTERLACE;
if (dtd->part2.dtd_flags & DTD_FLAG_HSYNC_POSITIVE)
- mode->flags |= DRM_MODE_FLAG_PHSYNC;
+ mode.flags |= DRM_MODE_FLAG_PHSYNC;
+ else
+ mode.flags |= DRM_MODE_FLAG_NHSYNC;
if (dtd->part2.dtd_flags & DTD_FLAG_VSYNC_POSITIVE)
- mode->flags |= DRM_MODE_FLAG_PVSYNC;
+ mode.flags |= DRM_MODE_FLAG_PVSYNC;
+ else
+ mode.flags |= DRM_MODE_FLAG_NVSYNC;
+
+ drm_mode_set_crtcinfo(&mode, 0);
+
+ drm_mode_copy(pmode, &mode);
}
static bool intel_sdvo_check_supp_encode(struct intel_sdvo *intel_sdvo)
DRM_DEBUG_KMS("forcing bpc to 8 for TV\n");
pipe_config->pipe_bpp = 8*3;
+ /* TV has its own notion of sync and other mode flags, so clear them. */
+ pipe_config->adjusted_mode.flags = 0;
+
+ /*
+ * FIXME: We don't check whether the input mode is actually what we want
+ * or whether userspace is doing something stupid.
+ */
+
return true;
}
/* reset completed fence seqno, just discard anything pending: */
adreno_gpu->memptrs->fence = gpu->submitted_fence;
+ adreno_gpu->memptrs->rptr = 0;
+ adreno_gpu->memptrs->wptr = 0;
gpu->funcs->pm_resume(gpu);
ret = gpu->funcs->hw_init(gpu);
return;
} while(time_before(jiffies, t));
- DRM_ERROR("timeout waiting for %s to drain ringbuffer!\n", gpu->name);
+ DRM_ERROR("%s: timeout waiting to drain ringbuffer!\n", gpu->name);
/* TODO maybe we need to reset GPU here to recover from hang? */
}
{
struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
uint32_t freedwords;
+ unsigned long t = jiffies + ADRENO_IDLE_TIMEOUT;
do {
uint32_t size = gpu->rb->size / 4;
uint32_t wptr = get_wptr(gpu->rb);
uint32_t rptr = adreno_gpu->memptrs->rptr;
freedwords = (rptr + (size - 1) - wptr) % size;
+
+ if (time_after(jiffies, t)) {
+ DRM_ERROR("%s: timeout waiting for ringbuffer space\n", gpu->name);
+ break;
+ }
} while(freedwords < ndwords);
}
#include "msm_drv.h"
#include "mdp4_kms.h"
-#include <mach/iommu.h>
-
static struct mdp4_platform_config *mdp4_get_config(struct platform_device *dev);
static int mdp4_hw_init(struct msm_kms *kms)
#include "msm_drv.h"
#include "msm_gpu.h"
-#include <mach/iommu.h>
-
static void msm_fb_output_poll_changed(struct drm_device *dev)
{
struct msm_drm_private *priv = dev->dev_private;
int i, ret;
for (i = 0; i < cnt; i++) {
+ /* TODO maybe some day msm iommu won't require this hack: */
+ struct device *msm_iommu_get_ctx(const char *ctx_name);
struct device *ctx = msm_iommu_get_ctx(names[i]);
if (!ctx)
continue;
* imx drm driver on iMX5
*/
dev_err(dev->dev, "failed to load kms\n");
- ret = PTR_ERR(priv->kms);
+ ret = PTR_ERR(kms);
goto fail;
}
struct timespec *timeout)
{
struct msm_drm_private *priv = dev->dev_private;
- unsigned long timeout_jiffies = timespec_to_jiffies(timeout);
- unsigned long start_jiffies = jiffies;
- unsigned long remaining_jiffies;
int ret;
- if (time_after(start_jiffies, timeout_jiffies))
- remaining_jiffies = 0;
- else
- remaining_jiffies = timeout_jiffies - start_jiffies;
-
- ret = wait_event_interruptible_timeout(priv->fence_event,
- priv->completed_fence >= fence,
- remaining_jiffies);
- if (ret == 0) {
- DBG("timeout waiting for fence: %u (completed: %u)",
- fence, priv->completed_fence);
- ret = -ETIMEDOUT;
- } else if (ret != -ERESTARTSYS) {
- ret = 0;
+ if (!priv->gpu)
+ return 0;
+
+ if (fence > priv->gpu->submitted_fence) {
+ DRM_ERROR("waiting on invalid fence: %u (of %u)\n",
+ fence, priv->gpu->submitted_fence);
+ return -EINVAL;
+ }
+
+ if (!timeout) {
+ /* no-wait: */
+ ret = fence_completed(dev, fence) ? 0 : -EBUSY;
+ } else {
+ unsigned long timeout_jiffies = timespec_to_jiffies(timeout);
+ unsigned long start_jiffies = jiffies;
+ unsigned long remaining_jiffies;
+
+ if (time_after(start_jiffies, timeout_jiffies))
+ remaining_jiffies = 0;
+ else
+ remaining_jiffies = timeout_jiffies - start_jiffies;
+
+ ret = wait_event_interruptible_timeout(priv->fence_event,
+ fence_completed(dev, fence),
+ remaining_jiffies);
+
+ if (ret == 0) {
+ DBG("timeout waiting for fence: %u (completed: %u)",
+ fence, priv->completed_fence);
+ ret = -ETIMEDOUT;
+ } else if (ret != -ERESTARTSYS) {
+ ret = 0;
+ }
}
return ret;
.gem_vm_ops = &vm_ops,
.dumb_create = msm_gem_dumb_create,
.dumb_map_offset = msm_gem_dumb_map_offset,
- .dumb_destroy = msm_gem_dumb_destroy,
+ .dumb_destroy = drm_gem_dumb_destroy,
#ifdef CONFIG_DEBUG_FS
.debugfs_init = msm_debugfs_init,
.debugfs_cleanup = msm_debugfs_cleanup,
int msm_gem_queue_inactive_work(struct drm_gem_object *obj,
struct work_struct *work);
void msm_gem_move_to_active(struct drm_gem_object *obj,
- struct msm_gpu *gpu, uint32_t fence);
+ struct msm_gpu *gpu, bool write, uint32_t fence);
void msm_gem_move_to_inactive(struct drm_gem_object *obj);
int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op,
struct timespec *timeout);
#define DBG(fmt, ...) DRM_DEBUG(fmt"\n", ##__VA_ARGS__)
#define VERB(fmt, ...) if (0) DRM_DEBUG(fmt"\n", ##__VA_ARGS__)
+static inline bool fence_completed(struct drm_device *dev, uint32_t fence)
+{
+ struct msm_drm_private *priv = dev->dev_private;
+ return priv->completed_fence >= fence;
+}
+
static inline int align_pitch(int width, int bpp)
{
int bytespp = (bpp + 7) / 8;
}
msm_obj->sgt = drm_prime_pages_to_sg(p, npages);
- if (!msm_obj->sgt) {
+ if (IS_ERR(msm_obj->sgt)) {
dev_err(dev->dev, "failed to allocate sgt\n");
- return ERR_PTR(-ENOMEM);
+ return ERR_CAST(msm_obj->sgt);
}
msm_obj->pages = p;
out:
switch (ret) {
case -EAGAIN:
- set_need_resched();
case 0:
case -ERESTARTSYS:
case -EINTR:
MSM_BO_SCANOUT | MSM_BO_WC, &args->handle);
}
-int msm_gem_dumb_destroy(struct drm_file *file, struct drm_device *dev,
- uint32_t handle)
-{
- /* No special work needed, drop the reference and see what falls out */
- return drm_gem_handle_delete(file, handle);
-}
-
int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
uint32_t handle, uint64_t *offset)
{
}
void msm_gem_move_to_active(struct drm_gem_object *obj,
- struct msm_gpu *gpu, uint32_t fence)
+ struct msm_gpu *gpu, bool write, uint32_t fence)
{
struct msm_gem_object *msm_obj = to_msm_bo(obj);
msm_obj->gpu = gpu;
- msm_obj->fence = fence;
+ if (write)
+ msm_obj->write_fence = fence;
+ else
+ msm_obj->read_fence = fence;
list_del_init(&msm_obj->mm_list);
list_add_tail(&msm_obj->mm_list, &gpu->active_list);
}
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
msm_obj->gpu = NULL;
- msm_obj->fence = 0;
+ msm_obj->read_fence = 0;
+ msm_obj->write_fence = 0;
list_del_init(&msm_obj->mm_list);
list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
struct msm_gem_object *msm_obj = to_msm_bo(obj);
int ret = 0;
- if (is_active(msm_obj) && !(op & MSM_PREP_NOSYNC))
- ret = msm_wait_fence_interruptable(dev, msm_obj->fence, timeout);
+ if (is_active(msm_obj)) {
+ uint32_t fence = 0;
+
+ if (op & MSM_PREP_READ)
+ fence = msm_obj->write_fence;
+ if (op & MSM_PREP_WRITE)
+ fence = max(fence, msm_obj->read_fence);
+ if (op & MSM_PREP_NOSYNC)
+ timeout = NULL;
+
+ ret = msm_wait_fence_interruptable(dev, fence, timeout);
+ }
/* TODO cache maintenance */
uint64_t off = drm_vma_node_start(&obj->vma_node);
WARN_ON(!mutex_is_locked(&dev->struct_mutex));
- seq_printf(m, "%08x: %c(%d) %2d (%2d) %08llx %p %d\n",
+ seq_printf(m, "%08x: %c(r=%u,w=%u) %2d (%2d) %08llx %p %d\n",
msm_obj->flags, is_active(msm_obj) ? 'A' : 'I',
- msm_obj->fence, obj->name, obj->refcount.refcount.counter,
+ msm_obj->read_fence, msm_obj->write_fence,
+ obj->name, obj->refcount.refcount.counter,
off, msm_obj->vaddr, obj->size);
}
*/
struct list_head mm_list;
struct msm_gpu *gpu; /* non-null if active */
- uint32_t fence;
+ uint32_t read_fence, write_fence;
/* Transiently in the process of submit ioctl, objects associated
* with the submit are on submit->bo_list.. this only lasts for
}
if (submit_bo.flags & BO_INVALID_FLAGS) {
- DBG("invalid flags: %x", submit_bo.flags);
+ DRM_ERROR("invalid flags: %x\n", submit_bo.flags);
ret = -EINVAL;
goto out_unlock;
}
*/
obj = idr_find(&file->object_idr, submit_bo.handle);
if (!obj) {
- DBG("invalid handle %u at index %u", submit_bo.handle, i);
+ DRM_ERROR("invalid handle %u at index %u\n", submit_bo.handle, i);
ret = -EINVAL;
goto out_unlock;
}
msm_obj = to_msm_bo(obj);
if (!list_empty(&msm_obj->submit_entry)) {
- DBG("handle %u at index %u already on submit list",
+ DRM_ERROR("handle %u at index %u already on submit list\n",
submit_bo.handle, i);
ret = -EINVAL;
goto out_unlock;
struct msm_gem_object **obj, uint32_t *iova, bool *valid)
{
if (idx >= submit->nr_bos) {
- DBG("invalid buffer index: %u (out of %u)", idx, submit->nr_bos);
- return EINVAL;
+ DRM_ERROR("invalid buffer index: %u (out of %u)\n",
+ idx, submit->nr_bos);
+ return -EINVAL;
}
if (obj)
int ret;
if (offset % 4) {
- DBG("non-aligned cmdstream buffer: %u", offset);
+ DRM_ERROR("non-aligned cmdstream buffer: %u\n", offset);
return -EINVAL;
}
return -EFAULT;
if (submit_reloc.submit_offset % 4) {
- DBG("non-aligned reloc offset: %u",
+ DRM_ERROR("non-aligned reloc offset: %u\n",
submit_reloc.submit_offset);
return -EINVAL;
}
if ((off >= (obj->base.size / 4)) ||
(off < last_offset)) {
- DBG("invalid offset %u at reloc %u", off, i);
+ DRM_ERROR("invalid offset %u at reloc %u\n", off, i);
return -EINVAL;
}
goto out;
if (submit_cmd.size % 4) {
- DBG("non-aligned cmdstream buffer size: %u",
+ DRM_ERROR("non-aligned cmdstream buffer size: %u\n",
submit_cmd.size);
ret = -EINVAL;
goto out;
}
- if (submit_cmd.size >= msm_obj->base.size) {
- DBG("invalid cmdstream size: %u", submit_cmd.size);
+ if ((submit_cmd.size + submit_cmd.submit_offset) >=
+ msm_obj->base.size) {
+ DRM_ERROR("invalid cmdstream size: %u\n", submit_cmd.size);
ret = -EINVAL;
goto out;
}
static void bs_init(struct msm_gpu *gpu, struct platform_device *pdev)
{
struct drm_device *dev = gpu->dev;
- struct kgsl_device_platform_data *pdata = pdev->dev.platform_data;
+ struct kgsl_device_platform_data *pdata;
if (!pdev) {
dev_err(dev->dev, "could not find dtv pdata\n");
return;
}
+ pdata = pdev->dev.platform_data;
if (pdata->bus_scale_table) {
gpu->bsc = msm_bus_scale_register_client(pdata->bus_scale_table);
DBG("bus scale client: %08x", gpu->bsc);
static void hangcheck_handler(unsigned long data)
{
struct msm_gpu *gpu = (struct msm_gpu *)data;
+ struct drm_device *dev = gpu->dev;
+ struct msm_drm_private *priv = dev->dev_private;
uint32_t fence = gpu->funcs->last_fence(gpu);
if (fence != gpu->hangcheck_fence) {
gpu->hangcheck_fence = fence;
} else if (fence < gpu->submitted_fence) {
/* no progress and not done.. hung! */
- struct msm_drm_private *priv = gpu->dev->dev_private;
gpu->hangcheck_fence = fence;
+ dev_err(dev->dev, "%s: hangcheck detected gpu lockup!\n",
+ gpu->name);
+ dev_err(dev->dev, "%s: completed fence: %u\n",
+ gpu->name, fence);
+ dev_err(dev->dev, "%s: submitted fence: %u\n",
+ gpu->name, gpu->submitted_fence);
queue_work(priv->wq, &gpu->recover_work);
}
/* if still more pending work, reset the hangcheck timer: */
if (gpu->submitted_fence > gpu->hangcheck_fence)
hangcheck_timer_reset(gpu);
+
+ /* workaround for missing irq: */
+ queue_work(priv->wq, &gpu->retire_work);
}
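
The hangcheck handler above boils down to a periodic progress check: remember the last completed fence, and if it has not advanced while work is still outstanding, declare a lockup and queue recovery. A minimal sketch of that decision in plain C, purely illustrative (the msm driver drives this from a kernel timer and workqueue):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct gpu_state {
        uint32_t submitted_fence;   /* last fence handed to the GPU */
        uint32_t completed_fence;   /* last fence the GPU reports finished */
        uint32_t hangcheck_fence;   /* completed fence seen on the previous tick */
};

/* Returns true when recovery should be queued: no progress since the last
 * tick while work is still pending -- the same condition the handler uses. */
static bool hangcheck_tick(struct gpu_state *gpu)
{
        uint32_t fence = gpu->completed_fence;

        if (fence != gpu->hangcheck_fence) {
                gpu->hangcheck_fence = fence;       /* progress: just remember it */
                return false;
        }
        return fence < gpu->submitted_fence;        /* stalled and not yet done */
}

int main(void)
{
        struct gpu_state gpu = { .submitted_fence = 5, .completed_fence = 3 };
        printf("tick 1: hung=%d\n", hangcheck_tick(&gpu));  /* progress recorded */
        printf("tick 2: hung=%d\n", hangcheck_tick(&gpu));  /* no progress -> hung */
        return 0;
}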
/*
obj = list_first_entry(&gpu->active_list,
struct msm_gem_object, mm_list);
- if (obj->fence <= fence) {
+ if ((obj->read_fence <= fence) &&
+ (obj->write_fence <= fence)) {
/* move to inactive: */
msm_gem_move_to_inactive(&obj->base);
msm_gem_put_iova(&obj->base, gpu->id);
submit->gpu->id, &iova);
}
- msm_gem_move_to_active(&msm_obj->base, gpu, submit->fence);
+ if (submit->bos[i].flags & MSM_SUBMIT_BO_READ)
+ msm_gem_move_to_active(&msm_obj->base, gpu, false, submit->fence);
+
+ if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE)
+ msm_gem_move_to_active(&msm_obj->base, gpu, true, submit->fence);
}
hangcheck_timer_reset(gpu);
mutex_unlock(&dev->struct_mutex);
init_reserved(struct nvbios_init *init)
{
u8 opcode = nv_ro08(init->bios, init->offset);
- trace("RESERVED\t0x%02x\n", opcode);
- init->offset += 1;
+ u8 length, i;
+
+ switch (opcode) {
+ case 0xaa:
+ length = 4;
+ break;
+ default:
+ length = 1;
+ break;
+ }
+
+ trace("RESERVED 0x%02x\t", opcode);
+ for (i = 1; i < length; i++)
+ cont(" 0x%02x", nv_ro08(init->bios, init->offset + i));
+ cont("\n");
+ init->offset += length;
}
/**
data = init_rdvgai(init, 0x03c4, 0x01);
init_wrvgai(init, 0x03c4, 0x01, data | 0x20);
- while ((addr = nv_ro32(bios, sdata)) != 0xffffffff) {
+ for (; (addr = nv_ro32(bios, sdata)) != 0xffffffff; sdata += 4) {
switch (addr) {
case 0x10021c: /* CKE_NORMAL */
case 0x1002d0: /* CMD_REFRESH */
[0x99] = { init_zm_auxch },
[0x9a] = { init_i2c_long_if },
[0xa9] = { init_gpio_ne },
+ [0xaa] = { init_reserved },
};
#define init_opcode_nr (sizeof(init_opcode) / sizeof(init_opcode[0]))
pmc->use_msi = false;
break;
default:
- pmc->use_msi = nouveau_boolopt(device->cfgopt, "NvMSI", true);
+ pmc->use_msi = nouveau_boolopt(device->cfgopt, "NvMSI", false);
if (pmc->use_msi) {
pmc->use_msi = pci_enable_msi(device->pdev) == 0;
if (pmc->use_msi) {
{
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_display *disp;
- u32 pclass = dev->pdev->class >> 8;
int ret, gen;
disp = drm->display = kzalloc(sizeof(*disp), GFP_KERNEL);
drm_kms_helper_poll_init(dev);
drm_kms_helper_poll_disable(dev);
- if (nouveau_modeset == 1 ||
- (nouveau_modeset < 0 && pclass == PCI_CLASS_DISPLAY_VGA)) {
- if (drm->vbios.dcb.entries) {
- if (nv_device(drm->device)->card_type < NV_50)
- ret = nv04_display_create(dev);
- else
- ret = nv50_display_create(dev);
- } else {
- ret = 0;
- }
-
- if (ret)
- goto disp_create_err;
+ if (drm->vbios.dcb.entries) {
+ if (nv_device(drm->device)->card_type < NV_50)
+ ret = nv04_display_create(dev);
+ else
+ ret = nv50_display_create(dev);
+ } else {
+ ret = 0;
+ }
- if (dev->mode_config.num_crtc) {
- ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
- if (ret)
- goto vblank_err;
- }
+ if (ret)
+ goto disp_create_err;
- nouveau_backlight_init(dev);
+ if (dev->mode_config.num_crtc) {
+ ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
+ if (ret)
+ goto vblank_err;
}
+ nouveau_backlight_init(dev);
return 0;
vblank_err:
int preferred_bpp;
int ret;
- if (!dev->mode_config.num_crtc)
+ if (!dev->mode_config.num_crtc ||
+ (dev->pdev->class >> 8) != PCI_CLASS_DISPLAY_VGA)
return 0;
fbcon = kzalloc(sizeof(struct nouveau_fbdev), GFP_KERNEL);
else
nvbe->ttm.ttm.func = &nv50_sgdma_backend;
- if (ttm_dma_tt_init(&nvbe->ttm, bdev, size, page_flags, dummy_read_page)) {
- kfree(nvbe);
+ if (ttm_dma_tt_init(&nvbe->ttm, bdev, size, page_flags, dummy_read_page))
return NULL;
- }
return &nvbe->ttm.ttm;
}
switch (connector->connector_type) {
case DRM_MODE_CONNECTOR_DVII:
case DRM_MODE_CONNECTOR_HDMIB: /* HDMI-B is basically DL-DVI; analog works fine */
- if (drm_detect_hdmi_monitor(radeon_connector->edid) &&
- radeon_audio)
- return ATOM_ENCODER_MODE_HDMI;
- else if (radeon_connector->use_digital)
+ if (radeon_audio != 0) {
+ if (radeon_connector->use_digital &&
+ (radeon_connector->audio == RADEON_AUDIO_ENABLE))
+ return ATOM_ENCODER_MODE_HDMI;
+ else if (drm_detect_hdmi_monitor(radeon_connector->edid) &&
+ (radeon_connector->audio == RADEON_AUDIO_AUTO))
+ return ATOM_ENCODER_MODE_HDMI;
+ else if (radeon_connector->use_digital)
+ return ATOM_ENCODER_MODE_DVI;
+ else
+ return ATOM_ENCODER_MODE_CRT;
+ } else if (radeon_connector->use_digital) {
return ATOM_ENCODER_MODE_DVI;
- else
+ } else {
return ATOM_ENCODER_MODE_CRT;
+ }
break;
case DRM_MODE_CONNECTOR_DVID:
case DRM_MODE_CONNECTOR_HDMIA:
default:
- if (drm_detect_hdmi_monitor(radeon_connector->edid) &&
- radeon_audio)
- return ATOM_ENCODER_MODE_HDMI;
- else
+ if (radeon_audio != 0) {
+ if (radeon_connector->audio == RADEON_AUDIO_ENABLE)
+ return ATOM_ENCODER_MODE_HDMI;
+ else if (drm_detect_hdmi_monitor(radeon_connector->edid) &&
+ (radeon_connector->audio == RADEON_AUDIO_AUTO))
+ return ATOM_ENCODER_MODE_HDMI;
+ else
+ return ATOM_ENCODER_MODE_DVI;
+ } else {
return ATOM_ENCODER_MODE_DVI;
+ }
break;
case DRM_MODE_CONNECTOR_LVDS:
return ATOM_ENCODER_MODE_LVDS;
case DRM_MODE_CONNECTOR_DisplayPort:
dig_connector = radeon_connector->con_priv;
if ((dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) ||
- (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP))
+ (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) {
return ATOM_ENCODER_MODE_DP;
- else if (drm_detect_hdmi_monitor(radeon_connector->edid) &&
- radeon_audio)
- return ATOM_ENCODER_MODE_HDMI;
- else
+ } else if (radeon_audio != 0) {
+ if (radeon_connector->audio == RADEON_AUDIO_ENABLE)
+ return ATOM_ENCODER_MODE_HDMI;
+ else if (drm_detect_hdmi_monitor(radeon_connector->edid) &&
+ (radeon_connector->audio == RADEON_AUDIO_AUTO))
+ return ATOM_ENCODER_MODE_HDMI;
+ else
+ return ATOM_ENCODER_MODE_DVI;
+ } else {
return ATOM_ENCODER_MODE_DVI;
+ }
break;
case DRM_MODE_CONNECTOR_eDP:
return ATOM_ENCODER_MODE_DP;
atombios_dig_encoder_setup(encoder, ATOM_ENABLE, 0);
atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_SETUP, 0, 0);
atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE, 0, 0);
- /* some early dce3.2 boards have a bug in their transmitter control table */
- if ((rdev->family != CHIP_RV710) && (rdev->family != CHIP_RV730))
+ /* some dce3.x boards have a bug in their transmitter control table.
+ * ACTION_ENABLE_OUTPUT can probably be dropped since ACTION_ENABLE
+ * does the same thing and more.
+ */
+ if ((rdev->family != CHIP_RV710) && (rdev->family != CHIP_RV730) &&
+ (rdev->family != CHIP_RS780) && (rdev->family != CHIP_RS880))
atombios_dig_transmitter_setup(encoder, ATOM_TRANSMITTER_ACTION_ENABLE_OUTPUT, 0, 0);
}
if (ENCODER_MODE_IS_DP(atombios_get_encoder_mode(encoder)) && connector) {
{ 25000, 30000, RADEON_SCLK_UP }
};
+void btc_get_max_clock_from_voltage_dependency_table(struct radeon_clock_voltage_dependency_table *table,
+ u32 *max_clock)
+{
+ u32 i, clock = 0;
+
+ if ((table == NULL) || (table->count == 0)) {
+ *max_clock = clock;
+ return;
+ }
+
+ for (i = 0; i < table->count; i++) {
+ if (clock < table->entries[i].clk)
+ clock = table->entries[i].clk;
+ }
+ *max_clock = clock;
+}
+
void btc_apply_voltage_dependency_rules(struct radeon_clock_voltage_dependency_table *table,
u32 clock, u16 max_voltage, u16 *voltage)
{
}
j++;
- if (j > SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE)
+ if (j >= SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE)
return -EINVAL;
tmp = RREG32(MC_PMG_CMD_MRS);
}
j++;
- if (j > SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE)
+ if (j >= SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE)
return -EINVAL;
break;
case MC_SEQ_RESERVE_M >> 2:
}
j++;
- if (j > SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE)
+ if (j >= SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE)
return -EINVAL;
break;
default:
bool disable_mclk_switching;
u32 mclk, sclk;
u16 vddc, vddci;
+ u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
btc_dpm_vblank_too_short(rdev))
ps->low.vddci = max_limits->vddci;
}
+ /* limit clocks to max supported clocks based on voltage dependency tables */
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
+ &max_sclk_vddc);
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
+ &max_mclk_vddci);
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
+ &max_mclk_vddc);
+
+ if (max_sclk_vddc) {
+ if (ps->low.sclk > max_sclk_vddc)
+ ps->low.sclk = max_sclk_vddc;
+ if (ps->medium.sclk > max_sclk_vddc)
+ ps->medium.sclk = max_sclk_vddc;
+ if (ps->high.sclk > max_sclk_vddc)
+ ps->high.sclk = max_sclk_vddc;
+ }
+ if (max_mclk_vddci) {
+ if (ps->low.mclk > max_mclk_vddci)
+ ps->low.mclk = max_mclk_vddci;
+ if (ps->medium.mclk > max_mclk_vddci)
+ ps->medium.mclk = max_mclk_vddci;
+ if (ps->high.mclk > max_mclk_vddci)
+ ps->high.mclk = max_mclk_vddci;
+ }
+ if (max_mclk_vddc) {
+ if (ps->low.mclk > max_mclk_vddc)
+ ps->low.mclk = max_mclk_vddc;
+ if (ps->medium.mclk > max_mclk_vddc)
+ ps->medium.mclk = max_mclk_vddc;
+ if (ps->high.mclk > max_mclk_vddc)
+ ps->high.mclk = max_mclk_vddc;
+ }
+
/* XXX validate the min clocks required for display */
if (disable_mclk_switching) {
return ret;
}
- ret = rv770_dpm_force_performance_level(rdev, RADEON_DPM_FORCED_LEVEL_AUTO);
- if (ret) {
- DRM_ERROR("rv770_dpm_force_performance_level failed\n");
- return ret;
- }
-
return 0;
}
struct rv7xx_pl *pl);
void btc_apply_voltage_dependency_rules(struct radeon_clock_voltage_dependency_table *table,
u32 clock, u16 max_voltage, u16 *voltage);
+void btc_get_max_clock_from_voltage_dependency_table(struct radeon_clock_voltage_dependency_table *table,
+ u32 *max_clock);
void btc_apply_voltage_delta_rules(struct radeon_device *rdev,
u16 max_vddc, u16 max_vddci,
u16 *vddc, u16 *vddci);
};
extern u8 rv770_get_memory_module_index(struct radeon_device *rdev);
+extern void btc_get_max_clock_from_voltage_dependency_table(struct radeon_clock_voltage_dependency_table *table,
+ u32 *max_clock);
extern int ni_copy_and_switch_arb_sets(struct radeon_device *rdev,
u32 arb_freq_src, u32 arb_freq_dest);
extern u8 si_get_ddr3_mclk_frequency_ratio(u32 memory_clock);
struct radeon_clock_and_voltage_limits *max_limits;
bool disable_mclk_switching;
u32 sclk, mclk;
+ u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
int i;
if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
}
}
+ /* limit clocks to max supported clocks based on voltage dependency tables */
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
+ &max_sclk_vddc);
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
+ &max_mclk_vddci);
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
+ &max_mclk_vddc);
+
+ for (i = 0; i < ps->performance_level_count; i++) {
+ if (max_sclk_vddc) {
+ if (ps->performance_levels[i].sclk > max_sclk_vddc)
+ ps->performance_levels[i].sclk = max_sclk_vddc;
+ }
+ if (max_mclk_vddci) {
+ if (ps->performance_levels[i].mclk > max_mclk_vddci)
+ ps->performance_levels[i].mclk = max_mclk_vddci;
+ }
+ if (max_mclk_vddc) {
+ if (ps->performance_levels[i].mclk > max_mclk_vddc)
+ ps->performance_levels[i].mclk = max_mclk_vddc;
+ }
+ }
+
/* XXX validate the min clocks required for display */
if (disable_mclk_switching) {
if (pi->pcie_performance_request)
ci_notify_link_speed_change_after_state_change(rdev, new_ps, old_ps);
- ret = ci_dpm_force_performance_level(rdev, RADEON_DPM_FORCED_LEVEL_AUTO);
- if (ret) {
- DRM_ERROR("ci_dpm_force_performance_level failed\n");
- return ret;
- }
-
cik_update_cg(rdev, (RADEON_CG_BLOCK_GFX |
RADEON_CG_BLOCK_MC |
RADEON_CG_BLOCK_SDMA |
u32 smc_start_address,
const u8 *src, u32 byte_count, u32 limit)
{
+ unsigned long flags;
u32 data, original_data;
u32 addr;
u32 extra_shift;
- int ret;
+ int ret = 0;
if (smc_start_address & 3)
return -EINVAL;
addr = smc_start_address;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
while (byte_count >= 4) {
/* SMC address space is BE */
data = (src[0] << 24) | (src[1] << 16) | (src[2] << 8) | src[3];
ret = ci_set_smc_sram_address(rdev, addr, limit);
if (ret)
- return ret;
+ goto done;
WREG32(SMC_IND_DATA_0, data);
ret = ci_set_smc_sram_address(rdev, addr, limit);
if (ret)
- return ret;
+ goto done;
original_data = RREG32(SMC_IND_DATA_0);
ret = ci_set_smc_sram_address(rdev, addr, limit);
if (ret)
- return ret;
+ goto done;
WREG32(SMC_IND_DATA_0, data);
}
- return 0;
+
+done:
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
+
+ return ret;
}
void ci_start_smc(struct radeon_device *rdev)
int ci_load_smc_ucode(struct radeon_device *rdev, u32 limit)
{
+ unsigned long flags;
u32 ucode_start_address;
u32 ucode_size;
const u8 *src;
return -EINVAL;
src = (const u8 *)rdev->smc_fw->data;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
WREG32(SMC_IND_INDEX_0, ucode_start_address);
WREG32_P(SMC_IND_ACCESS_CNTL, AUTO_INCREMENT_IND_0, ~AUTO_INCREMENT_IND_0);
while (ucode_size >= 4) {
ucode_size -= 4;
}
WREG32_P(SMC_IND_ACCESS_CNTL, 0, ~AUTO_INCREMENT_IND_0);
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
return 0;
}
int ci_read_smc_sram_dword(struct radeon_device *rdev,
u32 smc_address, u32 *value, u32 limit)
{
+ unsigned long flags;
int ret;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
ret = ci_set_smc_sram_address(rdev, smc_address, limit);
- if (ret)
- return ret;
+ if (ret == 0)
+ *value = RREG32(SMC_IND_DATA_0);
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
- *value = RREG32(SMC_IND_DATA_0);
- return 0;
+ return ret;
}
int ci_write_smc_sram_dword(struct radeon_device *rdev,
u32 smc_address, u32 value, u32 limit)
{
+ unsigned long flags;
int ret;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
ret = ci_set_smc_sram_address(rdev, smc_address, limit);
- if (ret)
- return ret;
+ if (ret == 0)
+ WREG32(SMC_IND_DATA_0, value);
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
- WREG32(SMC_IND_DATA_0, value);
- return 0;
+ return ret;
}
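
The locking added above exists because SMC SRAM is reached through an index/data register pair: ci_set_smc_sram_address() presumably selects the address via the index register, and the following RREG32/WREG32 on SMC_IND_DATA_0 relies on that selection still being in place, so concurrent callers must be serialized by smc_idx_lock. A small hedged sketch of why an unserialized index/data pair breaks, with made-up register names rather than the radeon register map:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t sram[16];          /* pretend SMC SRAM, word addressed */
static uint32_t index_reg;         /* shared "index" register */
static pthread_mutex_t idx_lock = PTHREAD_MUTEX_INITIALIZER;

static void write_index(uint32_t addr) { index_reg = addr; }
static void write_data(uint32_t val)   { sram[index_reg] = val; }
static uint32_t read_data(void)        { return sram[index_reg]; }

/* Serialized read-modify-write through the pair; without the lock, another
 * thread could move index_reg between write_index() and read_data()/write_data()
 * and corrupt an unrelated word -- which is what the new spinlock prevents. */
static void sram_update(uint32_t addr, uint32_t val)
{
        pthread_mutex_lock(&idx_lock);
        write_index(addr);
        uint32_t old = read_data();
        write_data(old | val);
        pthread_mutex_unlock(&idx_lock);
}

int main(void)
{
        sram_update(3, 0x10);
        sram_update(3, 0x01);
        printf("sram[3] = 0x%02x\n", (unsigned)sram[3]);   /* prints 0x11 */
        return 0;
}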
static void cik_program_aspm(struct radeon_device *rdev);
static void cik_init_pg(struct radeon_device *rdev);
static void cik_init_cg(struct radeon_device *rdev);
+static void cik_fini_pg(struct radeon_device *rdev);
+static void cik_fini_cg(struct radeon_device *rdev);
+static void cik_enable_gui_idle_interrupt(struct radeon_device *rdev,
+ bool enable);
/* get temperature in millidegrees */
int ci_get_temp(struct radeon_device *rdev)
*/
u32 cik_pciep_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->pciep_idx_lock, flags);
WREG32(PCIE_INDEX, reg);
(void)RREG32(PCIE_INDEX);
r = RREG32(PCIE_DATA);
+ spin_unlock_irqrestore(&rdev->pciep_idx_lock, flags);
return r;
}
void cik_pciep_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->pciep_idx_lock, flags);
WREG32(PCIE_INDEX, reg);
(void)RREG32(PCIE_INDEX);
WREG32(PCIE_DATA, v);
(void)RREG32(PCIE_DATA);
+ spin_unlock_irqrestore(&rdev->pciep_idx_lock, flags);
}
static const u32 spectre_rlc_save_restore_register_list[] =
fw_name);
release_firmware(rdev->smc_fw);
rdev->smc_fw = NULL;
+ err = 0;
} else if (rdev->smc_fw->size != smc_req_size) {
printk(KERN_ERR
"cik_smc: Bogus length %zu in firmware \"%s\"\n",
} else if ((rdev->pdev->device == 0x1309) ||
(rdev->pdev->device == 0x130A) ||
(rdev->pdev->device == 0x130D) ||
- (rdev->pdev->device == 0x1313)) {
+ (rdev->pdev->device == 0x1313) ||
+ (rdev->pdev->device == 0x131D)) {
rdev->config.cik.max_cu_per_sh = 6;
rdev->config.cik.max_backends_per_se = 2;
} else if ((rdev->pdev->device == 0x1306) ||
rdev->config.cik.tile_config |= (3 << 0);
break;
}
- if ((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT)
- rdev->config.cik.tile_config |= 1 << 4;
- else
- rdev->config.cik.tile_config |= 0 << 4;
+ rdev->config.cik.tile_config |=
+ ((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT) << 4;
rdev->config.cik.tile_config |=
((gb_addr_config & PIPE_INTERLEAVE_SIZE_MASK) >> PIPE_INTERLEAVE_SIZE_SHIFT) << 8;
rdev->config.cik.tile_config |=
r = radeon_ib_get(rdev, ring->idx, &ib, NULL, 256);
if (r) {
DRM_ERROR("radeon: failed to get ib (%d).\n", r);
+ radeon_scratch_free(rdev, scratch);
return r;
}
ib.ptr[0] = PACKET3(PACKET3_SET_UCONFIG_REG, 1);
r = radeon_fence_wait(ib.fence, false);
if (r) {
DRM_ERROR("radeon: fence wait failed (%d).\n", r);
+ radeon_scratch_free(rdev, scratch);
+ radeon_ib_free(rdev, &ib);
return r;
}
for (i = 0; i < rdev->usec_timeout; i++) {
{
int r;
+ cik_enable_gui_idle_interrupt(rdev, false);
+
r = cik_cp_load_microcode(rdev);
if (r)
return r;
if (r)
return r;
+ cik_enable_gui_idle_interrupt(rdev, true);
+
return 0;
}
dev_info(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n",
RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS));
+ /* disable CG/PG */
+ cik_fini_pg(rdev);
+ cik_fini_cg(rdev);
+
/* stop the rlc */
cik_rlc_stop(rdev);
rdev->mc.aper_base = pci_resource_start(rdev->pdev, 0);
rdev->mc.aper_size = pci_resource_len(rdev->pdev, 0);
/* size in MB on si */
- rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024;
- rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024;
+ rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024ULL * 1024ULL;
+ rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024ULL * 1024ULL;
rdev->mc.visible_vram_size = rdev->mc.aper_size;
si_vram_gtt_location(rdev, &rdev->mc);
radeon_update_bandwidth_info(rdev);
u32 mc_id = (status & MEMORY_CLIENT_ID_MASK) >> MEMORY_CLIENT_ID_SHIFT;
u32 vmid = (status & FAULT_VMID_MASK) >> FAULT_VMID_SHIFT;
u32 protections = (status & PROTECTIONS_MASK) >> PROTECTIONS_SHIFT;
- char *block = (char *)&mc_client;
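+ /* unpack the 32-bit MC client id into a printable 4-character string */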
+ char block[5] = { mc_client >> 24, (mc_client >> 16) & 0xff,
+ (mc_client >> 8) & 0xff, mc_client & 0xff, 0 };
- printk("VM fault (0x%02x, vmid %d) at page %u, %s from %s (%d)\n",
+ printk("VM fault (0x%02x, vmid %d) at page %u, %s from '%s' (0x%08x) (%d)\n",
protections, vmid, addr,
(status & MEMORY_CLIENT_RW_MASK) ? "write" : "read",
- block, mc_id);
+ block, mc_client, mc_id);
}
/**
void cik_update_cg(struct radeon_device *rdev,
u32 block, bool enable)
{
+
if (block & RADEON_CG_BLOCK_GFX) {
+ cik_enable_gui_idle_interrupt(rdev, false);
/* order matters! */
if (enable) {
cik_enable_mgcg(rdev, true);
cik_enable_cgcg(rdev, false);
cik_enable_mgcg(rdev, false);
}
+ cik_enable_gui_idle_interrupt(rdev, true);
}
if (block & RADEON_CG_BLOCK_MC) {
{
u32 data, orig;
- if (enable && (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_CG)) {
+ if (enable && (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_PG)) {
orig = data = RREG32(RLC_PG_CNTL);
data |= GFX_PG_ENABLE;
if (orig != data)
if (rdev->pg_flags) {
cik_enable_sck_slowdown_on_pu(rdev, true);
cik_enable_sck_slowdown_on_pd(rdev, true);
- if (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_CG) {
+ if (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_PG) {
cik_init_gfx_cgpg(rdev);
cik_enable_cp_pg(rdev, true);
cik_enable_gds_pg(rdev, true);
{
if (rdev->pg_flags) {
cik_update_gfx_pg(rdev, false);
- if (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_CG) {
+ if (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_PG) {
cik_enable_cp_pg(rdev, false);
cik_enable_gds_pg(rdev, false);
}
u32 tmp;
/* gfx ring */
- WREG32(CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
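+ /* keep only the gui idle (CNTX_BUSY/CNTX_EMPTY) interrupt enables, clear the rest */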
+ tmp = RREG32(CP_INT_CNTL_RING0) &
+ (CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
+ WREG32(CP_INT_CNTL_RING0, tmp);
/* sdma */
tmp = RREG32(SDMA0_CNTL + SDMA0_REGISTER_OFFSET) & ~TRAP_ENABLE;
WREG32(SDMA0_CNTL + SDMA0_REGISTER_OFFSET, tmp);
*/
int cik_irq_set(struct radeon_device *rdev)
{
- u32 cp_int_cntl = CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE |
- PRIV_INSTR_INT_ENABLE | PRIV_REG_INT_ENABLE;
+ u32 cp_int_cntl;
u32 cp_m1p0, cp_m1p1, cp_m1p2, cp_m1p3;
u32 cp_m2p0, cp_m2p1, cp_m2p2, cp_m2p3;
u32 crtc1 = 0, crtc2 = 0, crtc3 = 0, crtc4 = 0, crtc5 = 0, crtc6 = 0;
return 0;
}
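+ /* preserve the current gui idle interrupt enables and add the privileged reg/instruction interrupts */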
+ cp_int_cntl = RREG32(CP_INT_CNTL_RING0) &
+ (CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
+ cp_int_cntl |= PRIV_INSTR_INT_ENABLE | PRIV_REG_INT_ENABLE;
+
hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
hpd3 = RREG32(DC_HPD3_INT_CONTROL) & ~DC_HPDx_INT_EN;
if (eg_pi->pcie_performance_request)
cypress_notify_link_speed_change_after_state_change(rdev, new_ps, old_ps);
- ret = rv770_dpm_force_performance_level(rdev, RADEON_DPM_FORCED_LEVEL_AUTO);
- if (ret) {
- DRM_ERROR("rv770_dpm_force_performance_level failed\n");
- return ret;
- }
-
return 0;
}
static u32 dce6_endpoint_rreg(struct radeon_device *rdev,
u32 block_offset, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->end_idx_lock, flags);
WREG32(AZ_F0_CODEC_ENDPOINT_INDEX + block_offset, reg);
r = RREG32(AZ_F0_CODEC_ENDPOINT_DATA + block_offset);
+ spin_unlock_irqrestore(&rdev->end_idx_lock, flags);
+
return r;
}
static void dce6_endpoint_wreg(struct radeon_device *rdev,
u32 block_offset, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->end_idx_lock, flags);
if (ASIC_IS_DCE8(rdev))
WREG32(AZ_F0_CODEC_ENDPOINT_INDEX + block_offset, reg);
else
WREG32(AZ_F0_CODEC_ENDPOINT_INDEX + block_offset,
AZ_ENDPOINT_REG_WRITE_EN | AZ_ENDPOINT_REG_INDEX(reg));
WREG32(AZ_F0_CODEC_ENDPOINT_DATA + block_offset, v);
+ spin_unlock_irqrestore(&rdev->end_idx_lock, flags);
}
#define RREG32_ENDPOINT(block, reg) dce6_endpoint_rreg(rdev, (block), (reg))
struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv;
u32 offset = dig->afmt->offset;
- u32 id = dig->afmt->pin->id;
if (!dig->afmt->pin)
return;
- WREG32(AFMT_AUDIO_SRC_CONTROL + offset, AFMT_AUDIO_SRC_SELECT(id));
+ WREG32(AFMT_AUDIO_SRC_CONTROL + offset,
+ AFMT_AUDIO_SRC_SELECT(dig->afmt->pin->id));
}
void dce6_afmt_write_speaker_allocation(struct drm_encoder *encoder)
u8 *sadb;
int sad_count;
+ /* XXX: setting this register causes hangs on some asics */
+ return;
+
if (!dig->afmt->pin)
return;
rdev->config.evergreen.sx_max_export_size = 256;
rdev->config.evergreen.sx_max_export_pos_size = 64;
rdev->config.evergreen.sx_max_export_smx_size = 192;
- rdev->config.evergreen.max_hw_contexts = 8;
+ rdev->config.evergreen.max_hw_contexts = 4;
rdev->config.evergreen.sq_num_cf_insts = 2;
rdev->config.evergreen.sc_prim_fifo_size = 0x40;
u8 *sadb;
int sad_count;
+ /* XXX: setting this register causes hangs on some asics */
+ return;
+
list_for_each_entry(connector, &encoder->dev->mode_config.connector_list, head) {
if (connector->encoder == encoder)
radeon_connector = to_radeon_connector(connector);
/* fglrx clears sth in AFMT_AUDIO_PACKET_CONTROL2 here */
WREG32(HDMI_ACR_PACKET_CONTROL + offset,
- HDMI_ACR_AUTO_SEND | /* allow hw to sent ACR packets when required */
- HDMI_ACR_SOURCE); /* select SW CTS value */
+ HDMI_ACR_SOURCE | /* select SW CTS value */
+ HDMI_ACR_AUTO_SEND); /* allow hw to send ACR packets when required */
evergreen_hdmi_update_ACR(encoder, mode->clock);
* 6. COMMAND [29:22] | BYTE_COUNT [20:0]
*/
# define PACKET3_CP_DMA_DST_SEL(x) ((x) << 20)
- /* 0 - SRC_ADDR
+ /* 0 - DST_ADDR
* 1 - GDS
*/
# define PACKET3_CP_DMA_ENGINE(x) ((x) << 27)
# define PACKET3_CP_DMA_CP_SYNC (1 << 31)
/* COMMAND */
# define PACKET3_CP_DMA_DIS_WC (1 << 21)
-# define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 23)
+# define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 22)
/* 0 - none
* 1 - 8 in 16
* 2 - 8 in 32
static void kv_enable_new_levels(struct radeon_device *rdev);
static void kv_program_nbps_index_settings(struct radeon_device *rdev,
struct radeon_ps *new_rps);
+static int kv_set_enabled_level(struct radeon_device *rdev, u32 level);
static int kv_set_enabled_levels(struct radeon_device *rdev);
static int kv_force_dpm_highest(struct radeon_device *rdev);
static int kv_force_dpm_lowest(struct radeon_device *rdev);
static void kv_program_vc(struct radeon_device *rdev)
{
- WREG32_SMC(CG_FTV_0, 0x3FFFC000);
+ WREG32_SMC(CG_FTV_0, 0x3FFFC100);
}
static void kv_clear_vc(struct radeon_device *rdev)
static int kv_unforce_levels(struct radeon_device *rdev)
{
- return kv_notify_message_to_smu(rdev, PPSMC_MSG_NoForcedLevel);
+ if (rdev->family == CHIP_KABINI)
+ return kv_notify_message_to_smu(rdev, PPSMC_MSG_NoForcedLevel);
+ else
+ return kv_set_enabled_levels(rdev);
}
static int kv_update_sclk_t(struct radeon_device *rdev)
&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk;
if (table && table->count) {
- for (i = pi->graphics_dpm_level_count - 1; i >= 0; i--) {
- if ((table->entries[i].clk == pi->boot_pl.sclk) ||
- (i == 0))
+ for (i = pi->graphics_dpm_level_count - 1; i > 0; i--) {
+ if (table->entries[i].clk == pi->boot_pl.sclk)
break;
}
if (table->num_max_dpm_entries == 0)
return -EINVAL;
- for (i = pi->graphics_dpm_level_count - 1; i >= 0; i--) {
- if ((table->entries[i].sclk_frequency == pi->boot_pl.sclk) ||
- (i == 0))
+ for (i = pi->graphics_dpm_level_count - 1; i > 0; i--) {
+ if (table->entries[i].sclk_frequency == pi->boot_pl.sclk)
break;
}
PPSMC_MSG_EnableULV : PPSMC_MSG_DisableULV);
}
+static void kv_reset_acp_boot_level(struct radeon_device *rdev)
+{
+ struct kv_power_info *pi = kv_get_pi(rdev);
+
+ pi->acp_boot_level = 0xff;
+}
+
static void kv_update_current_ps(struct radeon_device *rdev,
struct radeon_ps *rps)
{
pi->requested_rps.ps_priv = &pi->requested_ps;
}
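+/* enable/disable bidirectional application power management (BAPM) via the SMC when it is in use */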
+void kv_dpm_enable_bapm(struct radeon_device *rdev, bool enable)
+{
+ struct kv_power_info *pi = kv_get_pi(rdev);
+ int ret;
+
+ if (pi->bapm_enable) {
+ ret = kv_smc_bapm_enable(rdev, enable);
+ if (ret)
+ DRM_ERROR("kv_smc_bapm_enable failed\n");
+ }
+}
+
int kv_dpm_enable(struct radeon_device *rdev)
{
struct kv_power_info *pi = kv_get_pi(rdev);
return ret;
}
+ kv_reset_acp_boot_level(rdev);
+
if (rdev->irq.installed &&
r600_is_internal_thermal_sensor(rdev->pm.int_thermal_type)) {
ret = kv_set_thermal_temperature_range(rdev, R600_TEMP_RANGE_MIN, R600_TEMP_RANGE_MAX);
radeon_irq_set(rdev);
}
+ ret = kv_smc_bapm_enable(rdev, false);
+ if (ret) {
+ DRM_ERROR("kv_smc_bapm_enable failed\n");
+ return ret;
+ }
+
/* powerdown unused blocks for now */
kv_dpm_powergate_acp(rdev, true);
kv_dpm_powergate_samu(rdev, true);
RADEON_CG_BLOCK_BIF |
RADEON_CG_BLOCK_HDP), false);
+ kv_smc_bapm_enable(rdev, false);
+
/* powerup blocks */
kv_dpm_powergate_acp(rdev, false);
kv_dpm_powergate_samu(rdev, false);
return kv_enable_samu_dpm(rdev, !gate);
}
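+/* pick the boot level index from the ACP clock/voltage dependency table */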
+static u8 kv_get_acp_boot_level(struct radeon_device *rdev)
+{
+ u8 i;
+ struct radeon_clock_voltage_dependency_table *table =
+ &rdev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table;
+
+ for (i = 0; i < table->count; i++) {
+ if (table->entries[i].clk >= 0) /* XXX */
+ break;
+ }
+
+ if (i >= table->count)
+ i = table->count - 1;
+
+ return i;
+}
+
+static void kv_update_acp_boot_level(struct radeon_device *rdev)
+{
+ struct kv_power_info *pi = kv_get_pi(rdev);
+ u8 acp_boot_level;
+
+ if (!pi->caps_stable_p_state) {
+ acp_boot_level = kv_get_acp_boot_level(rdev);
+ if (acp_boot_level != pi->acp_boot_level) {
+ pi->acp_boot_level = acp_boot_level;
+ kv_send_msg_to_smc_with_parameter(rdev,
+ PPSMC_MSG_ACPDPM_SetEnabledMask,
+ (1 << pi->acp_boot_level));
+ }
+ }
+}
+
static int kv_update_acp_dpm(struct radeon_device *rdev, bool gate)
{
struct kv_power_info *pi = kv_get_pi(rdev);
if (pi->caps_stable_p_state)
pi->acp_boot_level = table->count - 1;
else
- pi->acp_boot_level = 0;
+ pi->acp_boot_level = kv_get_acp_boot_level(rdev);
ret = kv_copy_bytes_to_smc(rdev,
pi->dpm_table_start +
}
}
- for (i = pi->graphics_dpm_level_count - 1; i >= 0; i--) {
- if ((table->entries[i].clk <= new_ps->levels[new_ps->num_levels -1].sclk) ||
- (i == 0)) {
- pi->highest_valid = i;
+ for (i = pi->graphics_dpm_level_count - 1; i > 0; i--) {
+ if (table->entries[i].clk <= new_ps->levels[new_ps->num_levels - 1].sclk)
break;
- }
}
+ pi->highest_valid = i;
if (pi->lowest_valid > pi->highest_valid) {
if ((new_ps->levels[0].sclk - table->entries[pi->highest_valid].clk) >
}
}
- for (i = pi->graphics_dpm_level_count - 1; i >= 0; i--) {
+ for (i = pi->graphics_dpm_level_count - 1; i > 0; i--) {
if (table->entries[i].sclk_frequency <=
- new_ps->levels[new_ps->num_levels - 1].sclk ||
- i == 0) {
- pi->highest_valid = i;
+ new_ps->levels[new_ps->num_levels - 1].sclk)
break;
- }
}
+ pi->highest_valid = i;
if (pi->lowest_valid > pi->highest_valid) {
if ((new_ps->levels[0].sclk -
RADEON_CG_BLOCK_BIF |
RADEON_CG_BLOCK_HDP), false);
+ if (pi->bapm_enable) {
+ ret = kv_smc_bapm_enable(rdev, rdev->pm.dpm.ac_power);
+ if (ret) {
+ DRM_ERROR("kv_smc_bapm_enable failed\n");
+ return ret;
+ }
+ }
+
if (rdev->family == CHIP_KABINI) {
if (pi->enable_dpm) {
kv_set_valid_clock_range(rdev, new_ps);
return ret;
}
#endif
+ kv_update_acp_boot_level(rdev);
kv_update_sclk_t(rdev);
kv_enable_nb_dpm(rdev);
}
RADEON_CG_BLOCK_BIF |
RADEON_CG_BLOCK_HDP), true);
- rdev->pm.dpm.forced_level = RADEON_DPM_FORCED_LEVEL_AUTO;
return 0;
}
void kv_dpm_reset_asic(struct radeon_device *rdev)
{
- kv_force_lowest_valid(rdev);
- kv_init_graphics_levels(rdev);
- kv_program_bootup_state(rdev);
- kv_upload_dpm_settings(rdev);
- kv_force_lowest_valid(rdev);
- kv_unforce_levels(rdev);
+ struct kv_power_info *pi = kv_get_pi(rdev);
+
+ if (rdev->family == CHIP_KABINI) {
+ kv_force_lowest_valid(rdev);
+ kv_init_graphics_levels(rdev);
+ kv_program_bootup_state(rdev);
+ kv_upload_dpm_settings(rdev);
+ kv_force_lowest_valid(rdev);
+ kv_unforce_levels(rdev);
+ } else {
+ kv_init_graphics_levels(rdev);
+ kv_program_bootup_state(rdev);
+ kv_freeze_sclk_dpm(rdev, true);
+ kv_upload_dpm_settings(rdev);
+ kv_freeze_sclk_dpm(rdev, false);
+ kv_set_enabled_level(rdev, pi->graphics_boot_level);
+ }
}
//XXX use sumo_dpm_display_configuration_changed
if (ret)
return ret;
- for (i = SMU7_MAX_LEVELS_GRAPHICS - 1; i >= 0; i--) {
+ for (i = SMU7_MAX_LEVELS_GRAPHICS - 1; i > 0; i--) {
if (enable_mask & (1 << i))
break;
}
- return kv_send_msg_to_smc_with_parameter(rdev, PPSMC_MSG_DPM_ForceState, i);
+ if (rdev->family == CHIP_KABINI)
+ return kv_send_msg_to_smc_with_parameter(rdev, PPSMC_MSG_DPM_ForceState, i);
+ else
+ return kv_set_enabled_level(rdev, i);
}
static int kv_force_dpm_lowest(struct radeon_device *rdev)
break;
}
- return kv_send_msg_to_smc_with_parameter(rdev, PPSMC_MSG_DPM_ForceState, i);
+ if (rdev->family == CHIP_KABINI)
+ return kv_send_msg_to_smc_with_parameter(rdev, PPSMC_MSG_DPM_ForceState, i);
+ else
+ return kv_set_enabled_level(rdev, i);
}
static u8 kv_get_sleep_divider_id_from_clock(struct radeon_device *rdev,
if (!pi->caps_sclk_ds)
return 0;
- for (i = KV_MAX_DEEPSLEEP_DIVIDER_ID; i <= 0; i--) {
+ for (i = KV_MAX_DEEPSLEEP_DIVIDER_ID; i > 0; i--) {
temp = sclk / sumo_get_sleep_divider_from_id(i);
- if ((temp >= min) || (i == 0))
+ if (temp >= min)
break;
}
ps->dpmx_nb_ps_lo = 0x1;
ps->dpmx_nb_ps_hi = 0x0;
} else {
- ps->dpm0_pg_nb_ps_lo = 0x1;
+ ps->dpm0_pg_nb_ps_lo = 0x3;
ps->dpm0_pg_nb_ps_hi = 0x0;
- ps->dpmx_nb_ps_lo = 0x2;
- ps->dpmx_nb_ps_hi = 0x1;
+ ps->dpmx_nb_ps_lo = 0x3;
+ ps->dpmx_nb_ps_hi = 0x0;
- if (pi->sys_info.nb_dpm_enable && pi->battery_state) {
+ if (pi->sys_info.nb_dpm_enable) {
force_high = (mclk >= pi->sys_info.nbp_memory_clock[3]) ||
pi->video_start || (rdev->pm.dpm.new_active_crtc_count >= 3) ||
pi->disable_nb_ps3_in_battery;
}
}
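+/* restrict the SMC's enabled SCLK DPM levels to the single specified level */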
+static int kv_set_enabled_level(struct radeon_device *rdev, u32 level)
+{
+ u32 new_mask = (1 << level);
+
+ return kv_send_msg_to_smc_with_parameter(rdev,
+ PPSMC_MSG_SCLKDPM_SetEnabledMask,
+ new_mask);
+}
+
static int kv_set_enabled_levels(struct radeon_device *rdev)
{
struct kv_power_info *pi = kv_get_pi(rdev);
pi->caps_sclk_ds = true;
pi->enable_auto_thermal_throttling = true;
pi->disable_nb_ps3_in_battery = false;
- pi->bapm_enable = true;
+ pi->bapm_enable = false;
pi->voltage_drop_t = 0;
pi->caps_sclk_throttle_low_notification = false;
pi->caps_fps = false; /* true? */
int kv_read_smc_sram_dword(struct radeon_device *rdev, u32 smc_address,
u32 *value, u32 limit);
int kv_smc_dpm_enable(struct radeon_device *rdev, bool enable);
+int kv_smc_bapm_enable(struct radeon_device *rdev, bool enable);
int kv_copy_bytes_to_smc(struct radeon_device *rdev,
u32 smc_start_address,
const u8 *src, u32 byte_count, u32 limit);
return kv_notify_message_to_smu(rdev, PPSMC_MSG_DPM_Disable);
}
+int kv_smc_bapm_enable(struct radeon_device *rdev, bool enable)
+{
+ if (enable)
+ return kv_notify_message_to_smu(rdev, PPSMC_MSG_EnableBAPM);
+ else
+ return kv_notify_message_to_smu(rdev, PPSMC_MSG_DisableBAPM);
+}
+
int kv_copy_bytes_to_smc(struct radeon_device *rdev,
u32 smc_start_address,
const u8 *src, u32 byte_count, u32 limit)
fw_name);
release_firmware(rdev->smc_fw);
rdev->smc_fw = NULL;
+ err = 0;
} else if (rdev->smc_fw->size != smc_req_size) {
printk(KERN_ERR
"ni_mc: Bogus length %zu in firmware \"%s\"\n",
bool disable_mclk_switching;
u32 mclk, sclk;
u16 vddc, vddci;
+ u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
int i;
if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
}
}
+ /* limit clocks to max supported clocks based on voltage dependency tables */
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
+ &max_sclk_vddc);
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
+ &max_mclk_vddci);
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
+ &max_mclk_vddc);
+
+ for (i = 0; i < ps->performance_level_count; i++) {
+ if (max_sclk_vddc) {
+ if (ps->performance_levels[i].sclk > max_sclk_vddc)
+ ps->performance_levels[i].sclk = max_sclk_vddc;
+ }
+ if (max_mclk_vddci) {
+ if (ps->performance_levels[i].mclk > max_mclk_vddci)
+ ps->performance_levels[i].mclk = max_mclk_vddci;
+ }
+ if (max_mclk_vddc) {
+ if (ps->performance_levels[i].mclk > max_mclk_vddc)
+ ps->performance_levels[i].mclk = max_mclk_vddc;
+ }
+ }
+
/* XXX validate the min clocks required for display */
if (disable_mclk_switching) {
return ret;
}
- ret = ni_dpm_force_performance_level(rdev, RADEON_DPM_FORCED_LEVEL_AUTO);
- if (ret) {
- DRM_ERROR("ni_dpm_force_performance_level failed\n");
- return ret;
- }
-
return 0;
}
#define PPSMC_MSG_VCEPowerON ((uint32_t) 0x10f)
#define PPSMC_MSG_DCE_RemoveVoltageAdjustment ((uint32_t) 0x11d)
#define PPSMC_MSG_DCE_AllowVoltageAdjustment ((uint32_t) 0x11e)
+#define PPSMC_MSG_EnableBAPM ((uint32_t) 0x120)
+#define PPSMC_MSG_DisableBAPM ((uint32_t) 0x121)
#define PPSMC_MSG_UVD_DPM_Config ((uint32_t) 0x124)
uint32_t r100_pll_rreg(struct radeon_device *rdev, uint32_t reg)
{
+ unsigned long flags;
uint32_t data;
+ spin_lock_irqsave(&rdev->pll_idx_lock, flags);
WREG8(RADEON_CLOCK_CNTL_INDEX, reg & 0x3f);
r100_pll_errata_after_index(rdev);
data = RREG32(RADEON_CLOCK_CNTL_DATA);
r100_pll_errata_after_data(rdev);
+ spin_unlock_irqrestore(&rdev->pll_idx_lock, flags);
return data;
}
void r100_pll_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->pll_idx_lock, flags);
WREG8(RADEON_CLOCK_CNTL_INDEX, ((reg & 0x3f) | RADEON_PLL_WR_EN));
r100_pll_errata_after_index(rdev);
WREG32(RADEON_CLOCK_CNTL_DATA, v);
r100_pll_errata_after_data(rdev);
+ spin_unlock_irqrestore(&rdev->pll_idx_lock, flags);
}
static void r100_set_safe_registers(struct radeon_device *rdev)
seq_printf(m, "CP_RB_RPTR 0x%08x\n", rdp);
seq_printf(m, "%u free dwords in ring\n", ring->ring_free_dw);
seq_printf(m, "%u dwords in ring\n", count);
- for (j = 0; j <= count; j++) {
- i = (rdp + j) & ring->ptr_mask;
- seq_printf(m, "r[%04d]=0x%08x\n", i, ring->ring[i]);
+ if (ring->ready) {
+ for (j = 0; j <= count; j++) {
+ i = (rdp + j) & ring->ptr_mask;
+ seq_printf(m, "r[%04d]=0x%08x\n", i, ring->ring[i]);
+ }
}
return 0;
}
u32 r420_mc_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(R_0001F8_MC_IND_INDEX, S_0001F8_MC_IND_ADDR(reg));
r = RREG32(R_0001FC_MC_IND_DATA);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
return r;
}
void r420_mc_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(R_0001F8_MC_IND_INDEX, S_0001F8_MC_IND_ADDR(reg) |
S_0001F8_MC_IND_WR_EN(1));
WREG32(R_0001FC_MC_IND_DATA, v);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
}
static void r420_debugfs(struct radeon_device *rdev)
return rdev->clock.spll.reference_freq;
}
+int r600_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk)
+{
+ return 0;
+}
+
/* get temperature in millidegrees */
int rv6xx_get_temp(struct radeon_device *rdev)
{
uint32_t rs780_mc_rreg(struct radeon_device *rdev, uint32_t reg)
{
+ unsigned long flags;
uint32_t r;
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(R_0028F8_MC_INDEX, S_0028F8_MC_IND_ADDR(reg));
r = RREG32(R_0028FC_MC_DATA);
WREG32(R_0028F8_MC_INDEX, ~C_0028F8_MC_IND_ADDR);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
return r;
}
void rs780_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(R_0028F8_MC_INDEX, S_0028F8_MC_IND_ADDR(reg) |
S_0028F8_MC_IND_WR_EN(1));
WREG32(R_0028FC_MC_DATA, v);
WREG32(R_0028F8_MC_INDEX, 0x7F);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
}
static void r600_mc_program(struct radeon_device *rdev)
*/
u32 r600_pciep_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->pciep_idx_lock, flags);
WREG32(PCIE_PORT_INDEX, ((reg) & 0xff));
(void)RREG32(PCIE_PORT_INDEX);
r = RREG32(PCIE_PORT_DATA);
+ spin_unlock_irqrestore(&rdev->pciep_idx_lock, flags);
return r;
}
void r600_pciep_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->pciep_idx_lock, flags);
WREG32(PCIE_PORT_INDEX, ((reg) & 0xff));
(void)RREG32(PCIE_PORT_INDEX);
WREG32(PCIE_PORT_DATA, (v));
(void)RREG32(PCIE_PORT_DATA);
+ spin_unlock_irqrestore(&rdev->pciep_idx_lock, flags);
}
/*
fw_name);
release_firmware(rdev->smc_fw);
rdev->smc_fw = NULL;
+ err = 0;
} else if (rdev->smc_fw->size != smc_req_size) {
printk(KERN_ERR
"smc: Bogus length %zu in firmware \"%s\"\n",
rdev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
rdev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
- le16_to_cpu(limits->entries[i].usVoltage);
+ le16_to_cpu(entry->usVoltage);
entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
}
void r600_free_extended_power_table(struct radeon_device *rdev)
{
- if (rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk.entries)
- kfree(rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk.entries);
- if (rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk.entries)
- kfree(rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk.entries);
- if (rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk.entries)
- kfree(rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk.entries);
- if (rdev->pm.dpm.dyn_state.mvdd_dependency_on_mclk.entries)
- kfree(rdev->pm.dpm.dyn_state.mvdd_dependency_on_mclk.entries);
- if (rdev->pm.dpm.dyn_state.cac_leakage_table.entries)
- kfree(rdev->pm.dpm.dyn_state.cac_leakage_table.entries);
- if (rdev->pm.dpm.dyn_state.phase_shedding_limits_table.entries)
- kfree(rdev->pm.dpm.dyn_state.phase_shedding_limits_table.entries);
- if (rdev->pm.dpm.dyn_state.ppm_table)
- kfree(rdev->pm.dpm.dyn_state.ppm_table);
- if (rdev->pm.dpm.dyn_state.cac_tdp_table)
- kfree(rdev->pm.dpm.dyn_state.cac_tdp_table);
- if (rdev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries)
- kfree(rdev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table.entries);
- if (rdev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries)
- kfree(rdev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries);
- if (rdev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries)
- kfree(rdev->pm.dpm.dyn_state.samu_clock_voltage_dependency_table.entries);
- if (rdev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries)
- kfree(rdev->pm.dpm.dyn_state.acp_clock_voltage_dependency_table.entries);
+ struct radeon_dpm_dynamic_state *dyn_state = &rdev->pm.dpm.dyn_state;
+
+ kfree(dyn_state->vddc_dependency_on_sclk.entries);
+ kfree(dyn_state->vddci_dependency_on_mclk.entries);
+ kfree(dyn_state->vddc_dependency_on_mclk.entries);
+ kfree(dyn_state->mvdd_dependency_on_mclk.entries);
+ kfree(dyn_state->cac_leakage_table.entries);
+ kfree(dyn_state->phase_shedding_limits_table.entries);
+ kfree(dyn_state->ppm_table);
+ kfree(dyn_state->cac_tdp_table);
+ kfree(dyn_state->vce_clock_voltage_dependency_table.entries);
+ kfree(dyn_state->uvd_clock_voltage_dependency_table.entries);
+ kfree(dyn_state->samu_clock_voltage_dependency_table.entries);
+ kfree(dyn_state->acp_clock_voltage_dependency_table.entries);
}
enum radeon_pcie_gen r600_get_pcie_gen_support(struct radeon_device *rdev,
static const struct radeon_hdmi_acr r600_hdmi_predefined_acr[] = {
/* 32kHz 44.1kHz 48kHz */
/* Clock N CTS N CTS N CTS */
- { 25174, 4576, 28125, 7007, 31250, 6864, 28125 }, /* 25,20/1.001 MHz */
+ { 25175, 4576, 28125, 7007, 31250, 6864, 28125 }, /* 25.20/1.001 MHz */
{ 25200, 4096, 25200, 6272, 28000, 6144, 25200 }, /* 25.20 MHz */
{ 27000, 4096, 27000, 6272, 30000, 6144, 27000 }, /* 27.00 MHz */
{ 27027, 4096, 27027, 6272, 30030, 6144, 27027 }, /* 27.00*1.001 MHz */
{ 54000, 4096, 54000, 6272, 60000, 6144, 54000 }, /* 54.00 MHz */
{ 54054, 4096, 54054, 6272, 60060, 6144, 54054 }, /* 54.00*1.001 MHz */
- { 74175, 11648, 210937, 17836, 234375, 11648, 140625 }, /* 74.25/1.001 MHz */
+ { 74176, 11648, 210937, 17836, 234375, 11648, 140625 }, /* 74.25/1.001 MHz */
{ 74250, 4096, 74250, 6272, 82500, 6144, 74250 }, /* 74.25 MHz */
- { 148351, 11648, 421875, 8918, 234375, 5824, 140625 }, /* 148.50/1.001 MHz */
+ { 148352, 11648, 421875, 8918, 234375, 5824, 140625 }, /* 148.50/1.001 MHz */
{ 148500, 4096, 148500, 6272, 165000, 6144, 148500 }, /* 148.50 MHz */
{ 0, 4096, 0, 6272, 0, 6144, 0 } /* Other */
};
*/
static void r600_hdmi_calc_cts(uint32_t clock, int *CTS, int N, int freq)
{
- if (*CTS == 0)
- *CTS = clock * N / (128 * freq) * 1000;
+ u64 n;
+ u32 d;
+
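+ /* 64-bit math: CTS = (clock * N * 1000) / (128 * freq), avoids 32-bit overflow */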
+ if (*CTS == 0) {
+ n = (u64)clock * (u64)N * 1000ULL;
+ d = 128 * freq;
+ do_div(n, d);
+ *CTS = n;
+ }
DRM_DEBUG("Using ACR timing N=%d CTS=%d for frequency %d\n",
N, *CTS, freq);
}
* number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE
* is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator
*/
- if (ASIC_IS_DCE3(rdev)) {
- /* according to the reg specs, this should DCE3.2 only, but in
- * practice it seems to cover DCE3.0 as well.
- */
+ if (ASIC_IS_DCE32(rdev)) {
if (dig->dig_encoder == 0) {
dto_cntl = RREG32(DCCG_AUDIO_DTO0_CNTL) & ~DCCG_AUDIO_DTO_WALLCLOCK_RATIO_MASK;
dto_cntl |= DCCG_AUDIO_DTO_WALLCLOCK_RATIO(wallclock_ratio);
WREG32(DCCG_AUDIO_DTO1_MODULE, dto_modulo);
WREG32(DCCG_AUDIO_DTO_SELECT, 1); /* select DTO1 */
}
+ } else if (ASIC_IS_DCE3(rdev)) {
+ /* according to the reg specs, this should be DCE3.2 only, but in
+ * practice it seems to cover DCE3.0/3.1 as well.
+ */
+ if (dig->dig_encoder == 0) {
+ WREG32(DCCG_AUDIO_DTO0_PHASE, base_rate * 100);
+ WREG32(DCCG_AUDIO_DTO0_MODULE, clock * 100);
+ WREG32(DCCG_AUDIO_DTO_SELECT, 0); /* select DTO0 */
+ } else {
+ WREG32(DCCG_AUDIO_DTO1_PHASE, base_rate * 100);
+ WREG32(DCCG_AUDIO_DTO1_MODULE, clock * 100);
+ WREG32(DCCG_AUDIO_DTO_SELECT, 1); /* select DTO1 */
+ }
} else {
- /* according to the reg specs, this should be DCE2.0 and DCE3.0 */
+ /* according to the reg specs, this should be DCE2.0 and DCE3.0/3.1 */
WREG32(AUDIO_DTO, AUDIO_DTO_PHASE(base_rate / 10) |
AUDIO_DTO_MODULE(clock / 10));
}
u8 *sadb;
int sad_count;
+ /* XXX: setting this register causes hangs on some asics */
+ return;
+
list_for_each_entry(connector, &encoder->dev->mode_config.connector_list, head) {
if (connector->encoder == encoder)
radeon_connector = to_radeon_connector(connector);
}
WREG32(HDMI0_ACR_PACKET_CONTROL + offset,
- HDMI0_ACR_AUTO_SEND | /* allow hw to sent ACR packets when required */
- HDMI0_ACR_SOURCE); /* select SW CTS value */
+ HDMI0_ACR_SOURCE | /* select SW CTS value - XXX verify that hw CTS works on all families */
+ HDMI0_ACR_AUTO_SEND); /* allow hw to send ACR packets when required */
WREG32(HDMI0_VBI_PACKET_CONTROL + offset,
HDMI0_NULL_SEND | /* send null packets when required */
# define HDMI0_AVI_INFO_CONT (1 << 1)
# define HDMI0_AUDIO_INFO_SEND (1 << 4)
# define HDMI0_AUDIO_INFO_CONT (1 << 5)
-# define HDMI0_AUDIO_INFO_SOURCE (1 << 6) /* 0 - sound block; 1 - hmdi regs */
+# define HDMI0_AUDIO_INFO_SOURCE (1 << 6) /* 0 - sound block; 1 - hdmi regs */
# define HDMI0_AUDIO_INFO_UPDATE (1 << 7)
# define HDMI0_MPEG_INFO_SEND (1 << 8)
# define HDMI0_MPEG_INFO_CONT (1 << 9)
*/
# define PACKET3_CP_DMA_CP_SYNC (1 << 31)
/* COMMAND */
-# define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 23)
+# define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 22)
/* 0 - none
* 1 - 8 in 16
* 2 - 8 in 32
#define RADEON_CG_SUPPORT_HDP_MGCG (1 << 16)
/* PG flags */
-#define RADEON_PG_SUPPORT_GFX_CG (1 << 0)
+#define RADEON_PG_SUPPORT_GFX_PG (1 << 0)
#define RADEON_PG_SUPPORT_GFX_SMG (1 << 1)
#define RADEON_PG_SUPPORT_GFX_DMG (1 << 2)
#define RADEON_PG_SUPPORT_UVD (1 << 3)
struct radeon_clock_and_voltage_limits {
u32 sclk;
u32 mclk;
- u32 vddc;
- u32 vddci;
+ u16 vddc;
+ u16 vddci;
};
struct radeon_clock_array {
int (*force_performance_level)(struct radeon_device *rdev, enum radeon_dpm_forced_level level);
bool (*vblank_too_short)(struct radeon_device *rdev);
void (*powergate_uvd)(struct radeon_device *rdev, bool gate);
+ void (*enable_bapm)(struct radeon_device *rdev, bool enable);
} dpm;
/* pageflipping */
struct {
resource_size_t rmmio_size;
/* protects concurrent MM_INDEX/DATA based register access */
spinlock_t mmio_idx_lock;
+ /* protects concurrent SMC based register access */
+ spinlock_t smc_idx_lock;
+ /* protects concurrent PLL register access */
+ spinlock_t pll_idx_lock;
+ /* protects concurrent MC register access */
+ spinlock_t mc_idx_lock;
+ /* protects concurrent PCIE register access */
+ spinlock_t pcie_idx_lock;
+ /* protects concurrent PCIE_PORT register access */
+ spinlock_t pciep_idx_lock;
+ /* protects concurrent PIF register access */
+ spinlock_t pif_idx_lock;
+ /* protects concurrent CG register access */
+ spinlock_t cg_idx_lock;
+ /* protects concurrent UVD register access */
+ spinlock_t uvd_idx_lock;
+ /* protects concurrent RCU register access */
+ spinlock_t rcu_idx_lock;
+ /* protects concurrent DIDT register access */
+ spinlock_t didt_idx_lock;
+ /* protects concurrent ENDPOINT (audio) register access */
+ spinlock_t end_idx_lock;
void __iomem *rmmio;
radeon_rreg_t mc_rreg;
radeon_wreg_t mc_wreg;
*/
static inline uint32_t rv370_pcie_rreg(struct radeon_device *rdev, uint32_t reg)
{
+ unsigned long flags;
uint32_t r;
+ spin_lock_irqsave(&rdev->pcie_idx_lock, flags);
WREG32(RADEON_PCIE_INDEX, ((reg) & rdev->pcie_reg_mask));
r = RREG32(RADEON_PCIE_DATA);
+ spin_unlock_irqrestore(&rdev->pcie_idx_lock, flags);
return r;
}
static inline void rv370_pcie_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->pcie_idx_lock, flags);
WREG32(RADEON_PCIE_INDEX, ((reg) & rdev->pcie_reg_mask));
WREG32(RADEON_PCIE_DATA, (v));
+ spin_unlock_irqrestore(&rdev->pcie_idx_lock, flags);
}
static inline u32 tn_smc_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
WREG32(TN_SMC_IND_INDEX_0, (reg));
r = RREG32(TN_SMC_IND_DATA_0);
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
return r;
}
static inline void tn_smc_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
WREG32(TN_SMC_IND_INDEX_0, (reg));
WREG32(TN_SMC_IND_DATA_0, (v));
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
}
static inline u32 r600_rcu_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->rcu_idx_lock, flags);
WREG32(R600_RCU_INDEX, ((reg) & 0x1fff));
r = RREG32(R600_RCU_DATA);
+ spin_unlock_irqrestore(&rdev->rcu_idx_lock, flags);
return r;
}
static inline void r600_rcu_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->rcu_idx_lock, flags);
WREG32(R600_RCU_INDEX, ((reg) & 0x1fff));
WREG32(R600_RCU_DATA, (v));
+ spin_unlock_irqrestore(&rdev->rcu_idx_lock, flags);
}
static inline u32 eg_cg_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->cg_idx_lock, flags);
WREG32(EVERGREEN_CG_IND_ADDR, ((reg) & 0xffff));
r = RREG32(EVERGREEN_CG_IND_DATA);
+ spin_unlock_irqrestore(&rdev->cg_idx_lock, flags);
return r;
}
static inline void eg_cg_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->cg_idx_lock, flags);
WREG32(EVERGREEN_CG_IND_ADDR, ((reg) & 0xffff));
WREG32(EVERGREEN_CG_IND_DATA, (v));
+ spin_unlock_irqrestore(&rdev->cg_idx_lock, flags);
}
static inline u32 eg_pif_phy0_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->pif_idx_lock, flags);
WREG32(EVERGREEN_PIF_PHY0_INDEX, ((reg) & 0xffff));
r = RREG32(EVERGREEN_PIF_PHY0_DATA);
+ spin_unlock_irqrestore(&rdev->pif_idx_lock, flags);
return r;
}
static inline void eg_pif_phy0_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->pif_idx_lock, flags);
WREG32(EVERGREEN_PIF_PHY0_INDEX, ((reg) & 0xffff));
WREG32(EVERGREEN_PIF_PHY0_DATA, (v));
+ spin_unlock_irqrestore(&rdev->pif_idx_lock, flags);
}
static inline u32 eg_pif_phy1_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->pif_idx_lock, flags);
WREG32(EVERGREEN_PIF_PHY1_INDEX, ((reg) & 0xffff));
r = RREG32(EVERGREEN_PIF_PHY1_DATA);
+ spin_unlock_irqrestore(&rdev->pif_idx_lock, flags);
return r;
}
static inline void eg_pif_phy1_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->pif_idx_lock, flags);
WREG32(EVERGREEN_PIF_PHY1_INDEX, ((reg) & 0xffff));
WREG32(EVERGREEN_PIF_PHY1_DATA, (v));
+ spin_unlock_irqrestore(&rdev->pif_idx_lock, flags);
}
static inline u32 r600_uvd_ctx_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->uvd_idx_lock, flags);
WREG32(R600_UVD_CTX_INDEX, ((reg) & 0x1ff));
r = RREG32(R600_UVD_CTX_DATA);
+ spin_unlock_irqrestore(&rdev->uvd_idx_lock, flags);
return r;
}
static inline void r600_uvd_ctx_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->uvd_idx_lock, flags);
WREG32(R600_UVD_CTX_INDEX, ((reg) & 0x1ff));
WREG32(R600_UVD_CTX_DATA, (v));
+ spin_unlock_irqrestore(&rdev->uvd_idx_lock, flags);
}
static inline u32 cik_didt_rreg(struct radeon_device *rdev, u32 reg)
{
+ unsigned long flags;
u32 r;
+ spin_lock_irqsave(&rdev->didt_idx_lock, flags);
WREG32(CIK_DIDT_IND_INDEX, (reg));
r = RREG32(CIK_DIDT_IND_DATA);
+ spin_unlock_irqrestore(&rdev->didt_idx_lock, flags);
return r;
}
static inline void cik_didt_wreg(struct radeon_device *rdev, u32 reg, u32 v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->didt_idx_lock, flags);
WREG32(CIK_DIDT_IND_INDEX, (reg));
WREG32(CIK_DIDT_IND_DATA, (v));
+ spin_unlock_irqrestore(&rdev->didt_idx_lock, flags);
}
void r100_pll_errata_after_index(struct radeon_device *rdev);
#define radeon_dpm_force_performance_level(rdev, l) rdev->asic->dpm.force_performance_level((rdev), (l))
#define radeon_dpm_vblank_too_short(rdev) rdev->asic->dpm.vblank_too_short((rdev))
#define radeon_dpm_powergate_uvd(rdev, g) rdev->asic->dpm.powergate_uvd((rdev), (g))
+#define radeon_dpm_enable_bapm(rdev, e) rdev->asic->dpm.enable_bapm((rdev), (e))
/* Common functions */
/* AGP */
.wait_for_vblank = &avivo_wait_for_vblank,
.set_backlight_level = &atombios_set_backlight_level,
.get_backlight_level = &atombios_get_backlight_level,
+ .hdmi_enable = &r600_hdmi_enable,
+ .hdmi_setmode = &r600_hdmi_setmode,
},
.copy = {
.blit = &r600_copy_cpdma,
.set_pcie_lanes = &r600_set_pcie_lanes,
.set_clock_gating = NULL,
.get_temperature = &rv6xx_get_temp,
+ .set_uvd_clocks = &r600_set_uvd_clocks,
},
.dpm = {
.init = &rv6xx_dpm_init,
.set_pcie_lanes = NULL,
.set_clock_gating = NULL,
.get_temperature = &rv6xx_get_temp,
+ .set_uvd_clocks = &r600_set_uvd_clocks,
},
.dpm = {
.init = &rs780_dpm_init,
.get_mclk = &rs780_dpm_get_mclk,
.print_power_state = &rs780_dpm_print_power_state,
.debugfs_print_current_performance_level = &rs780_dpm_debugfs_print_current_performance_level,
+ .force_performance_level = &rs780_dpm_force_performance_level,
},
.pflip = {
.pre_page_flip = &rs600_pre_page_flip,
.print_power_state = &trinity_dpm_print_power_state,
.debugfs_print_current_performance_level = &trinity_dpm_debugfs_print_current_performance_level,
.force_performance_level = &trinity_dpm_force_performance_level,
+ .enable_bapm = &trinity_dpm_enable_bapm,
},
.pflip = {
.pre_page_flip = &evergreen_pre_page_flip,
.debugfs_print_current_performance_level = &kv_dpm_debugfs_print_current_performance_level,
.force_performance_level = &kv_dpm_force_performance_level,
.powergate_uvd = &kv_dpm_powergate_uvd,
+ .enable_bapm = &kv_dpm_enable_bapm,
},
.pflip = {
.pre_page_flip = &evergreen_pre_page_flip,
RADEON_CG_SUPPORT_HDP_LS |
RADEON_CG_SUPPORT_HDP_MGCG;
rdev->pg_flags = 0 |
- /*RADEON_PG_SUPPORT_GFX_CG | */
+ /*RADEON_PG_SUPPORT_GFX_PG | */
RADEON_PG_SUPPORT_SDMA;
break;
case CHIP_OLAND:
RADEON_CG_SUPPORT_HDP_LS |
RADEON_CG_SUPPORT_HDP_MGCG;
rdev->pg_flags = 0;
- /*RADEON_PG_SUPPORT_GFX_CG |
+ /*RADEON_PG_SUPPORT_GFX_PG |
RADEON_PG_SUPPORT_GFX_SMG |
RADEON_PG_SUPPORT_GFX_DMG |
RADEON_PG_SUPPORT_UVD |
RADEON_CG_SUPPORT_HDP_LS |
RADEON_CG_SUPPORT_HDP_MGCG;
rdev->pg_flags = 0;
- /*RADEON_PG_SUPPORT_GFX_CG |
+ /*RADEON_PG_SUPPORT_GFX_PG |
RADEON_PG_SUPPORT_GFX_SMG |
RADEON_PG_SUPPORT_UVD |
RADEON_PG_SUPPORT_VCE |
u32 r600_get_xclk(struct radeon_device *rdev);
uint64_t r600_get_gpu_clock_counter(struct radeon_device *rdev);
int rv6xx_get_temp(struct radeon_device *rdev);
+int r600_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk);
int r600_dpm_pre_set_power_state(struct radeon_device *rdev);
void r600_dpm_post_set_power_state(struct radeon_device *rdev);
/* r600 dma */
struct radeon_ps *ps);
void rs780_dpm_debugfs_print_current_performance_level(struct radeon_device *rdev,
struct seq_file *m);
+int rs780_dpm_force_performance_level(struct radeon_device *rdev,
+ enum radeon_dpm_forced_level level);
/*
* rv770,rv730,rv710,rv740
struct seq_file *m);
int trinity_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
+void trinity_dpm_enable_bapm(struct radeon_device *rdev, bool enable);
/* DCE6 - SI */
void dce6_bandwidth_update(struct radeon_device *rdev);
int kv_dpm_force_performance_level(struct radeon_device *rdev,
enum radeon_dpm_forced_level level);
void kv_dpm_powergate_uvd(struct radeon_device *rdev, bool gate);
+void kv_dpm_enable_bapm(struct radeon_device *rdev, bool enable);
/* uvd v1.0 */
uint32_t uvd_v1_0_get_rptr(struct radeon_device *rdev,
int index = GetIndexIntoMasterTable(DATA, PPLL_SS_Info);
uint16_t data_offset, size;
struct _ATOM_SPREAD_SPECTRUM_INFO *ss_info;
+ struct _ATOM_SPREAD_SPECTRUM_ASSIGNMENT *ss_assign;
uint8_t frev, crev;
int i, num_indices;
num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) /
sizeof(ATOM_SPREAD_SPECTRUM_ASSIGNMENT);
-
+ ss_assign = (struct _ATOM_SPREAD_SPECTRUM_ASSIGNMENT*)
+ ((u8 *)&ss_info->asSS_Info[0]);
for (i = 0; i < num_indices; i++) {
- if (ss_info->asSS_Info[i].ucSS_Id == id) {
+ if (ss_assign->ucSS_Id == id) {
ss->percentage =
- le16_to_cpu(ss_info->asSS_Info[i].usSpreadSpectrumPercentage);
- ss->type = ss_info->asSS_Info[i].ucSpreadSpectrumType;
- ss->step = ss_info->asSS_Info[i].ucSS_Step;
- ss->delay = ss_info->asSS_Info[i].ucSS_Delay;
- ss->range = ss_info->asSS_Info[i].ucSS_Range;
- ss->refdiv = ss_info->asSS_Info[i].ucRecommendedRef_Div;
+ le16_to_cpu(ss_assign->usSpreadSpectrumPercentage);
+ ss->type = ss_assign->ucSpreadSpectrumType;
+ ss->step = ss_assign->ucSS_Step;
+ ss->delay = ss_assign->ucSS_Delay;
+ ss->range = ss_assign->ucSS_Range;
+ ss->refdiv = ss_assign->ucRecommendedRef_Div;
return true;
}
+ ss_assign = (struct _ATOM_SPREAD_SPECTRUM_ASSIGNMENT*)
+ ((u8 *)ss_assign + sizeof(struct _ATOM_SPREAD_SPECTRUM_ASSIGNMENT));
}
}
return false;
struct _ATOM_ASIC_INTERNAL_SS_INFO_V3 info_3;
};
+union asic_ss_assignment {
+ struct _ATOM_ASIC_SS_ASSIGNMENT v1;
+ struct _ATOM_ASIC_SS_ASSIGNMENT_V2 v2;
+ struct _ATOM_ASIC_SS_ASSIGNMENT_V3 v3;
+};
+
bool radeon_atombios_get_asic_ss_info(struct radeon_device *rdev,
struct radeon_atom_ss *ss,
int id, u32 clock)
int index = GetIndexIntoMasterTable(DATA, ASIC_InternalSS_Info);
uint16_t data_offset, size;
union asic_ss_info *ss_info;
+ union asic_ss_assignment *ss_assign;
uint8_t frev, crev;
int i, num_indices;
num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) /
sizeof(ATOM_ASIC_SS_ASSIGNMENT);
+ ss_assign = (union asic_ss_assignment *)((u8 *)&ss_info->info.asSpreadSpectrum[0]);
for (i = 0; i < num_indices; i++) {
- if ((ss_info->info.asSpreadSpectrum[i].ucClockIndication == id) &&
- (clock <= le32_to_cpu(ss_info->info.asSpreadSpectrum[i].ulTargetClockRange))) {
+ if ((ss_assign->v1.ucClockIndication == id) &&
+ (clock <= le32_to_cpu(ss_assign->v1.ulTargetClockRange))) {
ss->percentage =
- le16_to_cpu(ss_info->info.asSpreadSpectrum[i].usSpreadSpectrumPercentage);
- ss->type = ss_info->info.asSpreadSpectrum[i].ucSpreadSpectrumMode;
- ss->rate = le16_to_cpu(ss_info->info.asSpreadSpectrum[i].usSpreadRateInKhz);
+ le16_to_cpu(ss_assign->v1.usSpreadSpectrumPercentage);
+ ss->type = ss_assign->v1.ucSpreadSpectrumMode;
+ ss->rate = le16_to_cpu(ss_assign->v1.usSpreadRateInKhz);
return true;
}
+ ss_assign = (union asic_ss_assignment *)
+ ((u8 *)ss_assign + sizeof(ATOM_ASIC_SS_ASSIGNMENT));
}
break;
case 2:
num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) /
sizeof(ATOM_ASIC_SS_ASSIGNMENT_V2);
+ ss_assign = (union asic_ss_assignment *)((u8 *)&ss_info->info_2.asSpreadSpectrum[0]);
for (i = 0; i < num_indices; i++) {
- if ((ss_info->info_2.asSpreadSpectrum[i].ucClockIndication == id) &&
- (clock <= le32_to_cpu(ss_info->info_2.asSpreadSpectrum[i].ulTargetClockRange))) {
+ if ((ss_assign->v2.ucClockIndication == id) &&
+ (clock <= le32_to_cpu(ss_assign->v2.ulTargetClockRange))) {
ss->percentage =
- le16_to_cpu(ss_info->info_2.asSpreadSpectrum[i].usSpreadSpectrumPercentage);
- ss->type = ss_info->info_2.asSpreadSpectrum[i].ucSpreadSpectrumMode;
- ss->rate = le16_to_cpu(ss_info->info_2.asSpreadSpectrum[i].usSpreadRateIn10Hz);
+ le16_to_cpu(ss_assign->v2.usSpreadSpectrumPercentage);
+ ss->type = ss_assign->v2.ucSpreadSpectrumMode;
+ ss->rate = le16_to_cpu(ss_assign->v2.usSpreadRateIn10Hz);
if ((crev == 2) &&
((id == ASIC_INTERNAL_ENGINE_SS) ||
(id == ASIC_INTERNAL_MEMORY_SS)))
ss->rate /= 100;
return true;
}
+ ss_assign = (union asic_ss_assignment *)
+ ((u8 *)ss_assign + sizeof(ATOM_ASIC_SS_ASSIGNMENT_V2));
}
break;
case 3:
num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) /
sizeof(ATOM_ASIC_SS_ASSIGNMENT_V3);
+ ss_assign = (union asic_ss_assignment *)((u8 *)&ss_info->info_3.asSpreadSpectrum[0]);
for (i = 0; i < num_indices; i++) {
- if ((ss_info->info_3.asSpreadSpectrum[i].ucClockIndication == id) &&
- (clock <= le32_to_cpu(ss_info->info_3.asSpreadSpectrum[i].ulTargetClockRange))) {
+ if ((ss_assign->v3.ucClockIndication == id) &&
+ (clock <= le32_to_cpu(ss_assign->v3.ulTargetClockRange))) {
ss->percentage =
- le16_to_cpu(ss_info->info_3.asSpreadSpectrum[i].usSpreadSpectrumPercentage);
- ss->type = ss_info->info_3.asSpreadSpectrum[i].ucSpreadSpectrumMode;
- ss->rate = le16_to_cpu(ss_info->info_3.asSpreadSpectrum[i].usSpreadRateIn10Hz);
+ le16_to_cpu(ss_assign->v3.usSpreadSpectrumPercentage);
+ ss->type = ss_assign->v3.ucSpreadSpectrumMode;
+ ss->rate = le16_to_cpu(ss_assign->v3.usSpreadRateIn10Hz);
if ((id == ASIC_INTERNAL_ENGINE_SS) ||
(id == ASIC_INTERNAL_MEMORY_SS))
ss->rate /= 100;
radeon_atombios_get_igp_ss_overrides(rdev, ss, id);
return true;
}
+ ss_assign = (union asic_ss_assignment *)
+ ((u8 *)ss_assign + sizeof(ATOM_ASIC_SS_ASSIGNMENT_V3));
}
break;
default:
}
}
+ if (property == rdev->mode_info.audio_property) {
+ struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ /* need to find digital encoder on connector */
+ encoder = radeon_find_encoder(connector, DRM_MODE_ENCODER_TMDS);
+ if (!encoder)
+ return 0;
+
+ radeon_encoder = to_radeon_encoder(encoder);
+
+ if (radeon_connector->audio != val) {
+ radeon_connector->audio = val;
+ radeon_property_change_mode(&radeon_encoder->base);
+ }
+ }
+
if (property == rdev->mode_info.underscan_property) {
/* need to find digital encoder on connector */
encoder = radeon_find_encoder(connector, DRM_MODE_ENCODER_TMDS);
if (radeon_dp_getdpcd(radeon_connector))
ret = connector_status_connected;
} else {
- /* try non-aux ddc (DP to DVI/HMDI/etc. adapter) */
+ /* try non-aux ddc (DP to DVI/HDMI/etc. adapter) */
if (radeon_ddc_probe(radeon_connector, false))
ret = connector_status_connected;
}
.force = radeon_dvi_force,
};
+static const struct drm_connector_funcs radeon_edp_connector_funcs = {
+ .dpms = drm_helper_connector_dpms,
+ .detect = radeon_dp_detect,
+ .fill_modes = drm_helper_probe_single_connector_modes,
+ .set_property = radeon_lvds_set_property,
+ .destroy = radeon_dp_connector_destroy,
+ .force = radeon_dvi_force,
+};
+
+static const struct drm_connector_funcs radeon_lvds_bridge_connector_funcs = {
+ .dpms = drm_helper_connector_dpms,
+ .detect = radeon_dp_detect,
+ .fill_modes = drm_helper_probe_single_connector_modes,
+ .set_property = radeon_lvds_set_property,
+ .destroy = radeon_dp_connector_destroy,
+ .force = radeon_dvi_force,
+};
+
void
radeon_add_atom_connector(struct drm_device *dev,
uint32_t connector_id,
goto failed;
radeon_dig_connector->igp_lane_info = igp_lane_info;
radeon_connector->con_priv = radeon_dig_connector;
- drm_connector_init(dev, &radeon_connector->base, &radeon_dp_connector_funcs, connector_type);
- drm_connector_helper_add(&radeon_connector->base, &radeon_dp_connector_helper_funcs);
if (i2c_bus->valid) {
/* add DP i2c bus */
if (connector_type == DRM_MODE_CONNECTOR_eDP)
case DRM_MODE_CONNECTOR_VGA:
case DRM_MODE_CONNECTOR_DVIA:
default:
+ drm_connector_init(dev, &radeon_connector->base,
+ &radeon_dp_connector_funcs, connector_type);
+ drm_connector_helper_add(&radeon_connector->base,
+ &radeon_dp_connector_helper_funcs);
connector->interlace_allowed = true;
connector->doublescan_allowed = true;
radeon_connector->dac_load_detect = true;
case DRM_MODE_CONNECTOR_HDMIA:
case DRM_MODE_CONNECTOR_HDMIB:
case DRM_MODE_CONNECTOR_DisplayPort:
+ drm_connector_init(dev, &radeon_connector->base,
+ &radeon_dp_connector_funcs, connector_type);
+ drm_connector_helper_add(&radeon_connector->base,
+ &radeon_dp_connector_helper_funcs);
drm_object_attach_property(&radeon_connector->base.base,
rdev->mode_info.underscan_property,
UNDERSCAN_OFF);
drm_object_attach_property(&radeon_connector->base.base,
rdev->mode_info.underscan_vborder_property,
0);
+ if (radeon_audio != 0)
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.audio_property,
+ (radeon_audio == 1) ?
+ RADEON_AUDIO_AUTO :
+ RADEON_AUDIO_DISABLE);
subpixel_order = SubPixelHorizontalRGB;
connector->interlace_allowed = true;
if (connector_type == DRM_MODE_CONNECTOR_HDMIB)
break;
case DRM_MODE_CONNECTOR_LVDS:
case DRM_MODE_CONNECTOR_eDP:
+ drm_connector_init(dev, &radeon_connector->base,
+ &radeon_lvds_bridge_connector_funcs, connector_type);
+ drm_connector_helper_add(&radeon_connector->base,
+ &radeon_dp_connector_helper_funcs);
drm_object_attach_property(&radeon_connector->base.base,
dev->mode_config.scaling_mode_property,
DRM_MODE_SCALE_FULLSCREEN);
rdev->mode_info.underscan_vborder_property,
0);
}
+ if (ASIC_IS_DCE2(rdev) && (radeon_audio != 0)) {
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.audio_property,
+ (radeon_audio == 1) ?
+ RADEON_AUDIO_AUTO :
+ RADEON_AUDIO_DISABLE);
+ }
if (connector_type == DRM_MODE_CONNECTOR_DVII) {
radeon_connector->dac_load_detect = true;
drm_object_attach_property(&radeon_connector->base.base,
rdev->mode_info.underscan_vborder_property,
0);
}
+ if (ASIC_IS_DCE2(rdev) && (radeon_audio != 0)) {
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.audio_property,
+ (radeon_audio == 1) ?
+ RADEON_AUDIO_AUTO :
+ RADEON_AUDIO_DISABLE);
+ }
subpixel_order = SubPixelHorizontalRGB;
connector->interlace_allowed = true;
if (connector_type == DRM_MODE_CONNECTOR_HDMIB)
rdev->mode_info.underscan_vborder_property,
0);
}
+ if (ASIC_IS_DCE2(rdev) && (radeon_audio != 0)) {
+ drm_object_attach_property(&radeon_connector->base.base,
+ rdev->mode_info.audio_property,
+ (radeon_audio == 1) ?
+ RADEON_AUDIO_AUTO :
+ RADEON_AUDIO_DISABLE);
+ }
connector->interlace_allowed = true;
/* in theory with a DP to VGA converter... */
connector->doublescan_allowed = false;
goto failed;
radeon_dig_connector->igp_lane_info = igp_lane_info;
radeon_connector->con_priv = radeon_dig_connector;
- drm_connector_init(dev, &radeon_connector->base, &radeon_dp_connector_funcs, connector_type);
+ drm_connector_init(dev, &radeon_connector->base, &radeon_edp_connector_funcs, connector_type);
drm_connector_helper_add(&radeon_connector->base, &radeon_dp_connector_helper_funcs);
if (i2c_bus->valid) {
/* add DP i2c bus */
#include <drm/radeon_drm.h>
#include "radeon_reg.h"
#include "radeon.h"
+#include "radeon_trace.h"
static int radeon_cs_parser_relocs(struct radeon_cs_parser *p)
{
p->relocs[i].lobj.bo = p->relocs[i].robj;
p->relocs[i].lobj.written = !!r->write_domain;
- /* the first reloc of an UVD job is the
- msg and that must be in VRAM */
- if (p->ring == R600_RING_TYPE_UVD_INDEX && i == 0) {
+ /* the first reloc of an UVD job is the msg and that must be in
+ VRAM, also but everything into VRAM on AGP cards to avoid
+ image corruptions */
+ if (p->ring == R600_RING_TYPE_UVD_INDEX &&
+ (i == 0 || drm_pci_device_is_agp(p->rdev->ddev))) {
/* TODO: is this still needed for NI+ ? */
p->relocs[i].lobj.domain =
RADEON_GEM_DOMAIN_VRAM;
return r;
}
+ trace_radeon_cs(&parser);
+
r = radeon_cs_ib_chunk(rdev, &parser);
if (r) {
goto out;
/* Registers mapping */
/* TODO: block userspace mapping of io register */
spin_lock_init(&rdev->mmio_idx_lock);
+ spin_lock_init(&rdev->smc_idx_lock);
+ spin_lock_init(&rdev->pll_idx_lock);
+ spin_lock_init(&rdev->mc_idx_lock);
+ spin_lock_init(&rdev->pcie_idx_lock);
+ spin_lock_init(&rdev->pciep_idx_lock);
+ spin_lock_init(&rdev->pif_idx_lock);
+ spin_lock_init(&rdev->cg_idx_lock);
+ spin_lock_init(&rdev->uvd_idx_lock);
+ spin_lock_init(&rdev->rcu_idx_lock);
+ spin_lock_init(&rdev->didt_idx_lock);
+ spin_lock_init(&rdev->end_idx_lock);
if (rdev->family >= CHIP_BONAIRE) {
rdev->rmmio_base = pci_resource_start(rdev->pdev, 5);
rdev->rmmio_size = pci_resource_len(rdev->pdev, 5);
return r;
}
if ((radeon_testing & 1)) {
- radeon_test_moves(rdev);
+ if (rdev->accel_working)
+ radeon_test_moves(rdev);
+ else
+ DRM_INFO("radeon: acceleration disabled, skipping move tests\n");
}
if ((radeon_testing & 2)) {
- radeon_test_syncing(rdev);
+ if (rdev->accel_working)
+ radeon_test_syncing(rdev);
+ else
+ DRM_INFO("radeon: acceleration disabled, skipping sync tests\n");
}
if (radeon_benchmarking) {
- radeon_benchmark(rdev, radeon_benchmarking);
+ if (rdev->accel_working)
+ radeon_benchmark(rdev, radeon_benchmarking);
+ else
+ DRM_INFO("radeon: acceleration disabled, skipping benchmarks\n");
}
return 0;
}
{ UNDERSCAN_AUTO, "auto" },
};
+static struct drm_prop_enum_list radeon_audio_enum_list[] =
+{ { RADEON_AUDIO_DISABLE, "off" },
+ { RADEON_AUDIO_ENABLE, "on" },
+ { RADEON_AUDIO_AUTO, "auto" },
+};
+
static int radeon_modeset_create_props(struct radeon_device *rdev)
{
int sz;
if (!rdev->mode_info.underscan_vborder_property)
return -ENOMEM;
+ sz = ARRAY_SIZE(radeon_audio_enum_list);
+ rdev->mode_info.audio_property =
+ drm_property_create_enum(rdev->ddev, 0,
+ "audio",
+ radeon_audio_enum_list, sz);
+
return 0;
}
int radeon_testing = 0;
int radeon_connector_table = 0;
int radeon_tv = 1;
-int radeon_audio = 0;
+int radeon_audio = -1;
int radeon_disp_priority = 0;
int radeon_hw_i2c = 0;
int radeon_pcie_gen2 = -1;
MODULE_PARM_DESC(tv, "TV enable (0 = disable)");
module_param_named(tv, radeon_tv, int, 0444);
-MODULE_PARM_DESC(audio, "Audio enable (1 = enable)");
+MODULE_PARM_DESC(audio, "Audio enable (-1 = auto, 0 = disable, 1 = enable)");
module_param_named(audio, radeon_audio, int, 0444);
MODULE_PARM_DESC(disp_priority, "Display Priority (0 = auto, 1 = normal, 2 = high)");
struct drm_property *underscan_property;
struct drm_property *underscan_hborder_property;
struct drm_property *underscan_vborder_property;
+ /* audio */
+ struct drm_property *audio_property;
/* hardcoded DFP edid from BIOS */
struct edid *bios_hardcoded_edid;
int bios_hardcoded_edid_size;
u8 cd_mux_state;
};
+enum radeon_connector_audio {
+ RADEON_AUDIO_DISABLE = 0,
+ RADEON_AUDIO_ENABLE = 1,
+ RADEON_AUDIO_AUTO = 2
+};
+
struct radeon_connector {
struct drm_connector base;
uint32_t connector_id;
struct radeon_hpd hpd;
struct radeon_router router;
struct radeon_i2c_chan *router_bus;
+ enum radeon_connector_audio audio;
};
struct radeon_framebuffer {
void radeon_pm_acpi_event_handler(struct radeon_device *rdev)
{
- if (rdev->pm.pm_method == PM_METHOD_PROFILE) {
+ if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled) {
+ mutex_lock(&rdev->pm.mutex);
+ if (power_supply_is_system_supplied() > 0)
+ rdev->pm.dpm.ac_power = true;
+ else
+ rdev->pm.dpm.ac_power = false;
+ if (rdev->asic->dpm.enable_bapm)
+ radeon_dpm_enable_bapm(rdev, rdev->pm.dpm.ac_power);
+ mutex_unlock(&rdev->pm.mutex);
+ } else if (rdev->pm.pm_method == PM_METHOD_PROFILE) {
if (rdev->pm.profile == PM_PROFILE_AUTO) {
mutex_lock(&rdev->pm.mutex);
radeon_pm_update_profile(rdev);
struct device_attribute *attr,
char *buf)
{
- struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+ struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
int cp = rdev->pm.profile;
const char *buf,
size_t count)
{
- struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+ struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
mutex_lock(&rdev->pm.mutex);
struct device_attribute *attr,
char *buf)
{
- struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+ struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
int pm = rdev->pm.pm_method;
const char *buf,
size_t count)
{
- struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+ struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
/* we don't support the legacy modes with dpm */
struct device_attribute *attr,
char *buf)
{
- struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+ struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
enum radeon_pm_state_type pm = rdev->pm.dpm.user_state;
const char *buf,
size_t count)
{
- struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+ struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
mutex_lock(&rdev->pm.mutex);
struct device_attribute *attr,
char *buf)
{
- struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+ struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
enum radeon_dpm_forced_level level = rdev->pm.dpm.forced_level;
const char *buf,
size_t count)
{
- struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+ struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
enum radeon_dpm_forced_level level;
int ret = 0;
struct device_attribute *attr,
char *buf)
{
- struct drm_device *ddev = pci_get_drvdata(to_pci_dev(dev));
+ struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
int temp;
return snprintf(buf, PAGE_SIZE, "%d\n", temp);
}
+static ssize_t radeon_hwmon_show_temp_thresh(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct drm_device *ddev = dev_get_drvdata(dev);
+ struct radeon_device *rdev = ddev->dev_private;
+ int hyst = to_sensor_dev_attr(attr)->index;
+ int temp;
+
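+ /* attr index 1 selects the hysteresis (min) threshold, index 0 the critical (max) threshold */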
+ if (hyst)
+ temp = rdev->pm.dpm.thermal.min_temp;
+ else
+ temp = rdev->pm.dpm.thermal.max_temp;
+
+ return snprintf(buf, PAGE_SIZE, "%d\n", temp);
+}
+
static ssize_t radeon_hwmon_show_name(struct device *dev,
struct device_attribute *attr,
char *buf)
}
static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO, radeon_hwmon_show_temp, NULL, 0);
+static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO, radeon_hwmon_show_temp_thresh, NULL, 0);
+static SENSOR_DEVICE_ATTR(temp1_crit_hyst, S_IRUGO, radeon_hwmon_show_temp_thresh, NULL, 1);
static SENSOR_DEVICE_ATTR(name, S_IRUGO, radeon_hwmon_show_name, NULL, 0);
static struct attribute *hwmon_attributes[] = {
&sensor_dev_attr_temp1_input.dev_attr.attr,
+ &sensor_dev_attr_temp1_crit.dev_attr.attr,
+ &sensor_dev_attr_temp1_crit_hyst.dev_attr.attr,
&sensor_dev_attr_name.dev_attr.attr,
NULL
};
+static umode_t hwmon_attributes_visible(struct kobject *kobj,
+ struct attribute *attr, int index)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct drm_device *ddev = dev_get_drvdata(dev);
+ struct radeon_device *rdev = ddev->dev_private;
+
+ /* Skip limit attributes if DPM is not enabled */
+ if (rdev->pm.pm_method != PM_METHOD_DPM &&
+ (attr == &sensor_dev_attr_temp1_crit.dev_attr.attr ||
+ attr == &sensor_dev_attr_temp1_crit_hyst.dev_attr.attr))
+ return 0;
+
+ return attr->mode;
+}
+
static const struct attribute_group hwmon_attrgroup = {
.attrs = hwmon_attributes,
+ .is_visible = hwmon_attributes_visible,
};
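The hunk above registers the new temp1_crit and temp1_crit_hyst hwmon attributes unconditionally but hides them through the attribute group's .is_visible callback whenever the driver is not using DPM. The same filtering idea can be sketched in plain user-space C: one static table of entries plus a predicate that decides, per entry, whether it should be exposed. This is only an illustrative analogue of the pattern, not the kernel sysfs API; the names attr_entry and is_visible below are invented for the sketch.

#include <stdio.h>
#include <stdbool.h>

struct attr_entry {
	const char *name;
	bool needs_dpm;		/* only meaningful when DPM is active */
};

static const struct attr_entry attrs[] = {
	{ "temp1_input",     false },
	{ "temp1_crit",      true  },
	{ "temp1_crit_hyst", true  },
	{ "name",            false },
};

/* analogue of the .is_visible callback: returning false hides an entry */
static bool is_visible(const struct attr_entry *a, bool dpm_enabled)
{
	if (a->needs_dpm && !dpm_enabled)
		return false;
	return true;
}

int main(void)
{
	bool dpm_enabled = false;	/* pretend the legacy profile method is in use */
	unsigned i;

	for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		if (is_visible(&attrs[i], dpm_enabled))
			printf("exposing %s\n", attrs[i].name);
	return 0;
}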
static int radeon_hwmon_init(struct radeon_device *rdev)
radeon_dpm_post_set_power_state(rdev);
- /* force low perf level for thermal */
- if (rdev->pm.dpm.thermal_active &&
- rdev->asic->dpm.force_performance_level) {
- radeon_dpm_force_performance_level(rdev, RADEON_DPM_FORCED_LEVEL_LOW);
+ if (rdev->asic->dpm.force_performance_level) {
+ if (rdev->pm.dpm.thermal_active)
+ /* force low perf level for thermal */
+ radeon_dpm_force_performance_level(rdev, RADEON_DPM_FORCED_LEVEL_LOW);
+ else
+ /* otherwise, enable auto */
+ radeon_dpm_force_performance_level(rdev, RADEON_DPM_FORCED_LEVEL_AUTO);
}
done:
if (enable) {
mutex_lock(&rdev->pm.mutex);
rdev->pm.dpm.uvd_active = true;
+ /* disable this for now */
+#if 0
if ((rdev->pm.dpm.sd == 1) && (rdev->pm.dpm.hd == 0))
dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_SD;
else if ((rdev->pm.dpm.sd == 2) && (rdev->pm.dpm.hd == 0))
else if ((rdev->pm.dpm.sd == 0) && (rdev->pm.dpm.hd == 2))
dpm_state = POWER_STATE_TYPE_INTERNAL_UVD_HD2;
else
+#endif
dpm_state = POWER_STATE_TYPE_INTERNAL_UVD;
rdev->pm.dpm.state = dpm_state;
mutex_unlock(&rdev->pm.mutex);
{
/* set up the default clocks if the MC ucode is loaded */
if ((rdev->family >= CHIP_BARTS) &&
- (rdev->family <= CHIP_HAINAN) &&
+ (rdev->family <= CHIP_CAYMAN) &&
rdev->mc_fw) {
if (rdev->pm.default_vddc)
radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
if (ret) {
DRM_ERROR("radeon: dpm resume failed\n");
if ((rdev->family >= CHIP_BARTS) &&
- (rdev->family <= CHIP_HAINAN) &&
+ (rdev->family <= CHIP_CAYMAN) &&
rdev->mc_fw) {
if (rdev->pm.default_vddc)
radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
radeon_pm_init_profile(rdev);
/* set up the default clocks if the MC ucode is loaded */
if ((rdev->family >= CHIP_BARTS) &&
- (rdev->family <= CHIP_HAINAN) &&
+ (rdev->family <= CHIP_CAYMAN) &&
rdev->mc_fw) {
if (rdev->pm.default_vddc)
radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
{
int ret;
- /* default to performance state */
+ /* default to balanced state */
rdev->pm.dpm.state = POWER_STATE_TYPE_BALANCED;
rdev->pm.dpm.user_state = POWER_STATE_TYPE_BALANCED;
+ rdev->pm.dpm.forced_level = RADEON_DPM_FORCED_LEVEL_AUTO;
rdev->pm.default_sclk = rdev->clock.default_sclk;
rdev->pm.default_mclk = rdev->clock.default_mclk;
rdev->pm.current_sclk = rdev->clock.default_sclk;
if (ret) {
rdev->pm.dpm_enabled = false;
if ((rdev->family >= CHIP_BARTS) &&
- (rdev->family <= CHIP_HAINAN) &&
+ (rdev->family <= CHIP_CAYMAN) &&
rdev->mc_fw) {
if (rdev->pm.default_vddc)
radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
* packet that is the root issue
*/
i = (ring->rptr + ring->ptr_mask + 1 - 32) & ring->ptr_mask;
- for (j = 0; j <= (count + 32); j++) {
- seq_printf(m, "r[%5d]=0x%08x\n", i, ring->ring[i]);
- i = (i + 1) & ring->ptr_mask;
+ if (ring->ready) {
+ for (j = 0; j <= (count + 32); j++) {
+ seq_printf(m, "r[%5d]=0x%08x\n", i, ring->ring[i]);
+ i = (i + 1) & ring->ptr_mask;
+ }
}
return 0;
}
struct radeon_bo *vram_obj = NULL;
struct radeon_bo **gtt_obj = NULL;
uint64_t gtt_addr, vram_addr;
- unsigned i, n, size;
- int r, ring;
+ unsigned n, size;
+ int i, r, ring;
switch (flag) {
case RADEON_TEST_COPY_DMA:
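The radeon_test hunk above moves the loop counter i from unsigned to int, presumably so that the error-unwind paths, which count the variable back down, terminate: a descending loop with an ">= 0" condition never ends when the counter is unsigned. A minimal standalone illustration (the object count and names are invented):

#include <stdio.h>

#define N 4

int main(void)
{
	int allocated = 0;
	int i;				/* must be signed for the cleanup loop */

	for (i = 0; i < N; i++) {
		if (i == 2)		/* pretend the third allocation failed */
			break;
		allocated++;
	}

	/* unwind what was set up so far; with an unsigned i the condition
	 * i >= 0 is always true and this loop would never terminate */
	for (i = allocated - 1; i >= 0; i--)
		printf("freeing object %d\n", i);

	return 0;
}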
TP_printk("bo=%p, pages=%u", __entry->bo, __entry->pages)
);
+TRACE_EVENT(radeon_cs,
+ TP_PROTO(struct radeon_cs_parser *p),
+ TP_ARGS(p),
+ TP_STRUCT__entry(
+ __field(u32, ring)
+ __field(u32, dw)
+ __field(u32, fences)
+ ),
+
+ TP_fast_assign(
+ __entry->ring = p->ring;
+ __entry->dw = p->chunks[p->chunk_ib_idx].length_dw;
+ __entry->fences = radeon_fence_count_emitted(
+ p->rdev, p->ring);
+ ),
+ TP_printk("ring=%u, dw=%u, fences=%u",
+ __entry->ring, __entry->dw,
+ __entry->fences)
+);
+
DECLARE_EVENT_CLASS(radeon_fence_request,
TP_PROTO(struct drm_device *dev, u32 seqno),
TP_ARGS(dev, seqno)
);
-DEFINE_EVENT(radeon_fence_request, radeon_fence_retire,
-
- TP_PROTO(struct drm_device *dev, u32 seqno),
-
- TP_ARGS(dev, seqno)
-);
-
DEFINE_EVENT(radeon_fence_request, radeon_fence_wait_begin,
TP_PROTO(struct drm_device *dev, u32 seqno),
(rdev->pm.dpm.hd != hd)) {
rdev->pm.dpm.sd = sd;
rdev->pm.dpm.hd = hd;
- streams_changed = true;
+ /* disable this for now */
+ /*streams_changed = true;*/
}
}
uint32_t rs400_mc_rreg(struct radeon_device *rdev, uint32_t reg)
{
+ unsigned long flags;
uint32_t r;
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(RS480_NB_MC_INDEX, reg & 0xff);
r = RREG32(RS480_NB_MC_DATA);
WREG32(RS480_NB_MC_INDEX, 0xff);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
return r;
}
void rs400_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(RS480_NB_MC_INDEX, ((reg) & 0xff) | RS480_NB_MC_IND_WR_EN);
WREG32(RS480_NB_MC_DATA, (v));
WREG32(RS480_NB_MC_INDEX, 0xff);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
}
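The rs400 hunks above (and the matching rs600, rs690 and rv515 changes that follow) wrap every MC access in rdev->mc_idx_lock because the hardware exposes an index register plus a data register: selecting the index and then touching the data port must not interleave with another CPU doing the same, or one caller reads or writes through the other's index. A user-space pthread sketch of the same discipline, using fake "registers" and invented names, not the driver code:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t fake_index;		/* stand-ins for the INDEX/DATA ports */
static uint32_t fake_regs[256];
static pthread_mutex_t idx_lock = PTHREAD_MUTEX_INITIALIZER;

static uint32_t mc_rreg(uint8_t reg)
{
	uint32_t r;

	pthread_mutex_lock(&idx_lock);	/* select + read must stay paired */
	fake_index = reg;
	r = fake_regs[fake_index];
	pthread_mutex_unlock(&idx_lock);
	return r;
}

static void mc_wreg(uint8_t reg, uint32_t v)
{
	pthread_mutex_lock(&idx_lock);	/* select + write must stay paired */
	fake_index = reg;
	fake_regs[fake_index] = v;
	pthread_mutex_unlock(&idx_lock);
}

int main(void)
{
	mc_wreg(0x10, 0xdeadbeef);
	printf("0x%08x\n", mc_rreg(0x10));
	return 0;
}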
#if defined(CONFIG_DEBUG_FS)
uint32_t rs600_mc_rreg(struct radeon_device *rdev, uint32_t reg)
{
+ unsigned long flags;
+ u32 r;
+
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(R_000070_MC_IND_INDEX, S_000070_MC_IND_ADDR(reg) |
S_000070_MC_IND_CITF_ARB0(1));
- return RREG32(R_000074_MC_IND_DATA);
+ r = RREG32(R_000074_MC_IND_DATA);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
+ return r;
}
void rs600_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(R_000070_MC_IND_INDEX, S_000070_MC_IND_ADDR(reg) |
S_000070_MC_IND_CITF_ARB0(1) | S_000070_MC_IND_WR_EN(1));
WREG32(R_000074_MC_IND_DATA, v);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
}
static void rs600_debugfs(struct radeon_device *rdev)
uint32_t rs690_mc_rreg(struct radeon_device *rdev, uint32_t reg)
{
+ unsigned long flags;
uint32_t r;
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(R_000078_MC_INDEX, S_000078_MC_IND_ADDR(reg));
r = RREG32(R_00007C_MC_DATA);
WREG32(R_000078_MC_INDEX, ~C_000078_MC_IND_ADDR);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
return r;
}
void rs690_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(R_000078_MC_INDEX, S_000078_MC_IND_ADDR(reg) |
S_000078_MC_IND_WR_EN(1));
WREG32(R_00007C_MC_DATA, v);
WREG32(R_000078_MC_INDEX, 0x7F);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
}
static void rs690_mc_program(struct radeon_device *rdev)
radeon_crtc = to_radeon_crtc(crtc);
pi->crtc_id = radeon_crtc->crtc_id;
if (crtc->mode.htotal && crtc->mode.vtotal)
- pi->refresh_rate =
- (crtc->mode.clock * 1000) /
- (crtc->mode.htotal * crtc->mode.vtotal);
+ pi->refresh_rate = drm_mode_vrefresh(&crtc->mode);
break;
}
}
WREG32_P(CG_INTGFX_MISC, 0, ~0xFFF00000);
}
-static void rs780_force_voltage_to_high(struct radeon_device *rdev)
+static void rs780_force_voltage(struct radeon_device *rdev, u16 voltage)
{
- struct igp_power_info *pi = rs780_get_pi(rdev);
struct igp_ps *current_state = rs780_get_ps(rdev->pm.dpm.current_ps);
if ((current_state->max_voltage == RS780_VDDC_LEVEL_HIGH) &&
udelay(1);
WREG32_P(FVTHROT_PWM_CTRL_REG0,
- STARTING_PWM_HIGHTIME(pi->max_voltage),
+ STARTING_PWM_HIGHTIME(voltage),
~STARTING_PWM_HIGHTIME_MASK);
WREG32_P(FVTHROT_PWM_CTRL_REG0,
WREG32_P(GFX_MACRO_BYPASS_CNTL, 0, ~SPLL_BYPASS_CNTL);
}
+static void rs780_force_fbdiv(struct radeon_device *rdev, u32 fb_div)
+{
+ struct igp_ps *current_state = rs780_get_ps(rdev->pm.dpm.current_ps);
+
+ if (current_state->sclk_low == current_state->sclk_high)
+ return;
+
+ WREG32_P(GFX_MACRO_BYPASS_CNTL, SPLL_BYPASS_CNTL, ~SPLL_BYPASS_CNTL);
+
+ WREG32_P(FVTHROT_FBDIV_REG2, FORCED_FEEDBACK_DIV(fb_div),
+ ~FORCED_FEEDBACK_DIV_MASK);
+ WREG32_P(FVTHROT_FBDIV_REG1, STARTING_FEEDBACK_DIV(fb_div),
+ ~STARTING_FEEDBACK_DIV_MASK);
+ WREG32_P(FVTHROT_FBDIV_REG1, FORCE_FEEDBACK_DIV, ~FORCE_FEEDBACK_DIV);
+
+ udelay(100);
+
+ WREG32_P(GFX_MACRO_BYPASS_CNTL, 0, ~SPLL_BYPASS_CNTL);
+}
+
static int rs780_set_engine_clock_scaling(struct radeon_device *rdev,
struct radeon_ps *new_ps,
struct radeon_ps *old_ps)
if (ret)
return ret;
- WREG32_P(GFX_MACRO_BYPASS_CNTL, SPLL_BYPASS_CNTL, ~SPLL_BYPASS_CNTL);
-
- WREG32_P(FVTHROT_FBDIV_REG2, FORCED_FEEDBACK_DIV(max_dividers.fb_div),
- ~FORCED_FEEDBACK_DIV_MASK);
- WREG32_P(FVTHROT_FBDIV_REG1, STARTING_FEEDBACK_DIV(max_dividers.fb_div),
- ~STARTING_FEEDBACK_DIV_MASK);
- WREG32_P(FVTHROT_FBDIV_REG1, FORCE_FEEDBACK_DIV, ~FORCE_FEEDBACK_DIV);
-
- udelay(100);
+ if ((min_dividers.ref_div != max_dividers.ref_div) ||
+ (min_dividers.post_div != max_dividers.post_div) ||
+ (max_dividers.ref_div != current_max_dividers.ref_div) ||
+ (max_dividers.post_div != current_max_dividers.post_div))
+ return -EINVAL;
- WREG32_P(GFX_MACRO_BYPASS_CNTL, 0, ~SPLL_BYPASS_CNTL);
+ rs780_force_fbdiv(rdev, max_dividers.fb_div);
if (max_dividers.fb_div > min_dividers.fb_div) {
WREG32_P(FVTHROT_FBDIV_REG0,
(new_state->sclk_low == old_state->sclk_low))
return;
+ if (new_state->sclk_high == new_state->sclk_low)
+ return;
+
rs780_clk_scaling_enable(rdev, true);
}
rs780_set_uvd_clock_before_set_eng_clock(rdev, new_ps, old_ps);
if (pi->voltage_control) {
- rs780_force_voltage_to_high(rdev);
+ rs780_force_voltage(rdev, pi->max_voltage);
mdelay(5);
}
if (ATOM_PPLIB_NONCLOCKINFO_VER1 < table_rev) {
rps->vclk = le32_to_cpu(non_clock_info->ulVCLK);
rps->dclk = le32_to_cpu(non_clock_info->ulDCLK);
- } else if (r600_is_uvd_state(rps->class, rps->class2)) {
- rps->vclk = RS780_DEFAULT_VCLK_FREQ;
- rps->dclk = RS780_DEFAULT_DCLK_FREQ;
} else {
rps->vclk = 0;
rps->dclk = 0;
}
+ if (r600_is_uvd_state(rps->class, rps->class2)) {
+ if ((rps->vclk == 0) || (rps->dclk == 0)) {
+ rps->vclk = RS780_DEFAULT_VCLK_FREQ;
+ rps->dclk = RS780_DEFAULT_DCLK_FREQ;
+ }
+ }
+
if (rps->class & ATOM_PPLIB_CLASSIFICATION_BOOT)
rdev->pm.dpm.boot_ps = rps;
if (rps->class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
seq_printf(m, "power level 1 sclk: %u vddc_index: %d\n",
ps->sclk_high, ps->max_voltage);
}
+
+int rs780_dpm_force_performance_level(struct radeon_device *rdev,
+ enum radeon_dpm_forced_level level)
+{
+ struct igp_power_info *pi = rs780_get_pi(rdev);
+ struct radeon_ps *rps = rdev->pm.dpm.current_ps;
+ struct igp_ps *ps = rs780_get_ps(rps);
+ struct atom_clock_dividers dividers;
+ int ret;
+
+ rs780_clk_scaling_enable(rdev, false);
+ rs780_voltage_scaling_enable(rdev, false);
+
+ if (level == RADEON_DPM_FORCED_LEVEL_HIGH) {
+ if (pi->voltage_control)
+ rs780_force_voltage(rdev, pi->max_voltage);
+
+ ret = radeon_atom_get_clock_dividers(rdev, COMPUTE_ENGINE_PLL_PARAM,
+ ps->sclk_high, false, &dividers);
+ if (ret)
+ return ret;
+
+ rs780_force_fbdiv(rdev, dividers.fb_div);
+ } else if (level == RADEON_DPM_FORCED_LEVEL_LOW) {
+ ret = radeon_atom_get_clock_dividers(rdev, COMPUTE_ENGINE_PLL_PARAM,
+ ps->sclk_low, false, &dividers);
+ if (ret)
+ return ret;
+
+ rs780_force_fbdiv(rdev, dividers.fb_div);
+
+ if (pi->voltage_control)
+ rs780_force_voltage(rdev, pi->min_voltage);
+ } else {
+ if (pi->voltage_control)
+ rs780_force_voltage(rdev, pi->max_voltage);
+
+ if (ps->sclk_high != ps->sclk_low) {
+ WREG32_P(FVTHROT_FBDIV_REG1, 0, ~FORCE_FEEDBACK_DIV);
+ rs780_clk_scaling_enable(rdev, true);
+ }
+
+ if (pi->voltage_control) {
+ rs780_voltage_scaling_enable(rdev, true);
+ rs780_enable_voltage_scaling(rdev, rps);
+ }
+ }
+
+ rdev->pm.dpm.forced_level = level;
+
+ return 0;
+}
uint32_t rv515_mc_rreg(struct radeon_device *rdev, uint32_t reg)
{
+ unsigned long flags;
uint32_t r;
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(MC_IND_INDEX, 0x7f0000 | (reg & 0xffff));
r = RREG32(MC_IND_DATA);
WREG32(MC_IND_INDEX, 0);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
+
return r;
}
void rv515_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v)
{
+ unsigned long flags;
+
+ spin_lock_irqsave(&rdev->mc_idx_lock, flags);
WREG32(MC_IND_INDEX, 0xff0000 | ((reg) & 0xffff));
WREG32(MC_IND_DATA, (v));
WREG32(MC_IND_INDEX, 0);
+ spin_unlock_irqrestore(&rdev->mc_idx_lock, flags);
}
#if defined(CONFIG_DEBUG_FS)
rv6xx_set_uvd_clock_after_set_eng_clock(rdev, new_ps, old_ps);
- rdev->pm.dpm.forced_level = RADEON_DPM_FORCED_LEVEL_AUTO;
-
return 0;
}
rv770_program_dcodt_after_state_switch(rdev, new_ps, old_ps);
rv770_set_uvd_clock_after_set_eng_clock(rdev, new_ps, old_ps);
- ret = rv770_dpm_force_performance_level(rdev, RADEON_DPM_FORCED_LEVEL_AUTO);
- if (ret) {
- DRM_ERROR("rv770_dpm_force_performance_level failed\n");
- return ret;
- }
-
return 0;
}
if (ATOM_PPLIB_NONCLOCKINFO_VER1 < table_rev) {
rps->vclk = le32_to_cpu(non_clock_info->ulVCLK);
rps->dclk = le32_to_cpu(non_clock_info->ulDCLK);
- } else if (r600_is_uvd_state(rps->class, rps->class2)) {
- rps->vclk = RV770_DEFAULT_VCLK_FREQ;
- rps->dclk = RV770_DEFAULT_DCLK_FREQ;
} else {
rps->vclk = 0;
rps->dclk = 0;
}
+ if (r600_is_uvd_state(rps->class, rps->class2)) {
+ if ((rps->vclk == 0) || (rps->dclk == 0)) {
+ rps->vclk = RV770_DEFAULT_VCLK_FREQ;
+ rps->dclk = RV770_DEFAULT_DCLK_FREQ;
+ }
+ }
+
if (rps->class & ATOM_PPLIB_CLASSIFICATION_BOOT)
rdev->pm.dpm.boot_ps = rps;
if (rps->class & ATOM_PPLIB_CLASSIFICATION_UVDSTATE)
0x08, 0x72, 0x08, 0x72
};
-int rv770_set_smc_sram_address(struct radeon_device *rdev,
- u16 smc_address, u16 limit)
+static int rv770_set_smc_sram_address(struct radeon_device *rdev,
+ u16 smc_address, u16 limit)
{
u32 addr;
u16 smc_start_address, const u8 *src,
u16 byte_count, u16 limit)
{
+ unsigned long flags;
u32 data, original_data, extra_shift;
u16 addr;
- int ret;
+ int ret = 0;
if (smc_start_address & 3)
return -EINVAL;
addr = smc_start_address;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
while (byte_count >= 4) {
/* SMC address space is BE */
data = (src[0] << 24) | (src[1] << 16) | (src[2] << 8) | src[3];
ret = rv770_set_smc_sram_address(rdev, addr, limit);
if (ret)
- return ret;
+ goto done;
WREG32(SMC_SRAM_DATA, data);
ret = rv770_set_smc_sram_address(rdev, addr, limit);
if (ret)
- return ret;
+ goto done;
original_data = RREG32(SMC_SRAM_DATA);
ret = rv770_set_smc_sram_address(rdev, addr, limit);
if (ret)
- return ret;
+ goto done;
WREG32(SMC_SRAM_DATA, data);
}
- return 0;
+done:
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
+
+ return ret;
}
static int rv770_program_interrupt_vectors(struct radeon_device *rdev,
static void rv770_clear_smc_sram(struct radeon_device *rdev, u16 limit)
{
+ unsigned long flags;
u16 i;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
for (i = 0; i < limit; i += 4) {
rv770_set_smc_sram_address(rdev, i, limit);
WREG32(SMC_SRAM_DATA, 0);
}
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
}
int rv770_load_smc_ucode(struct radeon_device *rdev,
int rv770_read_smc_sram_dword(struct radeon_device *rdev,
u16 smc_address, u32 *value, u16 limit)
{
+ unsigned long flags;
int ret;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
ret = rv770_set_smc_sram_address(rdev, smc_address, limit);
- if (ret)
- return ret;
-
- *value = RREG32(SMC_SRAM_DATA);
+ if (ret == 0)
+ *value = RREG32(SMC_SRAM_DATA);
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
- return 0;
+ return ret;
}
int rv770_write_smc_sram_dword(struct radeon_device *rdev,
u16 smc_address, u32 value, u16 limit)
{
+ unsigned long flags;
int ret;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
ret = rv770_set_smc_sram_address(rdev, smc_address, limit);
- if (ret)
- return ret;
+ if (ret == 0)
+ WREG32(SMC_SRAM_DATA, value);
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
- WREG32(SMC_SRAM_DATA, value);
-
- return 0;
+ return ret;
}
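In the rv770 SMC helpers above the lock is taken once, every failure inside the loop jumps to a single done: label, and the function returns ret only after the unlock, so no early return can leak the lock. The pattern, reduced to standalone C with invented helper names (not the driver code):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t smc_lock = PTHREAD_MUTEX_INITIALIZER;

static int set_address(unsigned addr, unsigned limit)
{
	return (addr + 3 < limit) ? 0 : -1;	/* pretend bounds check */
}

static int copy_words(const unsigned *src, unsigned n,
		      unsigned start, unsigned limit)
{
	int ret = 0;
	unsigned i;

	(void)src;			/* a real copy would write src[i] below */

	pthread_mutex_lock(&smc_lock);
	for (i = 0; i < n; i++) {
		ret = set_address(start + 4 * i, limit);
		if (ret)
			goto done;	/* never return with the lock held */
		/* ... write src[i] to the data register here ... */
	}
done:
	pthread_mutex_unlock(&smc_lock);
	return ret;
}

int main(void)
{
	unsigned words[4] = { 1, 2, 3, 4 };

	printf("ok copy:  %d\n", copy_words(words, 4, 0, 64));
	printf("bad copy: %d\n", copy_words(words, 4, 60, 64));
	return 0;
}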
#define RV770_SMC_SOFT_REGISTER_uvd_enabled 0x9C
#define RV770_SMC_SOFT_REGISTER_is_asic_lombok 0xA0
-int rv770_set_smc_sram_address(struct radeon_device *rdev,
- u16 smc_address, u16 limit);
int rv770_copy_bytes_to_smc(struct radeon_device *rdev,
u16 smc_start_address, const u8 *src,
u16 byte_count, u16 limit);
#define AFMT_VBI_PACKET_CONTROL 0x7608
# define AFMT_GENERIC0_UPDATE (1 << 2)
#define AFMT_INFOFRAME_CONTROL0 0x760c
-# define AFMT_AUDIO_INFO_SOURCE (1 << 6) /* 0 - sound block; 1 - hmdi regs */
+# define AFMT_AUDIO_INFO_SOURCE (1 << 6) /* 0 - sound block; 1 - hdmi regs */
# define AFMT_AUDIO_INFO_UPDATE (1 << 7)
# define AFMT_MPEG_INFO_UPDATE (1 << 10)
#define AFMT_GENERIC0_7 0x7610
uint64_t pe,
uint64_t addr, unsigned count,
uint32_t incr, uint32_t flags);
+static void si_enable_gui_idle_interrupt(struct radeon_device *rdev,
+ bool enable);
+static void si_fini_pg(struct radeon_device *rdev);
+static void si_fini_cg(struct radeon_device *rdev);
+static void si_rlc_stop(struct radeon_device *rdev);
static const u32 verde_rlc_save_restore_register_list[] =
{
fw_name);
release_firmware(rdev->smc_fw);
rdev->smc_fw = NULL;
+ err = 0;
} else if (rdev->smc_fw->size != smc_req_size) {
printk(KERN_ERR
"si_smc: Bogus length %zu in firmware \"%s\"\n",
u32 rb_bufsz;
int r;
+ si_enable_gui_idle_interrupt(rdev, false);
+
WREG32(CP_SEM_WAIT_TIMER, 0x0);
WREG32(CP_SEM_INCOMPLETE_TIMER_CNTL, 0x0);
rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX].ready = false;
}
+ si_enable_gui_idle_interrupt(rdev, true);
+
return 0;
}
dev_info(rdev->dev, " VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x%08X\n",
RREG32(VM_CONTEXT1_PROTECTION_FAULT_STATUS));
+ /* disable PG/CG */
+ si_fini_pg(rdev);
+ si_fini_cg(rdev);
+
+ /* stop the rlc */
+ si_rlc_stop(rdev);
+
/* Disable CP parsing/prefetching */
WREG32(CP_ME_CNTL, CP_ME_HALT | CP_PFP_HALT | CP_CE_HALT);
{
u32 tmp;
- if (enable && (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_CG)) {
+ if (enable && (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_PG)) {
tmp = RLC_PUD(0x10) | RLC_PDD(0x10) | RLC_TTPD(0x10) | RLC_MSD(0x10);
WREG32(RLC_TTOP_D, tmp);
u32 block, bool enable)
{
if (block & RADEON_CG_BLOCK_GFX) {
+ si_enable_gui_idle_interrupt(rdev, false);
/* order matters! */
if (enable) {
si_enable_mgcg(rdev, true);
si_enable_cgcg(rdev, false);
si_enable_mgcg(rdev, false);
}
+ si_enable_gui_idle_interrupt(rdev, true);
}
if (block & RADEON_CG_BLOCK_MC) {
si_init_dma_pg(rdev);
}
si_init_ao_cu_mask(rdev);
- if (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_CG) {
+ if (rdev->pg_flags & RADEON_PG_SUPPORT_GFX_PG) {
si_init_gfx_cgpg(rdev);
}
si_enable_dma_pg(rdev, true);
{
u32 tmp;
- WREG32(CP_INT_CNTL_RING0, CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
+ tmp = RREG32(CP_INT_CNTL_RING0) &
+ (CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
+ WREG32(CP_INT_CNTL_RING0, tmp);
WREG32(CP_INT_CNTL_RING1, 0);
WREG32(CP_INT_CNTL_RING2, 0);
tmp = RREG32(DMA_CNTL + DMA0_REGISTER_OFFSET) & ~TRAP_ENABLE;
int si_irq_set(struct radeon_device *rdev)
{
- u32 cp_int_cntl = CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE;
+ u32 cp_int_cntl;
u32 cp_int_cntl1 = 0, cp_int_cntl2 = 0;
u32 crtc1 = 0, crtc2 = 0, crtc3 = 0, crtc4 = 0, crtc5 = 0, crtc6 = 0;
u32 hpd1 = 0, hpd2 = 0, hpd3 = 0, hpd4 = 0, hpd5 = 0, hpd6 = 0;
return 0;
}
+ cp_int_cntl = RREG32(CP_INT_CNTL_RING0) &
+ (CNTX_BUSY_INT_ENABLE | CNTX_EMPTY_INT_ENABLE);
+
if (!ASIC_IS_NODCE(rdev)) {
hpd1 = RREG32(DC_HPD1_INT_CONTROL) & ~DC_HPDx_INT_EN;
hpd2 = RREG32(DC_HPD2_INT_CONTROL) & ~DC_HPDx_INT_EN;
bool disable_sclk_switching = false;
u32 mclk, sclk;
u16 vddc, vddci;
+ u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
int i;
if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
}
}
+ /* limit clocks to max supported clocks based on voltage dependency tables */
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
+ &max_sclk_vddc);
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
+ &max_mclk_vddci);
+ btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
+ &max_mclk_vddc);
+
+ for (i = 0; i < ps->performance_level_count; i++) {
+ if (max_sclk_vddc) {
+ if (ps->performance_levels[i].sclk > max_sclk_vddc)
+ ps->performance_levels[i].sclk = max_sclk_vddc;
+ }
+ if (max_mclk_vddci) {
+ if (ps->performance_levels[i].mclk > max_mclk_vddci)
+ ps->performance_levels[i].mclk = max_mclk_vddci;
+ }
+ if (max_mclk_vddc) {
+ if (ps->performance_levels[i].mclk > max_mclk_vddc)
+ ps->performance_levels[i].mclk = max_mclk_vddc;
+ }
+ }
+
/* XXX validate the min clocks required for display */
if (disable_mclk_switching) {
table->mc_reg_table_entry[k].mc_data[j] |= 0x100;
}
j++;
- if (j > SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE)
+ if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE)
return -EINVAL;
if (!pi->mem_gddr5) {
table->mc_reg_table_entry[k].mc_data[j] =
(table->mc_reg_table_entry[k].mc_data[i] & 0xffff0000) >> 16;
j++;
- if (j > SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE)
+ if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE)
return -EINVAL;
}
break;
(temp_reg & 0xffff0000) |
(table->mc_reg_table_entry[k].mc_data[i] & 0x0000ffff);
j++;
- if (j > SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE)
+ if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE)
return -EINVAL;
break;
default:
return ret;
}
- ret = si_dpm_force_performance_level(rdev, RADEON_DPM_FORCED_LEVEL_AUTO);
- if (ret) {
- DRM_ERROR("si_dpm_force_performance_level failed\n");
- return ret;
- }
-
si_update_cg(rdev, (RADEON_CG_BLOCK_GFX |
RADEON_CG_BLOCK_MC |
RADEON_CG_BLOCK_SDMA |
#include "ppsmc.h"
#include "radeon_ucode.h"
-int si_set_smc_sram_address(struct radeon_device *rdev,
- u32 smc_address, u32 limit)
+static int si_set_smc_sram_address(struct radeon_device *rdev,
+ u32 smc_address, u32 limit)
{
if (smc_address & 3)
return -EINVAL;
u32 smc_start_address,
const u8 *src, u32 byte_count, u32 limit)
{
- int ret;
+ unsigned long flags;
+ int ret = 0;
u32 data, original_data, addr, extra_shift;
if (smc_start_address & 3)
addr = smc_start_address;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
while (byte_count >= 4) {
/* SMC address space is BE */
data = (src[0] << 24) | (src[1] << 16) | (src[2] << 8) | src[3];
ret = si_set_smc_sram_address(rdev, addr, limit);
if (ret)
- return ret;
+ goto done;
WREG32(SMC_IND_DATA_0, data);
ret = si_set_smc_sram_address(rdev, addr, limit);
if (ret)
- return ret;
+ goto done;
original_data = RREG32(SMC_IND_DATA_0);
ret = si_set_smc_sram_address(rdev, addr, limit);
if (ret)
- return ret;
+ goto done;
WREG32(SMC_IND_DATA_0, data);
}
- return 0;
+
+done:
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
+
+ return ret;
}
void si_start_smc(struct radeon_device *rdev)
int si_load_smc_ucode(struct radeon_device *rdev, u32 limit)
{
+ unsigned long flags;
u32 ucode_start_address;
u32 ucode_size;
const u8 *src;
return -EINVAL;
src = (const u8 *)rdev->smc_fw->data;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
WREG32(SMC_IND_INDEX_0, ucode_start_address);
WREG32_P(SMC_IND_ACCESS_CNTL, AUTO_INCREMENT_IND_0, ~AUTO_INCREMENT_IND_0);
while (ucode_size >= 4) {
ucode_size -= 4;
}
WREG32_P(SMC_IND_ACCESS_CNTL, 0, ~AUTO_INCREMENT_IND_0);
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
return 0;
}
int si_read_smc_sram_dword(struct radeon_device *rdev, u32 smc_address,
u32 *value, u32 limit)
{
+ unsigned long flags;
int ret;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
ret = si_set_smc_sram_address(rdev, smc_address, limit);
- if (ret)
- return ret;
+ if (ret == 0)
+ *value = RREG32(SMC_IND_DATA_0);
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
- *value = RREG32(SMC_IND_DATA_0);
- return 0;
+ return ret;
}
int si_write_smc_sram_dword(struct radeon_device *rdev, u32 smc_address,
u32 value, u32 limit)
{
+ unsigned long flags;
int ret;
+ spin_lock_irqsave(&rdev->smc_idx_lock, flags);
ret = si_set_smc_sram_address(rdev, smc_address, limit);
- if (ret)
- return ret;
+ if (ret == 0)
+ WREG32(SMC_IND_DATA_0, value);
+ spin_unlock_irqrestore(&rdev->smc_idx_lock, flags);
- WREG32(SMC_IND_DATA_0, value);
- return 0;
+ return ret;
}
* 6. COMMAND [30:21] | BYTE_COUNT [20:0]
*/
# define PACKET3_CP_DMA_DST_SEL(x) ((x) << 20)
- /* 0 - SRC_ADDR
+ /* 0 - DST_ADDR
* 1 - GDS
*/
# define PACKET3_CP_DMA_ENGINE(x) ((x) << 27)
# define PACKET3_CP_DMA_CP_SYNC (1 << 31)
/* COMMAND */
# define PACKET3_CP_DMA_DIS_WC (1 << 21)
-# define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 23)
+# define PACKET3_CP_DMA_CMD_SRC_SWAP(x) ((x) << 22)
/* 0 - none
* 1 - 8 in 16
* 2 - 8 in 32
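The sid.h hunk above corrects PACKET3_CP_DMA_CMD_SRC_SWAP to shift by 22 rather than 23; with the wrong shift the value lands in a neighbouring field of the COMMAND dword and the intended bits stay zero. Field macros of this kind are easy to sanity-check with a small encode/decode round trip. The constants below are a hypothetical layout for illustration, not the real packet definition:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* hypothetical 2-bit field at bits [23:22] of a command dword */
#define CMD_SRC_SWAP_SHIFT 22
#define CMD_SRC_SWAP_MASK  (0x3u << CMD_SRC_SWAP_SHIFT)
#define CMD_SRC_SWAP(x)    (((uint32_t)(x) << CMD_SRC_SWAP_SHIFT) & CMD_SRC_SWAP_MASK)

static unsigned cmd_src_swap_get(uint32_t cmd)
{
	return (cmd & CMD_SRC_SWAP_MASK) >> CMD_SRC_SWAP_SHIFT;
}

int main(void)
{
	uint32_t cmd = 0;

	cmd |= CMD_SRC_SWAP(2);			/* "8 in 32" swap mode */
	assert(cmd_src_swap_get(cmd) == 2);	/* round trip catches a bad shift */
	printf("cmd = 0x%08x\n", cmd);
	return 0;
}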
if (pi->enable_dpm)
sumo_set_uvd_clock_after_set_eng_clock(rdev, new_ps, old_ps);
- rdev->pm.dpm.forced_level = RADEON_DPM_FORCED_LEVEL_AUTO;
-
return 0;
}
pi->requested_rps.ps_priv = &pi->requested_ps;
}
+void trinity_dpm_enable_bapm(struct radeon_device *rdev, bool enable)
+{
+ struct trinity_power_info *pi = trinity_get_pi(rdev);
+
+ if (pi->enable_bapm) {
+ trinity_acquire_mutex(rdev);
+ trinity_dpm_bapm_enable(rdev, enable);
+ trinity_release_mutex(rdev);
+ }
+}
+
int trinity_dpm_enable(struct radeon_device *rdev)
{
struct trinity_power_info *pi = trinity_get_pi(rdev);
trinity_program_sclk_dpm(rdev);
trinity_start_dpm(rdev);
trinity_wait_for_dpm_enabled(rdev);
+ trinity_dpm_bapm_enable(rdev, false);
trinity_release_mutex(rdev);
if (rdev->irq.installed &&
trinity_release_mutex(rdev);
return;
}
+ trinity_dpm_bapm_enable(rdev, false);
trinity_disable_clock_power_gating(rdev);
sumo_clear_vc(rdev);
trinity_wait_for_level_0(rdev);
trinity_acquire_mutex(rdev);
if (pi->enable_dpm) {
+ if (pi->enable_bapm)
+ trinity_dpm_bapm_enable(rdev, rdev->pm.dpm.ac_power);
trinity_set_uvd_clock_before_set_eng_clock(rdev, new_ps, old_ps);
trinity_enable_power_level_0(rdev);
trinity_force_level_0(rdev);
trinity_force_level_0(rdev);
trinity_unforce_levels(rdev);
trinity_set_uvd_clock_after_set_eng_clock(rdev, new_ps, old_ps);
- rdev->pm.dpm.forced_level = RADEON_DPM_FORCED_LEVEL_AUTO;
}
trinity_release_mutex(rdev);
for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++)
pi->at[i] = TRINITY_AT_DFLT;
+ pi->enable_bapm = false;
pi->enable_nbps_policy = true;
pi->enable_sclk_ds = true;
pi->enable_gfx_power_gating = true;
bool enable_auto_thermal_throttling;
bool enable_dpm;
bool enable_sclk_ds;
+ bool enable_bapm;
bool uvd_dpm;
struct radeon_ps current_rps;
struct trinity_ps current_ps;
#define TRINITY_AT_DFLT 30
/* trinity_smc.c */
+int trinity_dpm_bapm_enable(struct radeon_device *rdev, bool enable);
int trinity_dpm_config(struct radeon_device *rdev, bool enable);
int trinity_uvd_dpm_config(struct radeon_device *rdev);
int trinity_dpm_force_state(struct radeon_device *rdev, u32 n);
return 0;
}
+int trinity_dpm_bapm_enable(struct radeon_device *rdev, bool enable)
+{
+ if (enable)
+ return trinity_notify_message_to_smu(rdev, PPSMC_MSG_EnableBAPM);
+ else
+ return trinity_notify_message_to_smu(rdev, PPSMC_MSG_DisableBAPM);
+}
+
int trinity_dpm_config(struct radeon_device *rdev, bool enable)
{
if (enable)
uint32_t key)
{
struct ttm_object_device *tdev = tfile->tdev;
- struct ttm_base_object *base;
+ struct ttm_base_object *uninitialized_var(base);
struct drm_hash_item *hash;
int ret;
ttm_tt_unbind(ttm);
}
- if (likely(ttm->pages != NULL)) {
+ if (ttm->state == tt_unbound) {
ttm->bdev->driver->ttm_tt_unpopulate(ttm);
}
ret = vm_insert_page(vma, (unsigned long)vmf->virtual_address, page);
switch (ret) {
case -EAGAIN:
- set_need_resched();
case 0:
case -ERESTARTSYS:
return VM_FAULT_NOPAGE;
struct vmw_fpriv *vmw_fp;
vmw_fp = vmw_fpriv(file_priv);
- ttm_object_file_release(&vmw_fp->tfile);
- if (vmw_fp->locked_master)
+
+ if (vmw_fp->locked_master) {
+ struct vmw_master *vmaster =
+ vmw_master(vmw_fp->locked_master);
+
+ ttm_lock_set_kill(&vmaster->lock, true, SIGTERM);
+ ttm_vt_unlock(&vmaster->lock);
drm_master_put(&vmw_fp->locked_master);
+ }
+
+ ttm_object_file_release(&vmw_fp->tfile);
kfree(vmw_fp);
}
vmw_fp->locked_master = drm_master_get(file_priv->master);
ret = ttm_vt_lock(&vmaster->lock, false, vmw_fp->tfile);
- vmw_execbuf_release_pinned_bo(dev_priv);
-
if (unlikely((ret != 0))) {
DRM_ERROR("Unable to lock TTM at VT switch.\n");
drm_master_put(&vmw_fp->locked_master);
}
- ttm_lock_set_kill(&vmaster->lock, true, SIGTERM);
+ ttm_lock_set_kill(&vmaster->lock, false, SIGTERM);
+ vmw_execbuf_release_pinned_bo(dev_priv);
if (!dev_priv->enable_fb) {
ret = ttm_bo_evict_mm(&dev_priv->bdev, TTM_PL_VRAM);
if (new_backup)
res->backup_offset = new_backup_offset;
- if (!res->func->may_evict)
+ if (!res->func->may_evict || res->id == -1)
return;
write_lock(&dev_priv->resource_lock);
- Sharkoon Drakonia / Perixx MX-2000 gaming mice
- Tracer Sniper TRM-503 / NOVA Gaming Slider X200 /
Zalman ZM-GM1
+ - SHARKOON DarkGlider Gaming mouse
config HOLTEK_FF
bool "Holtek On Line Grip force feedback support"
static struct hid_field *hid_register_field(struct hid_report *report, unsigned usages, unsigned values)
{
struct hid_field *field;
- int i;
if (report->maxfield == HID_MAX_FIELDS) {
hid_err(report->device, "too many fields in report\n");
field->value = (s32 *)(field->usage + usages);
field->report = report;
- for (i = 0; i < usages; i++)
- field->usage[i].usage_index = i;
-
return field;
}
{
struct hid_report *report;
struct hid_field *field;
- int usages;
+ unsigned usages;
unsigned offset;
- int i;
+ unsigned i;
report = hid_register_report(parser->device, report_type, parser->global.report_id);
if (!report) {
if (!parser->local.usage_index) /* Ignore padding fields */
return 0;
- usages = max_t(int, parser->local.usage_index, parser->global.report_count);
+ usages = max_t(unsigned, parser->local.usage_index,
+ parser->global.report_count);
field = hid_register_field(report, usages, parser->global.report_count);
if (!field)
field->application = hid_lookup_collection(parser, HID_COLLECTION_APPLICATION);
for (i = 0; i < usages; i++) {
- int j = i;
+ unsigned j = i;
/* Duplicate the last usage we parsed if we have excess values */
if (i >= parser->local.usage_index)
j = parser->local.usage_index - 1;
field->usage[i].hid = parser->local.usage[j];
field->usage[i].collection_index =
parser->local.collection_index[j];
+ field->usage[i].usage_index = i;
}
field->maxusage = usages;
static int hid_parser_global(struct hid_parser *parser, struct hid_item *item)
{
- __u32 raw_value;
+ __s32 raw_value;
switch (item->tag) {
case HID_GLOBAL_ITEM_TAG_PUSH:
return 0;
case HID_GLOBAL_ITEM_TAG_UNIT_EXPONENT:
- /* Units exponent negative numbers are given through a
- * two's complement.
- * See "6.2.2.7 Global Items" for more information. */
- raw_value = item_udata(item);
+ /* Many devices provide unit exponent as a two's complement
+ * nibble due to the common misunderstanding of HID
+ * specification 1.11, 6.2.2.7 Global Items. Attempt to handle
+ * both this and the standard encoding. */
+ raw_value = item_sdata(item);
if (!(raw_value & 0xfffffff0))
parser->global.unit_exponent = hid_snto32(raw_value, 4);
else
}
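hid_snto32(raw_value, 4) above interprets the low nibble as a signed two's-complement number, so an exponent encoded as 0xE is read as -2. The sign extension itself is a small bit of arithmetic that can be demonstrated independently; snto32 here is a local re-implementation for illustration, not the kernel helper:

#include <stdint.h>
#include <stdio.h>

/* sign-extend the low n bits of value (n between 1 and 31) */
static int32_t snto32(uint32_t value, unsigned n)
{
	int32_t v = (int32_t)(value & ((1u << n) - 1));

	if (v & (1 << (n - 1)))		/* sign bit set: subtract 2^n */
		v -= 1 << n;
	return v;
}

int main(void)
{
	printf("%d\n", snto32(0x7, 4));		/* prints  7 */
	printf("%d\n", snto32(0xE, 4));		/* prints -2 */
	printf("%d\n", snto32(0xF, 4));		/* prints -1 */
	return 0;
}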
EXPORT_SYMBOL_GPL(hid_parse_report);
+static const char * const hid_report_names[] = {
+ "HID_INPUT_REPORT",
+ "HID_OUTPUT_REPORT",
+ "HID_FEATURE_REPORT",
+};
+/**
+ * hid_validate_values - validate existing device report's value indexes
+ *
+ * @device: hid device
+ * @type: which report type to examine
+ * @id: which report ID to examine (0 for first)
+ * @field_index: which report field to examine
+ * @report_counts: expected number of values
+ *
+ * Validate the number of values in a given field of a given report, after
+ * parsing.
+ */
+struct hid_report *hid_validate_values(struct hid_device *hid,
+ unsigned int type, unsigned int id,
+ unsigned int field_index,
+ unsigned int report_counts)
+{
+ struct hid_report *report;
+
+ if (type > HID_FEATURE_REPORT) {
+ hid_err(hid, "invalid HID report type %u\n", type);
+ return NULL;
+ }
+
+ if (id >= HID_MAX_IDS) {
+ hid_err(hid, "invalid HID report id %u\n", id);
+ return NULL;
+ }
+
+ /*
+ * Explicitly not using hid_get_report() here since it depends on
+ * ->numbered being checked, which may not always be the case when
+ * drivers go to access report values.
+ */
+ report = hid->report_enum[type].report_id_hash[id];
+ if (!report) {
+ hid_err(hid, "missing %s %u\n", hid_report_names[type], id);
+ return NULL;
+ }
+ if (report->maxfield <= field_index) {
+ hid_err(hid, "not enough fields in %s %u\n",
+ hid_report_names[type], id);
+ return NULL;
+ }
+ if (report->field[field_index]->report_count < report_counts) {
+ hid_err(hid, "not enough values in %s %u field %u\n",
+ hid_report_names[type], id, field_index);
+ return NULL;
+ }
+ return report;
+}
+EXPORT_SYMBOL_GPL(hid_validate_values);
+
/**
* hid_open_report - open a driver-specific device report
*
goto out;
}
- if (hid->claimed != HID_CLAIMED_HIDRAW) {
+ if (hid->claimed != HID_CLAIMED_HIDRAW && report->maxfield) {
for (a = 0; a < report->maxfield; a++)
hid_input_field(hid, report->field[a], cdata, interrupt);
hdrv = hid->driver;
{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD) },
{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A) },
{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT, USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081) },
{ HID_USB_DEVICE(USB_VENDOR_ID_HUION, USB_DEVICE_ID_HUION_580) },
{ HID_USB_DEVICE(USB_VENDOR_ID_JESS2, USB_DEVICE_ID_JESS2_COLOR_RUMBLE_PAD) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_ION, USB_DEVICE_ID_ICADE) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_PRESENTER_8K_BT) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO2, USB_DEVICE_ID_NINTENDO_WIIMOTE) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO, USB_DEVICE_ID_NINTENDO_WIIMOTE2) },
{ }
};
* - USB ID 04d9:a067, sold as Sharkoon Drakonia and Perixx MX-2000
* - USB ID 04d9:a04a, sold as Tracer Sniper TRM-503, NOVA Gaming Slider X200
* and Zalman ZM-GM1
+ * - USB ID 04d9:a081, sold as SHARKOON DarkGlider Gaming mouse
*/
static __u8 *holtek_mouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
}
break;
case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A:
+ case USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081:
if (*rsize >= 113 && rdesc[106] == 0xff && rdesc[107] == 0x7f
&& rdesc[111] == 0xff && rdesc[112] == 0x7f) {
hid_info(hdev, "Fixing up report descriptor\n");
USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067) },
{ HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT,
USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_HOLTEK_ALT,
+ USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081) },
{ }
};
MODULE_DEVICE_TABLE(hid, holtek_mouse_devices);
#define USB_DEVICE_ID_HOLTEK_ALT_KEYBOARD 0xa055
#define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A067 0xa067
#define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A04A 0xa04a
+#define USB_DEVICE_ID_HOLTEK_ALT_MOUSE_A081 0xa081
#define USB_VENDOR_ID_IMATION 0x0718
#define USB_DEVICE_ID_DISC_STAKKA 0xd000
#define USB_DEVICE_ID_NEXTWINDOW_TOUCHSCREEN 0x0003
#define USB_VENDOR_ID_NINTENDO 0x057e
+#define USB_VENDOR_ID_NINTENDO2 0x054c
#define USB_DEVICE_ID_NINTENDO_WIIMOTE 0x0306
#define USB_DEVICE_ID_NINTENDO_WIIMOTE2 0x0330
#define USB_DEVICE_ID_SYNAPTICS_COMP_TP 0x0009
#define USB_DEVICE_ID_SYNAPTICS_WTP 0x0010
#define USB_DEVICE_ID_SYNAPTICS_DPAD 0x0013
+#define USB_DEVICE_ID_SYNAPTICS_LTS1 0x0af8
+#define USB_DEVICE_ID_SYNAPTICS_LTS2 0x1d10
#define USB_VENDOR_ID_THINGM 0x27b8
#define USB_DEVICE_ID_BLINK1 0x01ed
#define USB_VENDOR_ID_PRIMAX 0x0461
#define USB_DEVICE_ID_PRIMAX_KEYBOARD 0x4e05
+#define USB_VENDOR_ID_SIS 0x0457
+#define USB_DEVICE_ID_SIS_TS 0x1013
+
#endif
return -EINVAL;
}
+
/**
* hidinput_calc_abs_res - calculate an absolute axis resolution
* @field: the HID report field to calculate resolution for
case ABS_MT_TOOL_Y:
case ABS_MT_TOUCH_MAJOR:
case ABS_MT_TOUCH_MINOR:
- if (field->unit & 0xffffff00) /* Not a length */
- return 0;
- unit_exponent += hid_snto32(field->unit >> 4, 4) - 1;
- switch (field->unit & 0xf) {
- case 0x1: /* If centimeters */
+ if (field->unit == 0x11) { /* If centimeters */
/* Convert to millimeters */
unit_exponent += 1;
- break;
- case 0x3: /* If inches */
+ } else if (field->unit == 0x13) { /* If inches */
/* Convert to millimeters */
prev = physical_extents;
physical_extents *= 254;
if (physical_extents < prev)
return 0;
unit_exponent -= 1;
- break;
- default:
+ } else {
return 0;
}
break;
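The hidinput_calc_abs_res hunk above handles only the two length units it understands: unit 0x11 (centimeter) bumps the unit exponent by one so the physical extent is expressed in millimeters, while unit 0x13 (inch) multiplies the extent by 254 and drops the exponent by one, i.e. scales by 25.4. The reported resolution is logical counts per millimeter. A hedged arithmetic sketch of that conversion; the function name and rounding are illustrative, not the kernel code:

#include <stdio.h>

/* counts per millimeter for a length axis; exp10 is the HID unit exponent */
static long res_per_mm(long logical_extent, long physical_extent,
		       int unit, int exp10)
{
	if (unit == 0x11) {		/* centimeters -> millimeters: x10 */
		exp10 += 1;
	} else if (unit == 0x13) {	/* inches -> millimeters: x25.4 */
		physical_extent *= 254;
		exp10 -= 1;
	} else {
		return 0;		/* not a supported length unit */
	}

	/* fold 10^exp10 into the extents using integer math only */
	for (; exp10 > 0; exp10--)
		physical_extent *= 10;
	for (; exp10 < 0; exp10++)
		logical_extent *= 10;

	return physical_extent ? logical_extent / physical_extent : 0;
}

int main(void)
{
	/* 4096 counts over a physical extent of 100 x 10^-1 cm (10 cm) */
	printf("%ld counts/mm\n", res_per_mm(4096, 100, 0x11, -1));
	return 0;
}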
if (field->flags & HID_MAIN_ITEM_CONSTANT)
goto ignore;
+ /* Ignore if report count is out of bounds. */
+ if (field->report_count < 1)
+ goto ignore;
+
/* only LED usages are supported in output fields */
if (field->report_type == HID_OUTPUT_REPORT &&
(usage->hid & HID_USAGE_PAGE) != HID_UP_LED) {
rep_enum = &hid->report_enum[HID_FEATURE_REPORT];
list_for_each_entry(rep, &rep_enum->report_list, list)
- for (i = 0; i < rep->maxfield; i++)
+ for (i = 0; i < rep->maxfield; i++) {
+ /* Ignore if report count is out of bounds. */
+ if (rep->field[i]->report_count < 1)
+ continue;
+
for (j = 0; j < rep->field[i]->maxusage; j++) {
/* Verify if Battery Strength feature is available */
hidinput_setup_battery(hid, HID_FEATURE_REPORT, rep->field[i]);
drv->feature_mapping(hid, rep->field[i],
rep->field[i]->usage + j);
}
+ }
}
static struct hid_input *hidinput_allocate(struct hid_device *hid)
struct tpkbd_data_pointer *data_pointer;
size_t name_sz = strlen(dev_name(dev)) + 16;
char *name_mute, *name_micmute;
- int ret;
+ int i, ret;
+
+ /* Validate required reports. */
+ for (i = 0; i < 4; i++) {
+ if (!hid_validate_values(hdev, HID_FEATURE_REPORT, 4, i, 1))
+ return -ENODEV;
+ }
+ if (!hid_validate_values(hdev, HID_OUTPUT_REPORT, 3, 0, 2))
+ return -ENODEV;
if (sysfs_create_group(&hdev->dev.kobj,
&tpkbd_attr_group_pointer)) {
ret = hid_parse(hdev);
if (ret) {
hid_err(hdev, "hid_parse failed\n");
- goto err_free;
+ goto err;
}
ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT);
if (ret) {
hid_err(hdev, "hid_hw_start failed\n");
- goto err_free;
+ goto err;
}
uhdev = (struct usbhid_device *) hdev->driver_data;
- if (uhdev->ifnum == 1)
- return tpkbd_probe_tp(hdev);
+ if (uhdev->ifnum == 1) {
+ ret = tpkbd_probe_tp(hdev);
+ if (ret)
+ goto err_hid;
+ }
return 0;
-err_free:
+err_hid:
+ hid_hw_stop(hdev);
+err:
return ret;
}
struct hid_report *report;
struct hid_input *hidinput = list_entry(hid->inputs.next,
struct hid_input, list);
- struct list_head *report_list =
- &hid->report_enum[HID_OUTPUT_REPORT].report_list;
struct input_dev *dev = hidinput->input;
int error;
- if (list_empty(report_list)) {
- hid_err(hid, "no output report found\n");
+ /* Check that the report looks ok */
+ report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7);
+ if (!report)
return -ENODEV;
- }
-
- report = list_entry(report_list->next, struct hid_report, list);
-
- if (report->maxfield < 1) {
- hid_err(hid, "output report is empty\n");
- return -ENODEV;
- }
- if (report->field[0]->report_count < 7) {
- hid_err(hid, "not enough values in the field\n");
- return -ENODEV;
- }
lg2ff = kmalloc(sizeof(struct lg2ff_device), GFP_KERNEL);
if (!lg2ff)
int x, y;
/*
- * Maxusage should always be 63 (maximum fields)
- * likely a better way to ensure this data is clean
+ * Available values in the field should always be 63, but we only use up to
+ * 35. Instead, clear the entire area, however big it is.
*/
- memset(report->field[0]->value, 0, sizeof(__s32)*report->field[0]->maxusage);
+ memset(report->field[0]->value, 0,
+ sizeof(__s32) * report->field[0]->report_count);
switch (effect->type) {
case FF_CONSTANT:
int lg3ff_init(struct hid_device *hid)
{
struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
- struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list;
struct input_dev *dev = hidinput->input;
- struct hid_report *report;
- struct hid_field *field;
const signed short *ff_bits = ff3_joystick_ac;
int error;
int i;
- /* Find the report to use */
- if (list_empty(report_list)) {
- hid_err(hid, "No output report found\n");
- return -1;
- }
-
/* Check that the report looks ok */
- report = list_entry(report_list->next, struct hid_report, list);
- if (!report) {
- hid_err(hid, "NULL output report\n");
- return -1;
- }
-
- field = report->field[0];
- if (!field) {
- hid_err(hid, "NULL field\n");
- return -1;
- }
+ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 35))
+ return -ENODEV;
/* Assume single fixed device G940 */
for (i = 0; ff_bits[i] >= 0; i++)
int lg4ff_init(struct hid_device *hid)
{
struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
- struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list;
struct input_dev *dev = hidinput->input;
- struct hid_report *report;
- struct hid_field *field;
struct lg4ff_device_entry *entry;
struct lg_drv_data *drv_data;
struct usb_device_descriptor *udesc;
int error, i, j;
__u16 bcdDevice, rev_maj, rev_min;
- /* Find the report to use */
- if (list_empty(report_list)) {
- hid_err(hid, "No output report found\n");
- return -1;
- }
-
/* Check that the report looks ok */
- report = list_entry(report_list->next, struct hid_report, list);
- if (!report) {
- hid_err(hid, "NULL output report\n");
+ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7))
return -1;
- }
-
- field = report->field[0];
- if (!field) {
- hid_err(hid, "NULL field\n");
- return -1;
- }
/* Check what wheel has been connected */
for (i = 0; i < ARRAY_SIZE(lg4ff_devices); i++) {
int lgff_init(struct hid_device* hid)
{
struct hid_input *hidinput = list_entry(hid->inputs.next, struct hid_input, list);
- struct list_head *report_list = &hid->report_enum[HID_OUTPUT_REPORT].report_list;
struct input_dev *dev = hidinput->input;
- struct hid_report *report;
- struct hid_field *field;
const signed short *ff_bits = ff_joystick;
int error;
int i;
- /* Find the report to use */
- if (list_empty(report_list)) {
- hid_err(hid, "No output report found\n");
- return -1;
- }
-
/* Check that the report looks ok */
- report = list_entry(report_list->next, struct hid_report, list);
- field = report->field[0];
- if (!field) {
- hid_err(hid, "NULL field\n");
- return -1;
- }
+ if (!hid_validate_values(hid, HID_OUTPUT_REPORT, 0, 0, 7))
+ return -ENODEV;
for (i = 0; i < ARRAY_SIZE(devices); i++) {
if (dev->id.vendor == devices[i].idVendor &&
struct hid_report *report;
struct hid_report_enum *output_report_enum;
u8 *data = (u8 *)(&dj_report->device_index);
- int i;
+ unsigned int i;
output_report_enum = &hdev->report_enum[HID_OUTPUT_REPORT];
report = output_report_enum->report_id_hash[REPORT_ID_DJ_SHORT];
return -ENODEV;
}
- for (i = 0; i < report->field[0]->report_count; i++)
+ for (i = 0; i < DJREPORT_SHORT_LENGTH - 1; i++)
report->field[0]->value[i] = data[i];
hid_hw_request(hdev, report, HID_REQ_SET_REPORT);
goto hid_parse_fail;
}
+ if (!hid_validate_values(hdev, HID_OUTPUT_REPORT, REPORT_ID_DJ_SHORT,
+ 0, DJREPORT_SHORT_LENGTH - 1)) {
+ retval = -ENODEV;
+ goto hid_parse_fail;
+ }
+
/* Starts the usb device and connects to upper interfaces hiddev and
* hidraw */
retval = hid_hw_start(hdev, HID_CONNECT_DEFAULT);
unsigned last_slot_field; /* the last field of a slot */
unsigned mt_report_id; /* the report ID of the multitouch device */
unsigned pen_report_id; /* the report ID of the pen device */
- __s8 inputmode; /* InputMode HID feature, -1 if non-existent */
- __s8 inputmode_index; /* InputMode HID feature index in the report */
- __s8 maxcontact_report_id; /* Maximum Contact Number HID feature,
+ __s16 inputmode; /* InputMode HID feature, -1 if non-existent */
+ __s16 inputmode_index; /* InputMode HID feature index in the report */
+ __s16 maxcontact_report_id; /* Maximum Contact Number HID feature,
-1 if non-existent */
__u8 num_received; /* how many contacts we received */
__u8 num_expected; /* expected last contact index */
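The hid-multitouch hunk above widens inputmode, inputmode_index and maxcontact_report_id from __s8 to __s16: HID report IDs can be as large as 255, and a signed 8-bit field cannot hold both the -1 "not present" sentinel and IDs above 127. A two-line demonstration of the truncation, spelled with stdint types:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int report_id = 200;			/* a valid HID report ID */
	int8_t  narrow = (int8_t)report_id;	/* too small: wraps negative */
	int16_t wide   = (int16_t)report_id;	/* wide enough for 0..255 and -1 */

	/* on typical two's-complement targets narrow comes out as -56 */
	printf("narrow = %d, wide = %d\n", narrow, wide);
	return 0;
}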
struct hid_field *field, struct hid_usage *usage)
{
struct mt_device *td = hid_get_drvdata(hdev);
- int i;
switch (usage->hid) {
case HID_DG_INPUTMODE:
- td->inputmode = field->report->id;
- td->inputmode_index = 0; /* has to be updated below */
-
- for (i=0; i < field->maxusage; i++) {
- if (field->usage[i].hid == usage->hid) {
- td->inputmode_index = i;
- break;
- }
+ /* Ignore if value index is out of bounds. */
+ if (usage->usage_index >= field->report_count) {
+ dev_err(&hdev->dev, "HID_DG_INPUTMODE out of range\n");
+ break;
}
+ td->inputmode = field->report->id;
+ td->inputmode_index = usage->usage_index;
+
break;
case HID_DG_CONTACTMAX:
td->maxcontact_report_id = field->report->id;
mt_store_field(usage, td, hi);
return 1;
case HID_DG_CONTACTCOUNT:
+ /* Ignore if indexes are out of bounds. */
+ if (field->index >= field->report->maxfield ||
+ usage->usage_index >= field->report_count)
+ return 1;
td->cc_index = field->index;
td->cc_value_index = usage->usage_index;
return 1;
}
#define PROFILE_ATTR(number) \
static struct bin_attribute bin_attr_profile##number = { \
- .attr = { .name = "profile##number", .mode = 0660 }, \
+ .attr = { .name = "profile" #number, .mode = 0660 }, \
.size = sizeof(struct kone_profile), \
.read = kone_sysfs_read_profilex, \
.write = kone_sysfs_write_profilex, \
#define PROFILE_ATTR(number) \
static struct bin_attribute bin_attr_profile##number##_settings = { \
- .attr = { .name = "profile##number##_settings", .mode = 0440 }, \
+ .attr = { .name = "profile" #number "_settings", .mode = 0440 }, \
.size = KONEPLUS_SIZE_PROFILE_SETTINGS, \
.read = koneplus_sysfs_read_profilex_settings, \
.private = &profile_numbers[number-1], \
}; \
static struct bin_attribute bin_attr_profile##number##_buttons = { \
- .attr = { .name = "profile##number##_buttons", .mode = 0440 }, \
+ .attr = { .name = "profile" #number "_buttons", .mode = 0440 }, \
.size = KONEPLUS_SIZE_PROFILE_BUTTONS, \
.read = koneplus_sysfs_read_profilex_buttons, \
.private = &profile_numbers[number-1], \
#define PROFILE_ATTR(number) \
static struct bin_attribute bin_attr_profile##number##_settings = { \
- .attr = { .name = "profile##number##_settings", .mode = 0440 }, \
+ .attr = { .name = "profile" #number "_settings", .mode = 0440 }, \
.size = KOVAPLUS_SIZE_PROFILE_SETTINGS, \
.read = kovaplus_sysfs_read_profilex_settings, \
.private = &profile_numbers[number-1], \
}; \
static struct bin_attribute bin_attr_profile##number##_buttons = { \
- .attr = { .name = "profile##number##_buttons", .mode = 0440 }, \
+ .attr = { .name = "profile" #number "_buttons", .mode = 0440 }, \
.size = KOVAPLUS_SIZE_PROFILE_BUTTONS, \
.read = kovaplus_sysfs_read_profilex_buttons, \
.private = &profile_numbers[number-1], \
#define PROFILE_ATTR(number) \
static struct bin_attribute bin_attr_profile##number##_settings = { \
- .attr = { .name = "profile##number##_settings", .mode = 0440 }, \
+ .attr = { .name = "profile" #number "_settings", .mode = 0440 }, \
.size = PYRA_SIZE_PROFILE_SETTINGS, \
.read = pyra_sysfs_read_profilex_settings, \
.private = &profile_numbers[number-1], \
}; \
static struct bin_attribute bin_attr_profile##number##_buttons = { \
- .attr = { .name = "profile##number##_buttons", .mode = 0440 }, \
+ .attr = { .name = "profile" #number "_buttons", .mode = 0440 }, \
.size = PYRA_SIZE_PROFILE_BUTTONS, \
.read = pyra_sysfs_read_profilex_buttons, \
.private = &profile_numbers[number-1], \
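The Roccat hunks above all fix the same macro bug: inside a string literal, "profile##number" is just the nine characters profile##number, because ## only pastes preprocessing tokens outside of strings; building the attribute name needs the stringize operator plus literal concatenation, "profile" #number. A standalone demonstration:

#include <stdio.h>

#define BROKEN_NAME(number)  "profile##number"
#define FIXED_NAME(number)   "profile" #number
#define PASTED_IDENT(number) profile##number	/* ## still works on identifiers */

static int profile3 = 42;

int main(void)
{
	printf("broken: %s\n", BROKEN_NAME(3));	/* prints profile##number */
	printf("fixed:  %s\n", FIXED_NAME(3));	/* prints profile3 */
	printf("pasted ident: %d\n", PASTED_IDENT(3));
	return 0;
}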
drv_data = hid_get_drvdata(hdev);
BUG_ON(!(drv_data->quirks & BUZZ_CONTROLLER));
+ /* Validate expected report characteristics. */
+ if (!hid_validate_values(hdev, HID_OUTPUT_REPORT, 0, 0, 7))
+ return -ENODEV;
+
buzz = kzalloc(sizeof(*buzz), GFP_KERNEL);
if (!buzz) {
hid_err(hdev, "Insufficient memory, cannot allocate driver data\n");
goto err_free;
}
+ if (!hid_validate_values(hdev, HID_OUTPUT_REPORT, 0, 0, 16)) {
+ ret = -ENODEV;
+ goto err_free;
+ }
+
ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT);
if (ret) {
hid_err(hdev, "hw start failed\n");
goto done;
}
- if (vendor == USB_VENDOR_ID_NINTENDO) {
+ if (vendor == USB_VENDOR_ID_NINTENDO ||
+ vendor == USB_VENDOR_ID_NINTENDO2) {
if (product == USB_DEVICE_ID_NINTENDO_WIIMOTE) {
devtype = WIIMOTE_DEV_GEN10;
goto done;
static const struct hid_device_id wiimote_hid_devices[] = {
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO,
USB_DEVICE_ID_NINTENDO_WIIMOTE) },
+ { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO2,
+ USB_DEVICE_ID_NINTENDO_WIIMOTE) },
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_NINTENDO,
USB_DEVICE_ID_NINTENDO_WIIMOTE2) },
{ }
* the rumble motor, this flag shouldn't be set.
*/
+/* used by wiimod_rumble and wiipro_rumble */
+static void wiimod_rumble_worker(struct work_struct *work)
+{
+ struct wiimote_data *wdata = container_of(work, struct wiimote_data,
+ rumble_worker);
+
+ spin_lock_irq(&wdata->state.lock);
+ wiiproto_req_rumble(wdata, wdata->state.cache_rumble);
+ spin_unlock_irq(&wdata->state.lock);
+}
+
static int wiimod_rumble_play(struct input_dev *dev, void *data,
struct ff_effect *eff)
{
struct wiimote_data *wdata = input_get_drvdata(dev);
__u8 value;
- unsigned long flags;
/*
* The wiimote supports only a single rumble motor so if any magnitude
else
value = 0;
- spin_lock_irqsave(&wdata->state.lock, flags);
- wiiproto_req_rumble(wdata, value);
- spin_unlock_irqrestore(&wdata->state.lock, flags);
+ /* Locking state.lock here might deadlock with input_event() calls.
+ * schedule_work acts as barrier. Merging multiple changes is fine. */
+ wdata->state.cache_rumble = value;
+ schedule_work(&wdata->rumble_worker);
return 0;
}
static int wiimod_rumble_probe(const struct wiimod_ops *ops,
struct wiimote_data *wdata)
{
+ INIT_WORK(&wdata->rumble_worker, wiimod_rumble_worker);
+
set_bit(FF_RUMBLE, wdata->input->ffbit);
if (input_ff_create_memless(wdata->input, NULL, wiimod_rumble_play))
return -ENOMEM;
{
unsigned long flags;
+ cancel_work_sync(&wdata->rumble_worker);
+
spin_lock_irqsave(&wdata->state.lock, flags);
wiiproto_req_rumble(wdata, 0);
spin_unlock_irqrestore(&wdata->state.lock, flags);
{
struct wiimote_data *wdata = input_get_drvdata(dev);
__u8 value;
- unsigned long flags;
/*
* The wiimote supports only a single rumble motor so if any magnitude
else
value = 0;
- spin_lock_irqsave(&wdata->state.lock, flags);
- wiiproto_req_rumble(wdata, value);
- spin_unlock_irqrestore(&wdata->state.lock, flags);
+ /* Locking state.lock here might deadlock with input_event() calls.
+ * schedule_work acts as barrier. Merging multiple changes is fine. */
+ wdata->state.cache_rumble = value;
+ schedule_work(&wdata->rumble_worker);
return 0;
}
{
int ret, i;
+ INIT_WORK(&wdata->rumble_worker, wiimod_rumble_worker);
+
wdata->extension.input = input_allocate_device();
if (!wdata->extension.input)
return -ENOMEM;
if (!wdata->extension.input)
return;
+ input_unregister_device(wdata->extension.input);
+ wdata->extension.input = NULL;
+ cancel_work_sync(&wdata->rumble_worker);
+
spin_lock_irqsave(&wdata->state.lock, flags);
wiiproto_req_rumble(wdata, 0);
spin_unlock_irqrestore(&wdata->state.lock, flags);
-
- input_unregister_device(wdata->extension.input);
- wdata->extension.input = NULL;
}
static const struct wiimod_ops wiimod_pro = {
__u8 *cmd_read_buf;
__u8 cmd_read_size;
- /* calibration data */
+ /* calibration/cache data */
__u16 calib_bboard[4][3];
+ __u8 cache_rumble;
};
struct wiimote_data {
struct hid_device *hdev;
struct input_dev *input;
+ struct work_struct rumble_worker;
struct led_classdev *leds[4];
struct input_dev *accel;
struct input_dev *ir;
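The wiimote rumble hunks above stop calling wiiproto_req_rumble() directly from the force-feedback play callback, which can run inside input_event() with its lock held; instead the latest magnitude is stored in state.cache_rumble and a worker is scheduled. The worker takes the state lock itself, and if several requests arrive before it runs only the most recent cached value is applied. The "remember only the latest value, let a single worker apply it" idea in plain pthreads; all names below are invented for the sketch:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  kick = PTHREAD_COND_INITIALIZER;
static int  cache_rumble;	/* last requested magnitude */
static bool dirty, stop;

/* event context: just record the newest value and poke the worker */
static void request_rumble(int value)
{
	pthread_mutex_lock(&lock);
	cache_rumble = value;	/* later requests simply overwrite earlier ones */
	dirty = true;
	pthread_cond_signal(&kick);
	pthread_mutex_unlock(&lock);
}

/* worker context: the only place that "talks to the hardware" */
static void *rumble_worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock);
	while (!stop) {
		while (!dirty && !stop)
			pthread_cond_wait(&kick, &lock);
		if (dirty) {
			dirty = false;
			printf("applying rumble %d\n", cache_rumble);
		}
	}
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, rumble_worker, NULL);
	request_rumble(255);
	request_rumble(0);	/* may coalesce with the previous request */

	pthread_mutex_lock(&lock);
	stop = true;
	pthread_cond_signal(&kick);
	pthread_mutex_unlock(&lock);
	pthread_join(tid, NULL);
	return 0;
}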
struct hid_report *report;
struct hid_input *hidinput = list_entry(hid->inputs.next,
struct hid_input, list);
- struct list_head *report_list =
- &hid->report_enum[HID_OUTPUT_REPORT].report_list;
struct input_dev *dev = hidinput->input;
- int error;
+ int i, error;
- if (list_empty(report_list)) {
- hid_err(hid, "no output report found\n");
- return -ENODEV;
- }
-
- report = list_entry(report_list->next, struct hid_report, list);
-
- if (report->maxfield < 4) {
- hid_err(hid, "not enough fields in report\n");
- return -ENODEV;
+ for (i = 0; i < 4; i++) {
+ report = hid_validate_values(hid, HID_OUTPUT_REPORT, 0, i, 1);
+ if (!report)
+ return -ENODEV;
}
zpff = kzalloc(sizeof(struct zpff_device), GFP_KERNEL);
static void drop_ref(struct hidraw *hidraw, int exists_bit)
{
if (exists_bit) {
- hid_hw_close(hidraw->hid);
hidraw->exist = 0;
- if (hidraw->open)
+ if (hidraw->open) {
+ hid_hw_close(hidraw->hid);
wake_up_interruptible(&hidraw->wait);
+ }
} else {
--hidraw->open;
}
-
- if (!hidraw->open && !hidraw->exist) {
- device_destroy(hidraw_class, MKDEV(hidraw_major, hidraw->minor));
- hidraw_table[hidraw->minor] = NULL;
- kfree(hidraw);
+ if (!hidraw->open) {
+ if (!hidraw->exist) {
+ device_destroy(hidraw_class,
+ MKDEV(hidraw_major, hidraw->minor));
+ hidraw_table[hidraw->minor] = NULL;
+ kfree(hidraw);
+ } else {
+ /* close device for last reader */
+ hid_hw_power(hidraw->hid, PM_HINT_NORMAL);
+ hid_hw_close(hidraw->hid);
+ }
}
}
static struct miscdevice uhid_misc = {
.fops = &uhid_fops,
- .minor = MISC_DYNAMIC_MINOR,
+ .minor = UHID_MINOR,
.name = UHID_NAME,
};
MODULE_LICENSE("GPL");
MODULE_AUTHOR("David Herrmann <dh.herrmann@gmail.com>");
MODULE_DESCRIPTION("User-space I/O driver support for HID subsystem");
+MODULE_ALIAS_MISCDEV(UHID_MINOR);
MODULE_ALIAS("devname:" UHID_NAME);
{ USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X, HID_QUIRK_MULTI_INPUT },
{ USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X, HID_QUIRK_MULTI_INPUT },
{ USB_VENDOR_ID_NTRIG, USB_DEVICE_ID_NTRIG_DUOSENSE, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_LTS1, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_LTS2, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_SIS, USB_DEVICE_ID_SIS_TS, HID_QUIRK_NO_INIT_REPORTS },
{ 0, 0 }
};
do {
ret = vmbus_negotiate_version(msginfo, version);
- if (ret)
+ if (ret == -ETIMEDOUT)
goto cleanup;
if (vmbus_connection.conn_state == CONNECTED)
/*
* Pre win8 version numbers used in ws2008 and ws 2008 r2 (win7)
*/
+#define WS2008_SRV_MAJOR 1
+#define WS2008_SRV_MINOR 0
+#define WS2008_SRV_VERSION (WS2008_SRV_MAJOR << 16 | WS2008_SRV_MINOR)
+
#define WIN7_SRV_MAJOR 3
#define WIN7_SRV_MINOR 0
-#define WIN7_SRV_MAJOR_MINOR (WIN7_SRV_MAJOR << 16 | WIN7_SRV_MINOR)
+#define WIN7_SRV_VERSION (WIN7_SRV_MAJOR << 16 | WIN7_SRV_MINOR)
#define WIN8_SRV_MAJOR 4
#define WIN8_SRV_MINOR 0
-#define WIN8_SRV_MAJOR_MINOR (WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR)
+#define WIN8_SRV_VERSION (WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR)
/*
* Global state maintained for transaction that is being processed.
struct icmsg_hdr *icmsghdrp;
struct icmsg_negotiate *negop = NULL;
+ int util_fw_version;
+ int kvp_srv_version;
if (kvp_transaction.active) {
/*
if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
/*
- * We start with win8 version and if the host cannot
- * support that we use the previous version.
+ * Based on the host, select appropriate
+ * framework and service versions we will
+ * negotiate.
*/
- if (vmbus_prep_negotiate_resp(icmsghdrp, negop,
- recv_buffer, UTIL_FW_MAJOR_MINOR,
- WIN8_SRV_MAJOR_MINOR))
- goto done;
-
+ switch (vmbus_proto_version) {
+ case (VERSION_WS2008):
+ util_fw_version = UTIL_WS2K8_FW_VERSION;
+ kvp_srv_version = WS2008_SRV_VERSION;
+ break;
+ case (VERSION_WIN7):
+ util_fw_version = UTIL_FW_VERSION;
+ kvp_srv_version = WIN7_SRV_VERSION;
+ break;
+ default:
+ util_fw_version = UTIL_FW_VERSION;
+ kvp_srv_version = WIN8_SRV_VERSION;
+ }
vmbus_prep_negotiate_resp(icmsghdrp, negop,
- recv_buffer, UTIL_FW_MAJOR_MINOR,
- WIN7_SRV_MAJOR_MINOR);
+ recv_buffer, util_fw_version,
+ kvp_srv_version);
} else {
kvp_msg = (struct hv_kvp_msg *)&recv_buffer[
return;
}
-done:
icmsghdrp->icflags = ICMSGHDRFLAG_TRANSACTION
| ICMSGHDRFLAG_RESPONSE;
#define VSS_MAJOR 5
#define VSS_MINOR 0
-#define VSS_MAJOR_MINOR (VSS_MAJOR << 16 | VSS_MINOR)
+#define VSS_VERSION (VSS_MAJOR << 16 | VSS_MINOR)
if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
vmbus_prep_negotiate_resp(icmsghdrp, negop,
- recv_buffer, UTIL_FW_MAJOR_MINOR,
- VSS_MAJOR_MINOR);
+ recv_buffer, UTIL_FW_VERSION,
+ VSS_VERSION);
} else {
vss_msg = (struct hv_vss_msg *)&recv_buffer[
sizeof(struct vmbuspipe_hdr) +
#include <linux/reboot.h>
#include <linux/hyperv.h>
-#define SHUTDOWN_MAJOR 3
-#define SHUTDOWN_MINOR 0
-#define SHUTDOWN_MAJOR_MINOR (SHUTDOWN_MAJOR << 16 | SHUTDOWN_MINOR)
-#define TIMESYNCH_MAJOR 3
-#define TIMESYNCH_MINOR 0
-#define TIMESYNCH_MAJOR_MINOR (TIMESYNCH_MAJOR << 16 | TIMESYNCH_MINOR)
+#define SD_MAJOR 3
+#define SD_MINOR 0
+#define SD_VERSION (SD_MAJOR << 16 | SD_MINOR)
-#define HEARTBEAT_MAJOR 3
-#define HEARTBEAT_MINOR 0
-#define HEARTBEAT_MAJOR_MINOR (HEARTBEAT_MAJOR << 16 | HEARTBEAT_MINOR)
+#define SD_WS2008_MAJOR 1
+#define SD_WS2008_VERSION (SD_WS2008_MAJOR << 16 | SD_MINOR)
+
+#define TS_MAJOR 3
+#define TS_MINOR 0
+#define TS_VERSION (TS_MAJOR << 16 | TS_MINOR)
+
+#define TS_WS2008_MAJOR 1
+#define TS_WS2008_VERSION (TS_WS2008_MAJOR << 16 | TS_MINOR)
+
+#define HB_MAJOR 3
+#define HB_MINOR 0
+#define HB_VERSION (HB_MAJOR << 16 | HB_MINOR)
+
+#define HB_WS2008_MAJOR 1
+#define HB_WS2008_VERSION (HB_WS2008_MAJOR << 16 | HB_MINOR)
+
+static int sd_srv_version;
+static int ts_srv_version;
+static int hb_srv_version;
+static int util_fw_version;
static void shutdown_onchannelcallback(void *context);
static struct hv_util_service util_shutdown = {
if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
vmbus_prep_negotiate_resp(icmsghdrp, negop,
- shut_txf_buf, UTIL_FW_MAJOR_MINOR,
- SHUTDOWN_MAJOR_MINOR);
+ shut_txf_buf, util_fw_version,
+ sd_srv_version);
} else {
shutdown_msg =
(struct shutdown_msg_data *)&shut_txf_buf[
struct icmsg_hdr *icmsghdrp;
struct ictimesync_data *timedatap;
u8 *time_txf_buf = util_timesynch.recv_buffer;
+ struct icmsg_negotiate *negop = NULL;
vmbus_recvpacket(channel, time_txf_buf,
PAGE_SIZE, &recvlen, &requestid);
sizeof(struct vmbuspipe_hdr)];
if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
- vmbus_prep_negotiate_resp(icmsghdrp, NULL, time_txf_buf,
- UTIL_FW_MAJOR_MINOR,
- TIMESYNCH_MAJOR_MINOR);
+ vmbus_prep_negotiate_resp(icmsghdrp, negop,
+ time_txf_buf,
+ util_fw_version,
+ ts_srv_version);
} else {
timedatap = (struct ictimesync_data *)&time_txf_buf[
sizeof(struct vmbuspipe_hdr) +
struct icmsg_hdr *icmsghdrp;
struct heartbeat_msg_data *heartbeat_msg;
u8 *hbeat_txf_buf = util_heartbeat.recv_buffer;
+ struct icmsg_negotiate *negop = NULL;
vmbus_recvpacket(channel, hbeat_txf_buf,
PAGE_SIZE, &recvlen, &requestid);
sizeof(struct vmbuspipe_hdr)];
if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
- vmbus_prep_negotiate_resp(icmsghdrp, NULL,
- hbeat_txf_buf, UTIL_FW_MAJOR_MINOR,
- HEARTBEAT_MAJOR_MINOR);
+ vmbus_prep_negotiate_resp(icmsghdrp, negop,
+ hbeat_txf_buf, util_fw_version,
+ hb_srv_version);
} else {
heartbeat_msg =
(struct heartbeat_msg_data *)&hbeat_txf_buf[
goto error;
hv_set_drvdata(dev, srv);
+ /*
+ * Based on the host, initialize the framework and
+ * service version numbers we will negotiate.
+ */
+ switch (vmbus_proto_version) {
+ case (VERSION_WS2008):
+ util_fw_version = UTIL_WS2K8_FW_VERSION;
+ sd_srv_version = SD_WS2008_VERSION;
+ ts_srv_version = TS_WS2008_VERSION;
+ hb_srv_version = HB_WS2008_VERSION;
+ break;
+
+ default:
+ util_fw_version = UTIL_FW_VERSION;
+ sd_srv_version = SD_VERSION;
+ ts_srv_version = TS_VERSION;
+ hb_srv_version = HB_VERSION;
+ }
+
return 0;
error:
static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len)
{
+ u8 status, data = 0;
int i;
if (send_command(cmd) || send_argument(key)) {
return -EIO;
}
+ /* This has no effect on newer (2012) SMCs */
if (send_byte(len, APPLESMC_DATA_PORT)) {
pr_warn("%.4s: read len fail\n", key);
return -EIO;
buffer[i] = inb(APPLESMC_DATA_PORT);
}
+ /* Read the data port until bit0 is cleared */
+ for (i = 0; i < 16; i++) {
+ udelay(APPLESMC_MIN_WAIT);
+ status = inb(APPLESMC_CMD_PORT);
+ if (!(status & 0x01))
+ break;
+ data = inb(APPLESMC_DATA_PORT);
+ }
+ if (i)
+ pr_warn("flushed %d bytes, last value is: %d\n", i, data);
+
return 0;
}
{
struct applesmc_registers *s = &smcreg;
bool left_light_sensor, right_light_sensor;
+ unsigned int count;
u8 tmp[1];
int ret;
if (s->init_complete)
return 0;
- ret = read_register_count(&s->key_count);
+ ret = read_register_count(&count);
if (ret)
return ret;
+ if (s->cache && s->key_count != count) {
+ pr_warn("key count changed from %d to %d\n",
+ s->key_count, count);
+ kfree(s->cache);
+ s->cache = NULL;
+ }
+ s->key_count = count;
+
if (!s->cache)
s->cache = kcalloc(s->key_count, sizeof(*s->cache), GFP_KERNEL);
if (!s->cache)
#define DW_IC_ERR_TX_ABRT 0x1
+#define DW_IC_TAR_10BITADDR_MASTER BIT(12)
+
/*
* status codes
*/
static void i2c_dw_xfer_init(struct dw_i2c_dev *dev)
{
struct i2c_msg *msgs = dev->msgs;
- u32 ic_con;
+ u32 ic_con, ic_tar = 0;
/* Disable the adapter */
__i2c_dw_enable(dev, false);
- /* set the slave (target) address */
- dw_writel(dev, msgs[dev->msg_write_idx].addr, DW_IC_TAR);
-
/* if the slave address is ten bit address, enable 10BITADDR */
ic_con = dw_readl(dev, DW_IC_CON);
- if (msgs[dev->msg_write_idx].flags & I2C_M_TEN)
+ if (msgs[dev->msg_write_idx].flags & I2C_M_TEN) {
ic_con |= DW_IC_CON_10BITADDR_MASTER;
- else
+ /*
+ * If I2C_DYNAMIC_TAR_UPDATE is set, the 10-bit addressing
+ * mode has to be enabled via bit 12 of the IC_TAR register.
+ * We always set it, since I2C_DYNAMIC_TAR_UPDATE can't be
+ * detected from the registers.
+ */
+ ic_tar = DW_IC_TAR_10BITADDR_MASTER;
+ } else {
ic_con &= ~DW_IC_CON_10BITADDR_MASTER;
+ }
+
dw_writel(dev, ic_con, DW_IC_CON);
+ /*
+ * Set the slave (target) address and enable 10-bit addressing mode
+ * if applicable.
+ */
+ dw_writel(dev, msgs[dev->msg_write_idx].addr | ic_tar, DW_IC_TAR);
+
/* Enable the adapter */
__i2c_dw_enable(dev, true);
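From the client side, the 10-bit path above is selected simply by setting I2C_M_TEN in the message flags. A minimal sketch using the generic i2c_transfer() interface (the address and payload are made-up example values, not from this driver):

#include <linux/i2c.h>

static int example_write_10bit(struct i2c_adapter *adap)
{
	u8 data = 0x5a;			/* arbitrary example payload */
	struct i2c_msg msg = {
		.addr	= 0x2f0,	/* example 10-bit slave address */
		.flags	= I2C_M_TEN,	/* request 10-bit addressing */
		.len	= 1,
		.buf	= &data,
	};

	/* Returns the number of messages transferred or a negative errno. */
	return i2c_transfer(adap, &msg, 1);
}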
MODULE_ALIAS("platform:i2c_designware");
static struct platform_driver dw_i2c_driver = {
- .remove = dw_i2c_remove,
+ .probe = dw_i2c_probe,
+ .remove = dw_i2c_remove,
.driver = {
.name = "i2c_designware",
.owner = THIS_MODULE,
static int __init dw_i2c_init_driver(void)
{
- return platform_driver_probe(&dw_i2c_driver, dw_i2c_probe);
+ return platform_driver_register(&dw_i2c_driver);
}
subsys_initcall(dw_i2c_init_driver);
clk_disable_unprepare(i2c_imx->clk);
}
-static void __init i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx,
+static void i2c_imx_set_clk(struct imx_i2c_struct *i2c_imx,
unsigned int rate)
{
struct imx_i2c_clk_pair *i2c_clk_div = i2c_imx->hwdata->clk_div;
.functionality = i2c_imx_func,
};
-static int __init i2c_imx_probe(struct platform_device *pdev)
+static int i2c_imx_probe(struct platform_device *pdev)
{
const struct of_device_id *of_id = of_match_device(i2c_imx_dt_ids,
&pdev->dev);
return 0; /* Return OK */
}
-static int __exit i2c_imx_remove(struct platform_device *pdev)
+static int i2c_imx_remove(struct platform_device *pdev)
{
struct imx_i2c_struct *i2c_imx = platform_get_drvdata(pdev);
}
static struct platform_driver i2c_imx_driver = {
- .remove = __exit_p(i2c_imx_remove),
+ .probe = i2c_imx_probe,
+ .remove = i2c_imx_remove,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
static int __init i2c_adap_imx_init(void)
{
- return platform_driver_probe(&i2c_imx_driver, i2c_imx_probe);
+ return platform_driver_register(&i2c_imx_driver);
}
subsys_initcall(i2c_adap_imx_init);
desc = &priv->hw[priv->head];
+ /* Initialize the DMA buffer */
+ memset(priv->dma_buffer, 0, sizeof(priv->dma_buffer));
+
/* Initialize the descriptor */
memset(desc, 0, sizeof(struct ismt_desc));
desc->tgtaddr_rw = ISMT_DESC_ADDR_RW(addr, read_write);
ctrl_reg |= MV64XXX_I2C_BRIDGE_CONTROL_WR |
(msg->len - 1) << MV64XXX_I2C_BRIDGE_CONTROL_TX_SIZE_SHIFT;
- writel_relaxed(data_reg_lo,
+ writel(data_reg_lo,
drv_data->reg_base + MV64XXX_I2C_REG_TX_DATA_LO);
- writel_relaxed(data_reg_hi,
+ writel(data_reg_hi,
drv_data->reg_base + MV64XXX_I2C_REG_TX_DATA_HI);
} else {
MODULE_DEVICE_TABLE(of, mv64xxx_i2c_of_match_table);
#ifdef CONFIG_OF
+#ifdef CONFIG_HAVE_CLK
static int
mv64xxx_calc_freq(const int tclk, const int n, const int m)
{
return false;
return true;
}
+#endif /* CONFIG_HAVE_CLK */
static int
mv64xxx_of_config(struct mv64xxx_i2c_data *drv_data,
struct device *dev)
{
- const struct of_device_id *device;
- struct device_node *np = dev->of_node;
- u32 bus_freq, tclk;
- int rc = 0;
-
/* CLK is mandatory when using DT to describe the i2c bus. We
* need to know tclk in order to calculate bus clock
* factors.
/* Have OF but no CLK */
return -ENODEV;
#else
+ const struct of_device_id *device;
+ struct device_node *np = dev->of_node;
+ u32 bus_freq, tclk;
+ int rc = 0;
+
if (IS_ERR(drv_data->clk)) {
rc = -ENODEV;
goto out;
.owner = THIS_MODULE,
.of_match_table = mxs_i2c_dt_ids,
},
+ .probe = mxs_i2c_probe,
.remove = mxs_i2c_remove,
};
static int __init mxs_i2c_init(void)
{
- return platform_driver_probe(&mxs_i2c_driver, mxs_i2c_probe);
+ return platform_driver_register(&mxs_i2c_driver);
}
subsys_initcall(mxs_i2c_init);
/*
* ProDB0017052: Clear ARDY bit twice
*/
+ if (stat & OMAP_I2C_STAT_ARDY)
+ omap_i2c_ack_stat(dev, OMAP_I2C_STAT_ARDY);
+
if (stat & (OMAP_I2C_STAT_ARDY | OMAP_I2C_STAT_NACK |
OMAP_I2C_STAT_AL)) {
omap_i2c_ack_stat(dev, (OMAP_I2C_STAT_RRDY |
i2c_del_adapter(&i2c->adap);
- clk_disable_unprepare(i2c->clk);
-
if (pdev->dev.of_node && IS_ERR(i2c->pctrl))
s3c24xx_i2c_dt_gpio_free(i2c);
.functionality = stu300_func,
};
-static int __init
-stu300_probe(struct platform_device *pdev)
+static int stu300_probe(struct platform_device *pdev)
{
struct stu300_dev *dev;
struct i2c_adapter *adap;
#define STU300_I2C_PM NULL
#endif
-static int __exit
-stu300_remove(struct platform_device *pdev)
+static int stu300_remove(struct platform_device *pdev)
{
struct stu300_dev *dev = platform_get_drvdata(pdev);
.pm = STU300_I2C_PM,
.of_match_table = stu300_dt_match,
},
- .remove = __exit_p(stu300_remove),
+ .probe = stu300_probe,
+ .remove = stu300_remove,
};
static int __init stu300_init(void)
{
- return platform_driver_probe(&stu300_i2c_driver, stu300_probe);
+ return platform_driver_register(&stu300_i2c_driver);
}
static void __exit stu300_exit(void)
acpi_handle handle;
acpi_status status;
+ if (!adap->dev.parent)
+ return;
+
handle = ACPI_HANDLE(adap->dev.parent);
if (!handle)
return;
arb->parent = of_find_i2c_adapter_by_node(parent_np);
if (!arb->parent) {
dev_err(dev, "Cannot find parent bus\n");
- return -EINVAL;
+ return -EPROBE_DEFER;
}
/* Actually add the mux adapter */
struct device_node *adapter_np, *child;
struct i2c_adapter *adapter;
unsigned *values, *gpios;
- int i = 0;
+ int i = 0, ret;
if (!np)
return -ENODEV;
adapter = of_find_i2c_adapter_by_node(adapter_np);
if (!adapter) {
dev_err(&pdev->dev, "Cannot find parent bus\n");
- return -ENODEV;
+ return -EPROBE_DEFER;
}
mux->data.parent = i2c_adapter_id(adapter);
put_device(&adapter->dev);
return -ENOMEM;
}
- for (i = 0; i < mux->data.n_gpios; i++)
- gpios[i] = of_get_named_gpio(np, "mux-gpios", i);
+ for (i = 0; i < mux->data.n_gpios; i++) {
+ ret = of_get_named_gpio(np, "mux-gpios", i);
+ if (ret < 0)
+ return ret;
+ gpios[i] = ret;
+ }
mux->data.gpios = gpios;
if (!parent) {
dev_err(&pdev->dev, "Parent adapter (%d) not found\n",
mux->data.parent);
- return -ENODEV;
+ return -EPROBE_DEFER;
}
mux->parent = parent;
adapter = of_find_i2c_adapter_by_node(adapter_np);
if (!adapter) {
dev_err(mux->dev, "Cannot find parent bus\n");
- return -ENODEV;
+ return -EPROBE_DEFER;
}
mux->pdata->parent_bus_num = i2c_adapter_id(adapter);
put_device(&adapter->dev);
if (!mux->parent) {
dev_err(&pdev->dev, "Parent adapter (%d) not found\n",
mux->pdata->parent_bus_num);
- ret = -ENODEV;
+ ret = -EPROBE_DEFER;
goto err;
}
#ifdef CONFIG_PM_SLEEP
static int bma180_suspend(struct device *dev)
{
- struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+ struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
struct bma180_data *data = iio_priv(indio_dev);
int ret;
static int bma180_resume(struct device *dev)
{
- struct iio_dev *indio_dev = dev_to_iio_dev(dev);
+ struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
struct bma180_data *data = iio_priv(indio_dev);
int ret;
static int at91_adc_probe(struct platform_device *pdev)
{
- unsigned int prsc, mstrclk, ticks, adc_clk, shtim;
+ unsigned int prsc, mstrclk, ticks, adc_clk, adc_clk_khz, shtim;
int ret;
struct iio_dev *idev;
struct at91_adc_state *st;
*/
mstrclk = clk_get_rate(st->clk);
adc_clk = clk_get_rate(st->adc_clk);
+ adc_clk_khz = adc_clk / 1000;
prsc = (mstrclk / (2 * adc_clk)) - 1;
if (!st->startup_time) {
* defined in the electrical characteristics of the board, divided by 8.
* The formula thus is: Startup Time = (ticks + 1) * 8 / ADC Clock
*/
- ticks = round_up((st->startup_time * adc_clk /
- 1000000) - 1, 8) / 8;
+ ticks = round_up((st->startup_time * adc_clk_khz /
+ 1000) - 1, 8) / 8;
/*
* a minimal Sample and Hold Time is necessary for the ADC to guarantee
* the best converted final value between two channels selection
* The formula thus is: Sample and Hold Time = (shtim + 1) / ADC Clock
*/
- shtim = round_up((st->sample_hold_time * adc_clk /
- 1000000) - 1, 1);
+ shtim = round_up((st->sample_hold_time * adc_clk_khz /
+ 1000) - 1, 1);
reg = AT91_ADC_PRESCAL_(prsc) & st->registers->mr_prescal_mask;
reg |= AT91_ADC_STARTUP_(ticks) & st->registers->mr_startup_mask;
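As a worked example of the two formulas above (values assumed purely for illustration, taking startup_time and sample_hold_time to be in microseconds): with adc_clk = 5 MHz (adc_clk_khz = 5000) and st->startup_time = 40, the required startup is 40 * 5000 / 1000 = 200 ADC cycles, so ticks = round_up(200 - 1, 8) / 8 = 25 and the achieved startup time is (25 + 1) * 8 / 5 MHz = 41.6 us, just above the requested 40 us; likewise st->sample_hold_time = 10 gives shtim = 10 * 5000 / 1000 - 1 = 49 and a sample-and-hold time of (49 + 1) / 5 MHz = 10 us.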
iio_device_unregister(indio_dev);
- if (!IS_ERR(reg)) {
+ if (!IS_ERR(reg))
regulator_disable(reg);
- regulator_put(reg);
- }
return 0;
}
goto error_ret;
}
+ iio_buffer_init(&cb_buff->buffer);
+
cb_buff->private = private;
cb_buff->cb = cb;
cb_buff->buffer.access = &iio_cb_access;
static int mcp4725_suspend(struct device *dev)
{
- struct iio_dev *indio_dev = dev_to_iio_dev(dev);
- struct mcp4725_data *data = iio_priv(indio_dev);
+ struct mcp4725_data *data = iio_priv(i2c_get_clientdata(
+ to_i2c_client(dev)));
u8 outbuf[2];
outbuf[0] = (data->powerdown_mode + 1) << 4;
outbuf[1] = 0;
data->powerdown = true;
- return i2c_master_send(to_i2c_client(dev), outbuf, 2);
+ return i2c_master_send(data->client, outbuf, 2);
}
static int mcp4725_resume(struct device *dev)
{
- struct iio_dev *indio_dev = dev_to_iio_dev(dev);
- struct mcp4725_data *data = iio_priv(indio_dev);
+ struct mcp4725_data *data = iio_priv(i2c_get_clientdata(
+ to_i2c_client(dev)));
u8 outbuf[2];
/* restore previous DAC value */
outbuf[1] = data->dac_value & 0xff;
data->powerdown = false;
- return i2c_master_send(to_i2c_client(dev), outbuf, 2);
+ return i2c_master_send(data->client, outbuf, 2);
}
#ifdef CONFIG_PM_SLEEP
}
indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st));
- if (indio_dev == NULL)
- return -ENOMEM;
+ if (indio_dev == NULL) {
+ ret = -ENOMEM;
+ goto error_disable_clk;
+ }
st = iio_priv(indio_dev);
#define iio_buffer_poll_addr (&iio_buffer_poll)
#define iio_buffer_read_first_n_outer_addr (&iio_buffer_read_first_n_outer)
+void iio_disable_all_buffers(struct iio_dev *indio_dev);
+
#else
#define iio_buffer_poll_addr NULL
#define iio_buffer_read_first_n_outer_addr NULL
+static inline void iio_disable_all_buffers(struct iio_dev *indio_dev) {}
+
#endif
int iio_device_register_eventset(struct iio_dev *indio_dev);
return bytes;
}
+void iio_disable_all_buffers(struct iio_dev *indio_dev)
+{
+ struct iio_buffer *buffer, *_buffer;
+
+ if (list_empty(&indio_dev->buffer_list))
+ return;
+
+ if (indio_dev->setup_ops->predisable)
+ indio_dev->setup_ops->predisable(indio_dev);
+
+ list_for_each_entry_safe(buffer, _buffer,
+ &indio_dev->buffer_list, buffer_list)
+ list_del_init(&buffer->buffer_list);
+
+ indio_dev->currentmode = INDIO_DIRECT_MODE;
+ if (indio_dev->setup_ops->postdisable)
+ indio_dev->setup_ops->postdisable(indio_dev);
+
+ if (indio_dev->available_scan_masks == NULL)
+ kfree(indio_dev->active_scan_mask);
+}
+
int iio_update_buffers(struct iio_dev *indio_dev,
struct iio_buffer *insert_buffer,
struct iio_buffer *remove_buffer)
* Note this can only occur when adding a buffer.
*/
list_del(&insert_buffer->buffer_list);
- indio_dev->active_scan_mask = old_mask;
- success = -EINVAL;
+ if (old_mask) {
+ indio_dev->active_scan_mask = old_mask;
+ success = -EINVAL;
+ } else {
+ kfree(compound_mask);
+ ret = -EINVAL;
+ goto error_ret;
+ }
}
} else {
indio_dev->active_scan_mask = compound_mask;
static void iio_dev_release(struct device *device)
{
struct iio_dev *indio_dev = dev_to_iio_dev(device);
- if (indio_dev->chrdev.dev)
- cdev_del(&indio_dev->chrdev);
if (indio_dev->modes & INDIO_BUFFER_TRIGGERED)
iio_device_unregister_trigger_consumer(indio_dev);
iio_device_unregister_eventset(indio_dev);
iio_device_unregister_sysfs(indio_dev);
- iio_device_unregister_debugfs(indio_dev);
ida_simple_remove(&iio_ida, indio_dev->id);
kfree(indio_dev);
if (test_and_set_bit(IIO_BUSY_BIT_POS, &indio_dev->flags))
return -EBUSY;
+ iio_device_get(indio_dev);
+
filp->private_data = indio_dev;
return 0;
struct iio_dev *indio_dev = container_of(inode->i_cdev,
struct iio_dev, chrdev);
clear_bit(IIO_BUSY_BIT_POS, &indio_dev->flags);
+ iio_device_put(indio_dev);
+
return 0;
}
indio_dev->setup_ops == NULL)
indio_dev->setup_ops = &noop_ring_setup_ops;
- ret = device_add(&indio_dev->dev);
- if (ret < 0)
- goto error_unreg_eventset;
cdev_init(&indio_dev->chrdev, &iio_buffer_fileops);
indio_dev->chrdev.owner = indio_dev->info->driver_module;
+ indio_dev->chrdev.kobj.parent = &indio_dev->dev.kobj;
ret = cdev_add(&indio_dev->chrdev, indio_dev->dev.devt, 1);
if (ret < 0)
- goto error_del_device;
- return 0;
+ goto error_unreg_eventset;
-error_del_device:
- device_del(&indio_dev->dev);
+ ret = device_add(&indio_dev->dev);
+ if (ret < 0)
+ goto error_cdev_del;
+
+ return 0;
+error_cdev_del:
+ cdev_del(&indio_dev->chrdev);
error_unreg_eventset:
iio_device_unregister_eventset(indio_dev);
error_free_sysfs:
void iio_device_unregister(struct iio_dev *indio_dev)
{
mutex_lock(&indio_dev->info_exist_lock);
+
+ device_del(&indio_dev->dev);
+
+ if (indio_dev->chrdev.dev)
+ cdev_del(&indio_dev->chrdev);
+ iio_device_unregister_debugfs(indio_dev);
+
+ iio_disable_all_buffers(indio_dev);
+
indio_dev->info = NULL;
mutex_unlock(&indio_dev->info_exist_lock);
- device_del(&indio_dev->dev);
}
EXPORT_SYMBOL(iio_device_unregister);
subsys_initcall(iio_init);
static unsigned int iio_event_poll(struct file *filep,
struct poll_table_struct *wait)
{
- struct iio_event_interface *ev_int = filep->private_data;
+ struct iio_dev *indio_dev = filep->private_data;
+ struct iio_event_interface *ev_int = indio_dev->event_interface;
unsigned int events = 0;
poll_wait(filep, &ev_int->wait, wait);
size_t count,
loff_t *f_ps)
{
- struct iio_event_interface *ev_int = filep->private_data;
+ struct iio_dev *indio_dev = filep->private_data;
+ struct iio_event_interface *ev_int = indio_dev->event_interface;
unsigned int copied;
int ret;
static int iio_event_chrdev_release(struct inode *inode, struct file *filep)
{
- struct iio_event_interface *ev_int = filep->private_data;
+ struct iio_dev *indio_dev = filep->private_data;
+ struct iio_event_interface *ev_int = indio_dev->event_interface;
spin_lock_irq(&ev_int->wait.lock);
__clear_bit(IIO_BUSY_BIT_POS, &ev_int->flags);
kfifo_reset_out(&ev_int->det_events);
spin_unlock_irq(&ev_int->wait.lock);
+ iio_device_put(indio_dev);
+
return 0;
}
return -EBUSY;
}
spin_unlock_irq(&ev_int->wait.lock);
- fd = anon_inode_getfd("iio:event",
- &iio_event_chrdev_fileops, ev_int, O_RDONLY);
+ iio_device_get(indio_dev);
+
+ fd = anon_inode_getfd("iio:event", &iio_event_chrdev_fileops,
+ indio_dev, O_RDONLY);
if (fd < 0) {
spin_lock_irq(&ev_int->wait.lock);
__clear_bit(IIO_BUSY_BIT_POS, &ev_int->flags);
spin_unlock_irq(&ev_int->wait.lock);
+ iio_device_put(indio_dev);
}
return fd;
}
goto error_ret;
}
if (chan->modified)
- mask = IIO_MOD_EVENT_CODE(chan->type, 0, chan->channel,
+ mask = IIO_MOD_EVENT_CODE(chan->type, 0, chan->channel2,
i/IIO_EV_DIR_MAX,
i%IIO_EV_DIR_MAX);
else if (chan->differential)
#define ST_MAGN_NUMBER_DATA_CHANNELS 3
/* DEFAULT VALUE FOR SENSORS */
-#define ST_MAGN_DEFAULT_OUT_X_L_ADDR 0X04
-#define ST_MAGN_DEFAULT_OUT_Y_L_ADDR 0X08
-#define ST_MAGN_DEFAULT_OUT_Z_L_ADDR 0X06
+#define ST_MAGN_DEFAULT_OUT_X_H_ADDR 0X03
+#define ST_MAGN_DEFAULT_OUT_Y_H_ADDR 0X07
+#define ST_MAGN_DEFAULT_OUT_Z_H_ADDR 0X05
/* FULLSCALE */
#define ST_MAGN_FS_AVL_1300MG 1300
static const struct iio_chan_spec st_magn_16bit_channels[] = {
ST_SENSORS_LSM_CHANNELS(IIO_MAGN,
BIT(IIO_CHAN_INFO_RAW) | BIT(IIO_CHAN_INFO_SCALE),
- ST_SENSORS_SCAN_X, 1, IIO_MOD_X, 's', IIO_LE, 16, 16,
- ST_MAGN_DEFAULT_OUT_X_L_ADDR),
+ ST_SENSORS_SCAN_X, 1, IIO_MOD_X, 's', IIO_BE, 16, 16,
+ ST_MAGN_DEFAULT_OUT_X_H_ADDR),
ST_SENSORS_LSM_CHANNELS(IIO_MAGN,
BIT(IIO_CHAN_INFO_RAW) | BIT(IIO_CHAN_INFO_SCALE),
- ST_SENSORS_SCAN_Y, 1, IIO_MOD_Y, 's', IIO_LE, 16, 16,
- ST_MAGN_DEFAULT_OUT_Y_L_ADDR),
+ ST_SENSORS_SCAN_Y, 1, IIO_MOD_Y, 's', IIO_BE, 16, 16,
+ ST_MAGN_DEFAULT_OUT_Y_H_ADDR),
ST_SENSORS_LSM_CHANNELS(IIO_MAGN,
BIT(IIO_CHAN_INFO_RAW) | BIT(IIO_CHAN_INFO_SCALE),
- ST_SENSORS_SCAN_Z, 1, IIO_MOD_Z, 's', IIO_LE, 16, 16,
- ST_MAGN_DEFAULT_OUT_Z_L_ADDR),
+ ST_SENSORS_SCAN_Z, 1, IIO_MOD_Z, 's', IIO_BE, 16, 16,
+ ST_MAGN_DEFAULT_OUT_Z_H_ADDR),
IIO_CHAN_SOFT_TIMESTAMP(3)
};
#ifdef CONFIG_PM_SLEEP
static int tmp006_suspend(struct device *dev)
{
- return tmp006_powerdown(iio_priv(dev_to_iio_dev(dev)));
+ struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
+ return tmp006_powerdown(iio_priv(indio_dev));
}
static int tmp006_resume(struct device *dev)
{
- struct tmp006_data *data = iio_priv(dev_to_iio_dev(dev));
+ struct tmp006_data *data = iio_priv(i2c_get_clientdata(
+ to_i2c_client(dev)));
return i2c_smbus_write_word_swapped(data->client, TMP006_CONFIG,
data->config | TMP006_CONFIG_MOD_MASK);
}
libibverbs, libibcm and a hardware driver library from
<http://www.openfabrics.org/git/>.
+config INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
+ bool "Experimental and unstable ABI for userspace access to flow steering verbs"
+ depends on INFINIBAND_USER_ACCESS
+ depends on STAGING
+ ---help---
+ The final ABI for userspace access to flow steering verbs
+ has not been defined. To use the current ABI, *WHICH WILL
+ CHANGE IN THE FUTURE*, say Y here.
+
+ If unsure, say N.
+
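To exercise the option, it has to be switched on together with its dependencies in the kernel configuration; a minimal fragment (assuming the rest of the InfiniBand stack is already configured) might look like:

CONFIG_STAGING=y
CONFIG_INFINIBAND_USER_ACCESS=m
CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING=y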
config INFINIBAND_USER_MEM
bool
depends on INFINIBAND_USER_ACCESS != n
IB_UVERBS_DECLARE_CMD(create_xsrq);
IB_UVERBS_DECLARE_CMD(open_xrcd);
IB_UVERBS_DECLARE_CMD(close_xrcd);
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
IB_UVERBS_DECLARE_CMD(create_flow);
IB_UVERBS_DECLARE_CMD(destroy_flow);
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
#endif /* UVERBS_H */
static struct uverbs_lock_class ah_lock_class = { .name = "AH-uobj" };
static struct uverbs_lock_class srq_lock_class = { .name = "SRQ-uobj" };
static struct uverbs_lock_class xrcd_lock_class = { .name = "XRCD-uobj" };
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
static struct uverbs_lock_class rule_lock_class = { .name = "RULE-uobj" };
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
#define INIT_UDATA(udata, ibuf, obuf, ilen, olen) \
do { \
return ret ? ret : in_len;
}
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
static int kern_spec_to_ib_spec(struct ib_kern_spec *kern_spec,
union ib_flow_spec *ib_spec)
{
return ret ? ret : in_len;
}
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
static int __uverbs_create_xsrq(struct ib_uverbs_file *file,
struct ib_uverbs_create_xsrq *cmd,
[IB_USER_VERBS_CMD_CLOSE_XRCD] = ib_uverbs_close_xrcd,
[IB_USER_VERBS_CMD_CREATE_XSRQ] = ib_uverbs_create_xsrq,
[IB_USER_VERBS_CMD_OPEN_QP] = ib_uverbs_open_qp,
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
[IB_USER_VERBS_CMD_CREATE_FLOW] = ib_uverbs_create_flow,
[IB_USER_VERBS_CMD_DESTROY_FLOW] = ib_uverbs_destroy_flow
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
};
static void ib_uverbs_add_one(struct ib_device *device);
if (!(file->device->ib_dev->uverbs_cmd_mask & (1ull << hdr.command)))
return -ENOSYS;
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
if (hdr.command >= IB_USER_VERBS_CMD_THRESHOLD) {
struct ib_uverbs_cmd_hdr_ex hdr_ex;
(hdr_ex.out_words +
hdr_ex.provider_out_words) * 4);
} else {
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
if (hdr.in_words * 4 != count)
return -EINVAL;
buf + sizeof(hdr),
hdr.in_words * 4,
hdr.out_words * 4);
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
}
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
}
static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
return "C2_QP_STATE_ERROR";
default:
return "<invalid QP state>";
- };
+ }
}
void c2_ae_event(struct c2_dev *c2dev, u32 mq_index)
ibdev->ib_dev.create_flow = mlx4_ib_create_flow;
ibdev->ib_dev.destroy_flow = mlx4_ib_destroy_flow;
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
ibdev->ib_dev.uverbs_cmd_mask |=
(1ull << IB_USER_VERBS_CMD_CREATE_FLOW) |
(1ull << IB_USER_VERBS_CMD_DESTROY_FLOW);
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
}
mlx4_ib_alloc_eqs(dev, ibdev);
static int alloc_comp_eqs(struct mlx5_ib_dev *dev)
{
struct mlx5_eq_table *table = &dev->mdev.priv.eq_table;
+ char name[MLX5_MAX_EQ_NAME];
struct mlx5_eq *eq, *n;
int ncomp_vec;
int nent;
goto clean;
}
- snprintf(eq->name, MLX5_MAX_EQ_NAME, "mlx5_comp%d", i);
+ snprintf(name, MLX5_MAX_EQ_NAME, "mlx5_comp%d", i);
err = mlx5_create_map_eq(&dev->mdev, eq,
i + MLX5_EQ_VEC_COMP_BASE, nent, 0,
- eq->name,
- &dev->mdev.priv.uuari.uars[0]);
+ name, &dev->mdev.priv.uuari.uars[0]);
if (err) {
kfree(eq);
goto clean;
props->max_srq_sge = max_rq_sg - 1;
props->max_fast_reg_page_list_len = (unsigned int)-1;
props->local_ca_ack_delay = dev->mdev.caps.local_ca_ack_delay;
- props->atomic_cap = dev->mdev.caps.flags & MLX5_DEV_CAP_FLAG_ATOMIC ?
- IB_ATOMIC_HCA : IB_ATOMIC_NONE;
- props->masked_atomic_cap = IB_ATOMIC_HCA;
+ props->atomic_cap = IB_ATOMIC_NONE;
+ props->masked_atomic_cap = IB_ATOMIC_NONE;
props->max_pkeys = be16_to_cpup((__be16 *)(out_mad->data + 28));
props->max_mcast_grp = 1 << dev->mdev.caps.log_max_mcg;
props->max_mcast_qp_attach = dev->mdev.caps.max_qp_mcg;
ibev.device = &ibdev->ib_dev;
ibev.element.port_num = port;
+ if (port < 1 || port > ibdev->num_ports) {
+ mlx5_ib_warn(ibdev, "warning: event on port %d\n", port);
+ return;
+ }
+
if (ibdev->ib_active)
ib_dispatch_event(&ibev);
}
DEF_CACHE_SIZE = 10,
};
+enum {
+ MLX5_UMR_ALIGN = 2048
+};
+
static __be64 *mr_align(__be64 *ptr, int align)
{
unsigned long mask = align - 1;
static int add_keys(struct mlx5_ib_dev *dev, int c, int num)
{
- struct device *ddev = dev->ib_dev.dma_device;
struct mlx5_mr_cache *cache = &dev->cache;
struct mlx5_cache_ent *ent = &cache->ent[c];
struct mlx5_create_mkey_mbox_in *in;
struct mlx5_ib_mr *mr;
int npages = 1 << ent->order;
- int size = sizeof(u64) * npages;
int err = 0;
int i;
}
mr->order = ent->order;
mr->umred = 1;
- mr->pas = kmalloc(size + 0x3f, GFP_KERNEL);
- if (!mr->pas) {
- kfree(mr);
- err = -ENOMEM;
- goto out;
- }
- mr->dma = dma_map_single(ddev, mr_align(mr->pas, 0x40), size,
- DMA_TO_DEVICE);
- if (dma_mapping_error(ddev, mr->dma)) {
- kfree(mr->pas);
- kfree(mr);
- err = -ENOMEM;
- goto out;
- }
-
in->seg.status = 1 << 6;
in->seg.xlt_oct_size = cpu_to_be32((npages + 1) / 2);
in->seg.qpn_mkey7_0 = cpu_to_be32(0xffffff << 8);
sizeof(*in));
if (err) {
mlx5_ib_warn(dev, "create mkey failed %d\n", err);
- dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE);
- kfree(mr->pas);
kfree(mr);
goto out;
}
static void remove_keys(struct mlx5_ib_dev *dev, int c, int num)
{
- struct device *ddev = dev->ib_dev.dma_device;
struct mlx5_mr_cache *cache = &dev->cache;
struct mlx5_cache_ent *ent = &cache->ent[c];
struct mlx5_ib_mr *mr;
- int size;
int err;
int i;
ent->size--;
spin_unlock(&ent->lock);
err = mlx5_core_destroy_mkey(&dev->mdev, &mr->mmr);
- if (err) {
+ if (err)
mlx5_ib_warn(dev, "failed destroy mkey\n");
- } else {
- size = ALIGN(sizeof(u64) * (1 << mr->order), 0x40);
- dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE);
- kfree(mr->pas);
+ else
kfree(mr);
- }
}
}
static void clean_keys(struct mlx5_ib_dev *dev, int c)
{
- struct device *ddev = dev->ib_dev.dma_device;
struct mlx5_mr_cache *cache = &dev->cache;
struct mlx5_cache_ent *ent = &cache->ent[c];
struct mlx5_ib_mr *mr;
- int size;
int err;
+ cancel_delayed_work(&ent->dwork);
while (1) {
spin_lock(&ent->lock);
if (list_empty(&ent->head)) {
ent->size--;
spin_unlock(&ent->lock);
err = mlx5_core_destroy_mkey(&dev->mdev, &mr->mmr);
- if (err) {
+ if (err)
mlx5_ib_warn(dev, "failed destroy mkey\n");
- } else {
- size = ALIGN(sizeof(u64) * (1 << mr->order), 0x40);
- dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE);
- kfree(mr->pas);
+ else
kfree(mr);
- }
}
}
int i;
dev->cache.stopped = 1;
- destroy_workqueue(dev->cache.wq);
+ flush_workqueue(dev->cache.wq);
mlx5_mr_cache_debugfs_cleanup(dev);
for (i = 0; i < MAX_MR_CACHE_ENTRIES; i++)
clean_keys(dev, i);
+ destroy_workqueue(dev->cache.wq);
+
return 0;
}
int page_shift, int order, int access_flags)
{
struct mlx5_ib_dev *dev = to_mdev(pd->device);
+ struct device *ddev = dev->ib_dev.dma_device;
struct umr_common *umrc = &dev->umrc;
struct ib_send_wr wr, *bad;
struct mlx5_ib_mr *mr;
struct ib_sge sg;
+ int size = sizeof(u64) * npages;
int err;
int i;
if (!mr)
return ERR_PTR(-EAGAIN);
- mlx5_ib_populate_pas(dev, umem, page_shift, mr_align(mr->pas, 0x40), 1);
+ mr->pas = kmalloc(size + MLX5_UMR_ALIGN - 1, GFP_KERNEL);
+ if (!mr->pas) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ mlx5_ib_populate_pas(dev, umem, page_shift,
+ mr_align(mr->pas, MLX5_UMR_ALIGN), 1);
+
+ mr->dma = dma_map_single(ddev, mr_align(mr->pas, MLX5_UMR_ALIGN), size,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(ddev, mr->dma)) {
+ kfree(mr->pas);
+ err = -ENOMEM;
+ goto error;
+ }
memset(&wr, 0, sizeof(wr));
wr.wr_id = (u64)(unsigned long)mr;
wait_for_completion(&mr->done);
up(&umrc->sem);
+ dma_unmap_single(ddev, mr->dma, size, DMA_TO_DEVICE);
+ kfree(mr->pas);
+
if (mr->status != IB_WC_SUCCESS) {
mlx5_ib_warn(dev, "reg umr failed\n");
err = -EFAULT;
switch (qp_type) {
case IB_QPT_XRC_INI:
- size = sizeof(struct mlx5_wqe_xrc_seg);
+ size += sizeof(struct mlx5_wqe_xrc_seg);
/* fall through */
case IB_QPT_RC:
size += sizeof(struct mlx5_wqe_ctrl_seg) +
sizeof(struct mlx5_wqe_raddr_seg);
break;
+ case IB_QPT_XRC_TGT:
+ return 0;
+
case IB_QPT_UC:
- size = sizeof(struct mlx5_wqe_ctrl_seg) +
+ size += sizeof(struct mlx5_wqe_ctrl_seg) +
sizeof(struct mlx5_wqe_raddr_seg);
break;
case IB_QPT_UD:
case IB_QPT_SMI:
case IB_QPT_GSI:
- size = sizeof(struct mlx5_wqe_ctrl_seg) +
+ size += sizeof(struct mlx5_wqe_ctrl_seg) +
sizeof(struct mlx5_wqe_datagram_seg);
break;
case MLX5_IB_QPT_REG_UMR:
- size = sizeof(struct mlx5_wqe_ctrl_seg) +
+ size += sizeof(struct mlx5_wqe_ctrl_seg) +
sizeof(struct mlx5_wqe_umr_ctrl_seg) +
sizeof(struct mlx5_mkey_seg);
break;
return wqe_size;
if (wqe_size > dev->mdev.caps.max_sq_desc_sz) {
- mlx5_ib_dbg(dev, "\n");
+ mlx5_ib_dbg(dev, "wqe_size(%d) > max_sq_desc_sz(%d)\n",
+ wqe_size, dev->mdev.caps.max_sq_desc_sz);
return -EINVAL;
}
wq_size = roundup_pow_of_two(attr->cap.max_send_wr * wqe_size);
qp->sq.wqe_cnt = wq_size / MLX5_SEND_WQE_BB;
+ if (qp->sq.wqe_cnt > dev->mdev.caps.max_wqes) {
+ mlx5_ib_dbg(dev, "wqe count(%d) exceeds limits(%d)\n",
+ qp->sq.wqe_cnt, dev->mdev.caps.max_wqes);
+ return -ENOMEM;
+ }
qp->sq.wqe_shift = ilog2(MLX5_SEND_WQE_BB);
qp->sq.max_gs = attr->cap.max_send_sge;
- qp->sq.max_post = 1 << ilog2(wq_size / wqe_size);
+ qp->sq.max_post = wq_size / wqe_size;
+ attr->cap.max_send_wr = qp->sq.max_post;
return wq_size;
}
MLX5_QP_OPTPAR_Q_KEY,
[MLX5_QP_ST_MLX] = MLX5_QP_OPTPAR_PKEY_INDEX |
MLX5_QP_OPTPAR_Q_KEY,
+ [MLX5_QP_ST_XRC] = MLX5_QP_OPTPAR_ALT_ADDR_PATH |
+ MLX5_QP_OPTPAR_RRE |
+ MLX5_QP_OPTPAR_RAE |
+ MLX5_QP_OPTPAR_RWE |
+ MLX5_QP_OPTPAR_PKEY_INDEX,
},
},
[MLX5_QP_STATE_RTR] = {
[MLX5_QP_STATE_RTS] = {
[MLX5_QP_ST_UD] = MLX5_QP_OPTPAR_Q_KEY,
[MLX5_QP_ST_MLX] = MLX5_QP_OPTPAR_Q_KEY,
+ [MLX5_QP_ST_UC] = MLX5_QP_OPTPAR_RWE,
+ [MLX5_QP_ST_RC] = MLX5_QP_OPTPAR_RNR_TIMEOUT |
+ MLX5_QP_OPTPAR_RWE |
+ MLX5_QP_OPTPAR_RAE |
+ MLX5_QP_OPTPAR_RRE,
},
},
};
rseg->reserved = 0;
}
-static void set_atomic_seg(struct mlx5_wqe_atomic_seg *aseg, struct ib_send_wr *wr)
-{
- if (wr->opcode == IB_WR_ATOMIC_CMP_AND_SWP) {
- aseg->swap_add = cpu_to_be64(wr->wr.atomic.swap);
- aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add);
- } else if (wr->opcode == IB_WR_MASKED_ATOMIC_FETCH_AND_ADD) {
- aseg->swap_add = cpu_to_be64(wr->wr.atomic.compare_add);
- aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add_mask);
- } else {
- aseg->swap_add = cpu_to_be64(wr->wr.atomic.compare_add);
- aseg->compare = 0;
- }
-}
-
-static void set_masked_atomic_seg(struct mlx5_wqe_masked_atomic_seg *aseg,
- struct ib_send_wr *wr)
-{
- aseg->swap_add = cpu_to_be64(wr->wr.atomic.swap);
- aseg->swap_add_mask = cpu_to_be64(wr->wr.atomic.swap_mask);
- aseg->compare = cpu_to_be64(wr->wr.atomic.compare_add);
- aseg->compare_mask = cpu_to_be64(wr->wr.atomic.compare_add_mask);
-}
-
static void set_datagram_seg(struct mlx5_wqe_datagram_seg *dseg,
struct ib_send_wr *wr)
{
case IB_WR_ATOMIC_CMP_AND_SWP:
case IB_WR_ATOMIC_FETCH_AND_ADD:
- set_raddr_seg(seg, wr->wr.atomic.remote_addr,
- wr->wr.atomic.rkey);
- seg += sizeof(struct mlx5_wqe_raddr_seg);
-
- set_atomic_seg(seg, wr);
- seg += sizeof(struct mlx5_wqe_atomic_seg);
-
- size += (sizeof(struct mlx5_wqe_raddr_seg) +
- sizeof(struct mlx5_wqe_atomic_seg)) / 16;
- break;
-
case IB_WR_MASKED_ATOMIC_CMP_AND_SWP:
- set_raddr_seg(seg, wr->wr.atomic.remote_addr,
- wr->wr.atomic.rkey);
- seg += sizeof(struct mlx5_wqe_raddr_seg);
-
- set_masked_atomic_seg(seg, wr);
- seg += sizeof(struct mlx5_wqe_masked_atomic_seg);
-
- size += (sizeof(struct mlx5_wqe_raddr_seg) +
- sizeof(struct mlx5_wqe_masked_atomic_seg)) / 16;
- break;
+ mlx5_ib_warn(dev, "Atomic operations are not supported yet\n");
+ err = -ENOSYS;
+ *bad_wr = wr;
+ goto out;
case IB_WR_LOCAL_INV:
next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL;
mlx5_vfree(in);
if (err) {
mlx5_ib_dbg(dev, "create SRQ failed, err %d\n", err);
- goto err_srq;
+ goto err_usr_kern_srq;
}
mlx5_ib_dbg(dev, "create SRQ with srqn 0x%x\n", srq->msrq.srqn);
err_core:
mlx5_core_destroy_srq(&dev->mdev, &srq->msrq);
+
+err_usr_kern_srq:
if (pd->uobject)
destroy_srq_user(pd, srq);
else
mthca_warn(dev, "Unhandled event %02x(%02x) on EQ %d\n",
eqe->type, eqe->subtype, eq->eqn);
break;
- };
+ }
set_eqe_hw(eqe);
++eq->cons_index;
return IB_QPS_SQE;
case OCRDMA_QPS_ERR:
return IB_QPS_ERR;
- };
+ }
return IB_QPS_ERR;
}
return OCRDMA_QPS_SQE;
case IB_QPS_ERR:
return OCRDMA_QPS_ERR;
- };
+ }
return OCRDMA_QPS_ERR;
}
break;
default:
return -EINVAL;
- };
+ }
cmd = ocrdma_init_emb_mqe(OCRDMA_CMD_CREATE_QP, sizeof(*cmd));
if (!cmd)
case BE_DEV_DOWN:
ocrdma_close(dev);
break;
- };
+ }
}
static struct ocrdma_driver ocrdma_drv = {
/* Unsupported */
*ib_speed = IB_SPEED_SDR;
*ib_width = IB_WIDTH_1X;
- };
+ }
}
default:
ibwc_status = IB_WC_GENERAL_ERR;
break;
- };
+ }
return ibwc_status;
}
pr_err("%s() invalid opcode received = 0x%x\n",
__func__, hdr->cw & OCRDMA_WQE_OPCODE_MASK);
break;
- };
+ }
}
static void ocrdma_set_cqe_status_flushed(struct ocrdma_qp *qp,
pr_debug("Entering isert_connect_release(): >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n");
- if (device->use_frwr)
+ if (device && device->use_frwr)
isert_conn_free_frwr_pool(isert_conn);
if (isert_conn->conn_qp) {
int resp_data_len;
int resp_len;
- resp_data_len = (rsp_code == SRP_TSK_MGMT_SUCCESS) ? 0 : 4;
+ resp_data_len = 4;
resp_len = sizeof(*srp_rsp) + resp_data_len;
srp_rsp = ioctx->ioctx.buf;
+ atomic_xchg(&ch->req_lim_delta, 0));
srp_rsp->tag = tag;
- if (rsp_code != SRP_TSK_MGMT_SUCCESS) {
- srp_rsp->flags |= SRP_RSP_FLAG_RSPVALID;
- srp_rsp->resp_data_len = cpu_to_be32(resp_data_len);
- srp_rsp->data[3] = rsp_code;
- }
+ srp_rsp->flags |= SRP_RSP_FLAG_RSPVALID;
+ srp_rsp->resp_data_len = cpu_to_be32(resp_data_len);
+ srp_rsp->data[3] = rsp_code;
return resp_len;
}
transport_deregister_session(se_sess);
ch->sess = NULL;
+ ib_destroy_cm_id(ch->cm_id);
+
srpt_destroy_ch_ib(ch);
srpt_free_ioctx_ring((struct srpt_ioctx **)ch->ioctx_ring,
list_del(&ch->list);
spin_unlock_irq(&sdev->spinlock);
- ib_destroy_cm_id(ch->cm_id);
-
if (ch->release_done)
complete(ch->release_done);
*/
struct input_dev *input_allocate_device(void)
{
+ static atomic_t input_no = ATOMIC_INIT(0);
struct input_dev *dev;
dev = kzalloc(sizeof(struct input_dev), GFP_KERNEL);
device_initialize(&dev->dev);
mutex_init(&dev->mutex);
spin_lock_init(&dev->event_lock);
+ init_timer(&dev->timer);
INIT_LIST_HEAD(&dev->h_list);
INIT_LIST_HEAD(&dev->node);
+ dev_set_name(&dev->dev, "input%ld",
+ (unsigned long) atomic_inc_return(&input_no) - 1);
+
__module_get(THIS_MODULE);
}
*/
int input_register_device(struct input_dev *dev)
{
- static atomic_t input_no = ATOMIC_INIT(0);
struct input_devres *devres = NULL;
struct input_handler *handler;
unsigned int packet_size;
* If delay and period are pre-set by the driver, then autorepeating
* is handled by the driver itself and we don't do it in input.c.
*/
- init_timer(&dev->timer);
if (!dev->rep[REP_DELAY] && !dev->rep[REP_PERIOD]) {
dev->timer.data = (long) dev;
dev->timer.function = input_repeat_key;
if (!dev->setkeycode)
dev->setkeycode = input_default_setkeycode;
- dev_set_name(&dev->dev, "input%ld",
- (unsigned long) atomic_inc_return(&input_no) - 1);
-
error = device_add(&dev->dev);
if (error)
goto err_free_vals;
input_dev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_REP);
input_set_capability(input_dev, EV_MSC, MSC_SCAN);
- if (pdata)
+ if (pdata) {
error = pxa27x_keypad_build_keycode(keypad);
- else
+ } else {
error = pxa27x_keypad_build_keycode_from_dt(keypad);
+ /*
+ * Data that we get from DT resides in dynamically
+ * allocated memory, so we need to update our pdata
+ * pointer.
+ */
+ pdata = keypad->pdata;
+ }
if (error) {
dev_err(&pdev->dev, "failed to build keycode\n");
goto failed_put_clk;
if (status) {
if (status == -ESHUTDOWN)
return;
- dev_err(&dev->intf->dev, "%s: urb status %d\n", __func__, status);
+ dev_err_ratelimited(&dev->intf->dev, "%s: urb status %d\n",
+ __func__, status);
+ goto out;
}
/* Special keys */
dev->ctl_data->byte[2],
dev->ctl_data->byte[3]);
- if (status)
- dev_err(&dev->intf->dev, "%s: urb status %d\n", __func__, status);
+ if (status) {
+ if (status == -ESHUTDOWN)
+ return;
+ dev_err_ratelimited(&dev->intf->dev, "%s: urb status %d\n",
+ __func__, status);
+ }
spin_lock(&dev->ctl_submit_lock);
if (likely(!dev->shutdown)) {
- if (dev->buzzer_pending) {
+ if (dev->buzzer_pending || status) {
dev->buzzer_pending = 0;
dev->ctl_urb_pending = 1;
cm109_submit_buzz_toggle(dev);
/* Dell Latitude E5500, E6400, E6500, Precision M4400 */
{ { 0x62, 0x02, 0x14 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf,
ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED },
+ { { 0x73, 0x00, 0x14 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf, ALPS_DUALPOINT }, /* Dell XT2 */
{ { 0x73, 0x02, 0x50 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf, ALPS_FOUR_BUTTONS }, /* Dell Vostro 1400 */
{ { 0x52, 0x01, 0x14 }, 0x00, ALPS_PROTO_V2, 0xff, 0xff,
ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, /* Toshiba Tecra A11-11L */
{
unsigned long flags;
unsigned char data, str;
- int i = 0;
+ int count = 0;
+ int retval = 0;
spin_lock_irqsave(&i8042_lock, flags);
- while (((str = i8042_read_status()) & I8042_STR_OBF) && (i < I8042_BUFFER_SIZE)) {
- udelay(50);
- data = i8042_read_data();
- i++;
- dbg("%02x <- i8042 (flush, %s)\n",
- data, str & I8042_STR_AUXDATA ? "aux" : "kbd");
+ while ((str = i8042_read_status()) & I8042_STR_OBF) {
+ if (count++ < I8042_BUFFER_SIZE) {
+ udelay(50);
+ data = i8042_read_data();
+ dbg("%02x <- i8042 (flush, %s)\n",
+ data, str & I8042_STR_AUXDATA ? "aux" : "kbd");
+ } else {
+ retval = -EIO;
+ break;
+ }
}
spin_unlock_irqrestore(&i8042_lock, flags);
- return i;
+ return retval;
}
/*
static int i8042_controller_check(void)
{
- if (i8042_flush() == I8042_BUFFER_SIZE) {
+ if (i8042_flush()) {
pr_err("No controller found\n");
return -ENODEV;
}
}
static enum power_supply_property wacom_battery_props[] = {
+ POWER_SUPPLY_PROP_SCOPE,
POWER_SUPPLY_PROP_CAPACITY
};
int ret = 0;
switch (psp) {
+ case POWER_SUPPLY_PROP_SCOPE:
+ val->intval = POWER_SUPPLY_SCOPE_DEVICE;
+ break;
case POWER_SUPPLY_PROP_CAPACITY:
val->intval =
wacom->wacom_wac.battery_capacity * 100 / 31;
static const struct wacom_features wacom_features_0x10D =
{ "Wacom ISDv4 10D", WACOM_PKGLEN_MTTPC, 26202, 16325, 255,
0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES };
+static const struct wacom_features wacom_features_0x10E =
+ { "Wacom ISDv4 10E", WACOM_PKGLEN_MTTPC, 27760, 15694, 255,
+ 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES };
+static const struct wacom_features wacom_features_0x10F =
+ { "Wacom ISDv4 10F", WACOM_PKGLEN_MTTPC, 27760, 15694, 255,
+ 0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES };
static const struct wacom_features wacom_features_0x4001 =
{ "Wacom ISDv4 4001", WACOM_PKGLEN_MTTPC, 26202, 16325, 255,
0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES };
{ USB_DEVICE_WACOM(0x100) },
{ USB_DEVICE_WACOM(0x101) },
{ USB_DEVICE_WACOM(0x10D) },
+ { USB_DEVICE_WACOM(0x10E) },
+ { USB_DEVICE_WACOM(0x10F) },
{ USB_DEVICE_WACOM(0x300) },
{ USB_DEVICE_WACOM(0x301) },
{ USB_DEVICE_WACOM(0x304) },
select PCI_PRI
select PCI_PASID
select IOMMU_API
- depends on X86_64 && PCI && ACPI && X86_IO_APIC
+ depends on X86_64 && PCI && ACPI
---help---
With this option you can enable support for AMD IOMMU hardware in
your system. An IOMMU is a hardware component which provides
u32 cbar;
pgd_t *pgd;
};
+#define INVALID_IRPTNDX 0xff
#define ARM_SMMU_CB_ASID(cfg) ((cfg)->cbndx)
#define ARM_SMMU_CB_VMID(cfg) ((cfg)->cbndx + 1)
if (IS_ERR_VALUE(ret)) {
dev_err(smmu->dev, "failed to request context IRQ %d (%u)\n",
root_cfg->irptndx, irq);
- root_cfg->irptndx = -1;
+ root_cfg->irptndx = INVALID_IRPTNDX;
goto out_free_context;
}
writel_relaxed(0, cb_base + ARM_SMMU_CB_SCTLR);
arm_smmu_tlb_inv_context(root_cfg);
- if (root_cfg->irptndx != -1) {
+ if (root_cfg->irptndx != INVALID_IRPTNDX) {
irq = smmu->irqs[smmu->num_global_irqs + root_cfg->irptndx];
free_irq(irq, domain);
}
goto out_put_parent;
}
- arm_smmu_device_reset(smmu);
-
for (i = 0; i < smmu->num_global_irqs; ++i) {
err = request_irq(smmu->irqs[i],
arm_smmu_global_fault,
spin_lock(&arm_smmu_devices_lock);
list_add(&smmu->list, &arm_smmu_devices);
spin_unlock(&arm_smmu_devices_lock);
+
+ arm_smmu_device_reset(smmu);
return 0;
out_free_irqs:
return ret;
/* Oh, for a proper bus abstraction */
- if (!iommu_present(&platform_bus_type));
+ if (!iommu_present(&platform_bus_type))
bus_set_iommu(&platform_bus_type, &arm_smmu_ops);
- if (!iommu_present(&amba_bustype));
+ if (!iommu_present(&amba_bustype))
bus_set_iommu(&amba_bustype, &arm_smmu_ops);
return 0;
static void
hfcpci_softirq(void *arg)
{
- (void) driver_for_each_device(&hfc_driver.driver, NULL, arg,
- _hfcpci_softirq);
+ WARN_ON_ONCE(driver_for_each_device(&hfc_driver.driver, NULL, arg,
+ _hfcpci_softirq) != 0);
/* if next event would be in the past ... */
if ((s32)(hfc_jiffies + tics - jiffies) <= 0)
t += sprintf(t, "Amd7930: empty_Dfifo cnt: %d |", cs->rcvidx);
QuickHex(t, cs->rcvbuf, cs->rcvidx);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
/* moves the received data into the sk-buffer */
memcpy(skb_put(skb, cs->rcvidx), cs->rcvbuf, cs->rcvidx);
t += sprintf(t, "Amd7930: fill_Dfifo cnt: %d |", count);
QuickHex(t, deb_ptr, count);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
/* AMD interrupts on */
AmdIrqOn(cs);
t += sprintf(t, "hdlc_empty_fifo %c cnt %d",
bcs->channel ? 'B' : 'A', count);
QuickHex(t, p, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
t += sprintf(t, "hdlc_fill_fifo %c cnt %d",
bcs->channel ? 'B' : 'A', count);
QuickHex(t, p, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
ptr--;
*ptr++ = '\n';
*ptr = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
+ HiSax_putstatus(cs, NULL, "%s", cs->dlog);
} else
HiSax_putstatus(cs, "LogEcho: ",
"warning Frame too big (%d)",
t += sprintf(t, "hscx_empty_fifo %c cnt %d",
bcs->hw.hscx.hscx ? 'B' : 'A', count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
t += sprintf(t, "hscx_fill_fifo %c cnt %d",
bcs->hw.hscx.hscx ? 'B' : 'A', count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
t = tmp;
t += sprintf(tmp, "Arcofi data");
QuickHex(t, p, cs->dc.isac.mon_rxp);
- debugl1(cs, tmp);
+ debugl1(cs, "%s", tmp);
if ((cs->dc.isac.mon_rxp == 2) && (cs->dc.isac.mon_rx[0] == 0xa0)) {
switch (cs->dc.isac.mon_rx[1]) {
case 0x80:
t += sprintf(t, "modem read cnt %d", cs->hw.elsa.rcvcnt);
QuickHex(t, cs->hw.elsa.rcvbuf, cs->hw.elsa.rcvcnt);
- debugl1(cs, tmp);
+ debugl1(cs, "%s", tmp);
}
cs->hw.elsa.rcvcnt = 0;
}
ptr--;
*ptr++ = '\n';
*ptr = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
+ HiSax_putstatus(cs, NULL, "%s", cs->dlog);
} else
HiSax_putstatus(cs, "LogEcho: ", "warning Frame too big (%d)", total - 3);
}
ptr--;
*ptr++ = '\n';
*ptr = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
+ HiSax_putstatus(cs, NULL, "%s", cs->dlog);
} else
HiSax_putstatus(cs, "LogEcho: ", "warning Frame too big (%d)", skb->len);
}
t += sprintf(t, "hscx_empty_fifo %c cnt %d",
bcs->hw.hscx.hscx ? 'B' : 'A', count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
t += sprintf(t, "hscx_fill_fifo %c cnt %d",
bcs->hw.hscx.hscx ? 'B' : 'A', count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
t += sprintf(t, "icc_empty_fifo cnt %d", count);
QuickHex(t, ptr, count);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
}
t += sprintf(t, "icc_fill_fifo cnt %d", count);
QuickHex(t, ptr, count);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
}
t += sprintf(t, "dch_empty_fifo() cnt %d", count);
QuickHex(t, ptr, count);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
}
t += sprintf(t, "dch_fill_fifo() cnt %d", count);
QuickHex(t, ptr, count);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
}
t += sprintf(t, "bch_empty_fifo() B-%d cnt %d", hscx, count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
t += sprintf(t, "chb_fill_fifo() B-%d cnt %d", hscx, count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
t += sprintf(t, "isac_empty_fifo cnt %d", count);
QuickHex(t, ptr, count);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
}
t += sprintf(t, "isac_fill_fifo cnt %d", count);
QuickHex(t, ptr, count);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
}
t = tmp;
t += sprintf(t, "sendmbox cnt %d", len);
QuickHex(t, &msg[len-i], (i > 64) ? 64 : i);
- debugl1(cs, tmp);
+ debugl1(cs, "%s", tmp);
i -= 64;
}
}
t = tmp;
t += sprintf(t, "rcv_mbox cnt %d", ireg->clsb);
QuickHex(t, &msg[ireg->clsb - i], (i > 64) ? 64 : i);
- debugl1(cs, tmp);
+ debugl1(cs, "%s", tmp);
i -= 64;
}
}
tp += sprintf(debbuf, "msg iis(%x) msb(%x)",
ireg->iis, ireg->cmsb);
QuickHex(tp, (u_char *)ireg->par, ireg->clsb);
- debugl1(cs, debbuf);
+ debugl1(cs, "%s", debbuf);
}
break;
case ISAR_IIS_INVMSG:
int jade = bcs->hw.hscx.hscx;
if (cs->debug & L1_DEB_HSCX) {
- char tmp[40];
- sprintf(tmp, "jade %c mode %d ichan %d",
- 'A' + jade, mode, bc);
- debugl1(cs, tmp);
+ debugl1(cs, "jade %c mode %d ichan %d", 'A' + jade, mode, bc);
}
bcs->mode = mode;
bcs->channel = bc;
clear_pending_jade_ints(struct IsdnCardState *cs)
{
int val;
- char tmp[64];
cs->BC_Write_Reg(cs, 0, jade_HDLC_IMR, 0x00);
cs->BC_Write_Reg(cs, 1, jade_HDLC_IMR, 0x00);
val = cs->BC_Read_Reg(cs, 1, jade_HDLC_ISR);
- sprintf(tmp, "jade B ISTA %x", val);
- debugl1(cs, tmp);
+ debugl1(cs, "jade B ISTA %x", val);
val = cs->BC_Read_Reg(cs, 0, jade_HDLC_ISR);
- sprintf(tmp, "jade A ISTA %x", val);
- debugl1(cs, tmp);
+ debugl1(cs, "jade A ISTA %x", val);
val = cs->BC_Read_Reg(cs, 1, jade_HDLC_STAR);
- sprintf(tmp, "jade B STAR %x", val);
- debugl1(cs, tmp);
+ debugl1(cs, "jade B STAR %x", val);
val = cs->BC_Read_Reg(cs, 0, jade_HDLC_STAR);
- sprintf(tmp, "jade A STAR %x", val);
- debugl1(cs, tmp);
+ debugl1(cs, "jade A STAR %x", val);
/* Unmask ints */
cs->BC_Write_Reg(cs, 0, jade_HDLC_IMR, 0xF8);
cs->BC_Write_Reg(cs, 1, jade_HDLC_IMR, 0xF8);
t += sprintf(t, "jade_empty_fifo %c cnt %d",
bcs->hw.hscx.hscx ? 'B' : 'A', count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
t += sprintf(t, "jade_fill_fifo %c cnt %d",
bcs->hw.hscx.hscx ? 'B' : 'A', count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
{
dev_kfree_skb(skb);
if (pc->st->l3.debug & L3_DEB_WARN)
- l3_debug(pc->st, msg);
+ l3_debug(pc->st, "%s", msg);
l3_1tr6_release_req(pc, 0, NULL);
}
{
u_char *p;
int bcfound = 0;
- char tmp[80];
struct sk_buff *skb = arg;
/* Channel Identification */
/* Signal all services, linklevel takes care of Service-Indicator */
if (bcfound) {
if ((pc->para.setup.si1 != 7) && (pc->st->l3.debug & L3_DEB_WARN)) {
- sprintf(tmp, "non-digital call: %s -> %s",
+ l3_debug(pc->st, "non-digital call: %s -> %s",
pc->para.setup.phone,
pc->para.setup.eazmsn);
- l3_debug(pc->st, tmp);
}
newl3state(pc, 6);
pc->st->l3.l3l4(pc->st, CC_SETUP | INDICATION, pc);
{
u_char *p;
int i, tmpcharge = 0;
- char a_charge[8], tmp[32];
+ char a_charge[8];
struct sk_buff *skb = arg;
p = skb->data;
pc->st->l3.l3l4(pc->st, CC_CHARGE | INDICATION, pc);
}
if (pc->st->l3.debug & L3_DEB_CHARGE) {
- sprintf(tmp, "charging info %d", pc->para.chargeinfo);
- l3_debug(pc->st, tmp);
+ l3_debug(pc->st, "charging info %d",
+ pc->para.chargeinfo);
}
} else if (pc->st->l3.debug & L3_DEB_CHARGE)
l3_debug(pc->st, "charging info not found");
struct sk_buff *skb = arg;
u_char *p;
int i, tmpcharge = 0;
- char a_charge[8], tmp[32];
+ char a_charge[8];
StopAllL3Timer(pc);
p = skb->data;
pc->st->l3.l3l4(pc->st, CC_CHARGE | INDICATION, pc);
}
if (pc->st->l3.debug & L3_DEB_CHARGE) {
- sprintf(tmp, "charging info %d", pc->para.chargeinfo);
- l3_debug(pc->st, tmp);
+ l3_debug(pc->st, "charging info %d",
+ pc->para.chargeinfo);
}
} else if (pc->st->l3.debug & L3_DEB_CHARGE)
l3_debug(pc->st, "charging info not found");
int i, mt, cr;
struct l3_process *proc;
struct sk_buff *skb = arg;
- char tmp[80];
switch (pr) {
case (DL_DATA | INDICATION):
}
if (skb->len < 4) {
if (st->l3.debug & L3_DEB_PROTERR) {
- sprintf(tmp, "up1tr6 len only %d", skb->len);
- l3_debug(st, tmp);
+ l3_debug(st, "up1tr6 len only %d", skb->len);
}
dev_kfree_skb(skb);
return;
}
if ((skb->data[0] & 0xfe) != PROTO_DIS_N0) {
if (st->l3.debug & L3_DEB_PROTERR) {
- sprintf(tmp, "up1tr6%sunexpected discriminator %x message len %d",
+ l3_debug(st, "up1tr6%sunexpected discriminator %x message len %d",
(pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
skb->data[0], skb->len);
- l3_debug(st, tmp);
}
dev_kfree_skb(skb);
return;
}
if (skb->data[1] != 1) {
if (st->l3.debug & L3_DEB_PROTERR) {
- sprintf(tmp, "up1tr6 CR len not 1");
- l3_debug(st, tmp);
+ l3_debug(st, "up1tr6 CR len not 1");
}
dev_kfree_skb(skb);
return;
if (skb->data[0] == PROTO_DIS_N0) {
dev_kfree_skb(skb);
if (st->l3.debug & L3_DEB_STATE) {
- sprintf(tmp, "up1tr6%s N0 mt %x unhandled",
+ l3_debug(st, "up1tr6%s N0 mt %x unhandled",
(pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ", mt);
- l3_debug(st, tmp);
}
} else if (skb->data[0] == PROTO_DIS_N1) {
if (!(proc = getl3proc(st, cr))) {
if (cr < 128) {
if (!(proc = new_l3_process(st, cr))) {
if (st->l3.debug & L3_DEB_PROTERR) {
- sprintf(tmp, "up1tr6 no roc mem");
- l3_debug(st, tmp);
+ l3_debug(st, "up1tr6 no roc mem");
}
dev_kfree_skb(skb);
return;
} else {
if (!(proc = new_l3_process(st, cr))) {
if (st->l3.debug & L3_DEB_PROTERR) {
- sprintf(tmp, "up1tr6 no roc mem");
- l3_debug(st, tmp);
+ l3_debug(st, "up1tr6 no roc mem");
}
dev_kfree_skb(skb);
return;
if (i == ARRAY_SIZE(datastln1)) {
dev_kfree_skb(skb);
if (st->l3.debug & L3_DEB_STATE) {
- sprintf(tmp, "up1tr6%sstate %d mt %x unhandled",
+ l3_debug(st, "up1tr6%sstate %d mt %x unhandled",
(pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
proc->state, mt);
- l3_debug(st, tmp);
}
return;
} else {
if (st->l3.debug & L3_DEB_STATE) {
- sprintf(tmp, "up1tr6%sstate %d mt %x",
+ l3_debug(st, "up1tr6%sstate %d mt %x",
(pr == (DL_DATA | INDICATION)) ? " " : "(broadcast) ",
proc->state, mt);
- l3_debug(st, tmp);
}
datastln1[i].rout(proc, pr, skb);
}
int i, cr;
struct l3_process *proc;
struct Channel *chan;
- char tmp[80];
if ((DL_ESTABLISH | REQUEST) == pr) {
l3_msg(st, pr, NULL);
break;
if (i == ARRAY_SIZE(downstl)) {
if (st->l3.debug & L3_DEB_STATE) {
- sprintf(tmp, "down1tr6 state %d prim %d unhandled",
+ l3_debug(st, "down1tr6 state %d prim %d unhandled",
proc->state, pr);
- l3_debug(st, tmp);
}
} else {
if (st->l3.debug & L3_DEB_STATE) {
- sprintf(tmp, "down1tr6 state %d prim %d",
+ l3_debug(st, "down1tr6 state %d prim %d",
proc->state, pr);
- l3_debug(st, tmp);
}
downstl[i].rout(proc, pr, arg);
}
else
j = i;
QuickHex(t, p, j);
- debugl1(cs, tmp);
+ debugl1(cs, "%s", tmp);
p += j;
i -= j;
t = tmp;
dp--;
*dp++ = '\n';
*dp = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
+ HiSax_putstatus(cs, NULL, "%s", cs->dlog);
} else
HiSax_putstatus(cs, "LogFrame: ", "warning Frame too big (%d)", size);
}
}
if (finish) {
*dp = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
+ HiSax_putstatus(cs, NULL, "%s", cs->dlog);
return;
}
if ((0xfe & buf[0]) == PROTO_DIS_N0) { /* 1TR6 */
dp += sprintf(dp, "Unknown protocol %x!", buf[0]);
}
*dp = 0;
- HiSax_putstatus(cs, NULL, cs->dlog);
+ HiSax_putstatus(cs, NULL, "%s", cs->dlog);
}
t += sprintf(t, "W6692_empty_fifo cnt %d", count);
QuickHex(t, ptr, count);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
}
t += sprintf(t, "W6692_fill_fifo cnt %d", count);
QuickHex(t, ptr, count);
- debugl1(cs, cs->dlog);
+ debugl1(cs, "%s", cs->dlog);
}
}
t += sprintf(t, "W6692B_empty_fifo %c cnt %d",
bcs->channel + '1', count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
t += sprintf(t, "W6692B_fill_fifo %c cnt %d",
bcs->channel + '1', count);
QuickHex(t, ptr, count);
- debugl1(cs, bcs->blog);
+ debugl1(cs, "%s", bcs->blog);
}
}
kfree(privblk);
kfree(mboxblk);
kfree(list);
- platform_set_drvdata(pdev, NULL);
return 0;
}
*/
atomic_t has_dirty;
- struct ratelimit writeback_rate;
+ struct bch_ratelimit writeback_rate;
struct delayed_work writeback_rate_update;
/*
*/
sector_t last_read;
- /* Number of writeback bios in flight */
- atomic_t in_flight;
+ /* Limit number of writeback bios in flight */
+ struct semaphore in_flight;
struct closure_with_timer writeback;
- struct closure_waitlist writeback_wait;
struct keybuf writeback_keys;
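Replacing the atomic counter and wait list with a counting semaphore means the number of in-flight writeback bios is now bounded by acquiring the semaphore before submission and releasing it on completion. A minimal sketch of that pattern (the names and the limit of 64 are illustrative, not taken from bcache):

#include <linux/semaphore.h>

static struct semaphore in_flight;

static void writeback_limit_init(void)
{
	sema_init(&in_flight, 64);	/* at most 64 bios outstanding */
}

static void submit_one_writeback_bio(void)
{
	down(&in_flight);		/* blocks once the limit is reached */
	/* ... build and submit the bio here ... */
}

static void writeback_bio_done(void)
{
	up(&in_flight);			/* free a slot for the next bio */
}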
/* Mergesort */
+static void sort_key_next(struct btree_iter *iter,
+ struct btree_iter_set *i)
+{
+ i->k = bkey_next(i->k);
+
+ if (i->k == i->end)
+ *i = iter->data[--iter->used];
+}
+
static void btree_sort_fixup(struct btree_iter *iter)
{
while (iter->used > 1) {
struct btree_iter_set *top = iter->data, *i = top + 1;
- struct bkey *k;
if (iter->used > 2 &&
btree_iter_cmp(i[0], i[1]))
i++;
- for (k = i->k;
- k != i->end && bkey_cmp(top->k, &START_KEY(k)) > 0;
- k = bkey_next(k))
- if (top->k > i->k)
- __bch_cut_front(top->k, k);
- else if (KEY_SIZE(k))
- bch_cut_back(&START_KEY(k), top->k);
-
- if (top->k < i->k || k == i->k)
+ if (bkey_cmp(top->k, &START_KEY(i->k)) <= 0)
break;
- heap_sift(iter, i - top, btree_iter_cmp);
+ if (!KEY_SIZE(i->k)) {
+ sort_key_next(iter, i);
+ heap_sift(iter, i - top, btree_iter_cmp);
+ continue;
+ }
+
+ if (top->k > i->k) {
+ if (bkey_cmp(top->k, i->k) >= 0)
+ sort_key_next(iter, i);
+ else
+ bch_cut_front(top->k, i->k);
+
+ heap_sift(iter, i - top, btree_iter_cmp);
+ } else {
+ /* can't happen because of comparison func */
+ BUG_ON(!bkey_cmp(&START_KEY(top->k), &START_KEY(i->k)));
+ bch_cut_back(&START_KEY(i->k), top->k);
+ }
}
}
return;
err:
- bch_cache_set_error(b->c, "io error reading bucket %lu",
+ bch_cache_set_error(b->c, "io error reading bucket %zu",
PTR_BUCKET_NR(b->c, &b->key, 0));
}
return SHRINK_STOP;
/* Return -1 if we can't do anything right now */
- if (sc->gfp_mask & __GFP_WAIT)
+ if (sc->gfp_mask & __GFP_IO)
mutex_lock(&c->bucket_lock);
else if (!mutex_trylock(&c->bucket_lock))
return -1;
bitmap_zero(bitmap, SB_JOURNAL_BUCKETS);
pr_debug("%u journal buckets", ca->sb.njournal_buckets);
- /* Read journal buckets ordered by golden ratio hash to quickly
+ /*
+ * Read journal buckets ordered by golden ratio hash to quickly
* find a sequence of buckets with valid journal entries
*/
for (i = 0; i < ca->sb.njournal_buckets; i++) {
goto bsearch;
}
- /* If that fails, check all the buckets we haven't checked
+ /*
+ * If that fails, check all the buckets we haven't checked
* already
*/
pr_debug("falling back to linear search");
- for (l = 0; l < ca->sb.njournal_buckets; l++) {
- if (test_bit(l, bitmap))
- continue;
-
+ for (l = find_first_zero_bit(bitmap, ca->sb.njournal_buckets);
+ l < ca->sb.njournal_buckets;
+ l = find_next_zero_bit(bitmap, ca->sb.njournal_buckets, l + 1))
if (read_bucket(l))
goto bsearch;
- }
+
+ if (list_empty(list))
+ continue;
bsearch:
/* Binary search */
m = r = find_next_bit(bitmap, ca->sb.njournal_buckets, l + 1);
r = m;
}
- /* Read buckets in reverse order until we stop finding more
+ /*
+ * Read buckets in reverse order until we stop finding more
* journal entries
*/
- pr_debug("finishing up");
+ pr_debug("finishing up: m %u njournal_buckets %u",
+ m, ca->sb.njournal_buckets);
l = m;
while (1) {
}
}
- c->journal.seq = list_entry(list->prev,
- struct journal_replay,
- list)->j.seq;
+ if (!list_empty(list))
+ c->journal.seq = list_entry(list->prev,
+ struct journal_replay,
+ list)->j.seq;
return 0;
#undef read_bucket
return;
}
- switch (atomic_read(&ja->discard_in_flight) == DISCARD_IN_FLIGHT) {
+ switch (atomic_read(&ja->discard_in_flight)) {
case DISCARD_IN_FLIGHT:
return;
if (cl)
BUG_ON(!closure_wait(&w->wait, cl));
+ closure_flush(&c->journal.io);
__journal_try_write(c, true);
}
}
closure_bio_submit(bio, cl, s->d);
} else {
bch_writeback_add(dc);
+ s->op.cache_bio = bio;
- if (s->op.flush_journal) {
+ if (bio->bi_rw & REQ_FLUSH) {
/* Also need to send a flush to the backing device */
- s->op.cache_bio = bio_clone_bioset(bio, GFP_NOIO,
- dc->disk.bio_split);
-
- bio->bi_size = 0;
- bio->bi_vcnt = 0;
- closure_bio_submit(bio, cl, s->d);
- } else {
- s->op.cache_bio = bio;
+ struct bio *flush = bio_alloc_bioset(GFP_NOIO, 0,
+ dc->disk.bio_split);
+
+ flush->bi_rw = WRITE_FLUSH;
+ flush->bi_bdev = bio->bi_bdev;
+ flush->bi_end_io = request_endio;
+ flush->bi_private = cl;
+
+ closure_bio_submit(flush, cl, s->d);
}
}
out:
}
if (attr == &sysfs_label) {
- /* note: endlines are preserved */
- memcpy(dc->sb.label, buf, SB_LABEL_SIZE);
+ if (size > SB_LABEL_SIZE)
+ return -EINVAL;
+ memcpy(dc->sb.label, buf, size);
+ if (size < SB_LABEL_SIZE)
+ dc->sb.label[size] = '\0';
+ if (size && dc->sb.label[size - 1] == '\n')
+ dc->sb.label[size - 1] = '\0';
bch_write_bdev_super(dc, NULL);
if (dc->disk.c) {
memcpy(dc->disk.c->uuids[dc->disk.id].label,
stats->last = now ?: 1;
}
-unsigned bch_next_delay(struct ratelimit *d, uint64_t done)
+/**
+ * bch_next_delay() - increment @d by the amount of work done, and return how
+ * long to delay until the next time to do some work.
+ *
+ * @d - the struct bch_ratelimit to update
+ * @done - the amount of work done, in arbitrary units
+ *
+ * Returns the amount of time to delay by, in jiffies
+ */
+uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
{
uint64_t now = local_clock();
(ewma) >> factor; \
})
-struct ratelimit {
+struct bch_ratelimit {
+ /* Next time we want to do some work, in nanoseconds */
uint64_t next;
+
+ /*
+ * Rate at which we want to do work, in units per nanosecond
+ * The units here correspond to the units passed to bch_next_delay()
+ */
unsigned rate;
};
-static inline void ratelimit_reset(struct ratelimit *d)
+static inline void bch_ratelimit_reset(struct bch_ratelimit *d)
{
d->next = local_clock();
}
-unsigned bch_next_delay(struct ratelimit *d, uint64_t done);
+uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done);
#define __DIV_SAFE(n, d, zero) \
({ \
static unsigned writeback_delay(struct cached_dev *dc, unsigned sectors)
{
+ uint64_t ret;
+
if (atomic_read(&dc->disk.detaching) ||
!dc->writeback_percent)
return 0;
- return bch_next_delay(&dc->writeback_rate, sectors * 10000000ULL);
+ ret = bch_next_delay(&dc->writeback_rate, sectors * 10000000ULL);
+
+ return min_t(uint64_t, ret, HZ);
}
/* Background writeback */
up_write(&dc->writeback_lock);
- ratelimit_reset(&dc->writeback_rate);
+ bch_ratelimit_reset(&dc->writeback_rate);
/* Punt to workqueue only so we don't recurse and blow the stack */
continue_at(cl, read_dirty, dirty_wq);
}
bch_keybuf_del(&dc->writeback_keys, w);
- atomic_dec_bug(&dc->in_flight);
-
- closure_wake_up(&dc->writeback_wait);
+ up(&dc->in_flight);
closure_return_with_destructor(cl, dirty_io_destructor);
}
closure_bio_submit(&io->bio, cl, &io->dc->disk);
- continue_at(cl, write_dirty_finish, dirty_wq);
+ continue_at(cl, write_dirty_finish, system_wq);
}
static void read_dirty_endio(struct bio *bio, int error)
closure_bio_submit(&io->bio, cl, &io->dc->disk);
- continue_at(cl, write_dirty, dirty_wq);
+ continue_at(cl, write_dirty, system_wq);
}
static void read_dirty(struct closure *cl)
if (delay > 0 &&
(KEY_START(&w->key) != dc->last_read ||
- jiffies_to_msecs(delay) > 50)) {
- w->private = NULL;
-
- closure_delay(&dc->writeback, delay);
- continue_at(cl, read_dirty, dirty_wq);
- }
+ jiffies_to_msecs(delay) > 50))
+ delay = schedule_timeout_uninterruptible(delay);
dc->last_read = KEY_OFFSET(&w->key);
trace_bcache_writeback(&w->key);
- closure_call(&io->cl, read_dirty_submit, NULL, &dc->disk.cl);
+ down(&dc->in_flight);
+ closure_call(&io->cl, read_dirty_submit, NULL, cl);
delay = writeback_delay(dc, KEY_SIZE(&w->key));
-
- atomic_inc(&dc->in_flight);
-
- if (!closure_wait_event(&dc->writeback_wait, cl,
- atomic_read(&dc->in_flight) < 64))
- continue_at(cl, read_dirty, dirty_wq);
}
if (0) {
bch_keybuf_del(&dc->writeback_keys, w);
}
- refill_dirty(cl);
+ /*
+ * Wait for outstanding writeback IOs to finish (and keybuf slots to be
+ * freed) before refilling again
+ */
+ continue_at(cl, refill_dirty, dirty_wq);
}
/* Init */
void bch_cached_dev_writeback_init(struct cached_dev *dc)
{
+ sema_init(&dc->in_flight, 64);
closure_init_unlocked(&dc->writeback);
init_rwsem(&dc->writeback_lock);
int __init bch_writeback_init(void)
{
- dirty_wq = create_singlethread_workqueue("bcache_writeback");
+ dirty_wq = create_workqueue("bcache_writeback");
if (!dirty_wq)
return -ENOMEM;
#define DM_MSG_PREFIX "io"
#define DM_IO_MAX_REGIONS BITS_PER_LONG
-#define MIN_IOS 16
-#define MIN_BIOS 16
struct dm_io_client {
mempool_t *pool;
struct dm_io_client *dm_io_client_create(void)
{
struct dm_io_client *client;
+ unsigned min_ios = dm_get_reserved_bio_based_ios();
client = kmalloc(sizeof(*client), GFP_KERNEL);
if (!client)
return ERR_PTR(-ENOMEM);
- client->pool = mempool_create_slab_pool(MIN_IOS, _dm_io_cache);
+ client->pool = mempool_create_slab_pool(min_ios, _dm_io_cache);
if (!client->pool)
goto bad;
- client->bios = bioset_create(MIN_BIOS, 0);
+ client->bios = bioset_create(min_ios, 0);
if (!client->bios)
goto bad;
#include <linux/device-mapper.h>
+#include "dm.h"
#include "dm-path-selector.h"
#include "dm-uevent.h"
typedef int (*action_fn) (struct pgpath *pgpath);
-#define MIN_IOS 256 /* Mempool size */
-
static struct kmem_cache *_mpio_cache;
static struct workqueue_struct *kmultipathd, *kmpath_handlerd;
static struct multipath *alloc_multipath(struct dm_target *ti)
{
struct multipath *m;
+ unsigned min_ios = dm_get_reserved_rq_based_ios();
m = kzalloc(sizeof(*m), GFP_KERNEL);
if (m) {
INIT_WORK(&m->trigger_event, trigger_event);
init_waitqueue_head(&m->pg_init_wait);
mutex_init(&m->work_mutex);
- m->mpio_pool = mempool_create_slab_pool(MIN_IOS, _mpio_cache);
+ m->mpio_pool = mempool_create_slab_pool(min_ios, _mpio_cache);
if (!m->mpio_pool) {
kfree(m);
return NULL;
case -EREMOTEIO:
case -EILSEQ:
case -ENODATA:
+ case -ENOSPC:
return 1;
}
if (!error && !clone->errors)
return 0; /* I/O complete */
- if (noretry_error(error))
+ if (noretry_error(error)) {
+ if ((clone->cmd_flags & REQ_WRITE_SAME) &&
+ !clone->q->limits.max_write_same_sectors) {
+ struct queue_limits *limits;
+
+ /* device doesn't really support WRITE SAME, disable it */
+ limits = dm_get_queue_limits(dm_table_get_md(m->ti->table));
+ limits->max_write_same_sectors = 0;
+ }
return error;
+ }
if (mpio->pgpath)
fail_path(mpio->pgpath);
*/
INIT_WORK_ONSTACK(&req.work, do_metadata);
queue_work(ps->metadata_wq, &req.work);
- flush_work(&req.work);
+ flush_workqueue(ps->metadata_wq);
return req.result;
}
return NUM_SNAPSHOT_HDR_CHUNKS + ((ps->exceptions_per_area + 1) * area);
}
+static void skip_metadata(struct pstore *ps)
+{
+ uint32_t stride = ps->exceptions_per_area + 1;
+ chunk_t next_free = ps->next_free;
+ if (sector_div(next_free, stride) == NUM_SNAPSHOT_HDR_CHUNKS)
+ ps->next_free++;
+}
+
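A brief, hedged illustration of the skip above (the numbers are illustrative only): with exceptions_per_area = 255 the stride is 256, so any chunk whose offset within the stride equals NUM_SNAPSHOT_HDR_CHUNKS (1 in this store) is a metadata chunk and must not be handed out for exception data.

    /* illustrative only, not driver code */
    /* next_free = 257: 257 % 256 == 1 -> skip_metadata() bumps next_free to 258 */
    /* next_free = 258: 258 % 256 == 2 -> left untouched */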
/*
* Read or write a metadata area. Remembering to skip the first
* chunk which holds the header.
ps->current_area--;
+ skip_metadata(ps);
+
return 0;
}
struct dm_exception *e)
{
struct pstore *ps = get_info(store);
- uint32_t stride;
- chunk_t next_free;
sector_t size = get_dev_size(dm_snap_cow(store->snap)->bdev);
/* Is there enough room ? */
* Move onto the next free pending, making sure to take
* into account the location of the metadata chunks.
*/
- stride = (ps->exceptions_per_area + 1);
- next_free = ++ps->next_free;
- if (sector_div(next_free, stride) == 1)
- ps->next_free++;
+ ps->next_free++;
+ skip_metadata(ps);
atomic_inc(&ps->pending_count);
return 0;
*/
static int init_hash_tables(struct dm_snapshot *s)
{
- sector_t hash_size, cow_dev_size, origin_dev_size, max_buckets;
+ sector_t hash_size, cow_dev_size, max_buckets;
/*
* Calculate based on the size of the original volume or
* the COW volume...
*/
cow_dev_size = get_dev_size(s->cow->bdev);
- origin_dev_size = get_dev_size(s->origin->bdev);
max_buckets = calc_max_buckets();
- hash_size = min(origin_dev_size, cow_dev_size) >> s->store->chunk_shift;
+ hash_size = cow_dev_size >> s->store->chunk_shift;
hash_size = min(hash_size, max_buckets);
if (hash_size < 64)
struct dm_stat_percpu *p;
/*
- * For strict correctness we should use local_irq_disable/enable
+ * For strict correctness we should use local_irq_save/restore
* instead of preempt_disable/enable.
*
- * This is racy if the driver finishes bios from non-interrupt
- * context as well as from interrupt context or from more different
- * interrupts.
+ * preempt_disable/enable is racy if the driver finishes bios
+ * from non-interrupt context as well as from interrupt context
+ * or from multiple different interrupts.
*
- * However, the race only results in not counting some events,
- * so it is acceptable.
+ * On 64-bit architectures the race only results in not counting some
+ * events, so it is acceptable. On 32-bit architectures the race could
+ * cause the counter to go off by 2^32, so we need to do proper locking
+ * there.
*
* part_stat_lock()/part_stat_unlock() have this race too.
*/
+#if BITS_PER_LONG == 32
+ unsigned long flags;
+ local_irq_save(flags);
+#else
preempt_disable();
+#endif
p = &s->stat_percpu[smp_processor_id()][entry];
if (!end) {
p->ticks[idx] += duration;
}
+#if BITS_PER_LONG == 32
+ local_irq_restore(flags);
+#else
preempt_enable();
+#endif
}
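A hedged sketch (not driver code) of the hazard the #if above closes on 32-bit: a 64-bit counter update compiles to two 32-bit operations, and preempt_disable() does not keep interrupts out, so an interrupt that also touches the counter between the two halves can leave it off by 2^32. Disabling local interrupts makes the two halves appear atomic with respect to this CPU's interrupt handlers.

    /* illustrative only */
    static void torn_update(uint64_t *counter)
    {
    	*counter += 1;	/* on 32-bit this is a low-word add plus a carry into
    			 * the high word; an interrupt in between can observe
    			 * or write back a half-updated value */
    }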
static void __dm_stat_bio(struct dm_stat *s, unsigned long bi_rw,
* them down to the data device. The thin device's discard
* processing will cause mappings to be removed from the btree.
*/
+ ti->discard_zeroes_data_unsupported = true;
if (pf.discard_enabled && pf.discard_passdown) {
ti->num_discard_bios = 1;
* thin devices' discard limits consistent).
*/
ti->discards_supported = true;
- ti->discard_zeroes_data_unsupported = true;
}
ti->private = pt;
* They get transferred to the live pool in bind_control_target()
* called from pool_preresume().
*/
- if (!pt->adjusted_pf.discard_enabled)
+ if (!pt->adjusted_pf.discard_enabled) {
+ /*
+ * Must explicitly disallow stacking discard limits; otherwise the
+ * block layer will stack them if the pool's data device has support.
+ * QUEUE_FLAG_DISCARD wouldn't be set, but there is no way for the
+ * user to see that, so make sure all discard limits are set to 0.
+ */
+ limits->discard_granularity = 0;
return;
+ }
disable_passdown_if_not_supported(pt);
ti->per_bio_data_size = sizeof(struct dm_thin_endio_hook);
/* In case the pool supports discards, pass them on. */
+ ti->discard_zeroes_data_unsupported = true;
if (tc->pool->pf.discard_enabled) {
ti->discards_supported = true;
ti->num_discard_bios = 1;
- ti->discard_zeroes_data_unsupported = true;
/* Discard bios must be split on a block boundary */
ti->split_discard_bios = true;
}
struct bio_set *bs;
};
-#define MIN_IOS 256
+#define RESERVED_BIO_BASED_IOS 16
+#define RESERVED_REQUEST_BASED_IOS 256
+#define RESERVED_MAX_IOS 1024
static struct kmem_cache *_io_cache;
static struct kmem_cache *_rq_tio_cache;
+/*
+ * Bio-based DM's mempools' reserved IOs set by the user.
+ */
+static unsigned reserved_bio_based_ios = RESERVED_BIO_BASED_IOS;
+
+/*
+ * Request-based DM's mempools' reserved IOs set by the user.
+ */
+static unsigned reserved_rq_based_ios = RESERVED_REQUEST_BASED_IOS;
+
+static unsigned __dm_get_reserved_ios(unsigned *reserved_ios,
+ unsigned def, unsigned max)
+{
+ unsigned ios = ACCESS_ONCE(*reserved_ios);
+ unsigned modified_ios = 0;
+
+ if (!ios)
+ modified_ios = def;
+ else if (ios > max)
+ modified_ios = max;
+
+ if (modified_ios) {
+ (void)cmpxchg(reserved_ios, ios, modified_ios);
+ ios = modified_ios;
+ }
+
+ return ios;
+}
+
+unsigned dm_get_reserved_bio_based_ios(void)
+{
+ return __dm_get_reserved_ios(&reserved_bio_based_ios,
+ RESERVED_BIO_BASED_IOS, RESERVED_MAX_IOS);
+}
+EXPORT_SYMBOL_GPL(dm_get_reserved_bio_based_ios);
+
+unsigned dm_get_reserved_rq_based_ios(void)
+{
+ return __dm_get_reserved_ios(&reserved_rq_based_ios,
+ RESERVED_REQUEST_BASED_IOS, RESERVED_MAX_IOS);
+}
+EXPORT_SYMBOL_GPL(dm_get_reserved_rq_based_ios);
+
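For illustration, the clamp above behaves as follows (values only, not driver output): reserved_bio_based_ios = 0 yields the default of 16, a value within range such as 64 is returned unchanged, and anything above RESERVED_MAX_IOS is cut to 1024; in the first and last cases the module parameter itself is rewritten to the clamped value via cmpxchg(), so later readers see the sanitized number.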
static int __init local_init(void)
{
int r = -ENOMEM;
}
/*
+ * The queue_limits are only valid as long as you have a reference
+ * count on 'md'.
+ */
+struct queue_limits *dm_get_queue_limits(struct mapped_device *md)
+{
+ BUG_ON(!atomic_read(&md->holders));
+ return &md->queue->limits;
+}
+EXPORT_SYMBOL_GPL(dm_get_queue_limits);
+
+/*
* Fully initialize a request-based queue (->elevator, ->request_fn, etc).
*/
static int dm_init_request_based_queue(struct mapped_device *md)
if (type == DM_TYPE_BIO_BASED) {
cachep = _io_cache;
- pool_size = 16;
+ pool_size = dm_get_reserved_bio_based_ios();
front_pad = roundup(per_bio_data_size, __alignof__(struct dm_target_io)) + offsetof(struct dm_target_io, clone);
} else if (type == DM_TYPE_REQUEST_BASED) {
cachep = _rq_tio_cache;
- pool_size = MIN_IOS;
+ pool_size = dm_get_reserved_rq_based_ios();
front_pad = offsetof(struct dm_rq_clone_bio_info, clone);
/* per_bio_data_size is not used. See __bind_mempools(). */
WARN_ON(per_bio_data_size != 0);
} else
goto out;
- pools->io_pool = mempool_create_slab_pool(MIN_IOS, cachep);
+ pools->io_pool = mempool_create_slab_pool(pool_size, cachep);
if (!pools->io_pool)
goto out;
module_param(major, uint, 0);
MODULE_PARM_DESC(major, "The major number of the device mapper");
+
+module_param(reserved_bio_based_ios, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(reserved_bio_based_ios, "Reserved IOs in bio-based mempools");
+
+module_param(reserved_rq_based_ios, uint, S_IRUGO | S_IWUSR);
+MODULE_PARM_DESC(reserved_rq_based_ios, "Reserved IOs in request-based mempools");
+
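Since both parameters are writable (S_IRUGO | S_IWUSR), they can be given at load time or adjusted afterwards through sysfs; a hedged usage sketch, assuming the usual dm_mod module name:

    modprobe dm_mod reserved_bio_based_ios=32
    echo 512 > /sys/module/dm_mod/parameters/reserved_rq_based_ios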
MODULE_DESCRIPTION(DM_NAME " driver");
MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>");
MODULE_LICENSE("GPL");
/*
* Helpers that are used by DM core
*/
+unsigned dm_get_reserved_bio_based_ios(void);
+unsigned dm_get_reserved_rq_based_ios(void);
+
static inline bool dm_message_test_buffer_overflow(char *result, unsigned maxlen)
{
return !maxlen || strlen(result) + 1 >= maxlen;
u64 *p;
int lo, hi;
int rv = 1;
+ unsigned long flags;
if (bb->shift < 0)
/* badblocks are disabled */
sectors = next - s;
}
- write_seqlock_irq(&bb->lock);
+ write_seqlock_irqsave(&bb->lock, flags);
p = bb->page;
lo = 0;
bb->changed = 1;
if (!acknowledged)
bb->unacked_exist = 1;
- write_sequnlock_irq(&bb->lock);
+ write_sequnlock_irqrestore(&bb->lock, flags);
return rv;
}
}
}
if (rdev
+ && rdev->recovery_offset == MaxSector
&& !test_bit(Faulty, &rdev->flags)
&& !test_and_set_bit(In_sync, &rdev->flags)) {
count++;
}
sysfs_notify_dirent_safe(tmp->replacement->sysfs_state);
} else if (tmp->rdev
+ && tmp->rdev->recovery_offset == MaxSector
&& !test_bit(Faulty, &tmp->rdev->flags)
&& !test_and_set_bit(In_sync, &tmp->rdev->flags)) {
count++;
bi->bi_io_vec[0].bv_len = STRIPE_SIZE;
bi->bi_io_vec[0].bv_offset = 0;
bi->bi_size = STRIPE_SIZE;
+ /*
+ * If this is a discard request, set bi_vcnt to 0. We don't
+ * want to confuse SCSI because SCSI will replace the payload.
+ */
+ if (rw & REQ_DISCARD)
+ bi->bi_vcnt = 0;
if (rrdev)
set_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags);
rbi->bi_io_vec[0].bv_len = STRIPE_SIZE;
rbi->bi_io_vec[0].bv_offset = 0;
rbi->bi_size = STRIPE_SIZE;
+ /*
+ * If this is a discard request, set bi_vcnt to 0. We don't
+ * want to confuse SCSI because SCSI will replace the payload.
+ */
+ if (rw & REQ_DISCARD)
+ rbi->bi_vcnt = 0;
if (conf->mddev->gendisk)
trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev),
rbi, disk_devt(conf->mddev->gendisk),
}
/* now that discard is done we can proceed with any sync */
clear_bit(STRIPE_DISCARD, &sh->state);
+ /*
+ * SCSI discard will change some bio fields and the stripe has
+ * no updated data, so remove it from the hash list so that the
+ * stripe will be reinitialized.
+ */
+ spin_lock_irq(&conf->device_lock);
+ remove_hash(sh);
+ spin_unlock_irq(&conf->device_lock);
if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state))
set_bit(STRIPE_HANDLE, &sh->state);
{ 0xd5, 0x03, 0x03 },
};
- /* firmware status */
- ret = tda10071_rd_reg(priv, 0x51, &tmp);
- if (ret)
- goto error;
-
- if (!tmp) {
+ if (priv->warm) {
/* warm state - wake up device from sleep */
- priv->warm = 1;
for (i = 0; i < ARRAY_SIZE(tab); i++) {
ret = tda10071_wr_reg_mask(priv, tab[i].reg,
goto error;
} else {
/* cold state - try to download firmware */
- priv->warm = 0;
/* request the firmware, this will block and timeout */
ret = request_firmware(&fw, fw_file, priv->i2c->dev.parent);
static const struct v4l2_dv_timings_cap ad9389b_timings_cap = {
.type = V4L2_DV_BT_656_1120,
- .bt = {
- .max_width = 1920,
- .max_height = 1200,
- .min_pixelclock = 25000000,
- .max_pixelclock = 170000000,
- .standards = V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+ V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 170000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
- .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE |
- V4L2_DV_BT_CAP_REDUCED_BLANKING | V4L2_DV_BT_CAP_CUSTOM,
- },
+ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+ V4L2_DV_BT_CAP_CUSTOM)
};
static int ad9389b_s_dv_timings(struct v4l2_subdev *sd,
static const struct v4l2_dv_timings_cap adv7511_timings_cap = {
.type = V4L2_DV_BT_656_1120,
- .bt = {
- .max_width = ADV7511_MAX_WIDTH,
- .max_height = ADV7511_MAX_HEIGHT,
- .min_pixelclock = ADV7511_MIN_PIXELCLOCK,
- .max_pixelclock = ADV7511_MAX_PIXELCLOCK,
- .standards = V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+ V4L2_INIT_BT_TIMINGS(0, ADV7511_MAX_WIDTH, 0, ADV7511_MAX_HEIGHT,
+ ADV7511_MIN_PIXELCLOCK, ADV7511_MAX_PIXELCLOCK,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
- .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE |
- V4L2_DV_BT_CAP_REDUCED_BLANKING | V4L2_DV_BT_CAP_CUSTOM,
- },
+ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+ V4L2_DV_BT_CAP_CUSTOM)
};
static inline struct adv7511_state *get_adv7511_state(struct v4l2_subdev *sd)
state->i2c_edid = i2c_new_dummy(client->adapter, state->i2c_edid_addr >> 1);
if (state->i2c_edid == NULL) {
v4l2_err(sd, "failed to register edid i2c client\n");
+ err = -ENOMEM;
goto err_entity;
}
state->work_queue = create_singlethread_workqueue(sd->name);
if (state->work_queue == NULL) {
v4l2_err(sd, "could not create workqueue\n");
+ err = -ENOMEM;
goto err_unreg_cec;
}
static const struct v4l2_dv_timings_cap adv7842_timings_cap_analog = {
.type = V4L2_DV_BT_656_1120,
- .bt = {
- .max_width = 1920,
- .max_height = 1200,
- .min_pixelclock = 25000000,
- .max_pixelclock = 170000000,
- .standards = V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+ V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 170000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
- .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE |
- V4L2_DV_BT_CAP_REDUCED_BLANKING | V4L2_DV_BT_CAP_CUSTOM,
- },
+ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+ V4L2_DV_BT_CAP_CUSTOM)
};
static const struct v4l2_dv_timings_cap adv7842_timings_cap_digital = {
.type = V4L2_DV_BT_656_1120,
- .bt = {
- .max_width = 1920,
- .max_height = 1200,
- .min_pixelclock = 25000000,
- .max_pixelclock = 225000000,
- .standards = V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+ V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1200, 25000000, 225000000,
+ V4L2_DV_BT_STD_CEA861 | V4L2_DV_BT_STD_DMT |
V4L2_DV_BT_STD_GTF | V4L2_DV_BT_STD_CVT,
- .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE |
- V4L2_DV_BT_CAP_REDUCED_BLANKING | V4L2_DV_BT_CAP_CUSTOM,
- },
+ V4L2_DV_BT_CAP_PROGRESSIVE | V4L2_DV_BT_CAP_REDUCED_BLANKING |
+ V4L2_DV_BT_CAP_CUSTOM)
};
static inline const struct v4l2_dv_timings_cap *
static const struct v4l2_dv_timings_cap ths8200_timings_cap = {
.type = V4L2_DV_BT_656_1120,
- .bt = {
- .max_width = 1920,
- .max_height = 1080,
- .min_pixelclock = 25000000,
- .max_pixelclock = 148500000,
- .standards = V4L2_DV_BT_STD_CEA861,
- .capabilities = V4L2_DV_BT_CAP_PROGRESSIVE,
- },
+ /* keep this initialization for compatibility with GCC < 4.4.6 */
+ .reserved = { 0 },
+ V4L2_INIT_BT_TIMINGS(0, 1920, 0, 1080, 25000000, 148500000,
+ V4L2_DV_BT_STD_CEA861, V4L2_DV_BT_CAP_PROGRESSIVE)
};
static inline struct ths8200_state *to_state(struct v4l2_subdev *sd)
/* stop video capture */
if (res_check(fh, RESOURCE_VIDEO)) {
+ pm_qos_remove_request(&dev->qos_request);
videobuf_streamoff(&fh->cap);
res_free(dev,fh,RESOURCE_VIDEO);
}
jpeg->vfd_decoder->release = video_device_release;
jpeg->vfd_decoder->lock = &jpeg->lock;
jpeg->vfd_decoder->v4l2_dev = &jpeg->v4l2_dev;
+ jpeg->vfd_decoder->vfl_dir = VFL_DIR_M2M;
ret = video_register_device(jpeg->vfd_decoder, VFL_TYPE_GRABBER, -1);
if (ret) {
v4l_bound_align_image(&pix->width, 0, VOU_MAX_IMAGE_WIDTH, 1,
&pix->height, 0, VOU_MAX_IMAGE_HEIGHT, 1, 0);
- for (i = 0; ARRAY_SIZE(vou_fmt); i++)
+ for (i = 0; i < ARRAY_SIZE(vou_fmt); i++)
if (vou_fmt[i].pfmt == pix->pixelformat)
return 0;
struct idmac_channel *ichan = mx3_cam->idmac_channel[0];
struct idmac_video_param *video = &ichan->params.video;
const struct soc_mbus_pixelfmt *host_fmt = icd->current_fmt->host_fmt;
- unsigned long flags;
dma_cookie_t cookie;
size_t new_size;
memset(vb2_plane_vaddr(vb, 0), 0xaa, vb2_get_plane_payload(vb, 0));
#endif
- spin_lock_irqsave(&mx3_cam->lock, flags);
+ spin_lock_irq(&mx3_cam->lock);
list_add_tail(&buf->queue, &mx3_cam->capture);
if (!mx3_cam->active)
if (mx3_cam->active == buf)
mx3_cam->active = NULL;
- spin_unlock_irqrestore(&mx3_cam->lock, flags);
+ spin_unlock_irq(&mx3_cam->lock);
error:
vb2_buffer_done(vb, VB2_BUF_STATE_ERROR);
}
*/
#include "e4000_priv.h"
+#include <linux/math64.h>
/* write multiple registers */
static int e4000_wr_regs(struct e4000_priv *priv, u8 reg, u8 *val, int len)
* or more.
*/
f_vco = c->frequency * e4000_pll_lut[i].mul;
- sigma_delta = 0x10000UL * (f_vco % priv->cfg->clock) / priv->cfg->clock;
+ sigma_delta = div_u64(0x10000ULL * (f_vco % priv->cfg->clock), priv->cfg->clock);
buf[0] = f_vco / priv->cfg->clock;
buf[1] = (sigma_delta >> 0) & 0xff;
buf[2] = (sigma_delta >> 8) & 0xff;
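A hedged bit of arithmetic on why div_u64() is needed here (the clock value is illustrative): with a 26 MHz reference clock the remainder f_vco % clock can approach 26,000,000, and 0x10000 * 26,000,000 is roughly 1.7e12, which overflows a 32-bit unsigned long; keeping the product in a u64 and dividing with div_u64() also avoids pulling in a 64-by-64 division helper on 32-bit builds.

    /* illustrative only */
    u64 rem = 25999999;	/* worst-case f_vco % clock for a 26 MHz clock */
    u64 sd  = div_u64(0x10000ULL * rem, 26000000);	/* product ~1.7e12 fits in 64 bits */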
DMI_MATCH(DMI_PRODUCT_NAME, "F3JC")
}
},
+ {
+ .ident = "T12Rg-H",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "HCL Infosystems Limited"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "T12Rg-H")
+ }
+ },
{}
};
.bInterfaceSubClass = 1,
.bInterfaceProtocol = 0,
.driver_info = UVC_QUIRK_PROBE_MINMAX },
+ /* Microsoft Lifecam NX-3000 */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+ .idVendor = 0x045e,
+ .idProduct = 0x0721,
+ .bInterfaceClass = USB_CLASS_VIDEO,
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_QUIRK_PROBE_DEF },
/* Microsoft Lifecam VX-7000 */
{ .match_flags = USB_DEVICE_ID_MATCH_DEVICE
| USB_DEVICE_ID_MATCH_INT_INFO,
.bInterfaceSubClass = 1,
.bInterfaceProtocol = 0,
.driver_info = UVC_QUIRK_PROBE_DEF },
+ /* Dell SP2008WFP Monitor */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+ .idVendor = 0x05a9,
+ .idProduct = 0x2641,
+ .bInterfaceClass = USB_CLASS_VIDEO,
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_QUIRK_PROBE_DEF },
/* Dell Alienware X51 */
{ .match_flags = USB_DEVICE_ID_MATCH_DEVICE
| USB_DEVICE_ID_MATCH_INT_INFO,
if (b->m.planes[plane].bytesused > length)
return -EINVAL;
- if (b->m.planes[plane].data_offset >=
+
+ if (b->m.planes[plane].data_offset > 0 &&
+ b->m.planes[plane].data_offset >=
b->m.planes[plane].bytesused)
return -EINVAL;
}
return !!(vma->vm_flags & (VM_IO | VM_PFNMAP));
}
+static int vb2_dc_get_user_pfn(unsigned long start, int n_pages,
+ struct vm_area_struct *vma, unsigned long *res)
+{
+ unsigned long pfn, start_pfn, prev_pfn;
+ unsigned int i;
+ int ret;
+
+ if (!vma_is_io(vma))
+ return -EFAULT;
+
+ ret = follow_pfn(vma, start, &pfn);
+ if (ret)
+ return ret;
+
+ start_pfn = pfn;
+ start += PAGE_SIZE;
+
+ for (i = 1; i < n_pages; ++i, start += PAGE_SIZE) {
+ prev_pfn = pfn;
+ ret = follow_pfn(vma, start, &pfn);
+
+ if (ret) {
+ pr_err("no page for address %lu\n", start);
+ return ret;
+ }
+ if (pfn != prev_pfn + 1)
+ return -EINVAL;
+ }
+
+ *res = start_pfn;
+ return 0;
+}
+
static int vb2_dc_get_user_pages(unsigned long start, struct page **pages,
int n_pages, struct vm_area_struct *vma, int write)
{
unsigned long pfn;
int ret = follow_pfn(vma, start, &pfn);
+ if (!pfn_valid(pfn))
+ return -EINVAL;
+
if (ret) {
pr_err("no page for address %lu\n", start);
return ret;
struct vb2_dc_buf *buf = buf_priv;
struct sg_table *sgt = buf->dma_sgt;
- dma_unmap_sg(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir);
- if (!vma_is_io(buf->vma))
- vb2_dc_sgt_foreach_page(sgt, vb2_dc_put_dirty_page);
+ if (sgt) {
+ dma_unmap_sg(buf->dev, sgt->sgl, sgt->orig_nents, buf->dma_dir);
+ if (!vma_is_io(buf->vma))
+ vb2_dc_sgt_foreach_page(sgt, vb2_dc_put_dirty_page);
- sg_free_table(sgt);
- kfree(sgt);
+ sg_free_table(sgt);
+ kfree(sgt);
+ }
vb2_put_vma(buf->vma);
kfree(buf);
}
+/*
+ * For some kinds of reserved memory there might be no struct page available,
+ * so all that can be done to support such 'pages' is to try to convert the
+ * pfn to a dma address or, as a last resort, just assume that
+ * dma address == physical address (as was assumed in earlier versions of
+ * videobuf2-dma-contig).
+ */
+
+#ifdef __arch_pfn_to_dma
+static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
+{
+ return (dma_addr_t)__arch_pfn_to_dma(dev, pfn);
+}
+#elif defined(__pfn_to_bus)
+static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
+{
+ return (dma_addr_t)__pfn_to_bus(pfn);
+}
+#elif defined(__pfn_to_phys)
+static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
+{
+ return (dma_addr_t)__pfn_to_phys(pfn);
+}
+#else
+static inline dma_addr_t vb2_dc_pfn_to_dma(struct device *dev, unsigned long pfn)
+{
+ /* really, we cannot do anything better at this point */
+ return (dma_addr_t)(pfn) << PAGE_SHIFT;
+}
+#endif
+
static void *vb2_dc_get_userptr(void *alloc_ctx, unsigned long vaddr,
unsigned long size, int write)
{
/* extract page list from userspace mapping */
ret = vb2_dc_get_user_pages(start, pages, n_pages, vma, write);
if (ret) {
+ unsigned long pfn;
+ if (vb2_dc_get_user_pfn(start, n_pages, vma, &pfn) == 0) {
+ buf->dma_addr = vb2_dc_pfn_to_dma(buf->dev, pfn);
+ buf->size = size;
+ kfree(pages);
+ return buf;
+ }
+
pr_err("failed to get user pages\n");
goto fail_vma;
}
{
int ret;
- BUG_ON(!mutex_is_locked(&mc13xxx->lock));
-
if (offset > MC13XXX_NUMREGS)
return -EINVAL;
int mc13xxx_reg_write(struct mc13xxx *mc13xxx, unsigned int offset, u32 val)
{
- BUG_ON(!mutex_is_locked(&mc13xxx->lock));
-
dev_vdbg(mc13xxx->dev, "[0x%02x] <- 0x%06x\n", offset, val);
if (offset > MC13XXX_NUMREGS || val > 0xffffff)
int mc13xxx_reg_rmw(struct mc13xxx *mc13xxx, unsigned int offset,
u32 mask, u32 val)
{
- BUG_ON(!mutex_is_locked(&mc13xxx->lock));
BUG_ON(val & ~mask);
dev_vdbg(mc13xxx->dev, "[0x%02x] <- 0x%06x (mask: 0x%06x)\n",
offset, val, mask);
{
struct device *dev = context;
struct spi_device *spi = to_spi_device(dev);
+ const char *reg = data;
if (count != 4)
return -ENOTSUPP;
+ /* include errata fix for spi audio problems */
+ if (*reg == MC13783_AUDIO_CODEC || *reg == MC13783_AUDIO_DAC)
+ spi_write(spi, data, count);
+
return spi_write(spi, data, count);
}
dev->iamthif_ioctl = false;
dev->iamthif_state = MEI_IAMTHIF_IDLE;
dev->iamthif_timer = 0;
+ dev->iamthif_stall_timer = 0;
}
/**
if (cl->reading_state != MEI_READ_COMPLETE &&
!waitqueue_active(&cl->rx_wait)) {
+
mutex_unlock(&dev->device_lock);
if (wait_event_interruptible(cl->rx_wait,
- (MEI_READ_COMPLETE == cl->reading_state))) {
+ cl->reading_state == MEI_READ_COMPLETE ||
+ mei_cl_is_transitioning(cl))) {
+
if (signal_pending(current))
return -EINTR;
return -ERESTARTSYS;
cl->dev->dev_state == MEI_DEV_ENABLED &&
cl->state == MEI_FILE_CONNECTED);
}
+static inline bool mei_cl_is_transitioning(struct mei_cl *cl)
+{
+ return (MEI_FILE_INITIALIZING == cl->state ||
+ MEI_FILE_DISCONNECTED == cl->state ||
+ MEI_FILE_DISCONNECTING == cl->state);
+}
bool mei_cl_is_other_connecting(struct mei_cl *cl);
int mei_cl_disconnect(struct mei_cl *cl);
struct mei_me_client *clients;
int b;
+ dev->me_clients_num = 0;
+ dev->me_client_presentation_num = 0;
+ dev->me_client_index = 0;
+
/* count how many ME clients we have */
for_each_set_bit(b, dev->me_clients_map, MEI_CLIENTS_MAX)
dev->me_clients_num++;
- if (dev->me_clients_num <= 0)
+ if (dev->me_clients_num == 0)
return;
kfree(dev->me_clients);
struct hbm_props_request *prop_req;
const size_t len = sizeof(struct hbm_props_request);
unsigned long next_client_index;
- u8 client_num;
+ unsigned long client_num;
client_num = dev->me_client_presentation_num;
if (dev->dev_state == MEI_DEV_INIT_CLIENTS &&
dev->hbm_state == MEI_HBM_ENUM_CLIENTS) {
dev->init_clients_timer = 0;
- dev->me_client_presentation_num = 0;
- dev->me_client_index = 0;
mei_hbm_me_cl_allocate(dev);
dev->hbm_state = MEI_HBM_CLIENT_PROPERTIES;
memset(&dev->wr_ext_msg, 0, sizeof(dev->wr_ext_msg));
}
+ /* we're already in reset, cancel the init timer */
+ dev->init_clients_timer = 0;
+
dev->me_clients_num = 0;
dev->rd_msg_hdr = 0;
dev->wd_pending = false;
mutex_unlock(&dev->device_lock);
if (wait_event_interruptible(cl->rx_wait,
- (MEI_READ_COMPLETE == cl->reading_state ||
- MEI_FILE_INITIALIZING == cl->state ||
- MEI_FILE_DISCONNECTED == cl->state ||
- MEI_FILE_DISCONNECTING == cl->state))) {
+ MEI_READ_COMPLETE == cl->reading_state ||
+ mei_cl_is_transitioning(cl))) {
+
if (signal_pending(current))
return -EINTR;
return -ERESTARTSYS;
}
mutex_lock(&dev->device_lock);
- if (MEI_FILE_INITIALIZING == cl->state ||
- MEI_FILE_DISCONNECTED == cl->state ||
- MEI_FILE_DISCONNECTING == cl->state) {
+ if (mei_cl_is_transitioning(cl)) {
rets = -EBUSY;
goto out;
}
struct mei_me_client *me_clients; /* Note: memory has to be allocated */
DECLARE_BITMAP(me_clients_map, MEI_CLIENTS_MAX);
DECLARE_BITMAP(host_clients_map, MEI_CLIENTS_MAX);
- u8 me_clients_num;
- u8 me_client_presentation_num;
- u8 me_client_index;
+ unsigned long me_clients_num;
+ unsigned long me_client_presentation_num;
+ unsigned long me_client_index;
struct mei_cl wd_cl;
enum mei_wd_states wd_state;
};
static const struct of_device_id sh_mobile_sdhi_of_match[] = {
- { .compatible = "renesas,shmobile-sdhi" },
- { .compatible = "renesas,sh7372-sdhi" },
- { .compatible = "renesas,sh73a0-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], },
- { .compatible = "renesas,r8a73a4-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], },
- { .compatible = "renesas,r8a7740-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], },
- { .compatible = "renesas,r8a7778-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], },
- { .compatible = "renesas,r8a7779-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], },
- { .compatible = "renesas,r8a7790-sdhi", .data = &sh_mobile_sdhi_of_cfg[0], },
+ { .compatible = "renesas,sdhi-shmobile" },
+ { .compatible = "renesas,sdhi-sh7372" },
+ { .compatible = "renesas,sdhi-sh73a0", .data = &sh_mobile_sdhi_of_cfg[0], },
+ { .compatible = "renesas,sdhi-r8a73a4", .data = &sh_mobile_sdhi_of_cfg[0], },
+ { .compatible = "renesas,sdhi-r8a7740", .data = &sh_mobile_sdhi_of_cfg[0], },
+ { .compatible = "renesas,sdhi-r8a7778", .data = &sh_mobile_sdhi_of_cfg[0], },
+ { .compatible = "renesas,sdhi-r8a7779", .data = &sh_mobile_sdhi_of_cfg[0], },
+ { .compatible = "renesas,sdhi-r8a7790", .data = &sh_mobile_sdhi_of_cfg[0], },
{},
};
MODULE_DEVICE_TABLE(of, sh_mobile_sdhi_of_match);
*/
static inline int set_4byte(struct m25p *flash, u32 jedec_id, int enable)
{
+ int status;
+ bool need_wren = false;
+
switch (JEDEC_MFR(jedec_id)) {
- case CFI_MFR_MACRONIX:
case CFI_MFR_ST: /* Micron, actually */
+ /* Some Micron parts need the WREN command; all will accept it */
+ need_wren = true;
+ case CFI_MFR_MACRONIX:
case 0xEF /* winbond */:
+ if (need_wren)
+ write_enable(flash);
+
flash->command[0] = enable ? OPCODE_EN4B : OPCODE_EX4B;
- return spi_write(flash->spi, flash->command, 1);
+ status = spi_write(flash->spi, flash->command, 1);
+
+ if (need_wren)
+ write_disable(flash);
+
+ return status;
default:
/* Spansion style */
flash->command[0] = OPCODE_BRWR;
int common_nfc_set_geometry(struct gpmi_nand_data *this)
{
- return set_geometry_by_ecc_info(this) ? 0 : legacy_set_geometry(this);
+ return legacy_set_geometry(this);
}
struct dma_chan *get_dma_chan(struct gpmi_nand_data *this)
len = le16_to_cpu(p->ext_param_page_length) * 16;
ep = kmalloc(len, GFP_KERNEL);
- if (!ep) {
- ret = -ENOMEM;
- goto ext_out;
- }
+ if (!ep)
+ return -ENOMEM;
/* Send our own NAND_CMD_PARAM. */
chip->cmdfunc(mtd, NAND_CMD_PARAM, 0, -1);
}
pr_info("ONFI extended param page detected.\n");
- return 0;
+ ret = 0;
ext_out:
kfree(ep);
return ret;
}
-#ifdef CONFIG_OF
static struct of_device_id pxa3xx_nand_dt_ids[] = {
{
.compatible = "marvell,pxa3xx-nand",
return 0;
}
-#else
-static inline int pxa3xx_nand_probe_dt(struct platform_device *pdev)
-{
- return 0;
-}
-#endif
static int pxa3xx_nand_probe(struct platform_device *pdev)
{
for (cs = 0; cs < pdata->num_cs; cs++) {
struct mtd_info *mtd = info->host[cs]->mtd;
- mtd->name = pdev->name;
+ /*
+ * The mtd name matches the one used in 'mtdparts' kernel
+ * parameter. This name cannot be changed or otherwise
+ * user's mtd partitions configuration would get broken.
+ */
+ mtd->name = "pxa3xx_nand-0";
info->cs = cs;
ret = pxa3xx_nand_scan(mtd);
if (ret) {
bond_info->lp_counter++;
/* send learning packets */
- if (bond_info->lp_counter >= BOND_ALB_LP_TICKS) {
+ if (bond_info->lp_counter >= BOND_ALB_LP_TICKS(bond)) {
/* change of curr_active_slave involves swapping of mac addresses.
* in order to avoid this swapping from happening while
* sending the learning packets, the curr_slave_lock must be held for
* Used for division - never set
* to zero !!!
*/
-#define BOND_ALB_LP_INTERVAL 1 /* In seconds, periodic send of
- * learning packets to the switch
- */
+#define BOND_ALB_DEFAULT_LP_INTERVAL 1
+#define BOND_ALB_LP_INTERVAL(bond) (bond->params.lp_interval) /* In seconds, periodic send of
+ * learning packets to the switch
+ */
#define BOND_TLB_REBALANCE_TICKS (BOND_TLB_REBALANCE_INTERVAL \
* ALB_TIMER_TICKS_PER_SEC)
-#define BOND_ALB_LP_TICKS (BOND_ALB_LP_INTERVAL \
+#define BOND_ALB_LP_TICKS(bond) (BOND_ALB_LP_INTERVAL(bond) \
* ALB_TIMER_TICKS_PER_SEC)
#define TLB_HASH_TABLE_SIZE 256 /* The size of the clients hash table.
struct bonding *bond = netdev_priv(bond_dev);
struct slave *slave, *oldcurrent;
struct sockaddr addr;
+ int old_flags = bond_dev->flags;
netdev_features_t old_features = bond_dev->features;
/* slave is not a slave or master is not master of this slave */
* bond_change_active_slave(..., NULL)
*/
if (!USES_PRIMARY(bond->params.mode)) {
- /* unset promiscuity level from slave */
- if (bond_dev->flags & IFF_PROMISC)
+ /* unset promiscuity level from slave
+ * NOTE: The NETDEV_CHANGEADDR call above may change the value
+ * of the IFF_PROMISC flag in the bond_dev, but we need the
+ * value of that flag before that change, as that was the value
+ * when this slave was attached, so we cache it at the start of the
+ * function and use it here. The same goes for ALLMULTI below.
+ */
+ if (old_flags & IFF_PROMISC)
dev_set_promiscuity(slave_dev, -1);
/* unset allmulti level from slave */
- if (bond_dev->flags & IFF_ALLMULTI)
+ if (old_flags & IFF_ALLMULTI)
dev_set_allmulti(slave_dev, -1);
bond_hw_addr_flush(bond_dev, slave_dev);
params->all_slaves_active = all_slaves_active;
params->resend_igmp = resend_igmp;
params->min_links = min_links;
+ params->lp_interval = BOND_ALB_DEFAULT_LP_INTERVAL;
if (primary) {
strncpy(params->primary, primary, IFNAMSIZ);
static DEVICE_ATTR(resend_igmp, S_IRUGO | S_IWUSR,
bonding_show_resend_igmp, bonding_store_resend_igmp);
+
+static ssize_t bonding_show_lp_interval(struct device *d,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct bonding *bond = to_bond(d);
+ return sprintf(buf, "%d\n", bond->params.lp_interval);
+}
+
+static ssize_t bonding_store_lp_interval(struct device *d,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct bonding *bond = to_bond(d);
+ int new_value, ret = count;
+
+ if (sscanf(buf, "%d", &new_value) != 1) {
+ pr_err("%s: no lp interval value specified.\n",
+ bond->dev->name);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (new_value <= 0) {
+ pr_err("%s: lp_interval must be between 1 and %d\n",
+ bond->dev->name, INT_MAX);
+ ret = -EINVAL;
+ goto out;
+ }
+
+ bond->params.lp_interval = new_value;
+out:
+ return ret;
+}
+
+static DEVICE_ATTR(lp_interval, S_IRUGO | S_IWUSR,
+ bonding_show_lp_interval, bonding_store_lp_interval);
+
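Once registered, the attribute can be inspected and tuned at runtime through sysfs; a brief usage sketch, assuming a bond named bond0:

    cat /sys/class/net/bond0/bonding/lp_interval
    echo 5 > /sys/class/net/bond0/bonding/lp_interval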
static struct attribute *per_bond_attrs[] = {
&dev_attr_slaves.attr,
&dev_attr_mode.attr,
&dev_attr_all_slaves_active.attr,
&dev_attr_resend_igmp.attr,
&dev_attr_min_links.attr,
+ &dev_attr_lp_interval.attr,
NULL,
};
int tx_queues;
int all_slaves_active;
int resend_igmp;
+ int lp_interval;
};
struct bond_parm_tbl {
static const struct platform_device_id at91_can_id_table[] = {
{
- .name = "at91_can",
+ .name = "at91sam9x5_can",
.driver_data = (kernel_ulong_t)&at91_at91sam9x5_data,
}, {
- .name = "at91sam9x5_can",
+ .name = "at91_can",
.driver_data = (kernel_ulong_t)&at91_at91sam9263_data,
}, {
/* sentinel */
size_t size;
size = nla_total_size(sizeof(u32)); /* IFLA_CAN_STATE */
- size += sizeof(struct can_ctrlmode); /* IFLA_CAN_CTRLMODE */
+ size += nla_total_size(sizeof(struct can_ctrlmode)); /* IFLA_CAN_CTRLMODE */
size += nla_total_size(sizeof(u32)); /* IFLA_CAN_RESTART_MS */
- size += sizeof(struct can_bittiming); /* IFLA_CAN_BITTIMING */
- size += sizeof(struct can_clock); /* IFLA_CAN_CLOCK */
+ size += nla_total_size(sizeof(struct can_bittiming)); /* IFLA_CAN_BITTIMING */
+ size += nla_total_size(sizeof(struct can_clock)); /* IFLA_CAN_CLOCK */
if (priv->do_get_berr_counter) /* IFLA_CAN_BERR_COUNTER */
- size += sizeof(struct can_berr_counter);
+ size += nla_total_size(sizeof(struct can_berr_counter));
if (priv->bittiming_const) /* IFLA_CAN_BITTIMING_CONST */
- size += sizeof(struct can_bittiming_const);
+ size += nla_total_size(sizeof(struct can_bittiming_const));
return size;
}
#define FLEXCAN_MCR_BCC BIT(16)
#define FLEXCAN_MCR_LPRIO_EN BIT(13)
#define FLEXCAN_MCR_AEN BIT(12)
-#define FLEXCAN_MCR_MAXMB(x) ((x) & 0xf)
+#define FLEXCAN_MCR_MAXMB(x) ((x) & 0x1f)
#define FLEXCAN_MCR_IDAM_A (0 << 8)
#define FLEXCAN_MCR_IDAM_B (1 << 8)
#define FLEXCAN_MCR_IDAM_C (2 << 8)
{
struct flexcan_priv *priv = netdev_priv(dev);
struct flexcan_regs __iomem *regs = priv->base;
- unsigned int i;
int err;
u32 reg_mcr, reg_ctrl;
*
*/
reg_mcr = flexcan_read(®s->mcr);
+ reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_FEN | FLEXCAN_MCR_HALT |
FLEXCAN_MCR_SUPV | FLEXCAN_MCR_WRN_EN |
- FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_SRX_DIS;
+ FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_SRX_DIS |
+ FLEXCAN_MCR_MAXMB(FLEXCAN_TX_BUF_ID);
netdev_dbg(dev, "%s: writing mcr=0x%08x", __func__, reg_mcr);
flexcan_write(reg_mcr, ®s->mcr);
netdev_dbg(dev, "%s: writing ctrl=0x%08x", __func__, reg_ctrl);
flexcan_write(reg_ctrl, ®s->ctrl);
- for (i = 0; i < ARRAY_SIZE(regs->cantxfg); i++) {
- flexcan_write(0, ®s->cantxfg[i].can_ctrl);
- flexcan_write(0, ®s->cantxfg[i].can_id);
- flexcan_write(0, ®s->cantxfg[i].data[0]);
- flexcan_write(0, ®s->cantxfg[i].data[1]);
-
- /* put MB into rx queue */
- flexcan_write(FLEXCAN_MB_CNT_CODE(0x4),
- ®s->cantxfg[i].can_ctrl);
- }
+ /* Abort any pending TX, mark Mailbox as INACTIVE */
+ flexcan_write(FLEXCAN_MB_CNT_CODE(0x4),
+ ®s->cantxfg[FLEXCAN_TX_BUF_ID].can_ctrl);
/* acceptance mask/acceptance code (accept everything) */
flexcan_write(0x0, ®s->rxgmask);
}
static const struct of_device_id flexcan_of_match[] = {
- { .compatible = "fsl,p1010-flexcan", .data = &fsl_p1010_devtype_data, },
- { .compatible = "fsl,imx28-flexcan", .data = &fsl_imx28_devtype_data, },
{ .compatible = "fsl,imx6q-flexcan", .data = &fsl_imx6q_devtype_data, },
+ { .compatible = "fsl,imx28-flexcan", .data = &fsl_imx28_devtype_data, },
+ { .compatible = "fsl,p1010-flexcan", .data = &fsl_p1010_devtype_data, },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, flexcan_of_match);
/* maximum rx buffer len: extended CAN frame with timestamp */
#define SLC_MTU (sizeof("T1111222281122334455667788EA5F\r")+1)
+#define SLC_CMD_LEN 1
+#define SLC_SFF_ID_LEN 3
+#define SLC_EFF_ID_LEN 8
+
struct slcan {
int magic;
{
struct sk_buff *skb;
struct can_frame cf;
- int i, dlc_pos, tmp;
- unsigned long ultmp;
- char cmd = sl->rbuff[0];
-
- if ((cmd != 't') && (cmd != 'T') && (cmd != 'r') && (cmd != 'R'))
+ int i, tmp;
+ u32 tmpid;
+ char *cmd = sl->rbuff;
+
+ cf.can_id = 0;
+
+ switch (*cmd) {
+ case 'r':
+ cf.can_id = CAN_RTR_FLAG;
+ /* fallthrough */
+ case 't':
+ /* store dlc ASCII value and terminate SFF CAN ID string */
+ cf.can_dlc = sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN];
+ sl->rbuff[SLC_CMD_LEN + SLC_SFF_ID_LEN] = 0;
+ /* point to payload data behind the dlc */
+ cmd += SLC_CMD_LEN + SLC_SFF_ID_LEN + 1;
+ break;
+ case 'R':
+ cf.can_id = CAN_RTR_FLAG;
+ /* fallthrough */
+ case 'T':
+ cf.can_id |= CAN_EFF_FLAG;
+ /* store dlc ASCII value and terminate EFF CAN ID string */
+ cf.can_dlc = sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN];
+ sl->rbuff[SLC_CMD_LEN + SLC_EFF_ID_LEN] = 0;
+ /* point to payload data behind the dlc */
+ cmd += SLC_CMD_LEN + SLC_EFF_ID_LEN + 1;
+ break;
+ default:
return;
+ }
- if (cmd & 0x20) /* tiny chars 'r' 't' => standard frame format */
- dlc_pos = 4; /* dlc position tiiid */
- else
- dlc_pos = 9; /* dlc position Tiiiiiiiid */
-
- if (!((sl->rbuff[dlc_pos] >= '0') && (sl->rbuff[dlc_pos] < '9')))
+ if (kstrtou32(sl->rbuff + SLC_CMD_LEN, 16, &tmpid))
return;
- cf.can_dlc = sl->rbuff[dlc_pos] - '0'; /* get can_dlc from ASCII val */
+ cf.can_id |= tmpid;
- sl->rbuff[dlc_pos] = 0; /* terminate can_id string */
-
- if (kstrtoul(sl->rbuff+1, 16, &ultmp))
+ /* get can_dlc from sanitized ASCII value */
+ if (cf.can_dlc >= '0' && cf.can_dlc < '9')
+ cf.can_dlc -= '0';
+ else
return;
- cf.can_id = ultmp;
-
- if (!(cmd & 0x20)) /* NO tiny chars => extended frame format */
- cf.can_id |= CAN_EFF_FLAG;
-
- if ((cmd | 0x20) == 'r') /* RTR frame */
- cf.can_id |= CAN_RTR_FLAG;
-
*(u64 *) (&cf.data) = 0; /* clear payload */
- for (i = 0, dlc_pos++; i < cf.can_dlc; i++) {
- tmp = hex_to_bin(sl->rbuff[dlc_pos++]);
- if (tmp < 0)
- return;
- cf.data[i] = (tmp << 4);
- tmp = hex_to_bin(sl->rbuff[dlc_pos++]);
- if (tmp < 0)
- return;
- cf.data[i] |= tmp;
+ /* RTR frames may have a dlc > 0 but they never have any data bytes */
+ if (!(cf.can_id & CAN_RTR_FLAG)) {
+ for (i = 0; i < cf.can_dlc; i++) {
+ tmp = hex_to_bin(*cmd++);
+ if (tmp < 0)
+ return;
+ cf.data[i] = (tmp << 4);
+ tmp = hex_to_bin(*cmd++);
+ if (tmp < 0)
+ return;
+ cf.data[i] |= tmp;
+ }
}
skb = dev_alloc_skb(sizeof(struct can_frame) +
/* parse tty input stream */
static void slcan_unesc(struct slcan *sl, unsigned char s)
{
-
if ((s == '\r') || (s == '\a')) { /* CR or BEL ends the pdu */
if (!test_and_clear_bit(SLF_ERROR, &sl->flags) &&
(sl->rcount > 4)) {
/* Encapsulate one can_frame and stuff into a TTY queue. */
static void slc_encaps(struct slcan *sl, struct can_frame *cf)
{
- int actual, idx, i;
- char cmd;
+ int actual, i;
+ unsigned char *pos;
+ unsigned char *endpos;
+ canid_t id = cf->can_id;
+
+ pos = sl->xbuff;
if (cf->can_id & CAN_RTR_FLAG)
- cmd = 'R'; /* becomes 'r' in standard frame format */
+ *pos = 'R'; /* becomes 'r' in standard frame format (SFF) */
else
- cmd = 'T'; /* becomes 't' in standard frame format */
+ *pos = 'T'; /* becomes 't' in standard frame format (SFF) */
- if (cf->can_id & CAN_EFF_FLAG)
- sprintf(sl->xbuff, "%c%08X%d", cmd,
- cf->can_id & CAN_EFF_MASK, cf->can_dlc);
- else
- sprintf(sl->xbuff, "%c%03X%d", cmd | 0x20,
- cf->can_id & CAN_SFF_MASK, cf->can_dlc);
+ /* determine number of chars for the CAN-identifier */
+ if (cf->can_id & CAN_EFF_FLAG) {
+ id &= CAN_EFF_MASK;
+ endpos = pos + SLC_EFF_ID_LEN;
+ } else {
+ *pos |= 0x20; /* convert R/T to lower case for SFF */
+ id &= CAN_SFF_MASK;
+ endpos = pos + SLC_SFF_ID_LEN;
+ }
- idx = strlen(sl->xbuff);
+ /* build 3 (SFF) or 8 (EFF) digit CAN identifier */
+ pos++;
+ while (endpos >= pos) {
+ *endpos-- = hex_asc_upper[id & 0xf];
+ id >>= 4;
+ }
+
+ pos += (cf->can_id & CAN_EFF_FLAG) ? SLC_EFF_ID_LEN : SLC_SFF_ID_LEN;
- for (i = 0; i < cf->can_dlc; i++)
- sprintf(&sl->xbuff[idx + 2*i], "%02X", cf->data[i]);
+ *pos++ = cf->can_dlc + '0';
+
+ /* RTR frames may have a dlc > 0 but they never have any data bytes */
+ if (!(cf->can_id & CAN_RTR_FLAG)) {
+ for (i = 0; i < cf->can_dlc; i++)
+ pos = hex_byte_pack_upper(pos, cf->data[i]);
+ }
- strcat(sl->xbuff, "\r"); /* add terminating character */
+ *pos++ = '\r';
/* Order of next two lines is *very* important.
* When we are sending a little amount of data,
* 14 Oct 1994 Dmitry Gorodchanin.
*/
set_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
- actual = sl->tty->ops->write(sl->tty, sl->xbuff, strlen(sl->xbuff));
- sl->xleft = strlen(sl->xbuff) - actual;
+ actual = sl->tty->ops->write(sl->tty, sl->xbuff, pos - sl->xbuff);
+ sl->xleft = (pos - sl->xbuff) - actual;
sl->xhead = sl->xbuff + actual;
sl->dev->stats.tx_bytes += cf->can_dlc;
}
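For reference, a hedged illustration of the ASCII frames slc_encaps() emits and slc_bump() parses (values chosen for illustration only):

    id 0x123 (SFF), dlc 2, data AA 55    ->  "t1232AA55\r"
    id 0x12345678 (EFF), dlc 1, data 42  ->  "T12345678142\r"
    id 0x7FF (SFF), RTR, dlc 0           ->  "r7FF0\r"   (no data bytes for RTR)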
if (!sl || sl->magic != SLCAN_MAGIC || !netif_running(sl->dev))
return;
+ spin_lock(&sl->lock);
if (sl->xleft <= 0) {
/* Now serial buffer is almost free & we can start
* transmission of another packet */
sl->dev->stats.tx_packets++;
clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+ spin_unlock(&sl->lock);
netif_wake_queue(sl->dev);
return;
}
actual = tty->ops->write(tty, sl->xhead, sl->xleft);
sl->xleft -= actual;
sl->xhead += actual;
+ spin_unlock(&sl->lock);
}
/* Send a can_frame to a TTY queue. */
if (i < PCAN_USB_MAX_TX_URBS) {
if (i == 0) {
netdev_err(netdev, "couldn't setup any tx URB\n");
- return err;
+ goto err_tx;
}
netdev_warn(netdev, "tx performance may be slow\n");
if (dev->adapter->dev_start) {
err = dev->adapter->dev_start(dev);
if (err)
- goto failed;
+ goto err_adapter;
}
dev->state |= PCAN_USB_STATE_STARTED;
if (dev->adapter->dev_set_bus) {
err = dev->adapter->dev_set_bus(dev, 1);
if (err)
- goto failed;
+ goto err_adapter;
}
dev->can.state = CAN_STATE_ERROR_ACTIVE;
return 0;
-failed:
+err_adapter:
if (err == -ENODEV)
netif_device_detach(dev->netdev);
netdev_warn(netdev, "couldn't submit control: %d\n", err);
+ for (i = 0; i < PCAN_USB_MAX_TX_URBS; i++) {
+ usb_free_urb(dev->tx_contexts[i].urb);
+ dev->tx_contexts[i].urb = NULL;
+ }
+err_tx:
+ usb_kill_anchored_urbs(&dev->rx_submitted);
+
return err;
}
if (lp->wol && !lp->irq_wake_requested) {
/* register wake irq handler */
rc = request_irq(IRQ_MAC_WAKEDET, bfin_mac_wake_interrupt,
- IRQF_DISABLED, "EMAC_WAKE", dev);
+ 0, "EMAC_WAKE", dev);
if (rc)
return rc;
lp->irq_wake_requested = true;
/* now, enable interrupts */
/* register irq handler */
rc = request_irq(IRQ_MAC_RX, bfin_mac_interrupt,
- IRQF_DISABLED, "EMAC_RX", ndev);
+ 0, "EMAC_RX", ndev);
if (rc) {
dev_err(&pdev->dev, "Cannot request Blackfin MAC RX IRQ!\n");
rc = -EBUSY;
REGA(CSR0) = CSR0_STOP;
- if (request_irq(LANCE_IRQ, lance_interrupt, IRQF_DISABLED, "SUN3 Lance", dev) < 0) {
+ if (request_irq(LANCE_IRQ, lance_interrupt, 0, "SUN3 Lance", dev) < 0) {
#ifdef CONFIG_SUN3
iounmap((void __iomem *)ioaddr);
#endif
struct alx_priv *alx;
struct alx_hw *hw;
bool phy_configured;
- int bars, pm_cap, err;
+ int bars, err;
err = pci_enable_device_mem(pdev);
if (err)
pci_enable_pcie_error_reporting(pdev);
pci_set_master(pdev);
- pm_cap = pci_find_capability(pdev, PCI_CAP_ID_PM);
- if (pm_cap == 0) {
+ if (!pdev->pm_cap) {
dev_err(&pdev->dev,
"Can't find power management capability, aborting\n");
err = -EIO;
goto out_pci_release;
}
- err = pci_set_power_state(pdev, PCI_D0);
- if (err)
- goto out_pci_release;
-
netdev = alloc_etherdev(sizeof(*alx));
if (!netdev) {
err = -ENOMEM;
if (++ring->end >= BGMAC_TX_RING_SLOTS)
ring->end = 0;
bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_INDEX,
+ ring->index_base +
ring->end * sizeof(struct bgmac_dma_desc));
/* Always keep one slot free to allow detecting bugged calls. */
/* The last slot that hardware didn't consume yet */
empty_slot = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_TX_STATUS);
empty_slot &= BGMAC_DMA_TX_STATDPTR;
+ empty_slot -= ring->index_base;
+ empty_slot &= BGMAC_DMA_TX_STATDPTR;
empty_slot /= sizeof(struct bgmac_dma_desc);
while (ring->start != empty_slot) {
end_slot = bgmac_read(bgmac, ring->mmio_base + BGMAC_DMA_RX_STATUS);
end_slot &= BGMAC_DMA_RX_STATDPTR;
+ end_slot -= ring->index_base;
+ end_slot &= BGMAC_DMA_RX_STATDPTR;
end_slot /= sizeof(struct bgmac_dma_desc);
ring->end = end_slot;
ring = &bgmac->tx_ring[i];
ring->num_slots = BGMAC_TX_RING_SLOTS;
ring->mmio_base = ring_base[i];
- if (bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_TX))
- bgmac_warn(bgmac, "TX on ring 0x%X supports unaligned addressing but this feature is not implemented\n",
- ring->mmio_base);
/* Alloc ring of descriptors */
size = ring->num_slots * sizeof(struct bgmac_dma_desc);
if (ring->dma_base & 0xC0000000)
bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");
+ ring->unaligned = bgmac_dma_unaligned(bgmac, ring,
+ BGMAC_DMA_RING_TX);
+ if (ring->unaligned)
+ ring->index_base = lower_32_bits(ring->dma_base);
+ else
+ ring->index_base = 0;
+
/* No need to alloc TX slots yet */
}
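A short, hedged note on the index_base bookkeeping above: for rings the hardware cannot fully align, the ring index registers are interpreted relative to the low 32 bits of the descriptor ring's DMA base, so that value is added when an index is written and masked back out when TX/RX status is read. Illustrative arithmetic only, assuming 16-byte descriptors:

    /* dma_base low 32 bits = 0x00012440, slot 3 */
    /* value written to BGMAC_DMA_TX_INDEX = 0x12440 + 3 * 16 = 0x12470 */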
ring = &bgmac->rx_ring[i];
ring->num_slots = BGMAC_RX_RING_SLOTS;
ring->mmio_base = ring_base[i];
- if (bgmac_dma_unaligned(bgmac, ring, BGMAC_DMA_RING_RX))
- bgmac_warn(bgmac, "RX on ring 0x%X supports unaligned addressing but this feature is not implemented\n",
- ring->mmio_base);
/* Alloc ring of descriptors */
size = ring->num_slots * sizeof(struct bgmac_dma_desc);
if (ring->dma_base & 0xC0000000)
bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");
+ ring->unaligned = bgmac_dma_unaligned(bgmac, ring,
+ BGMAC_DMA_RING_RX);
+ if (ring->unaligned)
+ ring->index_base = lower_32_bits(ring->dma_base);
+ else
+ ring->index_base = 0;
+
/* Alloc RX slots */
for (j = 0; j < ring->num_slots; j++) {
err = bgmac_dma_rx_skb_for_slot(bgmac, &ring->slots[j]);
for (i = 0; i < BGMAC_MAX_TX_RINGS; i++) {
ring = &bgmac->tx_ring[i];
- /* We don't implement unaligned addressing, so enable first */
- bgmac_dma_tx_enable(bgmac, ring);
+ if (!ring->unaligned)
+ bgmac_dma_tx_enable(bgmac, ring);
bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_RINGLO,
lower_32_bits(ring->dma_base));
bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_TX_RINGHI,
upper_32_bits(ring->dma_base));
+ if (ring->unaligned)
+ bgmac_dma_tx_enable(bgmac, ring);
ring->start = 0;
ring->end = 0; /* Points the slot that should *not* be read */
ring = &bgmac->rx_ring[i];
- /* We don't implement unaligned addressing, so enable first */
- bgmac_dma_rx_enable(bgmac, ring);
+ if (!ring->unaligned)
+ bgmac_dma_rx_enable(bgmac, ring);
bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_RINGLO,
lower_32_bits(ring->dma_base));
bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_RINGHI,
upper_32_bits(ring->dma_base));
+ if (ring->unaligned)
+ bgmac_dma_rx_enable(bgmac, ring);
for (j = 0, dma_desc = ring->cpu_base; j < ring->num_slots;
j++, dma_desc++) {
}
bgmac_write(bgmac, ring->mmio_base + BGMAC_DMA_RX_INDEX,
+ ring->index_base +
ring->num_slots * sizeof(struct bgmac_dma_desc));
ring->start = 0;
struct bcma_drv_cc *cc = &bgmac->core->bus->drv_cc;
u8 et_swtype = 0;
u8 sw_type = BGMAC_CHIPCTL_1_SW_TYPE_EPHY |
- BGMAC_CHIPCTL_1_IF_TYPE_RMII;
- char buf[2];
+ BGMAC_CHIPCTL_1_IF_TYPE_MII;
+ char buf[4];
- if (bcm47xx_nvram_getenv("et_swtype", buf, 1) > 0) {
+ if (bcm47xx_nvram_getenv("et_swtype", buf, sizeof(buf)) > 0) {
if (kstrtou8(buf, 0, &et_swtype))
bgmac_err(bgmac, "Failed to parse et_swtype (%s)\n",
buf);
#define BGMAC_CHIPCTL_1_IF_TYPE_MASK 0x00000030
#define BGMAC_CHIPCTL_1_IF_TYPE_RMII 0x00000000
-#define BGMAC_CHIPCTL_1_IF_TYPE_MI 0x00000010
+#define BGMAC_CHIPCTL_1_IF_TYPE_MII 0x00000010
#define BGMAC_CHIPCTL_1_IF_TYPE_RGMII 0x00000020
#define BGMAC_CHIPCTL_1_SW_TYPE_MASK 0x000000C0
#define BGMAC_CHIPCTL_1_SW_TYPE_EPHY 0x00000000
u16 mmio_base;
struct bgmac_dma_desc *cpu_base;
dma_addr_t dma_base;
+ u32 index_base; /* Used for unaligned rings only, otherwise 0 */
+ bool unaligned;
struct bgmac_slot_info slots[BGMAC_RX_RING_SLOTS];
};
BNX2X_MAX_CNIC_ETH_CL_ID_IDX,
};
-#define BNX2X_CNIC_START_ETH_CID(bp) (BNX2X_NUM_NON_CNIC_QUEUES(bp) *\
+/* Use a value high enough to be above all the PFs and which has 8 as its least
+ * significant nibble, so that when cnic needs to come up with a CID for UIO to
+ * use to calculate the doorbell address according to the old doorbell
+ * configuration scheme (db_msg_sz 1 << 7 * cid + 0x40 DPM offset) it can come
+ * up with a valid number. We must avoid coming up with cid 8 for iscsi since
+ * according to this method the designated UIO cid will come out 0, which has
+ * special handling that doesn't suit us. Therefore we will ceiling to the
+ * closest cid which has 8 as its least significant nibble, and if that is 8 we
+ * will move forward to 0x18.
+ */
+
+#define BNX2X_1st_NON_L2_ETH_CID(bp) (BNX2X_NUM_NON_CNIC_QUEUES(bp) * \
(bp)->max_cos)
+/* number of cids traversed by UIO's DPM addition to the doorbell */
+#define UIO_DPM 8
+/* roundup to DPM offset */
+#define UIO_ROUNDUP(bp) (roundup(BNX2X_1st_NON_L2_ETH_CID(bp), \
+ UIO_DPM))
+/* offset to nearest value which has lsb nibble matching DPM */
+#define UIO_CID_OFFSET(bp) ((UIO_ROUNDUP(bp) + UIO_DPM) % \
+ (UIO_DPM * 2))
+/* add offset to rounded-up cid to get a value which could be used with UIO */
+#define UIO_DPM_ALIGN(bp) (UIO_ROUNDUP(bp) + UIO_CID_OFFSET(bp))
+/* but wait - avoid UIO special case for cid 0 */
+#define UIO_DPM_CID0_OFFSET(bp) ((UIO_DPM * 2) * \
+ (UIO_DPM_ALIGN(bp) == UIO_DPM))
+/* Properly DPM-aligned CID, adjusted for the cid 0 special case */
+#define BNX2X_CNIC_START_ETH_CID(bp) (UIO_DPM_ALIGN(bp) + \
+ (UIO_DPM_CID0_OFFSET(bp)))
+/* how many cids were wasted - need this value for cid allocation */
+#define UIO_CID_PAD(bp) (BNX2X_CNIC_START_ETH_CID(bp) - \
+ BNX2X_1st_NON_L2_ETH_CID(bp))
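A hedged walk-through of the macro arithmetic above, with an illustrative value rather than one from a real configuration: if BNX2X_1st_NON_L2_ETH_CID(bp) were 30, then UIO_ROUNDUP = roundup(30, 8) = 32, UIO_CID_OFFSET = (32 + 8) % 16 = 8, UIO_DPM_ALIGN = 40 = 0x28 (least-significant nibble 8), UIO_DPM_CID0_OFFSET = 0, so BNX2X_CNIC_START_ETH_CID = 0x28 and UIO_CID_PAD = 10. Had the first non-L2 CID been 8, the alignment would land on 8 itself, and the extra offset of 16 pushes the start CID to 24 = 0x18.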
/* iSCSI L2 */
#define BNX2X_ISCSI_ETH_CID(bp) (BNX2X_CNIC_START_ETH_CID(bp))
/* FCoE L2 */
/* TM (timers) host DB constants */
#define TM_ILT_PAGE_SZ_HW 0
#define TM_ILT_PAGE_SZ (4096 << TM_ILT_PAGE_SZ_HW) /* 4K */
-/* #define TM_CONN_NUM (CNIC_STARTING_CID+CNIC_ISCSI_CXT_MAX) */
-#define TM_CONN_NUM 1024
+#define TM_CONN_NUM (BNX2X_FIRST_VF_CID + \
+ BNX2X_VF_CIDS + \
+ CNIC_ISCSI_CID_MAX)
#define TM_ILT_SZ (8 * TM_CONN_NUM)
#define TM_ILT_LINES DIV_ROUND_UP(TM_ILT_SZ, TM_ILT_PAGE_SZ)
#define PCI_32BIT_FLAG (1 << 1)
#define ONE_PORT_FLAG (1 << 2)
#define NO_WOL_FLAG (1 << 3)
-#define USING_DAC_FLAG (1 << 4)
#define USING_MSIX_FLAG (1 << 5)
#define USING_MSI_FLAG (1 << 6)
#define DISABLE_MSI_FLAG (1 << 7)
*/
bool fcoe_init;
- int pm_cap;
int mrrs;
struct delayed_work sp_task;
u16 rx_ticks_int;
u16 rx_ticks;
/* Maximal coalescing timeout in us */
-#define BNX2X_MAX_COALESCE_TOUT (0xf0*12)
+#define BNX2X_MAX_COALESCE_TOUT (0xff*BNX2X_BTR)
u32 lin_cnt;
* Maximum CID count that might be required by the bnx2x:
* Max RSS * Max_Tx_Multi_Cos + FCoE + iSCSI
*/
+
#define BNX2X_L2_CID_COUNT(bp) (BNX2X_NUM_ETH_QUEUES(bp) * BNX2X_MULTI_TX_COS \
- + 2 * CNIC_SUPPORT(bp))
+ + CNIC_SUPPORT(bp) * (2 + UIO_CID_PAD(bp)))
#define BNX2X_L2_MAX_CID(bp) (BNX2X_MAX_RSS_COUNT(bp) * BNX2X_MULTI_TX_COS \
- + 2 * CNIC_SUPPORT(bp))
+ + CNIC_SUPPORT(bp) * (2 + UIO_CID_PAD(bp)))
#define L2_ILT_LINES(bp) (DIV_ROUND_UP(BNX2X_L2_CID_COUNT(bp),\
ILT_PAGE_CIDS))
void bnx2x_prep_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae,
u8 src_type, u8 dst_type);
-int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae);
+int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae,
+ u32 *comp);
/* FLR related routines */
u32 bnx2x_flr_clnup_poll_count(struct bnx2x *bp);
};
void bnx2x_set_local_cmng(struct bnx2x *bp);
+
+#define MCPR_SCRATCH_BASE(bp) \
+ (CHIP_IS_E1x(bp) ? MCP_REG_MCPR_SCRATCH : MCP_A_REG_MCPR_SCRATCH)
+
#endif /* bnx2x.h */
}
}
#endif
+ skb_record_rx_queue(skb, fp->rx_queue);
napi_gro_receive(&fp->napi, skb);
}
load_error_cnic1:
bnx2x_napi_disable_cnic(bp);
/* Update the number of queues without the cnic queues */
- rc = bnx2x_set_real_num_queues(bp, 0);
- if (rc)
+ if (bnx2x_set_real_num_queues(bp, 0))
BNX2X_ERR("Unable to set real_num_queues not including cnic\n");
load_error_cnic0:
BNX2X_ERR("CNIC-related load failed\n");
u16 pmcsr;
/* If there is no power capability, silently succeed */
- if (!bp->pm_cap) {
+ if (!bp->pdev->pm_cap) {
BNX2X_DEV_INFO("No power capability. Breaking.\n");
return 0;
}
- pci_read_config_word(bp->pdev, bp->pm_cap + PCI_PM_CTRL, &pmcsr);
+ pci_read_config_word(bp->pdev, bp->pdev->pm_cap + PCI_PM_CTRL, &pmcsr);
switch (state) {
case PCI_D0:
- pci_write_config_word(bp->pdev, bp->pm_cap + PCI_PM_CTRL,
+ pci_write_config_word(bp->pdev, bp->pdev->pm_cap + PCI_PM_CTRL,
((pmcsr & ~PCI_PM_CTRL_STATE_MASK) |
PCI_PM_CTRL_PME_STATUS));
if (bp->wol)
pmcsr |= PCI_PM_CTRL_PME_ENABLE;
- pci_write_config_word(bp->pdev, bp->pm_cap + PCI_PM_CTRL,
+ pci_write_config_word(bp->pdev, bp->pdev->pm_cap + PCI_PM_CTRL,
pmcsr);
/* No more memory access after this point until
* will re-enable parity attentions right after the dump.
*/
- /* Disable parity on path 0 */
- bnx2x_pretend_func(bp, 0);
bnx2x_disable_blocks_parity(bp);
- /* Disable parity on path 1 */
- bnx2x_pretend_func(bp, 1);
- bnx2x_disable_blocks_parity(bp);
-
- /* Return to current function */
- bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
-
dump_hdr.header_size = (sizeof(struct dump_header) / 4) - 1;
dump_hdr.preset = DUMP_ALL_PRESETS;
dump_hdr.version = BNX2X_DUMP_VERSION;
/* Actually read the registers */
__bnx2x_get_regs(bp, p);
- /* Re-enable parity attentions on path 0 */
- bnx2x_pretend_func(bp, 0);
+ /* Re-enable parity attentions */
bnx2x_clear_blocks_parity(bp);
bnx2x_enable_blocks_parity(bp);
-
- /* Re-enable parity attentions on path 1 */
- bnx2x_pretend_func(bp, 1);
- bnx2x_clear_blocks_parity(bp);
- bnx2x_enable_blocks_parity(bp);
-
- /* Return to current function */
- bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
}
static int bnx2x_get_preset_regs_len(struct net_device *dev, u32 preset)
* will re-enable parity attentions right after the dump.
*/
- /* Disable parity on path 0 */
- bnx2x_pretend_func(bp, 0);
bnx2x_disable_blocks_parity(bp);
- /* Disable parity on path 1 */
- bnx2x_pretend_func(bp, 1);
- bnx2x_disable_blocks_parity(bp);
-
- /* Return to current function */
- bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
-
dump_hdr.header_size = (sizeof(struct dump_header) / 4) - 1;
dump_hdr.preset = bp->dump_preset_idx;
dump_hdr.version = BNX2X_DUMP_VERSION;
/* Actually read the registers */
__bnx2x_get_preset_regs(bp, p, dump_hdr.preset);
- /* Re-enable parity attentions on path 0 */
- bnx2x_pretend_func(bp, 0);
+ /* Re-enable parity attentions */
bnx2x_clear_blocks_parity(bp);
bnx2x_enable_blocks_parity(bp);
- /* Re-enable parity attentions on path 1 */
- bnx2x_pretend_func(bp, 1);
- bnx2x_clear_blocks_parity(bp);
- bnx2x_enable_blocks_parity(bp);
-
- /* Return to current function */
- bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
-
return 0;
}
u16 pm = 0;
struct net_device *dev = pci_get_drvdata(bp->pdev);
- if (bp->pm_cap)
+ if (bp->pdev->pm_cap)
rc = pci_read_config_word(bp->pdev,
- bp->pm_cap + PCI_PM_CTRL, &pm);
+ bp->pdev->pm_cap + PCI_PM_CTRL, &pm);
if ((rc && !netif_running(dev)) ||
(!rc && ((pm & PCI_PM_CTRL_STATE_MASK) != (__force u16)PCI_D0)))
* [30] MCP Latched ump_tx_parity
* [31] MCP Latched scpad_parity
*/
-#define MISC_AEU_ENABLE_MCP_PRTY_BITS \
+#define MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS \
(AEU_INPUTS_ATTN_BITS_MCP_LATCHED_ROM_PARITY | \
AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY | \
- AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY | \
+ AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY)
+
+#define MISC_AEU_ENABLE_MCP_PRTY_BITS \
+ (MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS | \
AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY)
/* Below registers control the MCP parity attention output. When
* MISC_AEU_ENABLE_MCP_PRTY_BITS are set - attentions are
* enabled, when cleared - disabled.
*/
-static const u32 mcp_attn_ctl_regs[] = {
- MISC_REG_AEU_ENABLE4_FUNC_0_OUT_0,
- MISC_REG_AEU_ENABLE4_NIG_0,
- MISC_REG_AEU_ENABLE4_PXP_0,
- MISC_REG_AEU_ENABLE4_FUNC_1_OUT_0,
- MISC_REG_AEU_ENABLE4_NIG_1,
- MISC_REG_AEU_ENABLE4_PXP_1
+static const struct {
+ u32 addr;
+ u32 bits;
+} mcp_attn_ctl_regs[] = {
+ { MISC_REG_AEU_ENABLE4_FUNC_0_OUT_0,
+ MISC_AEU_ENABLE_MCP_PRTY_BITS },
+ { MISC_REG_AEU_ENABLE4_NIG_0,
+ MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS },
+ { MISC_REG_AEU_ENABLE4_PXP_0,
+ MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS },
+ { MISC_REG_AEU_ENABLE4_FUNC_1_OUT_0,
+ MISC_AEU_ENABLE_MCP_PRTY_BITS },
+ { MISC_REG_AEU_ENABLE4_NIG_1,
+ MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS },
+ { MISC_REG_AEU_ENABLE4_PXP_1,
+ MISC_AEU_ENABLE_MCP_PRTY_SUB_BITS }
};
static inline void bnx2x_set_mcp_parity(struct bnx2x *bp, u8 enable)
u32 reg_val;
for (i = 0; i < ARRAY_SIZE(mcp_attn_ctl_regs); i++) {
- reg_val = REG_RD(bp, mcp_attn_ctl_regs[i]);
+ reg_val = REG_RD(bp, mcp_attn_ctl_regs[i].addr);
if (enable)
- reg_val |= MISC_AEU_ENABLE_MCP_PRTY_BITS;
+ reg_val |= mcp_attn_ctl_regs[i].bits;
else
- reg_val &= ~MISC_AEU_ENABLE_MCP_PRTY_BITS;
+ reg_val &= ~mcp_attn_ctl_regs[i].bits;
- REG_WR(bp, mcp_attn_ctl_regs[i], reg_val);
+ REG_WR(bp, mcp_attn_ctl_regs[i].addr, reg_val);
}
}
#define EDC_MODE_LINEAR 0x0022
#define EDC_MODE_LIMITING 0x0044
#define EDC_MODE_PASSIVE_DAC 0x0055
+#define EDC_MODE_ACTIVE_DAC 0x0066
/* ETS defines*/
#define DCBX_INVALID_COS (0xFF)
bnx2x_update_link_attr(params, vars->link_attr_sync);
}
+static void bnx2x_disable_kr2(struct link_params *params,
+ struct link_vars *vars,
+ struct bnx2x_phy *phy)
+{
+ struct bnx2x *bp = params->bp;
+ int i;
+ static struct bnx2x_reg_set reg_set[] = {
+ /* Step 1 - Program the TX/RX alignment markers */
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL5, 0x7690},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL7, 0xe647},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL6, 0xc4f0},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL9, 0x7690},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL11, 0xe647},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL10, 0xc4f0},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL73_USERB0_CTRL, 0x000c},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL1, 0x6000},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL3, 0x0000},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CODE_FIELD, 0x0002},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI1, 0x0000},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI2, 0x0af7},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI3, 0x0af7},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_BAM_CODE, 0x0002},
+ {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_UD_CODE, 0x0000}
+ };
+ DP(NETIF_MSG_LINK, "Disabling 20G-KR2\n");
+
+ for (i = 0; i < ARRAY_SIZE(reg_set); i++)
+ bnx2x_cl45_write(bp, phy, reg_set[i].devad, reg_set[i].reg,
+ reg_set[i].val);
+ vars->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE;
+ bnx2x_update_link_attr(params, vars->link_attr_sync);
+
+ vars->check_kr2_recovery_cnt = CHECK_KR2_RECOVERY_CNT;
+}
+
static void bnx2x_warpcore_set_lpi_passthrough(struct bnx2x_phy *phy,
struct link_params *params)
{
struct link_params *params,
struct link_vars *vars) {
u16 lane, i, cl72_ctrl, an_adv = 0;
- u16 ucode_ver;
struct bnx2x *bp = params->bp;
static struct bnx2x_reg_set reg_set[] = {
{MDIO_WC_DEVAD, MDIO_WC_REG_SERDESDIGITAL_CONTROL1000X2, 0x7},
/* Advertise pause */
bnx2x_ext_phy_set_pause(params, phy, vars);
- /* Set KR Autoneg Work-Around flag for Warpcore version older than D108
- */
- bnx2x_cl45_read(bp, phy, MDIO_WC_DEVAD,
- MDIO_WC_REG_UC_INFO_B1_VERSION, &ucode_ver);
- if (ucode_ver < 0xd108) {
- DP(NETIF_MSG_LINK, "Enable AN KR work-around. WC ver:0x%x\n",
- ucode_ver);
- vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY;
- }
+ vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY;
bnx2x_cl45_read_or_write(bp, phy, MDIO_WC_DEVAD,
MDIO_WC_REG_DIGITAL5_MISC7, 0x100);
bnx2x_set_aer_mmd(params, phy);
bnx2x_warpcore_enable_AN_KR2(phy, params, vars);
+ } else {
+ bnx2x_disable_kr2(params, vars, phy);
}
/* Enable Autoneg: only on the main lane */
struct bnx2x *bp = params->bp;
u32 serdes_net_if;
u16 gp_status1 = 0, lnkup = 0, lnkup_kr = 0;
- u16 lane = bnx2x_get_warpcore_lane(phy, params);
vars->turn_to_run_wc_rt = vars->turn_to_run_wc_rt ? 0 : 1;
if (!vars->turn_to_run_wc_rt)
return;
- /* Return if there is no link partner */
- if (!(bnx2x_warpcore_get_sigdet(phy, params))) {
- DP(NETIF_MSG_LINK, "bnx2x_warpcore_get_sigdet false\n");
- return;
- }
-
if (vars->rx_tx_asic_rst) {
+ u16 lane = bnx2x_get_warpcore_lane(phy, params);
serdes_net_if = (REG_RD(bp, params->shmem_base +
offsetof(struct shmem_region, dev_info.
port_hw_config[params->port].default_cfg)) &
/*10G KR*/
lnkup_kr = (gp_status1 >> (12+lane)) & 0x1;
- DP(NETIF_MSG_LINK,
- "gp_status1 0x%x\n", gp_status1);
-
if (lnkup_kr || lnkup) {
- vars->rx_tx_asic_rst = 0;
- DP(NETIF_MSG_LINK,
- "link up, rx_tx_asic_rst 0x%x\n",
- vars->rx_tx_asic_rst);
+ vars->rx_tx_asic_rst = 0;
} else {
/* Reset the lane to see if link comes up.*/
bnx2x_warpcore_reset_lane(bp, phy, 1);
* enabled transmitter to avoid current leakage in case
* no module is connected
*/
- if (bnx2x_is_sfp_module_plugged(phy, params))
- bnx2x_sfp_module_detection(phy, params);
- else
- bnx2x_sfp_e3_set_transmitter(params, phy, 1);
+ if ((params->loopback_mode == LOOPBACK_NONE) ||
+ (params->loopback_mode == LOOPBACK_EXT)) {
+ if (bnx2x_is_sfp_module_plugged(phy, params))
+ bnx2x_sfp_module_detection(phy, params);
+ else
+ bnx2x_sfp_e3_set_transmitter(params,
+ phy, 1);
+ }
bnx2x_warpcore_config_sfi(phy, params);
break;
rc = bnx2x_get_link_speed_duplex(phy, params, vars, link_up, gp_speed,
duplex);
+ /* In case of KR link down, start up the recovering procedure */
+ if ((!link_up) && (phy->media_type == ETH_PHY_KR) &&
+ (!(phy->flags & FLAGS_WC_DUAL_MODE)))
+ vars->rx_tx_asic_rst = MAX_KR_LINK_RETRY;
+
DP(NETIF_MSG_LINK, "duplex %x flow_ctrl 0x%x link_status 0x%x\n",
vars->duplex, vars->flow_ctrl, vars->link_status);
return rc;
params->phy[INT_PHY].config_init(phy, params, vars);
}
+ /* Re-read this value in case it was changed inside config_init due to
+ * limitations of optic module
+ */
+ vars->line_speed = params->phy[INT_PHY].req_line_speed;
+
/* Init external phy*/
if (non_ext_phy) {
if (params->phy[INT_PHY].supported &
if (copper_module_type &
SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_ACTIVE) {
DP(NETIF_MSG_LINK, "Active Copper cable detected\n");
- check_limiting_mode = 1;
+ if (phy->type == PORT_HW_CFG_XGXS_EXT_PHY_TYPE_DIRECT)
+ *edc_mode = EDC_MODE_ACTIVE_DAC;
+ else
+ check_limiting_mode = 1;
} else if (copper_module_type &
SFP_EEPROM_FC_TX_TECH_BITMASK_COPPER_PASSIVE) {
DP(NETIF_MSG_LINK,
mode = MDIO_WC_REG_UC_INFO_B1_FIRMWARE_MODE_DEFAULT;
break;
case EDC_MODE_PASSIVE_DAC:
+ case EDC_MODE_ACTIVE_DAC:
mode = MDIO_WC_REG_UC_INFO_B1_FIRMWARE_MODE_SFP_DAC;
break;
default:
MDIO_AN_DEVAD, MDIO_AN_REG_8481_1000T_CTRL,
an_1000_val);
- /* set 100 speed advertisement */
- if ((phy->req_line_speed == SPEED_AUTO_NEG) &&
- (phy->speed_cap_mask &
- (PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL |
- PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF))) {
- an_10_100_val |= (1<<7);
- /* Enable autoneg and restart autoneg for legacy speeds */
- autoneg_val |= (1<<9 | 1<<12);
-
- if (phy->req_duplex == DUPLEX_FULL)
+ /* Set 10/100 speed advertisement */
+ if (phy->req_line_speed == SPEED_AUTO_NEG) {
+ if (phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_FULL) {
+ /* Enable autoneg and restart autoneg for legacy speeds
+ */
+ autoneg_val |= (1<<9 | 1<<12);
an_10_100_val |= (1<<8);
- DP(NETIF_MSG_LINK, "Advertising 100M\n");
- }
- /* set 10 speed advertisement */
- if (((phy->req_line_speed == SPEED_AUTO_NEG) &&
- (phy->speed_cap_mask &
- (PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL |
- PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF)) &&
- (phy->supported &
- (SUPPORTED_10baseT_Half |
- SUPPORTED_10baseT_Full)))) {
- an_10_100_val |= (1<<5);
- autoneg_val |= (1<<9 | 1<<12);
- if (phy->req_duplex == DUPLEX_FULL)
+ DP(NETIF_MSG_LINK, "Advertising 100M-FD\n");
+ }
+
+ if (phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_100M_HALF) {
+ /* Enable autoneg and restart autoneg for legacy speeds
+ */
+ autoneg_val |= (1<<9 | 1<<12);
+ an_10_100_val |= (1<<7);
+ DP(NETIF_MSG_LINK, "Advertising 100M-HD\n");
+ }
+
+ if ((phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_FULL) &&
+ (phy->supported & SUPPORTED_10baseT_Full)) {
an_10_100_val |= (1<<6);
- DP(NETIF_MSG_LINK, "Advertising 10M\n");
+ autoneg_val |= (1<<9 | 1<<12);
+ DP(NETIF_MSG_LINK, "Advertising 10M-FD\n");
+ }
+
+ if ((phy->speed_cap_mask &
+ PORT_HW_CFG_SPEED_CAPABILITY_D0_10M_HALF) &&
+ (phy->supported & SUPPORTED_10baseT_Half)) {
+ an_10_100_val |= (1<<5);
+ autoneg_val |= (1<<9 | 1<<12);
+ DP(NETIF_MSG_LINK, "Advertising 10M-HD\n");
+ }
}
/* Only 10/100 are allowed to work in FORCE mode */
}
}
}
-static void bnx2x_disable_kr2(struct link_params *params,
- struct link_vars *vars,
- struct bnx2x_phy *phy)
-{
- struct bnx2x *bp = params->bp;
- int i;
- static struct bnx2x_reg_set reg_set[] = {
- /* Step 1 - Program the TX/RX alignment markers */
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL5, 0x7690},
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL7, 0xe647},
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL6, 0xc4f0},
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_TX_CTRL9, 0x7690},
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL11, 0xe647},
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL82_USERB1_RX_CTRL10, 0xc4f0},
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL73_USERB0_CTRL, 0x000c},
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL1, 0x6000},
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CTRL3, 0x0000},
- {MDIO_WC_DEVAD, MDIO_WC_REG_CL73_BAM_CODE_FIELD, 0x0002},
- {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI1, 0x0000},
- {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI2, 0x0af7},
- {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_OUI3, 0x0af7},
- {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_BAM_CODE, 0x0002},
- {MDIO_WC_DEVAD, MDIO_WC_REG_ETA_CL73_LD_UD_CODE, 0x0000}
- };
- DP(NETIF_MSG_LINK, "Disabling 20G-KR2\n");
-
- for (i = 0; i < ARRAY_SIZE(reg_set); i++)
- bnx2x_cl45_write(bp, phy, reg_set[i].devad, reg_set[i].reg,
- reg_set[i].val);
- vars->link_attr_sync &= ~LINK_ATTR_SYNC_KR2_ENABLE;
- bnx2x_update_link_attr(params, vars->link_attr_sync);
-
- vars->check_kr2_recovery_cnt = CHECK_KR2_RECOVERY_CNT;
- /* Restart AN on leading lane */
- bnx2x_warpcore_restart_AN_KR(phy, params);
-}
-
static void bnx2x_kr2_recovery(struct link_params *params,
struct link_vars *vars,
struct bnx2x_phy *phy)
/* Disable KR2 on both lanes */
DP(NETIF_MSG_LINK, "BP=0x%x, NP=0x%x\n", base_page, next_page);
bnx2x_disable_kr2(params, vars, phy);
+ /* Restart AN on leading lane */
+ bnx2x_warpcore_restart_AN_KR(phy, params);
return;
}
}
}
/* issue a dmae command over the init-channel and wait for completion */
-int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae)
+int bnx2x_issue_dmae_with_comp(struct bnx2x *bp, struct dmae_command *dmae,
+ u32 *comp)
{
- u32 *wb_comp = bnx2x_sp(bp, wb_comp);
int cnt = CHIP_REV_IS_SLOW(bp) ? (400000) : 4000;
int rc = 0;
spin_lock_bh(&bp->dmae_lock);
/* reset completion */
- *wb_comp = 0;
+ *comp = 0;
/* post the command on the channel used for initializations */
bnx2x_post_dmae(bp, dmae, INIT_DMAE_C(bp));
/* wait for completion */
udelay(5);
- while ((*wb_comp & ~DMAE_PCI_ERR_FLAG) != DMAE_COMP_VAL) {
+ while ((*comp & ~DMAE_PCI_ERR_FLAG) != DMAE_COMP_VAL) {
if (!cnt ||
(bp->recovery_state != BNX2X_RECOVERY_DONE &&
cnt--;
udelay(50);
}
- if (*wb_comp & DMAE_PCI_ERR_FLAG) {
+ if (*comp & DMAE_PCI_ERR_FLAG) {
BNX2X_ERR("DMAE PCI error!\n");
rc = DMAE_PCI_ERROR;
}
dmae.len = len32;
/* issue the command and wait for completion */
- rc = bnx2x_issue_dmae_with_comp(bp, &dmae);
+ rc = bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp));
if (rc) {
BNX2X_ERR("DMAE returned failure %d\n", rc);
bnx2x_panic();
dmae.len = len32;
/* issue the command and wait for completion */
- rc = bnx2x_issue_dmae_with_comp(bp, &dmae);
+ rc = bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp));
if (rc) {
BNX2X_ERR("DMAE returned failure %d\n", rc);
bnx2x_panic();
return rc;
}
+#define MCPR_TRACE_BUFFER_SIZE (0x800)
+#define SCRATCH_BUFFER_SIZE(bp) \
+ (CHIP_IS_E1(bp) ? 0x10000 : (CHIP_IS_E1H(bp) ? 0x20000 : 0x28000))
+
void bnx2x_fw_dump_lvl(struct bnx2x *bp, const char *lvl)
{
u32 addr, val;
trace_shmem_base = bp->common.shmem_base;
else
trace_shmem_base = SHMEM2_RD(bp, other_shmem_base_addr);
- addr = trace_shmem_base - 0x800;
+
+ /* sanity */
+ if (trace_shmem_base < MCPR_SCRATCH_BASE(bp) + MCPR_TRACE_BUFFER_SIZE ||
+ trace_shmem_base >= MCPR_SCRATCH_BASE(bp) +
+ SCRATCH_BUFFER_SIZE(bp)) {
+ BNX2X_ERR("Unable to dump trace buffer (mark %x)\n",
+ trace_shmem_base);
+ return;
+ }
+
+ addr = trace_shmem_base - MCPR_TRACE_BUFFER_SIZE;
/* validate TRCB signature */
mark = REG_RD(bp, addr);
/* read cyclic buffer pointer */
addr += 4;
mark = REG_RD(bp, addr);
- mark = (CHIP_IS_E1x(bp) ? MCP_REG_MCPR_SCRATCH : MCP_A_REG_MCPR_SCRATCH)
- + ((mark + 0x3) & ~0x3) - 0x08000000;
+ mark = MCPR_SCRATCH_BASE(bp) + ((mark + 0x3) & ~0x3) - 0x08000000;
+ if (mark >= trace_shmem_base || mark < addr + 4) {
+ BNX2X_ERR("Mark doesn't fall inside Trace Buffer\n");
+ return;
+ }
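	/* Illustrative note: ((mark + 0x3) & ~0x3) above rounds the cyclic
	 * buffer pointer up to a 4-byte boundary (e.g. 0x2a5e -> 0x2a60), and
	 * the two range checks keep both the trace buffer and the mark inside
	 * the MCP scratchpad before any REG_RD() of the buffer is attempted.
	 */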
printk("%s" "begin fw dump (mark 0x%x)\n", lvl, mark);
printk("%s", lvl);
/* dump buffer after the mark */
- for (offset = mark; offset <= trace_shmem_base; offset += 0x8*4) {
+ for (offset = mark; offset < trace_shmem_base; offset += 0x8*4) {
for (word = 0; word < 8; word++)
data[word] = htonl(REG_RD(bp, offset + 4*word));
data[8] = 0x0;
pr_cont("%s%s", idx ? ", " : "", blk);
}
-static int bnx2x_check_blocks_with_parity0(struct bnx2x *bp, u32 sig,
- int par_num, bool print)
+static bool bnx2x_check_blocks_with_parity0(struct bnx2x *bp, u32 sig,
+ int *par_num, bool print)
{
- int i = 0;
- u32 cur_bit = 0;
+ u32 cur_bit;
+ bool res;
+ int i;
+
+ res = false;
+
for (i = 0; sig; i++) {
- cur_bit = ((u32)0x1 << i);
+ cur_bit = (0x1UL << i);
if (sig & cur_bit) {
- switch (cur_bit) {
- case AEU_INPUTS_ATTN_BITS_BRB_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "BRB");
+ res |= true; /* Each bit is real error! */
+
+ if (print) {
+ switch (cur_bit) {
+ case AEU_INPUTS_ATTN_BITS_BRB_PARITY_ERROR:
+ _print_next_block((*par_num)++, "BRB");
_print_parity(bp,
BRB1_REG_BRB1_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_PARSER_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "PARSER");
+ break;
+ case AEU_INPUTS_ATTN_BITS_PARSER_PARITY_ERROR:
+ _print_next_block((*par_num)++,
+ "PARSER");
_print_parity(bp, PRS_REG_PRS_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_TSDM_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "TSDM");
+ break;
+ case AEU_INPUTS_ATTN_BITS_TSDM_PARITY_ERROR:
+ _print_next_block((*par_num)++, "TSDM");
_print_parity(bp,
TSDM_REG_TSDM_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_SEARCHER_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++,
+ break;
+ case AEU_INPUTS_ATTN_BITS_SEARCHER_PARITY_ERROR:
+ _print_next_block((*par_num)++,
"SEARCHER");
_print_parity(bp, SRC_REG_SRC_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_TCM_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "TCM");
- _print_parity(bp,
- TCM_REG_TCM_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_TSEMI_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "TSEMI");
+ break;
+ case AEU_INPUTS_ATTN_BITS_TCM_PARITY_ERROR:
+ _print_next_block((*par_num)++, "TCM");
+ _print_parity(bp, TCM_REG_TCM_PRTY_STS);
+ break;
+ case AEU_INPUTS_ATTN_BITS_TSEMI_PARITY_ERROR:
+ _print_next_block((*par_num)++,
+ "TSEMI");
_print_parity(bp,
TSEM_REG_TSEM_PRTY_STS_0);
_print_parity(bp,
TSEM_REG_TSEM_PRTY_STS_1);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_PBCLIENT_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "XPB");
+ break;
+ case AEU_INPUTS_ATTN_BITS_PBCLIENT_PARITY_ERROR:
+ _print_next_block((*par_num)++, "XPB");
_print_parity(bp, GRCBASE_XPB +
PB_REG_PB_PRTY_STS);
+ break;
}
- break;
}
/* Clear the bit */
}
}
- return par_num;
+ return res;
}
-static int bnx2x_check_blocks_with_parity1(struct bnx2x *bp, u32 sig,
- int par_num, bool *global,
+static bool bnx2x_check_blocks_with_parity1(struct bnx2x *bp, u32 sig,
+ int *par_num, bool *global,
bool print)
{
- int i = 0;
- u32 cur_bit = 0;
+ u32 cur_bit;
+ bool res;
+ int i;
+
+ res = false;
+
for (i = 0; sig; i++) {
- cur_bit = ((u32)0x1 << i);
+ cur_bit = (0x1UL << i);
if (sig & cur_bit) {
+ res |= true; /* Each bit is real error! */
switch (cur_bit) {
case AEU_INPUTS_ATTN_BITS_PBF_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "PBF");
+ _print_next_block((*par_num)++, "PBF");
_print_parity(bp, PBF_REG_PBF_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_QM_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "QM");
+ _print_next_block((*par_num)++, "QM");
_print_parity(bp, QM_REG_QM_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_TIMERS_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "TM");
+ _print_next_block((*par_num)++, "TM");
_print_parity(bp, TM_REG_TM_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_XSDM_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "XSDM");
+ _print_next_block((*par_num)++, "XSDM");
_print_parity(bp,
XSDM_REG_XSDM_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_XCM_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "XCM");
+ _print_next_block((*par_num)++, "XCM");
_print_parity(bp, XCM_REG_XCM_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_XSEMI_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "XSEMI");
+ _print_next_block((*par_num)++,
+ "XSEMI");
_print_parity(bp,
XSEM_REG_XSEM_PRTY_STS_0);
_print_parity(bp,
break;
case AEU_INPUTS_ATTN_BITS_DOORBELLQ_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++,
+ _print_next_block((*par_num)++,
"DOORBELLQ");
_print_parity(bp,
DORQ_REG_DORQ_PRTY_STS);
break;
case AEU_INPUTS_ATTN_BITS_NIG_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "NIG");
+ _print_next_block((*par_num)++, "NIG");
if (CHIP_IS_E1x(bp)) {
_print_parity(bp,
NIG_REG_NIG_PRTY_STS);
break;
case AEU_INPUTS_ATTN_BITS_VAUX_PCI_CORE_PARITY_ERROR:
if (print)
- _print_next_block(par_num++,
+ _print_next_block((*par_num)++,
"VAUX PCI CORE");
*global = true;
break;
case AEU_INPUTS_ATTN_BITS_DEBUG_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "DEBUG");
+ _print_next_block((*par_num)++,
+ "DEBUG");
_print_parity(bp, DBG_REG_DBG_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_USDM_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "USDM");
+ _print_next_block((*par_num)++, "USDM");
_print_parity(bp,
USDM_REG_USDM_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_UCM_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "UCM");
+ _print_next_block((*par_num)++, "UCM");
_print_parity(bp, UCM_REG_UCM_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_USEMI_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "USEMI");
+ _print_next_block((*par_num)++,
+ "USEMI");
_print_parity(bp,
USEM_REG_USEM_PRTY_STS_0);
_print_parity(bp,
break;
case AEU_INPUTS_ATTN_BITS_UPB_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "UPB");
+ _print_next_block((*par_num)++, "UPB");
_print_parity(bp, GRCBASE_UPB +
PB_REG_PB_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_CSDM_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "CSDM");
+ _print_next_block((*par_num)++, "CSDM");
_print_parity(bp,
CSDM_REG_CSDM_PRTY_STS);
}
break;
case AEU_INPUTS_ATTN_BITS_CCM_PARITY_ERROR:
if (print) {
- _print_next_block(par_num++, "CCM");
+ _print_next_block((*par_num)++, "CCM");
_print_parity(bp, CCM_REG_CCM_PRTY_STS);
}
break;
}
}
- return par_num;
+ return res;
}
-static int bnx2x_check_blocks_with_parity2(struct bnx2x *bp, u32 sig,
- int par_num, bool print)
+static bool bnx2x_check_blocks_with_parity2(struct bnx2x *bp, u32 sig,
+ int *par_num, bool print)
{
- int i = 0;
- u32 cur_bit = 0;
+ u32 cur_bit;
+ bool res;
+ int i;
+
+ res = false;
+
for (i = 0; sig; i++) {
- cur_bit = ((u32)0x1 << i);
+ cur_bit = (0x1UL << i);
if (sig & cur_bit) {
- switch (cur_bit) {
- case AEU_INPUTS_ATTN_BITS_CSEMI_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "CSEMI");
+ res |= true; /* Each bit is real error! */
+ if (print) {
+ switch (cur_bit) {
+ case AEU_INPUTS_ATTN_BITS_CSEMI_PARITY_ERROR:
+ _print_next_block((*par_num)++,
+ "CSEMI");
_print_parity(bp,
CSEM_REG_CSEM_PRTY_STS_0);
_print_parity(bp,
CSEM_REG_CSEM_PRTY_STS_1);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_PXP_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "PXP");
+ break;
+ case AEU_INPUTS_ATTN_BITS_PXP_PARITY_ERROR:
+ _print_next_block((*par_num)++, "PXP");
_print_parity(bp, PXP_REG_PXP_PRTY_STS);
_print_parity(bp,
PXP2_REG_PXP2_PRTY_STS_0);
_print_parity(bp,
PXP2_REG_PXP2_PRTY_STS_1);
- }
- break;
- case AEU_IN_ATTN_BITS_PXPPCICLOCKCLIENT_PARITY_ERROR:
- if (print)
- _print_next_block(par_num++,
- "PXPPCICLOCKCLIENT");
- break;
- case AEU_INPUTS_ATTN_BITS_CFC_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "CFC");
+ break;
+ case AEU_IN_ATTN_BITS_PXPPCICLOCKCLIENT_PARITY_ERROR:
+ _print_next_block((*par_num)++,
+ "PXPPCICLOCKCLIENT");
+ break;
+ case AEU_INPUTS_ATTN_BITS_CFC_PARITY_ERROR:
+ _print_next_block((*par_num)++, "CFC");
_print_parity(bp,
CFC_REG_CFC_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_CDU_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "CDU");
+ break;
+ case AEU_INPUTS_ATTN_BITS_CDU_PARITY_ERROR:
+ _print_next_block((*par_num)++, "CDU");
_print_parity(bp, CDU_REG_CDU_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_DMAE_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "DMAE");
+ break;
+ case AEU_INPUTS_ATTN_BITS_DMAE_PARITY_ERROR:
+ _print_next_block((*par_num)++, "DMAE");
_print_parity(bp,
DMAE_REG_DMAE_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_IGU_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "IGU");
+ break;
+ case AEU_INPUTS_ATTN_BITS_IGU_PARITY_ERROR:
+ _print_next_block((*par_num)++, "IGU");
if (CHIP_IS_E1x(bp))
_print_parity(bp,
HC_REG_HC_PRTY_STS);
else
_print_parity(bp,
IGU_REG_IGU_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_MISC_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "MISC");
+ break;
+ case AEU_INPUTS_ATTN_BITS_MISC_PARITY_ERROR:
+ _print_next_block((*par_num)++, "MISC");
_print_parity(bp,
MISC_REG_MISC_PRTY_STS);
+ break;
}
- break;
}
/* Clear the bit */
}
}
- return par_num;
+ return res;
}
-static int bnx2x_check_blocks_with_parity3(u32 sig, int par_num,
- bool *global, bool print)
+static bool bnx2x_check_blocks_with_parity3(struct bnx2x *bp, u32 sig,
+ int *par_num, bool *global,
+ bool print)
{
- int i = 0;
- u32 cur_bit = 0;
+ bool res = false;
+ u32 cur_bit;
+ int i;
+
for (i = 0; sig; i++) {
- cur_bit = ((u32)0x1 << i);
+ cur_bit = (0x1UL << i);
if (sig & cur_bit) {
switch (cur_bit) {
case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_ROM_PARITY:
if (print)
- _print_next_block(par_num++, "MCP ROM");
+ _print_next_block((*par_num)++,
+ "MCP ROM");
*global = true;
+ res |= true;
break;
case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_RX_PARITY:
if (print)
- _print_next_block(par_num++,
+ _print_next_block((*par_num)++,
"MCP UMP RX");
*global = true;
+ res |= true;
break;
case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_UMP_TX_PARITY:
if (print)
- _print_next_block(par_num++,
+ _print_next_block((*par_num)++,
"MCP UMP TX");
*global = true;
+ res |= true;
break;
case AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY:
if (print)
- _print_next_block(par_num++,
+ _print_next_block((*par_num)++,
"MCP SCPAD");
- *global = true;
+				/* clear latched SCPAD PARITY from MCP */
+ REG_WR(bp, MISC_REG_AEU_CLR_LATCH_SIGNAL,
+ 1UL << 10);
break;
}
}
}
- return par_num;
+ return res;
}
-static int bnx2x_check_blocks_with_parity4(struct bnx2x *bp, u32 sig,
- int par_num, bool print)
+static bool bnx2x_check_blocks_with_parity4(struct bnx2x *bp, u32 sig,
+ int *par_num, bool print)
{
- int i = 0;
- u32 cur_bit = 0;
+ u32 cur_bit;
+ bool res;
+ int i;
+
+ res = false;
+
for (i = 0; sig; i++) {
- cur_bit = ((u32)0x1 << i);
+ cur_bit = (0x1UL << i);
if (sig & cur_bit) {
- switch (cur_bit) {
- case AEU_INPUTS_ATTN_BITS_PGLUE_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "PGLUE_B");
+ res |= true; /* Each bit is real error! */
+ if (print) {
+ switch (cur_bit) {
+ case AEU_INPUTS_ATTN_BITS_PGLUE_PARITY_ERROR:
+ _print_next_block((*par_num)++,
+ "PGLUE_B");
_print_parity(bp,
- PGLUE_B_REG_PGLUE_B_PRTY_STS);
- }
- break;
- case AEU_INPUTS_ATTN_BITS_ATC_PARITY_ERROR:
- if (print) {
- _print_next_block(par_num++, "ATC");
+ PGLUE_B_REG_PGLUE_B_PRTY_STS);
+ break;
+ case AEU_INPUTS_ATTN_BITS_ATC_PARITY_ERROR:
+ _print_next_block((*par_num)++, "ATC");
_print_parity(bp,
ATC_REG_ATC_PRTY_STS);
+ break;
}
- break;
}
-
/* Clear the bit */
sig &= ~cur_bit;
}
}
- return par_num;
+ return res;
}
static bool bnx2x_parity_attn(struct bnx2x *bp, bool *global, bool print,
u32 *sig)
{
+ bool res = false;
+
if ((sig[0] & HW_PRTY_ASSERT_SET_0) ||
(sig[1] & HW_PRTY_ASSERT_SET_1) ||
(sig[2] & HW_PRTY_ASSERT_SET_2) ||
if (print)
netdev_err(bp->dev,
"Parity errors detected in blocks: ");
- par_num = bnx2x_check_blocks_with_parity0(bp,
- sig[0] & HW_PRTY_ASSERT_SET_0, par_num, print);
- par_num = bnx2x_check_blocks_with_parity1(bp,
- sig[1] & HW_PRTY_ASSERT_SET_1, par_num, global, print);
- par_num = bnx2x_check_blocks_with_parity2(bp,
- sig[2] & HW_PRTY_ASSERT_SET_2, par_num, print);
- par_num = bnx2x_check_blocks_with_parity3(
- sig[3] & HW_PRTY_ASSERT_SET_3, par_num, global, print);
- par_num = bnx2x_check_blocks_with_parity4(bp,
- sig[4] & HW_PRTY_ASSERT_SET_4, par_num, print);
+ res |= bnx2x_check_blocks_with_parity0(bp,
+ sig[0] & HW_PRTY_ASSERT_SET_0, &par_num, print);
+ res |= bnx2x_check_blocks_with_parity1(bp,
+ sig[1] & HW_PRTY_ASSERT_SET_1, &par_num, global, print);
+ res |= bnx2x_check_blocks_with_parity2(bp,
+ sig[2] & HW_PRTY_ASSERT_SET_2, &par_num, print);
+ res |= bnx2x_check_blocks_with_parity3(bp,
+ sig[3] & HW_PRTY_ASSERT_SET_3, &par_num, global, print);
+ res |= bnx2x_check_blocks_with_parity4(bp,
+ sig[4] & HW_PRTY_ASSERT_SET_4, &par_num, print);
if (print)
pr_cont("\n");
+ }
- return true;
- } else
- return false;
+ return res;
}
/**
attn.sig[3] = REG_RD(bp,
MISC_REG_AEU_AFTER_INVERT_4_FUNC_0 +
port*4);
+ /* Since MCP attentions can't be disabled inside the block, we need to
+ * read AEU registers to see whether they're currently disabled
+ */
+ attn.sig[3] &= ((REG_RD(bp,
+ !port ? MISC_REG_AEU_ENABLE4_FUNC_0_OUT_0
+ : MISC_REG_AEU_ENABLE4_FUNC_1_OUT_0) &
+ MISC_AEU_ENABLE_MCP_PRTY_BITS) |
+ ~MISC_AEU_ENABLE_MCP_PRTY_BITS);
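	/* Illustrative reading of the mask above: bits outside
	 * MISC_AEU_ENABLE_MCP_PRTY_BITS are ORed back in by the
	 * ~MISC_AEU_ENABLE_MCP_PRTY_BITS term and so pass through untouched,
	 * while bits inside the group survive only if the matching AEU enable
	 * bit is currently set for this port.
	 */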
if (!CHIP_IS_E1x(bp))
attn.sig[4] = REG_RD(bp,
if (IS_PF(bp) &&
!BP_NOMCP(bp)) {
int mb_idx = BP_FW_MB_IDX(bp);
- u32 drv_pulse;
- u32 mcp_pulse;
+ u16 drv_pulse;
+ u16 mcp_pulse;
++bp->fw_drv_pulse_wr_seq;
bp->fw_drv_pulse_wr_seq &= DRV_PULSE_SEQ_MASK;
- /* TBD - add SYSTEM_TIME */
drv_pulse = bp->fw_drv_pulse_wr_seq;
bnx2x_drv_pulse(bp);
mcp_pulse = (SHMEM_RD(bp, func_mb[mb_idx].mcp_pulse_mb) &
MCP_PULSE_SEQ_MASK);
/* The delta between driver pulse and mcp response
- * should be 1 (before mcp response) or 0 (after mcp response)
+ * should not get too big. If the MFW is more than 5 pulses
+ * behind, we should worry about it enough to generate an error
+ * log.
*/
- if ((drv_pulse != mcp_pulse) &&
- (drv_pulse != ((mcp_pulse + 1) & MCP_PULSE_SEQ_MASK))) {
- /* someone lost a heartbeat... */
- BNX2X_ERR("drv_pulse (0x%x) != mcp_pulse (0x%x)\n",
+ if (((drv_pulse - mcp_pulse) & MCP_PULSE_SEQ_MASK) > 5)
+		BNX2X_ERR("MFW seems hung: drv_pulse (0x%x) != mcp_pulse (0x%x)\n",
drv_pulse, mcp_pulse);
- }
}
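	/* Illustrative note on the wrap-safe check above: assuming
	 * MCP_PULSE_SEQ_MASK covers the low 15 bits (0x7fff), the u16
	 * subtraction plus mask yields how far the MFW lags the driver even
	 * across a counter wrap, e.g. drv_pulse = 0x0002, mcp_pulse = 0x7ffe
	 * gives (0x0002 - 0x7ffe) & 0x7fff = 4, within the 5-pulse budget.
	 */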
if (bp->state == BNX2X_STATE_OPEN)
int port = BP_PORT(bp);
int init_phase = port ? PHASE_PORT1 : PHASE_PORT0;
u32 low, high;
- u32 val;
+ u32 val, reg;
DP(NETIF_MSG_HW, "starting port init port %d\n", port);
val |= CHIP_IS_E1(bp) ? 0 : 0x10;
REG_WR(bp, MISC_REG_AEU_MASK_ATTN_FUNC_0 + port*4, val);
+ /* SCPAD_PARITY should NOT trigger close the gates */
+ reg = port ? MISC_REG_AEU_ENABLE4_NIG_1 : MISC_REG_AEU_ENABLE4_NIG_0;
+ REG_WR(bp, reg,
+ REG_RD(bp, reg) &
+ ~AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY);
+
+ reg = port ? MISC_REG_AEU_ENABLE4_PXP_1 : MISC_REG_AEU_ENABLE4_PXP_0;
+ REG_WR(bp, reg,
+ REG_RD(bp, reg) &
+ ~AEU_INPUTS_ATTN_BITS_MCP_LATCHED_SCPAD_PARITY);
+
bnx2x_init_block(bp, BLOCK_NIG, init_phase);
if (!CHIP_IS_E1x(bp)) {
else if (bp->wol) {
u32 emac_base = port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
u8 *mac_addr = bp->dev->dev_addr;
+ struct pci_dev *pdev = bp->pdev;
u32 val;
u16 pmc;
EMAC_WR(bp, EMAC_REG_EMAC_MAC_MATCH + entry + 4, val);
/* Enable the PME and clear the status */
- pci_read_config_word(bp->pdev, bp->pm_cap + PCI_PM_CTRL, &pmc);
+ pci_read_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, &pmc);
pmc |= PCI_PM_CTRL_PME_ENABLE | PCI_PM_CTRL_PME_STATUS;
- pci_write_config_word(bp->pdev, bp->pm_cap + PCI_PM_CTRL, pmc);
+ pci_write_config_word(pdev, pdev->pm_cap + PCI_PM_CTRL, pmc);
reset_code = DRV_MSG_CODE_UNLOAD_REQ_WOL_EN;
break;
}
- pci_read_config_word(bp->pdev, bp->pm_cap + PCI_PM_PMC, &pmc);
+ pci_read_config_word(bp->pdev, bp->pdev->pm_cap + PCI_PM_PMC, &pmc);
bp->flags |= (pmc & PCI_PM_CAP_PME_D3cold) ? 0 : NO_WOL_FLAG;
BNX2X_DEV_INFO("%sWoL capable\n",
static int bnx2x_open(struct net_device *dev)
{
struct bnx2x *bp = netdev_priv(dev);
- bool global = false;
- int other_engine = BP_PATH(bp) ? 0 : 1;
- bool other_load_status, load_status;
int rc;
bp->stats_init = true;
* Parity recovery is only relevant for PF driver.
*/
if (IS_PF(bp)) {
+ int other_engine = BP_PATH(bp) ? 0 : 1;
+ bool other_load_status, load_status;
+ bool global = false;
+
other_load_status = bnx2x_get_load_status(bp, other_engine);
load_status = bnx2x_get_load_status(bp, BP_PATH(bp));
if (!bnx2x_reset_is_done(bp, BP_PATH(bp)) ||
struct device *dev = &bp->pdev->dev;
if (dma_set_mask(dev, DMA_BIT_MASK(64)) == 0) {
- bp->flags |= USING_DAC_FLAG;
if (dma_set_coherent_mask(dev, DMA_BIT_MASK(64)) != 0) {
dev_err(dev, "dma_set_coherent_mask failed, aborting\n");
return -EIO;
}
if (IS_PF(bp)) {
- bp->pm_cap = pdev->pm_cap;
- if (bp->pm_cap == 0) {
+ if (!pdev->pm_cap) {
dev_err(&bp->pdev->dev,
"Cannot find power management capability, aborting\n");
rc = -EIO;
NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 | NETIF_F_HIGHDMA;
dev->features |= dev->hw_features | NETIF_F_HW_VLAN_CTAG_RX;
- if (bp->flags & USING_DAC_FLAG)
- dev->features |= NETIF_F_HIGHDMA;
+ dev->features |= NETIF_F_HIGHDMA;
/* Add Loopback capability to the device */
dev->hw_features |= NETIF_F_LOOPBACK;
return BNX2X_MULTI_TX_COS_E1X;
case BCM57712:
case BCM57712_MF:
- case BCM57712_VF:
return BNX2X_MULTI_TX_COS_E2_E3A0;
case BCM57800:
case BCM57800_MF:
- case BCM57800_VF:
case BCM57810:
case BCM57810_MF:
case BCM57840_4_10:
case BCM57840_2_20:
case BCM57840_O:
case BCM57840_MFO:
- case BCM57810_VF:
case BCM57840_MF:
- case BCM57840_VF:
case BCM57811:
case BCM57811_MF:
- case BCM57811_VF:
return BNX2X_MULTI_TX_COS_E3B0;
+ case BCM57712_VF:
+ case BCM57800_VF:
+ case BCM57810_VF:
+ case BCM57840_VF:
+ case BCM57811_VF:
return 1;
default:
pr_err("Unknown board_type (%d), aborting\n", chip_id);
cp->fcoe_init_cid = BNX2X_FCOE_ETH_CID(bp);
cp->iscsi_l2_cid = BNX2X_ISCSI_ETH_CID(bp);
+ DP(NETIF_MSG_IFUP, "BNX2X_1st_NON_L2_ETH_CID(bp) %x, cp->starting_cid %x, cp->fcoe_init_cid %x, cp->iscsi_l2_cid %x\n",
+ BNX2X_1st_NON_L2_ETH_CID(bp), cp->starting_cid, cp->fcoe_init_cid,
+ cp->iscsi_l2_cid);
+
if (NO_ISCSI_OOO(bp))
cp->drv_state |= CNIC_DRV_STATE_NO_ISCSI_OOO;
}
bnx2x_vfop_qdtor, cmd->done);
return bnx2x_vfop_transition(bp, vf, bnx2x_vfop_qdtor,
cmd->block);
+ } else {
+ BNX2X_ERR("VF[%d] failed to add a vfop\n", vf->abs_vfid);
+ return -ENOMEM;
}
- DP(BNX2X_MSG_IOV, "VF[%d] failed to add a vfop. rc %d\n",
- vf->abs_vfid, vfop->rc);
- return -ENOMEM;
}
static void
fid = GET_FIELD((val), IGU_REG_MAPPING_MEMORY_FID);
if (fid & IGU_FID_ENCODE_IS_PF)
current_pf = fid & IGU_FID_PF_NUM_MASK;
- else if (current_pf == BP_ABS_FUNC(bp))
+ else if (current_pf == BP_FUNC(bp))
bnx2x_vf_set_igu_info(bp, sb_id,
(fid & IGU_FID_VF_NUM_MASK));
DP(BNX2X_MSG_IOV, "%s[%d], igu_sb_id=%d, msix=%d\n",
/* set local queue arrays */
vf->vfqs = &bp->vfdb->vfqs[qcount];
qcount += vf_sb_count(vf);
+ bnx2x_iov_static_resc(bp, vf);
}
/* prepare msix vectors in VF configuration space */
bnx2x_pretend_func(bp, HW_VF_HANDLE(bp, vf_idx));
REG_WR(bp, PCICFG_OFFSET + GRC_CONFIG_REG_VF_MSIX_CONTROL,
num_vf_queues);
+ DP(BNX2X_MSG_IOV, "set msix vec num in VF %d cfg space to %d\n",
+ vf_idx, num_vf_queues);
}
bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
rc = bnx2x_del_all_macs(bp, mac_obj, BNX2X_ETH_MAC, true);
if (rc) {
BNX2X_ERR("failed to delete eth macs\n");
- return -EINVAL;
+ rc = -EINVAL;
+ goto out;
}
/* remove existing uc list macs */
rc = bnx2x_del_all_macs(bp, mac_obj, BNX2X_UC_LIST_MAC, true);
if (rc) {
BNX2X_ERR("failed to delete uc_list macs\n");
- return -EINVAL;
+ rc = -EINVAL;
+ goto out;
}
/* configure the new mac to device */
bnx2x_set_mac_one(bp, (u8 *)&bulletin->mac, mac_obj, true,
BNX2X_ETH_MAC, &ramrod_flags);
+out:
bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_MAC);
}
&ramrod_flags);
if (rc) {
BNX2X_ERR("failed to delete vlans\n");
- return -EINVAL;
+ rc = -EINVAL;
+ goto out;
}
/* send queue update ramrod to configure default vlan and silent
rc = bnx2x_config_vlan_mac(bp, &ramrod_param);
if (rc) {
BNX2X_ERR("failed to configure vlan\n");
- return -EINVAL;
+ rc = -EINVAL;
+ goto out;
}
/* configure default vlan to vf queue and set silent
rc = bnx2x_queue_state_change(bp, &q_params);
if (rc) {
BNX2X_ERR("Failed to configure default VLAN\n");
- return rc;
+ goto out;
}
/* clear the flag indicating that this VF needs its vlan
- * (will only be set if the HV configured th Vlan before vf was
- * and we were called because the VF came up later
+ * (will only be set if the HV configured the Vlan before vf was
+ * up and we were called because the VF came up later
*/
+out:
vf->cfg_flags &= ~VF_CFG_VLAN;
-
bnx2x_unlock_vf_pf_channel(bp, vf, CHANNEL_TLV_PF_SET_VLAN);
}
- return 0;
+ return rc;
}
/* crc is the first field in the bulletin board. Compute the crc over the
} else if (bp->func_stx) {
*stats_comp = 0;
- bnx2x_post_dmae(bp, dmae, INIT_DMAE_C(bp));
+ bnx2x_issue_dmae_with_comp(bp, dmae, stats_comp);
}
}
dmae.len = len32;
/* issue the command and wait for completion */
- return bnx2x_issue_dmae_with_comp(bp, &dmae);
+ return bnx2x_issue_dmae_with_comp(bp, &dmae, bnx2x_sp(bp, wb_comp));
}
static void bnx2x_vf_mbx_resp(struct bnx2x *bp, struct bnx2x_virtf *vf)
switch (mbx->first_tlv.tl.type) {
case CHANNEL_TLV_ACQUIRE:
bnx2x_vf_mbx_acquire(bp, vf, mbx);
- break;
+ return;
case CHANNEL_TLV_INIT:
bnx2x_vf_mbx_init_vf(bp, vf, mbx);
- break;
+ return;
case CHANNEL_TLV_SETUP_Q:
bnx2x_vf_mbx_setup_q(bp, vf, mbx);
- break;
+ return;
case CHANNEL_TLV_SET_Q_FILTERS:
bnx2x_vf_mbx_set_q_filters(bp, vf, mbx);
- break;
+ return;
case CHANNEL_TLV_TEARDOWN_Q:
bnx2x_vf_mbx_teardown_q(bp, vf, mbx);
- break;
+ return;
case CHANNEL_TLV_CLOSE:
bnx2x_vf_mbx_close_vf(bp, vf, mbx);
- break;
+ return;
case CHANNEL_TLV_RELEASE:
bnx2x_vf_mbx_release_vf(bp, vf, mbx);
- break;
+ return;
case CHANNEL_TLV_UPDATE_RSS:
bnx2x_vf_mbx_update_rss(bp, vf, mbx);
- break;
+ return;
}
} else {
for (i = 0; i < 20; i++)
DP_CONT(BNX2X_MSG_IOV, "%x ",
mbx->msg->req.tlv_buf_size.tlv_buffer[i]);
+ }
- /* test whether we can respond to the VF (do we have an address
- * for it?)
- */
- if (vf->state == VF_ACQUIRED || vf->state == VF_ENABLED) {
- /* mbx_resp uses the op_rc of the VF */
- vf->op_rc = PFVF_STATUS_NOT_SUPPORTED;
+ /* can we respond to VF (do we have an address for it?) */
+ if (vf->state == VF_ACQUIRED || vf->state == VF_ENABLED) {
+ /* mbx_resp uses the op_rc of the VF */
+ vf->op_rc = PFVF_STATUS_NOT_SUPPORTED;
- /* notify the VF that we do not support this request */
- bnx2x_vf_mbx_resp(bp, vf);
- } else {
- /* can't send a response since this VF is unknown to us
- * just ack the FW to release the mailbox and unlock
- * the channel.
- */
- storm_memset_vf_mbx_ack(bp, vf->abs_vfid);
- mmiowb();
- bnx2x_unlock_vf_pf_channel(bp, vf,
- mbx->first_tlv.tl.type);
- }
+ /* notify the VF that we do not support this request */
+ bnx2x_vf_mbx_resp(bp, vf);
+ } else {
+ /* can't send a response since this VF is unknown to us
+ * just ack the FW to release the mailbox and unlock
+ * the channel.
+ */
+ storm_memset_vf_mbx_ack(bp, vf->abs_vfid);
+ /* Firmware ack should be written before unlocking channel */
+ mmiowb();
+ bnx2x_unlock_vf_pf_channel(bp, vf, mbx->first_tlv.tl.type);
}
}
{
struct cnic_dev *dev = (struct cnic_dev *) data;
struct cnic_local *cp = dev->cnic_priv;
+ struct bnx2x *bp = netdev_priv(dev->netdev);
u32 status_idx, new_status_idx;
if (unlikely(!test_bit(CNIC_F_CNIC_UP, &dev->flags)))
CNIC_WR16(dev, cp->kcq1.io_addr,
cp->kcq1.sw_prod_idx + MAX_KCQ_IDX);
- if (cp->ethdev->drv_state & CNIC_DRV_STATE_NO_FCOE) {
+ if (!CNIC_SUPPORTS_FCOE(bp)) {
cp->arm_int(dev, status_idx);
break;
}
"iSCSI CLIENT_SETUP did not complete\n");
cnic_spq_completion(dev, DRV_CTL_RET_L2_SPQ_CREDIT_CMD, 1);
cnic_ring_ctl(dev, cid, cli, 1);
- *cid_ptr = cid;
+ *cid_ptr = cid >> 4;
+ *(cid_ptr + 1) = cid * bp->db_size;
}
}
{
switch (tg3_asic_rev(tp)) {
case ASIC_REV_5719:
+ case ASIC_REV_5720:
if ((tp->phy_flags & TG3_PHYFLG_MII_SERDES) &&
!tp->pci_fn)
return true;
* So explicitly force the chip into D0 here.
*/
pci_read_config_dword(tp->pdev,
- tp->pm_cap + PCI_PM_CTRL,
+ tp->pdev->pm_cap + PCI_PM_CTRL,
&pm_reg);
pm_reg &= ~PCI_PM_CTRL_STATE_MASK;
pm_reg |= PCI_PM_CTRL_PME_ENABLE | 0 /* D0 */;
pci_write_config_dword(tp->pdev,
- tp->pm_cap + PCI_PM_CTRL,
+ tp->pdev->pm_cap + PCI_PM_CTRL,
pm_reg);
/* Also, force SERR#/PERR# in PCI command. */
tp = netdev_priv(dev);
tp->pdev = pdev;
tp->dev = dev;
- tp->pm_cap = pdev->pm_cap;
tp->rx_mode = TG3_DEF_RX_MODE;
tp->tx_mode = TG3_DEF_TX_MODE;
tp->irq_sync = 1;
u8 pci_lat_timer;
int pci_fn;
- int pm_cap;
int msi_cap;
int pcix_cap;
int pcie_readrq;
#define XGMAC_DMA_HW_FEATURE 0x00000f58 /* Enabled Hardware Features */
#define XGMAC_ADDR_AE 0x80000000
-#define XGMAC_MAX_FILTER_ADDR 31
/* PMT Control and Status */
#define XGMAC_PMT_POINTER_RESET 0x80000000
struct device *device;
struct napi_struct napi;
+ int max_macs;
struct xgmac_extra_stats xstats;
spinlock_t stats_lock;
netdev_dbg(priv->dev, "# mcasts %d, # unicast %d\n",
netdev_mc_count(dev), netdev_uc_count(dev));
- if (dev->flags & IFF_PROMISC) {
- writel(XGMAC_FRAME_FILTER_PR, ioaddr + XGMAC_FRAME_FILTER);
- return;
- }
+ if (dev->flags & IFF_PROMISC)
+ value |= XGMAC_FRAME_FILTER_PR;
memset(hash_filter, 0, sizeof(hash_filter));
- if (netdev_uc_count(dev) > XGMAC_MAX_FILTER_ADDR) {
+ if (netdev_uc_count(dev) > priv->max_macs) {
use_hash = true;
value |= XGMAC_FRAME_FILTER_HUC | XGMAC_FRAME_FILTER_HPF;
}
goto out;
}
- if ((netdev_mc_count(dev) + reg - 1) > XGMAC_MAX_FILTER_ADDR) {
+ if ((netdev_mc_count(dev) + reg - 1) > priv->max_macs) {
use_hash = true;
value |= XGMAC_FRAME_FILTER_HMC | XGMAC_FRAME_FILTER_HPF;
} else {
}
out:
- for (i = reg; i < XGMAC_MAX_FILTER_ADDR; i++)
- xgmac_set_mac_addr(ioaddr, NULL, reg);
+ for (i = reg; i <= priv->max_macs; i++)
+ xgmac_set_mac_addr(ioaddr, NULL, i);
for (i = 0; i < XGMAC_NUM_HASH; i++)
writel(hash_filter[i], ioaddr + XGMAC_HASH(i));
uid = readl(priv->base + XGMAC_VERSION);
netdev_info(ndev, "h/w version is 0x%x\n", uid);
+ /* Figure out how many valid mac address filter registers we have */
+ writel(1, priv->base + XGMAC_ADDR_HIGH(31));
+ if (readl(priv->base + XGMAC_ADDR_HIGH(31)) == 1)
+ priv->max_macs = 31;
+ else
+ priv->max_macs = 7;
+
writel(0, priv->base + XGMAC_DMA_INTR_ENA);
ndev->irq = platform_get_irq(pdev, 0);
if (ndev->irq == -ENXIO) {
pr_warn("could not create debugfs entry, continuing\n");
ret = pci_register_driver(&cxgb4_driver);
- if (ret < 0)
+ if (ret < 0) {
debugfs_remove(cxgb4_debugfs_root);
+ destroy_workqueue(workq);
+ }
register_inet6addr_notifier(&cxgb4_inet6addr_notifier);
/* DM9000 network board routine ---------------------------- */
-static void
-dm9000_reset(board_info_t * db)
-{
- dev_dbg(db->dev, "resetting device\n");
-
- /* RESET device */
- writeb(DM9000_NCR, db->io_addr);
- udelay(200);
- writeb(NCR_RST, db->io_data);
- udelay(200);
-}
-
/*
* Read a byte from I/O port
*/
writeb(value, db->io_data);
}
+static void
+dm9000_reset(board_info_t *db)
+{
+ dev_dbg(db->dev, "resetting device\n");
+
+ /* Reset DM9000, see DM9000 Application Notes V1.22 Jun 11, 2004 page 29
+ * The essential point is that we have to do a double reset, and the
+ * instruction is to set LBK into MAC internal loopback mode.
+ */
+ iow(db, DM9000_NCR, 0x03);
+ udelay(100); /* Application note says at least 20 us */
+ if (ior(db, DM9000_NCR) & 1)
+ dev_err(db->dev, "dm9000 did not respond to first reset\n");
+
+ iow(db, DM9000_NCR, 0);
+ iow(db, DM9000_NCR, 0x03);
+ udelay(100);
+ if (ior(db, DM9000_NCR) & 1)
+ dev_err(db->dev, "dm9000 did not respond to second reset\n");
+}
+
/* routines for sending block to chip */
static void dm9000_outblk_8bit(void __iomem *reg, void *data, int count)
static void dm9000_show_carrier(board_info_t *db,
unsigned carrier, unsigned nsr)
{
+ int lpa;
struct net_device *ndev = db->ndev;
+ struct mii_if_info *mii = &db->mii;
unsigned ncr = dm9000_read_locked(db, DM9000_NCR);
- if (carrier)
- dev_info(db->dev, "%s: link up, %dMbps, %s-duplex, no LPA\n",
+ if (carrier) {
+ lpa = mii->mdio_read(mii->dev, mii->phy_id, MII_LPA);
+ dev_info(db->dev,
+ "%s: link up, %dMbps, %s-duplex, lpa 0x%04X\n",
ndev->name, (nsr & NSR_SPEED) ? 10 : 100,
- (ncr & NCR_FDX) ? "full" : "half");
- else
+ (ncr & NCR_FDX) ? "full" : "half", lpa);
+ } else {
dev_info(db->dev, "%s: link down\n", ndev->name);
+ }
}
static void
(dev->features & NETIF_F_RXCSUM) ? RCSR_CSUM : 0);
iow(db, DM9000_GPCR, GPCR_GEP_CNTL); /* Let GPIO0 output */
+ iow(db, DM9000_GPR, 0);
- dm9000_phy_write(dev, 0, MII_BMCR, BMCR_RESET); /* PHY RESET */
- dm9000_phy_write(dev, 0, MII_DM_DSPCR, DSPCR_INIT_PARAM); /* Init */
+ /* If we are dealing with DM9000B, some extra steps are required: a
+ * manual phy reset, and setting init params.
+ */
+ if (db->type == TYPE_DM9000B) {
+ dm9000_phy_write(dev, 0, MII_BMCR, BMCR_RESET);
+ dm9000_phy_write(dev, 0, MII_DM_DSPCR, DSPCR_INIT_PARAM);
+ }
ncr = (db->flags & DM9000_PLATF_EXT_PHY) ? NCR_EXT_PHY : 0;
if (request_irq(dev->irq, de4x5_interrupt, IRQF_SHARED,
lp->adapter_name, dev)) {
printk("de4x5_open(): Requested IRQ%d is busy - attemping FAST/SHARE...", dev->irq);
- if (request_irq(dev->irq, de4x5_interrupt, IRQF_DISABLED | IRQF_SHARED,
+ if (request_irq(dev->irq, de4x5_interrupt, IRQF_SHARED,
lp->adapter_name, dev)) {
printk("\n Cannot get IRQ- reconfigure your hardware.\n");
disable_ast(dev);
#define BE_MIN_MTU 256
#define BE_NUM_VLANS_SUPPORTED 64
+#define BE_UMC_NUM_VLANS_SUPPORTED 15
#define BE_MAX_EQD 96u
#define BE_MAX_TX_FRAG_COUNT 30
#define BE_FLAGS_LINK_STATUS_INIT 1
#define BE_FLAGS_WORKER_SCHEDULED (1 << 3)
+#define BE_FLAGS_VLAN_PROMISC (1 << 4)
#define BE_FLAGS_NAPI_ENABLED (1 << 9)
#define BE_UC_PMAC_COUNT 30
#define BE_VF_UC_PMAC_COUNT 2
dev_err(&adapter->pdev->dev,
"opcode %d-%d failed:status %d-%d\n",
opcode, subsystem, compl_status, extd_status);
+
+ if (extd_status == MCC_ADDL_STS_INSUFFICIENT_RESOURCES)
+ return extd_status;
}
}
done:
if (lancer_chip(adapter)) {
req->hdr.version = 1;
- req->if_id = cpu_to_le16(adapter->if_handle);
} else if (BEx_chip(adapter)) {
if (adapter->function_caps & BE_FUNCTION_CAPS_SUPER_NIC)
req->hdr.version = 2;
req->hdr.version = 2;
}
+ if (req->hdr.version > 0)
+ req->if_id = cpu_to_le16(adapter->if_handle);
req->num_pages = PAGES_4K_SPANNED(q_mem->va, q_mem->size);
req->ulp_num = BE_ULP1_NUM;
req->type = BE_ETH_TX_RING_TYPE_STANDARD;
} else if (flags & IFF_ALLMULTI) {
req->if_flags_mask = req->if_flags =
cpu_to_le32(BE_IF_FLAGS_MCAST_PROMISCUOUS);
+ } else if (flags & BE_FLAGS_VLAN_PROMISC) {
+ req->if_flags_mask = cpu_to_le32(BE_IF_FLAGS_VLAN_PROMISCUOUS);
+
+ if (value == ON)
+ req->if_flags =
+ cpu_to_le32(BE_IF_FLAGS_VLAN_PROMISCUOUS);
} else {
struct netdev_hw_addr *ha;
int i = 0;
MCC_STATUS_NOT_SUPPORTED = 66
};
+#define MCC_ADDL_STS_INSUFFICIENT_RESOURCES 0x16
+
#define CQE_STATUS_COMPL_MASK 0xFFFF
#define CQE_STATUS_COMPL_SHIFT 0 /* bits 0 - 15 */
#define CQE_STATUS_EXTD_MASK 0xFFFF
u8 acpi_params;
u8 wol_param;
u16 rsvd7;
- u32 rsvd8[3];
+ u32 rsvd8[7];
} __packed;
struct be_cmd_req_get_func_config {
unsigned int eth_hdr_len;
struct iphdr *ip;
- /* Lancer ASIC has a bug wherein packets that are 32 bytes or less
+	/* Lancer, SH-R ASICs have a bug wherein packets that are 32 bytes or less
* may cause a transmit stall on that port. So the work-around is to
- * pad such packets to a 36-byte length.
+ * pad short packets (<= 32 bytes) to a 36-byte length.
*/
- if (unlikely(lancer_chip(adapter) && skb->len <= 32)) {
+ if (unlikely(!BEx_chip(adapter) && skb->len <= 32)) {
if (skb_padto(skb, 36))
goto tx_drop;
skb->len = 36;
status = be_cmd_vlan_config(adapter, adapter->if_handle,
vids, num, 1, 0);
- /* Set to VLAN promisc mode as setting VLAN filter failed */
if (status) {
- dev_info(&adapter->pdev->dev, "Exhausted VLAN HW filters.\n");
- dev_info(&adapter->pdev->dev, "Disabling HW VLAN filtering.\n");
- goto set_vlan_promisc;
+ /* Set to VLAN promisc mode as setting VLAN filter failed */
+ if (status == MCC_ADDL_STS_INSUFFICIENT_RESOURCES)
+ goto set_vlan_promisc;
+ dev_err(&adapter->pdev->dev,
+ "Setting HW VLAN filtering failed.\n");
+ } else {
+ if (adapter->flags & BE_FLAGS_VLAN_PROMISC) {
+ /* hw VLAN filtering re-enabled. */
+ status = be_cmd_rx_filter(adapter,
+ BE_FLAGS_VLAN_PROMISC, OFF);
+ if (!status) {
+ dev_info(&adapter->pdev->dev,
+ "Disabling VLAN Promiscuous mode.\n");
+ adapter->flags &= ~BE_FLAGS_VLAN_PROMISC;
+ dev_info(&adapter->pdev->dev,
+ "Re-Enabling HW VLAN filtering\n");
+ }
+ }
}
return status;
set_vlan_promisc:
- status = be_cmd_vlan_config(adapter, adapter->if_handle,
- NULL, 0, 1, 1);
+ dev_warn(&adapter->pdev->dev, "Exhausted VLAN HW filters.\n");
+
+ status = be_cmd_rx_filter(adapter, BE_FLAGS_VLAN_PROMISC, ON);
+ if (!status) {
+ dev_info(&adapter->pdev->dev, "Enable VLAN Promiscuous mode\n");
+ dev_info(&adapter->pdev->dev, "Disabling HW VLAN filtering\n");
+ adapter->flags |= BE_FLAGS_VLAN_PROMISC;
+ } else
+ dev_err(&adapter->pdev->dev,
+ "Failed to enable VLAN Promiscuous mode.\n");
return status;
}
struct be_adapter *adapter = netdev_priv(netdev);
int status = 0;
- if (!lancer_chip(adapter) && !be_physfn(adapter)) {
- status = -EINVAL;
- goto ret;
- }
/* Packets with VID 0 are always received by Lancer by default */
if (lancer_chip(adapter) && vid == 0)
struct be_adapter *adapter = netdev_priv(netdev);
int status = 0;
- if (!lancer_chip(adapter) && !be_physfn(adapter)) {
- status = -EINVAL;
- goto ret;
- }
-
/* Packets with VID 0 are always received by Lancer by default */
if (lancer_chip(adapter) && vid == 0)
goto ret;
vi->vf = vf;
vi->tx_rate = vf_cfg->tx_rate;
- vi->vlan = vf_cfg->vlan_tag;
- vi->qos = 0;
+ vi->vlan = vf_cfg->vlan_tag & VLAN_VID_MASK;
+ vi->qos = vf_cfg->vlan_tag >> VLAN_PRIO_SHIFT;
memcpy(&vi->mac, vf_cfg->mac_addr, ETH_ALEN);
return 0;
int vf, u16 vlan, u8 qos)
{
struct be_adapter *adapter = netdev_priv(netdev);
+ struct be_vf_cfg *vf_cfg = &adapter->vf_cfg[vf];
int status = 0;
if (!sriov_enabled(adapter))
return -EPERM;
- if (vf >= adapter->num_vfs || vlan > 4095)
+ if (vf >= adapter->num_vfs || vlan > 4095 || qos > 7)
return -EINVAL;
- if (vlan) {
- if (adapter->vf_cfg[vf].vlan_tag != vlan) {
+ if (vlan || qos) {
+ vlan |= qos << VLAN_PRIO_SHIFT;
+ if (vf_cfg->vlan_tag != vlan) {
/* If this is new value, program it. Else skip. */
- adapter->vf_cfg[vf].vlan_tag = vlan;
-
- status = be_cmd_set_hsw_config(adapter, vlan,
- vf + 1, adapter->vf_cfg[vf].if_handle, 0);
+ vf_cfg->vlan_tag = vlan;
+ status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
+ vf_cfg->if_handle, 0);
}
} else {
/* Reset Transparent Vlan Tagging. */
- adapter->vf_cfg[vf].vlan_tag = 0;
- vlan = adapter->vf_cfg[vf].def_vid;
+ vf_cfg->vlan_tag = 0;
+ vlan = vf_cfg->def_vid;
status = be_cmd_set_hsw_config(adapter, vlan, vf + 1,
- adapter->vf_cfg[vf].if_handle, 0);
+ vf_cfg->if_handle, 0);
}
struct be_resources res = {0};
struct be_vf_cfg *vf_cfg;
u32 cap_flags, en_flags, vf;
- int status;
+ int status = 0;
cap_flags = BE_IF_FLAGS_UNTAGGED | BE_IF_FLAGS_BROADCAST |
BE_IF_FLAGS_MULTICAST;
if (adapter->function_mode & FLEX10_MODE)
res->max_vlans = BE_NUM_VLANS_SUPPORTED/8;
+ else if (adapter->function_mode & UMC_ENABLED)
+ res->max_vlans = BE_UMC_NUM_VLANS_SUPPORTED;
else
res->max_vlans = BE_NUM_VLANS_SUPPORTED;
res->max_mcast_mac = BE_MAX_MC;
goto failed_irq;
}
ret = devm_request_irq(&pdev->dev, irq, fec_enet_interrupt,
- IRQF_DISABLED, pdev->name, ndev);
+ 0, pdev->name, ndev);
if (ret)
goto failed_irq;
}
#include <asm/io.h>
#include <asm/reg.h>
+#include <asm/mpc85xx.h>
#include <asm/irq.h>
#include <asm/uaccess.h>
#include <linux/module.h>
}
}
-static void gfar_detect_errata(struct gfar_private *priv)
+static void __gfar_detect_errata_83xx(struct gfar_private *priv)
{
- struct device *dev = &priv->ofdev->dev;
unsigned int pvr = mfspr(SPRN_PVR);
unsigned int svr = mfspr(SPRN_SVR);
unsigned int mod = (svr >> 16) & 0xfff6; /* w/o E suffix */
(pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0))
priv->errata |= GFAR_ERRATA_76;
- /* MPC8313 and MPC837x all rev */
- if ((pvr == 0x80850010 && mod == 0x80b0) ||
- (pvr == 0x80861010 && (mod & 0xfff9) == 0x80c0))
- priv->errata |= GFAR_ERRATA_A002;
+ /* MPC8313 Rev < 2.0 */
+ if (pvr == 0x80850010 && mod == 0x80b0 && rev < 0x0020)
+ priv->errata |= GFAR_ERRATA_12;
+}
- /* MPC8313 Rev < 2.0, MPC8548 rev 2.0 */
- if ((pvr == 0x80850010 && mod == 0x80b0 && rev < 0x0020) ||
- (pvr == 0x80210020 && mod == 0x8030 && rev == 0x0020))
+static void __gfar_detect_errata_85xx(struct gfar_private *priv)
+{
+ unsigned int svr = mfspr(SPRN_SVR);
+
+ if ((SVR_SOC_VER(svr) == SVR_8548) && (SVR_REV(svr) == 0x20))
priv->errata |= GFAR_ERRATA_12;
+ if (((SVR_SOC_VER(svr) == SVR_P2020) && (SVR_REV(svr) < 0x20)) ||
+ ((SVR_SOC_VER(svr) == SVR_P2010) && (SVR_REV(svr) < 0x20)))
+ priv->errata |= GFAR_ERRATA_76; /* aka eTSEC 20 */
+}
+
+static void gfar_detect_errata(struct gfar_private *priv)
+{
+ struct device *dev = &priv->ofdev->dev;
+
+ /* no plans to fix */
+ priv->errata |= GFAR_ERRATA_A002;
+
+ if (pvr_version_is(PVR_VER_E500V1) || pvr_version_is(PVR_VER_E500V2))
+ __gfar_detect_errata_85xx(priv);
+ else /* non-mpc85xx parts, i.e. e300 core based */
+ __gfar_detect_errata_83xx(priv);
if (priv->errata)
dev_info(dev, "enabled errata workarounds, flags: 0x%x\n",
/* Normaly TSEC should not hang on GRS commands, so we should
* actually wait for IEVENT_GRSC flag.
*/
- if (likely(!gfar_has_errata(priv, GFAR_ERRATA_A002)))
+ if (!gfar_has_errata(priv, GFAR_ERRATA_A002))
return 0;
/* Read the eTSEC register at offset 0xD1C. If bits 7-14 are
err = -ENODEV;
etsects->caps = ptp_gianfar_caps;
- etsects->cksel = DEFAULT_CKSEL;
+
+ if (get_of_u32(node, "fsl,cksel", &etsects->cksel))
+ etsects->cksel = DEFAULT_CKSEL;
if (get_of_u32(node, "fsl,tclk-period", &etsects->tclk_period) ||
get_of_u32(node, "fsl,tmr-prsc", &etsects->tmr_prsc) ||
/* New: if bus is PCI or EISA, interrupts might be shared interrupts */
if (request_irq(dev->irq, hp100_interrupt,
lp->bus == HP100_BUS_PCI || lp->bus ==
- HP100_BUS_EISA ? IRQF_SHARED : IRQF_DISABLED,
+ HP100_BUS_EISA ? IRQF_SHARED : 0,
"hp100", dev)) {
printk("hp100: %s: unable to get IRQ %d\n", dev->name, dev->irq);
return -EAGAIN;
static int ehea_remove(struct platform_device *dev);
+static struct of_device_id ehea_module_device_table[] = {
+ {
+ .name = "lhea",
+ .compatible = "IBM,lhea",
+ },
+ {
+ .type = "network",
+ .compatible = "IBM,lhea-ethernet",
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, ehea_module_device_table);
+
static struct of_device_id ehea_device_table[] = {
{
.name = "lhea",
},
{},
};
-MODULE_DEVICE_TABLE(of, ehea_device_table);
static struct platform_driver ehea_driver = {
.driver = {
ret = ibmebus_request_irq(port->qp_eq->attr.ist1,
ehea_qp_aff_irq_handler,
- IRQF_DISABLED, port->int_aff_name, port);
+ 0, port->int_aff_name, port);
if (ret) {
netdev_err(dev, "failed registering irq for qp_aff_irq_handler:ist=%X\n",
port->qp_eq->attr.ist1);
"%s-queue%d", dev->name, i);
ret = ibmebus_request_irq(pr->eq->attr.ist1,
ehea_recv_irq_handler,
- IRQF_DISABLED, pr->int_send_name,
- pr);
+ 0, pr->int_send_name, pr);
if (ret) {
netdev_err(dev, "failed registering irq for ehea_queue port_res_nr:%d, ist=%X\n",
i, pr->eq->attr.ist1);
}
ret = ibmebus_request_irq(adapter->neq->attr.ist1,
- ehea_interrupt_neq, IRQF_DISABLED,
+ ehea_interrupt_neq, 0,
"ehea_neq", adapter);
if (ret) {
dev_err(&dev->dev, "requesting NEQ IRQ failed\n");
else
mask &= ~(1 << 30);
}
+ if (mac->type == e1000_pch2lan) {
+ /* SHRAH[0,1,2] different than previous */
+ if (i == 7)
+ mask &= 0xFFF4FFFF;
+ /* SHRAH[3] different than SHRAH[0,1,2] */
+ if (i == 10)
+ mask |= (1 << 30);
+ }
REG_PATTERN_TEST_ARRAY(E1000_RA, ((i << 1) + 1), mask,
0xFFFFFFFF);
return;
}
- if (index < hw->mac.rar_entry_count) {
+ /* RAR[1-6] are owned by manageability. Skip those and program the
+ * next address into the SHRA register array.
+ */
+ if (index < (u32)(hw->mac.rar_entry_count - 6)) {
s32 ret_val;
ret_val = e1000_acquire_swflag_ich8lan(hw);
if (ret_val)
goto release;
- /* Copy both RAL/H (rar_entry_count) and SHRAL/H (+4) to PHY */
- for (i = 0; i < (hw->mac.rar_entry_count + 4); i++) {
+ /* Copy both RAL/H (rar_entry_count) and SHRAL/H to PHY */
+ for (i = 0; i < (hw->mac.rar_entry_count); i++) {
mac_reg = er32(RAL(i));
hw->phy.ops.write_reg_page(hw, BM_RAR_L(i),
(u16)(mac_reg & 0xFFFF));
return ret_val;
if (enable) {
- /* Write Rx addresses (rar_entry_count for RAL/H, +4 for
+ /* Write Rx addresses (rar_entry_count for RAL/H, and
* SHRAL/H) and initial CRC values to the MAC
*/
- for (i = 0; i < (hw->mac.rar_entry_count + 4); i++) {
+ for (i = 0; i < hw->mac.rar_entry_count; i++) {
u8 mac_addr[ETH_ALEN] = { 0 };
u32 addr_high, addr_low;
#define PCIE_ICH8_SNOOP_ALL PCIE_NO_SNOOP_ALL
#define E1000_ICH_RAR_ENTRIES 7
-#define E1000_PCH2_RAR_ENTRIES 5 /* RAR[0], SHRA[0-3] */
+#define E1000_PCH2_RAR_ENTRIES 11 /* RAR[0-6], SHRA[0-3] */
#define E1000_PCH_LPT_RAR_ENTRIES 12 /* RAR[0], SHRA[0-10] */
#define PHY_PAGE_SHIFT 5
*/
if ((hw->phy.type == e1000_phy_igp_3 ||
hw->phy.type == e1000_phy_bm) &&
- (hw->mac.autoneg == true) &&
+ hw->mac.autoneg &&
(adapter->link_speed == SPEED_10 ||
adapter->link_speed == SPEED_100) &&
(adapter->link_duplex == HALF_DUPLEX)) {
details = I40E_ADMINQ_DETAILS(hw->aq.asq, hw->aq.asq.next_to_use);
if (cmd_details) {
- memcpy(details, cmd_details,
- sizeof(struct i40e_asq_cmd_details));
+ *details = *cmd_details;
/* If the cmd_details are defined copy the cookie. The
* cpu_to_le32 is not needed here because the data is ignored
desc_on_ring = I40E_ADMINQ_DESC(hw->aq.asq, hw->aq.asq.next_to_use);
/* if the desc is available copy the temp desc to the right place */
- memcpy(desc_on_ring, desc, sizeof(struct i40e_aq_desc));
+ *desc_on_ring = *desc;
/* if buff is not NULL assume indirect command */
if (buff != NULL) {
/* if ready, copy the desc back to temp */
if (i40e_asq_done(hw)) {
- memcpy(desc, desc_on_ring, sizeof(struct i40e_aq_desc));
+ *desc = *desc_on_ring;
if (buff != NULL)
memcpy(buff, dma_buff->va, buff_size);
retval = le16_to_cpu(desc->retval);
/* save link status information */
if (link)
- memcpy(link, hw_link_info, sizeof(struct i40e_link_status));
+ *link = *hw_link_info;
/* flag cleared so helper functions don't call AQ again */
hw->phy.get_link_info = false;
mem->size = ALIGN(size, alignment);
mem->va = dma_zalloc_coherent(&pf->pdev->dev, mem->size,
&mem->pa, GFP_KERNEL);
- if (mem->va)
- return 0;
+ if (!mem->va)
+ return -ENOMEM;
- return -ENOMEM;
+ return 0;
}
/**
mem->size = size;
mem->va = kzalloc(size, GFP_KERNEL);
- if (mem->va)
- return 0;
+ if (!mem->va)
+ return -ENOMEM;
- return -ENOMEM;
+ return 0;
}
/**
u16 needed, u16 id)
{
int ret = -ENOMEM;
- int i = 0;
- int j = 0;
+ int i, j;
if (!pile || needed == 0 || id >= I40E_PILE_VALID_BIT) {
dev_info(&pf->pdev->dev,
/* start the linear search with an imperfect hint */
i = pile->search_hint;
- while (i < pile->num_entries && ret < 0) {
+ while (i < pile->num_entries) {
/* skip already allocated entries */
if (pile->list[i] & I40E_PILE_VALID_BIT) {
i++;
pile->list[i+j] = id | I40E_PILE_VALID_BIT;
ret = i;
pile->search_hint = i + j;
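+			/* found a large enough block; record it and stop the linear search */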
+ break;
} else {
/* not enough, so skip over it and continue looking */
i += j;
bool add_happened = false;
int filter_list_len = 0;
u32 changed_flags = 0;
- i40e_status ret = 0;
+ i40e_status aq_ret = 0;
struct i40e_pf *pf;
int num_add = 0;
int num_del = 0;
/* flush a full buffer */
if (num_del == filter_list_len) {
- ret = i40e_aq_remove_macvlan(&pf->hw,
+ aq_ret = i40e_aq_remove_macvlan(&pf->hw,
vsi->seid, del_list, num_del,
NULL);
num_del = 0;
memset(del_list, 0, sizeof(*del_list));
- if (ret)
+ if (aq_ret)
dev_info(&pf->pdev->dev,
"ignoring delete macvlan error, err %d, aq_err %d while flushing a full buffer\n",
- ret,
+ aq_ret,
pf->hw.aq.asq_last_status);
}
}
if (num_del) {
- ret = i40e_aq_remove_macvlan(&pf->hw, vsi->seid,
+ aq_ret = i40e_aq_remove_macvlan(&pf->hw, vsi->seid,
del_list, num_del, NULL);
num_del = 0;
- if (ret)
+ if (aq_ret)
dev_info(&pf->pdev->dev,
"ignoring delete macvlan error, err %d, aq_err %d\n",
- ret, pf->hw.aq.asq_last_status);
+ aq_ret, pf->hw.aq.asq_last_status);
}
kfree(del_list);
/* flush a full buffer */
if (num_add == filter_list_len) {
- ret = i40e_aq_add_macvlan(&pf->hw,
- vsi->seid,
- add_list,
- num_add,
- NULL);
+ aq_ret = i40e_aq_add_macvlan(&pf->hw, vsi->seid,
+ add_list, num_add,
+ NULL);
num_add = 0;
- if (ret)
+ if (aq_ret)
break;
memset(add_list, 0, sizeof(*add_list));
}
}
if (num_add) {
- ret = i40e_aq_add_macvlan(&pf->hw, vsi->seid,
- add_list, num_add, NULL);
+ aq_ret = i40e_aq_add_macvlan(&pf->hw, vsi->seid,
+ add_list, num_add, NULL);
num_add = 0;
}
kfree(add_list);
add_list = NULL;
- if (add_happened && (!ret)) {
+ if (add_happened && (!aq_ret)) {
/* do nothing */;
- } else if (add_happened && (ret)) {
+ } else if (add_happened && (aq_ret)) {
dev_info(&pf->pdev->dev,
"add filter failed, err %d, aq_err %d\n",
- ret, pf->hw.aq.asq_last_status);
+ aq_ret, pf->hw.aq.asq_last_status);
if ((pf->hw.aq.asq_last_status == I40E_AQ_RC_ENOSPC) &&
!test_bit(__I40E_FILTER_OVERFLOW_PROMISC,
&vsi->state)) {
if (changed_flags & IFF_ALLMULTI) {
bool cur_multipromisc;
cur_multipromisc = !!(vsi->current_netdev_flags & IFF_ALLMULTI);
- ret = i40e_aq_set_vsi_multicast_promiscuous(&vsi->back->hw,
- vsi->seid,
- cur_multipromisc,
- NULL);
- if (ret)
+ aq_ret = i40e_aq_set_vsi_multicast_promiscuous(&vsi->back->hw,
+ vsi->seid,
+ cur_multipromisc,
+ NULL);
+ if (aq_ret)
dev_info(&pf->pdev->dev,
"set multi promisc failed, err %d, aq_err %d\n",
- ret, pf->hw.aq.asq_last_status);
+ aq_ret, pf->hw.aq.asq_last_status);
}
if ((changed_flags & IFF_PROMISC) || promisc_forced_on) {
bool cur_promisc;
cur_promisc = (!!(vsi->current_netdev_flags & IFF_PROMISC) ||
test_bit(__I40E_FILTER_OVERFLOW_PROMISC,
&vsi->state));
- ret = i40e_aq_set_vsi_unicast_promiscuous(&vsi->back->hw,
- vsi->seid,
- cur_promisc,
- NULL);
- if (ret)
+ aq_ret = i40e_aq_set_vsi_unicast_promiscuous(&vsi->back->hw,
+ vsi->seid,
+ cur_promisc, NULL);
+ if (aq_ret)
dev_info(&pf->pdev->dev,
"set uni promisc failed, err %d, aq_err %d\n",
- ret, pf->hw.aq.asq_last_status);
+ aq_ret, pf->hw.aq.asq_last_status);
}
clear_bit(__I40E_CONFIG_BUSY, &vsi->state);
* i40e_vsi_kill_vlan - Remove vsi membership for given vlan
* @vsi: the vsi being configured
* @vid: vlan id to be removed (0 = untagged only , -1 = any)
+ *
+ * Return: 0 on success or negative otherwise
**/
int i40e_vsi_kill_vlan(struct i40e_vsi *vsi, s16 vid)
{
* i40e_vlan_rx_add_vid - Add a vlan id filter to HW offload
* @netdev: network interface to be adjusted
* @vid: vlan id to be added
+ *
+ * net_device_ops implementation for adding vlan ids
**/
static int i40e_vlan_rx_add_vid(struct net_device *netdev,
__always_unused __be16 proto, u16 vid)
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
- int ret;
+ int ret = 0;
if (vid > 4095)
- return 0;
+ return -EINVAL;
+
+ netdev_info(netdev, "adding %pM vid=%d\n", netdev->dev_addr, vid);
- netdev_info(vsi->netdev, "adding %pM vid=%d\n",
- netdev->dev_addr, vid);
/* If the network stack called us with vid = 0, we should
* indicate to i40e_vsi_add_vlan() that we want to receive
* any traffic (i.e. with any vlan tag, or untagged)
*/
ret = i40e_vsi_add_vlan(vsi, vid ? vid : I40E_VLAN_ANY);
- if (!ret) {
- if (vid < VLAN_N_VID)
- set_bit(vid, vsi->active_vlans);
- }
+ if (!ret && (vid < VLAN_N_VID))
+ set_bit(vid, vsi->active_vlans);
- return 0;
+ return ret;
}
/**
* i40e_vlan_rx_kill_vid - Remove a vlan id filter from HW offload
* @netdev: network interface to be adjusted
* @vid: vlan id to be removed
+ *
+ * net_device_ops implementation for removing vlan ids
**/
static int i40e_vlan_rx_kill_vid(struct net_device *netdev,
__always_unused __be16 proto, u16 vid)
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
- netdev_info(vsi->netdev, "removing %pM vid=%d\n",
- netdev->dev_addr, vid);
+ netdev_info(netdev, "removing %pM vid=%d\n", netdev->dev_addr, vid);
+
/* return code is ignored as there is nothing a user
* can do about failure to remove and a log message was
- * already printed from another function
+ * already printed from the other function
*/
i40e_vsi_kill_vlan(vsi, vid);
clear_bit(vid, vsi->active_vlans);
+
return 0;
}
* @vsi: the vsi being adjusted
* @vid: the vlan id to set as a PVID
**/
-i40e_status i40e_vsi_add_pvid(struct i40e_vsi *vsi, u16 vid)
+int i40e_vsi_add_pvid(struct i40e_vsi *vsi, u16 vid)
{
struct i40e_vsi_context ctxt;
- i40e_status ret;
+ i40e_status aq_ret;
vsi->info.valid_sections = cpu_to_le16(I40E_AQ_VSI_PROP_VLAN_VALID);
vsi->info.pvid = cpu_to_le16(vid);
ctxt.seid = vsi->seid;
memcpy(&ctxt.info, &vsi->info, sizeof(vsi->info));
- ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
- if (ret) {
+ aq_ret = i40e_aq_update_vsi_params(&vsi->back->hw, &ctxt, NULL);
+ if (aq_ret) {
dev_info(&vsi->back->pdev->dev,
"%s: update vsi failed, aq_err=%d\n",
__func__, vsi->back->hw.aq.asq_last_status);
+ return -ENOENT;
}
- return ret;
+ return 0;
}
/**
**/
static u8 i40e_dcb_get_num_tc(struct i40e_dcbx_config *dcbcfg)
{
- int num_tc = 0, i;
+ u8 num_tc = 0;
+ int i;
/* Scan the ETS Config Priority Table to find
* traffic class enabled for a given priority
/* Traffic class index starts from zero so
* increment to return the actual count
*/
- num_tc++;
-
- return num_tc;
+ return num_tc + 1;
}
/**
struct i40e_aqc_query_vsi_bw_config_resp bw_config = {0};
struct i40e_pf *pf = vsi->back;
struct i40e_hw *hw = &pf->hw;
+ i40e_status aq_ret;
u32 tc_bw_max;
- int ret;
int i;
/* Get the VSI level BW configuration */
- ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL);
- if (ret) {
+ aq_ret = i40e_aq_query_vsi_bw_config(hw, vsi->seid, &bw_config, NULL);
+ if (aq_ret) {
dev_info(&pf->pdev->dev,
"couldn't get pf vsi bw config, err %d, aq_err %d\n",
- ret, pf->hw.aq.asq_last_status);
- return ret;
+ aq_ret, pf->hw.aq.asq_last_status);
+ return -EINVAL;
}
/* Get the VSI level BW configuration per TC */
- ret = i40e_aq_query_vsi_ets_sla_config(hw, vsi->seid,
- &bw_ets_config,
- NULL);
- if (ret) {
+ aq_ret = i40e_aq_query_vsi_ets_sla_config(hw, vsi->seid, &bw_ets_config,
+ NULL);
+ if (aq_ret) {
dev_info(&pf->pdev->dev,
"couldn't get pf vsi ets bw config, err %d, aq_err %d\n",
- ret, pf->hw.aq.asq_last_status);
- return ret;
+ aq_ret, pf->hw.aq.asq_last_status);
+ return -EINVAL;
}
if (bw_config.tc_valid_bits != bw_ets_config.tc_valid_bits) {
/* 3 bits out of 4 for each TC */
vsi->bw_ets_max_quanta[i] = (u8)((tc_bw_max >> (i*4)) & 0x7);
}
- return ret;
+
+ return 0;
}
/**
*
* Returns 0 on success, negative value on failure
**/
-static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi,
- u8 enabled_tc,
+static int i40e_vsi_configure_bw_alloc(struct i40e_vsi *vsi, u8 enabled_tc,
u8 *bw_share)
{
struct i40e_aqc_configure_vsi_tc_bw_data bw_data;
- int i, ret = 0;
+ i40e_status aq_ret;
+ int i;
bw_data.tc_valid_bits = enabled_tc;
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
bw_data.tc_bw_credits[i] = bw_share[i];
- ret = i40e_aq_config_vsi_tc_bw(&vsi->back->hw, vsi->seid,
- &bw_data, NULL);
- if (ret) {
+ aq_ret = i40e_aq_config_vsi_tc_bw(&vsi->back->hw, vsi->seid, &bw_data,
+ NULL);
+ if (aq_ret) {
dev_info(&vsi->back->pdev->dev,
"%s: AQ command Config VSI BW allocation per TC failed = %d\n",
__func__, vsi->back->hw.aq.asq_last_status);
- return ret;
+ return -EINVAL;
}
for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)
vsi->info.qs_handle[i] = bw_data.qs_handles[i];
- return ret;
+ return 0;
}
/**
u32 ctrl_ext;
u32 mdic;
+	/* Extra read required for some PHYs on i354 */
+ if (hw->mac.type == e1000_i354)
+ igb_get_phy_id(hw);
+
/* For SGMII PHYs, we try the list of possible addresses until
* we find one that works. For non-SGMII PHYs
* (e.g. integrated copper PHYs), an address of 1 should
static s32 igb_set_default_fc(struct e1000_hw *hw)
{
s32 ret_val = 0;
+ u16 lan_offset;
u16 nvm_data;
/* Read and store word 0x0F of the EEPROM. This word contains bits
* control setting, then the variable hw->fc will
* be initialized based on a value in the EEPROM.
*/
- ret_val = hw->nvm.ops.read(hw, NVM_INIT_CONTROL2_REG, 1, &nvm_data);
+ if (hw->mac.type == e1000_i350) {
+ lan_offset = NVM_82580_LAN_FUNC_OFFSET(hw->bus.func);
+ ret_val = hw->nvm.ops.read(hw, NVM_INIT_CONTROL2_REG
+ + lan_offset, 1, &nvm_data);
+ } else {
+ ret_val = hw->nvm.ops.read(hw, NVM_INIT_CONTROL2_REG,
+ 1, &nvm_data);
+ }
if (ret_val) {
hw_dbg("NVM Read Error\n");
igb_write_phy_reg(hw, I347AT4_PAGE_SELECT, 0);
igb_write_phy_reg(hw, PHY_CONTROL, 0x4140);
}
+ } else if (hw->phy.type == e1000_phy_82580) {
+ /* enable MII loopback */
+ igb_write_phy_reg(hw, I82580_PHY_LBK_CTRL, 0x8041);
}
/* add small delay to avoid loopback test failure */
(hw->phy.media_type != e1000_media_type_copper))
return -EOPNOTSUPP;
+ memset(&eee_curr, 0, sizeof(struct ethtool_eee));
+
ret_val = igb_get_eee(netdev, &eee_curr);
if (ret_val)
return ret_val;
bool autoneg = false;
bool link_up;
+ /* SFP type is needed for get_link_capabilities */
+ if (hw->phy.media_type & (ixgbe_media_type_fiber |
+ ixgbe_media_type_fiber_qsfp)) {
+ if (hw->phy.sfp_type == ixgbe_sfp_type_not_present)
+ hw->phy.ops.identify_sfp(hw);
+ }
+
hw->mac.ops.get_link_capabilities(hw, &supported_link, &autoneg);
/* set the supported link speeds */
ecmd->advertising |= ADVERTISED_1000baseT_Full;
if (supported_link & IXGBE_LINK_SPEED_100_FULL)
ecmd->advertising |= ADVERTISED_100baseT_Full;
+
+ if (hw->phy.multispeed_fiber && !autoneg) {
+ if (supported_link & IXGBE_LINK_SPEED_10GB_FULL)
+ ecmd->advertising = ADVERTISED_10000baseT_Full;
+ }
}
if (autoneg) {
if (ecmd->advertising & ~ecmd->supported)
return -EINVAL;
+ /* only allow one speed at a time if no autoneg */
+ if (!ecmd->autoneg && hw->phy.multispeed_fiber) {
+ if (ecmd->advertising ==
+ (ADVERTISED_10000baseT_Full |
+ ADVERTISED_1000baseT_Full))
+ return -EINVAL;
+ }
+
old = hw->phy.autoneg_advertised;
advertised = 0;
if (ecmd->advertising & ADVERTISED_10000baseT_Full)
unsigned int size = 1024;
netdev_tx_t tx_ret_val;
struct sk_buff *skb;
+ u32 flags_orig = adapter->flags;
+
+ /* DCB can modify the frames on Tx */
+ adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
/* allocate test skb */
skb = alloc_skb(size, GFP_KERNEL);
/* free the original skb */
kfree_skb(skb);
+ adapter->flags = flags_orig;
return ret_val;
}
{
struct ixgbe_hw *hw = &adapter->hw;
int i;
- u32 rxctrl;
+ u32 rxctrl, rfctl;
/* disable receives while setting up the descriptors */
rxctrl = IXGBE_READ_REG(hw, IXGBE_RXCTRL);
ixgbe_setup_psrtype(adapter);
ixgbe_setup_rdrxctl(adapter);
+ /* RSC Setup */
+ rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
+ rfctl &= ~IXGBE_RFCTL_RSC_DIS;
+ if (!(adapter->flags2 & IXGBE_FLAG2_RSC_ENABLED))
+ rfctl |= IXGBE_RFCTL_RSC_DIS;
+ IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
+
/* Program registers for the distribution of queues */
ixgbe_setup_mrqc(adapter);
adapter->flags &= ~IXGBE_FLAG_NEED_LINK_CONFIG;
speed = hw->phy.autoneg_advertised;
- if ((!speed) && (hw->mac.ops.get_link_capabilities))
+ if ((!speed) && (hw->mac.ops.get_link_capabilities)) {
hw->mac.ops.get_link_capabilities(hw, &speed, &autoneg);
+
+		/* set up the highest link speed when autoneg is disabled */
+ if (!autoneg) {
+ if (speed & IXGBE_LINK_SPEED_10GB_FULL)
+ speed = IXGBE_LINK_SPEED_10GB_FULL;
+ }
+ }
+
if (hw->mac.ops.setup_link)
hw->mac.ops.setup_link(hw, speed, true);
#define IXGBE_RFCTL_ISCSI_DIS 0x00000001
#define IXGBE_RFCTL_ISCSI_DWC_MASK 0x0000003E
#define IXGBE_RFCTL_ISCSI_DWC_SHIFT 1
+#define IXGBE_RFCTL_RSC_DIS 0x00000020
#define IXGBE_RFCTL_NFSW_DIS 0x00000040
#define IXGBE_RFCTL_NFSR_DIS 0x00000080
#define IXGBE_RFCTL_NFS_VER_MASK 0x00000300
if (IS_TX(i)) {
ltq_dma_alloc_tx(&ch->dma);
- request_irq(irq, ltq_etop_dma_irq, IRQF_DISABLED,
- "etop_tx", priv);
+ request_irq(irq, ltq_etop_dma_irq, 0, "etop_tx", priv);
} else if (IS_RX(i)) {
ltq_dma_alloc_rx(&ch->dma);
for (ch->dma.desc = 0; ch->dma.desc < LTQ_DESC_NUM;
if (ltq_etop_alloc_skb(ch))
return -ENOMEM;
ch->dma.desc = 0;
- request_irq(irq, ltq_etop_dma_irq, IRQF_DISABLED,
- "etop_rx", priv);
+ request_irq(irq, ltq_etop_dma_irq, 0, "etop_rx", priv);
}
ch->dma.irq = irq;
}
p->rx_discard += rdlp(mp, RX_DISCARD_FRAME_CNT);
p->rx_overrun += rdlp(mp, RX_OVERRUN_FRAME_CNT);
spin_unlock_bh(&mp->mib_counters_lock);
-
- mod_timer(&mp->mib_counters_timer, jiffies + 30 * HZ);
}
static void mib_counters_timer_wrapper(unsigned long _mp)
{
struct mv643xx_eth_private *mp = (void *)_mp;
-
mib_counters_update(mp);
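+	/* periodic statistics refresh; the timer re-arms itself every 30 seconds */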
+ mod_timer(&mp->mib_counters_timer, jiffies + 30 * HZ);
}
mp->int_mask |= INT_TX_END_0 << i;
}
+ add_timer(&mp->mib_counters_timer);
port_start(mp);
wrlp(mp, INT_MASK_EXT, INT_EXT_LINK_PHY | INT_EXT_TX);
if (!ppdev)
return -ENOMEM;
ppdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+ ppdev->dev.of_node = pnp;
ret = platform_device_add_resources(ppdev, &res, 1);
if (ret)
mp->mib_counters_timer.data = (unsigned long)mp;
mp->mib_counters_timer.function = mib_counters_timer_wrapper;
mp->mib_counters_timer.expires = jiffies + 30 * HZ;
- add_timer(&mp->mib_counters_timer);
spin_lock_init(&mp->mib_counters_lock);
struct pxa168_eth_private *pep = netdev_priv(dev);
int err;
- err = request_irq(dev->irq, pxa168_eth_int_handler,
- IRQF_DISABLED, dev->name, dev);
+ err = request_irq(dev->irq, pxa168_eth_int_handler, 0, dev->name, dev);
if (err) {
dev_err(&dev->dev, "can't assign irq\n");
return -EAGAIN;
PCI_DMA_FROMDEVICE);
skge_rx_reuse(e, skge->rx_buf_size);
} else {
+ struct skge_element ee;
struct sk_buff *nskb;
nskb = netdev_alloc_skb_ip_align(dev, skge->rx_buf_size);
if (!nskb)
goto resubmit;
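+		/* save the old element; skge_rx_setup() reuses it for the new buffer */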
+ ee = *e;
+
+ skb = ee.skb;
+ prefetch(skb->data);
+
if (skge_rx_setup(skge, e, nskb, skge->rx_buf_size) < 0) {
dev_kfree_skb(nskb);
goto resubmit;
}
pci_unmap_single(skge->hw->pdev,
- dma_unmap_addr(e, mapaddr),
- dma_unmap_len(e, maplen),
+ dma_unmap_addr(&ee, mapaddr),
+ dma_unmap_len(&ee, maplen),
PCI_DMA_FROMDEVICE);
- skb = e->skb;
- prefetch(skb->data);
}
skb_put(skb, len);
for (i = 0; i < priv->tx_ring_num; i++) {
priv->tx_cq[i].moder_cnt = priv->tx_frames;
priv->tx_cq[i].moder_time = priv->tx_usecs;
- err = mlx4_en_set_cq_moder(priv, &priv->tx_cq[i]);
- if (err)
- return err;
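+		/* only program the CQ moderation registers while the port is up */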
+ if (priv->port_up) {
+ err = mlx4_en_set_cq_moder(priv, &priv->tx_cq[i]);
+ if (err)
+ return err;
+ }
}
if (priv->adaptive_rx_coal)
priv->rx_cq[i].moder_cnt = priv->rx_frames;
priv->rx_cq[i].moder_time = priv->rx_usecs;
priv->last_moder_time[i] = MLX4_EN_AUTO_CONF;
- err = mlx4_en_set_cq_moder(priv, &priv->rx_cq[i]);
- if (err)
- return err;
+ if (priv->port_up) {
+ err = mlx4_en_set_cq_moder(priv, &priv->rx_cq[i]);
+ if (err)
+ return err;
+ }
}
return err;
put_page(page);
return -ENOMEM;
}
- page_alloc->size = PAGE_SIZE << order;
+ page_alloc->page_size = PAGE_SIZE << order;
page_alloc->page = page;
page_alloc->dma = dma;
- page_alloc->offset = frag_info->frag_align;
+ page_alloc->page_offset = frag_info->frag_align;
/* Not doing get_page() for each frag is a big win
	 * on asymmetric workloads.
*/
- atomic_set(&page->_count, page_alloc->size / frag_info->frag_stride);
+ atomic_set(&page->_count,
+ page_alloc->page_size / frag_info->frag_stride);
return 0;
}
for (i = 0; i < priv->num_frags; i++) {
frag_info = &priv->frag_info[i];
page_alloc[i] = ring_alloc[i];
- page_alloc[i].offset += frag_info->frag_stride;
- if (page_alloc[i].offset + frag_info->frag_stride <= ring_alloc[i].size)
+ page_alloc[i].page_offset += frag_info->frag_stride;
+
+ if (page_alloc[i].page_offset + frag_info->frag_stride <=
+ ring_alloc[i].page_size)
continue;
+
if (mlx4_alloc_pages(priv, &page_alloc[i], frag_info, gfp))
goto out;
}
for (i = 0; i < priv->num_frags; i++) {
frags[i] = ring_alloc[i];
- dma = ring_alloc[i].dma + ring_alloc[i].offset;
+ dma = ring_alloc[i].dma + ring_alloc[i].page_offset;
ring_alloc[i] = page_alloc[i];
rx_desc->data[i].addr = cpu_to_be64(dma);
}
frag_info = &priv->frag_info[i];
if (page_alloc[i].page != ring_alloc[i].page) {
dma_unmap_page(priv->ddev, page_alloc[i].dma,
- page_alloc[i].size, PCI_DMA_FROMDEVICE);
+ page_alloc[i].page_size, PCI_DMA_FROMDEVICE);
page = page_alloc[i].page;
atomic_set(&page->_count, 1);
put_page(page);
int i)
{
const struct mlx4_en_frag_info *frag_info = &priv->frag_info[i];
+ u32 next_frag_end = frags[i].page_offset + 2 * frag_info->frag_stride;
+
- if (frags[i].offset + frag_info->frag_stride > frags[i].size)
- dma_unmap_page(priv->ddev, frags[i].dma, frags[i].size,
- PCI_DMA_FROMDEVICE);
+ if (next_frag_end > frags[i].page_size)
+ dma_unmap_page(priv->ddev, frags[i].dma, frags[i].page_size,
+ PCI_DMA_FROMDEVICE);
if (frags[i].page)
put_page(frags[i].page);
page_alloc = &ring->page_alloc[i];
dma_unmap_page(priv->ddev, page_alloc->dma,
- page_alloc->size, PCI_DMA_FROMDEVICE);
+ page_alloc->page_size, PCI_DMA_FROMDEVICE);
page = page_alloc->page;
atomic_set(&page->_count, 1);
put_page(page);
i, page_count(page_alloc->page));
dma_unmap_page(priv->ddev, page_alloc->dma,
- page_alloc->size, PCI_DMA_FROMDEVICE);
- while (page_alloc->offset + frag_info->frag_stride < page_alloc->size) {
+ page_alloc->page_size, PCI_DMA_FROMDEVICE);
+ while (page_alloc->page_offset + frag_info->frag_stride <
+ page_alloc->page_size) {
put_page(page_alloc->page);
- page_alloc->offset += frag_info->frag_stride;
+ page_alloc->page_offset += frag_info->frag_stride;
}
page_alloc->page = NULL;
}
/* Save page reference in skb */
__skb_frag_set_page(&skb_frags_rx[nr], frags[nr].page);
skb_frag_size_set(&skb_frags_rx[nr], frag_info->frag_size);
- skb_frags_rx[nr].page_offset = frags[nr].offset;
+ skb_frags_rx[nr].page_offset = frags[nr].page_offset;
skb->truesize += frag_info->frag_stride;
frags[nr].page = NULL;
}
/* Get pointer to first fragment so we could copy the headers into the
* (linear part of the) skb */
- va = page_address(frags[0].page) + frags[0].offset;
+ va = page_address(frags[0].page) + frags[0].page_offset;
if (length <= SMALL_PACKET_SIZE) {
/* We are copying all relevant data to the skb - temporarily
dma_sync_single_for_cpu(priv->ddev, dma, sizeof(*ethh),
DMA_FROM_DEVICE);
ethh = (struct ethhdr *)(page_address(frags[0].page) +
- frags[0].offset);
+ frags[0].page_offset);
if (is_multicast_ether_addr(ethh->h_dest)) {
struct mlx4_mac_entry *entry;
struct mlx4_en_rx_alloc {
struct page *page;
dma_addr_t dma;
- u32 offset;
- u32 size;
+ u32 page_offset;
+ u32 page_size;
};
struct mlx4_en_tx_ring {
return 0;
}
-static void calc_block_sig(struct mlx5_cmd_prot_block *block, u8 token)
+static void calc_block_sig(struct mlx5_cmd_prot_block *block, u8 token,
+ int csum)
{
block->token = token;
- block->ctrl_sig = ~xor8_buf(block->rsvd0, sizeof(*block) - sizeof(block->data) - 2);
- block->sig = ~xor8_buf(block, sizeof(*block) - 1);
+ if (csum) {
+ block->ctrl_sig = ~xor8_buf(block->rsvd0, sizeof(*block) -
+ sizeof(block->data) - 2);
+ block->sig = ~xor8_buf(block, sizeof(*block) - 1);
+ }
}
-static void calc_chain_sig(struct mlx5_cmd_msg *msg, u8 token)
+static void calc_chain_sig(struct mlx5_cmd_msg *msg, u8 token, int csum)
{
struct mlx5_cmd_mailbox *next = msg->next;
while (next) {
- calc_block_sig(next->buf, token);
+ calc_block_sig(next->buf, token, csum);
next = next->next;
}
}
-static void set_signature(struct mlx5_cmd_work_ent *ent)
+static void set_signature(struct mlx5_cmd_work_ent *ent, int csum)
{
ent->lay->sig = ~xor8_buf(ent->lay, sizeof(*ent->lay));
- calc_chain_sig(ent->in, ent->token);
- calc_chain_sig(ent->out, ent->token);
+ calc_chain_sig(ent->in, ent->token, csum);
+ calc_chain_sig(ent->out, ent->token, csum);
}
static void poll_timeout(struct mlx5_cmd_work_ent *ent)
lay->type = MLX5_PCI_CMD_XPORT;
lay->token = ent->token;
lay->status_own = CMD_OWNER_HW;
- if (!cmd->checksum_disabled)
- set_signature(ent);
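+	/* the layout signature is always written; block checksums only when enabled */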
+ set_signature(ent, !cmd->checksum_disabled);
dump_command(dev, ent, 1);
ktime_get_ts(&ent->ts1);
copy = min_t(int, size, MLX5_CMD_DATA_BLOCK_SIZE);
block = next->buf;
- if (xor8_buf(block, sizeof(*block)) != 0xff)
- return -EINVAL;
memcpy(to, block->data, copy);
to += copy;
goto err_map;
}
+ cmd->checksum_disabled = 1;
cmd->max_reg_cmds = (1 << cmd->log_sz) - 1;
cmd->bitmask = (1 << cmd->max_reg_cmds) - 1;
case MLX5_CMD_STAT_BAD_SYS_STATE_ERR: return -EIO;
case MLX5_CMD_STAT_BAD_RES_ERR: return -EINVAL;
case MLX5_CMD_STAT_RES_BUSY: return -EBUSY;
- case MLX5_CMD_STAT_LIM_ERR: return -EINVAL;
+ case MLX5_CMD_STAT_LIM_ERR: return -ENOMEM;
case MLX5_CMD_STAT_BAD_RES_STATE_ERR: return -EINVAL;
case MLX5_CMD_STAT_IX_ERR: return -EINVAL;
case MLX5_CMD_STAT_NO_RES_ERR: return -EAGAIN;
goto err_in;
}
+ snprintf(eq->name, MLX5_MAX_EQ_NAME, "%s@pci:%s",
+ name, pci_name(dev->pdev));
eq->eqn = out.eq_number;
err = request_irq(table->msix_arr[vecidx].vector, mlx5_msix_handler, 0,
- name, eq);
+ eq->name, eq);
if (err)
goto err_eq;
struct mlx5_cmd_set_hca_cap_mbox_in *set_ctx = NULL;
struct mlx5_cmd_query_hca_cap_mbox_in query_ctx;
struct mlx5_cmd_set_hca_cap_mbox_out set_out;
- struct mlx5_profile *prof = dev->profile;
u64 flags;
- int csum = 1;
int err;
memset(&query_ctx, 0, sizeof(query_ctx));
memcpy(&set_ctx->hca_cap, &query_out->hca_cap,
sizeof(set_ctx->hca_cap));
- if (prof->mask & MLX5_PROF_MASK_CMDIF_CSUM) {
- csum = !!prof->cmdif_csum;
- flags = be64_to_cpu(set_ctx->hca_cap.flags);
- if (csum)
- flags |= MLX5_DEV_CAP_FLAG_CMDIF_CSUM;
- else
- flags &= ~MLX5_DEV_CAP_FLAG_CMDIF_CSUM;
-
- set_ctx->hca_cap.flags = cpu_to_be64(flags);
- }
-
if (dev->profile->mask & MLX5_PROF_MASK_QP_SIZE)
set_ctx->hca_cap.log_max_qp = dev->profile->log_max_qp;
+ flags = be64_to_cpu(query_out->hca_cap.flags);
+ /* disable checksum */
+ flags &= ~MLX5_DEV_CAP_FLAG_CMDIF_CSUM;
+
+ set_ctx->hca_cap.flags = cpu_to_be64(flags);
memset(&set_out, 0, sizeof(set_out));
set_ctx->hca_cap.log_uar_page_sz = cpu_to_be16(PAGE_SHIFT - 12);
set_ctx->hdr.opcode = cpu_to_be16(MLX5_CMD_OP_SET_HCA_CAP);
if (err)
goto query_ex;
- if (!csum)
- dev->cmd.checksum_disabled = 1;
-
query_ex:
kfree(query_out);
kfree(set_ctx);
__be64 pas[0];
};
+enum {
+ MAX_RECLAIM_TIME_MSECS = 5000,
+};
+
static int insert_page(struct mlx5_core_dev *dev, u64 addr, struct page *page, u16 func_id)
{
struct rb_root *root = &dev->priv.page_root;
int err;
int i;
+ if (nclaimed)
+ *nclaimed = 0;
+
memset(&in, 0, sizeof(in));
outlen = sizeof(*out) + npages * sizeof(out->pas[0]);
out = mlx5_vzalloc(outlen);
int mlx5_reclaim_startup_pages(struct mlx5_core_dev *dev)
{
- unsigned long end = jiffies + msecs_to_jiffies(5000);
+ unsigned long end = jiffies + msecs_to_jiffies(MAX_RECLAIM_TIME_MSECS);
struct fw_page *fwp;
struct rb_node *p;
+ int nclaimed = 0;
int err;
do {
p = rb_first(&dev->priv.page_root);
if (p) {
fwp = rb_entry(p, struct fw_page, rb_node);
- err = reclaim_pages(dev, fwp->func_id, optimal_reclaimed_pages(), NULL);
+ err = reclaim_pages(dev, fwp->func_id,
+ optimal_reclaimed_pages(),
+ &nclaimed);
if (err) {
mlx5_core_warn(dev, "failed reclaiming pages (%d)\n", err);
return err;
}
+ if (nclaimed)
+ end = jiffies + msecs_to_jiffies(MAX_RECLAIM_TIME_MSECS);
}
if (time_after(jiffies, end)) {
mlx5_core_warn(dev, "FW did not return all pages. giving up...\n");
struct ks_net *ks = netdev_priv(netdev);
int err;
-#define KS_INT_FLAGS (IRQF_DISABLED|IRQF_TRIGGER_LOW)
+#define KS_INT_FLAGS IRQF_TRIGGER_LOW
/* lock the card, even if we may not actually do anything
* else at the moment.
*/
irq = irq_of_parse_and_map(node, 0);
if (irq <= 0) {
netdev_err(ndev, "irq_of_parse_and_map failed\n");
- return -EINVAL;
+ ret = -EINVAL;
+ goto irq_map_fail;
}
priv = netdev_priv(ndev);
priv->tx_desc_base = dma_alloc_coherent(NULL, TX_REG_DESC_SIZE *
TX_DESC_NUM, &priv->tx_base,
GFP_DMA | GFP_KERNEL);
- if (priv->tx_desc_base == NULL)
+ if (priv->tx_desc_base == NULL) {
+ ret = -ENOMEM;
goto init_fail;
+ }
priv->rx_desc_base = dma_alloc_coherent(NULL, RX_REG_DESC_SIZE *
RX_DESC_NUM, &priv->rx_base,
GFP_DMA | GFP_KERNEL);
- if (priv->rx_desc_base == NULL)
+ if (priv->rx_desc_base == NULL) {
+ ret = -ENOMEM;
goto init_fail;
+ }
priv->tx_buf_base = kmalloc(priv->tx_buf_size * TX_DESC_NUM,
GFP_ATOMIC);
- if (!priv->tx_buf_base)
+ if (!priv->tx_buf_base) {
+ ret = -ENOMEM;
goto init_fail;
+ }
priv->rx_buf_base = kmalloc(priv->rx_buf_size * RX_DESC_NUM,
GFP_ATOMIC);
- if (!priv->rx_buf_base)
+ if (!priv->rx_buf_base) {
+ ret = -ENOMEM;
goto init_fail;
+ }
platform_set_drvdata(pdev, ndev);
init_fail:
netdev_err(ndev, "init failed\n");
moxart_mac_free_memory(ndev);
-
+irq_map_fail:
+ free_netdev(ndev);
return ret;
}
{ }
};
-struct __initdata platform_driver moxart_mac_driver = {
+static struct platform_driver moxart_mac_driver = {
.probe = moxart_mac_probe,
.remove = moxart_remove,
.driver = {
{
int retval;
- retval = request_irq(dev->irq, sonic_interrupt, IRQF_DISABLED,
- "sonic", dev);
+ retval = request_irq(dev->irq, sonic_interrupt, 0, "sonic", dev);
if (retval) {
printk(KERN_ERR "%s: unable to get IRQ %d.\n",
dev->name, dev->irq);
{
int retval;
- retval = request_irq(dev->irq, sonic_interrupt, IRQF_DISABLED,
- "sonic", dev);
+ retval = request_irq(dev->irq, sonic_interrupt, 0, "sonic", dev);
if (retval) {
printk(KERN_ERR "%s: unable to get IRQ %d.\n",
dev->name, dev->irq);
snprintf(mac->tx_irq_name, sizeof(mac->tx_irq_name), "%s tx",
dev->name);
- ret = request_irq(mac->tx->chan.irq, pasemi_mac_tx_intr, IRQF_DISABLED,
+ ret = request_irq(mac->tx->chan.irq, pasemi_mac_tx_intr, 0,
mac->tx_irq_name, mac->tx);
if (ret) {
dev_err(&mac->pdev->dev, "request_irq of irq %d failed: %d\n",
snprintf(mac->rx_irq_name, sizeof(mac->rx_irq_name), "%s rx",
dev->name);
- ret = request_irq(mac->rx->chan.irq, pasemi_mac_rx_intr, IRQF_DISABLED,
+ ret = request_irq(mac->rx->chan.irq, pasemi_mac_rx_intr, 0,
mac->rx_irq_name, mac->rx);
if (ret) {
dev_err(&mac->pdev->dev, "request_irq of irq %d failed: %d\n",
return err;
}
- if (channel->tx_count) {
+ if (qlcnic_82xx_check(adapter) && channel->tx_count) {
err = qlcnic_validate_max_tx_rings(adapter, channel->tx_count);
if (err)
return err;
.set_msglevel = qlcnic_set_msglevel,
.get_msglevel = qlcnic_get_msglevel,
};
+
+const struct ethtool_ops qlcnic_ethtool_failed_ops = {
+ .get_settings = qlcnic_get_settings,
+ .get_drvinfo = qlcnic_get_drvinfo,
+ .set_msglevel = qlcnic_set_msglevel,
+ .get_msglevel = qlcnic_get_msglevel,
+ .set_dump = qlcnic_set_dump,
+};
while (test_and_set_bit(__QLCNIC_RESETTING, &adapter->state))
usleep_range(10000, 11000);
+ if (!adapter->fw_work.work.func)
+ return;
+
cancel_delayed_work_sync(&adapter->fw_work);
}
err = qlcnic_alloc_adapter_resources(adapter);
if (err)
- goto err_out_free_netdev;
+ goto err_out_free_wq;
adapter->dev_rst_time = jiffies;
adapter->ahw->revision_id = pdev->revision;
adapter->portnum = adapter->ahw->pci_func;
err = qlcnic_start_firmware(adapter);
if (err) {
- dev_err(&pdev->dev, "Loading fw failed.Please Reboot\n");
- goto err_out_free_hw;
+		dev_err(&pdev->dev, "Loading fw failed. Please Reboot\n"
+ "\t\tIf reboot doesn't help, try flashing the card\n");
+ goto err_out_maintenance_mode;
}
qlcnic_get_multiq_capability(adapter);
err_out_free_hw:
qlcnic_free_adapter_resources(adapter);
+err_out_free_wq:
+ destroy_workqueue(adapter->qlcnic_wq);
+
err_out_free_netdev:
free_netdev(netdev);
pci_set_drvdata(pdev, NULL);
pci_disable_device(pdev);
return err;
+
+err_out_maintenance_mode:
+ netdev->netdev_ops = &qlcnic_netdev_failed_ops;
+ SET_ETHTOOL_OPS(netdev, &qlcnic_ethtool_failed_ops);
+ err = register_netdev(netdev);
+
+ if (err) {
+ dev_err(&pdev->dev, "Failed to register net device\n");
+ qlcnic_clr_all_drv_state(adapter, 0);
+ goto err_out_free_hw;
+ }
+
+ pci_set_drvdata(pdev, adapter);
+ qlcnic_add_sysfs(adapter);
+
+ return 0;
}
static void qlcnic_remove(struct pci_dev *pdev)
static int qlcnic_open(struct net_device *netdev)
{
struct qlcnic_adapter *adapter = netdev_priv(netdev);
+ u32 state;
int err;
+ state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
+ if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD) {
+ netdev_err(netdev, "%s: Device is in FAILED state\n", __func__);
+
+ return -EIO;
+ }
+
netif_carrier_off(netdev);
err = qlcnic_attach(adapter);
return;
state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
+ if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD) {
+ netdev_err(adapter->netdev, "%s: Device is in FAILED state\n",
+ __func__);
+ qlcnic_api_unlock(adapter);
+
+ return;
+ }
if (state == QLCNIC_DEV_READY) {
QLC_SHARED_REG_WR32(adapter, QLCNIC_CRB_DEV_STATE,
u8 max_hw = QLCNIC_MAX_TX_RINGS;
u32 max_allowed;
- if (!qlcnic_82xx_check(adapter)) {
- netdev_err(netdev, "No Multi TX-Q support\n");
- return -EINVAL;
- }
-
if (!qlcnic_use_msi_x && !qlcnic_use_msi) {
netdev_err(netdev, "No Multi TX-Q support in INT-x mode\n");
return -EINVAL;
u8 max_hw = adapter->ahw->max_rx_ques;
u32 max_allowed;
- if (qlcnic_82xx_check(adapter) && !qlcnic_use_msi_x &&
- !qlcnic_use_msi) {
+ if (!qlcnic_use_msi_x && !qlcnic_use_msi) {
netdev_err(netdev, "No RSS support in INT-x mode\n");
return -EINVAL;
}
{
int err;
+ adapter->need_fw_reset = 0;
qlcnic_83xx_reinit_mbx_work(adapter->ahw->mailbox);
qlcnic_83xx_enable_mbx_interrupt(adapter);
{
struct net_device *netdev = adapter->netdev;
+ rtnl_lock();
if (netif_running(netdev))
__qlcnic_down(adapter, netdev);
/* After disabling SRIOV re-init the driver in default mode
configure opmode based on op_mode of function
*/
- if (qlcnic_83xx_configure_opmode(adapter))
+ if (qlcnic_83xx_configure_opmode(adapter)) {
+ rtnl_unlock();
return -EIO;
+ }
if (netif_running(netdev))
__qlcnic_up(adapter, netdev);
+ rtnl_unlock();
return 0;
}
return -EIO;
}
+ rtnl_lock();
if (netif_running(netdev))
__qlcnic_down(adapter, netdev);
__qlcnic_up(adapter, netdev);
error:
+ rtnl_unlock();
return err;
}
void qlcnic_create_diag_entries(struct qlcnic_adapter *adapter)
{
struct device *dev = &adapter->pdev->dev;
+ u32 state;
if (device_create_bin_file(dev, &bin_attr_port_stats))
dev_info(dev, "failed to create port stats sysfs entry");
if (device_create_bin_file(dev, &bin_attr_mem))
dev_info(dev, "failed to create mem sysfs entry\n");
+ state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
+ if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD)
+ return;
+
if (device_create_bin_file(dev, &bin_attr_pci_config))
dev_info(dev, "failed to create pci config sysfs entry");
+
if (device_create_file(dev, &dev_attr_beacon))
dev_info(dev, "failed to create beacon sysfs entry");
void qlcnic_remove_diag_entries(struct qlcnic_adapter *adapter)
{
struct device *dev = &adapter->pdev->dev;
+ u32 state;
device_remove_bin_file(dev, &bin_attr_port_stats);
device_remove_file(dev, &dev_attr_diag_mode);
device_remove_bin_file(dev, &bin_attr_crb);
device_remove_bin_file(dev, &bin_attr_mem);
+
+ state = QLC_SHARED_REG_RD32(adapter, QLCNIC_CRB_DEV_STATE);
+ if (state == QLCNIC_DEV_FAILED || state == QLCNIC_DEV_BADBAD)
+ return;
+
device_remove_bin_file(dev, &bin_attr_pci_config);
device_remove_file(dev, &dev_attr_beacon);
if (!(adapter->flags & QLCNIC_ESWITCH_ENABLED))
int i;
if (!mpi_coredump) {
- netif_err(qdev, drv, qdev->ndev, "No memory available\n");
- return -ENOMEM;
+ netif_err(qdev, drv, qdev->ndev, "No memory allocated\n");
+ return -EINVAL;
}
	/* Try to get the spinlock, but don't worry if
return;
}
- if (!ql_core_dump(qdev, qdev->mpi_coredump)) {
+ if (qdev->mpi_coredump && !ql_core_dump(qdev, qdev->mpi_coredump)) {
netif_err(qdev, drv, qdev->ndev, "Core is dumped!\n");
qdev->core_is_dumped = 1;
queue_delayed_work(qdev->workqueue,
case RTL_GIGA_MAC_VER_23:
case RTL_GIGA_MAC_VER_24:
case RTL_GIGA_MAC_VER_34:
+ case RTL_GIGA_MAC_VER_35:
RTL_W32(RxConfig, RX128_INT_EN | RX_MULTI_EN | RX_DMA_BURST);
break;
case RTL_GIGA_MAC_VER_40:
.eesr_err_check = EESR_TWB1 | EESR_TWB | EESR_TABT | EESR_RABT |
EESR_RFE | EESR_RDE | EESR_RFRMER | EESR_TFE |
EESR_TDE | EESR_ECI,
+ .fdr_value = 0x0000070f,
+ .rmcr_value = 0x00000001,
.apr = 1,
.mpr = 1,
.tpauser = 1,
.bculr = 1,
.hw_swap = 1,
+ .rpadir = 1,
+ .rpadir_value = 2 << 16,
.no_trimd = 1,
.no_ade = 1,
.tsu = 1,
select I2C_ALGOBIT
select PTP_1588_CLOCK
---help---
- This driver supports 10-gigabit Ethernet cards based on
+ This driver supports 10/40-gigabit Ethernet cards based on
the Solarflare SFC4000, SFC9000-family and SFC9100-family
controllers.
return resource_size(&efx->pci_dev->resource[EFX_MEM_BAR]);
}
-static int efx_ef10_init_capabilities(struct efx_nic *efx)
+static int efx_ef10_init_datapath_caps(struct efx_nic *efx)
{
MCDI_DECLARE_BUF(outbuf, MC_CMD_GET_CAPABILITIES_OUT_LEN);
struct efx_ef10_nic_data *nic_data = efx->nic_data;
outbuf, sizeof(outbuf), &outlen);
if (rc)
return rc;
+ if (outlen < sizeof(outbuf)) {
+ netif_err(efx, drv, efx->net_dev,
+ "unable to read datapath firmware capabilities\n");
+ return -EIO;
+ }
- if (outlen >= sizeof(outbuf)) {
- nic_data->datapath_caps =
- MCDI_DWORD(outbuf, GET_CAPABILITIES_OUT_FLAGS1);
- if (!(nic_data->datapath_caps &
- (1 << MC_CMD_GET_CAPABILITIES_OUT_TX_TSO_LBN))) {
- netif_err(efx, drv, efx->net_dev,
- "Capabilities don't indicate TSO support.\n");
- return -ENODEV;
- }
+ nic_data->datapath_caps =
+ MCDI_DWORD(outbuf, GET_CAPABILITIES_OUT_FLAGS1);
+
+ if (!(nic_data->datapath_caps &
+ (1 << MC_CMD_GET_CAPABILITIES_OUT_TX_TSO_LBN))) {
+ netif_err(efx, drv, efx->net_dev,
+ "current firmware does not support TSO\n");
+ return -ENODEV;
+ }
+
+ if (!(nic_data->datapath_caps &
+ (1 << MC_CMD_GET_CAPABILITIES_OUT_RX_PREFIX_LEN_14_LBN))) {
+ netif_err(efx, probe, efx->net_dev,
+ "current firmware does not support an RX prefix\n");
+ return -ENODEV;
}
return 0;
if (rc)
goto fail3;
- rc = efx_ef10_init_capabilities(efx);
+ rc = efx_ef10_init_datapath_caps(efx);
if (rc < 0)
goto fail3;
efx->rx_packet_len_offset =
ES_DZ_RX_PREFIX_PKTLEN_OFST - ES_DZ_RX_PREFIX_SIZE;
- if (!(nic_data->datapath_caps &
- (1 << MC_CMD_GET_CAPABILITIES_OUT_RX_PREFIX_LEN_14_LBN))) {
- netif_err(efx, probe, efx->net_dev,
- "current firmware does not support an RX prefix\n");
- rc = -ENODEV;
- goto fail3;
- }
-
rc = efx_mcdi_port_get_number(efx);
if (rc < 0)
goto fail3;
if (rc)
goto fail3;
- efx_ptp_probe(efx);
-
return 0;
fail3:
struct efx_ef10_nic_data *nic_data = efx->nic_data;
int rc;
+ if (nic_data->must_check_datapath_caps) {
+ rc = efx_ef10_init_datapath_caps(efx);
+ if (rc)
+ return rc;
+ nic_data->must_check_datapath_caps = false;
+ }
+
if (nic_data->must_realloc_vis) {
/* We cannot let the number of VIs change now */
rc = efx_ef10_alloc_vis(efx, nic_data->n_allocated_vis,
EF10_DMA_STAT(rx_align_error, RX_ALIGN_ERROR_PKTS),
EF10_DMA_STAT(rx_length_error, RX_LENGTH_ERROR_PKTS),
EF10_DMA_STAT(rx_nodesc_drops, RX_NODESC_DROPS),
+ EF10_DMA_STAT(rx_pm_trunc_bb_overflow, PM_TRUNC_BB_OVERFLOW),
+ EF10_DMA_STAT(rx_pm_discard_bb_overflow, PM_DISCARD_BB_OVERFLOW),
+ EF10_DMA_STAT(rx_pm_trunc_vfifo_full, PM_TRUNC_VFIFO_FULL),
+ EF10_DMA_STAT(rx_pm_discard_vfifo_full, PM_DISCARD_VFIFO_FULL),
+ EF10_DMA_STAT(rx_pm_trunc_qbb, PM_TRUNC_QBB),
+ EF10_DMA_STAT(rx_pm_discard_qbb, PM_DISCARD_QBB),
+ EF10_DMA_STAT(rx_pm_discard_mapping, PM_DISCARD_MAPPING),
+ EF10_DMA_STAT(rx_dp_q_disabled_packets, RXDP_Q_DISABLED_PKTS),
+ EF10_DMA_STAT(rx_dp_di_dropped_packets, RXDP_DI_DROPPED_PKTS),
+ EF10_DMA_STAT(rx_dp_streaming_packets, RXDP_STREAMING_PKTS),
+ EF10_DMA_STAT(rx_dp_emerg_fetch, RXDP_EMERGENCY_FETCH_CONDITIONS),
+ EF10_DMA_STAT(rx_dp_emerg_wait, RXDP_EMERGENCY_WAIT_CONDITIONS),
};
#define HUNT_COMMON_STAT_MASK ((1ULL << EF10_STAT_tx_bytes) | \
#define HUNT_40G_EXTRA_STAT_MASK ((1ULL << EF10_STAT_rx_align_error) | \
(1ULL << EF10_STAT_rx_length_error))
-#if BITS_PER_LONG == 64
-#define STAT_MASK_BITMAP(bits) (bits)
-#else
-#define STAT_MASK_BITMAP(bits) (bits) & 0xffffffff, (bits) >> 32
-#endif
-
-static const unsigned long *efx_ef10_stat_mask(struct efx_nic *efx)
-{
- static const unsigned long hunt_40g_stat_mask[] = {
- STAT_MASK_BITMAP(HUNT_COMMON_STAT_MASK |
- HUNT_40G_EXTRA_STAT_MASK)
- };
- static const unsigned long hunt_10g_only_stat_mask[] = {
- STAT_MASK_BITMAP(HUNT_COMMON_STAT_MASK |
- HUNT_10G_ONLY_STAT_MASK)
- };
+/* These statistics are only provided if the firmware supports the
+ * capability PM_AND_RXDP_COUNTERS.
+ */
+#define HUNT_PM_AND_RXDP_STAT_MASK ( \
+ (1ULL << EF10_STAT_rx_pm_trunc_bb_overflow) | \
+ (1ULL << EF10_STAT_rx_pm_discard_bb_overflow) | \
+ (1ULL << EF10_STAT_rx_pm_trunc_vfifo_full) | \
+ (1ULL << EF10_STAT_rx_pm_discard_vfifo_full) | \
+ (1ULL << EF10_STAT_rx_pm_trunc_qbb) | \
+ (1ULL << EF10_STAT_rx_pm_discard_qbb) | \
+ (1ULL << EF10_STAT_rx_pm_discard_mapping) | \
+ (1ULL << EF10_STAT_rx_dp_q_disabled_packets) | \
+ (1ULL << EF10_STAT_rx_dp_di_dropped_packets) | \
+ (1ULL << EF10_STAT_rx_dp_streaming_packets) | \
+ (1ULL << EF10_STAT_rx_dp_emerg_fetch) | \
+ (1ULL << EF10_STAT_rx_dp_emerg_wait))
+
+static u64 efx_ef10_raw_stat_mask(struct efx_nic *efx)
+{
+ u64 raw_mask = HUNT_COMMON_STAT_MASK;
u32 port_caps = efx_mcdi_phy_get_caps(efx);
+ struct efx_ef10_nic_data *nic_data = efx->nic_data;
if (port_caps & (1 << MC_CMD_PHY_CAP_40000FDX_LBN))
- return hunt_40g_stat_mask;
+ raw_mask |= HUNT_40G_EXTRA_STAT_MASK;
else
- return hunt_10g_only_stat_mask;
+ raw_mask |= HUNT_10G_ONLY_STAT_MASK;
+
+ if (nic_data->datapath_caps &
+ (1 << MC_CMD_GET_CAPABILITIES_OUT_PM_AND_RXDP_COUNTERS_LBN))
+ raw_mask |= HUNT_PM_AND_RXDP_STAT_MASK;
+
+ return raw_mask;
+}
+
+static void efx_ef10_get_stat_mask(struct efx_nic *efx, unsigned long *mask)
+{
+ u64 raw_mask = efx_ef10_raw_stat_mask(efx);
+
+#if BITS_PER_LONG == 64
+ mask[0] = raw_mask;
+#else
+ mask[0] = raw_mask & 0xffffffff;
+ mask[1] = raw_mask >> 32;
+#endif
}
static size_t efx_ef10_describe_stats(struct efx_nic *efx, u8 *names)
{
+ DECLARE_BITMAP(mask, EF10_STAT_COUNT);
+
+ efx_ef10_get_stat_mask(efx, mask);
return efx_nic_describe_stats(efx_ef10_stat_desc, EF10_STAT_COUNT,
- efx_ef10_stat_mask(efx), names);
+ mask, names);
}
static int efx_ef10_try_update_nic_stats(struct efx_nic *efx)
{
struct efx_ef10_nic_data *nic_data = efx->nic_data;
- const unsigned long *stats_mask = efx_ef10_stat_mask(efx);
+ DECLARE_BITMAP(mask, EF10_STAT_COUNT);
__le64 generation_start, generation_end;
u64 *stats = nic_data->stats;
__le64 *dma_stats;
+ efx_ef10_get_stat_mask(efx, mask);
+
dma_stats = efx->stats_buffer.addr;
nic_data = efx->nic_data;
if (generation_end == EFX_MC_STATS_GENERATION_INVALID)
return 0;
rmb();
- efx_nic_update_stats(efx_ef10_stat_desc, EF10_STAT_COUNT, stats_mask,
+ efx_nic_update_stats(efx_ef10_stat_desc, EF10_STAT_COUNT, mask,
stats, efx->stats_buffer.addr, false);
+ rmb();
generation_start = dma_stats[MC_CMD_MAC_GENERATION_START];
if (generation_end != generation_start)
return -EAGAIN;
static size_t efx_ef10_update_stats(struct efx_nic *efx, u64 *full_stats,
struct rtnl_link_stats64 *core_stats)
{
- const unsigned long *mask = efx_ef10_stat_mask(efx);
+ DECLARE_BITMAP(mask, EF10_STAT_COUNT);
struct efx_ef10_nic_data *nic_data = efx->nic_data;
u64 *stats = nic_data->stats;
size_t stats_count = 0, index;
int retry;
+ efx_ef10_get_stat_mask(efx, mask);
+
/* If we're unlucky enough to read statistics during the DMA, wait
* up to 10ms for it to finish (typically takes <500us)
*/
nic_data->must_restore_filters = true;
nic_data->rx_rss_context = EFX_EF10_RSS_CONTEXT_INVALID;
+ /* The datapath firmware might have been changed */
+ nic_data->must_check_datapath_caps = true;
+
+ /* MAC statistics have been cleared on the NIC; clear the local
+ * statistic that we update with efx_update_diff_stat().
+ */
+ nic_data->stats[EF10_STAT_rx_bad_bytes] = 0;
+
return -EIO;
}
/* A reboot/assertion causes the MCDI status word to be set after the
* command word is set or a REBOOT event is sent. If we notice a reboot
- * via these mechanisms then wait 20ms for the status word to be set.
+ * via these mechanisms then wait 250ms for the status word to be set.
*/
#define MCDI_STATUS_DELAY_US 100
-#define MCDI_STATUS_DELAY_COUNT 200
+#define MCDI_STATUS_DELAY_COUNT 2500
#define MCDI_STATUS_SLEEP_MS \
(MCDI_STATUS_DELAY_US * MCDI_STATUS_DELAY_COUNT / 1000)
} else {
int count;
- /* Nobody was waiting for an MCDI request, so trigger a reset */
- efx_schedule_reset(efx, RESET_TYPE_MC_FAILURE);
-
/* Consume the status word since efx_mcdi_rpc_finish() won't */
for (count = 0; count < MCDI_STATUS_DELAY_COUNT; ++count) {
if (efx_mcdi_poll_reboot(efx))
udelay(MCDI_STATUS_DELAY_US);
}
mcdi->new_epoch = true;
+
+ /* Nobody was waiting for an MCDI request, so trigger a reset */
+ efx_schedule_reset(efx, RESET_TYPE_MC_FAILURE);
}
spin_unlock(&mcdi->iface_lock);
bool *was_attached)
{
MCDI_DECLARE_BUF(inbuf, MC_CMD_DRV_ATTACH_IN_LEN);
- MCDI_DECLARE_BUF(outbuf, MC_CMD_DRV_ATTACH_OUT_LEN);
+ MCDI_DECLARE_BUF(outbuf, MC_CMD_DRV_ATTACH_EXT_OUT_LEN);
size_t outlen;
int rc;
goto fail;
}
+ /* We currently assume we have control of the external link
+ * and are completely trusted by firmware. Abort probing
+ * if that's not true for this function.
+ */
+ if (driver_operating &&
+ outlen >= MC_CMD_DRV_ATTACH_EXT_OUT_LEN &&
+ (MCDI_DWORD(outbuf, DRV_ATTACH_EXT_OUT_FUNC_FLAGS) &
+ (1 << MC_CMD_DRV_ATTACH_EXT_OUT_FLAG_LINKCTRL |
+ 1 << MC_CMD_DRV_ATTACH_EXT_OUT_FLAG_TRUSTED)) !=
+ (1 << MC_CMD_DRV_ATTACH_EXT_OUT_FLAG_LINKCTRL |
+ 1 << MC_CMD_DRV_ATTACH_EXT_OUT_FLAG_TRUSTED)) {
+ netif_err(efx, probe, efx->net_dev,
+ "This driver version only supports one function per port\n");
+ return -ENODEV;
+ }
+
if (was_attached != NULL)
*was_attached = MCDI_DWORD(outbuf, DRV_ATTACH_OUT_OLD_STATE);
return 0;
#define MC_CMD_MAC_RX_LANES01_DISP_ERR 0x39 /* enum */
#define MC_CMD_MAC_RX_LANES23_DISP_ERR 0x3a /* enum */
#define MC_CMD_MAC_RX_MATCH_FAULT 0x3b /* enum */
-#define MC_CMD_GMAC_DMABUF_START 0x40 /* enum */
-#define MC_CMD_GMAC_DMABUF_END 0x5f /* enum */
+/* enum: PM trunc_bb_overflow counter. Valid for EF10 with PM_AND_RXDP_COUNTERS
+ * capability only.
+ */
+#define MC_CMD_MAC_PM_TRUNC_BB_OVERFLOW 0x3c
+/* enum: PM discard_bb_overflow counter. Valid for EF10 with
+ * PM_AND_RXDP_COUNTERS capability only.
+ */
+#define MC_CMD_MAC_PM_DISCARD_BB_OVERFLOW 0x3d
+/* enum: PM trunc_vfifo_full counter. Valid for EF10 with PM_AND_RXDP_COUNTERS
+ * capability only.
+ */
+#define MC_CMD_MAC_PM_TRUNC_VFIFO_FULL 0x3e
+/* enum: PM discard_vfifo_full counter. Valid for EF10 with
+ * PM_AND_RXDP_COUNTERS capability only.
+ */
+#define MC_CMD_MAC_PM_DISCARD_VFIFO_FULL 0x3f
+/* enum: PM trunc_qbb counter. Valid for EF10 with PM_AND_RXDP_COUNTERS
+ * capability only.
+ */
+#define MC_CMD_MAC_PM_TRUNC_QBB 0x40
+/* enum: PM discard_qbb counter. Valid for EF10 with PM_AND_RXDP_COUNTERS
+ * capability only.
+ */
+#define MC_CMD_MAC_PM_DISCARD_QBB 0x41
+/* enum: PM discard_mapping counter. Valid for EF10 with PM_AND_RXDP_COUNTERS
+ * capability only.
+ */
+#define MC_CMD_MAC_PM_DISCARD_MAPPING 0x42
+/* enum: RXDP counter: Number of packets dropped due to the queue being
+ * disabled. Valid for EF10 with PM_AND_RXDP_COUNTERS capability only.
+ */
+#define MC_CMD_MAC_RXDP_Q_DISABLED_PKTS 0x43
+/* enum: RXDP counter: Number of packets dropped by the DICPU. Valid for EF10
+ * with PM_AND_RXDP_COUNTERS capability only.
+ */
+#define MC_CMD_MAC_RXDP_DI_DROPPED_PKTS 0x45
+/* enum: RXDP counter: Number of non-host packets. Valid for EF10 with
+ * PM_AND_RXDP_COUNTERS capability only.
+ */
+#define MC_CMD_MAC_RXDP_STREAMING_PKTS 0x46
+/* enum: RXDP counter: Number of times an emergency descriptor fetch was
+ * performed. Valid for EF10 with PM_AND_RXDP_COUNTERS capability only.
+ */
+#define MC_CMD_MAC_RXDP_EMERGENCY_FETCH_CONDITIONS 0x47
+/* enum: RXDP counter: Number of times the DPCPU waited for an existing
+ * descriptor fetch. Valid for EF10 with PM_AND_RXDP_COUNTERS capability only.
+ */
+#define MC_CMD_MAC_RXDP_EMERGENCY_WAIT_CONDITIONS 0x48
+/* enum: Start of GMAC stats buffer space, for Siena only. */
+#define MC_CMD_GMAC_DMABUF_START 0x40
+/* enum: End of GMAC stats buffer space, for Siena only. */
+#define MC_CMD_GMAC_DMABUF_END 0x5f
#define MC_CMD_MAC_GENERATION_END 0x60 /* enum */
#define MC_CMD_MAC_NSTATS 0x61 /* enum */
#define MC_CMD_GET_CAPABILITIES_OUT_RX_BATCHING_WIDTH 1
#define MC_CMD_GET_CAPABILITIES_OUT_MCAST_FILTER_CHAINING_LBN 26
#define MC_CMD_GET_CAPABILITIES_OUT_MCAST_FILTER_CHAINING_WIDTH 1
+#define MC_CMD_GET_CAPABILITIES_OUT_PM_AND_RXDP_COUNTERS_LBN 27
+#define MC_CMD_GET_CAPABILITIES_OUT_PM_AND_RXDP_COUNTERS_WIDTH 1
/* RxDPCPU firmware id. */
#define MC_CMD_GET_CAPABILITIES_OUT_RX_DPCPU_FW_ID_OFST 4
#define MC_CMD_GET_CAPABILITIES_OUT_RX_DPCPU_FW_ID_LEN 2
case 100: caps = 1 << MC_CMD_PHY_CAP_100FDX_LBN; break;
case 1000: caps = 1 << MC_CMD_PHY_CAP_1000FDX_LBN; break;
case 10000: caps = 1 << MC_CMD_PHY_CAP_10000FDX_LBN; break;
+ case 40000: caps = 1 << MC_CMD_PHY_CAP_40000FDX_LBN; break;
default: return -EINVAL;
}
} else {
[MCDI_EVENT_LINKCHANGE_SPEED_100M] = 100,
[MCDI_EVENT_LINKCHANGE_SPEED_1G] = 1000,
[MCDI_EVENT_LINKCHANGE_SPEED_10G] = 10000,
+ [MCDI_EVENT_LINKCHANGE_SPEED_40G] = 40000,
};
void efx_mcdi_process_link_change(struct efx_nic *efx, efx_qword_t *ev)
* @count: Length of the @desc array
* @mask: Bitmask of which elements of @desc are enabled
* @stats: Buffer to update with the converted statistics. The length
- * of this array must be at least the number of set bits in the
- * first @count bits of @mask.
+ * of this array must be at least @count.
* @dma_buf: DMA buffer containing hardware statistics
* @accumulate: If set, the converted values will be added rather than
* directly stored to the corresponding elements of @stats
}
if (accumulate)
- *stats += val;
+ stats[index] += val;
else
- *stats = val;
+ stats[index] = val;
}
-
- ++stats;
}
}
EF10_STAT_rx_align_error,
EF10_STAT_rx_length_error,
EF10_STAT_rx_nodesc_drops,
+ EF10_STAT_rx_pm_trunc_bb_overflow,
+ EF10_STAT_rx_pm_discard_bb_overflow,
+ EF10_STAT_rx_pm_trunc_vfifo_full,
+ EF10_STAT_rx_pm_discard_vfifo_full,
+ EF10_STAT_rx_pm_trunc_qbb,
+ EF10_STAT_rx_pm_discard_qbb,
+ EF10_STAT_rx_pm_discard_mapping,
+ EF10_STAT_rx_dp_q_disabled_packets,
+ EF10_STAT_rx_dp_di_dropped_packets,
+ EF10_STAT_rx_dp_streaming_packets,
+ EF10_STAT_rx_dp_emerg_fetch,
+ EF10_STAT_rx_dp_emerg_wait,
EF10_STAT_COUNT
};
* @rx_rss_context: Firmware handle for our RSS context
* @stats: Hardware statistics
* @workaround_35388: Flag: firmware supports workaround for bug 35388
+ * @must_check_datapath_caps: Flag: @datapath_caps needs to be revalidated
+ * after MC reboot
* @datapath_caps: Capabilities of datapath firmware (FLAGS1 field of
* %MC_CMD_GET_CAPABILITIES response)
*/
u32 rx_rss_context;
u64 stats[EF10_STAT_COUNT];
bool workaround_35388;
+ bool must_check_datapath_caps;
u32 datapath_caps;
};
#define SMC_insw(a, r, p, l) mcf_insw(a + r, p, l)
#define SMC_outsw(a, r, p, l) mcf_outsw(a + r, p, l)
-#define SMC_IRQ_FLAGS (IRQF_DISABLED)
+#define SMC_IRQ_FLAGS 0
#else
void __iomem *__ioaddr = ioaddr; \
if (__len >= 2 && (unsigned long)__ptr & 2) { \
__len -= 2; \
- SMC_outw(*(u16 *)__ptr, ioaddr, \
- DATA_REG(lp)); \
+ SMC_outsw(ioaddr, DATA_REG(lp), __ptr, 1); \
__ptr += 2; \
} \
if (SMC_CAN_USE_DATACS && lp->datacs) \
SMC_outsl(__ioaddr, DATA_REG(lp), __ptr, __len>>2); \
if (__len & 2) { \
__ptr += (__len & ~3); \
- SMC_outw(*((u16 *)__ptr), ioaddr, \
- DATA_REG(lp)); \
+ SMC_outsw(ioaddr, DATA_REG(lp), __ptr, 1); \
} \
} else if (SMC_16BIT(lp)) \
SMC_outsw(ioaddr, DATA_REG(lp), p, (l) >> 1); \
smsc9420_reg_write(pd, INT_STAT, 0xFFFFFFFF);
smsc9420_pci_flush_write(pd);
- result = request_irq(irq, smsc9420_isr, IRQF_SHARED | IRQF_DISABLED,
- DRV_NAME, pd);
+ result = request_irq(irq, smsc9420_isr, IRQF_SHARED, DRV_NAME, pd);
if (result) {
smsc_warn(IFUP, "Unable to use IRQ = %d", irq);
result = -ENODEV;
static irqreturn_t cpsw_interrupt(int irq, void *dev_id)
{
struct cpsw_priv *priv = dev_id;
- u32 rx, tx, rx_thresh;
-
- rx_thresh = __raw_readl(&priv->wr_regs->rx_thresh_stat);
- rx = __raw_readl(&priv->wr_regs->rx_stat);
- tx = __raw_readl(&priv->wr_regs->tx_stat);
- if (!rx_thresh && !rx && !tx)
- return IRQ_NONE;
cpsw_intr_disable(priv);
if (priv->irq_enabled == true) {
}
}
+ napi_enable(&priv->napi);
cpdma_ctlr_start(priv->dma);
cpsw_intr_enable(priv);
- napi_enable(&priv->napi);
cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_RX);
cpdma_ctlr_eoi(priv->dma, CPDMA_EOI_TX);
}
data->mac_control = prop;
- if (!of_property_read_u32(node, "dual_emac", &prop))
- data->dual_emac = prop;
+ if (of_property_read_bool(node, "dual_emac"))
+ data->dual_emac = 1;
/*
* Populate all the child nodes here...
if (ret)
pr_warn("Doesn't have any child node\n");
- for_each_node_by_name(slave_node, "slave") {
+ for_each_child_of_node(node, slave_node) {
struct cpsw_slave_data *slave_data = data->slave_data + i;
const void *mac_addr = NULL;
u32 phyid;
struct device_node *mdio_node;
struct platform_device *mdio;
+		/* This is not a slave child node, continue */
+ if (strcmp(slave_node->name, "slave"))
+ continue;
+
parp = of_get_property(slave_node, "phy_id", &lenp);
if ((parp == NULL) || (lenp != (sizeof(void *) * 2))) {
pr_err("Missing slave[%d] phy_id property\n", i);
netdev_mc_count(ndev) > EMAC_DEF_MAX_MULTICAST_ADDRESSES) {
mbp_enable = (mbp_enable | EMAC_MBP_RXMCAST);
emac_add_mcast(priv, EMAC_ALL_MULTI_SET, NULL);
- }
- if (!netdev_mc_empty(ndev)) {
+ } else if (!netdev_mc_empty(ndev)) {
struct netdev_hw_addr *ha;
mbp_enable = (mbp_enable | EMAC_MBP_RXMCAST);
goto fail_alloc_irq;
}
result = request_irq(card->irq, gelic_card_interrupt,
- IRQF_DISABLED, netdev->name, card);
+ 0, netdev->name, card);
if (result) {
dev_info(ctodev(card), "%s:request_irq failed (%d)\n",
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#define DRV_NAME "via-rhine"
-#define DRV_VERSION "1.5.0"
+#define DRV_VERSION "1.5.1"
#define DRV_RELDATE "2010-10-09"
#include <linux/types.h>
cpu_to_le32(TXDESC | (skb->len >= ETH_ZLEN ? skb->len : ETH_ZLEN));
if (unlikely(vlan_tx_tag_present(skb))) {
- rp->tx_ring[entry].tx_status = cpu_to_le32((vlan_tx_tag_get(skb)) << 16);
+ u16 vid_pcp = vlan_tx_tag_get(skb);
+
+ /* drop CFI/DEI bit, register needs VID and PCP */
+ vid_pcp = (vid_pcp & VLAN_VID_MASK) |
+ ((vid_pcp & VLAN_PRIO_MASK) >> 1);
+ rp->tx_ring[entry].tx_status = cpu_to_le32((vid_pcp) << 16);
/* request tagging */
rp->tx_ring[entry].desc_length |= cpu_to_le32(0x020000);
}
lp->rx_bd_p + (sizeof(*lp->rx_bd_v) * (RX_BD_NUM - 1)));
lp->dma_out(lp, TX_CURDESC_PTR, lp->tx_bd_p);
+ /* Init descriptor indexes */
+ lp->tx_bd_ci = 0;
+ lp->tx_bd_next = 0;
+ lp->tx_bd_tail = 0;
+ lp->rx_bd_ci = 0;
+
return 0;
out:
return -EINVAL; /* Cannot change this parameter when up */
if ((ym = kmalloc(sizeof(struct yamdrv_ioctl_mcs), GFP_KERNEL)) == NULL)
return -ENOBUFS;
- ym->bitrate = 9600;
if (copy_from_user(ym, ifr->ifr_data, sizeof(struct yamdrv_ioctl_mcs))) {
kfree(ym);
return -EFAULT;
struct mutex buffer_mutex; /* only used to protect buf */
struct completion tx_complete;
- struct work_struct irqwork;
u8 *buf; /* 3 bytes. Used for SPI single-register transfers. */
};
if (ret)
goto err;
+ INIT_COMPLETION(devrec->tx_complete);
+
/* Set TXNTRIG bit of TXNCON to send packet */
ret = read_short_reg(devrec, REG_TXNCON, &val);
if (ret)
val |= 0x4;
write_short_reg(devrec, REG_TXNCON, val);
- INIT_COMPLETION(devrec->tx_complete);
-
/* Wait for the device to send the TX complete interrupt. */
ret = wait_for_completion_interruptible_timeout(
&devrec->tx_complete,
static irqreturn_t mrf24j40_isr(int irq, void *data)
{
struct mrf24j40 *devrec = data;
-
- disable_irq_nosync(irq);
-
- schedule_work(&devrec->irqwork);
-
- return IRQ_HANDLED;
-}
-
-static void mrf24j40_isrwork(struct work_struct *work)
-{
- struct mrf24j40 *devrec = container_of(work, struct mrf24j40, irqwork);
u8 intstat;
int ret;
mrf24j40_handle_rx(devrec);
out:
- enable_irq(devrec->spi->irq);
+ return IRQ_HANDLED;
}
static int mrf24j40_probe(struct spi_device *spi)
mutex_init(&devrec->buffer_mutex);
init_completion(&devrec->tx_complete);
- INIT_WORK(&devrec->irqwork, mrf24j40_isrwork);
devrec->spi = spi;
spi_set_drvdata(spi, devrec);
val &= ~0x3; /* Clear RX mode (normal) */
write_short_reg(devrec, REG_RXMCR, val);
- ret = request_irq(spi->irq,
- mrf24j40_isr,
- IRQF_TRIGGER_FALLING,
- dev_name(&spi->dev),
- devrec);
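+	/* the handler performs SPI transfers that can sleep, so use a oneshot threaded IRQ */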
+ ret = request_threaded_irq(spi->irq,
+ NULL,
+ mrf24j40_isr,
+ IRQF_TRIGGER_LOW|IRQF_ONESHOT,
+ dev_name(&spi->dev),
+ devrec);
if (ret) {
dev_err(printdev(devrec), "Unable to get IRQ");
dev_dbg(printdev(devrec), "remove\n");
free_irq(spi->irq, devrec);
- flush_work(&devrec->irqwork); /* TODO: Is this the right call? */
ieee802154_unregister_device(devrec->dev);
ieee802154_free_device(devrec->dev);
/* TODO: Will ieee802154_free_device() wait until ->xmit() is
goto error;
ret = 0;
- error:
- return ret;
+error:
+ return ret;
}
/* Setup a communication between mcs7780 and agilent chip. */
return 0;
mcs->rx_urb = usb_alloc_urb(0, GFP_KERNEL);
- if (!mcs->rx_urb)
+ if (!mcs->rx_urb) {
+ usb_free_urb(mcs->tx_urb);
+ mcs->tx_urb = NULL;
return 0;
+ }
return 1;
}
ret = mcs_set_reg(mcs, MCS_MODE_REG, rval);
mcs->speed = mcs->new_speed;
- error:
- mcs->new_speed = 0;
- return ret;
+error:
+ mcs->new_speed = 0;
+ return ret;
}
/* Ioctl calls not supported at this time. Can be an area of future work. */
ret = mcs_receive_start(mcs);
if (ret)
- goto error3;
+ goto error4;
netif_start_queue(netdev);
return 0;
- error3:
- irlap_close(mcs->irlap);
- error2:
- kfree_skb(mcs->rx_buff.skb);
- error1:
- return ret;
+error4:
+ usb_free_urb(mcs->rx_urb);
+ usb_free_urb(mcs->tx_urb);
+error3:
+ irlap_close(mcs->irlap);
+error2:
+ kfree_skb(mcs->rx_buff.skb);
+error1:
+ return ret;
}
/* Receive callback function. */
usb_set_intfdata(intf, mcs);
return 0;
- error2:
- free_netdev(ndev);
+error2:
+ free_netdev(ndev);
- error1:
- return ret;
+error1:
+ return ret;
}
/* The current device is removed, the USB layer tells us to shut down. */
static void loopback_dev_free(struct net_device *dev)
{
+ dev_net(dev)->loopback_dev = NULL;
free_percpu(dev->lstats);
free_netdev(dev);
}
case NETDEV_RELEASE:
case NETDEV_JOIN:
case NETDEV_UNREGISTER:
- /*
- * rtnl_lock already held
+ /* rtnl_lock already held
* we might sleep in __netpoll_cleanup()
*/
spin_unlock_irqrestore(&target_list_lock, flags);
- mutex_lock(&nt->mutex);
__netpoll_cleanup(&nt->np);
- mutex_unlock(&nt->mutex);
spin_lock_irqsave(&target_list_lock, flags);
dev_put(nt->np.dev);
#include <linux/ethtool.h>
#include <linux/phy.h>
-#include <asm/io.h>
+#include <linux/io.h>
#include <asm/irq.h>
-#include <asm/uaccess.h>
+#include <linux/uaccess.h>
/* Cicada Extended Control Register 1 */
#define MII_CIS8201_EXT_CON1 0x17
nf_reset(skb);
skb->ip_summed = CHECKSUM_NONE;
- ip_select_ident(iph, &rt->dst, NULL);
+ ip_select_ident(skb, &rt->dst, NULL);
ip_send_check(iph);
ip_local_out(skb);
if (!sl || sl->magic != SLIP_MAGIC || !netif_running(sl->dev))
return;
+ spin_lock(&sl->lock);
if (sl->xleft <= 0) {
/* Now serial buffer is almost free & we can start
* transmission of another packet */
sl->dev->stats.tx_packets++;
clear_bit(TTY_DO_WRITE_WAKEUP, &tty->flags);
+ spin_unlock(&sl->lock);
sl_unlock(sl);
return;
}
actual = tty->ops->write(tty, sl->xhead, sl->xleft);
sl->xleft -= actual;
sl->xhead += actual;
+ spin_unlock(&sl->lock);
}
static void sl_tx_timeout(struct net_device *dev)
if (unlikely(!noblock))
add_wait_queue(&tfile->wq.wait, &wait);
while (len) {
- current->state = TASK_INTERRUPTIBLE;
+ if (unlikely(!noblock))
+ current->state = TASK_INTERRUPTIBLE;
/* Read frames from the queue */
if (!(skb = skb_dequeue(&tfile->socket.sk->sk_receive_queue))) {
break;
}
- current->state = TASK_RUNNING;
- if (unlikely(!noblock))
+ if (unlikely(!noblock)) {
+ current->state = TASK_RUNNING;
remove_wait_queue(&tfile->wq.wait, &wait);
+ }
return ret;
}
INIT_LIST_HEAD(&tun->disabled);
err = tun_attach(tun, file, false);
if (err < 0)
- goto err_free_dev;
+ goto err_free_flow;
err = register_netdevice(tun->dev);
if (err < 0)
- goto err_free_dev;
+ goto err_detach;
if (device_create_file(&tun->dev->dev, &dev_attr_tun_flags) ||
device_create_file(&tun->dev->dev, &dev_attr_owner) ||
strcpy(ifr->ifr_name, tun->dev->name);
return 0;
- err_free_dev:
+err_detach:
+ tun_detach_all(dev);
+err_free_flow:
+ tun_flow_uninit(tun);
+ security_tun_dev_free_security(tun->security);
+err_free_dev:
free_netdev(dev);
return err;
}
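The relabelled error path above follows the usual kernel convention of unwinding resources in reverse order of acquisition, one goto label per resource. A minimal standalone sketch of that pattern (the resources and names here are hypothetical, not the tun driver's actual calls):

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical resources standing in for flow table / security / netdev. */
static int acquire(const char *what) { printf("acquire %s\n", what); return 0; }
static void release(const char *what) { printf("release %s\n", what); }

static int setup(void)
{
	int err;

	err = acquire("A");		/* e.g. allocate the device   */
	if (err)
		goto err_out;
	err = acquire("B");		/* e.g. init the flow table   */
	if (err)
		goto err_free_a;
	err = acquire("C");		/* e.g. attach and register   */
	if (err)
		goto err_free_b;
	return 0;

	/* Unwind in reverse order of acquisition, one label per resource,
	 * so every failure point frees only what it already owns. */
err_free_b:
	release("B");
err_free_a:
	release("A");
err_out:
	return err;
}

int main(void) { return setup() ? EXIT_FAILURE : EXIT_SUCCESS; }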
#define AX_RXHDR_L4_TYPE_TCP 16
#define AX_RXHDR_L3CSUM_ERR 2
#define AX_RXHDR_L4CSUM_ERR 1
-#define AX_RXHDR_CRC_ERR ((u32)BIT(31))
-#define AX_RXHDR_DROP_ERR ((u32)BIT(30))
+#define AX_RXHDR_CRC_ERR ((u32)BIT(29))
+#define AX_RXHDR_DROP_ERR ((u32)BIT(31))
#define AX_ACCESS_MAC 0x01
#define AX_ACCESS_PHY 0x02
#define AX_ACCESS_EEPROM 0x04
.tx_fixup = ax88179_tx_fixup,
};
+static const struct driver_info samsung_info = {
+ .description = "Samsung USB Ethernet Adapter",
+ .bind = ax88179_bind,
+ .unbind = ax88179_unbind,
+ .status = ax88179_status,
+ .link_reset = ax88179_link_reset,
+ .reset = ax88179_reset,
+ .stop = ax88179_stop,
+ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+ .tx_fixup = ax88179_tx_fixup,
+};
+
static const struct usb_device_id products[] = {
{
/* ASIX AX88179 10/100/1000 */
}, {
/* Sitecom USB 3.0 to Gigabit Adapter */
USB_DEVICE(0x0df6, 0x0072),
- .driver_info = (unsigned long) &sitecom_info,
+ .driver_info = (unsigned long)&sitecom_info,
+}, {
+ /* Samsung USB Ethernet Adapter */
+ USB_DEVICE(0x04e8, 0xa100),
+ .driver_info = (unsigned long)&samsung_info,
},
{ },
};
#include <linux/usb/usbnet.h>
-#if defined(CONFIG_USB_NET_RNDIS_HOST) || defined(CONFIG_USB_NET_RNDIS_HOST_MODULE)
+#if IS_ENABLED(CONFIG_USB_NET_RNDIS_HOST)
static int is_rndis(struct usb_interface_descriptor *desc)
{
0xa6, 0x07, 0xc0, 0xff, 0xcb, 0x7e, 0x39, 0x2a,
};
-/*
- * probes control interface, claims data interface, collects the bulk
+/* probes control interface, claims data interface, collects the bulk
* endpoints, activates data interface (if needed), maybe sets MTU.
* all pure cdc, except for certain firmware workarounds, and knowing
* that rndis uses one different rule.
struct usb_cdc_mdlm_desc *desc = NULL;
struct usb_cdc_mdlm_detail_desc *detail = NULL;
- if (sizeof dev->data < sizeof *info)
+ if (sizeof(dev->data) < sizeof(*info))
return -EDOM;
/* expect strict spec conformance for the descriptors, but
is_activesync(&intf->cur_altsetting->desc) ||
is_wireless_rndis(&intf->cur_altsetting->desc));
- memset(info, 0, sizeof *info);
+ memset(info, 0, sizeof(*info));
info->control = intf;
while (len > 3) {
- if (buf [1] != USB_DT_CS_INTERFACE)
+ if (buf[1] != USB_DT_CS_INTERFACE)
goto next_desc;
/* use bDescriptorSubType to identify the CDC descriptors.
* in favor of a complicated OID-based RPC scheme doing what
* CDC Ethernet achieves with a simple descriptor.
*/
- switch (buf [2]) {
+ switch (buf[2]) {
case USB_CDC_HEADER_TYPE:
if (info->header) {
dev_dbg(&intf->dev, "extra CDC header\n");
goto bad_desc;
}
info->header = (void *) buf;
- if (info->header->bLength != sizeof *info->header) {
+ if (info->header->bLength != sizeof(*info->header)) {
dev_dbg(&intf->dev, "CDC header len %u\n",
info->header->bLength);
goto bad_desc;
goto bad_desc;
}
info->u = (void *) buf;
- if (info->u->bLength != sizeof *info->u) {
+ if (info->u->bLength != sizeof(*info->u)) {
dev_dbg(&intf->dev, "CDC union len %u\n",
info->u->bLength);
goto bad_desc;
goto bad_desc;
}
info->ether = (void *) buf;
- if (info->ether->bLength != sizeof *info->ether) {
+ if (info->ether->bLength != sizeof(*info->ether)) {
dev_dbg(&intf->dev, "CDC ether len %u\n",
info->ether->bLength);
goto bad_desc;
break;
}
next_desc:
- len -= buf [0]; /* bLength */
- buf += buf [0];
+ len -= buf[0]; /* bLength */
+ buf += buf[0];
}
/* Microsoft ActiveSync based and some regular RNDIS devices lack the
}
EXPORT_SYMBOL_GPL(usbnet_cdc_unbind);
-/*-------------------------------------------------------------------------
- *
- * Communications Device Class, Ethernet Control model
+/* Communications Device Class, Ethernet Control model
*
* Takes two interfaces. The DATA interface is inactive till an altsetting
* is selected. Configuration data includes class descriptors. There's
*
* This should interop with whatever the 2.4 "CDCEther.c" driver
* (by Brad Hards) talked with, with more functionality.
- *
- *-------------------------------------------------------------------------*/
+ */
static void dumpspeed(struct usbnet *dev, __le32 *speeds)
{
{
struct usb_cdc_notification *event;
- if (urb->actual_length < sizeof *event)
+ if (urb->actual_length < sizeof(*event))
return;
/* SPEED_CHANGE can get split into two 8-byte packets */
case USB_CDC_NOTIFY_SPEED_CHANGE: /* tx/rx rates */
netif_dbg(dev, timer, dev->net, "CDC: speed change (len %d)\n",
urb->actual_length);
- if (urb->actual_length != (sizeof *event + 8))
+ if (urb->actual_length != (sizeof(*event) + 8))
set_bit(EVENT_STS_SPLIT, &dev->flags);
else
dumpspeed(dev, (__le32 *) &event[1]);
static const struct driver_info cdc_info = {
.description = "CDC Ethernet Device",
.flags = FLAG_ETHER | FLAG_POINTTOPOINT,
- // .check_connect = cdc_check_connect,
.bind = usbnet_cdc_bind,
.unbind = usbnet_cdc_unbind,
.status = usbnet_cdc_status,
#define DELL_VENDOR_ID 0x413C
#define REALTEK_VENDOR_ID 0x0bda
-static const struct usb_device_id products [] = {
-/*
- * BLACKLIST !!
+static const struct usb_device_id products[] = {
+/* BLACKLIST !!
*
* First blacklist any products that are egregiously nonconformant
* with the CDC Ethernet specs. Minor braindamage we cope with; when
.driver_info = 0,
}, {
.match_flags = USB_DEVICE_ID_MATCH_INT_INFO
- | USB_DEVICE_ID_MATCH_DEVICE,
+ | USB_DEVICE_ID_MATCH_DEVICE,
.idVendor = 0x04DD,
.idProduct = 0x8007, /* C-700 */
ZAURUS_MASTER_INTERFACE,
.driver_info = 0,
},
-/*
- * WHITELIST!!!
+/* WHITELIST!!!
*
* CDC Ether uses two interfaces, not necessarily consecutive.
* We match the main interface, ignoring the optional device
*/
{
/* ZTE (Vodafone) K3805-Z */
- .match_flags = USB_DEVICE_ID_MATCH_VENDOR
- | USB_DEVICE_ID_MATCH_PRODUCT
- | USB_DEVICE_ID_MATCH_INT_INFO,
- .idVendor = ZTE_VENDOR_ID,
- .idProduct = 0x1003,
- .bInterfaceClass = USB_CLASS_COMM,
- .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET,
- .bInterfaceProtocol = USB_CDC_PROTO_NONE,
+ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1003, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&wwan_info,
}, {
/* ZTE (Vodafone) K3806-Z */
- .match_flags = USB_DEVICE_ID_MATCH_VENDOR
- | USB_DEVICE_ID_MATCH_PRODUCT
- | USB_DEVICE_ID_MATCH_INT_INFO,
- .idVendor = ZTE_VENDOR_ID,
- .idProduct = 0x1015,
- .bInterfaceClass = USB_CLASS_COMM,
- .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET,
- .bInterfaceProtocol = USB_CDC_PROTO_NONE,
+ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1015, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&wwan_info,
}, {
/* ZTE (Vodafone) K4510-Z */
- .match_flags = USB_DEVICE_ID_MATCH_VENDOR
- | USB_DEVICE_ID_MATCH_PRODUCT
- | USB_DEVICE_ID_MATCH_INT_INFO,
- .idVendor = ZTE_VENDOR_ID,
- .idProduct = 0x1173,
- .bInterfaceClass = USB_CLASS_COMM,
- .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET,
- .bInterfaceProtocol = USB_CDC_PROTO_NONE,
+ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1173, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&wwan_info,
}, {
/* ZTE (Vodafone) K3770-Z */
- .match_flags = USB_DEVICE_ID_MATCH_VENDOR
- | USB_DEVICE_ID_MATCH_PRODUCT
- | USB_DEVICE_ID_MATCH_INT_INFO,
- .idVendor = ZTE_VENDOR_ID,
- .idProduct = 0x1177,
- .bInterfaceClass = USB_CLASS_COMM,
- .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET,
- .bInterfaceProtocol = USB_CDC_PROTO_NONE,
+ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1177, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&wwan_info,
}, {
/* ZTE (Vodafone) K3772-Z */
- .match_flags = USB_DEVICE_ID_MATCH_VENDOR
- | USB_DEVICE_ID_MATCH_PRODUCT
- | USB_DEVICE_ID_MATCH_INT_INFO,
- .idVendor = ZTE_VENDOR_ID,
- .idProduct = 0x1181,
- .bInterfaceClass = USB_CLASS_COMM,
- .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET,
- .bInterfaceProtocol = USB_CDC_PROTO_NONE,
+ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1181, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&wwan_info,
}, {
+ /* Telit modules */
+ USB_VENDOR_AND_INTERFACE_INFO(0x1bc7, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+ .driver_info = (kernel_ulong_t) &wwan_info,
+}, {
USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET,
USB_CDC_PROTO_NONE),
.driver_info = (unsigned long) &cdc_info,
}, {
/* Various Huawei modems with a network port like the UMG1831 */
- .match_flags = USB_DEVICE_ID_MATCH_VENDOR
- | USB_DEVICE_ID_MATCH_INT_INFO,
- .idVendor = HUAWEI_VENDOR_ID,
- .bInterfaceClass = USB_CLASS_COMM,
- .bInterfaceSubClass = USB_CDC_SUBCLASS_ETHERNET,
- .bInterfaceProtocol = 255,
+ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_ETHERNET, 255),
.driver_info = (unsigned long)&wwan_info,
},
- { }, // END
+ { }, /* END */
};
MODULE_DEVICE_TABLE(usb, products);
rx_ctl |= 0x02;
} else if (net->flags & IFF_ALLMULTI ||
netdev_mc_count(net) > DM_MAX_MCAST) {
- rx_ctl |= 0x04;
+ rx_ctl |= 0x08;
} else if (!netdev_mc_empty(net)) {
struct netdev_hw_addr *ha;
{QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */
{QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */
{QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
- {QMI_FIXED_INTF(0x1e2d, 0x12d1, 4)}, /* Cinterion PLxx */
+ {QMI_FIXED_INTF(0x0b3c, 0xc005, 6)}, /* Olivetti Olicard 200 */
+ {QMI_FIXED_INTF(0x1e2d, 0x0060, 4)}, /* Cinterion PLxx */
/* 4. Gobi 1000 devices */
{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
if (num_sgs == 1)
return 0;
- urb->sg = kmalloc(num_sgs * sizeof(struct scatterlist), GFP_ATOMIC);
+ /* reserve one for zero packet */
+ urb->sg = kmalloc((num_sgs + 1) * sizeof(struct scatterlist),
+ GFP_ATOMIC);
if (!urb->sg)
return -ENOMEM;
if (build_dma_sg(skb, urb) < 0)
goto drop;
}
- entry->length = length = urb->transfer_buffer_length;
+ length = urb->transfer_buffer_length;
/* don't assume the hardware handles USB_ZERO_PACKET
* NOTE: strictly conforming cdc-ether devices should expect
if (length % dev->maxpacket == 0) {
if (!(info->flags & FLAG_SEND_ZLP)) {
if (!(info->flags & FLAG_MULTI_PACKET)) {
- urb->transfer_buffer_length++;
- if (skb_tailroom(skb)) {
+ length++;
+ if (skb_tailroom(skb) && !urb->num_sgs) {
skb->data[skb->len] = 0;
__skb_put(skb, 1);
- }
+ } else if (urb->num_sgs)
+ sg_set_buf(&urb->sg[urb->num_sgs++],
+ dev->padding_pkt, 1);
}
} else
urb->transfer_flags |= URB_ZERO_PACKET;
}
+ entry->length = urb->transfer_buffer_length = length;
spin_lock_irqsave(&dev->txq.lock, flags);
retval = usb_autopm_get_interface_async(dev->intf);
usb_kill_urb(dev->interrupt);
usb_free_urb(dev->interrupt);
+ kfree(dev->padding_pkt);
free_netdev(net);
}
/* initialize max rx_qlen and tx_qlen */
usbnet_update_max_qlen(dev);
+ if (dev->can_dma_sg && !(info->flags & FLAG_SEND_ZLP) &&
+ !(info->flags & FLAG_MULTI_PACKET)) {
+ dev->padding_pkt = kzalloc(1, GFP_KERNEL);
+ if (!dev->padding_pkt) {
+ status = -ENOMEM;
+ goto out4;
+ }
+ }
+
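The one-byte padding buffer allocated above exists because a bulk transfer whose length is an exact multiple of the endpoint's max packet size does not terminate on its own: the device keeps waiting for more data unless a zero-length packet follows or the transfer grows by one byte. A minimal userspace sketch of that check (the 512-byte max packet size is an assumption; real drivers take it from the endpoint descriptor):

#include <stdio.h>

/* Hypothetical bulk endpoint max packet size (512 for USB 2.0 high speed). */
#define MAXPACKET 512

int main(void)
{
	unsigned int lengths[] = { 511, 512, 1024, 1500 };
	unsigned int i;

	for (i = 0; i < sizeof(lengths) / sizeof(lengths[0]); i++) {
		unsigned int len = lengths[i];

		/* An exact multiple of the max packet size does not end the
		 * USB transfer by itself, so the driver must either send a
		 * zero-length packet or append one pad byte. */
		printf("len %4u -> %s\n", len,
		       len % MAXPACKET ? "ends transfer" : "needs ZLP or 1-byte pad");
	}
	return 0;
}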
status = register_netdev (net);
if (status)
- goto out4;
+ goto out5;
netif_info(dev, probe, dev->net,
"register '%s' at usb-%s-%s, %s, %pM\n",
udev->dev.driver->name,
return 0;
+out5:
+ kfree(dev->padding_pkt);
out4:
usb_free_urb(dev->interrupt);
out3:
return -EINVAL;
} else {
vi->curr_queue_pairs = queue_pairs;
- schedule_delayed_work(&vi->refill, 0);
+ /* virtnet_open() will refill when the device is brought up. */
+ if (dev->flags & IFF_UP)
+ schedule_delayed_work(&vi->refill, 0);
}
return 0;
{
struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb);
+ mutex_lock(&vi->config_lock);
+
+ if (!vi->config_enable)
+ goto done;
+
switch(action & ~CPU_TASKS_FROZEN) {
case CPU_ONLINE:
case CPU_DOWN_FAILED:
default:
break;
}
+
+done:
+ mutex_unlock(&vi->config_lock);
return NOTIFY_OK;
}
vi->config_enable = true;
mutex_unlock(&vi->config_lock);
+ rtnl_lock();
virtnet_set_queues(vi, vi->curr_queue_pairs);
+ rtnl_unlock();
return 0;
}
struct net_device *dev;
struct net *net = sock_net(sk);
sa_family_t sa_family = sk->sk_family;
- u16 port = htons(inet_sk(sk)->inet_sport);
+ __be16 port = inet_sk(sk)->inet_sport;
rcu_read_lock();
for_each_netdev_rcu(net, dev) {
struct net_device *dev;
struct net *net = sock_net(sk);
sa_family_t sa_family = sk->sk_family;
- u16 port = htons(inet_sk(sk)->inet_sport);
+ __be16 port = inet_sk(sk)->inet_sport;
rcu_read_lock();
for_each_netdev_rcu(net, dev) {
spin_lock(&vn->sock_lock);
hlist_del_rcu(&vs->hlist);
- smp_wmb();
- vs->sock->sk->sk_user_data = NULL;
+ rcu_assign_sk_user_data(vs->sock->sk, NULL);
vxlan_notify_del_rx_port(sk);
spin_unlock(&vn->sock_lock);
port = inet_sk(sk)->inet_sport;
- smp_read_barrier_depends();
- vs = (struct vxlan_sock *)sk->sk_user_data;
+ vs = rcu_dereference_sk_user_data(sk);
if (!vs)
goto drop;
};
/* Calls the ndo_add_vxlan_port of the caller in order to
- * supply the listening VXLAN udp ports.
+ * supply the listening VXLAN udp ports. Callers are expected
+ * to implement the ndo_add_vxlan_port.
*/
void vxlan_get_rx_port(struct net_device *dev)
{
struct net *net = dev_net(dev);
struct vxlan_net *vn = net_generic(net, vxlan_net_id);
sa_family_t sa_family;
- u16 port;
- int i;
-
- if (!dev || !dev->netdev_ops || !dev->netdev_ops->ndo_add_vxlan_port)
- return;
+ __be16 port;
+ unsigned int i;
spin_lock(&vn->sock_lock);
for (i = 0; i < PORT_HASH_SIZE; ++i) {
- hlist_for_each_entry_rcu(vs, vs_head(net, i), hlist) {
- port = htons(inet_sk(vs->sock->sk)->inet_sport);
+ hlist_for_each_entry_rcu(vs, &vn->sock_list[i], hlist) {
+ port = inet_sk(vs->sock->sk)->inet_sport;
sa_family = vs->sock->sk->sk_family;
dev->netdev_ops->ndo_add_vxlan_port(dev, sa_family,
port);
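The __be16 changes in the vxlan hunks above keep the port in network byte order end to end instead of storing an htons() result in a plain u16. A minimal userspace sketch of why mixing the two representations breaks on little-endian machines (the port number is only an example):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t host_port = 4789;		/* VXLAN port, host order      */
	uint16_t wire_port = htons(host_port);	/* network (big-endian) order  */

	/* On little-endian CPUs the two values differ, so comparing or
	 * hashing a host-order value against a wire-order one silently
	 * fails; typing the field __be16 lets sparse catch such mixing. */
	printf("host order: %u, network order stored in u16: %u, equal: %s\n",
	       (unsigned)host_port, (unsigned)wire_port,
	       host_port == wire_port ? "yes" : "no");
	return 0;
}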
atomic_set(&vs->refcnt, 1);
vs->rcv = rcv;
vs->data = data;
- smp_wmb();
- vs->sock->sk->sk_user_data = vs;
+ rcu_assign_sk_user_data(vs->sock->sk, vs);
spin_lock(&vn->sock_lock);
hlist_add_head_rcu(&vs->hlist, vs_head(net, port));
SET_ETHTOOL_OPS(dev, &vxlan_ethtool_ops);
- /* create an fdb entry for default destination */
- err = vxlan_fdb_create(vxlan, all_zeros_mac,
- &vxlan->default_dst.remote_ip,
- NUD_REACHABLE|NUD_PERMANENT,
- NLM_F_EXCL|NLM_F_CREATE,
- vxlan->dst_port, vxlan->default_dst.remote_vni,
- vxlan->default_dst.remote_ifindex, NTF_SELF);
- if (err)
- return err;
+ /* create an fdb entry for a valid default destination */
+ if (!vxlan_addr_any(&vxlan->default_dst.remote_ip)) {
+ err = vxlan_fdb_create(vxlan, all_zeros_mac,
+ &vxlan->default_dst.remote_ip,
+ NUD_REACHABLE|NUD_PERMANENT,
+ NLM_F_EXCL|NLM_F_CREATE,
+ vxlan->dst_port,
+ vxlan->default_dst.remote_vni,
+ vxlan->default_dst.remote_ifindex,
+ NTF_SELF);
+ if (err)
+ return err;
+ }
err = register_netdevice(dev);
if (err) {
}
i = port->index;
+ memset(&sync, 0, sizeof(sync));
sync.clock_rate = FST_RDL(card, portConfig[i].lineSpeed);
/* Lucky card and linux use same encoding here */
sync.clock_type = FST_RDB(card, portConfig[i].internalClock) ==
ifr->ifr_settings.size = size; /* data size wanted */
return -ENOBUFS;
}
+ memset(&line, 0, sizeof(line));
line.clock_type = get_status(port)->clocking;
line.clock_rate = 0;
line.loopback = 0;
struct ath_hw *ah = sc->sc_ah;
struct ath_common *common = ath9k_hw_common(ah);
unsigned long flags;
+ int i;
if (ath_startrecv(sc) != 0) {
ath_err(common, "Unable to restart recv logic\n");
}
work:
ath_restart_work(sc);
+
+ for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) {
+ if (!ATH_TXQ_SETUP(sc, i))
+ continue;
+
+ spin_lock_bh(&sc->tx.txq[i].axq_lock);
+ ath_txq_schedule(sc, &sc->tx.txq[i]);
+ spin_unlock_bh(&sc->tx.txq[i].axq_lock);
+ }
}
ieee80211_wake_queues(sc->hw);
static int ath_reset(struct ath_softc *sc)
{
- int i, r;
+ int r;
ath9k_ps_wakeup(sc);
-
r = ath_reset_internal(sc, NULL);
-
- for (i = 0; i < ATH9K_NUM_TX_QUEUES; i++) {
- if (!ATH_TXQ_SETUP(sc, i))
- continue;
-
- spin_lock_bh(&sc->tx.txq[i].axq_lock);
- ath_txq_schedule(sc, &sc->tx.txq[i]);
- spin_unlock_bh(&sc->tx.txq[i].axq_lock);
- }
-
ath9k_ps_restore(sc);
return r;
return;
/*
- * All MPDUs in an aggregate will use the same LNA
- * as the first MPDU.
- */
- if (rs->rs_isaggr && !rs->rs_firstaggr)
- return;
-
- /*
* Change the default rx antenna if rx diversity
* chooses the other antenna 3 times in a row.
*/
tbf->bf_buf_addr = bf->bf_buf_addr;
memcpy(tbf->bf_desc, bf->bf_desc, sc->sc_ah->caps.tx_desc_len);
tbf->bf_state = bf->bf_state;
+ tbf->bf_state.stale = false;
return tbf;
}
u16 tid, u16 *ssn)
{
struct ath_atx_tid *txtid;
+ struct ath_txq *txq;
struct ath_node *an;
u8 density;
an = (struct ath_node *)sta->drv_priv;
txtid = ATH_AN_2_TID(an, tid);
+ txq = txtid->ac->txq;
+
+ ath_txq_lock(sc, txq);
/* update ampdu factor/density, they may have changed. This may happen
* in HT IBSS when a beacon with HT-info is received after the station
memset(txtid->tx_buf, 0, sizeof(txtid->tx_buf));
txtid->baw_head = txtid->baw_tail = 0;
+ ath_txq_unlock_complete(sc, txq);
+
return 0;
}
__skb_unlink(bf->bf_mpdu, tid_q);
list_add_tail(&bf->list, &bf_q);
ath_set_rates(tid->an->vif, tid->an->sta, bf);
- ath_tx_addto_baw(sc, tid, bf);
- bf->bf_state.bf_type &= ~BUF_AGGR;
+ if (bf_isampdu(bf)) {
+ ath_tx_addto_baw(sc, tid, bf);
+ bf->bf_state.bf_type &= ~BUF_AGGR;
+ }
if (bf_tail)
bf_tail->bf_next = bf;
if (bf_is_ampdu_not_probing(bf))
txq->axq_ampdu_depth++;
- bf = bf->bf_lastbf->bf_next;
+ bf_last = bf->bf_lastbf;
+ bf = bf_last->bf_next;
+ bf_last->bf_next = NULL;
}
}
}
static void ath_tx_send_normal(struct ath_softc *sc, struct ath_txq *txq,
struct ath_atx_tid *tid, struct sk_buff *skb)
{
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
struct ath_frame_info *fi = get_frame_info(skb);
struct list_head bf_head;
- struct ath_buf *bf;
-
- bf = fi->bf;
+ struct ath_buf *bf = fi->bf;
INIT_LIST_HEAD(&bf_head);
list_add_tail(&bf->list, &bf_head);
bf->bf_state.bf_type = 0;
+ if (tid && (tx_info->flags & IEEE80211_TX_CTL_AMPDU)) {
+ bf->bf_state.bf_type = BUF_AMPDU;
+ ath_tx_addto_baw(sc, tid, bf);
+ }
bf->bf_next = NULL;
bf->bf_lastbf = bf;
config BRCMFMAC_SDIO
bool "SDIO bus interface support for FullMAC driver"
- depends on MMC
+ depends on (MMC = y || MMC = BRCMFMAC)
depends on BRCMFMAC
select FW_LOADER
default y
config BRCMFMAC_USB
bool "USB bus interface support for FullMAC driver"
- depends on USB
+ depends on (USB = y || USB = BRCMFMAC)
depends on BRCMFMAC
select FW_LOADER
---help---
static int brcmf_sdio_pd_probe(struct platform_device *pdev)
{
- int ret;
-
brcmf_dbg(SDIO, "Enter\n");
brcmfmac_sdio_pdata = pdev->dev.platform_data;
if (brcmfmac_sdio_pdata->power_on)
brcmfmac_sdio_pdata->power_on();
- ret = sdio_register_driver(&brcmf_sdmmc_driver);
- if (ret)
- brcmf_err("sdio_register_driver failed: %d\n", ret);
-
- return ret;
+ return 0;
}
static int brcmf_sdio_pd_remove(struct platform_device *pdev)
}
};
+void brcmf_sdio_register(void)
+{
+ int ret;
+
+ ret = sdio_register_driver(&brcmf_sdmmc_driver);
+ if (ret)
+ brcmf_err("sdio_register_driver failed: %d\n", ret);
+}
+
void brcmf_sdio_exit(void)
{
brcmf_dbg(SDIO, "Enter\n");
sdio_unregister_driver(&brcmf_sdmmc_driver);
}
-void brcmf_sdio_init(void)
+void __init brcmf_sdio_init(void)
{
int ret;
brcmf_dbg(SDIO, "Enter\n");
ret = platform_driver_probe(&brcmf_sdio_pd, brcmf_sdio_pd_probe);
- if (ret == -ENODEV) {
- brcmf_dbg(SDIO, "No platform data available, registering without.\n");
- ret = sdio_register_driver(&brcmf_sdmmc_driver);
- }
-
- if (ret)
- brcmf_err("driver registration failed: %d\n", ret);
+ if (ret == -ENODEV)
+ brcmf_dbg(SDIO, "No platform data available.\n");
}
#ifdef CONFIG_BRCMFMAC_SDIO
extern void brcmf_sdio_exit(void);
extern void brcmf_sdio_init(void);
+extern void brcmf_sdio_register(void);
#endif
#ifdef CONFIG_BRCMFMAC_USB
extern void brcmf_usb_exit(void);
-extern void brcmf_usb_init(void);
+extern void brcmf_usb_register(void);
#endif
#endif /* _BRCMF_BUS_H_ */
return bus->chip << 4 | bus->chiprev;
}
-static void brcmf_driver_init(struct work_struct *work)
+static void brcmf_driver_register(struct work_struct *work)
{
- brcmf_debugfs_init();
-
#ifdef CONFIG_BRCMFMAC_SDIO
- brcmf_sdio_init();
+ brcmf_sdio_register();
#endif
#ifdef CONFIG_BRCMFMAC_USB
- brcmf_usb_init();
+ brcmf_usb_register();
#endif
}
-static DECLARE_WORK(brcmf_driver_work, brcmf_driver_init);
+static DECLARE_WORK(brcmf_driver_work, brcmf_driver_register);
static int __init brcmfmac_module_init(void)
{
+ brcmf_debugfs_init();
+#ifdef CONFIG_BRCMFMAC_SDIO
+ brcmf_sdio_init();
+#endif
if (!schedule_work(&brcmf_driver_work))
return -EBUSY;
brcmf_release_fw(&fw_image_list);
}
-void brcmf_usb_init(void)
+void brcmf_usb_register(void)
{
brcmf_dbg(USB, "Enter\n");
INIT_LIST_HEAD(&fw_image_list);
if (err != 0)
brcms_err(wl->wlc->hw->d11core, "%s: brcms_up() returned %d\n",
__func__, err);
+
+ bcma_core_pci_power_save(wl->wlc->hw->d11core->bus, true);
return err;
}
return;
}
+ bcma_core_pci_power_save(wl->wlc->hw->d11core->bus, false);
+
/* put driver in down state */
spin_lock_bh(&wl->lock);
brcms_down(wl);
struct cw1200_common *core;
const struct cw1200_platform_data_spi *pdata;
spinlock_t lock; /* Serialize all bus operations */
+ wait_queue_head_t wq;
int claimed;
};
{
unsigned long flags;
+ DECLARE_WAITQUEUE(wait, current);
+
might_sleep();
+ add_wait_queue(&self->wq, &wait);
spin_lock_irqsave(&self->lock, flags);
while (1) {
set_current_state(TASK_UNINTERRUPTIBLE);
set_current_state(TASK_RUNNING);
self->claimed = 1;
spin_unlock_irqrestore(&self->lock, flags);
+ remove_wait_queue(&self->wq, &wait);
return;
}
spin_lock_irqsave(&self->lock, flags);
self->claimed = 0;
spin_unlock_irqrestore(&self->lock, flags);
+ wake_up(&self->wq);
+
return;
}
struct hwbus_priv *self = dev_id;
if (self->core) {
+ cw1200_spi_lock(self);
cw1200_irq_handler(self->core);
+ cw1200_spi_unlock(self);
return IRQ_HANDLED;
} else {
return IRQ_NONE;
pr_debug("SW IRQ subscribe\n");
- ret = request_any_context_irq(self->func->irq, cw1200_spi_irq_handler,
- IRQF_TRIGGER_HIGH,
- "cw1200_wlan_irq", self);
+ ret = request_threaded_irq(self->func->irq, NULL,
+ cw1200_spi_irq_handler,
+ IRQF_TRIGGER_HIGH | IRQF_ONESHOT,
+ "cw1200_wlan_irq", self);
if (WARN_ON(ret < 0))
goto exit;
spi_set_drvdata(func, self);
+ init_waitqueue_head(&self->wq);
+
status = cw1200_spi_irq_subscribe(self);
status = cw1200_core_probe(&cw1200_spi_hwbus_ops,
.ht_params = &iwl6000_ht_params,
};
+const struct iwl_cfg iwl6035_2agn_sff_cfg = {
+ .name = "Intel(R) Centrino(R) Ultimate-N 6235 AGN",
+ IWL_DEVICE_6035,
+ .ht_params = &iwl6000_ht_params,
+};
+
const struct iwl_cfg iwl1030_bgn_cfg = {
.name = "Intel(R) Centrino(R) Wireless-N 1030 BGN",
IWL_DEVICE_6030,
extern const struct iwl_cfg iwl2000_2bgn_d_cfg;
extern const struct iwl_cfg iwl2030_2bgn_cfg;
extern const struct iwl_cfg iwl6035_2agn_cfg;
+extern const struct iwl_cfg iwl6035_2agn_sff_cfg;
extern const struct iwl_cfg iwl105_bgn_cfg;
extern const struct iwl_cfg iwl105_bgn_d_cfg;
extern const struct iwl_cfg iwl135_bgn_cfg;
{
int ret;
- WARN_ONCE(trans->state != IWL_TRANS_FW_ALIVE,
- "%s bad state = %d", __func__, trans->state);
+ if (trans->state != IWL_TRANS_FW_ALIVE) {
+ IWL_ERR(trans, "%s bad state = %d", __func__, trans->state);
+ return -EIO;
+ }
if (!(cmd->flags & CMD_ASYNC))
lock_map_acquire_read(&trans->sync_cmd_lockdep_map);
if (!mvmvif->queue_params[ac].uapsd)
continue;
- cmd->flags |= cpu_to_le16(POWER_FLAGS_ADVANCE_PM_ENA_MSK);
+ if (mvm->cur_ucode != IWL_UCODE_WOWLAN)
+ cmd->flags |=
+ cpu_to_le16(POWER_FLAGS_ADVANCE_PM_ENA_MSK);
+
cmd->uapsd_ac_flags |= BIT(ac);
/* QNDP TID - the highest TID with no admission control */
return false;
}
+ /*
+ * If scan cannot be aborted, it means that we had a
+ * SCAN_COMPLETE_NOTIFICATION in the pipe and it called
+ * ieee80211_scan_completed already.
+ */
IWL_DEBUG_SCAN(mvm, "Scan cannot be aborted, exit now: %d\n",
*resp);
return true;
SCAN_COMPLETE_NOTIFICATION };
int ret;
+ if (mvm->scan_status == IWL_MVM_SCAN_NONE)
+ return;
+
iwl_init_notification_wait(&mvm->notif_wait, &wait_scan_abort,
scan_abort_notif,
ARRAY_SIZE(scan_abort_notif),
iwl_mvm_scan_abort_notif, NULL);
- ret = iwl_mvm_send_cmd_pdu(mvm, SCAN_ABORT_CMD, CMD_SYNC, 0, NULL);
+ ret = iwl_mvm_send_cmd_pdu(mvm, SCAN_ABORT_CMD,
+ CMD_SYNC | CMD_SEND_IN_RFKILL, 0, NULL);
if (ret) {
IWL_ERR(mvm, "Couldn't send SCAN_ABORT_CMD: %d\n", ret);
+ /* mac80211's state will be cleaned in the fw_restart flow */
goto out_remove_notif;
}
/* 6x00 Series */
{IWL_PCI_DEVICE(0x422B, 0x1101, iwl6000_3agn_cfg)},
+ {IWL_PCI_DEVICE(0x422B, 0x1108, iwl6000_3agn_cfg)},
{IWL_PCI_DEVICE(0x422B, 0x1121, iwl6000_3agn_cfg)},
+ {IWL_PCI_DEVICE(0x422B, 0x1128, iwl6000_3agn_cfg)},
{IWL_PCI_DEVICE(0x422C, 0x1301, iwl6000i_2agn_cfg)},
{IWL_PCI_DEVICE(0x422C, 0x1306, iwl6000i_2abg_cfg)},
{IWL_PCI_DEVICE(0x422C, 0x1307, iwl6000i_2bg_cfg)},
{IWL_PCI_DEVICE(0x422C, 0x1321, iwl6000i_2agn_cfg)},
{IWL_PCI_DEVICE(0x422C, 0x1326, iwl6000i_2abg_cfg)},
{IWL_PCI_DEVICE(0x4238, 0x1111, iwl6000_3agn_cfg)},
+ {IWL_PCI_DEVICE(0x4238, 0x1118, iwl6000_3agn_cfg)},
{IWL_PCI_DEVICE(0x4239, 0x1311, iwl6000i_2agn_cfg)},
{IWL_PCI_DEVICE(0x4239, 0x1316, iwl6000i_2abg_cfg)},
{IWL_PCI_DEVICE(0x0082, 0x1301, iwl6005_2agn_cfg)},
{IWL_PCI_DEVICE(0x0082, 0x1306, iwl6005_2abg_cfg)},
{IWL_PCI_DEVICE(0x0082, 0x1307, iwl6005_2bg_cfg)},
+ {IWL_PCI_DEVICE(0x0082, 0x1308, iwl6005_2agn_cfg)},
{IWL_PCI_DEVICE(0x0082, 0x1321, iwl6005_2agn_cfg)},
{IWL_PCI_DEVICE(0x0082, 0x1326, iwl6005_2abg_cfg)},
+ {IWL_PCI_DEVICE(0x0082, 0x1328, iwl6005_2agn_cfg)},
{IWL_PCI_DEVICE(0x0085, 0x1311, iwl6005_2agn_cfg)},
+ {IWL_PCI_DEVICE(0x0085, 0x1318, iwl6005_2agn_cfg)},
{IWL_PCI_DEVICE(0x0085, 0x1316, iwl6005_2abg_cfg)},
{IWL_PCI_DEVICE(0x0082, 0xC020, iwl6005_2agn_sff_cfg)},
{IWL_PCI_DEVICE(0x0085, 0xC220, iwl6005_2agn_sff_cfg)},
+ {IWL_PCI_DEVICE(0x0085, 0xC228, iwl6005_2agn_sff_cfg)},
{IWL_PCI_DEVICE(0x0082, 0x4820, iwl6005_2agn_d_cfg)},
{IWL_PCI_DEVICE(0x0082, 0x1304, iwl6005_2agn_mow1_cfg)},/* low 5GHz active */
{IWL_PCI_DEVICE(0x0082, 0x1305, iwl6005_2agn_mow2_cfg)},/* high 5GHz active */
/* 6x35 Series */
{IWL_PCI_DEVICE(0x088E, 0x4060, iwl6035_2agn_cfg)},
+ {IWL_PCI_DEVICE(0x088E, 0x406A, iwl6035_2agn_sff_cfg)},
{IWL_PCI_DEVICE(0x088F, 0x4260, iwl6035_2agn_cfg)},
+ {IWL_PCI_DEVICE(0x088F, 0x426A, iwl6035_2agn_sff_cfg)},
{IWL_PCI_DEVICE(0x088E, 0x4460, iwl6035_2agn_cfg)},
+ {IWL_PCI_DEVICE(0x088E, 0x446A, iwl6035_2agn_sff_cfg)},
{IWL_PCI_DEVICE(0x088E, 0x4860, iwl6035_2agn_cfg)},
{IWL_PCI_DEVICE(0x088F, 0x5260, iwl6035_2agn_cfg)},
#if IS_ENABLED(CONFIG_IWLMVM)
/* 7000 Series */
{IWL_PCI_DEVICE(0x08B1, 0x4070, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0x4072, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4170, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4060, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0x406A, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4160, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4062, iwl7260_n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4162, iwl7260_n_cfg)},
{IWL_PCI_DEVICE(0x08B2, 0x4270, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B2, 0x4272, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B2, 0x4260, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B2, 0x426A, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B2, 0x4262, iwl7260_n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4470, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0x4472, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4460, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0x446A, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4462, iwl7260_n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4870, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x486E, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4A70, iwl7260_2ac_cfg_high_temp)},
{IWL_PCI_DEVICE(0x08B1, 0x4A6E, iwl7260_2ac_cfg_high_temp)},
{IWL_PCI_DEVICE(0x08B1, 0x4A6C, iwl7260_2ac_cfg_high_temp)},
+ {IWL_PCI_DEVICE(0x08B1, 0x4570, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0x4560, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B2, 0x4370, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B2, 0x4360, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0x5070, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4020, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0x402A, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B2, 0x4220, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0x4420, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC070, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0xC072, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC170, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC060, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0xC06A, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC160, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC062, iwl7260_n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC162, iwl7260_n_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0xC770, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0xC760, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B2, 0xC270, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B2, 0xC272, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B2, 0xC260, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B2, 0xC26A, iwl7260_n_cfg)},
{IWL_PCI_DEVICE(0x08B2, 0xC262, iwl7260_n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC470, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0xC472, iwl7260_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC460, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC462, iwl7260_n_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0xC570, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0xC560, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B2, 0xC370, iwl7260_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0xC360, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC020, iwl7260_2n_cfg)},
+ {IWL_PCI_DEVICE(0x08B1, 0xC02A, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B2, 0xC220, iwl7260_2n_cfg)},
{IWL_PCI_DEVICE(0x08B1, 0xC420, iwl7260_2n_cfg)},
/* 3160 Series */
{IWL_PCI_DEVICE(0x08B3, 0x0070, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B3, 0x0072, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x0170, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B3, 0x0172, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x0060, iwl3160_2n_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x0062, iwl3160_n_cfg)},
{IWL_PCI_DEVICE(0x08B4, 0x0270, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B4, 0x0272, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x0470, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B3, 0x0472, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B4, 0x0370, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x8070, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B3, 0x8072, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x8170, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B3, 0x8172, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x8060, iwl3160_2n_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x8062, iwl3160_n_cfg)},
{IWL_PCI_DEVICE(0x08B4, 0x8270, iwl3160_2ac_cfg)},
{IWL_PCI_DEVICE(0x08B3, 0x8470, iwl3160_2ac_cfg)},
+ {IWL_PCI_DEVICE(0x08B3, 0x8570, iwl3160_2ac_cfg)},
#endif /* CONFIG_IWLMVM */
{0}
spin_lock_init(&trans_pcie->reg_lock);
init_waitqueue_head(&trans_pcie->ucode_write_waitq);
+ err = pci_enable_device(pdev);
+ if (err)
+ goto out_no_pci;
+
if (!cfg->base_params->pcie_l1_allowed) {
/*
* W/A - seems to solve weird behavior. We need to remove this
PCIE_LINK_STATE_CLKPM);
}
- err = pci_enable_device(pdev);
- if (err)
- goto out_no_pci;
-
pci_set_master(pdev);
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(36));
* non-AGG queue.
*/
iwl_clear_bits_prph(trans, SCD_AGGR_SEL, BIT(txq_id));
+
+ ssn = trans_pcie->txq[txq_id].q.read_ptr;
}
/* Place first TFD at index corresponding to start sequence number.
*/
int
mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
- struct mwifiex_ra_list_tbl *pra_list, int headroom,
+ struct mwifiex_ra_list_tbl *pra_list,
int ptrindex, unsigned long ra_list_flags)
__releases(&priv->wmm.ra_list_spinlock)
{
int pad = 0, ret;
struct mwifiex_tx_param tx_param;
struct txpd *ptx_pd = NULL;
+ int headroom = adapter->iface_type == MWIFIEX_USB ? 0 : INTF_HEADER_LEN;
skb_src = skb_peek(&pra_list->skb_head);
if (!skb_src) {
int mwifiex_11n_deaggregate_pkt(struct mwifiex_private *priv,
struct sk_buff *skb);
int mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
- struct mwifiex_ra_list_tbl *ptr, int headroom,
+ struct mwifiex_ra_list_tbl *ptr,
int ptr_index, unsigned long flags)
__releases(&priv->wmm.ra_list_spinlock);
uint32_t conditions = le32_to_cpu(phs_cfg->params.hs_config.conditions);
if (phs_cfg->action == cpu_to_le16(HS_ACTIVATE) &&
- adapter->iface_type == MWIFIEX_SDIO) {
+ adapter->iface_type != MWIFIEX_USB) {
mwifiex_hs_activated_event(priv, true);
return 0;
} else {
}
if (conditions != HS_CFG_CANCEL) {
adapter->is_hs_configured = true;
- if (adapter->iface_type == MWIFIEX_USB ||
- adapter->iface_type == MWIFIEX_PCIE)
+ if (adapter->iface_type == MWIFIEX_USB)
mwifiex_hs_activated_event(priv, true);
} else {
adapter->is_hs_configured = false;
*/
int mwifiex_deauthenticate(struct mwifiex_private *priv, u8 *mac)
{
+ int ret = 0;
+
if (!priv->media_connected)
return 0;
switch (priv->bss_mode) {
case NL80211_IFTYPE_STATION:
case NL80211_IFTYPE_P2P_CLIENT:
- return mwifiex_deauthenticate_infra(priv, mac);
+ ret = mwifiex_deauthenticate_infra(priv, mac);
+ if (ret)
+ cfg80211_disconnected(priv->netdev, 0, NULL, 0,
+ GFP_KERNEL);
+ break;
case NL80211_IFTYPE_ADHOC:
return mwifiex_send_cmd_sync(priv,
HostCmd_CMD_802_11_AD_HOC_STOP,
break;
}
- return 0;
+ return ret;
}
EXPORT_SYMBOL_GPL(mwifiex_deauthenticate);
}
} while (true);
- if ((adapter->int_status) || IS_CARD_RX_RCVD(adapter))
+ spin_lock_irqsave(&adapter->main_proc_lock, flags);
+ if ((adapter->int_status) || IS_CARD_RX_RCVD(adapter)) {
+ spin_unlock_irqrestore(&adapter->main_proc_lock, flags);
goto process_start;
+ }
- spin_lock_irqsave(&adapter->main_proc_lock, flags);
adapter->mwifiex_processing = false;
spin_unlock_irqrestore(&adapter->main_proc_lock, flags);
dev_dbg(adapter->dev,
"info: successfully disconnected from %pM: reason code %d\n",
priv->cfg_bssid, reason_code);
- if (priv->bss_mode == NL80211_IFTYPE_STATION) {
+ if (priv->bss_mode == NL80211_IFTYPE_STATION ||
+ priv->bss_mode == NL80211_IFTYPE_P2P_CLIENT) {
cfg80211_disconnected(priv->netdev, reason_code, NULL, 0,
GFP_KERNEL);
}
*/
adapter->is_suspended = true;
- for (i = 0; i < adapter->priv_num; i++)
- netif_carrier_off(adapter->priv[i]->netdev);
-
if (atomic_read(&card->rx_cmd_urb_pending) && card->rx_cmd.urb)
usb_kill_urb(card->rx_cmd.urb);
MWIFIEX_RX_CMD_BUF_SIZE);
}
- for (i = 0; i < adapter->priv_num; i++)
- if (adapter->priv[i]->media_connected)
- netif_carrier_on(adapter->priv[i]->netdev);
-
/* Disable Host Sleep */
if (adapter->hs_activated)
mwifiex_cancel_hs(mwifiex_get_priv(adapter,
if (enable_tx_amsdu && mwifiex_is_amsdu_allowed(priv, tid) &&
mwifiex_is_11n_aggragation_possible(priv, ptr,
adapter->tx_buf_size))
- mwifiex_11n_aggregate_pkt(priv, ptr, INTF_HEADER_LEN,
- ptr_index, flags);
+ mwifiex_11n_aggregate_pkt(priv, ptr, ptr_index, flags);
/* ra_list_spinlock has been freed in
mwifiex_11n_aggregate_pkt() */
else
{USB_DEVICE(0x06a9, 0x000e)}, /* Westell 802.11g USB (A90-211WG-01) */
{USB_DEVICE(0x06b9, 0x0121)}, /* Thomson SpeedTouch 121g */
{USB_DEVICE(0x0707, 0xee13)}, /* SMC 2862W-G version 2 */
+ {USB_DEVICE(0x07aa, 0x0020)}, /* Corega WLUSB2GTST USB */
{USB_DEVICE(0x0803, 0x4310)}, /* Zoom 4410a */
{USB_DEVICE(0x083a, 0x4521)}, /* Siemens Gigaset USB Adapter 54 version 2 */
{USB_DEVICE(0x083a, 0x4531)}, /* T-Com Sinus 154 data II */
if (err) {
dev_err(&priv->udev->dev, "(p54usb) cannot load firmware %s "
"(%d)!\n", p54u_fwlist[i].fw, err);
+ usb_put_dev(udev);
}
return err;
rt2800_init_registers(rt2x00dev)))
return -EIO;
+ if (unlikely(rt2800_wait_bbp_rf_ready(rt2x00dev)))
+ return -EIO;
+
/*
* Send signal to firmware during boot time.
*/
rt2800_register_write(rt2x00dev, H2M_BBP_AGENT, 0);
rt2800_register_write(rt2x00dev, H2M_MAILBOX_CSR, 0);
- if (rt2x00_is_usb(rt2x00dev)) {
+ if (rt2x00_is_usb(rt2x00dev))
rt2800_register_write(rt2x00dev, H2M_INT_SRC, 0);
- rt2800_mcu_request(rt2x00dev, MCU_BOOT_SIGNAL, 0, 0, 0);
- }
+ rt2800_mcu_request(rt2x00dev, MCU_BOOT_SIGNAL, 0, 0, 0);
msleep(1);
- if (unlikely(rt2800_wait_bbp_rf_ready(rt2x00dev) ||
- rt2800_wait_bbp_ready(rt2x00dev)))
+ if (unlikely(rt2800_wait_bbp_ready(rt2x00dev)))
return -EIO;
rt2800_init_bbp(rt2x00dev);
goto exit_release_regions;
}
- pci_enable_msi(pci_dev);
-
hw = ieee80211_alloc_hw(sizeof(struct rt2x00_dev), ops->hw);
if (!hw) {
rt2x00_probe_err("Failed to allocate hardware\n");
retval = -ENOMEM;
- goto exit_disable_msi;
+ goto exit_release_regions;
}
pci_set_drvdata(pci_dev, hw);
exit_free_device:
ieee80211_free_hw(hw);
-exit_disable_msi:
- pci_disable_msi(pci_dev);
-
exit_release_regions:
pci_release_regions(pci_dev);
rt2x00pci_free_reg(rt2x00dev);
ieee80211_free_hw(hw);
- pci_disable_msi(pci_dev);
-
/*
* Free the PCI device data.
*/
skb_queue_tail(&priv->rx_queue, skb);
usb_anchor_urb(entry, &priv->anchored);
ret = usb_submit_urb(entry, GFP_KERNEL);
+ usb_put_urb(entry);
if (ret) {
skb_unlink(skb, &priv->rx_queue);
usb_unanchor_urb(entry);
goto err;
}
- usb_free_urb(entry);
}
return ret;
err:
- usb_free_urb(entry);
kfree_skb(skb);
usb_kill_anchored_urbs(&priv->anchored);
return ret;
(RETRY_COUNT << 8 /* short retry limit */) |
(RETRY_COUNT << 0 /* long retry limit */) |
(7 << 21 /* MAX TX DMA */));
- rtl8187_init_urbs(dev);
- rtl8187b_init_status_urb(dev);
+ ret = rtl8187_init_urbs(dev);
+ if (ret)
+ goto rtl8187_start_exit;
+ ret = rtl8187b_init_status_urb(dev);
+ if (ret)
+ usb_kill_anchored_urbs(&priv->anchored);
goto rtl8187_start_exit;
}
rtl818x_iowrite32(priv, &priv->map->MAR[0], ~0);
rtl818x_iowrite32(priv, &priv->map->MAR[1], ~0);
- rtl8187_init_urbs(dev);
+ ret = rtl8187_init_urbs(dev);
+ if (ret)
+ goto rtl8187_start_exit;
reg = RTL818X_RX_CONF_ONLYERLPKT |
RTL818X_RX_CONF_RX_AUTORESETPHY |
(bool)GET_RX_DESC_PAGGR(pdesc));
rx_status->mactime = GET_RX_DESC_TSFL(pdesc);
if (phystatus) {
- p_drvinfo = (struct rx_fwinfo_92c *)(pdesc + RTL_RX_DESC_SIZE);
+ p_drvinfo = (struct rx_fwinfo_92c *)(skb->data +
+ stats->rx_bufshift);
rtl92c_translate_rx_signal_stuff(hw, skb, stats, pdesc,
p_drvinfo);
}
that it points to the data allocated
beyond this structure like:
rtl_pci_priv or rtl_usb_priv */
- u8 priv[0];
+ u8 priv[0] __aligned(sizeof(void *));
};
#define rtl_priv(hw) (((struct rtl_priv *)(hw)->priv))
unsigned long rx_ring_ref, unsigned int tx_evtchn,
unsigned int rx_evtchn);
void xenvif_disconnect(struct xenvif *vif);
+void xenvif_free(struct xenvif *vif);
int xenvif_xenbus_init(void);
void xenvif_xenbus_fini(void);
}
netdev_dbg(dev, "Successfully created xenvif\n");
+
+ __module_get(THIS_MODULE);
+
return vif;
}
if (vif->tx_irq)
return 0;
- __module_get(THIS_MODULE);
-
err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
if (err < 0)
goto err;
init_waitqueue_head(&vif->wq);
vif->task = kthread_create(xenvif_kthread,
- (void *)vif, vif->dev->name);
+ (void *)vif, "%s", vif->dev->name);
if (IS_ERR(vif->task)) {
pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
err = PTR_ERR(vif->task);
void xenvif_disconnect(struct xenvif *vif)
{
- /* Disconnect funtion might get called by generic framework
- * even before vif connects, so we need to check if we really
- * need to do a module_put.
- */
- int need_module_put = 0;
-
if (netif_carrier_ok(vif->dev))
xenvif_carrier_off(vif);
unbind_from_irqhandler(vif->tx_irq, vif);
unbind_from_irqhandler(vif->rx_irq, vif);
}
- /* vif->irq is valid, we had a module_get in
- * xenvif_connect.
- */
- need_module_put = 1;
+ vif->tx_irq = 0;
}
if (vif->task)
kthread_stop(vif->task);
+ xenvif_unmap_frontend_rings(vif);
+}
+
+void xenvif_free(struct xenvif *vif)
+{
netif_napi_del(&vif->napi);
unregister_netdev(vif->dev);
- xenvif_unmap_frontend_rings(vif);
-
free_netdev(vif->dev);
- if (need_module_put)
- module_put(THIS_MODULE);
+ module_put(THIS_MODULE);
}
return false;
}
+struct xenvif_count_slot_state {
+ unsigned long copy_off;
+ bool head;
+};
+
+unsigned int xenvif_count_frag_slots(struct xenvif *vif,
+ unsigned long offset, unsigned long size,
+ struct xenvif_count_slot_state *state)
+{
+ unsigned count = 0;
+
+ offset &= ~PAGE_MASK;
+
+ while (size > 0) {
+ unsigned long bytes;
+
+ bytes = PAGE_SIZE - offset;
+
+ if (bytes > size)
+ bytes = size;
+
+ if (start_new_rx_buffer(state->copy_off, bytes, state->head)) {
+ count++;
+ state->copy_off = 0;
+ }
+
+ if (state->copy_off + bytes > MAX_BUFFER_OFFSET)
+ bytes = MAX_BUFFER_OFFSET - state->copy_off;
+
+ state->copy_off += bytes;
+
+ offset += bytes;
+ size -= bytes;
+
+ if (offset == PAGE_SIZE)
+ offset = 0;
+
+ state->head = false;
+ }
+
+ return count;
+}
+
/*
* Figure out how many ring slots we're going to need to send @skb to
* the guest. This function is essentially a dry run of
*/
unsigned int xenvif_count_skb_slots(struct xenvif *vif, struct sk_buff *skb)
{
+ struct xenvif_count_slot_state state;
unsigned int count;
- int i, copy_off;
+ unsigned char *data;
+ unsigned i;
- count = DIV_ROUND_UP(skb_headlen(skb), PAGE_SIZE);
+ state.head = true;
+ state.copy_off = 0;
- copy_off = skb_headlen(skb) % PAGE_SIZE;
+ /* Slot for the first (partial) page of data. */
+ count = 1;
+ /* Need a slot for the GSO prefix for GSO extra data? */
if (skb_shinfo(skb)->gso_size)
count++;
- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
- unsigned long size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
- unsigned long offset = skb_shinfo(skb)->frags[i].page_offset;
- unsigned long bytes;
-
- offset &= ~PAGE_MASK;
-
- while (size > 0) {
- BUG_ON(offset >= PAGE_SIZE);
- BUG_ON(copy_off > MAX_BUFFER_OFFSET);
-
- bytes = PAGE_SIZE - offset;
-
- if (bytes > size)
- bytes = size;
+ data = skb->data;
+ while (data < skb_tail_pointer(skb)) {
+ unsigned long offset = offset_in_page(data);
+ unsigned long size = PAGE_SIZE - offset;
- if (start_new_rx_buffer(copy_off, bytes, 0)) {
- count++;
- copy_off = 0;
- }
+ if (data + size > skb_tail_pointer(skb))
+ size = skb_tail_pointer(skb) - data;
- if (copy_off + bytes > MAX_BUFFER_OFFSET)
- bytes = MAX_BUFFER_OFFSET - copy_off;
+ count += xenvif_count_frag_slots(vif, offset, size, &state);
- copy_off += bytes;
+ data += size;
+ }
- offset += bytes;
- size -= bytes;
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ unsigned long size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
+ unsigned long offset = skb_shinfo(skb)->frags[i].page_offset;
- if (offset == PAGE_SIZE)
- offset = 0;
- }
+ count += xenvif_count_frag_slots(vif, offset, size, &state);
}
return count;
}
struct backend_info {
struct xenbus_device *dev;
struct xenvif *vif;
+
+ /* This is the state that will be reflected in xenstore when any
+ * active hotplug script completes.
+ */
+ enum xenbus_state state;
+
enum xenbus_state frontend_state;
struct xenbus_watch hotplug_status_watch;
u8 have_hotplug_status_watch:1;
static void connect(struct backend_info *);
static void backend_create_xenvif(struct backend_info *be);
static void unregister_hotplug_status_watch(struct backend_info *be);
+static void set_backend_state(struct backend_info *be,
+ enum xenbus_state state);
static int netback_remove(struct xenbus_device *dev)
{
struct backend_info *be = dev_get_drvdata(&dev->dev);
+ set_backend_state(be, XenbusStateClosed);
+
unregister_hotplug_status_watch(be);
if (be->vif) {
kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE);
xenbus_rm(XBT_NIL, dev->nodename, "hotplug-status");
- xenvif_disconnect(be->vif);
+ xenvif_free(be->vif);
be->vif = NULL;
}
kfree(be);
if (err)
goto fail;
+ be->state = XenbusStateInitWait;
+
/* This kicks hotplug scripts, so do it immediately. */
backend_create_xenvif(be);
kobject_uevent(&dev->dev.kobj, KOBJ_ONLINE);
}
+static void backend_disconnect(struct backend_info *be)
+{
+ if (be->vif)
+ xenvif_disconnect(be->vif);
+}
-static void disconnect_backend(struct xenbus_device *dev)
+static void backend_connect(struct backend_info *be)
{
- struct backend_info *be = dev_get_drvdata(&dev->dev);
+ if (be->vif)
+ connect(be);
+}
- if (be->vif) {
- xenbus_rm(XBT_NIL, dev->nodename, "hotplug-status");
- xenvif_disconnect(be->vif);
- be->vif = NULL;
+static inline void backend_switch_state(struct backend_info *be,
+ enum xenbus_state state)
+{
+ struct xenbus_device *dev = be->dev;
+
+ pr_debug("%s -> %s\n", dev->nodename, xenbus_strstate(state));
+ be->state = state;
+
+ /* If we are waiting for a hotplug script then defer the
+ * actual xenbus state change.
+ */
+ if (!be->have_hotplug_status_watch)
+ xenbus_switch_state(dev, state);
+}
+
+/* Handle backend state transitions:
+ *
+ * The backend state starts in InitWait and the following transitions are
+ * allowed.
+ *
+ * InitWait -> Connected
+ *
+ *       ^    \         |
+ *       |     \        |
+ *       |      \       |
+ *       |       \      |
+ *       |        \     |
+ *       |         \    |
+ *       |          V   V
+ *
+ * Closed <-> Closing
+ *
+ * The state argument specifies the eventual state of the backend and the
+ * function transitions to that state via the shortest path.
+ */
+static void set_backend_state(struct backend_info *be,
+ enum xenbus_state state)
+{
+ while (be->state != state) {
+ switch (be->state) {
+ case XenbusStateClosed:
+ switch (state) {
+ case XenbusStateInitWait:
+ case XenbusStateConnected:
+ pr_info("%s: prepare for reconnect\n",
+ be->dev->nodename);
+ backend_switch_state(be, XenbusStateInitWait);
+ break;
+ case XenbusStateClosing:
+ backend_switch_state(be, XenbusStateClosing);
+ break;
+ default:
+ BUG();
+ }
+ break;
+ case XenbusStateInitWait:
+ switch (state) {
+ case XenbusStateConnected:
+ backend_connect(be);
+ backend_switch_state(be, XenbusStateConnected);
+ break;
+ case XenbusStateClosing:
+ case XenbusStateClosed:
+ backend_switch_state(be, XenbusStateClosing);
+ break;
+ default:
+ BUG();
+ }
+ break;
+ case XenbusStateConnected:
+ switch (state) {
+ case XenbusStateInitWait:
+ case XenbusStateClosing:
+ case XenbusStateClosed:
+ backend_disconnect(be);
+ backend_switch_state(be, XenbusStateClosing);
+ break;
+ default:
+ BUG();
+ }
+ break;
+ case XenbusStateClosing:
+ switch (state) {
+ case XenbusStateInitWait:
+ case XenbusStateConnected:
+ case XenbusStateClosed:
+ backend_switch_state(be, XenbusStateClosed);
+ break;
+ default:
+ BUG();
+ }
+ break;
+ default:
+ BUG();
+ }
}
}
{
struct backend_info *be = dev_get_drvdata(&dev->dev);
- pr_debug("frontend state %s\n", xenbus_strstate(frontend_state));
+ pr_debug("%s -> %s\n", dev->otherend, xenbus_strstate(frontend_state));
be->frontend_state = frontend_state;
switch (frontend_state) {
case XenbusStateInitialising:
- if (dev->state == XenbusStateClosed) {
- pr_info("%s: prepare for reconnect\n", dev->nodename);
- xenbus_switch_state(dev, XenbusStateInitWait);
- }
+ set_backend_state(be, XenbusStateInitWait);
break;
case XenbusStateInitialised:
break;
case XenbusStateConnected:
- if (dev->state == XenbusStateConnected)
- break;
- backend_create_xenvif(be);
- if (be->vif)
- connect(be);
+ set_backend_state(be, XenbusStateConnected);
break;
case XenbusStateClosing:
- if (be->vif)
- kobject_uevent(&dev->dev.kobj, KOBJ_OFFLINE);
- disconnect_backend(dev);
- xenbus_switch_state(dev, XenbusStateClosing);
+ set_backend_state(be, XenbusStateClosing);
break;
case XenbusStateClosed:
- xenbus_switch_state(dev, XenbusStateClosed);
+ set_backend_state(be, XenbusStateClosed);
if (xenbus_dev_is_online(dev))
break;
/* fall through if not online */
case XenbusStateUnknown:
+ set_backend_state(be, XenbusStateClosed);
device_unregister(&dev->dev);
break;
if (IS_ERR(str))
return;
if (len == sizeof("connected")-1 && !memcmp(str, "connected", len)) {
- xenbus_switch_state(be->dev, XenbusStateConnected);
+ /* Complete any pending state change */
+ xenbus_switch_state(be->dev, be->state);
+
/* Not interested in this watch anymore. */
unregister_hotplug_status_watch(be);
}
err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
hotplug_status_changed,
"%s/%s", dev->nodename, "hotplug-status");
- if (err) {
- /* Switch now, since we can't do a watch. */
- xenbus_switch_state(dev, XenbusStateConnected);
- } else {
+ if (!err)
be->have_hotplug_status_watch = 1;
- }
netif_wake_queue(be->vif->dev);
}
depends on MTD
def_bool y
-config OF_RESERVED_MEM
- depends on OF_FLATTREE && (DMA_CMA || (HAVE_GENERIC_DMA_COHERENT && HAVE_MEMBLOCK))
- def_bool y
- help
- Initialization code for DMA reserved memory
-
endmenu # OF
obj-$(CONFIG_OF_PCI) += of_pci.o
obj-$(CONFIG_OF_PCI_IRQ) += of_pci_irq.o
obj-$(CONFIG_OF_MTD) += of_mtd.o
-obj-$(CONFIG_OF_RESERVED_MEM) += of_reserved_mem.o
struct device_node *cpun, *cpus;
cpus = of_find_node_by_path("/cpus");
- if (!cpus) {
- pr_warn("Missing cpus node, bailing out\n");
+ if (!cpus)
return NULL;
- }
for_each_child_of_node(cpus, cpun) {
if (of_node_cmp(cpun->type, "cpu"))
#include <linux/string.h>
#include <linux/errno.h>
#include <linux/slab.h>
-#include <linux/random.h>
#include <asm/setup.h> /* for COMMAND_LINE_SIZE */
#ifdef CONFIG_PPC
}
#endif /* CONFIG_OF_EARLY_FLATTREE */
-
-/* Feed entire flattened device tree into the random pool */
-static int __init add_fdt_randomness(void)
-{
- if (initial_boot_params)
- add_device_randomness(initial_boot_params,
- be32_to_cpu(initial_boot_params->totalsize));
-
- return 0;
-}
-core_initcall(add_fdt_randomness);
+++ /dev/null
-/*
- * Device tree based initialization code for reserved memory.
- *
- * Copyright (c) 2013 Samsung Electronics Co., Ltd.
- * http://www.samsung.com
- * Author: Marek Szyprowski <m.szyprowski@samsung.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; either version 2 of the
- * License or (at your optional) any later version of the license.
- */
-
-#include <linux/memblock.h>
-#include <linux/err.h>
-#include <linux/of.h>
-#include <linux/of_fdt.h>
-#include <linux/of_platform.h>
-#include <linux/mm.h>
-#include <linux/sizes.h>
-#include <linux/mm_types.h>
-#include <linux/dma-contiguous.h>
-#include <linux/dma-mapping.h>
-#include <linux/of_reserved_mem.h>
-
-#define MAX_RESERVED_REGIONS 16
-struct reserved_mem {
- phys_addr_t base;
- unsigned long size;
- struct cma *cma;
- char name[32];
-};
-static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
-static int reserved_mem_count;
-
-static int __init fdt_scan_reserved_mem(unsigned long node, const char *uname,
- int depth, void *data)
-{
- struct reserved_mem *rmem = &reserved_mem[reserved_mem_count];
- phys_addr_t base, size;
- int is_cma, is_reserved;
- unsigned long len;
- const char *status;
- __be32 *prop;
-
- is_cma = IS_ENABLED(CONFIG_DMA_CMA) &&
- of_flat_dt_is_compatible(node, "linux,contiguous-memory-region");
- is_reserved = of_flat_dt_is_compatible(node, "reserved-memory-region");
-
- if (!is_reserved && !is_cma) {
- /* ignore node and scan next one */
- return 0;
- }
-
- status = of_get_flat_dt_prop(node, "status", &len);
- if (status && strcmp(status, "okay") != 0) {
- /* ignore disabled node nad scan next one */
- return 0;
- }
-
- prop = of_get_flat_dt_prop(node, "reg", &len);
- if (!prop || (len < (dt_root_size_cells + dt_root_addr_cells) *
- sizeof(__be32))) {
- pr_err("Reserved mem: node %s, incorrect \"reg\" property\n",
- uname);
- /* ignore node and scan next one */
- return 0;
- }
- base = dt_mem_next_cell(dt_root_addr_cells, &prop);
- size = dt_mem_next_cell(dt_root_size_cells, &prop);
-
- if (!size) {
- /* ignore node and scan next one */
- return 0;
- }
-
- pr_info("Reserved mem: found %s, memory base %lx, size %ld MiB\n",
- uname, (unsigned long)base, (unsigned long)size / SZ_1M);
-
- if (reserved_mem_count == ARRAY_SIZE(reserved_mem))
- return -ENOSPC;
-
- rmem->base = base;
- rmem->size = size;
- strlcpy(rmem->name, uname, sizeof(rmem->name));
-
- if (is_cma) {
- struct cma *cma;
- if (dma_contiguous_reserve_area(size, base, 0, &cma) == 0) {
- rmem->cma = cma;
- reserved_mem_count++;
- if (of_get_flat_dt_prop(node,
- "linux,default-contiguous-region",
- NULL))
- dma_contiguous_set_default(cma);
- }
- } else if (is_reserved) {
- if (memblock_remove(base, size) == 0)
- reserved_mem_count++;
- else
- pr_err("Failed to reserve memory for %s\n", uname);
- }
-
- return 0;
-}
-
-static struct reserved_mem *get_dma_memory_region(struct device *dev)
-{
- struct device_node *node;
- const char *name;
- int i;
-
- node = of_parse_phandle(dev->of_node, "memory-region", 0);
- if (!node)
- return NULL;
-
- name = kbasename(node->full_name);
- for (i = 0; i < reserved_mem_count; i++)
- if (strcmp(name, reserved_mem[i].name) == 0)
- return &reserved_mem[i];
- return NULL;
-}
-
-/**
- * of_reserved_mem_device_init() - assign reserved memory region to given device
- *
- * This function assign memory region pointed by "memory-region" device tree
- * property to the given device.
- */
-void of_reserved_mem_device_init(struct device *dev)
-{
- struct reserved_mem *region = get_dma_memory_region(dev);
- if (!region)
- return;
-
- if (region->cma) {
- dev_set_cma_area(dev, region->cma);
- pr_info("Assigned CMA %s to %s device\n", region->name,
- dev_name(dev));
- } else {
- if (dma_declare_coherent_memory(dev, region->base, region->base,
- region->size, DMA_MEMORY_MAP | DMA_MEMORY_EXCLUSIVE) != 0)
- pr_info("Declared reserved memory %s to %s device\n",
- region->name, dev_name(dev));
- }
-}
-
-/**
- * of_reserved_mem_device_release() - release reserved memory device structures
- *
- * This function releases structures allocated for memory region handling for
- * the given device.
- */
-void of_reserved_mem_device_release(struct device *dev)
-{
- struct reserved_mem *region = get_dma_memory_region(dev);
- if (!region && !region->cma)
- dma_release_declared_memory(dev);
-}
-
-/**
- * early_init_dt_scan_reserved_mem() - create reserved memory regions
- *
- * This function grabs memory from early allocator for device exclusive use
- * defined in device tree structures. It should be called by arch specific code
- * once the early allocator (memblock) has been activated and all other
- * subsystems have already allocated/reserved memory.
- */
-void __init early_init_dt_scan_reserved_mem(void)
-{
- of_scan_flat_dt_by_path("/memory/reserved-memory",
- fdt_scan_reserved_mem, NULL);
-}
#include <linux/of_device.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
-#include <linux/of_reserved_mem.h>
#include <linux/platform_device.h>
const struct of_device_id of_default_bus_match_table[] = {
dev->dev.bus = &platform_bus_type;
dev->dev.platform_data = platform_data;
- of_reserved_mem_device_init(&dev->dev);
-
/* We do not fill the DMA ops for platform devices by default.
* This is currently the responsibility of the platform code
* to do such, possibly using a device notifier
if (of_device_add(dev) != 0) {
platform_device_put(dev);
- of_reserved_mem_device_release(&dev->dev);
return NULL;
}
struct acpiphp_func *func;
int max, pass;
LIST_HEAD(add_list);
- int nr_found;
- nr_found = acpiphp_rescan_slot(slot);
+ acpiphp_rescan_slot(slot);
max = acpiphp_max_busnr(bus);
for (pass = 0; pass < 2; pass++) {
list_for_each_entry(dev, &bus->devices, bus_list) {
}
}
__pci_bus_assign_resources(bus, &add_list, NULL);
- /* Nothing more to do here if there are no new devices on this bus. */
- if (!nr_found && (slot->flags & SLOT_ENABLED))
- return;
acpiphp_sanitize_bus(bus);
acpiphp_set_hpp_values(bus);
/*
* This bridge should have been registered as a hotplug function
- * under its parent, so the context has to be there. If not, we
- * are in deep goo.
+ * under its parent, so the context should be there, unless the
+ * parent is going to be handled by pciehp, in which case this
+ * bridge is not interesting to us either.
*/
mutex_lock(&acpiphp_context_lock);
context = acpiphp_get_context(handle);
- if (WARN_ON(!context)) {
+ if (!context) {
mutex_unlock(&acpiphp_context_lock);
put_device(&bus->dev);
+ pci_dev_put(bridge->pci_dev);
kfree(bridge);
return;
}
if (event != ACPI_NOTIFY_DEVICE_WAKE || !pci_dev)
return;
+ if (pci_dev->pme_poll)
+ pci_dev->pme_poll = false;
+
if (pci_dev->current_state == PCI_D3cold) {
pci_wakeup_event(pci_dev);
pm_runtime_resume(&pci_dev->dev);
if (pci_dev->pme_support)
pci_check_pme_status(pci_dev);
- if (pci_dev->pme_poll)
- pci_dev->pme_poll = false;
-
pci_wakeup_event(pci_dev);
pm_runtime_resume(&pci_dev->dev);
pci_enable_bridge(dev->bus->self);
- if (pci_is_enabled(dev))
+ if (pci_is_enabled(dev)) {
+ if (!dev->is_busmaster) {
+ dev_warn(&dev->dev, "driver skip pci_set_master, fix it!\n");
+ pci_set_master(dev);
+ }
return;
+ }
+
retval = pci_enable_device(dev);
if (retval)
dev_err(&dev->dev, "Error enabling bridge (%d), continuing\n",
* <devicename> <state> <pinname> are values that should match the pinctrl-maps
 * <newvalue> reflects the new config and is driver dependent
*/
-static int pinconf_dbg_config_write(struct file *file,
+static ssize_t pinconf_dbg_config_write(struct file *file,
const char __user *user_buf, size_t count, loff_t *ppos)
{
struct pinctrl_maps *maps_node;
int i;
/* Get userspace string and assure termination */
- buf_size = min(count, (size_t)(sizeof(buf)-1));
+ buf_size = min(count, sizeof(buf) - 1);
if (copy_from_user(buf, user_buf, buf_size))
return -EFAULT;
buf[buf_size] = 0;
/* pin banks of s5pv210 pin-controller */
static struct samsung_pin_bank s5pv210_pin_bank[] = {
EXYNOS_PIN_BANK_EINTG(8, 0x000, "gpa0", 0x00),
- EXYNOS_PIN_BANK_EINTG(6, 0x020, "gpa1", 0x04),
+ EXYNOS_PIN_BANK_EINTG(4, 0x020, "gpa1", 0x04),
EXYNOS_PIN_BANK_EINTG(8, 0x040, "gpb", 0x08),
EXYNOS_PIN_BANK_EINTG(5, 0x060, "gpc0", 0x0c),
EXYNOS_PIN_BANK_EINTG(5, 0x080, "gpc1", 0x10),
EXYNOS_PIN_BANK_EINTG(4, 0x0a0, "gpd0", 0x14),
- EXYNOS_PIN_BANK_EINTG(4, 0x0c0, "gpd1", 0x18),
- EXYNOS_PIN_BANK_EINTG(5, 0x0e0, "gpe0", 0x1c),
- EXYNOS_PIN_BANK_EINTG(8, 0x100, "gpe1", 0x20),
- EXYNOS_PIN_BANK_EINTG(6, 0x120, "gpf0", 0x24),
+ EXYNOS_PIN_BANK_EINTG(6, 0x0c0, "gpd1", 0x18),
+ EXYNOS_PIN_BANK_EINTG(8, 0x0e0, "gpe0", 0x1c),
+ EXYNOS_PIN_BANK_EINTG(5, 0x100, "gpe1", 0x20),
+ EXYNOS_PIN_BANK_EINTG(8, 0x120, "gpf0", 0x24),
EXYNOS_PIN_BANK_EINTG(8, 0x140, "gpf1", 0x28),
EXYNOS_PIN_BANK_EINTG(8, 0x160, "gpf2", 0x2c),
- EXYNOS_PIN_BANK_EINTG(8, 0x180, "gpf3", 0x30),
+ EXYNOS_PIN_BANK_EINTG(6, 0x180, "gpf3", 0x30),
EXYNOS_PIN_BANK_EINTG(7, 0x1a0, "gpg0", 0x34),
EXYNOS_PIN_BANK_EINTG(7, 0x1c0, "gpg1", 0x38),
EXYNOS_PIN_BANK_EINTG(7, 0x1e0, "gpg2", 0x3c),
param = pinconf_to_config_param(configs[i]);
param_val = pinconf_to_config_argument(configs[i]);
+ if (param == PIN_CONFIG_BIAS_PULL_PIN_DEFAULT)
+ continue;
+
switch (param) {
- case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
- return 0;
case PIN_CONFIG_BIAS_DISABLE:
case PIN_CONFIG_BIAS_PULL_UP:
case PIN_CONFIG_BIAS_PULL_DOWN:
*
* Copyright (c) 2012-2013, NVIDIA CORPORATION. All rights reserved.
*
- * Arthur: Pritesh Raithatha <praithatha@nvidia.com>
+ * Author: Pritesh Raithatha <praithatha@nvidia.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
};
module_platform_driver(tegra114_pinctrl_driver);
-MODULE_ALIAS("platform:tegra114-pinctrl");
MODULE_AUTHOR("Pritesh Raithatha <praithatha@nvidia.com>");
-MODULE_DESCRIPTION("NVIDIA Tegra114 pincontrol driver");
+MODULE_DESCRIPTION("NVIDIA Tegra114 pinctrl driver");
MODULE_LICENSE("GPL v2");
depends on BACKLIGHT_CLASS_DEVICE
depends on RFKILL || RFKILL = n
depends on HOTPLUG_PCI
+ depends on ACPI_VIDEO || ACPI_VIDEO = n
select INPUT_SPARSEKMAP
select LEDS_CLASS
select NEW_LEDS
"default is -1 (automatic)");
#endif
-static int kbd_backlight = 1;
+static int kbd_backlight = -1;
module_param(kbd_backlight, int, 0444);
MODULE_PARM_DESC(kbd_backlight,
"set this to 0 to disable keyboard backlight, "
- "1 to enable it (default: 0)");
+ "1 to enable it (default: no change from current value)");
-static int kbd_backlight_timeout; /* = 0 */
+static int kbd_backlight_timeout = -1;
module_param(kbd_backlight_timeout, int, 0444);
MODULE_PARM_DESC(kbd_backlight_timeout,
- "set this to 0 to set the default 10 seconds timeout, "
- "1 for 30 seconds, 2 for 60 seconds and 3 to disable timeout "
- "(default: 0)");
+ "meaningful values vary from 0 to 3 and their meaning depends "
+ "on the model (default: no change from current value)");
#ifdef CONFIG_PM_SLEEP
static void sony_nc_kbd_backlight_resume(void);
if (!kbdbl_ctl)
return -ENOMEM;
+ kbdbl_ctl->mode = kbd_backlight;
+ kbdbl_ctl->timeout = kbd_backlight_timeout;
kbdbl_ctl->handle = handle;
if (handle == 0x0137)
kbdbl_ctl->base = 0x0C00;
if (ret)
goto outmode;
- __sony_nc_kbd_backlight_mode_set(kbd_backlight);
- __sony_nc_kbd_backlight_timeout_set(kbd_backlight_timeout);
+ __sony_nc_kbd_backlight_mode_set(kbdbl_ctl->mode);
+ __sony_nc_kbd_backlight_timeout_set(kbdbl_ctl->timeout);
return 0;
static void sony_nc_kbd_backlight_cleanup(struct platform_device *pd)
{
if (kbdbl_ctl) {
- int result;
-
device_remove_file(&pd->dev, &kbdbl_ctl->mode_attr);
device_remove_file(&pd->dev, &kbdbl_ctl->timeout_attr);
-
- /* restore the default hw behaviour */
- sony_call_snc_handle(kbdbl_ctl->handle,
- kbdbl_ctl->base | 0x10000, &result);
- sony_call_snc_handle(kbdbl_ctl->handle,
- kbdbl_ctl->base + 0x200, &result);
-
kfree(kbdbl_ctl);
kbdbl_ctl = NULL;
}
struct of_regulator_match **da9063_reg_matches)
{
da9063_reg_matches = NULL;
- return PTR_ERR(-ENODEV);
+ return ERR_PTR(-ENODEV);
}
#endif
#define SMPS_CTRL_MODE_ECO 0x02
#define SMPS_CTRL_MODE_PWM 0x03
-/* These values are derived from the data sheet. And are the number of steps
- * where there is a voltage change, the ranges at beginning and end of register
- * max/min values where there is no change are omitted.
- *
- * So they are basically (maxV-minV)/stepV
- */
-#define PALMAS_SMPS_NUM_VOLTAGES 117
+#define PALMAS_SMPS_NUM_VOLTAGES 122
#define PALMAS_SMPS10_NUM_VOLTAGES 2
#define PALMAS_LDO_NUM_VOLTAGES 50
pmic->desc[id].min_uV = 900000;
pmic->desc[id].uV_step = 50000;
pmic->desc[id].linear_min_sel = 1;
+ pmic->desc[id].enable_time = 500;
pmic->desc[id].vsel_reg =
PALMAS_BASE_TO_REG(PALMAS_LDO_BASE,
palmas_regs_info[id].vsel_addr);
pmic->desc[id].min_uV = 450000;
pmic->desc[id].uV_step = 25000;
}
+
+ /* LDO6 in vibrator mode will have enable time 2000us */
+ if (pdata && pdata->ldo6_vibrator &&
+ (id == PALMAS_REG_LDO6))
+ pmic->desc[id].enable_time = 2000;
} else {
pmic->desc[id].n_voltages = 1;
pmic->desc[id].ops = &palmas_ops_extreg;
ti_abb_rmw(regs->opp_sel_mask, info->opp_sel, regs->control_reg,
abb->base);
- /* program LDO VBB vset override if needed */
- if (abb->ldo_base)
+ /*
+ * program LDO VBB vset override if needed for !bypass mode
+ * XXX: Do not switch sequence - for !bypass, LDO override reset *must*
+ * be performed *before* switch to bias mode else VBB glitches.
+ */
+ if (abb->ldo_base && info->opp_sel != TI_ABB_NOMINAL_OPP)
ti_abb_program_ldovbb(dev, abb, info);
/* Initiate ABB ldo change */
if (ret)
goto out;
+ /*
+ * Reset LDO VBB vset override bypass mode
+ * XXX: Do not switch sequence - for bypass, LDO override reset *must*
+ * be performed *after* switch to bypass else VBB glitches.
+ */
+ if (abb->ldo_base && info->opp_sel == TI_ABB_NOMINAL_OPP)
+ ti_abb_program_ldovbb(dev, abb, info);
+
out:
return ret;
}
*/
static const struct regulator_linear_range wm831x_gp_ldo_ranges[] = {
- { .min_uV = 900000, .max_uV = 1650000, .min_sel = 0, .max_sel = 14,
+ { .min_uV = 900000, .max_uV = 1600000, .min_sel = 0, .max_sel = 14,
.uV_step = 50000 },
{ .min_uV = 1700000, .max_uV = 3300000, .min_sel = 15, .max_sel = 31,
.uV_step = 100000 },
*/
static const struct regulator_linear_range wm831x_aldo_ranges[] = {
- { .min_uV = 1000000, .max_uV = 1650000, .min_sel = 0, .max_sel = 12,
+ { .min_uV = 1000000, .max_uV = 1600000, .min_sel = 0, .max_sel = 12,
.uV_step = 50000 },
{ .min_uV = 1700000, .max_uV = 3500000, .min_sel = 13, .max_sel = 31,
.uV_step = 100000 },
}
static const struct regulator_linear_range wm8350_ldo_ranges[] = {
- { .min_uV = 900000, .max_uV = 1750000, .min_sel = 0, .max_sel = 15,
+ { .min_uV = 900000, .max_uV = 1650000, .min_sel = 0, .max_sel = 15,
.uV_step = 50000 },
{ .min_uV = 1800000, .max_uV = 3300000, .min_sel = 16, .max_sel = 31,
.uV_step = 100000 },
int intensity = 0;
int r0_perm;
int nr_tracks;
+ int use_prefix;
startdev = dasd_alias_get_start_dev(base);
if (!startdev)
intensity = fdata->intensity;
}
+ use_prefix = base_priv->features.feature[8] & 0x01;
+
switch (intensity) {
case 0x00: /* Normal format */
case 0x08: /* Normal format, use cdl. */
cplength = 2 + (rpt*nr_tracks);
- datasize = sizeof(struct PFX_eckd_data) +
- sizeof(struct LO_eckd_data) +
- rpt * nr_tracks * sizeof(struct eckd_count);
+ if (use_prefix)
+ datasize = sizeof(struct PFX_eckd_data) +
+ sizeof(struct LO_eckd_data) +
+ rpt * nr_tracks * sizeof(struct eckd_count);
+ else
+ datasize = sizeof(struct DE_eckd_data) +
+ sizeof(struct LO_eckd_data) +
+ rpt * nr_tracks * sizeof(struct eckd_count);
break;
case 0x01: /* Write record zero and format track. */
case 0x09: /* Write record zero and format track, use cdl. */
cplength = 2 + rpt * nr_tracks;
- datasize = sizeof(struct PFX_eckd_data) +
- sizeof(struct LO_eckd_data) +
- sizeof(struct eckd_count) +
- rpt * nr_tracks * sizeof(struct eckd_count);
+ if (use_prefix)
+ datasize = sizeof(struct PFX_eckd_data) +
+ sizeof(struct LO_eckd_data) +
+ sizeof(struct eckd_count) +
+ rpt * nr_tracks * sizeof(struct eckd_count);
+ else
+ datasize = sizeof(struct DE_eckd_data) +
+ sizeof(struct LO_eckd_data) +
+ sizeof(struct eckd_count) +
+ rpt * nr_tracks * sizeof(struct eckd_count);
break;
case 0x04: /* Invalidate track. */
case 0x0c: /* Invalidate track, use cdl. */
cplength = 3;
- datasize = sizeof(struct PFX_eckd_data) +
- sizeof(struct LO_eckd_data) +
- sizeof(struct eckd_count);
+ if (use_prefix)
+ datasize = sizeof(struct PFX_eckd_data) +
+ sizeof(struct LO_eckd_data) +
+ sizeof(struct eckd_count);
+ else
+ datasize = sizeof(struct DE_eckd_data) +
+ sizeof(struct LO_eckd_data) +
+ sizeof(struct eckd_count);
break;
default:
dev_warn(&startdev->cdev->dev,
switch (intensity & ~0x08) {
case 0x00: /* Normal format. */
- prefix(ccw++, (struct PFX_eckd_data *) data,
- fdata->start_unit, fdata->stop_unit,
- DASD_ECKD_CCW_WRITE_CKD, base, startdev);
- /* grant subsystem permission to format R0 */
- if (r0_perm)
- ((struct PFX_eckd_data *)data)
- ->define_extent.ga_extended |= 0x04;
- data += sizeof(struct PFX_eckd_data);
+ if (use_prefix) {
+ prefix(ccw++, (struct PFX_eckd_data *) data,
+ fdata->start_unit, fdata->stop_unit,
+ DASD_ECKD_CCW_WRITE_CKD, base, startdev);
+ /* grant subsystem permission to format R0 */
+ if (r0_perm)
+ ((struct PFX_eckd_data *)data)
+ ->define_extent.ga_extended |= 0x04;
+ data += sizeof(struct PFX_eckd_data);
+ } else {
+ define_extent(ccw++, (struct DE_eckd_data *) data,
+ fdata->start_unit, fdata->stop_unit,
+ DASD_ECKD_CCW_WRITE_CKD, startdev);
+ /* grant subsystem permission to format R0 */
+ if (r0_perm)
+ ((struct DE_eckd_data *) data)
+ ->ga_extended |= 0x04;
+ data += sizeof(struct DE_eckd_data);
+ }
ccw[-1].flags |= CCW_FLAG_CC;
locate_record(ccw++, (struct LO_eckd_data *) data,
fdata->start_unit, 0, rpt*nr_tracks,
data += sizeof(struct LO_eckd_data);
break;
case 0x01: /* Write record zero + format track. */
- prefix(ccw++, (struct PFX_eckd_data *) data,
- fdata->start_unit, fdata->stop_unit,
- DASD_ECKD_CCW_WRITE_RECORD_ZERO,
- base, startdev);
- data += sizeof(struct PFX_eckd_data);
+ if (use_prefix) {
+ prefix(ccw++, (struct PFX_eckd_data *) data,
+ fdata->start_unit, fdata->stop_unit,
+ DASD_ECKD_CCW_WRITE_RECORD_ZERO,
+ base, startdev);
+ data += sizeof(struct PFX_eckd_data);
+ } else {
+ define_extent(ccw++, (struct DE_eckd_data *) data,
+ fdata->start_unit, fdata->stop_unit,
+ DASD_ECKD_CCW_WRITE_RECORD_ZERO, startdev);
+ data += sizeof(struct DE_eckd_data);
+ }
ccw[-1].flags |= CCW_FLAG_CC;
locate_record(ccw++, (struct LO_eckd_data *) data,
fdata->start_unit, 0, rpt * nr_tracks + 1,
data += sizeof(struct LO_eckd_data);
break;
case 0x04: /* Invalidate track. */
- prefix(ccw++, (struct PFX_eckd_data *) data,
- fdata->start_unit, fdata->stop_unit,
- DASD_ECKD_CCW_WRITE_CKD, base, startdev);
- data += sizeof(struct PFX_eckd_data);
+ if (use_prefix) {
+ prefix(ccw++, (struct PFX_eckd_data *) data,
+ fdata->start_unit, fdata->stop_unit,
+ DASD_ECKD_CCW_WRITE_CKD, base, startdev);
+ data += sizeof(struct PFX_eckd_data);
+ } else {
+ define_extent(ccw++, (struct DE_eckd_data *) data,
+ fdata->start_unit, fdata->stop_unit,
+ DASD_ECKD_CCW_WRITE_CKD, startdev);
+ data += sizeof(struct DE_eckd_data);
+ }
ccw[-1].flags |= CCW_FLAG_CC;
locate_record(ccw++, (struct LO_eckd_data *) data,
fdata->start_unit, 0, 1,
timeout = 0;
if (timer_pending(&sclp_request_timer)) {
/* Get timeout TOD value */
- timeout = get_tod_clock() +
+ timeout = get_tod_clock_fast() +
sclp_tod_from_jiffies(sclp_request_timer.expires -
jiffies);
}
while (sclp_running_state != sclp_running_state_idle) {
/* Check for expired request timer */
if (timer_pending(&sclp_request_timer) &&
- get_tod_clock() > timeout &&
+ get_tod_clock_fast() > timeout &&
del_timer(&sclp_request_timer))
sclp_request_timer.function(sclp_request_timer.data);
cpu_relax();
if (sccb->header.response_code != 0x20)
return 0;
- if (sccb->sclp_send_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK))
- return 1;
- return 0;
+ if (!(sccb->sclp_send_mask & (EVTYP_OPCMD_MASK | EVTYP_PMSGCMD_MASK)))
+ return 0;
+ if (!(sccb->sclp_receive_mask & (EVTYP_MSG_MASK | EVTYP_PMSGCMD_MASK)))
+ return 0;
+ return 1;
}
bool __init sclp_has_vt220(void)
struct winsize ws;
screen = tty3270_alloc_screen(tp->n_rows, tp->n_cols);
- if (!screen)
+ if (IS_ERR(screen))
return;
/* Switch to new output size */
spin_lock_bh(&tp->view.lock);
int ret;
dev_num = iminor(inode);
- if (dev_num > MAXMINOR)
+ if (dev_num >= MAXMINOR)
return -ENODEV;
logptr = &sys_ser[dev_num];
atomic_inc(&chpid_reset_count);
}
/* Wait for machine check for all channel paths. */
- timeout = get_tod_clock() + (RCHP_TIMEOUT << 12);
+ timeout = get_tod_clock_fast() + (RCHP_TIMEOUT << 12);
while (atomic_read(&chpid_reset_count) != 0) {
- if (get_tod_clock() > timeout)
+ if (get_tod_clock_fast() > timeout)
break;
cpu_relax();
}
retries++;
if (!start_time) {
- start_time = get_tod_clock();
+ start_time = get_tod_clock_fast();
goto again;
}
- if ((get_tod_clock() - start_time) < QDIO_BUSY_BIT_PATIENCE)
+ if (get_tod_clock_fast() - start_time < QDIO_BUSY_BIT_PATIENCE)
goto again;
}
if (retries) {
int count, stop;
unsigned char state = 0;
- q->timestamp = get_tod_clock();
+ q->timestamp = get_tod_clock_fast();
/*
* Don't check 128 buffers, as otherwise qdio_inbound_q_moved
* At this point we know, that inbound first_to_check
* has (probably) not moved (see qdio_inbound_processing).
*/
- if (get_tod_clock() > q->u.in.timestamp + QDIO_INPUT_THRESHOLD) {
+ if (get_tod_clock_fast() > q->u.in.timestamp + QDIO_INPUT_THRESHOLD) {
DBF_DEV_EVENT(DBF_INFO, q->irq_ptr, "in done:%02x",
q->first_to_check);
return 1;
int count, stop;
unsigned char state = 0;
- q->timestamp = get_tod_clock();
+ q->timestamp = get_tod_clock_fast();
if (need_siga_sync(q))
if (((queue_type(q) != QDIO_IQDIO_QFMT) &&
while ((pci_device = pci_get_device(PCI_VENDOR_ID_BUSLOGIC,
PCI_DEVICE_ID_BUSLOGIC_MULTIMASTER,
pci_device)) != NULL) {
- struct blogic_adapter *adapter = adapter;
+ struct blogic_adapter *host_adapter = adapter;
struct blogic_adapter_info adapter_info;
enum blogic_isa_ioport mod_ioaddr_req;
unsigned char bus;
known and enabled, note that the particular Standard ISA I/O
Address should not be probed.
*/
- adapter->io_addr = io_addr;
- blogic_intreset(adapter);
- if (blogic_cmd(adapter, BLOGIC_INQ_PCI_INFO, NULL, 0,
+ host_adapter->io_addr = io_addr;
+ blogic_intreset(host_adapter);
+ if (blogic_cmd(host_adapter, BLOGIC_INQ_PCI_INFO, NULL, 0,
&adapter_info, sizeof(adapter_info)) ==
sizeof(adapter_info)) {
if (adapter_info.isa_port < 6)
I/O Address assigned at system initialization.
*/
mod_ioaddr_req = BLOGIC_IO_DISABLE;
- blogic_cmd(adapter, BLOGIC_MOD_IOADDR, &mod_ioaddr_req,
+ blogic_cmd(host_adapter, BLOGIC_MOD_IOADDR, &mod_ioaddr_req,
sizeof(mod_ioaddr_req), NULL, 0);
/*
For the first MultiMaster Host Adapter enumerated,
fetch_localram.offset = BLOGIC_AUTOSCSI_BASE + 45;
fetch_localram.count = sizeof(autoscsi_byte45);
- blogic_cmd(adapter, BLOGIC_FETCH_LOCALRAM,
+ blogic_cmd(host_adapter, BLOGIC_FETCH_LOCALRAM,
&fetch_localram, sizeof(fetch_localram),
&autoscsi_byte45,
sizeof(autoscsi_byte45));
- blogic_cmd(adapter, BLOGIC_GET_BOARD_ID, NULL, 0, &id,
- sizeof(id));
+ blogic_cmd(host_adapter, BLOGIC_GET_BOARD_ID, NULL, 0,
+ &id, sizeof(id));
if (id.fw_ver_digit1 == '5')
force_scan_order =
autoscsi_byte45.force_scan_order;
static int aac_compat_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
{
struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
+ if (!capable(CAP_SYS_RAWIO))
+ return -EPERM;
return aac_compat_do_ioctl(dev, cmd, (unsigned long)arg);
}
#define BNX2FC_RQ_WQE_SIZE (BNX2FC_RQ_BUF_SZ)
#define BNX2FC_XFERQ_WQE_SIZE (sizeof(struct fcoe_xfrqe))
#define BNX2FC_CONFQ_WQE_SIZE (sizeof(struct fcoe_confqe))
-#define BNX2FC_5771X_DB_PAGE_SIZE 128
+#define BNX2X_DB_SHIFT 3
#define BNX2FC_TASK_SIZE 128
#define BNX2FC_TASKS_PER_PAGE (PAGE_SIZE/BNX2FC_TASK_SIZE)
reg_base = pci_resource_start(hba->pcidev,
BNX2X_DOORBELL_PCI_BAR);
- reg_off = BNX2FC_5771X_DB_PAGE_SIZE *
- (context_id & 0x1FFFF) + DPM_TRIGER_TYPE;
+ reg_off = (1 << BNX2X_DB_SHIFT) * (context_id & 0x1FFFF);
tgt->ctx_base = ioremap_nocache(reg_base + reg_off, 4);
if (!tgt->ctx_base)
return -ENOMEM;
#define MAX_PAGES_PER_CTRL_STRUCT_POOL 8
#define BNX2I_RESERVED_SLOW_PATH_CMD_SLOTS 4
-#define BNX2I_5771X_DBELL_PAGE_SIZE 128
+#define BNX2X_DB_SHIFT 3
/* 5706/08 hardware has limit on maximum buffer size per BD it can handle */
#define MAX_BD_LENGTH 65535
if (test_bit(BNX2I_NX2_DEV_57710, &ep->hba->cnic_dev_type)) {
reg_base = pci_resource_start(ep->hba->pcidev,
BNX2X_DOORBELL_PCI_BAR);
- reg_off = BNX2I_5771X_DBELL_PAGE_SIZE * (cid_num & 0x1FFFF) +
- DPM_TRIGER_TYPE;
+ reg_off = (1 << BNX2X_DB_SHIFT) * (cid_num & 0x1FFFF);
ep->qp.ctx_base = ioremap_nocache(reg_base + reg_off, 4);
goto arm_cq;
}
* | Device Discovery | 0x2095 | 0x2020-0x2022, |
* | | | 0x2011-0x2012, |
* | | | 0x2016 |
- * | Queue Command and IO tracing | 0x3058 | 0x3006-0x300b |
+ * | Queue Command and IO tracing | 0x3059 | 0x3006-0x300b |
* | | | 0x3027-0x3028 |
* | | | 0x303d-0x3041 |
* | | | 0x302d,0x3033 |
que = MSW(sts->handle);
req = ha->req_q_map[que];
+ /* Check for invalid queue pointer */
+ if (req == NULL ||
+ que >= find_first_zero_bit(ha->req_qid_map, ha->max_req_queues)) {
+ ql_dbg(ql_dbg_io, vha, 0x3059,
+ "Invalid status handle (0x%x): Bad req pointer. req=%p, "
+ "que=%u.\n", sts->handle, req, que);
+ return;
+ }
+
/* Validate handle. */
if (handle < req->num_outstanding_cmds)
sp = req->outstanding_cmds[handle];
gd->events |= DISK_EVENT_MEDIA_CHANGE;
}
+ blk_pm_runtime_init(sdp->request_queue, dev);
add_disk(gd);
if (sdkp->capacity)
sd_dif_config_host(sdkp);
sd_printk(KERN_NOTICE, sdkp, "Attached SCSI %sdisk\n",
sdp->removable ? "removable " : "");
- blk_pm_runtime_init(sdp->request_queue, dev);
scsi_autopm_put_device(sdp);
put_device(&sdkp->dev);
}
static int sg_add(struct device *, struct class_interface *);
static void sg_remove(struct device *, struct class_interface *);
+static DEFINE_SPINLOCK(sg_open_exclusive_lock);
+
static DEFINE_IDR(sg_index_idr);
-static DEFINE_RWLOCK(sg_index_lock);
+static DEFINE_RWLOCK(sg_index_lock); /* Also used to lock
+ file descriptor list for device */
static struct class_interface sg_interface = {
.add_dev = sg_add,
} Sg_request;
typedef struct sg_fd { /* holds the state of a file descriptor */
- struct list_head sfd_siblings; /* protected by sfd_lock of device */
+ /* sfd_siblings is protected by sg_index_lock */
+ struct list_head sfd_siblings;
struct sg_device *parentdp; /* owning device */
wait_queue_head_t read_wait; /* queue read until command done */
rwlock_t rq_list_lock; /* protect access to list in req_arr */
typedef struct sg_device { /* holds the state of each scsi generic device */
struct scsi_device *device;
+ wait_queue_head_t o_excl_wait; /* queue open() when O_EXCL in use */
int sg_tablesize; /* adapter's max scatter-gather table size */
u32 index; /* device index number */
- spinlock_t sfd_lock; /* protect file descriptor list for device */
+ /* sfds is protected by sg_index_lock */
struct list_head sfds;
- struct rw_semaphore o_sem; /* exclude open should hold this rwsem */
volatile char detached; /* 0->attached, 1->detached pending removal */
+ /* exclude protected by sg_open_exclusive_lock */
char exclude; /* opened for exclusive access */
char sgdebug; /* 0->off, 1->sense, 9->dump dev, 10-> all devs */
struct gendisk *disk;
return blk_verify_command(cmd, filp->f_mode & FMODE_WRITE);
}
+static int get_exclude(Sg_device *sdp)
+{
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&sg_open_exclusive_lock, flags);
+ ret = sdp->exclude;
+ spin_unlock_irqrestore(&sg_open_exclusive_lock, flags);
+ return ret;
+}
+
+static int set_exclude(Sg_device *sdp, char val)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&sg_open_exclusive_lock, flags);
+ sdp->exclude = val;
+ spin_unlock_irqrestore(&sg_open_exclusive_lock, flags);
+ return val;
+}
+
static int sfds_list_empty(Sg_device *sdp)
{
unsigned long flags;
int ret;
- spin_lock_irqsave(&sdp->sfd_lock, flags);
+ read_lock_irqsave(&sg_index_lock, flags);
ret = list_empty(&sdp->sfds);
- spin_unlock_irqrestore(&sdp->sfd_lock, flags);
+ read_unlock_irqrestore(&sg_index_lock, flags);
return ret;
}
struct request_queue *q;
Sg_device *sdp;
Sg_fd *sfp;
+ int res;
int retval;
nonseekable_open(inode, filp);
goto error_out;
}
- if ((flags & O_EXCL) && (O_RDONLY == (flags & O_ACCMODE))) {
- retval = -EPERM; /* Can't lock it with read only access */
- goto error_out;
- }
- if (flags & O_NONBLOCK) {
- if (flags & O_EXCL) {
- if (!down_write_trylock(&sdp->o_sem)) {
- retval = -EBUSY;
- goto error_out;
- }
- } else {
- if (!down_read_trylock(&sdp->o_sem)) {
- retval = -EBUSY;
- goto error_out;
- }
+ if (flags & O_EXCL) {
+ if (O_RDONLY == (flags & O_ACCMODE)) {
+ retval = -EPERM; /* Can't lock it with read only access */
+ goto error_out;
+ }
+ if (!sfds_list_empty(sdp) && (flags & O_NONBLOCK)) {
+ retval = -EBUSY;
+ goto error_out;
+ }
+ res = wait_event_interruptible(sdp->o_excl_wait,
+ ((!sfds_list_empty(sdp) || get_exclude(sdp)) ? 0 : set_exclude(sdp, 1)));
+ if (res) {
+ retval = res; /* -ERESTARTSYS because signal hit process */
+ goto error_out;
+ }
+ } else if (get_exclude(sdp)) { /* some other fd has an exclusive lock on dev */
+ if (flags & O_NONBLOCK) {
+ retval = -EBUSY;
+ goto error_out;
+ }
+ res = wait_event_interruptible(sdp->o_excl_wait, !get_exclude(sdp));
+ if (res) {
+ retval = res; /* -ERESTARTSYS because signal hit process */
+ goto error_out;
}
- } else {
- if (flags & O_EXCL)
- down_write(&sdp->o_sem);
- else
- down_read(&sdp->o_sem);
}
- /* Since write lock is held, no need to check sfd_list */
- if (flags & O_EXCL)
- sdp->exclude = 1; /* used by release lock */
-
+ if (sdp->detached) {
+ retval = -ENODEV;
+ goto error_out;
+ }
if (sfds_list_empty(sdp)) { /* no existing opens on this device */
sdp->sgdebug = 0;
q = sdp->device->request_queue;
sdp->sg_tablesize = queue_max_segments(q);
}
- sfp = sg_add_sfp(sdp, dev);
- if (!IS_ERR(sfp))
+ if ((sfp = sg_add_sfp(sdp, dev)))
filp->private_data = sfp;
- /* retval is already provably zero at this point because of the
- * check after retval = scsi_autopm_get_device(sdp->device))
- */
else {
- retval = PTR_ERR(sfp);
-
if (flags & O_EXCL) {
- sdp->exclude = 0; /* undo if error */
- up_write(&sdp->o_sem);
- } else
- up_read(&sdp->o_sem);
+ set_exclude(sdp, 0); /* undo if error */
+ wake_up_interruptible(&sdp->o_excl_wait);
+ }
+ retval = -ENOMEM;
+ goto error_out;
+ }
+ retval = 0;
error_out:
+ if (retval) {
scsi_autopm_put_device(sdp->device);
sdp_put:
scsi_device_put(sdp->device);
{
Sg_device *sdp;
Sg_fd *sfp;
- int excl;
if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp)))
return -ENXIO;
SCSI_LOG_TIMEOUT(3, printk("sg_release: %s\n", sdp->disk->disk_name));
- excl = sdp->exclude;
- sdp->exclude = 0;
- if (excl)
- up_write(&sdp->o_sem);
- else
- up_read(&sdp->o_sem);
+ set_exclude(sdp, 0);
+ wake_up_interruptible(&sdp->o_excl_wait);
scsi_autopm_put_device(sdp->device);
kref_put(&sfp->f_ref, sg_remove_sfp);
disk->first_minor = k;
sdp->disk = disk;
sdp->device = scsidp;
- spin_lock_init(&sdp->sfd_lock);
INIT_LIST_HEAD(&sdp->sfds);
- init_rwsem(&sdp->o_sem);
+ init_waitqueue_head(&sdp->o_excl_wait);
sdp->sg_tablesize = queue_max_segments(q);
sdp->index = k;
kref_init(&sdp->d_ref);
/* Need a write lock to set sdp->detached. */
write_lock_irqsave(&sg_index_lock, iflags);
- spin_lock(&sdp->sfd_lock);
sdp->detached = 1;
list_for_each_entry(sfp, &sdp->sfds, sfd_siblings) {
wake_up_interruptible(&sfp->read_wait);
kill_fasync(&sfp->async_qp, SIGPOLL, POLL_HUP);
}
- spin_unlock(&sdp->sfd_lock);
write_unlock_irqrestore(&sg_index_lock, iflags);
sysfs_remove_link(&scsidp->sdev_gendev.kobj, "generic");
sfp = kzalloc(sizeof(*sfp), GFP_ATOMIC | __GFP_NOWARN);
if (!sfp)
- return ERR_PTR(-ENOMEM);
+ return NULL;
init_waitqueue_head(&sfp->read_wait);
rwlock_init(&sfp->rq_list_lock);
sfp->cmd_q = SG_DEF_COMMAND_Q;
sfp->keep_orphan = SG_DEF_KEEP_ORPHAN;
sfp->parentdp = sdp;
- spin_lock_irqsave(&sdp->sfd_lock, iflags);
- if (sdp->detached) {
- spin_unlock_irqrestore(&sdp->sfd_lock, iflags);
- return ERR_PTR(-ENODEV);
- }
+ write_lock_irqsave(&sg_index_lock, iflags);
list_add_tail(&sfp->sfd_siblings, &sdp->sfds);
- spin_unlock_irqrestore(&sdp->sfd_lock, iflags);
+ write_unlock_irqrestore(&sg_index_lock, iflags);
SCSI_LOG_TIMEOUT(3, printk("sg_add_sfp: sfp=0x%p\n", sfp));
if (unlikely(sg_big_buff != def_reserved_size))
sg_big_buff = def_reserved_size;
struct sg_device *sdp = sfp->parentdp;
unsigned long iflags;
- spin_lock_irqsave(&sdp->sfd_lock, iflags);
+ write_lock_irqsave(&sg_index_lock, iflags);
list_del(&sfp->sfd_siblings);
- spin_unlock_irqrestore(&sdp->sfd_lock, iflags);
+ write_unlock_irqrestore(&sg_index_lock, iflags);
+ wake_up_interruptible(&sdp->o_excl_wait);
INIT_WORK(&sfp->ew.work, sg_remove_sfp_usercontext);
schedule_work(&sfp->ew.work);
return 0;
}
-/* must be called while holding sg_index_lock and sfd_lock */
+/* must be called while holding sg_index_lock */
static void sg_proc_debug_helper(struct seq_file *s, Sg_device * sdp)
{
int k, m, new_interface, blen, usg;
read_lock_irqsave(&sg_index_lock, iflags);
sdp = it ? sg_lookup_dev(it->index) : NULL;
- if (sdp) {
- spin_lock(&sdp->sfd_lock);
- if (!list_empty(&sdp->sfds)) {
- struct scsi_device *scsidp = sdp->device;
+ if (sdp && !list_empty(&sdp->sfds)) {
+ struct scsi_device *scsidp = sdp->device;
- seq_printf(s, " >>> device=%s ", sdp->disk->disk_name);
- if (sdp->detached)
- seq_printf(s, "detached pending close ");
- else
- seq_printf
- (s, "scsi%d chan=%d id=%d lun=%d em=%d",
- scsidp->host->host_no,
- scsidp->channel, scsidp->id,
- scsidp->lun,
- scsidp->host->hostt->emulated);
- seq_printf(s, " sg_tablesize=%d excl=%d\n",
- sdp->sg_tablesize, sdp->exclude);
- sg_proc_debug_helper(s, sdp);
- }
- spin_unlock(&sdp->sfd_lock);
+ seq_printf(s, " >>> device=%s ", sdp->disk->disk_name);
+ if (sdp->detached)
+ seq_printf(s, "detached pending close ");
+ else
+ seq_printf
+ (s, "scsi%d chan=%d id=%d lun=%d em=%d",
+ scsidp->host->host_no,
+ scsidp->channel, scsidp->id,
+ scsidp->lun,
+ scsidp->host->hostt->emulated);
+ seq_printf(s, " sg_tablesize=%d excl=%d\n",
+ sdp->sg_tablesize, get_exclude(sdp));
+ sg_proc_debug_helper(s, sdp);
}
read_unlock_irqrestore(&sg_index_lock, iflags);
return 0;
/* Initialize the hardware */
ret = clk_prepare_enable(clk);
if (ret)
- goto out_unmap_regs;
+ goto out_free_irq;
spi_writel(as, CR, SPI_BIT(SWRST));
spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */
if (as->caps.has_wdrbt) {
spi_writel(as, CR, SPI_BIT(SWRST));
spi_writel(as, CR, SPI_BIT(SWRST)); /* AT91SAM9263 Rev B workaround */
clk_disable_unprepare(clk);
+out_free_irq:
free_irq(irq, master);
out_unmap_regs:
iounmap(as->regs);
dev_name(&pdev->dev), hw);
if (ret) {
dev_err(&pdev->dev, "Can't request IRQ\n");
- clk_put(hw->spi_clk);
goto clk_out;
}
gpio_free(hw->chipselect[i]);
spi_master_put(master);
- kfree(master);
return ret;
}
gpio_free(hw->chipselect[i]);
spi_unregister_master(master);
- kfree(master);
return 0;
}
master->bus_num = bus_num;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!res) {
- dev_err(&pdev->dev, "can't get platform resource\n");
- ret = -EINVAL;
- goto out_master_put;
- }
-
dspi->base = devm_ioremap_resource(&pdev->dev, res);
- if (!dspi->base) {
- ret = -EINVAL;
+ if (IS_ERR(dspi->base)) {
+ ret = PTR_ERR(dspi->base);
goto out_master_put;
}
psc_num = master->bus_num;
snprintf(clk_name, sizeof(clk_name), "psc%d_mclk", psc_num);
clk = devm_clk_get(dev, clk_name);
- if (IS_ERR(clk))
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
goto free_irq;
+ }
ret = clk_prepare_enable(clk);
if (ret)
goto free_irq;
if (pm_runtime_suspended(&drv_data->pdev->dev))
return IRQ_NONE;
- sccr1_reg = read_SSCR1(reg);
+ /*
+ * If the device is not yet in RPM suspended state and we get an
+ * interrupt that is meant for another device, check if status bits
+ * are all set to one. That means that the device is already
+ * powered off.
+ */
status = read_SSSR(reg);
+ if (status == ~0)
+ return IRQ_NONE;
+
+ sccr1_reg = read_SSCR1(reg);
/* Ignore possible writes if we don't need to write */
if (!(sccr1_reg & SSCR1_TIE))
S3C64XX_SPI_INT_TX_OVERRUN_EN | S3C64XX_SPI_INT_TX_UNDERRUN_EN,
sdd->regs + S3C64XX_SPI_INT_EN);
+ pm_runtime_enable(&pdev->dev);
+
if (spi_register_master(master)) {
dev_err(&pdev->dev, "cannot register SPI master\n");
ret = -EBUSY;
mem_res,
sdd->rx_dma.dmach, sdd->tx_dma.dmach);
- pm_runtime_enable(&pdev->dev);
-
return 0;
err3:
goto error1;
}
+ pm_runtime_enable(&pdev->dev);
+
master->num_chipselect = 1;
master->bus_num = pdev->id;
master->setup = hspi_setup;
goto error1;
}
- pm_runtime_enable(&pdev->dev);
-
return 0;
error1:
BCM_DEBUG_PRINT(Adapter, DBG_TYPE_OTHERS, OSAL_DBG, DBG_LVL_ALL, "Called IOCTL_BCM_GET_DEVICE_DRIVER_INFO\n");
+ memset(&DevInfo, 0, sizeof(DevInfo));
DevInfo.MaxRDMBufferSize = BUFFER_4K;
DevInfo.u32DSDStartOffset = EEPROM_CALPARAM_START;
DevInfo.u32RxAlignmentCorrection = 0;
To compile this driver as a module, choose M here: the module will be
called skel.
+config COMEDI_SSV_DNP
+ tristate "SSV Embedded Systems DIL/Net-PC support"
+ depends on X86_32 || COMPILE_TEST
+ ---help---
+ Enable support for SSV Embedded Systems DIL/Net-PC
+
+ To compile this driver as a module, choose M here: the module will be
+ called ssv_dnp.
+
endif # COMEDI_MISC_DRIVERS
menuconfig COMEDI_ISA_DRIVERS
To compile this driver as a module, choose M here: the module will be
called dmm32at.
+config COMEDI_UNIOXX5
+ tristate "Fastwel UNIOxx-5 analog and digital io board support"
+ ---help---
+ Enable support for Fastwel UNIOxx-5 (analog and digital i/o) boards
+
+ To compile this driver as a module, choose M here: the module will be
+ called unioxx5.
+
config COMEDI_FL512
tristate "FL512 ISA card support"
---help---
To compile this driver as a module, choose M here: the module will be
called dyna_pci10xx.
-config COMEDI_UNIOXX5
- tristate "Fastwel UNIOxx-5 analog and digital io board support"
- ---help---
- Enable support for Fastwel UNIOxx-5 (analog and digital i/o) boards
-
- To compile this driver as a module, choose M here: the module will be
- called unioxx5.
-
config COMEDI_GSC_HPDI
tristate "General Standards PCI-HPDI32 / PMC-HPDI32 support"
select COMEDI_FC
To compile this driver as a module, choose M here: the module will be
called s626.
-config COMEDI_SSV_DNP
- tristate "SSV Embedded Systems DIL/Net-PC support"
- ---help---
- Enable support for SSV Embedded Systems DIL/Net-PC
-
- To compile this driver as a module, choose M here: the module will be
- called ssv_dnp.
-
config COMEDI_MITE
depends on HAS_DMA
tristate
{
const struct ni_65xx_board *board = comedi_board(dev);
struct ni_65xx_private *devpriv = dev->private;
- unsigned base_bitfield_channel;
- const unsigned max_ports_per_bitfield = 5;
+ int base_bitfield_channel;
unsigned read_bits = 0;
- unsigned j;
+ int last_port_offset = ni_65xx_port_by_channel(s->n_chan - 1);
+ int port_offset;
base_bitfield_channel = CR_CHAN(insn->chanspec);
- for (j = 0; j < max_ports_per_bitfield; ++j) {
- const unsigned port_offset =
- ni_65xx_port_by_channel(base_bitfield_channel) + j;
- const unsigned port =
- sprivate(s)->base_port + port_offset;
- unsigned base_port_channel;
+ for (port_offset = ni_65xx_port_by_channel(base_bitfield_channel);
+ port_offset <= last_port_offset; port_offset++) {
+ unsigned port = sprivate(s)->base_port + port_offset;
+ int base_port_channel = port_offset * ni_65xx_channels_per_port;
unsigned port_mask, port_data, port_read_bits;
- int bitshift;
- if (port >= ni_65xx_total_num_ports(board))
+ int bitshift = base_port_channel - base_bitfield_channel;
+
+ if (bitshift >= 32)
break;
- base_port_channel = port_offset * ni_65xx_channels_per_port;
port_mask = data[0];
port_data = data[1];
- bitshift = base_port_channel - base_bitfield_channel;
- if (bitshift >= 32 || bitshift <= -32)
- break;
if (bitshift > 0) {
port_mask >>= bitshift;
port_data >>= bitshift;
DGAP_LOCK(dgap_global_lock, flags);
brd->msgbuf = NULL;
- printk(brd->msgbuf_head);
+ printk("%s", brd->msgbuf_head);
kfree(brd->msgbuf_head);
brd->msgbuf_head = NULL;
DGAP_UNLOCK(dgap_global_lock, flags);
DPR_INIT(("dgap_scan(%d) - printing out the msgbuf\n", i));
DGAP_LOCK(dgap_global_lock, flags);
brd->msgbuf = NULL;
- printk(brd->msgbuf_head);
+ printk("%s", brd->msgbuf_head);
kfree(brd->msgbuf_head);
brd->msgbuf_head = NULL;
DGAP_UNLOCK(dgap_global_lock, flags);
char buf[1024];
int i;
unsigned long flags;
+ size_t length;
DGAP_LOCK(dgap_global_lock, flags);
/* Format buf using fmt and arguments contained in ap. */
va_start(ap, fmt);
- i = vsprintf(buf, fmt, ap);
+ i = vsnprintf(buf, sizeof(buf), fmt, ap);
va_end(ap);
DPR((buf));
if (!brd || !brd->msgbuf) {
- printk(buf);
+ printk("%s", buf);
DGAP_UNLOCK(dgap_global_lock, flags);
return;
}
- memcpy(brd->msgbuf, buf, strlen(buf));
- brd->msgbuf += strlen(buf);
- *brd->msgbuf = 0;
+ length = strlen(buf) + 1;
+ if (brd->msgbuf - brd->msgbuf_head < length)
+ length = brd->msgbuf - brd->msgbuf_head;
+ memcpy(brd->msgbuf, buf, length);
+ brd->msgbuf += length;
DGAP_UNLOCK(dgap_global_lock, flags);
}
DGNC_LOCK(dgnc_global_lock, flags);
brd->msgbuf = NULL;
- printk(brd->msgbuf_head);
+ printk("%s", brd->msgbuf_head);
kfree(brd->msgbuf_head);
brd->msgbuf_head = NULL;
DGNC_UNLOCK(dgnc_global_lock, flags);
DPR_INIT(("dgnc_scan(%d) - printing out the msgbuf\n", i));
DGNC_LOCK(dgnc_global_lock, flags);
brd->msgbuf = NULL;
- printk(brd->msgbuf_head);
+ printk("%s", brd->msgbuf_head);
kfree(brd->msgbuf_head);
brd->msgbuf_head = NULL;
DGNC_UNLOCK(dgnc_global_lock, flags);
config IIO_SIMPLE_DUMMY_BUFFER
boolean "Buffered capture support"
- depends on IIO_KFIFO_BUF
+ select IIO_KFIFO_BUF
help
Add buffered data capture to the simple dummy driver.
mutex_init(&chip->lock);
chip->lux_scale = 1;
+ chip->lux_uscale = 0;
chip->range = 1000;
chip->adc_bit = 16;
chip->suspended = false;
if (result < 0)
return -EINVAL;
- *val = result;
+ *val = sign_extend32(result, 15);
return IIO_VAL_INT;
}
if (ret)
iio_device_free(indio_dev);
- return 0;
+ return ret;
}
static int ade7854_spi_remove(struct spi_device *spi)
struct list_head encoder_list;
struct list_head connector_list;
struct mutex mutex;
- int references;
int pipes;
struct drm_fbdev_cma *fbhelper;
};
}
}
- imxdrm->references++;
-
return imxdrm->drm;
unwind_crtc:
list_for_each_entry(enc, &imxdrm->encoder_list, list)
module_put(enc->owner);
- imxdrm->references--;
-
mutex_unlock(&imxdrm->mutex);
}
EXPORT_SYMBOL_GPL(imx_drm_device_put);
mutex_lock(&imxdrm->mutex);
- if (imxdrm->references) {
+ if (imxdrm->drm->open_count) {
ret = -EBUSY;
goto err_busy;
}
mutex_lock(&imxdrm->mutex);
- if (imxdrm->references) {
+ if (imxdrm->drm->open_count) {
ret = -EBUSY;
goto err_busy;
}
mutex_lock(&imxdrm->mutex);
- if (imxdrm->references) {
+ if (imxdrm->drm->open_count) {
ret = -EBUSY;
goto err_busy;
}
struct snd_line6_pcm *line6pcm = snd_kcontrol_chip(kcontrol);
struct usb_line6_toneport *toneport =
(struct usb_line6_toneport *)line6pcm->line6;
+ unsigned int source;
- if (ucontrol->value.enumerated.item[0] == toneport->source)
+ source = ucontrol->value.enumerated.item[0];
+ if (source >= ARRAY_SIZE(toneport_source_info))
+ return -EINVAL;
+ if (source == toneport->source)
return 0;
- toneport->source = ucontrol->value.enumerated.item[0];
+ toneport->source = source;
toneport_send_cmd(toneport->line6.usbdev,
- toneport_source_info[toneport->source].code, 0x0000);
+ toneport_source_info[source].code, 0x0000);
return 1;
}
int
kiblnd_thread_start(int (*fn)(void *arg), void *arg, char *name)
{
- struct task_struct *task = kthread_run(fn, arg, name);
+ struct task_struct *task = kthread_run(fn, arg, "%s", name);
if (IS_ERR(task))
return PTR_ERR(task);
int
ksocknal_thread_start(int (*fn)(void *arg), void *arg, char *name)
{
- struct task_struct *task = kthread_run(fn, arg, name);
+ struct task_struct *task = kthread_run(fn, arg, "%s", name);
if (IS_ERR(task))
return PTR_ERR(task);
config LUSTRE_FS
tristate "Lustre file system client support"
- depends on INET && m
+ depends on INET && m && !MIPS && !XTENSA && !SUPERH
select LNET
select CRYPTO
select CRYPTO_CRC32
config LUSTRE_TRANSLATE_ERRNOS
bool
depends on LUSTRE_FS && !X86
- default true
+ default y
config LUSTRE_LLITE_LLOOP
bool "Lustre virtual block device"
init_completion(&bltd.bltd_comp);
bltd.bltd_num = atomic_read(&blp->blp_num_threads);
- snprintf(bltd.bltd_name, sizeof(bltd.bltd_name) - 1,
+ snprintf(bltd.bltd_name, sizeof(bltd.bltd_name),
"ldlm_bl_%02d", bltd.bltd_num);
- task = kthread_run(ldlm_bl_thread_main, &bltd, bltd.bltd_name);
+ task = kthread_run(ldlm_bl_thread_main, &bltd, "%s", bltd.bltd_name);
if (IS_ERR(task)) {
CERROR("cannot start LDLM thread ldlm_bl_%02d: rc %ld\n",
atomic_read(&blp->blp_num_threads), PTR_ERR(task));
sched->ws_name, sched->ws_nthreads);
}
- task = kthread_run(cfs_wi_scheduler, sched, name);
+ task = kthread_run(cfs_wi_scheduler, sched, "%s", name);
if (!IS_ERR(task)) {
nthrs--;
continue;
if (nob > ulsm_nob)
return (-EINVAL);
- if (copy_to_user (ulsm, lsm, sizeof(ulsm)))
+ if (copy_to_user (ulsm, lsm, sizeof(*ulsm)))
return (-EFAULT);
for (i = 0; i < lsm->lsm_stripe_count; i++) {
/* CLONE_VM and CLONE_FILES just avoid a needless copy, because we
* just drop the VM and FILES in cfs_daemonize_ctxt() right away. */
- rc = PTR_ERR(kthread_run(ptlrpc_pinger_main,
- &pinger_thread, pinger_thread.t_name));
+ rc = PTR_ERR(kthread_run(ptlrpc_pinger_main, &pinger_thread,
+ "%s", pinger_thread.t_name));
if (IS_ERR_VALUE(rc)) {
CERROR("cannot start thread: %d\n", rc);
return rc;
init_completion(&pc->pc_starting);
init_completion(&pc->pc_finishing);
spin_lock_init(&pc->pc_lock);
- strncpy(pc->pc_name, name, sizeof(pc->pc_name) - 1);
+ strlcpy(pc->pc_name, name, sizeof(pc->pc_name));
pc->pc_set = ptlrpc_prep_set();
if (pc->pc_set == NULL)
GOTO(out, rc = -ENOMEM);
GOTO(out, rc);
}
- task = kthread_run(ptlrpcd, pc, pc->pc_name);
+ task = kthread_run(ptlrpcd, pc, "%s", pc->pc_name);
if (IS_ERR(task))
GOTO(out, rc = PTR_ERR(task));
if (ptlrpcds == NULL)
GOTO(out, rc = -ENOMEM);
- snprintf(name, 15, "ptlrpcd_rcv");
+ snprintf(name, sizeof(name), "ptlrpcd_rcv");
set_bit(LIOD_RECOVERY, &ptlrpcds->pd_thread_rcv.pc_flags);
rc = ptlrpcd_start(-1, nthreads, name, &ptlrpcds->pd_thread_rcv);
if (rc < 0)
* unnecessary dependency. But how to distribute async RPCs load
* among all the ptlrpc daemons becomes another trouble. */
for (i = 0; i < nthreads; i++) {
- snprintf(name, 15, "ptlrpcd_%d", i);
+ snprintf(name, sizeof(name), "ptlrpcd_%d", i);
rc = ptlrpcd_start(i, nthreads, name, &ptlrpcds->pd_threads[i]);
if (rc < 0)
GOTO(out, rc);
****************************************/
-#define PTRS_PER_PAGE (PAGE_CACHE_SIZE / sizeof(void *))
-#define PAGES_PER_POOL (PTRS_PER_PAGE)
+#define POINTERS_PER_PAGE (PAGE_CACHE_SIZE / sizeof(void *))
+#define PAGES_PER_POOL (POINTERS_PER_PAGE)
#define IDLE_IDX_MAX (100)
#define IDLE_IDX_WEIGHT (3)
spin_unlock(&svcpt->scp_lock);
if (svcpt->scp_cpt >= 0) {
- snprintf(thread->t_name, PTLRPC_THR_NAME_LEN, "%s%02d_%03d",
+ snprintf(thread->t_name, sizeof(thread->t_name), "%s%02d_%03d",
svc->srv_thread_name, svcpt->scp_cpt, thread->t_id);
} else {
- snprintf(thread->t_name, PTLRPC_THR_NAME_LEN, "%s_%04d",
+ snprintf(thread->t_name, sizeof(thread->t_name), "%s_%04d",
svc->srv_thread_name, thread->t_id);
}
CDEBUG(D_RPCTRACE, "starting thread '%s'\n", thread->t_name);
- rc = PTR_ERR(kthread_run(ptlrpc_main, thread, thread->t_name));
+ rc = PTR_ERR(kthread_run(ptlrpc_main, thread, "%s", thread->t_name));
if (IS_ERR_VALUE(rc)) {
CERROR("cannot start thread '%s': rc %d\n",
thread->t_name, rc);
config USB_MSI3101
tristate "Mirics MSi3101 SDR Dongle"
depends on USB && VIDEO_DEV && VIDEO_V4L2
+ select VIDEOBUF2_VMALLOC
/* Absolute min and max number of buffers available for mmap() */
*nbuffers = 32;
*nplanes = 1;
- sizes[0] = PAGE_ALIGN(3 * 3072); /* 3 * 768 * 4 */
+ /*
+ * 3, wMaxPacketSize 3x 1024 bytes
+ * 504, max IQ sample pairs per 1024 frame
+ * 2, two samples, I and Q
+ * 4, 32-bit float
+ */
+ sizes[0] = PAGE_ALIGN(3 * 504 * 2 * 4); /* = 12096 */
dev_dbg(&s->udev->dev, "%s: nbuffers=%d sizes[0]=%d\n",
__func__, *nbuffers, sizes[0]);
return 0;
f->frequency * 625UL / 10UL);
}
-const struct v4l2_ioctl_ops msi3101_ioctl_ops = {
+static const struct v4l2_ioctl_ops msi3101_ioctl_ops = {
.vidioc_querycap = msi3101_querycap,
.vidioc_enum_input = msi3101_enum_input,
}
}
- memset(usb, 0, sizeof(usb));
+ memset(usb, 0, sizeof(*usb));
usb->init_flags = flags;
/* Initialize the USB state structure */
while (freed) {
struct sk_buff *skb = dev_alloc_skb(size + 256);
- if (unlikely(skb == NULL)) {
- pr_warning
- ("Failed to allocate skb for hardware pool %d\n",
- pool);
+ if (unlikely(skb == NULL))
break;
- }
-
skb_reserve(skb, 256 - (((unsigned long)skb->data) & 0x7f));
*(struct sk_buff **)(skb->data - sizeof(void *)) = skb;
cvmx_fpa_free(skb->data, pool, DONT_WRITEBACK(size / 128));
* Enable interrupts on inband status changes
* for this port.
*/
- gmx_rx_int_en.u64 =
- cvmx_read_csr(CVMX_GMXX_RXX_INT_EN
- (index, interface));
+ gmx_rx_int_en.u64 = 0;
gmx_rx_int_en.s.phy_dupx = 1;
gmx_rx_int_en.s.phy_link = 1;
gmx_rx_int_en.s.phy_spd = 1;
if (backlog > budget * cores_in_use && napi != NULL)
cvm_oct_enable_one_cpu();
}
+ rx_count++;
skb_in_hw = USE_SKBUFFS_IN_HW && work->word2.s.bufs == 1;
if (likely(skb_in_hw)) {
*/
skb = dev_alloc_skb(work->len);
if (!skb) {
- printk_ratelimited("Port %d failed to allocate "
- "skbuff, packet dropped\n",
- work->ipprt);
cvm_oct_free_work(work);
continue;
}
#endif
}
netif_receive_skb(skb);
- rx_count++;
} else {
/* Drop any packet received for a device that isn't up */
/*
struct oz_app_hdr *app_hdr;
struct oz_serial_ctx *ctx;
+ if (count > sizeof(ei->data) - sizeof(*elt) - sizeof(*app_hdr))
+ return -EINVAL;
+
spin_lock_bh(&g_cdev.lock);
pd = g_cdev.active_pd;
if (pd)
*frlen = *frlen + (len + 2);
- return pbuf + len + 2;
_func_exit_;
+ return pbuf + len + 2;
}
inline u8 *rtw_set_ie_ch_switch (u8 *buf, u32 *buf_len, u8 ch_switch_mode,
#ifdef CONFIG_88EU_P2P
-static int get_reg_classes_full_count(struct p2p_channels channel_list)
+static int get_reg_classes_full_count(struct p2p_channels *channel_list)
{
int cnt = 0;
int i;
- for (i = 0; i < channel_list.reg_classes; i++) {
- cnt += channel_list.reg_class[i].channels;
+ for (i = 0; i < channel_list->reg_classes; i++) {
+ cnt += channel_list->reg_class[i].channels;
}
return cnt;
/* + number of channels in all classes */
len_channellist_attr = 3
+ (1 + 1) * (u16)(pmlmeext->channel_list.reg_classes)
- + get_reg_classes_full_count(pmlmeext->channel_list);
+ + get_reg_classes_full_count(&pmlmeext->channel_list);
*(__le16 *)(p2pie + p2pielen) = cpu_to_le16(len_channellist_attr);
p2pielen += 2;
/* + number of channels in all classes */
len_channellist_attr = 3
+ (1 + 1) * (u16)pmlmeext->channel_list.reg_classes
- + get_reg_classes_full_count(pmlmeext->channel_list);
+ + get_reg_classes_full_count(&pmlmeext->channel_list);
*(__le16 *)(p2pie + p2pielen) = cpu_to_le16(len_channellist_attr);
/* + number of channels in all classes */
len_channellist_attr = 3
+ (1 + 1) * (u16)pmlmeext->channel_list.reg_classes
- + get_reg_classes_full_count(pmlmeext->channel_list);
+ + get_reg_classes_full_count(&pmlmeext->channel_list);
*(__le16 *)(p2pie + p2pielen) = cpu_to_le16(len_channellist_attr);
/* + number of channels in all classes */
len_channellist_attr = 3
+ (1 + 1) * (u16)pmlmeext->channel_list.reg_classes
- + get_reg_classes_full_count(pmlmeext->channel_list);
+ + get_reg_classes_full_count(&pmlmeext->channel_list);
*(__le16 *)(p2pie + p2pielen) = cpu_to_le16(len_channellist_attr);
p2pielen += 2;
sscanf(data, "pts =%d, start =%d, stop =%d", &psd_pts, &psd_start, &psd_stop);
}
- _rtw_memset(data, '\0', sizeof(data));
+ _rtw_memset(data, '\0', sizeof(*data));
i = psd_start;
while (i < psd_stop) {
inx[0] = 0; inx[1] = 1; inx[2] = 2; inx[3] = 3;
if (pregpriv->wifi_spec == 1) {
- u32 j, tmp, change_inx;
+ u32 j, tmp, change_inx = false;
/* entry indx: 0->vo, 1->vi, 2->be, 3->bk. */
for (i = 0; i < 4; i++) {
u8 cut_ver, fab_ver;
/* Init Value */
- _rtw_memset(dm_odm, 0, sizeof(dm_odm));
+ _rtw_memset(dm_odm, 0, sizeof(*dm_odm));
dm_odm->Adapter = Adapter;
#define DM_false_ALARM_THRESH_LOW 400
#define DM_false_ALARM_THRESH_HIGH 1000
-#define DM_DIG_MAX_NIC 0x3e
+#define DM_DIG_MAX_NIC 0x4e
#define DM_DIG_MIN_NIC 0x1e /* 0x22/0x1c */
#define DM_DIG_MAX_AP 0x32
struct txpowerinfo24g {
u8 IndexCCK_Base[MAX_RF_PATH][MAX_CHNL_GROUP_24G];
- u8 IndexBW40_Base[MAX_RF_PATH][MAX_CHNL_GROUP_24G-1];
+ u8 IndexBW40_Base[MAX_RF_PATH][MAX_CHNL_GROUP_24G];
/* If only one tx, only BW20 and OFDM are used. */
s8 CCK_Diff[MAX_RF_PATH][MAX_TX_COUNT];
s8 OFDM_Diff[MAX_RF_PATH][MAX_TX_COUNT];
{0, NULL},
{0, NULL},
{0, &rtw_cpwm_event_callback},
+ {0, NULL},
};
#endif/* _RTL_MLME_EXT_C_ */
stop = strncmp(extra, "stop", 4);
sscanf(extra, "count =%d, pkt", &count);
- _rtw_memset(extra, '\0', sizeof(extra));
+ _rtw_memset(extra, '\0', sizeof(*extra));
if (stop == 0) {
bStartTest = 0; /* To set Stop */
/*=== Customer ID ===*/
/****** 8188EUS ********/
{USB_DEVICE(0x8179, 0x07B8)}, /* Abocom - Abocom */
+ {USB_DEVICE(0x2001, 0x330F)}, /* DLink DWA-125 REV D1 */
{} /* Terminating entry */
};
/* Get TCB and local buffer from common pool.
(It is shared by CmdQ, MgntQ, and USB coalesce DataQ) */
skb = dev_alloc_skb(USB_HWDESC_HEADER_LEN + DataLen + 4);
+ if (!skb)
+ return RT_STATUS_FAILURE;
memcpy((unsigned char *)(skb->cb), &dev, sizeof(dev));
tcb_desc = (cb_desc *)(skb->cb + MAX_DEV_ADDR_SIZE);
tcb_desc->queue_index = TXCMD_QUEUE;
static int mp_get_count(struct sb_uart_state *state, struct serial_icounter_struct *icnt)
{
- struct serial_icounter_struct icount;
+ struct serial_icounter_struct icount = {};
struct sb_uart_icount cnow;
struct sb_uart_port *port = state->port;
if (!CARDbIsOFDMinBasicRate(pDevice)) {
DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO
"swGetOFDMControlRate:(NO OFDM) %d\n", wRateIdx);
- if (wRateIdx > RATE_24M)
- wRateIdx = RATE_24M;
+ if (wRateIdx > RATE_24M)
+ wRateIdx = RATE_24M;
return wRateIdx;
}
if (pMgmt == NULL)
return -EFAULT;
+ if (!(pDevice->flags & DEVICE_FLAGS_OPENED))
+ return -ENODEV;
+
buf = kzalloc(sizeof(struct viawget_wpa_param), GFP_KERNEL);
if (buf == NULL)
return -ENOMEM;
memset(pMgmt->abyCurrBSSID, 0, 6);
pMgmt->eCurrState = WMAC_STATE_IDLE;
+ pDevice->flags &= ~DEVICE_FLAGS_OPENED;
+
device_free_tx_bufs(pDevice);
device_free_rx_bufs(pDevice);
device_free_int_bufs(pDevice);
usb_free_urb(pDevice->pInterruptURB);
BSSvClearNodeDBTable(pDevice, 0);
- pDevice->flags &=(~DEVICE_FLAGS_OPENED);
DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO "device_close2 \n");
DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO"GetFreeContext()\n");
for (ii = 0; ii < pDevice->cbTD; ii++) {
+ if (!pDevice->apTD[ii])
+ return NULL;
pContext = pDevice->apTD[ii];
if (pContext->bBoolInUse == false) {
pContext->bBoolInUse = true;
ltv_t *pLtv;
bool_t ltvAllocated = FALSE;
ENCSTRCT sEncryption;
+ size_t len;
#ifdef USE_WDS
hcf_16 hcfPort = HCF_PORT_0;
break;
case CFG_CNF_OWN_NAME:
memset(lp->StationName, 0, sizeof(lp->StationName));
- memcpy((void *)lp->StationName, (void *)&pLtv->u.u8[2], (size_t)pLtv->u.u16[0]);
+ len = min_t(size_t, pLtv->u.u16[0], sizeof(lp->StationName));
+ strlcpy(lp->StationName, &pLtv->u.u8[2], len);
pLtv->u.u16[0] = CNV_INT_TO_LITTLE(pLtv->u.u16[0]);
break;
case CFG_CNF_LOAD_BALANCING:
{
struct wl_private *lp = wl_priv(dev);
unsigned long flags;
+ size_t len;
int ret = 0;
/*------------------------------------------------------------------------*/
wl_lock(lp, &flags);
memset(lp->StationName, 0, sizeof(lp->StationName));
-
- memcpy(lp->StationName, extra, wrqu->data.length);
+ len = min_t(size_t, wrqu->data.length, sizeof(lp->StationName));
+ strlcpy(lp->StationName, extra, len);
/* Commit the adapter parameters */
wl_apply(lp);
NULL,
MKDEV(major, i),
NULL,
- devname);
+ "%s", devname);
if (IS_ERR(device)) {
pr_warn("xillybus: Failed to create %s "
MODULE_LICENSE("Dual BSD/GPL");
MODULE_AUTHOR("Nitin Gupta <ngupta@vflare.org>");
MODULE_DESCRIPTION("Compressed RAM Block Device");
-MODULE_ALIAS("devname:zram");
static void iscsit_ack_from_expstatsn(struct iscsi_conn *conn, u32 exp_statsn)
{
- struct iscsi_cmd *cmd;
+ LIST_HEAD(ack_list);
+ struct iscsi_cmd *cmd, *cmd_p;
conn->exp_statsn = exp_statsn;
return;
spin_lock_bh(&conn->cmd_lock);
- list_for_each_entry(cmd, &conn->conn_cmd_list, i_conn_node) {
+ list_for_each_entry_safe(cmd, cmd_p, &conn->conn_cmd_list, i_conn_node) {
spin_lock(&cmd->istate_lock);
if ((cmd->i_state == ISTATE_SENT_STATUS) &&
iscsi_sna_lt(cmd->stat_sn, exp_statsn)) {
cmd->i_state = ISTATE_REMOVE;
spin_unlock(&cmd->istate_lock);
- iscsit_add_cmd_to_immediate_queue(cmd, conn,
- cmd->i_state);
+ list_move_tail(&cmd->i_conn_node, &ack_list);
continue;
}
spin_unlock(&cmd->istate_lock);
}
spin_unlock_bh(&conn->cmd_lock);
+
+ list_for_each_entry_safe(cmd, cmd_p, &ack_list, i_conn_node) {
+ list_del(&cmd->i_conn_node);
+ iscsit_free_cmd(cmd, false);
+ }
}
static int iscsit_allocate_iovecs(struct iscsi_cmd *cmd)
*/
alloc_tags:
tag_num = max_t(u32, ISCSIT_MIN_TAGS, queue_depth);
- tag_num += ISCSIT_EXTRA_TAGS;
+ tag_num += (tag_num / 2) + ISCSIT_EXTRA_TAGS;
tag_size = sizeof(struct iscsi_cmd) + conn->conn_transport->priv_size;
ret = transport_alloc_session_tags(sess->se_sess, tag_num, tag_size);
* Fallthrough
*/
case ISCSI_OP_SCSI_TMFUNC:
- rc = transport_generic_free_cmd(&cmd->se_cmd, 1);
+ rc = transport_generic_free_cmd(&cmd->se_cmd, shutdown);
if (!rc && shutdown && se_cmd && se_cmd->se_sess) {
__iscsit_free_cmd(cmd, true, shutdown);
target_put_sess_cmd(se_cmd->se_sess, se_cmd);
se_cmd = &cmd->se_cmd;
__iscsit_free_cmd(cmd, true, shutdown);
- rc = transport_generic_free_cmd(&cmd->se_cmd, 1);
+ rc = transport_generic_free_cmd(&cmd->se_cmd, shutdown);
if (!rc && shutdown && se_cmd->se_sess) {
__iscsit_free_cmd(cmd, true, shutdown);
target_put_sess_cmd(se_cmd->se_sess, se_cmd);
* pSCSI Host ID and enable for phba mode
*/
sh = scsi_host_lookup(phv->phv_host_id);
- if (IS_ERR(sh)) {
+ if (!sh) {
pr_err("pSCSI: Unable to locate SCSI Host for"
" phv_host_id: %d\n", phv->phv_host_id);
- return PTR_ERR(sh);
+ return -EINVAL;
}
phv->phv_lld_host = sh;
sh = phv->phv_lld_host;
} else {
sh = scsi_host_lookup(pdv->pdv_host_id);
- if (IS_ERR(sh)) {
+ if (!sh) {
pr_err("pSCSI: Unable to locate"
" pdv_host_id: %d\n", pdv->pdv_host_id);
- return PTR_ERR(sh);
+ return -EINVAL;
}
}
} else {
sectors, cmd->se_dev->dev_attrib.max_write_same_len);
return TCM_INVALID_CDB_FIELD;
}
+ /* We always have ANC_SUP == 0 so setting ANCHOR is always an error */
+ if (flags[0] & 0x10) {
+ pr_warn("WRITE SAME with ANCHOR not supported\n");
+ return TCM_INVALID_CDB_FIELD;
+ }
/*
* Special case for WRITE_SAME w/ UNMAP=1 that ends up getting
* translated into block discard requests within backend code.
{
struct se_device *dev = cmd->se_dev;
- cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST;
+ /*
+ * Only set SCF_COMPARE_AND_WRITE_POST to force a response fall-through
+ * within target_complete_ok_work() if the command was successfully
+ * sent to the backend driver.
+ */
+ spin_lock_irq(&cmd->t_state_lock);
+ if ((cmd->transport_state & CMD_T_SENT) && !cmd->scsi_status)
+ cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST;
+ spin_unlock_irq(&cmd->t_state_lock);
+
/*
* Unlock ->caw_sem originally obtained during sbc_compare_and_write()
* before the original READ I/O submission.
{
struct se_device *dev = cmd->se_dev;
struct scatterlist *write_sg = NULL, *sg;
- unsigned char *buf, *addr;
+ unsigned char *buf = NULL, *addr;
struct sg_mapping_iter m;
unsigned int offset = 0, len;
unsigned int nlbas = cmd->t_task_nolb;
*/
if (!cmd->t_data_sg || !cmd->t_bidi_data_sg)
return TCM_NO_SENSE;
+ /*
+ * Immediately exit + release dev->caw_sem if command has already
+ * been failed with a non-zero SCSI status.
+ */
+ if (cmd->scsi_status) {
+ pr_err("compare_and_write_callback: non zero scsi_status:"
+ " 0x%02x\n", cmd->scsi_status);
+ goto out;
+ }
buf = kzalloc(cmd->data_length, GFP_KERNEL);
if (!buf) {
cmd->transport_complete_callback = NULL;
return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
}
+ /*
+ * Reset cmd->data_length to individual block_size in order to not
+ * confuse backend drivers that depend on this value matching the
+ * size of the I/O being submitted.
+ */
+ cmd->data_length = cmd->t_task_nolb * dev->dev_attrib.block_size;
ret = cmd->execute_rw(cmd, cmd->t_bidi_data_sg, cmd->t_bidi_data_nents,
DMA_FROM_DEVICE);
{
int rc;
- se_sess->sess_cmd_map = kzalloc(tag_num * tag_size, GFP_KERNEL);
+ se_sess->sess_cmd_map = kzalloc(tag_num * tag_size,
+ GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
if (!se_sess->sess_cmd_map) {
- pr_err("Unable to allocate se_sess->sess_cmd_map\n");
- return -ENOMEM;
+ se_sess->sess_cmd_map = vzalloc(tag_num * tag_size);
+ if (!se_sess->sess_cmd_map) {
+ pr_err("Unable to allocate se_sess->sess_cmd_map\n");
+ return -ENOMEM;
+ }
}
rc = percpu_ida_init(&se_sess->sess_tag_pool, tag_num);
if (rc < 0) {
pr_err("Unable to init se_sess->sess_tag_pool,"
" tag_num: %u\n", tag_num);
- kfree(se_sess->sess_cmd_map);
+ if (is_vmalloc_addr(se_sess->sess_cmd_map))
+ vfree(se_sess->sess_cmd_map);
+ else
+ kfree(se_sess->sess_cmd_map);
se_sess->sess_cmd_map = NULL;
return -ENOMEM;
}
{
if (se_sess->sess_cmd_map) {
percpu_ida_destroy(&se_sess->sess_tag_pool);
- kfree(se_sess->sess_cmd_map);
+ if (is_vmalloc_addr(se_sess->sess_cmd_map))
+ vfree(se_sess->sess_cmd_map);
+ else
+ kfree(se_sess->sess_cmd_map);
}
kmem_cache_free(se_sess_cache, se_sess);
}
mutex_lock(&g_device_mutex);
list_for_each_entry(se_dev, &g_device_list, g_dev_node) {
+ if (!se_dev->dev_attrib.emulate_3pc)
+ continue;
+
memset(&tmp_dev_wwn[0], 0, XCOPY_NAA_IEEE_REGEX_LEN);
target_xcopy_gen_naa_ieee(se_dev, &tmp_dev_wwn[0]);
(unsigned long long)xop->dst_lba);
if (dc != 0) {
- xop->dbl = (desc[29] << 16) & 0xff;
- xop->dbl |= (desc[30] << 8) & 0xff;
+ xop->dbl = (desc[29] & 0xff) << 16;
+ xop->dbl |= (desc[30] & 0xff) << 8;
xop->dbl |= desc[31] & 0xff;
pr_debug("XCOPY seg desc 0x02: DC=1 w/ dbl: %u\n", xop->dbl);
struct se_cmd se_cmd;
struct xcopy_op *xcopy_op;
struct completion xpt_passthrough_sem;
+ unsigned char sense_buffer[TRANSPORT_SENSE_BUFFER];
};
static struct se_port xcopy_pt_port;
pr_debug("target_xcopy_issue_pt_cmd(): SCSI status: 0x%02x\n",
se_cmd->scsi_status);
- return 0;
+
+ return (se_cmd->scsi_status) ? -EINVAL : 0;
}
static int target_xcopy_read_source(
(unsigned long long)src_lba, src_sectors, length);
transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length,
- DMA_FROM_DEVICE, 0, NULL);
+ DMA_FROM_DEVICE, 0, &xpt_cmd->sense_buffer[0]);
xop->src_pt_cmd = xpt_cmd;
rc = target_xcopy_setup_pt_cmd(xpt_cmd, xop, src_dev, &cdb[0],
(unsigned long long)dst_lba, dst_sectors, length);
transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length,
- DMA_TO_DEVICE, 0, NULL);
+ DMA_TO_DEVICE, 0, &xpt_cmd->sense_buffer[0]);
xop->dst_pt_cmd = xpt_cmd;
rc = target_xcopy_setup_pt_cmd(xpt_cmd, xop, dst_dev, &cdb[0],
sense_reason_t target_do_xcopy(struct se_cmd *se_cmd)
{
+ struct se_device *dev = se_cmd->se_dev;
struct xcopy_op *xop = NULL;
unsigned char *p = NULL, *seg_desc;
unsigned int list_id, list_id_usage, sdll, inline_dl, sa;
+ sense_reason_t ret = TCM_INVALID_PARAMETER_LIST;
int rc;
unsigned short tdll;
+ if (!dev->dev_attrib.emulate_3pc) {
+ pr_err("EXTENDED_COPY operation explicitly disabled\n");
+ return TCM_UNSUPPORTED_SCSI_OPCODE;
+ }
+
sa = se_cmd->t_task_cdb[1] & 0x1f;
if (sa != 0x00) {
pr_err("EXTENDED_COPY(LID4) not supported\n");
return TCM_UNSUPPORTED_SCSI_OPCODE;
}
+ xop = kzalloc(sizeof(struct xcopy_op), GFP_KERNEL);
+ if (!xop) {
+ pr_err("Unable to allocate xcopy_op\n");
+ return TCM_OUT_OF_RESOURCES;
+ }
+ xop->xop_se_cmd = se_cmd;
+
p = transport_kmap_data_sg(se_cmd);
if (!p) {
pr_err("transport_kmap_data_sg() failed in target_do_xcopy\n");
+ kfree(xop);
return TCM_OUT_OF_RESOURCES;
}
list_id = p[0];
- if (list_id != 0x00) {
- pr_err("XCOPY with non zero list_id: 0x%02x\n", list_id);
- goto out;
- }
- list_id_usage = (p[1] & 0x18);
+ list_id_usage = (p[1] & 0x18) >> 3;
+
/*
* Determine TARGET DESCRIPTOR LIST LENGTH + SEGMENT DESCRIPTOR LIST LENGTH
*/
goto out;
}
- xop = kzalloc(sizeof(struct xcopy_op), GFP_KERNEL);
- if (!xop) {
- pr_err("Unable to allocate xcopy_op\n");
- goto out;
- }
- xop->xop_se_cmd = se_cmd;
-
pr_debug("Processing XCOPY with list_id: 0x%02x list_id_usage: 0x%02x"
" tdll: %hu sdll: %u inline_dl: %u\n", list_id, list_id_usage,
tdll, sdll, inline_dl);
if (rc <= 0)
goto out;
+ if (xop->src_dev->dev_attrib.block_size !=
+ xop->dst_dev->dev_attrib.block_size) {
+ pr_err("XCOPY: Non matching src_dev block_size: %u + dst_dev"
+ " block_size: %u currently unsupported\n",
+ xop->src_dev->dev_attrib.block_size,
+ xop->dst_dev->dev_attrib.block_size);
+ xcopy_pt_undepend_remotedev(xop);
+ ret = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ goto out;
+ }
+
pr_debug("XCOPY: Processed %d target descriptors, length: %u\n", rc,
rc * XCOPY_TARGET_DESC_LEN);
seg_desc = &p[16];
if (p)
transport_kunmap_data_sg(se_cmd);
kfree(xop);
- return TCM_INVALID_CDB_FIELD;
+ return ret;
}
static sense_reason_t target_rcr_operating_parameters(struct se_cmd *se_cmd)
}
th_zone = conf->pzone_data;
- if (th_zone->therm_dev)
- return;
if (th_zone->bind == false) {
for (i = 0; i < th_zone->cool_dev_size; i++) {
con = readl(data->base + reg->tmu_ctrl);
+ if (pdata->test_mux)
+ con |= (pdata->test_mux << reg->test_mux_addr_shift);
+
if (pdata->reference_voltage) {
con &= ~(reg->buf_vref_sel_mask << reg->buf_vref_sel_shift);
con |= pdata->reference_voltage << reg->buf_vref_sel_shift;
},
{
.compatible = "samsung,exynos4412-tmu",
- .data = (void *)EXYNOS5250_TMU_DRV_DATA,
+ .data = (void *)EXYNOS4412_TMU_DRV_DATA,
},
{
.compatible = "samsung,exynos5250-tmu",
if (ret)
return ret;
- if (pdata->type == SOC_ARCH_EXYNOS ||
- pdata->type == SOC_ARCH_EXYNOS4210 ||
- pdata->type == SOC_ARCH_EXYNOS5440)
+ if (pdata->type == SOC_ARCH_EXYNOS4210 ||
+ pdata->type == SOC_ARCH_EXYNOS4412 ||
+ pdata->type == SOC_ARCH_EXYNOS5250 ||
+ pdata->type == SOC_ARCH_EXYNOS5440)
data->soc = pdata->type;
else {
ret = -EINVAL;
enum soc_type {
SOC_ARCH_EXYNOS4210 = 1,
- SOC_ARCH_EXYNOS,
+ SOC_ARCH_EXYNOS4412,
+ SOC_ARCH_EXYNOS5250,
SOC_ARCH_EXYNOS5440,
};
* @triminfo_reload_shift: shift of triminfo reload enable bit in triminfo_ctrl
reg.
* @tmu_ctrl: TMU main controller register.
+ * @test_mux_addr_shift: shift bits of test mux address.
* @buf_vref_sel_shift: shift bits of reference voltage in tmu_ctrl register.
* @buf_vref_sel_mask: mask bits of reference voltage in tmu_ctrl register.
* @therm_trip_mode_shift: shift bits of tripping mode in tmu_ctrl register.
u32 triminfo_reload_shift;
u32 tmu_ctrl;
+ u32 test_mux_addr_shift;
u32 buf_vref_sel_shift;
u32 buf_vref_sel_mask;
u32 therm_trip_mode_shift;
* @first_point_trim: temp value of the first point trimming
* @second_point_trim: temp value of the second point trimming
* @default_temp_offset: default temperature offset in case of no trimming
+ * @test_mux; information if SoC supports test MUX
+ * @test_mux: information on whether the SoC supports a test MUX
* @cal_type: calibration type for temperature
* @cal_mode: calibration mode for temperature
* @freq_clip_table: Table representing frequency reduction percentage.
u8 first_point_trim;
u8 second_point_trim;
u8 default_temp_offset;
+ u8 test_mux;
enum calibration_type cal_type;
enum calibration_mode cal_mode;
};
#endif
-#if defined(CONFIG_SOC_EXYNOS5250) || defined(CONFIG_SOC_EXYNOS4412)
-static const struct exynos_tmu_registers exynos5250_tmu_registers = {
+#if defined(CONFIG_SOC_EXYNOS4412) || defined(CONFIG_SOC_EXYNOS5250)
+static const struct exynos_tmu_registers exynos4412_tmu_registers = {
.triminfo_data = EXYNOS_TMU_REG_TRIMINFO,
.triminfo_25_shift = EXYNOS_TRIMINFO_25_SHIFT,
.triminfo_85_shift = EXYNOS_TRIMINFO_85_SHIFT,
.triminfo_ctrl = EXYNOS_TMU_TRIMINFO_CON,
.triminfo_reload_shift = EXYNOS_TRIMINFO_RELOAD_SHIFT,
.tmu_ctrl = EXYNOS_TMU_REG_CONTROL,
+ .test_mux_addr_shift = EXYNOS4412_MUX_ADDR_SHIFT,
.buf_vref_sel_shift = EXYNOS_TMU_REF_VOLTAGE_SHIFT,
.buf_vref_sel_mask = EXYNOS_TMU_REF_VOLTAGE_MASK,
.therm_trip_mode_shift = EXYNOS_TMU_TRIP_MODE_SHIFT,
.emul_time_mask = EXYNOS_EMUL_TIME_MASK,
};
-#define EXYNOS5250_TMU_DATA \
+#define EXYNOS4412_TMU_DATA \
.threshold_falling = 10, \
.trigger_levels[0] = 85, \
.trigger_levels[1] = 103, \
.temp_level = 103, \
}, \
.freq_tab_count = 2, \
- .type = SOC_ARCH_EXYNOS, \
- .registers = &exynos5250_tmu_registers, \
+ .registers = &exynos4412_tmu_registers, \
.features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_TRIM_RELOAD | \
TMU_SUPPORT_FALLING_TRIP | TMU_SUPPORT_READY_STATUS | \
TMU_SUPPORT_EMUL_TIME)
+#endif
+#if defined(CONFIG_SOC_EXYNOS4412)
+struct exynos_tmu_init_data const exynos4412_default_tmu_data = {
+ .tmu_data = {
+ {
+ EXYNOS4412_TMU_DATA,
+ .type = SOC_ARCH_EXYNOS4412,
+ .test_mux = EXYNOS4412_MUX_ADDR_VALUE,
+ },
+ },
+ .tmu_count = 1,
+};
+#endif
+
+#if defined(CONFIG_SOC_EXYNOS5250)
struct exynos_tmu_init_data const exynos5250_default_tmu_data = {
.tmu_data = {
- { EXYNOS5250_TMU_DATA },
+ {
+ EXYNOS4412_TMU_DATA,
+ .type = SOC_ARCH_EXYNOS5250,
+ },
},
.tmu_count = 1,
};
#define EXYNOS_MAX_TRIGGER_PER_REG 4
+/* Exynos4412 specific */
+#define EXYNOS4412_MUX_ADDR_VALUE 6
+#define EXYNOS4412_MUX_ADDR_SHIFT 20
+
/*exynos5440 specific registers*/
#define EXYNOS5440_TMU_S0_7_TRIM 0x000
#define EXYNOS5440_TMU_S0_7_CTRL 0x020
#define EXYNOS4210_TMU_DRV_DATA (NULL)
#endif
-#if (defined(CONFIG_SOC_EXYNOS5250) || defined(CONFIG_SOC_EXYNOS4412))
+#if defined(CONFIG_SOC_EXYNOS4412)
+extern struct exynos_tmu_init_data const exynos4412_default_tmu_data;
+#define EXYNOS4412_TMU_DRV_DATA (&exynos4412_default_tmu_data)
+#else
+#define EXYNOS4412_TMU_DRV_DATA (NULL)
+#endif
+
+#if defined(CONFIG_SOC_EXYNOS5250)
extern struct exynos_tmu_init_data const exynos5250_default_tmu_data;
#define EXYNOS5250_TMU_DRV_DATA (&exynos5250_default_tmu_data)
#else
INIT_LIST_HEAD(&hwmon->tz_list);
strlcpy(hwmon->type, tz->type, THERMAL_NAME_LENGTH);
- hwmon->device = hwmon_device_register(&tz->device);
+ hwmon->device = hwmon_device_register(NULL);
if (IS_ERR(hwmon->device)) {
result = PTR_ERR(hwmon->device);
goto free_mem;
} else {
dev_err(bgp->dev,
"Failed to read PCB state. Using defaults\n");
+ ret = 0;
}
}
*temp = ti_thermal_hotspot_temperature(tmp, slope, constant);
int phy_id = topology_physical_package_id(cpu);
struct phy_dev_entry *phdev = pkg_temp_thermal_get_phy_entry(cpu);
bool notify = false;
+ unsigned long flags;
if (!phdev)
return;
- spin_lock(&pkg_work_lock);
+ spin_lock_irqsave(&pkg_work_lock, flags);
++pkg_work_cnt;
if (unlikely(phy_id > max_phy_id)) {
- spin_unlock(&pkg_work_lock);
+ spin_unlock_irqrestore(&pkg_work_lock, flags);
return;
}
pkg_work_scheduled[phy_id] = 0;
- spin_unlock(&pkg_work_lock);
+ spin_unlock_irqrestore(&pkg_work_lock, flags);
enable_pkg_thres_interrupt();
rdmsrl(MSR_IA32_PACKAGE_THERM_STATUS, msr_val);
int thres_count;
u32 eax, ebx, ecx, edx;
u8 *temp;
+ unsigned long flags;
cpuid(6, &eax, &ebx, &ecx, &edx);
thres_count = ebx & 0x07;
goto err_ret_unlock;
}
- spin_lock(&pkg_work_lock);
+ spin_lock_irqsave(&pkg_work_lock, flags);
if (topology_physical_package_id(cpu) > max_phy_id)
max_phy_id = topology_physical_package_id(cpu);
temp = krealloc(pkg_work_scheduled,
(max_phy_id+1) * sizeof(u8), GFP_ATOMIC);
if (!temp) {
- spin_unlock(&pkg_work_lock);
+ spin_unlock_irqrestore(&pkg_work_lock, flags);
err = -ENOMEM;
goto err_ret_free;
}
pkg_work_scheduled = temp;
pkg_work_scheduled[topology_physical_package_id(cpu)] = 0;
- spin_unlock(&pkg_work_lock);
+ spin_unlock_irqrestore(&pkg_work_lock, flags);
phy_dev_entry->phys_proc_id = topology_physical_package_id(cpu);
phy_dev_entry->first_cpu = cpu;
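The lock converted above can be taken from more than one context, so the plain spin_lock() calls become the irqsave variants, which also save and restore the caller's interrupt state. A minimal, hedged sketch of the pattern (the lock and function below are illustrative, not from the driver):

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(example_lock);

	static void example_update(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&example_lock, flags);
		/* ... touch state that may also be reached from IRQ context ... */
		spin_unlock_irqrestore(&example_lock, flags);
	}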
.name = "xenboot",
.write = xenboot_write_console,
.flags = CON_PRINTBUFFER | CON_BOOT | CON_ANYTIME,
+ .index = -1,
};
#endif /* CONFIG_EARLY_PRINTK */
canon_change = (old->c_lflag ^ tty->termios.c_lflag) & ICANON;
if (canon_change) {
bitmap_zero(ldata->read_flags, N_TTY_BUF_SIZE);
- ldata->line_start = 0;
- ldata->canon_head = ldata->read_tail;
+ ldata->line_start = ldata->canon_head = ldata->read_tail;
ldata->erasing = 0;
ldata->lnext = 0;
}
if (!input_available_p(tty, 0)) {
if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) {
- retval = -EIO;
- break;
- }
- if (tty_hung_up_p(file))
- break;
- if (!timeout)
- break;
- if (file->f_flags & O_NONBLOCK) {
- retval = -EAGAIN;
- break;
- }
- if (signal_pending(current)) {
- retval = -ERESTARTSYS;
- break;
- }
- n_tty_set_room(tty);
- up_read(&tty->termios_rwsem);
+ up_read(&tty->termios_rwsem);
+ tty_flush_to_ldisc(tty);
+ down_read(&tty->termios_rwsem);
+ if (!input_available_p(tty, 0)) {
+ retval = -EIO;
+ break;
+ }
+ } else {
+ if (tty_hung_up_p(file))
+ break;
+ if (!timeout)
+ break;
+ if (file->f_flags & O_NONBLOCK) {
+ retval = -EAGAIN;
+ break;
+ }
+ if (signal_pending(current)) {
+ retval = -ERESTARTSYS;
+ break;
+ }
+ n_tty_set_room(tty);
+ up_read(&tty->termios_rwsem);
- timeout = schedule_timeout(timeout);
+ timeout = schedule_timeout(timeout);
- down_read(&tty->termios_rwsem);
- continue;
+ down_read(&tty->termios_rwsem);
+ continue;
+ }
}
__set_current_state(TASK_RUNNING);
/*
* Get ip name usart or uart
*/
-static int atmel_get_ip_name(struct uart_port *port)
+static void atmel_get_ip_name(struct uart_port *port)
{
struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
int name = UART_GET_IP_NAME(port);
atmel_port->is_usart = false;
} else {
dev_err(port->dev, "Not supported ip name, set to uart\n");
- return -EINVAL;
}
-
- return 0;
}
/*
/*
* Get port name of usart or uart
*/
- ret = atmel_get_ip_name(&port->uart);
- if (ret < 0)
- goto err_add_port;
+ atmel_get_ip_name(&port->uart);
return 0;
sport->devdata = of_id->data;
- if (of_device_is_stdout_path(np))
- add_preferred_console(imx_reg.cons->name, sport->port.line, 0);
-
return 0;
}
#else
static int dma_push_rx(struct eg20t_port *priv, int size)
{
- struct tty_struct *tty;
int room;
struct uart_port *port = &priv->port;
struct tty_port *tport = &port->state->port;
- port = &priv->port;
- tty = tty_port_tty_get(tport);
- if (!tty) {
- dev_dbg(priv->port.dev, "%s:tty is busy now", __func__);
- return 0;
- }
-
room = tty_buffer_request_room(tport, size);
if (room < size)
dev_warn(port->dev, "Rx overrun: dropping %u bytes\n",
size - room);
if (!room)
- return room;
+ return 0;
tty_insert_flip_string(tport, sg_virt(&priv->sg_rx), size);
port->icount.rx += room;
- tty_kref_put(tty);
return room;
}
if (tty == NULL) {
for (i = 0; error_msg[i] != NULL; i++)
dev_err(&priv->pdev->dev, error_msg[i]);
+ } else {
+ tty_kref_put(tty);
}
}
static void tegra_uart_stop_rx(struct uart_port *u)
{
struct tegra_uart_port *tup = to_tegra_uport(u);
- struct tty_struct *tty = tty_port_tty_get(&tup->uport.state->port);
+ struct tty_struct *tty;
struct tty_port *port = &u->state->port;
struct dma_tx_state state;
unsigned long ier;
if (!tup->rx_in_progress)
return;
+ tty = tty_port_tty_get(&tup->uport.state->port);
+
tegra_uart_wait_sym_time(tup, 1); /* wait a character interval */
ier = tup->ier_shadow;
if (!mmres || !irqres)
return -ENODEV;
- if (np)
+ if (np) {
port = of_alias_get_id(np, "serial");
if (port >= VT8500_MAX_PORTS)
port = -1;
- else
+ } else {
port = -1;
+ }
if (port < 0) {
/* calculate the port id */
struct pid *tty_pgrp = tty_get_pgrp(tty);
if (tty_pgrp) {
kill_pgrp(tty_pgrp, SIGHUP, on_exit);
- kill_pgrp(tty_pgrp, SIGCONT, on_exit);
+ if (!on_exit)
+ kill_pgrp(tty_pgrp, SIGCONT, on_exit);
put_pid(tty_pgrp);
}
}
}
return 0;
case TCFLSH:
+ retval = tty_check_change(tty);
+ if (retval)
+ return retval;
return __tty_perform_flush(tty, arg);
default:
/* Try the mode commands */
{
struct uio_device *idev = vma->vm_private_data;
int mi = uio_find_mem_index(vma);
+ struct uio_mem *mem;
if (mi < 0)
return -EINVAL;
+ mem = idev->info->mem + mi;
- vma->vm_ops = &uio_physical_vm_ops;
+ if (vma->vm_end - vma->vm_start > mem->size)
+ return -EINVAL;
+ vma->vm_ops = &uio_physical_vm_ops;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ /*
+ * We cannot use the vm_iomap_memory() helper here,
+ * because vma->vm_pgoff is the map index we looked
+ * up above in uio_find_mem_index(), rather than an
+ * actual page offset into the mmap.
+ *
+ * So we just do the physical mmap without a page
+ * offset.
+ */
return remap_pfn_range(vma,
vma->vm_start,
- idev->info->mem[mi].addr >> PAGE_SHIFT,
+ mem->addr >> PAGE_SHIFT,
vma->vm_end - vma->vm_start,
vma->vm_page_prot);
}
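For reference, a hedged user-space sketch of how such a UIO region is normally mapped: by UIO convention, map N of /dev/uioX is selected by passing N times the page size as the mmap offset, and the requested length must not exceed the size exported in sysfs (the bounds check added above enforces that on the kernel side). The device path and map index here are illustrative:

	#include <fcntl.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long pg = sysconf(_SC_PAGESIZE);
		size_t len = (size_t)pg;		/* must be <= the map's size */
		int fd = open("/dev/uio0", O_RDWR);	/* hypothetical UIO device */
		void *p;

		if (fd < 0)
			return 1;
		/* map index 0 -> offset 0 * page size */
		p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;
		/* ... access device registers through p ... */
		munmap(p, len);
		close(fd);
		return 0;
	}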
config USB_CHIPIDEA
tristate "ChipIdea Highspeed Dual Role Controller"
- depends on (USB_EHCI_HCD && USB_GADGET) || (USB_EHCI_HCD && !USB_GADGET) || (!USB_EHCI_HCD && USB_GADGET)
+ depends on ((USB_EHCI_HCD && USB_GADGET) || (USB_EHCI_HCD && !USB_GADGET) || (!USB_EHCI_HCD && USB_GADGET)) && HAS_DMA
help
Say Y here if your system has a dual role high speed USB
controller based on ChipIdea silicon IP. Currently, only the
if (ret) {
dev_err(&pdev->dev, "usbmisc init failed, ret=%d\n",
ret);
- goto err_clk;
+ goto err_phy;
}
}
dev_err(&pdev->dev,
"Can't register ci_hdrc platform device, err=%d\n",
ret);
- goto err_clk;
+ goto err_phy;
}
if (data->usbmisc_data) {
disable_device:
ci_hdrc_remove_device(data->ci_pdev);
+err_phy:
+ if (data->phy)
+ usb_phy_shutdown(data->phy);
err_clk:
clk_disable_unprepare(data->clk);
return ret;
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x0829),
.driver_data = (kernel_ulong_t)&penwell_pci_platdata,
},
- { 0, 0, 0, 0, 0, 0, 0 /* end: all zeroes */ }
+ {
+ /* Intel Clovertrail */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xe006),
+ .driver_data = (kernel_ulong_t)&penwell_pci_platdata,
+ },
+ { 0 } /* end: all zeroes */
};
MODULE_DEVICE_TABLE(pci, ci_hdrc_pci_id_table);
dbg_remove_files(ci);
free_irq(ci->irq, ci);
ci_role_destroy(ci);
+ kfree(ci->hw_bank.regmap);
return 0;
}
{
struct usb_hcd *hcd = ci->hcd;
- usb_remove_hcd(hcd);
- usb_put_hcd(hcd);
+ if (hcd) {
+ usb_remove_hcd(hcd);
+ usb_put_hcd(hcd);
+ }
if (ci->platdata->reg_vbus)
regulator_disable(ci->platdata->reg_vbus);
}
for (i = 0; i < ci->hw_ep_max; i++) {
struct ci_hw_ep *hwep = &ci->ci_hw_ep[i];
+ if (hwep->pending_td)
+ free_pending_td(hwep);
dma_pool_free(ci->qh_pool, hwep->qh.ptr, hwep->qh.dma);
}
}
if (ci->platdata->notify_event)
ci->platdata->notify_event(ci,
CI_HDRC_CONTROLLER_STOPPED_EVENT);
- ci->driver = NULL;
spin_unlock_irqrestore(&ci->lock, flags);
_gadget_stop_activity(&ci->gadget);
spin_lock_irqsave(&ci->lock, flags);
pm_runtime_put(&ci->gadget.dev);
}
+ ci->driver = NULL;
spin_unlock_irqrestore(&ci->lock, flags);
return 0;
if ((index & ~USB_DIR_IN) == 0)
return 0;
ret = findintfep(ps->dev, index);
+ if (ret < 0) {
+ /*
+ * Some not fully compliant Win apps seem to get
+ * index wrong and have the endpoint number here
+ * rather than the endpoint address (with the
+ * correct direction). Win does let this through,
+ * so we'll not reject it here but leave it to
+ * the device to not break KVM. But we warn.
+ */
+ ret = findintfep(ps->dev, index ^ 0x80);
+ if (ret >= 0)
+ dev_info(&ps->dev->dev,
+ "%s: process %i (%s) requesting ep %02x but needs %02x\n",
+ __func__, task_pid_nr(current),
+ current->comm, index, index ^ 0x80);
+ }
if (ret >= 0)
ret = checkintf(ps, ret);
break;
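(Background for the fallback above, hedged: a USB endpoint address keeps the endpoint number in the low bits and the direction in bit 7, with USB_DIR_IN defined as 0x80. XOR-ing the user-supplied value with 0x80 therefore retries the same endpoint number with the opposite direction, e.g. a request for 0x02 is retried as 0x82.)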
unsigned long long u2_pel;
int ret;
+ if (udev->state != USB_STATE_CONFIGURED)
+ return 0;
+
/* Convert SEL and PEL stored in ns to us */
u1_sel = DIV_ROUND_UP(udev->u1_params.sel, 1000);
u1_pel = DIV_ROUND_UP(udev->u1_params.pel, 1000);
/* Alcor Micro Corp. Hub */
{ USB_DEVICE(0x058f, 0x9254), .driver_info = USB_QUIRK_RESET_RESUME },
+ /* MicroTouch Systems touchscreen */
+ { USB_DEVICE(0x0596, 0x051e), .driver_info = USB_QUIRK_RESET_RESUME },
+
/* appletouch */
{ USB_DEVICE(0x05ac, 0x021a), .driver_info = USB_QUIRK_RESET_RESUME },
/* Broadcom BCM92035DGROM BT dongle */
{ USB_DEVICE(0x0a5c, 0x2021), .driver_info = USB_QUIRK_RESET_RESUME },
+ /* MAYA44USB sound device */
+ { USB_DEVICE(0x0a92, 0x0091), .driver_info = USB_QUIRK_RESET_RESUME },
+
/* Action Semiconductor flash disk */
{ USB_DEVICE(0x10d6, 0x2200), .driver_info =
USB_QUIRK_STRING_FETCH_255 },
config USB_DWC3
tristate "DesignWare USB3 DRD Core Support"
depends on (USB || USB_GADGET) && HAS_DMA
- depends on EXTCON
select USB_XHCI_PLATFORM if USB_SUPPORT && USB_XHCI_HCD
help
Say Y or M here if your system has a Dual Role SuperSpeed
/* FIXME define these in <linux/pci_ids.h> */
#define PCI_VENDOR_ID_SYNOPSYS 0x16c3
#define PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3 0xabcd
+#define PCI_DEVICE_ID_INTEL_BYT 0x0f37
+#define PCI_DEVICE_ID_INTEL_MRFLD 0x119e
struct dwc3_pci {
struct device *dev;
PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS,
PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3),
},
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BYT), },
+ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MRFLD), },
{ } /* Terminating Entry */
};
MODULE_DEVICE_TABLE(pci, dwc3_pci_id_table);
ret = usb_add_gadget_udc(dwc->dev, &dwc->gadget);
if (ret) {
dev_err(dwc->dev, "failed to register udc\n");
- goto err5;
+ goto err4;
}
return 0;
-err5:
- dwc3_gadget_free_endpoints(dwc);
-
err4:
+ dwc3_gadget_free_endpoints(dwc);
dma_free_coherent(dwc->dev, DWC3_EP0_BOUNCE_SIZE,
dwc->ep0_bounce, dwc->ep0_bounce_addr);
c->bmAttributes |= USB_CONFIG_ATT_WAKEUP;
}
- fi_ecm = usb_get_function_instance("ecm");
- if (IS_ERR(fi_ecm)) {
- status = PTR_ERR(fi_ecm);
- goto err_func_ecm;
- }
-
f_ecm = usb_get_function(fi_ecm);
if (IS_ERR(f_ecm)) {
status = PTR_ERR(f_ecm);
if (status)
goto err_add_ecm;
- fi_serial = usb_get_function_instance("acm");
- if (IS_ERR(fi_serial)) {
- status = PTR_ERR(fi_serial);
- goto err_get_acm;
- }
-
f_acm = usb_get_function(fi_serial);
if (IS_ERR(f_acm)) {
status = PTR_ERR(f_acm);
- goto err_func_acm;
+ goto err_get_acm;
}
status = usb_add_function(c, f_acm);
if (status)
goto err_add_acm;
-
return 0;
err_add_acm:
usb_put_function(f_acm);
-err_func_acm:
- usb_put_function_instance(fi_serial);
err_get_acm:
usb_remove_function(c, f_ecm);
err_add_ecm:
usb_put_function(f_ecm);
err_get_ecm:
- usb_put_function_instance(fi_ecm);
-err_func_ecm:
return status;
}
struct dummy_hcd *dum_hcd = gadget_to_dummy_hcd(g);
struct dummy *dum = dum_hcd->dum;
- dev_dbg(udc_dev(dum), "unregister gadget driver '%s'\n",
- driver->driver.name);
+ if (driver)
+ dev_dbg(udc_dev(dum), "unregister gadget driver '%s'\n",
+ driver->driver.name);
dum->driver = NULL;
{
struct dummy *dum = platform_get_drvdata(pdev);
- usb_del_gadget_udc(&dum->gadget);
device_remove_file(&dum->gadget.dev, &dev_attr_function);
+ usb_del_gadget_udc(&dum->gadget);
return 0;
}
usb_ep_free_request(ecm->notify, ecm->notify_req);
}
-struct usb_function *ecm_alloc(struct usb_function_instance *fi)
+static struct usb_function *ecm_alloc(struct usb_function_instance *fi)
{
struct f_ecm *ecm;
struct f_ecm_opts *opts;
usb_free_all_descriptors(f);
}
-struct usb_function *eem_alloc(struct usb_function_instance *fi)
+static struct usb_function *eem_alloc(struct usb_function_instance *fi)
{
struct f_eem *eem;
struct f_eem_opts *opts;
struct ffs_file_perms perms;
umode_t root_mode;
const char *dev_name;
- union {
- /* set by ffs_fs_mount(), read by ffs_sb_fill() */
- void *private_data;
- /* set by ffs_sb_fill(), read by ffs_fs_mount */
- struct ffs_data *ffs_data;
- };
+ struct ffs_data *ffs_data;
};
static int ffs_sb_fill(struct super_block *sb, void *_data, int silent)
{
struct ffs_sb_fill_data *data = _data;
struct inode *inode;
- struct ffs_data *ffs;
+ struct ffs_data *ffs = data->ffs_data;
ENTER();
- /* Initialise data */
- ffs = ffs_data_new();
- if (unlikely(!ffs))
- goto Enomem;
-
ffs->sb = sb;
- ffs->dev_name = kstrdup(data->dev_name, GFP_KERNEL);
- if (unlikely(!ffs->dev_name))
- goto Enomem;
- ffs->file_perms = data->perms;
- ffs->private_data = data->private_data;
-
- /* used by the caller of this function */
- data->ffs_data = ffs;
-
+ data->ffs_data = NULL;
sb->s_fs_info = ffs;
sb->s_blocksize = PAGE_CACHE_SIZE;
sb->s_blocksize_bits = PAGE_CACHE_SHIFT;
&data->perms);
sb->s_root = d_make_root(inode);
if (unlikely(!sb->s_root))
- goto Enomem;
+ return -ENOMEM;
/* EP0 file */
if (unlikely(!ffs_sb_create_file(sb, "ep0", ffs,
&ffs_ep0_operations, NULL)))
- goto Enomem;
+ return -ENOMEM;
return 0;
-
-Enomem:
- return -ENOMEM;
}
static int ffs_fs_parse_opts(struct ffs_sb_fill_data *data, char *opts)
struct dentry *rv;
int ret;
void *ffs_dev;
+ struct ffs_data *ffs;
ENTER();
if (unlikely(ret < 0))
return ERR_PTR(ret);
+ ffs = ffs_data_new();
+ if (unlikely(!ffs))
+ return ERR_PTR(-ENOMEM);
+ ffs->file_perms = data.perms;
+
+ ffs->dev_name = kstrdup(dev_name, GFP_KERNEL);
+ if (unlikely(!ffs->dev_name)) {
+ ffs_data_put(ffs);
+ return ERR_PTR(-ENOMEM);
+ }
+
ffs_dev = functionfs_acquire_dev_callback(dev_name);
- if (IS_ERR(ffs_dev))
- return ffs_dev;
+ if (IS_ERR(ffs_dev)) {
+ ffs_data_put(ffs);
+ return ERR_CAST(ffs_dev);
+ }
+ ffs->private_data = ffs_dev;
+ data.ffs_data = ffs;
- data.dev_name = dev_name;
- data.private_data = ffs_dev;
rv = mount_nodev(t, flags, &data, ffs_sb_fill);
-
- /* data.ffs_data is set by ffs_sb_fill */
- if (IS_ERR(rv))
+ if (IS_ERR(rv) && data.ffs_data) {
functionfs_release_dev_callback(data.ffs_data);
-
+ ffs_data_put(data.ffs_data);
+ }
return rv;
}
data->raw_descs + ret,
(sizeof data->raw_descs) - ret,
__ffs_func_bind_do_descs, func);
+ if (unlikely(ret < 0))
+ goto error;
}
/*
/* Disable the endpoints */
if (fsg->bulk_in_enabled) {
usb_ep_disable(fsg->bulk_in);
+ fsg->bulk_in->driver_data = NULL;
fsg->bulk_in_enabled = 0;
}
if (fsg->bulk_out_enabled) {
usb_ep_disable(fsg->bulk_out);
+ fsg->bulk_out->driver_data = NULL;
fsg->bulk_out_enabled = 0;
}
module_platform_driver(fotg210_driver);
-MODULE_AUTHOR("Yuan-Hsin Chen <yhchen@faraday-tech.com>");
+MODULE_AUTHOR("Yuan-Hsin Chen, Feng-Hsin Chiang <john453@faraday-tech.com>");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_DESCRIPTION("FUSB300 USB gadget driver");
MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Yuan Hsin Chen <yhchen@faraday-tech.com>");
+MODULE_AUTHOR("Yuan-Hsin Chen, Feng-Hsin Chiang <john453@faraday-tech.com>");
MODULE_ALIAS("platform:fusb300_udc");
#define DRIVER_VERSION "20 October 2010"
return ret;
}
-static int rndis_config_register(struct usb_composite_dev *cdev)
+static __ref int rndis_config_register(struct usb_composite_dev *cdev)
{
static struct usb_configuration config = {
.bConfigurationValue = MULTI_RNDIS_CONFIG_NUM,
#else
-static int rndis_config_register(struct usb_composite_dev *cdev)
+static __ref int rndis_config_register(struct usb_composite_dev *cdev)
{
return 0;
}
return ret;
}
-static int cdc_config_register(struct usb_composite_dev *cdev)
+static __ref int cdc_config_register(struct usb_composite_dev *cdev)
{
static struct usb_configuration config = {
.bConfigurationValue = MULTI_CDC_CONFIG_NUM,
#else
-static int cdc_config_register(struct usb_composite_dev *cdev)
+static __ref int cdc_config_register(struct usb_composite_dev *cdev)
{
return 0;
}
struct mv_u3d_ep *ep;
struct mv_u3d_ep_context *ep_context;
u32 epxcr, direction;
+ unsigned long flags;
if (!_ep)
return -EINVAL;
direction = mv_u3d_ep_dir(ep);
/* nuke all pending requests (does flush) */
+ spin_lock_irqsave(&u3d->lock, flags);
mv_u3d_nuke(ep, -ESHUTDOWN);
+ spin_unlock_irqrestore(&u3d->lock, flags);
/* Disable the endpoint for Rx or Tx and reset the endpoint type */
if (direction == MV_U3D_EP_DIR_OUT) {
/*
* probe - binds to the platform device
*/
-static int __init pxa25x_udc_probe(struct platform_device *pdev)
+static int pxa25x_udc_probe(struct platform_device *pdev)
{
struct pxa25x_udc *dev = &memory;
int retval, irq;
pullup_off();
}
-static int __exit pxa25x_udc_remove(struct platform_device *pdev)
+static int pxa25x_udc_remove(struct platform_device *pdev)
{
struct pxa25x_udc *dev = platform_get_drvdata(pdev);
static struct platform_driver udc_driver = {
.shutdown = pxa25x_udc_shutdown,
- .remove = __exit_p(pxa25x_udc_remove),
+ .probe = pxa25x_udc_probe,
+ .remove = pxa25x_udc_remove,
.suspend = pxa25x_udc_suspend,
.resume = pxa25x_udc_resume,
.driver = {
},
};
-module_platform_driver_probe(udc_driver, pxa25x_udc_probe);
+module_platform_driver(udc_driver);
MODULE_DESCRIPTION(DRIVER_DESC);
MODULE_AUTHOR("Frank Becker, Robert Schwebel, David Brownell");
* FIFO, requests of >512 cause the endpoint to get stuck with a
* fragment of the end of the transfer in it.
*/
- if (can_write > 512)
+ if (can_write > 512 && !periodic)
can_write = 512;
/*
if (gintsts & GINTSTS_ErlySusp) {
dev_dbg(hsotg->dev, "GINTSTS_ErlySusp\n");
writel(GINTSTS_ErlySusp, hsotg->regs + GINTSTS);
-
- s3c_hsotg_disconnect(hsotg);
}
/*
if (!hsotg)
return -ENODEV;
- if (!driver || driver != hsotg->driver || !driver->unbind)
- return -EINVAL;
-
/* all endpoints should be shutdown */
for (ep = 0; ep < hsotg->num_of_eps; ep++)
s3c_hsotg_ep_disable(&hsotg->eps[ep].ep);
spin_lock_irqsave(&hsotg->lock, flags);
s3c_hsotg_phy_disable(hsotg);
- regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies), hsotg->supplies);
- hsotg->driver = NULL;
+ if (!driver)
+ hsotg->driver = NULL;
+
hsotg->gadget.speed = USB_SPEED_UNKNOWN;
spin_unlock_irqrestore(&hsotg->lock, flags);
- dev_info(hsotg->dev, "unregistered gadget driver '%s'\n",
- driver->driver.name);
+ regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies), hsotg->supplies);
return 0;
}
}
/* Enable USB controller, 83xx or 8536 */
- if (pdata->have_sysif_regs)
+ if (pdata->have_sysif_regs && pdata->controller_ver < FSL_USB_VER_1_6)
setbits32(hcd->regs + FSL_SOC_USB_CTRL, 0x4);
/* Don't need to set host mode here. It will be done by tdi_reset() */
case FSL_USB2_PHY_ULPI:
if (pdata->have_sysif_regs && pdata->controller_ver) {
/* controller version 1.6 or above */
+ clrbits32(non_ehci + FSL_SOC_USB_CTRL, UTMI_PHY_EN);
setbits32(non_ehci + FSL_SOC_USB_CTRL,
- ULPI_PHY_CLK_SEL);
- /*
- * Due to controller issue of PHY_CLK_VALID in ULPI
- * mode, we set USB_CTRL_USB_EN before checking
- * PHY_CLK_VALID, otherwise PHY_CLK_VALID doesn't work.
- */
- clrsetbits_be32(non_ehci + FSL_SOC_USB_CTRL,
- UTMI_PHY_EN, USB_CTRL_USB_EN);
+ ULPI_PHY_CLK_SEL | USB_CTRL_USB_EN);
}
portsc |= PORT_PTS_ULPI;
break;
if (pdata->have_sysif_regs && pdata->controller_ver &&
(phy_mode == FSL_USB2_PHY_ULPI)) {
/* check PHY_CLK_VALID to get phy clk valid */
- if (!spin_event_timeout(in_be32(non_ehci + FSL_SOC_USB_CTRL) &
- PHY_CLK_VALID, FSL_USB_PHY_CLK_TIMEOUT, 0)) {
+ if (!(spin_event_timeout(in_be32(non_ehci + FSL_SOC_USB_CTRL) &
+ PHY_CLK_VALID, FSL_USB_PHY_CLK_TIMEOUT, 0) ||
+ in_be32(non_ehci + FSL_SOC_USB_PRICTRL))) {
printk(KERN_WARNING "fsl-ehci: USB PHY clock invalid\n");
return -EINVAL;
}
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_USB2 | HCD_MEMORY | HCD_BH,
+ .flags = HCD_USB2 | HCD_MEMORY,
/*
* basic lifecycle operations
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
/*
* basic lifecycle operations
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
/*
* basic lifecycle operations
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
/*
* basic lifecycle operations
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
/*
* basic lifecycle operations
.remove = usb_hcd_pci_remove,
.shutdown = usb_hcd_pci_shutdown,
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_PM
.driver = {
.pm = &usb_hcd_pci_pm_ops
},
#else
.irq = ehci_irq,
#endif
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
/*
* basic lifecycle operations
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
/*
* basic lifecycle operations
.product_desc = "PS3 EHCI Host Controller",
.hcd_priv_size = sizeof(struct ehci_hcd),
.irq = ehci_irq,
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
.reset = ps3_ehci_hc_reset,
.start = ehci_run,
.stop = ehci_stop,
static void
ehci_urb_done(struct ehci_hcd *ehci, struct urb *urb, int status)
+__releases(ehci->lock)
+__acquires(ehci->lock)
{
if (usb_pipetype(urb->pipe) == PIPE_INTERRUPT) {
/* ... update hc-wide periodic stats */
urb->actual_length, urb->transfer_buffer_length);
#endif
+ /* complete() can reenter this HCD */
usb_hcd_unlink_urb_from_ep(ehci_to_hcd(ehci), urb);
+ spin_unlock (&ehci->lock);
usb_hcd_giveback_urb(ehci_to_hcd(ehci), urb, status);
+ spin_lock (&ehci->lock);
}
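(Note on the annotations above, hedged: __releases() and __acquires() are sparse context annotations. They document that the function is entered with ehci->lock held, drops it around usb_hcd_giveback_urb() because the URB's completion handler may re-enter the HCD and try to take the same lock, and retakes it before returning, so static lock-balance checking does not flag the imbalance.)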
static int qh_schedule (struct ehci_hcd *ehci, struct ehci_qh *qh);
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
/*
* basic lifecycle operations
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_USB2 | HCD_MEMORY | HCD_BH,
+ .flags = HCD_USB2 | HCD_MEMORY,
/*
* basic lifecycle operations
* Generic hardware linkage.
*/
.irq = ehci_irq,
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
/*
* Basic lifecycle operations.
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_USB2|HCD_MEMORY|HCD_BH,
+ .flags = HCD_USB2|HCD_MEMORY,
/*
* basic lifecycle operations
* generic hardware linkage
*/
.irq = ehci_irq,
- .flags = HCD_MEMORY | HCD_USB2 | HCD_BH,
+ .flags = HCD_MEMORY | HCD_USB2,
/*
* basic lifecycle operations
enum fsl_usb2_operating_modes op_mode; /* operating mode */
};
-struct fsl_usb2_dev_data dr_mode_data[] = {
+static struct fsl_usb2_dev_data dr_mode_data[] = {
{
.dr_mode = "host",
.drivers = { "fsl-ehci", NULL, NULL, },
},
};
-struct fsl_usb2_dev_data *get_dr_mode_data(struct device_node *np)
+static struct fsl_usb2_dev_data *get_dr_mode_data(struct device_node *np)
{
const unsigned char *prop;
int i;
return FSL_USB2_PHY_NONE;
}
-struct platform_device *fsl_usb2_device_register(
+static struct platform_device *fsl_usb2_device_register(
struct platform_device *ofdev,
struct fsl_usb2_platform_data *pdata,
const char *name, int id)
i = DIV_ROUND_UP(wrap_frame(
cur_frame - urb->start_frame),
urb->interval);
- if (urb->transfer_flags & URB_ISO_ASAP) {
+
+ /* Treat underruns as if URB_ISO_ASAP was set */
+ if ((urb->transfer_flags & URB_ISO_ASAP) ||
+ i >= urb->number_of_packets) {
urb->start_frame = wrap_frame(urb->start_frame
+ i * urb->interval);
i = 0;
- } else if (i >= urb->number_of_packets) {
- ret = -EXDEV;
- goto alloc_dmem_failed;
}
}
}
frame &= ~(ed->interval - 1);
frame |= ed->branch;
urb->start_frame = frame;
+ ed->last_iso = frame + ed->interval * (size - 1);
}
} else if (ed->type == PIPE_ISOCHRONOUS) {
u16 next = ohci_frame_no(ohci) + 1;
u16 frame = ed->last_iso + ed->interval;
+ u16 length = ed->interval * (size - 1);
/* Behind the scheduling threshold? */
if (unlikely(tick_before(frame, next))) {
- /* USB_ISO_ASAP: Round up to the first available slot */
+ /* URB_ISO_ASAP: Round up to the first available slot */
if (urb->transfer_flags & URB_ISO_ASAP) {
frame += (next - frame + ed->interval - 1) &
-ed->interval;
/*
- * Not ASAP: Use the next slot in the stream. If
- * the entire URB falls before the threshold, fail.
+ * Not ASAP: Use the next slot in the stream,
+ * no matter what.
*/
} else {
- if (tick_before(frame + ed->interval *
- (urb->number_of_packets - 1), next)) {
- retval = -EXDEV;
- usb_hcd_unlink_urb_from_ep(hcd, urb);
- goto fail;
- }
-
/*
* Some OHCI hardware doesn't handle late TDs
* correctly. After retiring them it proceeds
urb_priv->td_cnt = DIV_ROUND_UP(
(u16) (next - frame),
ed->interval);
+ if (urb_priv->td_cnt >= urb_priv->length) {
+ ++urb_priv->td_cnt; /* Mark it */
+ ohci_dbg(ohci, "iso underrun %p (%u+%u < %u)\n",
+ urb, frame, length,
+ next);
+ }
}
}
urb->start_frame = frame;
+ ed->last_iso = frame + length;
}
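(Hedged worked example of the skip computation above: with frame = 100, next = 110 and ed->interval = 4, td_cnt = DIV_ROUND_UP(10, 4) = 3, so the first three TDs are treated as missed. If the URB only carries three packets, td_cnt reaches urb_priv->length and is bumped one past it to mark the whole URB as an underrun, and the reworked finish_urb() path in this patch then gives it back without any TDs having run.)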
/* fill the TDs and link them to the ed; and
__releases(ohci->lock)
__acquires(ohci->lock)
{
- struct device *dev = ohci_to_hcd(ohci)->self.controller;
+ struct device *dev = ohci_to_hcd(ohci)->self.controller;
+ struct usb_host_endpoint *ep = urb->ep;
+ struct urb_priv *urb_priv;
+
// ASSERT (urb->hcpriv != 0);
+ restart:
urb_free_priv (ohci, urb->hcpriv);
urb->hcpriv = NULL;
if (likely(status == -EINPROGRESS))
ohci->hc_control &= ~(OHCI_CTRL_PLE|OHCI_CTRL_IE);
ohci_writel (ohci, ohci->hc_control, &ohci->regs->control);
}
+
+ /*
+ * An isochronous URB that is sumitted too late won't have any TDs
+ * An isochronous URB that is submitted too late won't have any TDs
+ * (marked by the fact that the td_cnt value is larger than the
+ * actual number of TDs). If the next URB on this endpoint is like
+ * that, give it back now.
+ */
+ if (!list_empty(&ep->urb_list)) {
+ urb = list_first_entry(&ep->urb_list, struct urb, urb_list);
+ urb_priv = urb->hcpriv;
+ if (urb_priv->td_cnt > urb_priv->length) {
+ status = 0;
+ goto restart;
+ }
+ }
}
td->hwCBP = cpu_to_hc32 (ohci, data & 0xFFFFF000);
*ohci_hwPSWp(ohci, td, 0) = cpu_to_hc16 (ohci,
(data & 0x0FFF) | 0xE000);
- td->ed->last_iso = info & 0xffff;
} else {
td->hwCBP = cpu_to_hc32 (ohci, data);
}
urb_priv->td_cnt++;
/* if URB is done, clean up */
- if (urb_priv->td_cnt == urb_priv->length) {
+ if (urb_priv->td_cnt >= urb_priv->length) {
modified = completed = 1;
finish_urb(ohci, urb, 0);
}
urb_priv->td_cnt++;
/* If all this urb's TDs are done, call complete() */
- if (urb_priv->td_cnt == urb_priv->length)
+ if (urb_priv->td_cnt >= urb_priv->length)
finish_urb(ohci, urb, status);
/* clean schedule: unlink EDs that are no longer busy */
* switchable ports.
*/
pci_write_config_dword(xhci_pdev, USB_INTEL_USB3_PSSEN,
- cpu_to_le32(ports_available));
+ ports_available);
pci_read_config_dword(xhci_pdev, USB_INTEL_USB3_PSSEN,
&ports_available);
* host.
*/
pci_write_config_dword(xhci_pdev, USB_INTEL_XUSB2PR,
- cpu_to_le32(ports_available));
+ ports_available);
pci_read_config_dword(xhci_pdev, USB_INTEL_XUSB2PR,
&ports_available);
.remove = usb_hcd_pci_remove,
.shutdown = uhci_shutdown,
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_PM
.driver = {
.pm = &usb_hcd_pci_pm_ops
},
}
/* Fell behind? */
- if (uhci_frame_before_eq(frame, next)) {
+ if (!uhci_frame_before_eq(next, frame)) {
/* USB_ISO_ASAP: Round up to the first available slot */
if (urb->transfer_flags & URB_ISO_ASAP)
-qh->period;
/*
- * Not ASAP: Use the next slot in the stream. If
- * the entire URB falls before the threshold, fail.
+ * Not ASAP: Use the next slot in the stream,
+ * no matter what.
*/
else if (!uhci_frame_before_eq(next,
frame + (urb->number_of_packets - 1) *
qh->period))
- return -EXDEV;
+ dev_dbg(uhci_dev(uhci), "iso underrun %p (%u+%u < %u)\n",
+ urb, frame,
+ (urb->number_of_packets - 1) *
+ qh->period,
+ next);
}
}
if (virt_dev->eps[i].ring && virt_dev->eps[i].ring->dequeue)
xhci_queue_stop_endpoint(xhci, slot_id, i, suspend);
}
- cmd->command_trb = xhci->cmd_ring->enqueue;
+ cmd->command_trb = xhci_find_next_enqueue(xhci->cmd_ring);
list_add_tail(&cmd->cmd_list, &virt_dev->cmd_list);
xhci_queue_stop_endpoint(xhci, slot_id, 0, suspend);
xhci_ring_cmd_db(xhci);
* - Mark a port as being done with device resume,
* and ring the endpoint doorbells.
* - Stop the Synopsys redriver Compliance Mode polling.
+ * - Drop and reacquire the xHCI lock, in order to wait for port resume.
*/
static u32 xhci_get_port_status(struct usb_hcd *hcd,
struct xhci_bus_state *bus_state,
__le32 __iomem **port_array,
- u16 wIndex, u32 raw_port_status)
+ u16 wIndex, u32 raw_port_status,
+ unsigned long flags)
+ __releases(&xhci->lock)
+ __acquires(&xhci->lock)
{
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
u32 status = 0;
return 0xffffffff;
if (time_after_eq(jiffies,
bus_state->resume_done[wIndex])) {
+ int time_left;
+
xhci_dbg(xhci, "Resume USB2 port %d\n",
wIndex + 1);
bus_state->resume_done[wIndex] = 0;
clear_bit(wIndex, &bus_state->resuming_ports);
+
+ set_bit(wIndex, &bus_state->rexit_ports);
xhci_set_link_state(xhci, port_array, wIndex,
XDEV_U0);
- xhci_dbg(xhci, "set port %d resume\n",
- wIndex + 1);
- slot_id = xhci_find_slot_id_by_port(hcd, xhci,
- wIndex + 1);
- if (!slot_id) {
- xhci_dbg(xhci, "slot_id is zero\n");
- return 0xffffffff;
+
+ spin_unlock_irqrestore(&xhci->lock, flags);
+ time_left = wait_for_completion_timeout(
+ &bus_state->rexit_done[wIndex],
+ msecs_to_jiffies(
+ XHCI_MAX_REXIT_TIMEOUT));
+ spin_lock_irqsave(&xhci->lock, flags);
+
+ if (time_left) {
+ slot_id = xhci_find_slot_id_by_port(hcd,
+ xhci, wIndex + 1);
+ if (!slot_id) {
+ xhci_dbg(xhci, "slot_id is zero\n");
+ return 0xffffffff;
+ }
+ xhci_ring_device(xhci, slot_id);
+ } else {
+ int port_status = xhci_readl(xhci,
+ port_array[wIndex]);
+ xhci_warn(xhci, "Port resume took longer than %i msec, port status = 0x%x\n",
+ XHCI_MAX_REXIT_TIMEOUT,
+ port_status);
+ status |= USB_PORT_STAT_SUSPEND;
+ clear_bit(wIndex, &bus_state->rexit_ports);
}
- xhci_ring_device(xhci, slot_id);
+
bus_state->port_c_suspend |= 1 << wIndex;
bus_state->suspended_ports &= ~(1 << wIndex);
} else {
break;
}
status = xhci_get_port_status(hcd, bus_state, port_array,
- wIndex, temp);
+ wIndex, temp, flags);
if (status == 0xffffffff)
goto error;
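The resume handling above follows a common shape: record that we are waiting, drop the xHCI lock, sleep on a per-port completion with a timeout (the completion is fired from the port-status event handler), then retake the lock and act on whether the event arrived. A hedged, generic sketch of that shape with invented names (lock and completion are assumed to be initialized elsewhere):

	#include <linux/completion.h>
	#include <linux/jiffies.h>
	#include <linux/printk.h>
	#include <linux/spinlock.h>

	struct example_dev {
		spinlock_t lock;
		struct completion event_done;
	};

	static void example_wait_for_port_event(struct example_dev *d)
	{
		unsigned long flags, left;

		spin_lock_irqsave(&d->lock, flags);
		/* ... decide that the port event must be waited for ... */
		spin_unlock_irqrestore(&d->lock, flags);

		left = wait_for_completion_timeout(&d->event_done,
						   msecs_to_jiffies(20));

		spin_lock_irqsave(&d->lock, flags);
		if (!left)
			pr_warn("example: port event did not arrive in time\n");
		/* ... re-read hardware state under the lock ... */
		spin_unlock_irqrestore(&d->lock, flags);
	}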
t1 = xhci_port_state_to_neutral(t1);
if (t1 != t2)
xhci_writel(xhci, t2, port_array[port_index]);
-
- if (hcd->speed != HCD_USB3) {
- /* enable remote wake up for USB 2.0 */
- __le32 __iomem *addr;
- u32 tmp;
-
- /* Get the port power control register address. */
- addr = port_array[port_index] + PORTPMSC;
- tmp = xhci_readl(xhci, addr);
- tmp |= PORT_RWE;
- xhci_writel(xhci, tmp, addr);
- }
}
hcd->state = HC_STATE_SUSPENDED;
bus_state->next_statechange = jiffies + msecs_to_jiffies(10);
xhci_ring_device(xhci, slot_id);
} else
xhci_writel(xhci, temp, port_array[port_index]);
-
- if (hcd->speed != HCD_USB3) {
- /* disable remote wake up for USB 2.0 */
- __le32 __iomem *addr;
- u32 tmp;
-
- /* Add one to the port status register address to get
- * the port power control register address.
- */
- addr = port_array[port_index] + PORTPMSC;
- tmp = xhci_readl(xhci, addr);
- tmp &= ~PORT_RWE;
- xhci_writel(xhci, tmp, addr);
- }
}
(void) xhci_readl(xhci, &xhci->op_regs->command);
for (i = 0; i < USB_MAXCHILDREN; ++i) {
xhci->bus_state[0].resume_done[i] = 0;
xhci->bus_state[1].resume_done[i] = 0;
+ /* Only the USB 2.0 completions will ever be used. */
+ init_completion(&xhci->bus_state[1].rexit_done[i]);
}
if (scratchpad_alloc(xhci, flags))
#define PCI_VENDOR_ID_ETRON 0x1b6f
#define PCI_DEVICE_ID_ASROCK_P67 0x7023
+#define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31
+#define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31
+
static const char hcd_name[] = "xhci_hcd";
/* called after powerup, by probe or system-pm "wakeup" */
"QUIRK: Fresco Logic xHC needs configure"
" endpoint cmd after reset endpoint");
}
+ if (pdev->device == PCI_DEVICE_ID_FRESCO_LOGIC_PDK &&
+ pdev->revision == 0x4) {
+ xhci->quirks |= XHCI_SLOW_SUSPEND;
+ xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+ "QUIRK: Fresco Logic xHC revision %u"
+ "must be suspended extra slowly",
+ pdev->revision);
+ }
/* Fresco Logic confirms: all revisions of this chip do not
* support MSI, even though some of them claim to in their PCI
* capabilities.
xhci->quirks |= XHCI_SPURIOUS_REBOOT;
xhci->quirks |= XHCI_AVOID_BEI;
}
+ if (pdev->vendor == PCI_VENDOR_ID_INTEL &&
+ (pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI ||
+ pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI)) {
+ /* Workaround for occasional spurious wakeups from S5 (or
+ * any other sleep) on Haswell machines with LPT and LPT-LP
+ * with the new Intel BIOS
+ */
+ xhci->quirks |= XHCI_SPURIOUS_WAKEUP;
+ }
if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
pdev->device == PCI_DEVICE_ID_ASROCK_P67) {
xhci->quirks |= XHCI_RESET_ON_RESUME;
usb_put_hcd(xhci->shared_hcd);
}
usb_hcd_pci_remove(dev);
+
+ /* Workaround for spurious wakeups at shutdown with HSW */
+ if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+ pci_set_power_state(dev, PCI_D3hot);
+
kfree(xhci);
}
/* suspend and resume implemented later */
.shutdown = usb_hcd_pci_shutdown,
-#ifdef CONFIG_PM_SLEEP
+#ifdef CONFIG_PM
.driver = {
.pm = &usb_hcd_pci_pm_ops
},
return TRB_TYPE_LINK_LE32(link->control);
}
+union xhci_trb *xhci_find_next_enqueue(struct xhci_ring *ring)
+{
+ /* Enqueue pointer can be left pointing to the link TRB,
+ * we must handle that
+ */
+ if (TRB_TYPE_LINK_LE32(ring->enqueue->link.control))
+ return ring->enq_seg->next->trbs;
+ return ring->enqueue;
+}
+
/* Updates trb to point to the next TRB in the ring, and updates seg if the next
* TRB is in a new segment. This does not skip over link TRBs, and it does not
* effect the ring dequeue or enqueue pointers.
/* Otherwise ring the doorbell(s) to restart queued transfers */
ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
}
- ep->stopped_td = NULL;
- ep->stopped_trb = NULL;
+
+ /* Clear stopped_td and stopped_trb if endpoint is not halted */
+ if (!(ep->ep_state & EP_HALTED)) {
+ ep->stopped_td = NULL;
+ ep->stopped_trb = NULL;
+ }
/*
* Drop the lock and complete the URBs in the cancelled TD list.
inc_deq(xhci, xhci->cmd_ring);
return;
}
+ /* There is no command to handle if we get a stop event when the
+ * command ring is empty; event->cmd_trb points to the next
+ * unset command
+ */
+ if (xhci->cmd_ring->dequeue == xhci->cmd_ring->enqueue)
+ return;
}
switch (le32_to_cpu(xhci->cmd_ring->dequeue->generic.field[3])
}
}
+ /*
+ * Check to see if xhci-hub.c is waiting on RExit to U0 transition (or
+ * RExit to a disconnect state). If so, let the driver know it's
+ * out of the RExit state.
+ */
+ if (!DEV_SUPERSPEED(temp) &&
+ test_and_clear_bit(faked_port_index,
+ &bus_state->rexit_ports)) {
+ complete(&bus_state->rexit_done[faked_port_index]);
+ bogus_port_status = true;
+ goto cleanup;
+ }
+
if (hcd->speed != HCD_USB3)
xhci_test_and_clear_bit(xhci, port_array, faked_port_index,
PORT_PLC);
spin_lock_irq(&xhci->lock);
xhci_halt(xhci);
+ /* Workaround for spurious wakeups at shutdown with HSW */
+ if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+ xhci_reset(xhci);
spin_unlock_irq(&xhci->lock);
xhci_cleanup_msix(xhci);
xhci_dbg_trace(xhci, trace_xhci_dbg_init,
"xhci_shutdown completed - status = %x",
xhci_readl(xhci, &xhci->op_regs->status));
+
+ /* Yet another workaround for spurious wakeups at shutdown with HSW */
+ if (xhci->quirks & XHCI_SPURIOUS_WAKEUP)
+ pci_set_power_state(to_pci_dev(hcd->self.controller), PCI_D3hot);
}
#ifdef CONFIG_PM
int xhci_suspend(struct xhci_hcd *xhci)
{
int rc = 0;
+ unsigned int delay = XHCI_MAX_HALT_USEC;
struct usb_hcd *hcd = xhci_to_hcd(xhci);
u32 command;
command = xhci_readl(xhci, &xhci->op_regs->command);
command &= ~CMD_RUN;
xhci_writel(xhci, command, &xhci->op_regs->command);
+
+ /* Some chips from Fresco Logic need an extraordinary delay */
+ delay *= (xhci->quirks & XHCI_SLOW_SUSPEND) ? 10 : 1;
+
if (xhci_handshake(xhci, &xhci->op_regs->status,
- STS_HALT, STS_HALT, XHCI_MAX_HALT_USEC)) {
+ STS_HALT, STS_HALT, delay)) {
xhci_warn(xhci, "WARN: xHC CMD_RUN timeout\n");
spin_unlock_irq(&xhci->lock);
return -ETIMEDOUT;
if (command) {
cmd_completion = command->completion;
cmd_status = &command->status;
- command->command_trb = xhci->cmd_ring->enqueue;
-
- /* Enqueue pointer can be left pointing to the link TRB,
- * we must handle that
- */
- if (TRB_TYPE_LINK_LE32(command->command_trb->link.control))
- command->command_trb =
- xhci->cmd_ring->enq_seg->next->trbs;
-
+ command->command_trb = xhci_find_next_enqueue(xhci->cmd_ring);
list_add_tail(&command->cmd_list, &virt_dev->cmd_list);
} else {
cmd_completion = &virt_dev->cmd_completion;
}
init_completion(cmd_completion);
- cmd_trb = xhci->cmd_ring->dequeue;
+ cmd_trb = xhci_find_next_enqueue(xhci->cmd_ring);
if (!ctx_change)
ret = xhci_queue_configure_endpoint(xhci, in_ctx->dma,
udev->slot_id, must_succeed);
/* Attempt to submit the Reset Device command to the command ring */
spin_lock_irqsave(&xhci->lock, flags);
- reset_device_cmd->command_trb = xhci->cmd_ring->enqueue;
-
- /* Enqueue pointer can be left pointing to the link TRB,
- * we must handle that
- */
- if (TRB_TYPE_LINK_LE32(reset_device_cmd->command_trb->link.control))
- reset_device_cmd->command_trb =
- xhci->cmd_ring->enq_seg->next->trbs;
+ reset_device_cmd->command_trb = xhci_find_next_enqueue(xhci->cmd_ring);
list_add_tail(&reset_device_cmd->cmd_list, &virt_dev->cmd_list);
ret = xhci_queue_reset_device(xhci, slot_id);
union xhci_trb *cmd_trb;
spin_lock_irqsave(&xhci->lock, flags);
- cmd_trb = xhci->cmd_ring->dequeue;
+ cmd_trb = xhci_find_next_enqueue(xhci->cmd_ring);
ret = xhci_queue_slot_control(xhci, TRB_ENABLE_SLOT, 0);
if (ret) {
spin_unlock_irqrestore(&xhci->lock, flags);
slot_ctx->dev_info >> 27);
spin_lock_irqsave(&xhci->lock, flags);
- cmd_trb = xhci->cmd_ring->dequeue;
+ cmd_trb = xhci_find_next_enqueue(xhci->cmd_ring);
ret = xhci_queue_address_device(xhci, virt_dev->in_ctx->dma,
udev->slot_id);
if (ret) {
unsigned long resume_done[USB_MAXCHILDREN];
/* which ports have started to resume */
unsigned long resuming_ports;
+ /* Which ports are waiting on RExit to U0 transition. */
+ unsigned long rexit_ports;
+ struct completion rexit_done[USB_MAXCHILDREN];
};
+
+/*
+ * It can take up to 20 ms to transition from RExit to U0 on the
+ * Intel Lynx Point LP xHCI host.
+ */
+#define XHCI_MAX_REXIT_TIMEOUT (20 * 1000)
+
static inline unsigned int hcd_index(struct usb_hcd *hcd)
{
if (hcd->speed == HCD_USB3)
#define XHCI_COMP_MODE_QUIRK (1 << 14)
#define XHCI_AVOID_BEI (1 << 15)
#define XHCI_PLAT (1 << 16)
+#define XHCI_SLOW_SUSPEND (1 << 17)
+#define XHCI_SPURIOUS_WAKEUP (1 << 18)
unsigned int num_active_eps;
unsigned int limit_active_eps;
/* There are two roothubs to keep track of bus suspend info for */
union xhci_trb *cmd_trb);
void xhci_ring_ep_doorbell(struct xhci_hcd *xhci, unsigned int slot_id,
unsigned int ep_index, unsigned int stream_id);
+union xhci_trb *xhci_find_next_enqueue(struct xhci_ring *ring);
/* xHCI roothub code */
void xhci_set_link_state(struct xhci_hcd *xhci, __le32 __iomem **port_array,
config USB_HSIC_USB3503
tristate "USB3503 HSIC to USB20 Driver"
depends on I2C
- select REGMAP
+ select REGMAP_I2C
help
This option enables support for SMSC USB3503 HSIC to USB 2.0 Driver.
}
/*
+ * Program the HDRC to start (enable interrupts, dma, etc.).
+ */
+void musb_start(struct musb *musb)
+{
+ void __iomem *regs = musb->mregs;
+ u8 devctl = musb_readb(regs, MUSB_DEVCTL);
+
+ dev_dbg(musb->controller, "<== devctl %02x\n", devctl);
+
+ /* Set INT enable registers, enable interrupts */
+ musb->intrtxe = musb->epmask;
+ musb_writew(regs, MUSB_INTRTXE, musb->intrtxe);
+ musb->intrrxe = musb->epmask & 0xfffe;
+ musb_writew(regs, MUSB_INTRRXE, musb->intrrxe);
+ musb_writeb(regs, MUSB_INTRUSBE, 0xf7);
+
+ musb_writeb(regs, MUSB_TESTMODE, 0);
+
+ /* put into basic highspeed mode and start session */
+ musb_writeb(regs, MUSB_POWER, MUSB_POWER_ISOUPDATE
+ | MUSB_POWER_HSENAB
+ /* ENSUSPEND wedges tusb */
+ /* | MUSB_POWER_ENSUSPEND */
+ );
+
+ musb->is_active = 0;
+ devctl = musb_readb(regs, MUSB_DEVCTL);
+ devctl &= ~MUSB_DEVCTL_SESSION;
+
+ /* session started after:
+ * (a) ID-grounded irq, host mode;
+ * (b) vbus present/connect IRQ, peripheral mode;
+ * (c) peripheral initiates, using SRP
+ */
+ if (musb->port_mode != MUSB_PORT_MODE_HOST &&
+ (devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) {
+ musb->is_active = 1;
+ } else {
+ devctl |= MUSB_DEVCTL_SESSION;
+ }
+
+ musb_platform_enable(musb);
+ musb_writeb(regs, MUSB_DEVCTL, devctl);
+}
+
+/*
* Make the HDRC stop (disable interrupts, etc.);
* reversible by musb_start
* called on gadget driver unregister
extern const char musb_driver_name[];
extern void musb_stop(struct musb *musb);
+extern void musb_start(struct musb *musb);
extern void musb_write_fifo(struct musb_hw_ep *ep, u16 len, const u8 *src);
extern void musb_read_fifo(struct musb_hw_ep *ep, u16 len, u8 *dst);
struct dsps_glue *glue;
int ret;
+ if (!strcmp(pdev->name, "musb-hdrc"))
+ return -ENODEV;
+
match = of_match_node(musb_dsps_of_match, pdev->dev.of_node);
if (!match) {
dev_err(&pdev->dev, "fail to get matching of_match struct\n");
musb->g.max_speed = USB_SPEED_HIGH;
musb->g.speed = USB_SPEED_UNKNOWN;
+ MUSB_DEV_MODE(musb);
+ musb->xceiv->otg->default_a = 0;
+ musb->xceiv->state = OTG_STATE_B_IDLE;
+
/* this "gadget" abstracts/virtualizes the controller */
musb->g.name = musb_driver_name;
musb->g.is_otg = 1;
musb->xceiv->state = OTG_STATE_B_IDLE;
spin_unlock_irqrestore(&musb->lock, flags);
+ musb_start(musb);
+
/* REVISIT: funcall to other code, which also
* handles power budgeting ... this way also
* ensures HdrcStart is indirectly called.
#include "musb_core.h"
-/*
-* Program the HDRC to start (enable interrupts, dma, etc.).
-*/
-static void musb_start(struct musb *musb)
-{
- void __iomem *regs = musb->mregs;
- u8 devctl = musb_readb(regs, MUSB_DEVCTL);
-
- dev_dbg(musb->controller, "<== devctl %02x\n", devctl);
-
- /* Set INT enable registers, enable interrupts */
- musb->intrtxe = musb->epmask;
- musb_writew(regs, MUSB_INTRTXE, musb->intrtxe);
- musb->intrrxe = musb->epmask & 0xfffe;
- musb_writew(regs, MUSB_INTRRXE, musb->intrrxe);
- musb_writeb(regs, MUSB_INTRUSBE, 0xf7);
-
- musb_writeb(regs, MUSB_TESTMODE, 0);
-
- /* put into basic highspeed mode and start session */
- musb_writeb(regs, MUSB_POWER, MUSB_POWER_ISOUPDATE
- | MUSB_POWER_HSENAB
- /* ENSUSPEND wedges tusb */
- /* | MUSB_POWER_ENSUSPEND */
- );
-
- musb->is_active = 0;
- devctl = musb_readb(regs, MUSB_DEVCTL);
- devctl &= ~MUSB_DEVCTL_SESSION;
-
- /* session started after:
- * (a) ID-grounded irq, host mode;
- * (b) vbus present/connect IRQ, peripheral mode;
- * (c) peripheral initiates, using SRP
- */
- if (musb->port_mode != MUSB_PORT_MODE_HOST &&
- (devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) {
- musb->is_active = 1;
- } else {
- devctl |= MUSB_DEVCTL_SESSION;
- }
-
- musb_platform_enable(musb);
- musb_writeb(regs, MUSB_DEVCTL, devctl);
-}
-
static void musb_port_suspend(struct musb *musb, bool do_suspend)
{
struct usb_otg *otg = musb->xceiv->otg;
/* platform driver interface */
-static int __init gpio_vbus_probe(struct platform_device *pdev)
+static int gpio_vbus_probe(struct platform_device *pdev)
{
struct gpio_vbus_mach_info *pdata = dev_get_platdata(&pdev->dev);
struct gpio_vbus_data *gpio_vbus;
return err;
}
-static int __exit gpio_vbus_remove(struct platform_device *pdev)
+static int gpio_vbus_remove(struct platform_device *pdev)
{
struct gpio_vbus_data *gpio_vbus = platform_get_drvdata(pdev);
struct gpio_vbus_mach_info *pdata = dev_get_platdata(&pdev->dev);
};
#endif
-/* NOTE: the gpio-vbus device may *NOT* be hotplugged */
-
MODULE_ALIAS("platform:gpio-vbus");
static struct platform_driver gpio_vbus_driver = {
.pm = &gpio_vbus_dev_pm_ops,
#endif
},
- .remove = __exit_p(gpio_vbus_remove),
+ .probe = gpio_vbus_probe,
+ .remove = gpio_vbus_remove,
};
-module_platform_driver_probe(gpio_vbus_driver, gpio_vbus_probe);
+module_platform_driver(gpio_vbus_driver);
MODULE_DESCRIPTION("simple GPIO controlled OTG transceiver driver");
MODULE_AUTHOR("Philipp Zabel");
return &dpll_map[i].params;
}
- return 0;
+ return NULL;
}
static int omap_usb3_suspend(struct usb_phy *x, int suspend)
- Suunto ANT+ USB device.
- Fundamental Software dongle.
- HP4x calculators
- - a number of Motoroloa phones
+ - a number of Motorola phones
- Siemens USB/MPI adapter.
- ViVOtech ViVOpay USB device.
- Infineon Modem Flashloader USB interface
{ USB_DEVICE(FTDI_VID, FTDI_LUMEL_PD12_PID) },
/* Crucible Devices */
{ USB_DEVICE(FTDI_VID, FTDI_CT_COMET_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_Z3X_PID) },
{ } /* Terminating entry */
};
* Manufacturer: Crucible Technologies
*/
#define FTDI_CT_COMET_PID 0x8e08
+
+/*
+ * Product: Z3X Box
+ * Manufacturer: Smart GSM Team
+ */
+#define FTDI_Z3X_PID 0x0011
#define HUAWEI_VENDOR_ID 0x12D1
#define HUAWEI_PRODUCT_E173 0x140C
+#define HUAWEI_PRODUCT_E1750 0x1406
#define HUAWEI_PRODUCT_K4505 0x1464
#define HUAWEI_PRODUCT_K3765 0x1465
#define HUAWEI_PRODUCT_K4605 0x14C6
#define CHANGHONG_VENDOR_ID 0x2077
#define CHANGHONG_PRODUCT_CH690 0x7001
+/* Inovia */
+#define INOVIA_VENDOR_ID 0x20a6
+#define INOVIA_SEW858 0x1105
+
/* some devices interfaces need special handling due to a number of reasons */
enum option_blacklist_reason {
OPTION_BLACKLIST_NONE = 0,
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1c23, USB_CLASS_COMM, 0x02, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E173, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t) &net_intf1_blacklist },
+ { USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_E1750, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t) &net_intf2_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1441, USB_CLASS_COMM, 0x02, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0x1442, USB_CLASS_COMM, 0x02, 0xff) },
{ USB_DEVICE_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, HUAWEI_PRODUCT_K4505, 0xff, 0xff, 0xff),
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7A) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7B) },
{ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x02, 0x7C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x01) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x02) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x03) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x04) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x05) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x06) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x0F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x10) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x12) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x13) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x14) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x15) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x17) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x18) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x19) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x1A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x1B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x1C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x31) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x32) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x33) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x34) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x35) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x36) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x3F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x48) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x49) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x4A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x4B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x4C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x61) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x62) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x63) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x64) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x65) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x66) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x78) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x79) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x03, 0x7C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x01) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x02) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x03) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x04) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x05) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x06) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x0F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x10) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x12) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x13) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x14) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x15) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x17) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x18) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x19) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x1A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x1B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x1C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x31) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x32) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x33) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x34) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x35) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x36) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x3F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x48) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x49) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x4A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x4B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x4C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x61) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x62) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x63) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x64) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x65) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x66) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x78) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x79) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x04, 0x7C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x01) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x02) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x03) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x04) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x05) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x06) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x0F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x10) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x12) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x13) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x14) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x15) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x17) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x18) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x19) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x1A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x1B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x1C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x31) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x32) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x33) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x34) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x35) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x36) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x3F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x48) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x49) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x4A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x4B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x4C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x61) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x62) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x63) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x64) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x65) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x66) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x78) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x79) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x05, 0x7C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x01) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x02) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x03) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x04) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x05) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x06) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x0F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x10) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x12) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x13) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x14) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x15) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x17) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x18) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x19) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x1A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x1B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x1C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x31) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x32) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x33) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x34) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x35) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x36) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x3F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x48) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x49) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x4A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x4B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x4C) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x61) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x62) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x63) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x64) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x65) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x66) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6D) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6E) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x6F) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x78) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x79) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7A) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7B) },
+ { USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, 0xff, 0x06, 0x7C) },
{ USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_V640) },
{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD100) },
{ USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD145) },
- { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200) },
+ { USB_DEVICE(OLIVETTI_VENDOR_ID, OLIVETTI_PRODUCT_OLICARD200),
+ .driver_info = (kernel_ulong_t)&net_intf6_blacklist
+ },
{ USB_DEVICE(CELOT_VENDOR_ID, CELOT_PRODUCT_CT680M) }, /* CT-650 CDMA 450 1xEVDO modem */
{ USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_GT_B3730, USB_CLASS_CDC_DATA, 0x00, 0x00) }, /* Samsung GT-B3730 LTE USB modem.*/
{ USB_DEVICE(YUGA_VENDOR_ID, YUGA_PRODUCT_CEM600) },
{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) },
{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
+ { USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) },
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, option_ids);
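The long Huawei runs above match on vendor plus interface class/subclass/protocol rather than on individual product IDs; the sketch below (an editorial illustration, not part of the patch, with the entry name invented for the example) shows roughly what one such table entry amounts to.

/*
 * Illustrative sketch: USB_VENDOR_AND_INTERFACE_INFO() builds a
 * struct usb_device_id that matches any product from the given vendor
 * whose interface descriptor carries the given class/subclass/protocol
 * triple, roughly equivalent to:
 */
static const struct usb_device_id example_huawei_match = {
	.match_flags = USB_DEVICE_ID_MATCH_VENDOR |
		       USB_DEVICE_ID_MATCH_INT_INFO,
	.idVendor = HUAWEI_VENDOR_ID,	/* 0x12D1 */
	.bInterfaceClass = 0xff,	/* vendor specific */
	.bInterfaceSubClass = 0x06,
	.bInterfaceProtocol = 0x10,
};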
* Copyright (C) 2001-2007 Greg Kroah-Hartman (greg@kroah.com)
* Copyright (C) 2003 IBM Corp.
*
- * Copyright (C) 2009, 2013 Frank Schäfer <fschaefer.oss@googlemail.com>
- * - fixes, improvements and documentation for the baud rate encoding methods
- * Copyright (C) 2013 Reinhard Max <max@suse.de>
- * - fixes and improvements for the divisor based baud rate encoding method
- *
* Original driver for 2.2.x by anonymous
*
* This program is free software; you can redistribute it and/or
enum pl2303_type {
- type_0, /* H version ? */
- type_1, /* H version ? */
- HX_TA, /* HX(A) / X(A) / TA version */ /* TODO: improve */
- HXD_EA_RA_SA, /* HXD / EA / RA / SA version */ /* TODO: improve */
- TB, /* TB version */
+ type_0, /* don't know the difference between type 0 and */
+ type_1, /* type 1, until someone from prolific tells us... */
+ HX, /* HX version of the pl2303 chip */
};
-/*
- * NOTE: don't know the difference between type 0 and type 1,
- * until someone from Prolific tells us...
- * TODO: distinguish between X/HX, TA and HXD, EA, RA, SA variants
- */
struct pl2303_serial_private {
enum pl2303_type type;
{
struct pl2303_serial_private *spriv;
enum pl2303_type type = type_0;
- char *type_str = "unknown (treating as type_0)";
unsigned char *buf;
spriv = kzalloc(sizeof(*spriv), GFP_KERNEL);
return -ENOMEM;
}
- if (serial->dev->descriptor.bDeviceClass == 0x02) {
+ if (serial->dev->descriptor.bDeviceClass == 0x02)
type = type_0;
- type_str = "type_0";
- } else if (serial->dev->descriptor.bMaxPacketSize0 == 0x40) {
- /*
- * NOTE: The bcdDevice version is the only difference between
- * the device descriptors of the X/HX, HXD, EA, RA, SA, TA, TB
- */
- if (le16_to_cpu(serial->dev->descriptor.bcdDevice) == 0x300) {
- type = HX_TA;
- type_str = "X/HX/TA";
- } else if (le16_to_cpu(serial->dev->descriptor.bcdDevice)
- == 0x400) {
- type = HXD_EA_RA_SA;
- type_str = "HXD/EA/RA/SA";
- } else if (le16_to_cpu(serial->dev->descriptor.bcdDevice)
- == 0x500) {
- type = TB;
- type_str = "TB";
- } else {
- dev_info(&serial->interface->dev,
- "unknown/unsupported device type\n");
- kfree(spriv);
- kfree(buf);
- return -ENODEV;
- }
- } else if (serial->dev->descriptor.bDeviceClass == 0x00
- || serial->dev->descriptor.bDeviceClass == 0xFF) {
+ else if (serial->dev->descriptor.bMaxPacketSize0 == 0x40)
+ type = HX;
+ else if (serial->dev->descriptor.bDeviceClass == 0x00)
type = type_1;
- type_str = "type_1";
- }
- dev_dbg(&serial->interface->dev, "device type: %s\n", type_str);
+ else if (serial->dev->descriptor.bDeviceClass == 0xFF)
+ type = type_1;
+ dev_dbg(&serial->interface->dev, "device type: %d\n", type);
spriv->type = type;
usb_set_serial_data(serial, spriv);
pl2303_vendor_read(0x8383, 0, serial, buf);
pl2303_vendor_write(0, 1, serial);
pl2303_vendor_write(1, 0, serial);
- if (type == type_0 || type == type_1)
- pl2303_vendor_write(2, 0x24, serial);
- else
+ if (type == HX)
pl2303_vendor_write(2, 0x44, serial);
+ else
+ pl2303_vendor_write(2, 0x24, serial);
kfree(buf);
return 0;
return retval;
}
-static int pl2303_baudrate_encode_direct(int baud, enum pl2303_type type,
- u8 buf[4])
+static void pl2303_encode_baudrate(struct tty_struct *tty,
+ struct usb_serial_port *port,
+ u8 buf[4])
{
- /*
- * NOTE: Only the values defined in baud_sup are supported !
- * => if unsupported values are set, the PL2303 seems to
- * use 9600 baud (at least my PL2303X always does)
- */
const int baud_sup[] = { 75, 150, 300, 600, 1200, 1800, 2400, 3600,
- 4800, 7200, 9600, 14400, 19200, 28800, 38400,
- 57600, 115200, 230400, 460800, 614400, 921600,
- 1228800, 2457600, 3000000, 6000000, 12000000 };
+ 4800, 7200, 9600, 14400, 19200, 28800, 38400,
+ 57600, 115200, 230400, 460800, 500000, 614400,
+ 921600, 1228800, 2457600, 3000000, 6000000 };
+
+ struct usb_serial *serial = port->serial;
+ struct pl2303_serial_private *spriv = usb_get_serial_data(serial);
+ int baud;
+ int i;
+
/*
- * NOTE: With the exception of type_0/1 devices, the following
- * additional baud rates are supported (tested with HX rev. 3A only):
- * 110*, 56000*, 128000, 134400, 161280, 201600, 256000*, 268800,
- * 403200, 806400. (*: not HX)
- *
- * Maximum values: HXD, TB: 12000000; HX, TA: 6000000;
- * type_0+1: 1228800; RA: 921600; SA: 115200
- *
- * As long as we are not using this encoding method for anything else
- * than the type_0+1 and HX chips, there is no point in complicating
- * the code to support them.
+ * NOTE: Only the values defined in baud_sup are supported!
+ * => if unsupported values are set, the PL2303 seems to use
+ * 9600 baud (at least my PL2303X always does)
*/
- int i;
+ baud = tty_get_baud_rate(tty);
+ dev_dbg(&port->dev, "baud requested = %d\n", baud);
+ if (!baud)
+ return;
/* Set baudrate to nearest supported value */
for (i = 0; i < ARRAY_SIZE(baud_sup); ++i) {
if (baud_sup[i] > baud)
break;
}
+
if (i == ARRAY_SIZE(baud_sup))
baud = baud_sup[i - 1];
else if (i > 0 && (baud_sup[i] - baud) > (baud - baud_sup[i - 1]))
baud = baud_sup[i - 1];
else
baud = baud_sup[i];
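/*
 * Editorial note (not part of the patch): the loop above picks the
 * supported rate closest to the request.  Example: a request for
 * 250000 stops at baud_sup[i] == 460800; since 460800 - 250000
 * (210800) is larger than 250000 - 230400 (19600), the code falls
 * back to 230400.
 */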
- /* Respect the chip type specific baud rate limits */
- /*
- * FIXME: as long as we don't know how to distinguish between the
- * HXD, EA, RA, and SA chip variants, allow the max. value of 12M.
- */
- if (type == HX_TA)
- baud = min_t(int, baud, 6000000);
- else if (type == type_0 || type == type_1)
- baud = min_t(int, baud, 1228800);
- /* Direct (standard) baud rate encoding method */
- put_unaligned_le32(baud, buf);
- return baud;
-}
-
-static int pl2303_baudrate_encode_divisor(int baud, enum pl2303_type type,
- u8 buf[4])
-{
- /*
- * Divisor based baud rate encoding method
- *
- * NOTE: it's not clear if the type_0/1 chips support this method
- *
- * divisor = 12MHz * 32 / baudrate = 2^A * B
- *
- * with
- *
- * A = buf[1] & 0x0e
- * B = buf[0] + (buf[1] & 0x01) << 8
- *
- * Special cases:
- * => 8 < B < 16: device seems to work not properly
- * => B <= 8: device uses the max. value B = 512 instead
- */
- unsigned int A, B;
+ /* type_0, type_1 only support up to 1228800 baud */
+ if (spriv->type != HX)
+ baud = min_t(int, baud, 1228800);
- /*
- * NOTE: The Windows driver allows maximum baud rates of 110% of the
- * specified maximium value.
- * Quick tests with early (2004) HX (rev. A) chips suggest, that even
- * higher baud rates (up to the maximum of 24M baud !) are working fine,
- * but that should really be tested carefully in "real life" scenarios
- * before removing the upper limit completely.
- * Baud rates smaller than the specified 75 baud are definitely working
- * fine.
- */
- if (type == type_0 || type == type_1)
- baud = min_t(int, baud, 1228800 * 1.1);
- else if (type == HX_TA)
- baud = min_t(int, baud, 6000000 * 1.1);
- else if (type == HXD_EA_RA_SA)
- /* HXD, EA: 12Mbps; RA: 1Mbps; SA: 115200 bps */
- /*
- * FIXME: as long as we don't know how to distinguish between
- * these chip variants, allow the max. of these values
- */
- baud = min_t(int, baud, 12000000 * 1.1);
- else if (type == TB)
- baud = min_t(int, baud, 12000000 * 1.1);
- /* Determine factors A and B */
- A = 0;
- B = 12000000 * 32 / baud; /* 12MHz */
- B <<= 1; /* Add one bit for rounding */
- while (B > (512 << 1) && A <= 14) {
- A += 2;
- B >>= 2;
- }
- if (A > 14) { /* max. divisor = min. baudrate reached */
- A = 14;
- B = 512;
- /* => ~45.78 baud */
+ if (baud <= 115200) {
+ put_unaligned_le32(baud, buf);
} else {
- B = (B + 1) >> 1; /* Round the last bit */
- }
- /* Handle special cases */
- if (B == 512)
- B = 0; /* also: 1 to 8 */
- else if (B < 16)
/*
- * NOTE: With the current algorithm this happens
- * only for A=0 and means that the min. divisor
- * (respectively: the max. baudrate) is reached.
+ * Apparently the formula for higher speeds is:
+ * baudrate = 12M * 32 / (2^buf[1]) / buf[0]
*/
- B = 16; /* => 24 MBaud */
- /* Encode the baud rate */
- buf[3] = 0x80; /* Select divisor encoding method */
- buf[2] = 0;
- buf[1] = (A & 0x0e); /* A */
- buf[1] |= ((B & 0x100) >> 8); /* MSB of B */
- buf[0] = B & 0xff; /* 8 LSBs of B */
- /* Calculate the actual/resulting baud rate */
- if (B <= 8)
- B = 512;
- baud = 12000000 * 32 / ((1 << A) * B);
-
- return baud;
-}
-
-static void pl2303_encode_baudrate(struct tty_struct *tty,
- struct usb_serial_port *port,
- enum pl2303_type type,
- u8 buf[4])
-{
- int baud;
+ unsigned tmp = 12000000 * 32 / baud;
+ buf[3] = 0x80;
+ buf[2] = 0;
+ buf[1] = (tmp >= 256);
+ while (tmp >= 256) {
+ tmp >>= 2;
+ buf[1] <<= 1;
+ }
+ buf[0] = tmp;
+ }
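/*
 * Editorial note (not part of the patch): worked example of the divisor
 * encoding above for a 500000 baud request (> 115200, so the else branch
 * is taken):
 *   tmp    = 12000000 * 32 / 500000 = 768
 *   buf[1] = 1, then one loop pass: tmp = 768 >> 2 = 192, buf[1] = 2
 *   buf[0] = 192
 * Check against the stated formula: 12M * 32 / (2^2) / 192 = 500000.
 */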
- baud = tty_get_baud_rate(tty);
- dev_dbg(&port->dev, "baud requested = %d\n", baud);
- if (!baud)
- return;
- /*
- * There are two methods for setting/encoding the baud rate
- * 1) Direct method: encodes the baud rate value directly
- * => supported by all chip types
- * 2) Divisor based method: encodes a divisor to a base value (12MHz*32)
- * => supported by HX chips (and likely not by type_0/1 chips)
- *
- * NOTE: Although the divisor based baud rate encoding method is much
- * more flexible, some of the standard baud rate values can not be
- * realized exactly. But the difference is very small (max. 0.2%) and
- * the device likely uses the same baud rate generator for both methods
- * so that there is likley no difference.
- */
- if (type == type_0 || type == type_1)
- baud = pl2303_baudrate_encode_direct(baud, type, buf);
- else
- baud = pl2303_baudrate_encode_divisor(baud, type, buf);
/* Save resulting baud rate */
tty_encode_baud_rate(tty, baud, baud);
dev_dbg(&port->dev, "baud set = %d\n", baud);
dev_dbg(&port->dev, "data bits = %d\n", buf[6]);
}
- /* For reference: buf[0]:buf[3] baud rate value */
- pl2303_encode_baudrate(tty, port, spriv->type, buf);
+ /* For reference buf[0]:buf[3] baud rate value */
+ pl2303_encode_baudrate(tty, port, &buf[0]);
/* For reference buf[4]=0 is 1 stop bits */
/* For reference buf[4]=1 is 1.5 stop bits */
dev_dbg(&port->dev, "0xa1:0x21:0:0 %d - %7ph\n", i, buf);
if (C_CRTSCTS(tty)) {
- if (spriv->type == type_0 || spriv->type == type_1)
- pl2303_vendor_write(0x0, 0x41, serial);
- else
+ if (spriv->type == HX)
pl2303_vendor_write(0x0, 0x61, serial);
+ else
+ pl2303_vendor_write(0x0, 0x41, serial);
} else {
pl2303_vendor_write(0x0, 0x0, serial);
}
struct pl2303_serial_private *spriv = usb_get_serial_data(serial);
int result;
- if (spriv->type == type_0 || spriv->type == type_1) {
+ if (spriv->type != HX) {
usb_clear_halt(serial->dev, port->write_urb->pipe);
usb_clear_halt(serial->dev, port->read_urb->pipe);
} else {
{ USB_DEVICE(IBM_VENDOR_ID, IBM_454B_PRODUCT_ID) },
{ USB_DEVICE(IBM_VENDOR_ID, IBM_454C_PRODUCT_ID) },
{ USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_PRODUCT_ID) },
+ { USB_DEVICE(ABBOTT_VENDOR_ID, ABBOTT_STRIP_PORT_ID) },
{ USB_DEVICE(TI_VENDOR_ID, FRI2_PRODUCT_ID) },
{ } /* terminator */
};
/*
* Many devices do not respond properly to READ_CAPACITY_16.
* Tell the SCSI layer to try READ_CAPACITY_10 first.
+ * However, some USB 3.0 drive enclosures return the capacity
+ * modulo 2 TB. Those must use READ_CAPACITY_16.
*/
- sdev->try_rc_10_first = 1;
+ if (!(us->fflags & US_FL_NEEDS_CAP16))
+ sdev->try_rc_10_first = 1;
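/*
 * Editorial note (not part of the patch): READ_CAPACITY(10) reports a
 * 32-bit LBA, so with 512-byte sectors it can describe at most
 * 2^32 * 512 bytes = 2 TiB.  Enclosures that truncate instead of
 * saturating report the capacity modulo 2 TiB, which is why such
 * devices are flagged US_FL_NEEDS_CAP16 and probed with
 * READ_CAPACITY(16) instead.
 */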
/* assume SPC3 or later devices support sense size > 18 */
if (sdev->scsi_level > SCSI_SPC_2)
USB_SC_DEVICE, USB_PR_DEVICE, NULL,
US_FL_IGNORE_RESIDUE ),
+/* Reported by Oliver Neukum <oneukum@suse.com> */
+UNUSUAL_DEV( 0x174c, 0x55aa, 0x0100, 0x0100,
+ "ASMedia",
+ "AS2105",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NEEDS_CAP16),
+
/* Reported by Jesse Feddema <jdfeddema@gmail.com> */
UNUSUAL_DEV( 0x177f, 0x0400, 0x0000, 0x0000,
"Yarvik",
long npage;
int ret = 0, prot = 0;
uint64_t mask;
+ struct vfio_dma *dma = NULL;
+ unsigned long pfn;
end = map->iova + map->size;
}
for (iova = map->iova; iova < end; iova += size, vaddr += size) {
- struct vfio_dma *dma = NULL;
- unsigned long pfn;
long i;
/* Pin a contiguous chunk of memory */
if (npage <= 0) {
WARN_ON(!npage);
ret = (int)npage;
- break;
+ goto out;
}
/* Verify pages are not already mapped */
for (i = 0; i < npage; i++) {
if (iommu_iova_to_phys(iommu->domain,
iova + (i << PAGE_SHIFT))) {
- vfio_unpin_pages(pfn, npage, prot, true);
ret = -EBUSY;
- break;
+ goto out_unpin;
}
}
if (ret) {
if (ret != -EBUSY ||
map_try_harder(iommu, iova, pfn, npage, prot)) {
- vfio_unpin_pages(pfn, npage, prot, true);
- break;
+ goto out_unpin;
}
}
dma = kzalloc(sizeof(*dma), GFP_KERNEL);
if (!dma) {
iommu_unmap(iommu->domain, iova, size);
- vfio_unpin_pages(pfn, npage, prot, true);
ret = -ENOMEM;
- break;
+ goto out_unpin;
}
dma->size = size;
}
}
- if (ret) {
- struct vfio_dma *tmp;
- iova = map->iova;
- size = map->size;
- while ((tmp = vfio_find_dma(iommu, iova, size))) {
- int r = vfio_remove_dma_overlap(iommu, iova,
- &size, tmp);
- if (WARN_ON(r || !size))
- break;
- }
+ WARN_ON(ret);
+ mutex_unlock(&iommu->lock);
+ return ret;
+
+out_unpin:
+ vfio_unpin_pages(pfn, npage, prot, true);
+
+out:
+ iova = map->iova;
+ size = map->size;
+ while ((dma = vfio_find_dma(iommu, iova, size))) {
+ int r = vfio_remove_dma_overlap(iommu, iova,
+ &size, dma);
+ if (WARN_ON(r || !size))
+ break;
}
mutex_unlock(&iommu->lock);
u32 i;
for (i = 0; i < tv_cmd->tvc_sgl_count; i++)
put_page(sg_page(&tv_cmd->tvc_sgl[i]));
- }
+ }
tcm_vhost_put_inflight(tv_cmd->inflight);
percpu_ida_free(&se_sess->sess_tag_pool, se_cmd->map_tag);
}
se_sess = tv_nexus->tvn_se_sess;
- tag = percpu_ida_alloc(&se_sess->sess_tag_pool, GFP_KERNEL);
+ tag = percpu_ida_alloc(&se_sess->sess_tag_pool, GFP_ATOMIC);
+ if (tag < 0) {
+ pr_err("Unable to obtain tag for tcm_vhost_cmd\n");
+ return ERR_PTR(-ENOMEM);
+ }
+
cmd = &((struct tcm_vhost_cmd *)se_sess->sess_cmd_map)[tag];
sg = cmd->tvc_sgl;
pages = cmd->tvc_upages;
if (data_direction != DMA_NONE) {
ret = vhost_scsi_map_iov_to_sgl(cmd,
&vq->iov[data_first], data_num,
- data_direction == DMA_TO_DEVICE);
+ data_direction == DMA_FROM_DEVICE);
if (unlikely(ret)) {
vq_err(vq, "Failed to map iov to sgl\n");
goto err_free;
return 0;
}
+static void vhost_scsi_free(struct vhost_scsi *vs)
+{
+ if (is_vmalloc_addr(vs))
+ vfree(vs);
+ else
+ kfree(vs);
+}
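/*
 * Editorial note (not part of the patch): this helper pairs kfree()
 * with vfree() by probing the pointer with is_vmalloc_addr().  It
 * assumes the matching allocation path (shown further down in this
 * hunk) first tries kzalloc(GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT)
 * and falls back to vzalloc() when a physically contiguous block of
 * that size cannot be obtained.
 */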
+
static int vhost_scsi_open(struct inode *inode, struct file *f)
{
struct vhost_scsi *vs;
struct vhost_virtqueue **vqs;
- int r, i;
+ int r = -ENOMEM, i;
- vs = kzalloc(sizeof(*vs), GFP_KERNEL);
- if (!vs)
- return -ENOMEM;
+ vs = kzalloc(sizeof(*vs), GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+ if (!vs) {
+ vs = vzalloc(sizeof(*vs));
+ if (!vs)
+ goto err_vs;
+ }
vqs = kmalloc(VHOST_SCSI_MAX_VQ * sizeof(*vqs), GFP_KERNEL);
- if (!vqs) {
- kfree(vs);
- return -ENOMEM;
- }
+ if (!vqs)
+ goto err_vqs;
vhost_work_init(&vs->vs_completion_work, vhost_scsi_complete_cmd_work);
vhost_work_init(&vs->vs_event_work, tcm_vhost_evt_work);
tcm_vhost_init_inflight(vs, NULL);
- if (r < 0) {
- kfree(vqs);
- kfree(vs);
- return r;
- }
+ if (r < 0)
+ goto err_init;
f->private_data = vs;
return 0;
+
+err_init:
+ kfree(vqs);
+err_vqs:
+ vhost_scsi_free(vs);
+err_vs:
+ return r;
}
static int vhost_scsi_release(struct inode *inode, struct file *f)
/* Jobs can re-queue themselves in evt kick handler. Do extra flush. */
vhost_scsi_flush(vs);
kfree(vs->dev.vqs);
- kfree(vs);
+ vhost_scsi_free(vs);
return 0;
}
if (list_empty(&work->node)) {
list_add_tail(&work->node, &dev->work_list);
work->queue_seq++;
+ spin_unlock_irqrestore(&dev->work_lock, flags);
wake_up_process(dev->worker);
+ } else {
+ spin_unlock_irqrestore(&dev->work_lock, flags);
}
- spin_unlock_irqrestore(&dev->work_lock, flags);
}
EXPORT_SYMBOL_GPL(vhost_work_queue);
int au1100fb_fb_mmap(struct fb_info *fbi, struct vm_area_struct *vma)
{
struct au1100fb_device *fbdev;
- unsigned int len;
- unsigned long start=0, off;
fbdev = to_au1100fb_device(fbi);
- if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) {
- return -EINVAL;
- }
-
- start = fbdev->fb_phys & PAGE_MASK;
- len = PAGE_ALIGN((start & ~PAGE_MASK) + fbdev->fb_len);
-
- off = vma->vm_pgoff << PAGE_SHIFT;
-
- if ((vma->vm_end - vma->vm_start + off) > len) {
- return -EINVAL;
- }
-
- off += start;
- vma->vm_pgoff = off >> PAGE_SHIFT;
-
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
pgprot_val(vma->vm_page_prot) |= (6 << 9); //CCA=6
- if (io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT,
- vma->vm_end - vma->vm_start,
- vma->vm_page_prot)) {
- return -EAGAIN;
- }
-
- return 0;
+ return vm_iomap_memory(vma, fbdev->fb_phys, fbdev->fb_len);
}
static struct fb_ops au1100fb_ops =
* method mainly to allow the use of the TLB streaming flag (CCA=6)
*/
static int au1200fb_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
-
{
- unsigned int len;
- unsigned long start=0, off;
struct au1200fb_device *fbdev = info->par;
- if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) {
- return -EINVAL;
- }
-
- start = fbdev->fb_phys & PAGE_MASK;
- len = PAGE_ALIGN((start & ~PAGE_MASK) + fbdev->fb_len);
-
- off = vma->vm_pgoff << PAGE_SHIFT;
-
- if ((vma->vm_end - vma->vm_start + off) > len) {
- return -EINVAL;
- }
-
- off += start;
- vma->vm_pgoff = off >> PAGE_SHIFT;
-
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
pgprot_val(vma->vm_page_prot) |= _CACHE_MASK; /* CCA=7 */
- return io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT,
- vma->vm_end - vma->vm_start,
- vma->vm_page_prot);
+ return vm_iomap_memory(vma, fbdev->fb_phys, fbdev->fb_len);
}
static void set_global(u_int cmd, struct au1200_lcd_global_regs_t *pdata)
if (IS_ERR(ctrl->clk)) {
dev_err(ctrl->dev, "unable to get clk %s\n", mi->clk_name);
ret = -ENOENT;
- goto failed_get_clk;
+ goto failed;
}
clk_prepare_enable(ctrl->clk);
path_deinit(path_plat);
}
- if (ctrl->clk) {
- devm_clk_put(ctrl->dev, ctrl->clk);
- clk_disable_unprepare(ctrl->clk);
- }
-failed_get_clk:
- devm_free_irq(ctrl->dev, ctrl->irq, ctrl);
+ clk_disable_unprepare(ctrl->clk);
failed:
- if (ctrl) {
- if (ctrl->reg_base)
- devm_iounmap(ctrl->dev, ctrl->reg_base);
- devm_release_mem_region(ctrl->dev, res->start,
- resource_size(res));
- devm_kfree(ctrl->dev, ctrl);
- }
-
dev_err(&pdev->dev, "device init failed\n");
return ret;
break;
case 3:
bits_per_pixel = 32;
+ break;
case 1:
default:
return -EINVAL;
if (!fb_find_mode(&info->var, info, mode_option, NULL, 0,
info->monspecs.modedb, 16)) {
printk(KERN_ERR "neofb: Unable to find usable video mode.\n");
+ err = -EINVAL;
goto err_map_video;
}
info->fix.smem_len >> 10, info->var.xres,
info->var.yres, h_sync / 1000, h_sync % 1000, v_sync);
- if (fb_alloc_cmap(&info->cmap, 256, 0) < 0)
+ err = fb_alloc_cmap(&info->cmap, 256, 0);
+ if (err < 0)
goto err_map_video;
err = register_framebuffer(info);
return -EINVAL;
}
- timing_np = of_find_node_by_name(np, name);
+ timing_np = of_get_child_by_name(np, name);
if (!timing_np) {
pr_err("%s: could not find node '%s'\n",
of_node_full_name(np), name);
struct display_timings *disp;
if (!np) {
- pr_err("%s: no devicenode given\n", of_node_full_name(np));
+ pr_err("%s: no device node given\n", of_node_full_name(np));
return NULL;
}
- timings_np = of_find_node_by_name(np, "display-timings");
+ timings_np = of_get_child_by_name(np, "display-timings");
if (!timings_np) {
pr_err("%s: could not find display-timings node\n",
of_node_full_name(np));
config DISPLAY_PANEL_DSI_CM
tristate "Generic DSI Command Mode Panel"
+ depends on BACKLIGHT_CLASS_DEVICE
help
Driver for generic DSI command mode panels.
in = omap_dss_find_output(pdata->source);
if (in == NULL) {
dev_err(&pdev->dev, "Failed to find video source\n");
- return -ENODEV;
+ return -EPROBE_DEFER;
}
ddata->in = in;
in = omap_dss_find_output(pdata->source);
if (in == NULL) {
dev_err(&pdev->dev, "Failed to find video source\n");
- return -ENODEV;
+ return -EPROBE_DEFER;
}
ddata->in = in;
in = omap_dss_find_output(pdata->source);
if (in == NULL) {
dev_err(&pdev->dev, "Failed to find video source\n");
- return -ENODEV;
+ return -EPROBE_DEFER;
}
ddata->in = in;
}
pm_runtime_enable(&pdev->dev);
+ pm_runtime_irq_safe(&pdev->dev);
r = dispc_runtime_get();
if (r)
(info->var.bits_per_pixel * info->var.xres_virtual);
if (info->var.yres_virtual < info->var.yres) {
dev_err(info->device, "virtual vertical size smaller than real\n");
- goto err_find_mode;
- }
-
- /* maximize virtual vertical size for fast scrolling */
- info->var.yres_virtual = info->fix.smem_len * 8 /
- (info->var.bits_per_pixel * info->var.xres_virtual);
- if (info->var.yres_virtual < info->var.yres) {
- dev_err(info->device, "virtual vertical size smaller than real\n");
+ rc = -EINVAL;
goto err_find_mode;
}
sl = dev_to_w1_slave(dev);
fops = sl->family->fops;
+ if (!fops)
+ return 0;
+
switch (action) {
case BUS_NOTIFY_ADD_DEVICE:
/* if the family driver needs to initialize something... */
atomic_set(&sl->refcnt, 0);
init_completion(&sl->released);
+ /* slave modules need to be loaded in a context with unlocked mutex */
+ mutex_unlock(&dev->mutex);
request_module("w1-family-0x%0x", rn->family);
+ mutex_lock(&dev->mutex);
spin_lock(&w1_flock);
f = w1_family_registered(rn->family);
return -ENODEV;
}
+ /*
+ * Ignore all auxiliary iLO devices with the following PCI ID
+ */
+ if (dev->subsystem_device == 0x1979)
+ return -ENODEV;
+
if (pci_enable_device(dev)) {
dev_warn(&dev->dev,
"Not possible to enable PCI Device: 0x%x:0x%x.\n",
#define KEMPLD_WDT_STAGE_TIMEOUT(x) (0x1b + (x) * 4)
#define KEMPLD_WDT_STAGE_CFG(x) (0x18 + (x))
#define STAGE_CFG_GET_PRESCALER(x) (((x) & 0x30) >> 4)
-#define STAGE_CFG_SET_PRESCALER(x) (((x) & 0x30) << 4)
+#define STAGE_CFG_SET_PRESCALER(x) (((x) & 0x3) << 4)
#define STAGE_CFG_PRESCALER_MASK 0x30
#define STAGE_CFG_ACTION_MASK 0x7
#define STAGE_CFG_ASSERT (1 << 3)
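/*
 * Editorial note (not part of the patch): the prescaler lives in bits
 * 5:4 of the stage config register (mask 0x30), so SET must take a
 * 2-bit value and shift it into place.  Example:
 *   STAGE_CFG_SET_PRESCALER(2) = (2 & 0x3) << 4  = 0x20
 *   STAGE_CFG_GET_PRESCALER(0x20) = (0x20 & 0x30) >> 4 = 2
 * With the old mask, (2 & 0x30) << 4 evaluated to 0, so the prescaler
 * was never actually written.
 */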
.set_timeout = sunxi_wdt_set_timeout,
};
-static int __init sunxi_wdt_probe(struct platform_device *pdev)
+static int sunxi_wdt_probe(struct platform_device *pdev)
{
struct sunxi_wdt_dev *sunxi_wdt;
struct resource *res;
return 0;
}
-static int __exit sunxi_wdt_remove(struct platform_device *pdev)
+static int sunxi_wdt_remove(struct platform_device *pdev)
{
struct sunxi_wdt_dev *sunxi_wdt = platform_get_drvdata(pdev);
case WDIOC_GETSTATUS:
case WDIOC_GETBOOTSTATUS:
- return put_user(0, p);
+ error = put_user(0, p);
+ break;
case WDIOC_KEEPALIVE:
ts72xx_wdt_kick(wdt);
if (nr_pages > ARRAY_SIZE(frame_list))
nr_pages = ARRAY_SIZE(frame_list);
- scratch_page = get_balloon_scratch_page();
-
for (i = 0; i < nr_pages; i++) {
page = alloc_page(gfp);
if (page == NULL) {
scrub_page(page);
+ /*
+ * Ballooned out frames are effectively replaced with
+ * a scratch frame. Ensure direct mappings and the
+ * p2m are consistent.
+ */
+ scratch_page = get_balloon_scratch_page();
#ifdef CONFIG_XEN_HAVE_PVMMU
if (xen_pv_domain() && !PageHighMem(page)) {
ret = HYPERVISOR_update_va_mapping(
BUG_ON(ret);
}
#endif
- }
-
- /* Ensure that ballooned highmem pages don't have kmaps. */
- kmap_flush_unused();
- flush_tlb_all();
-
- /* No more mappings: invalidate P2M and add to balloon. */
- for (i = 0; i < nr_pages; i++) {
- pfn = mfn_to_pfn(frame_list[i]);
if (!xen_feature(XENFEAT_auto_translated_physmap)) {
unsigned long p;
p = page_to_pfn(scratch_page);
__set_phys_to_machine(pfn, pfn_to_mfn(p));
}
+ put_balloon_scratch_page();
+
balloon_append(pfn_to_page(pfn));
}
- put_balloon_scratch_page();
+ /* Ensure that ballooned highmem pages don't have kmaps. */
+ kmap_flush_unused();
+ flush_tlb_all();
set_xen_guest_handle(reservation.extent_start, frame_list);
reservation.nr_extents = nr_pages;
if (ret < 0)
return ret;
#ifdef CONFIG_9P_FSCACHE
- return fscache_register_netfs(&v9fs_cache_netfs);
-#else
- return ret;
+ ret = fscache_register_netfs(&v9fs_cache_netfs);
+ if (ret < 0)
+ v9fs_destroy_inode_cache();
#endif
+ return ret;
}
static void v9fs_cache_unregister(void)
}
/* Only creates */
- if (!(flags & O_CREAT))
+ if (!(flags & O_CREAT) || dentry->d_inode)
return finish_no_open(file, res);
- else if (dentry->d_inode) {
- if ((flags & (O_CREAT | O_EXCL)) == (O_CREAT | O_EXCL))
- return -EEXIST;
- else
- return finish_no_open(file, res);
- }
v9ses = v9fs_inode2v9ses(dir);
/* lock down the parent dentry so we can peer at it */
parent = dget_parent(dentry);
- if (!parent->d_inode)
- goto out_bad;
-
dir = AFS_FS_I(parent->d_inode);
/* validate the parent directory */
}
__initcall(aio_setup);
+static void put_aio_ring_file(struct kioctx *ctx)
+{
+ struct file *aio_ring_file = ctx->aio_ring_file;
+ if (aio_ring_file) {
+ truncate_setsize(aio_ring_file->f_inode, 0);
+
+ /* Prevent further access to the kioctx from migratepages */
+ spin_lock(&aio_ring_file->f_inode->i_mapping->private_lock);
+ aio_ring_file->f_inode->i_mapping->private_data = NULL;
+ ctx->aio_ring_file = NULL;
+ spin_unlock(&aio_ring_file->f_inode->i_mapping->private_lock);
+
+ fput(aio_ring_file);
+ }
+}
+
static void aio_free_ring(struct kioctx *ctx)
{
int i;
- struct file *aio_ring_file = ctx->aio_ring_file;
for (i = 0; i < ctx->nr_pages; i++) {
pr_debug("pid(%d) [%d] page->count=%d\n", current->pid, i,
put_page(ctx->ring_pages[i]);
}
+ put_aio_ring_file(ctx);
+
if (ctx->ring_pages && ctx->ring_pages != ctx->internal_pages)
kfree(ctx->ring_pages);
-
- if (aio_ring_file) {
- truncate_setsize(aio_ring_file->f_inode, 0);
- fput(aio_ring_file);
- ctx->aio_ring_file = NULL;
- }
}
static int aio_ring_mmap(struct file *file, struct vm_area_struct *vma)
static int aio_migratepage(struct address_space *mapping, struct page *new,
struct page *old, enum migrate_mode mode)
{
- struct kioctx *ctx = mapping->private_data;
+ struct kioctx *ctx;
unsigned long flags;
- unsigned idx = old->index;
int rc;
/* Writeback must be complete */
get_page(new);
- spin_lock_irqsave(&ctx->completion_lock, flags);
- migrate_page_copy(new, old);
- ctx->ring_pages[idx] = new;
- spin_unlock_irqrestore(&ctx->completion_lock, flags);
+ /* We can potentially race against kioctx teardown here. Use the
+ * address_space's private data lock to protect the mapping's
+ * private_data.
+ */
+ spin_lock(&mapping->private_lock);
+ ctx = mapping->private_data;
+ if (ctx) {
+ pgoff_t idx;
+ spin_lock_irqsave(&ctx->completion_lock, flags);
+ migrate_page_copy(new, old);
+ idx = old->index;
+ if (idx < (pgoff_t)ctx->nr_pages)
+ ctx->ring_pages[idx] = new;
+ spin_unlock_irqrestore(&ctx->completion_lock, flags);
+ } else
+ rc = -EBUSY;
+ spin_unlock(&mapping->private_lock);
return rc;
}
out_freeref:
free_percpu(ctx->users.pcpu_count);
out_freectx:
- if (ctx->aio_ring_file)
- fput(ctx->aio_ring_file);
+ put_aio_ring_file(ctx);
kmem_cache_free(kioctx_cachep, ctx);
pr_debug("error allocating ioctx %d\n", err);
return ERR_PTR(err);
pkt.hdr.proto_version = sbi->version;
pkt.hdr.type = type;
- mutex_lock(&sbi->wq_mutex);
- /* Check if we have become catatonic */
- if (sbi->catatonic) {
- mutex_unlock(&sbi->wq_mutex);
- return;
- }
switch (type) {
/* Kernel protocol v4 missing and expire packets */
case autofs_ptype_missing:
wq->tgid = current->tgid;
wq->status = -EINTR; /* Status return if interrupted */
wq->wait_ctr = 2;
- mutex_unlock(&sbi->wq_mutex);
if (sbi->version < 5) {
if (notify == NFY_MOUNT)
(unsigned long) wq->wait_queue_token, wq->name.len,
wq->name.name, notify);
- /* autofs4_notify_daemon() may block */
+ /* autofs4_notify_daemon() may block; it will unlock ->wq_mutex */
autofs4_notify_daemon(sbi, wq, type);
} else {
wq->wait_ctr++;
- mutex_unlock(&sbi->wq_mutex);
- kfree(qstr.name);
DPRINTK("existing wait id = 0x%08lx, name = %.*s, nfy=%d",
(unsigned long) wq->wait_queue_token, wq->name.len,
wq->name.name, notify);
+ mutex_unlock(&sbi->wq_mutex);
+ kfree(qstr.name);
}
/*
* long file_ofs
* followed by COUNT filenames in ASCII: "FILE1" NUL "FILE2" NUL...
*/
-static void fill_files_note(struct memelfnote *note)
+static int fill_files_note(struct memelfnote *note)
{
struct vm_area_struct *vma;
unsigned count, size, names_ofs, remaining, n;
names_ofs = (2 + 3 * count) * sizeof(data[0]);
alloc:
if (size >= MAX_FILE_NOTE_SIZE) /* paranoia check */
- goto err;
+ return -EINVAL;
size = round_up(size, PAGE_SIZE);
data = vmalloc(size);
if (!data)
- goto err;
+ return -ENOMEM;
start_end_ofs = data + 2;
name_base = name_curpos = ((char *)data) + names_ofs;
size = name_curpos - (char *)data;
fill_note(note, "CORE", NT_FILE, size, data);
- err: ;
+ return 0;
}
#ifdef CORE_DUMP_USE_REGSET
fill_auxv_note(&info->auxv, current->mm);
info->size += notesize(&info->auxv);
- fill_files_note(&info->files);
- info->size += notesize(&info->files);
+ if (fill_files_note(&info->files) == 0)
+ info->size += notesize(&info->files);
return 1;
}
return 0;
if (first && !writenote(&info->auxv, file, foffset))
return 0;
- if (first && !writenote(&info->files, file, foffset))
+ if (first && info->files.data &&
+ !writenote(&info->files, file, foffset))
return 0;
for (i = 1; i < info->thread_notes; ++i)
struct elf_note_info {
struct memelfnote *notes;
+ struct memelfnote *notes_files;
struct elf_prstatus *prstatus; /* NT_PRSTATUS */
struct elf_prpsinfo *psinfo; /* NT_PRPSINFO */
struct list_head thread_list;
fill_siginfo_note(info->notes + 2, &info->csigdata, siginfo);
fill_auxv_note(info->notes + 3, current->mm);
- fill_files_note(info->notes + 4);
+ info->numnote = 4;
- info->numnote = 5;
+ if (fill_files_note(info->notes + info->numnote) == 0) {
+ info->notes_files = info->notes + info->numnote;
+ info->numnote++;
+ }
/* Try to dump the FPU. */
info->prstatus->pr_fpvalid = elf_core_copy_task_fpregs(current, regs,
kfree(list_entry(tmp, struct elf_thread_status, list));
}
- /* Free data allocated by fill_files_note(): */
- vfree(info->notes[4].data);
+ /* Free data possibly allocated by fill_files_note(): */
+ if (info->notes_files)
+ vfree(info->notes_files->data);
kfree(info->prstatus);
kfree(info->psinfo);
struct vm_area_struct *vma, *gate_vma;
struct elfhdr *elf = NULL;
loff_t offset = 0, dataoff, foffset;
- struct elf_note_info info;
+ struct elf_note_info info = { };
struct elf_phdr *phdr4note = NULL;
struct elf_shdr *shdr4extnum = NULL;
Elf_Half e_phnum;
mempool_destroy(bs->bio_integrity_pool);
if (bs->bvec_integrity_pool)
- mempool_destroy(bs->bio_integrity_pool);
+ mempool_destroy(bs->bvec_integrity_pool);
}
EXPORT_SYMBOL(bioset_integrity_free);
src_p = kmap_atomic(src_bv->bv_page);
dst_p = kmap_atomic(dst_bv->bv_page);
- memcpy(dst_p + dst_bv->bv_offset,
- src_p + src_bv->bv_offset,
+ memcpy(dst_p + dst_offset,
+ src_p + src_offset,
bytes);
kunmap_atomic(dst_p);
worker->idle = 1;
/* the list may be empty if the worker is just starting */
- if (!list_empty(&worker->worker_list)) {
+ if (!list_empty(&worker->worker_list) &&
+ !worker->workers->stopping) {
list_move(&worker->worker_list,
&worker->workers->idle_list);
}
spin_lock_irqsave(&worker->workers->lock, flags);
worker->idle = 0;
- if (!list_empty(&worker->worker_list)) {
+ if (!list_empty(&worker->worker_list) &&
+ !worker->workers->stopping) {
list_move_tail(&worker->worker_list,
&worker->workers->worker_list);
}
int can_stop;
spin_lock_irq(&workers->lock);
+ workers->stopping = 1;
list_splice_init(&workers->idle_list, &workers->worker_list);
while (!list_empty(&workers->worker_list)) {
cur = workers->worker_list.next;
workers->ordered = 0;
workers->atomic_start_pending = 0;
workers->atomic_worker_start = async_helper;
+ workers->stopping = 0;
}
/*
atomic_set(&worker->num_pending, 0);
atomic_set(&worker->refs, 1);
worker->workers = workers;
- worker->task = kthread_run(worker_loop, worker,
- "btrfs-%s-%d", workers->name,
- workers->num_workers + 1);
+ worker->task = kthread_create(worker_loop, worker,
+ "btrfs-%s-%d", workers->name,
+ workers->num_workers + 1);
if (IS_ERR(worker->task)) {
ret = PTR_ERR(worker->task);
- kfree(worker);
goto fail;
}
+
spin_lock_irq(&workers->lock);
+ if (workers->stopping) {
+ spin_unlock_irq(&workers->lock);
+ goto fail_kthread;
+ }
list_add_tail(&worker->worker_list, &workers->idle_list);
worker->idle = 1;
workers->num_workers++;
WARN_ON(workers->num_workers_starting < 0);
spin_unlock_irq(&workers->lock);
+ wake_up_process(worker->task);
return 0;
+
+fail_kthread:
+ kthread_stop(worker->task);
fail:
+ kfree(worker);
spin_lock_irq(&workers->lock);
workers->num_workers_starting--;
spin_unlock_irq(&workers->lock);
/* extra name for this worker, used for current->name */
char *name;
+
+ int stopping;
};
void btrfs_queue_worker(struct btrfs_workers *workers, struct btrfs_work *work);
static inline int btrfs_inode_in_log(struct inode *inode, u64 generation)
{
if (BTRFS_I(inode)->logged_trans == generation &&
- BTRFS_I(inode)->last_sub_trans <= BTRFS_I(inode)->last_log_commit)
+ BTRFS_I(inode)->last_sub_trans <=
+ BTRFS_I(inode)->last_log_commit &&
+ BTRFS_I(inode)->last_sub_trans <=
+ BTRFS_I(inode)->root->last_log_commit)
return 1;
return 0;
}
return ret;
}
- if (root->ref_cows)
- btrfs_reloc_cow_block(trans, root, buf, cow);
+ if (root->ref_cows) {
+ ret = btrfs_reloc_cow_block(trans, root, buf, cow);
+ if (ret)
+ return ret;
+ }
if (buf == root->node) {
WARN_ON(parent && parent != buf);
*/
struct percpu_counter total_bytes_pinned;
- /*
- * we bump reservation progress every time we decrement
- * bytes_reserved. This way people waiting for reservations
- * know something good has happened and they can check
- * for progress. The number here isn't to be trusted, it
- * just shows reclaim activity
- */
- unsigned long reservation_progress;
-
unsigned int full:1; /* indicates that we cannot allocate any more
chunks for this space */
unsigned int chunk_alloc:1; /* set if we are allocating a chunk */
unsigned num_items)
{
return (root->leafsize + root->nodesize * (BTRFS_MAX_LEVEL - 1)) *
- 3 * num_items;
+ 2 * num_items;
}
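/*
 * Editorial note (not part of the patch): assuming BTRFS_MAX_LEVEL == 8
 * and, purely for illustration, 4 KiB leaves and nodes, each item now
 * reserves (4096 + 4096 * 7) * 2 = 64 KiB instead of the previous
 * 96 KiB (factor 3).  The node size is an assumption for the example;
 * the shape of the calculation is what matters.
 */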
/*
struct btrfs_root *root);
int btrfs_recover_relocation(struct btrfs_root *root);
int btrfs_reloc_clone_csums(struct inode *inode, u64 file_pos, u64 len);
-void btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
- struct btrfs_root *root, struct extent_buffer *buf,
- struct extent_buffer *cow);
+int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root, struct extent_buffer *buf,
+ struct extent_buffer *cow);
void btrfs_reloc_pre_snapshot(struct btrfs_trans_handle *trans,
struct btrfs_pending_snapshot *pending,
u64 *bytes_to_reserve);
args->result = BTRFS_IOCTL_DEV_REPLACE_RESULT_NO_ERROR;
btrfs_dev_replace_unlock(dev_replace);
- btrfs_wait_all_ordered_extents(root->fs_info, 0);
+ btrfs_wait_all_ordered_extents(root->fs_info);
/* force writing the updated state information to disk */
trans = btrfs_start_transaction(root, 0);
mutex_unlock(&dev_replace->lock_finishing_cancel_unmount);
return ret;
}
- btrfs_wait_all_ordered_extents(root->fs_info, 0);
+ btrfs_wait_all_ordered_extents(root->fs_info);
trans = btrfs_start_transaction(root, 0);
if (IS_ERR(trans)) {
list_add(&tgt_device->dev_alloc_list, &fs_info->fs_devices->alloc_list);
btrfs_rm_dev_replace_srcdev(fs_info, src_device);
- if (src_device->bdev) {
- /* zero out the old super */
- btrfs_scratch_superblock(src_device);
- }
+
/*
* this is again a consistent state where no dev_replace procedure
* is running, the target device is part of the filesystem, the
{ .id = BTRFS_TREE_LOG_OBJECTID, .name_stem = "log" },
{ .id = BTRFS_TREE_RELOC_OBJECTID, .name_stem = "treloc" },
{ .id = BTRFS_DATA_RELOC_TREE_OBJECTID, .name_stem = "dreloc" },
+ { .id = BTRFS_UUID_TREE_OBJECTID, .name_stem = "uuid" },
{ .id = 0, .name_stem = "tree" },
};
return ret;
}
-struct btrfs_root *btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info,
- struct btrfs_key *location)
+struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+ struct btrfs_key *location,
+ bool check_ref)
{
struct btrfs_root *root;
int ret;
again:
root = btrfs_lookup_fs_root(fs_info, location->objectid);
if (root) {
- if (btrfs_root_refs(&root->root_item) == 0)
+ if (check_ref && btrfs_root_refs(&root->root_item) == 0)
return ERR_PTR(-ENOENT);
return root;
}
if (IS_ERR(root))
return root;
- if (btrfs_root_refs(&root->root_item) == 0) {
+ if (check_ref && btrfs_root_refs(&root->root_item) == 0) {
ret = -ENOENT;
goto fail;
}
if (total_errors > max_errors) {
printk(KERN_ERR "btrfs: %d errors while writing supers\n",
total_errors);
+ mutex_unlock(&root->fs_info->fs_devices->device_list_mutex);
/* FUA is masked off if unsupported and can't be the reason */
btrfs_error(root->fs_info, -EIO,
int btrfs_init_fs_root(struct btrfs_root *root);
int btrfs_insert_fs_root(struct btrfs_fs_info *fs_info,
struct btrfs_root *root);
-struct btrfs_root *btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info,
- struct btrfs_key *location);
+
+struct btrfs_root *btrfs_get_fs_root(struct btrfs_fs_info *fs_info,
+ struct btrfs_key *key,
+ bool check_ref);
+static inline struct btrfs_root *
+btrfs_read_fs_root_no_name(struct btrfs_fs_info *fs_info,
+ struct btrfs_key *location)
+{
+ return btrfs_get_fs_root(fs_info, location, true);
+}
+
int btrfs_cleanup_fs_roots(struct btrfs_fs_info *fs_info);
void btrfs_btree_balance_dirty(struct btrfs_root *root);
void btrfs_btree_balance_dirty_nodelay(struct btrfs_root *root);
u64 space_size;
u64 avail;
u64 used;
- u64 to_add;
used = space_info->bytes_used + space_info->bytes_reserved +
space_info->bytes_pinned + space_info->bytes_readonly;
BTRFS_BLOCK_GROUP_RAID10))
avail >>= 1;
- to_add = space_info->total_bytes;
-
/*
* If we aren't flushing all things, let us overcommit up to
* 1/2th of the space. If we can flush, don't let us overcommit
* too much, let it overcommit up to 1/8 of the space.
*/
if (flush == BTRFS_RESERVE_FLUSH_ALL)
- to_add >>= 3;
+ avail >>= 3;
else
- to_add >>= 1;
-
- /*
- * Limit the overcommit to the amount of free space we could possibly
- * allocate for chunks.
- */
- to_add = min(avail, to_add);
+ avail >>= 1;
- if (used + bytes < space_info->total_bytes + to_add)
+ if (used + bytes < space_info->total_bytes + avail)
return 1;
return 0;
}
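/*
 * Editorial note (not part of the patch): numeric example of the check
 * above.  With total_bytes = 10 GiB of metadata space and avail = 4 GiB
 * of still-unallocated (RAID-adjusted) device space, FLUSH_ALL permits
 * reservations up to 10 GiB + 4 GiB / 8 = 10.5 GiB, while the
 * non-flushing case allows up to 10 GiB + 4 GiB / 2 = 12 GiB.
 */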
*/
btrfs_start_all_delalloc_inodes(root->fs_info, 0);
if (!current->journal_info)
- btrfs_wait_all_ordered_extents(root->fs_info, 0);
+ btrfs_wait_all_ordered_extents(root->fs_info);
}
}
if (delalloc_bytes == 0) {
if (trans)
return;
- btrfs_wait_all_ordered_extents(root->fs_info, 0);
+ btrfs_wait_all_ordered_extents(root->fs_info);
return;
}
loops++;
if (wait_ordered && !trans) {
- btrfs_wait_all_ordered_extents(root->fs_info, 0);
+ btrfs_wait_all_ordered_extents(root->fs_info);
} else {
time_left = schedule_timeout_killable(1);
if (time_left)
space_info->bytes_may_use -= num_bytes;
trace_btrfs_space_reservation(fs_info, "space_info",
space_info->flags, num_bytes, 0);
- space_info->reservation_progress++;
spin_unlock(&space_info->lock);
}
}
sinfo->bytes_may_use -= num_bytes;
trace_btrfs_space_reservation(fs_info, "space_info",
sinfo->flags, num_bytes, 0);
- sinfo->reservation_progress++;
block_rsv->reserved = block_rsv->size;
block_rsv->full = 1;
}
space_info->bytes_readonly += num_bytes;
cache->reserved -= num_bytes;
space_info->bytes_reserved -= num_bytes;
- space_info->reservation_progress++;
}
spin_unlock(&cache->lock);
spin_unlock(&space_info->lock);
/*
* walks the btree of allocated extents and find a hole of a given size.
* The key ins is changed to record the hole:
- * ins->objectid == block start
+ * ins->objectid == start position
* ins->flags = BTRFS_EXTENT_ITEM_KEY
- * ins->offset == number of blocks
+ * ins->offset == the size of the hole.
* Any available blocks before search_start are skipped.
+ *
+ * If there is no suitable free space, we record the size of the largest
+ * free space extent we found.
*/
static noinline int find_free_extent(struct btrfs_root *orig_root,
u64 num_bytes, u64 empty_size,
struct btrfs_block_group_cache *block_group = NULL;
struct btrfs_block_group_cache *used_block_group;
u64 search_start = 0;
+ u64 max_extent_size = 0;
int empty_cluster = 2 * 1024 * 1024;
struct btrfs_space_info *space_info;
int loop = 0;
btrfs_get_block_group(used_block_group);
offset = btrfs_alloc_from_cluster(used_block_group,
- last_ptr, num_bytes, used_block_group->key.objectid);
+ last_ptr,
+ num_bytes,
+ used_block_group->key.objectid,
+ &max_extent_size);
if (offset) {
/* we have a block, we're done */
spin_unlock(&last_ptr->refill_lock);
* cluster
*/
offset = btrfs_alloc_from_cluster(block_group,
- last_ptr, num_bytes,
- search_start);
+ last_ptr,
+ num_bytes,
+ search_start,
+ &max_extent_size);
if (offset) {
/* we found one, proceed */
spin_unlock(&last_ptr->refill_lock);
if (cached &&
block_group->free_space_ctl->free_space <
num_bytes + empty_cluster + empty_size) {
+ if (block_group->free_space_ctl->free_space >
+ max_extent_size)
+ max_extent_size =
+ block_group->free_space_ctl->free_space;
spin_unlock(&block_group->free_space_ctl->tree_lock);
goto loop;
}
spin_unlock(&block_group->free_space_ctl->tree_lock);
offset = btrfs_find_space_for_alloc(block_group, search_start,
- num_bytes, empty_size);
+ num_bytes, empty_size,
+ &max_extent_size);
/*
* If we didn't find a chunk, and we haven't failed on this
* block group before, and this block group is in the middle of
ret = 0;
}
out:
-
+ if (ret == -ENOSPC)
+ ins->offset = max_extent_size;
return ret;
}
flags);
if (ret == -ENOSPC) {
- if (!final_tried) {
- num_bytes = num_bytes >> 1;
+ if (!final_tried && ins->offset) {
+ num_bytes = min(num_bytes >> 1, ins->offset);
num_bytes = round_down(num_bytes, root->sectorsize);
num_bytes = max(num_bytes, min_alloc_size);
if (num_bytes == min_alloc_size)
offsetof(struct btrfs_io_bio, bio));
if (!btrfs_bioset)
goto free_buffer_cache;
+
+ if (bioset_integrity_create(btrfs_bioset, BIO_POOL_SIZE))
+ goto free_bioset;
+
return 0;
+free_bioset:
+ bioset_free(btrfs_bioset);
+ btrfs_bioset = NULL;
+
free_buffer_cache:
kmem_cache_destroy(extent_buffer_cache);
extent_buffer_cache = NULL;
*end = state->end;
cur_start = state->end + 1;
node = rb_next(node);
- if (!node)
- break;
total_bytes += state->end - state->start + 1;
if (total_bytes >= max_bytes)
break;
+ if (!node)
+ break;
}
out:
spin_unlock(&tree->lock);
*start = delalloc_start;
*end = delalloc_end;
free_extent_state(cached_state);
- return found;
+ return 0;
}
/*
/*
* make sure to limit the number of pages we try to lock down
- * if we're looping.
*/
- if (delalloc_end + 1 - delalloc_start > max_bytes && loops)
- delalloc_end = delalloc_start + PAGE_CACHE_SIZE - 1;
+ if (delalloc_end + 1 - delalloc_start > max_bytes)
+ delalloc_end = delalloc_start + max_bytes - 1;
/* step two, lock all the pages after the page that has start */
ret = lock_delalloc_pages(inode, locked_page,
*/
free_extent_state(cached_state);
if (!loops) {
- unsigned long offset = (*start) & (PAGE_CACHE_SIZE - 1);
- max_bytes = PAGE_CACHE_SIZE - offset;
+ max_bytes = PAGE_CACHE_SIZE;
loops = 1;
goto again;
} else {
ret = btrfs_log_dentry_safe(trans, root, dentry);
if (ret < 0) {
- mutex_unlock(&inode->i_mutex);
- goto out;
+ /* Fallthrough and commit/free transaction. */
+ ret = 1;
}
/* we've logged all the items and now have a consistent
ctl->free_space += bytes;
}
+/*
+ * If we cannot find a suitable extent, we will use *bytes to record
+ * the size of the largest free extent seen.
+ */
static int search_bitmap(struct btrfs_free_space_ctl *ctl,
struct btrfs_free_space *bitmap_info, u64 *offset,
u64 *bytes)
{
unsigned long found_bits = 0;
+ unsigned long max_bits = 0;
unsigned long bits, i;
unsigned long next_zero;
+ unsigned long extent_bits;
i = offset_to_bit(bitmap_info->offset, ctl->unit,
max_t(u64, *offset, bitmap_info->offset));
for_each_set_bit_from(i, bitmap_info->bitmap, BITS_PER_BITMAP) {
next_zero = find_next_zero_bit(bitmap_info->bitmap,
BITS_PER_BITMAP, i);
- if ((next_zero - i) >= bits) {
- found_bits = next_zero - i;
+ extent_bits = next_zero - i;
+ if (extent_bits >= bits) {
+ found_bits = extent_bits;
break;
+ } else if (extent_bits > max_bits) {
+ max_bits = extent_bits;
}
i = next_zero;
}
return 0;
}
+ *bytes = (u64)(max_bits) * ctl->unit;
return -1;
}
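Editor's note: the bitmap search now remembers the longest run of set bits it skipped, so a failed lookup still tells the caller how big a request could have succeeded. A standalone sketch of that scan over a plain 64-bit word (simplified; this is not the kernel bitmap API):

#include <stdint.h>
#include <stdio.h>

#define NBITS 64

/*
 * Look for a run of at least 'want' set bits in 'map'.  On success return
 * the starting bit; on failure return -1 and store the longest run seen
 * in *max_run, so the caller can retry with a smaller request.
 */
static int search_run(uint64_t map, unsigned int want, unsigned int *max_run)
{
	unsigned int i, run_start = 0, run = 0, best = 0;

	for (i = 0; i < NBITS; i++) {
		if (map & (1ULL << i)) {
			if (run == 0)
				run_start = i;
			run++;
			if (run >= want)
				return (int)run_start;
			if (run > best)
				best = run;
		} else {
			run = 0;
		}
	}
	*max_run = best;
	return -1;
}

int main(void)
{
	/* bits 0-2 and 8-12 set: runs of 3 and 5 */
	uint64_t map = 0x7ULL | (0x1fULL << 8);
	unsigned int best = 0;
	int pos = search_run(map, 8, &best);

	if (pos < 0)
		printf("no run of 8 bits, longest run seen: %u bits\n", best);
	return 0;
}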
+/* Cache the size of the max extent in bytes */
static struct btrfs_free_space *
find_free_space(struct btrfs_free_space_ctl *ctl, u64 *offset, u64 *bytes,
- unsigned long align)
+ unsigned long align, u64 *max_extent_size)
{
struct btrfs_free_space *entry;
struct rb_node *node;
- u64 ctl_off;
u64 tmp;
u64 align_off;
int ret;
if (!ctl->free_space_offset.rb_node)
- return NULL;
+ goto out;
entry = tree_search_offset(ctl, offset_to_bitmap(ctl, *offset), 0, 1);
if (!entry)
- return NULL;
+ goto out;
for (node = &entry->offset_index; node; node = rb_next(node)) {
entry = rb_entry(node, struct btrfs_free_space, offset_index);
- if (entry->bytes < *bytes)
+ if (entry->bytes < *bytes) {
+ if (entry->bytes > *max_extent_size)
+ *max_extent_size = entry->bytes;
continue;
+ }
/* make sure the space returned is big enough
* to match our requested alignment
*/
if (*bytes >= align) {
- ctl_off = entry->offset - ctl->start;
- tmp = ctl_off + align - 1;;
+ tmp = entry->offset - ctl->start + align - 1;
do_div(tmp, align);
tmp = tmp * align + ctl->start;
align_off = tmp - entry->offset;
tmp = entry->offset;
}
- if (entry->bytes < *bytes + align_off)
+ if (entry->bytes < *bytes + align_off) {
+ if (entry->bytes > *max_extent_size)
+ *max_extent_size = entry->bytes;
continue;
+ }
if (entry->bitmap) {
- ret = search_bitmap(ctl, entry, &tmp, bytes);
+ u64 size = *bytes;
+
+ ret = search_bitmap(ctl, entry, &tmp, &size);
if (!ret) {
*offset = tmp;
+ *bytes = size;
return entry;
+ } else if (size > *max_extent_size) {
+ *max_extent_size = size;
}
continue;
}
*bytes = entry->bytes - align_off;
return entry;
}
-
+out:
return NULL;
}
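Editor's note: the rewritten alignment math rounds an entry's offset up to the next multiple of align measured from the start of the free-space ctl rather than from absolute zero. A tiny sketch of that round-up with a worked value (illustrative only, names invented):

#include <stdint.h>
#include <stdio.h>

/*
 * Round 'offset' up to the next multiple of 'align' counted from 'base',
 * i.e. the smallest value >= offset with (value - base) % align == 0.
 */
static uint64_t align_up_from(uint64_t offset, uint64_t base, uint64_t align)
{
	uint64_t tmp = offset - base + align - 1;

	tmp /= align;			/* do_div() in the kernel version */
	return tmp * align + base;
}

int main(void)
{
	/* base 1 MiB, entry offset 1 MiB + 5000, align 4096 -> 1 MiB + 8192 */
	uint64_t base = 1024 * 1024;
	uint64_t off = base + 5000;

	printf("%llu\n", (unsigned long long)align_up_from(off, base, 4096));
	return 0;
}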
}
u64 btrfs_find_space_for_alloc(struct btrfs_block_group_cache *block_group,
- u64 offset, u64 bytes, u64 empty_size)
+ u64 offset, u64 bytes, u64 empty_size,
+ u64 *max_extent_size)
{
struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;
struct btrfs_free_space *entry = NULL;
spin_lock(&ctl->tree_lock);
entry = find_free_space(ctl, &offset, &bytes_search,
- block_group->full_stripe_len);
+ block_group->full_stripe_len, max_extent_size);
if (!entry)
goto out;
if (!entry->bytes)
free_bitmap(ctl, entry);
} else {
-
unlink_free_space(ctl, entry);
align_gap_len = offset - entry->offset;
align_gap = entry->offset;
else
link_free_space(ctl, entry);
}
-
out:
spin_unlock(&ctl->tree_lock);
static u64 btrfs_alloc_from_bitmap(struct btrfs_block_group_cache *block_group,
struct btrfs_free_cluster *cluster,
struct btrfs_free_space *entry,
- u64 bytes, u64 min_start)
+ u64 bytes, u64 min_start,
+ u64 *max_extent_size)
{
struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;
int err;
search_bytes = bytes;
err = search_bitmap(ctl, entry, &search_start, &search_bytes);
- if (err)
+ if (err) {
+ if (search_bytes > *max_extent_size)
+ *max_extent_size = search_bytes;
return 0;
+ }
ret = search_start;
__bitmap_clear_bits(ctl, entry, ret, bytes);
*/
u64 btrfs_alloc_from_cluster(struct btrfs_block_group_cache *block_group,
struct btrfs_free_cluster *cluster, u64 bytes,
- u64 min_start)
+ u64 min_start, u64 *max_extent_size)
{
struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;
struct btrfs_free_space *entry = NULL;
entry = rb_entry(node, struct btrfs_free_space, offset_index);
while(1) {
+ if (entry->bytes < bytes && entry->bytes > *max_extent_size)
+ *max_extent_size = entry->bytes;
+
if (entry->bytes < bytes ||
(!entry->bitmap && entry->offset < min_start)) {
node = rb_next(&entry->offset_index);
if (entry->bitmap) {
ret = btrfs_alloc_from_bitmap(block_group,
cluster, entry, bytes,
- cluster->window_start);
+ cluster->window_start,
+ max_extent_size);
if (ret == 0) {
node = rb_next(&entry->offset_index);
if (!node)
void btrfs_remove_free_space_cache(struct btrfs_block_group_cache
*block_group);
u64 btrfs_find_space_for_alloc(struct btrfs_block_group_cache *block_group,
- u64 offset, u64 bytes, u64 empty_size);
+ u64 offset, u64 bytes, u64 empty_size,
+ u64 *max_extent_size);
u64 btrfs_find_ino_for_alloc(struct btrfs_root *fs_root);
void btrfs_dump_free_space(struct btrfs_block_group_cache *block_group,
u64 bytes);
void btrfs_init_free_cluster(struct btrfs_free_cluster *cluster);
u64 btrfs_alloc_from_cluster(struct btrfs_block_group_cache *block_group,
struct btrfs_free_cluster *cluster, u64 bytes,
- u64 min_start);
+ u64 min_start, u64 *max_extent_size);
int btrfs_return_cluster_to_free_space(
struct btrfs_block_group_cache *block_group,
struct btrfs_free_cluster *cluster);
struct btrfs_inode *entry;
struct rb_node **p;
struct rb_node *parent;
+ struct rb_node *new = &BTRFS_I(inode)->rb_node;
u64 ino = btrfs_ino(inode);
if (inode_unhashed(inode))
return;
-again:
parent = NULL;
spin_lock(&root->inode_lock);
p = &root->inode_tree.rb_node;
else {
WARN_ON(!(entry->vfs_inode.i_state &
(I_WILL_FREE | I_FREEING)));
- rb_erase(parent, &root->inode_tree);
+ rb_replace_node(parent, new, &root->inode_tree);
RB_CLEAR_NODE(parent);
spin_unlock(&root->inode_lock);
- goto again;
+ return;
}
}
- rb_link_node(&BTRFS_I(inode)->rb_node, parent, p);
- rb_insert_color(&BTRFS_I(inode)->rb_node, &root->inode_tree);
+ rb_link_node(new, parent, p);
+ rb_insert_color(new, &root->inode_tree);
spin_unlock(&root->inode_lock);
}
if (btrfs_extent_readonly(root, disk_bytenr))
goto out;
+ btrfs_release_path(path);
/*
* look for other files referencing this extent, if we
/* check for collisions, even if the name isn't there */
- ret = btrfs_check_dir_item_collision(root, new_dir->i_ino,
+ ret = btrfs_check_dir_item_collision(dest, new_dir->i_ino,
new_dentry->d_name.name,
new_dentry->d_name.len);
work = btrfs_alloc_delalloc_work(inode, 0, delay_iput);
if (unlikely(!work)) {
+ if (delay_iput)
+ btrfs_add_delayed_iput(inode);
+ else
+ iput(inode);
ret = -ENOMEM;
goto out;
}
.removexattr = btrfs_removexattr,
.permission = btrfs_permission,
.get_acl = btrfs_get_acl,
+ .update_time = btrfs_update_time,
};
static const struct inode_operations btrfs_dir_ro_inode_operations = {
.lookup = btrfs_lookup,
.permission = btrfs_permission,
.get_acl = btrfs_get_acl,
+ .update_time = btrfs_update_time,
};
static const struct file_operations btrfs_dir_file_operations = {
if (ret)
return ret;
- btrfs_wait_ordered_extents(root, 0);
+ btrfs_wait_ordered_extents(root);
pending_snapshot = kzalloc(sizeof(*pending_snapshot), GFP_NOFS);
if (!pending_snapshot)
static long btrfs_ioctl_file_extent_same(struct file *file,
void __user *argp)
{
- struct btrfs_ioctl_same_args *args = argp;
- struct btrfs_ioctl_same_args same;
- struct btrfs_ioctl_same_extent_info info;
+ struct btrfs_ioctl_same_args tmp;
+ struct btrfs_ioctl_same_args *same;
+ struct btrfs_ioctl_same_extent_info *info;
struct inode *src = file->f_dentry->d_inode;
struct file *dst_file = NULL;
struct inode *dst;
u64 len;
int i;
int ret;
+ unsigned long size;
u64 bs = BTRFS_I(src)->root->fs_info->sb->s_blocksize;
bool is_admin = capable(CAP_SYS_ADMIN);
if (ret)
return ret;
- if (copy_from_user(&same,
+ if (copy_from_user(&tmp,
(struct btrfs_ioctl_same_args __user *)argp,
- sizeof(same))) {
+ sizeof(tmp))) {
ret = -EFAULT;
goto out;
}
- off = same.logical_offset;
- len = same.length;
+ size = sizeof(tmp) +
+ tmp.dest_count * sizeof(struct btrfs_ioctl_same_extent_info);
+
+ same = kmalloc(size, GFP_NOFS);
+ if (!same) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ if (copy_from_user(same,
+ (struct btrfs_ioctl_same_args __user *)argp, size)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ off = same->logical_offset;
+ len = same->length;
/*
* Limit the total length we will dedupe for each operation.
if (!S_ISREG(src->i_mode))
goto out;
- ret = 0;
- for (i = 0; i < same.dest_count; i++) {
- if (copy_from_user(&info, &args->info[i], sizeof(info))) {
- ret = -EFAULT;
- goto out;
- }
+ /* pre-format output fields to sane values */
+ for (i = 0; i < same->dest_count; i++) {
+ same->info[i].bytes_deduped = 0ULL;
+ same->info[i].status = 0;
+ }
- info.bytes_deduped = 0;
+ ret = 0;
+ for (i = 0; i < same->dest_count; i++) {
+ info = &same->info[i];
- dst_file = fget(info.fd);
+ dst_file = fget(info->fd);
if (!dst_file) {
- info.status = -EBADF;
+ info->status = -EBADF;
goto next;
}
if (!(is_admin || (dst_file->f_mode & FMODE_WRITE))) {
- info.status = -EINVAL;
+ info->status = -EINVAL;
goto next;
}
- info.status = -EXDEV;
+ info->status = -EXDEV;
if (file->f_path.mnt != dst_file->f_path.mnt)
goto next;
goto next;
if (S_ISDIR(dst->i_mode)) {
- info.status = -EISDIR;
+ info->status = -EISDIR;
goto next;
}
if (!S_ISREG(dst->i_mode)) {
- info.status = -EACCES;
+ info->status = -EACCES;
goto next;
}
- info.status = btrfs_extent_same(src, off, len, dst,
- info.logical_offset);
- if (info.status == 0)
- info.bytes_deduped += len;
+ info->status = btrfs_extent_same(src, off, len, dst,
+ info->logical_offset);
+ if (info->status == 0)
+ info->bytes_deduped += len;
next:
if (dst_file)
fput(dst_file);
-
- if (__put_user_unaligned(info.status, &args->info[i].status) ||
- __put_user_unaligned(info.bytes_deduped,
- &args->info[i].bytes_deduped)) {
- ret = -EFAULT;
- goto out;
- }
}
+ ret = copy_to_user(argp, same, size);
+ if (ret)
+ ret = -EFAULT;
+
out:
mnt_drop_write_file(file);
return ret;
}
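Editor's note: the rework copies the whole argument buffer in one shot: read the fixed header first to learn dest_count, size the full structure (header plus the flexible info[] array), fill the per-destination result fields up front, and write everything back at once at the end. A userspace-style sketch of sizing and filling such a flexible-array structure (hypothetical struct names, stdlib instead of copy_from_user):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical layout mirroring a header followed by a flexible array. */
struct dedupe_info {
	int64_t  fd;
	uint64_t logical_offset;
	uint64_t bytes_deduped;
	int32_t  status;
};

struct dedupe_args {
	uint64_t logical_offset;
	uint64_t length;
	uint16_t dest_count;
	struct dedupe_info info[];	/* flexible array member */
};

int main(void)
{
	/* Step 1: a small fixed-size read would tell us dest_count. */
	uint16_t dest_count = 3;

	/* Step 2: size the full buffer: header plus dest_count array slots. */
	size_t size = sizeof(struct dedupe_args) +
		      (size_t)dest_count * sizeof(struct dedupe_info);
	struct dedupe_args *args = malloc(size);
	int i;

	if (!args)
		return 1;
	args->logical_offset = 0;
	args->length = 4096;
	args->dest_count = dest_count;

	/* Step 3: pre-format the per-destination result fields up front. */
	for (i = 0; i < args->dest_count; i++) {
		args->info[i].bytes_deduped = 0;
		args->info[i].status = 0;
	}

	printf("buffer is %zu bytes for %u destinations\n", size,
	       (unsigned)dest_count);
	free(args);
	return 0;
}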
if (!objectid)
- objectid = root->root_key.objectid;
+ objectid = BTRFS_FS_TREE_OBJECTID;
location.objectid = objectid;
location.type = BTRFS_ROOT_ITEM_KEY;
* wait for all the ordered extents in a root. This is done when balancing
* space between drives.
*/
-void btrfs_wait_ordered_extents(struct btrfs_root *root, int delay_iput)
+void btrfs_wait_ordered_extents(struct btrfs_root *root)
{
struct list_head splice, works;
struct btrfs_ordered_extent *ordered, *next;
- struct inode *inode;
INIT_LIST_HEAD(&splice);
INIT_LIST_HEAD(&works);
root_extent_list);
list_move_tail(&ordered->root_extent_list,
&root->ordered_extents);
- /*
- * the inode may be getting freed (in sys_unlink path).
- */
- inode = igrab(ordered->inode);
- if (!inode) {
- cond_resched_lock(&root->ordered_extent_lock);
- continue;
- }
-
atomic_inc(&ordered->refs);
spin_unlock(&root->ordered_extent_lock);
list_for_each_entry_safe(ordered, next, &works, work_list) {
list_del_init(&ordered->work_list);
wait_for_completion(&ordered->completion);
-
- inode = ordered->inode;
btrfs_put_ordered_extent(ordered);
- if (delay_iput)
- btrfs_add_delayed_iput(inode);
- else
- iput(inode);
-
cond_resched();
}
mutex_unlock(&root->fs_info->ordered_operations_mutex);
}
-void btrfs_wait_all_ordered_extents(struct btrfs_fs_info *fs_info,
- int delay_iput)
+void btrfs_wait_all_ordered_extents(struct btrfs_fs_info *fs_info)
{
struct btrfs_root *root;
struct list_head splice;
&fs_info->ordered_roots);
spin_unlock(&fs_info->ordered_root_lock);
- btrfs_wait_ordered_extents(root, delay_iput);
+ btrfs_wait_ordered_extents(root);
btrfs_put_fs_root(root);
spin_lock(&fs_info->ordered_root_lock);
void btrfs_add_ordered_operation(struct btrfs_trans_handle *trans,
struct btrfs_root *root,
struct inode *inode);
-void btrfs_wait_ordered_extents(struct btrfs_root *root, int delay_iput);
-void btrfs_wait_all_ordered_extents(struct btrfs_fs_info *fs_info,
- int delay_iput);
+void btrfs_wait_ordered_extents(struct btrfs_root *root);
+void btrfs_wait_all_ordered_extents(struct btrfs_fs_info *fs_info);
void btrfs_get_logged_extents(struct btrfs_root *log, struct inode *inode);
void btrfs_wait_logged_extents(struct btrfs_root *log, u64 transid);
void btrfs_free_logged_extents(struct btrfs_root *log, u64 transid);
else
key.offset = (u64)-1;
- return btrfs_read_fs_root_no_name(fs_info, &key);
+ return btrfs_get_fs_root(fs_info, &key, false);
}
#ifdef BTRFS_COMPAT_EXTENT_TREE_V0
btrfs_file_extent_other_encoding(leaf, fi));
if (num_bytes != btrfs_file_extent_disk_num_bytes(leaf, fi)) {
- ret = 1;
+ ret = -EINVAL;
goto out;
}
u64 end;
u32 nritems;
u32 i;
- int ret;
+ int ret = 0;
int first = 1;
int dirty = 0;
ret = get_new_location(rc->data_inode, &new_bytenr,
bytenr, num_bytes);
- if (ret > 0) {
- WARN_ON(1);
- continue;
+ if (ret) {
+ /*
+ * Don't have to abort since we've not changed anything
+ * in the file extent yet.
+ */
+ break;
}
- BUG_ON(ret < 0);
btrfs_set_file_extent_disk_bytenr(leaf, fi, new_bytenr);
dirty = 1;
num_bytes, parent,
btrfs_header_owner(leaf),
key.objectid, key.offset, 1);
- BUG_ON(ret);
+ if (ret) {
+ btrfs_abort_transaction(trans, root, ret);
+ break;
+ }
ret = btrfs_free_extent(trans, root, bytenr, num_bytes,
parent, btrfs_header_owner(leaf),
key.objectid, key.offset, 1);
- BUG_ON(ret);
+ if (ret) {
+ btrfs_abort_transaction(trans, root, ret);
+ break;
+ }
}
if (dirty)
btrfs_mark_buffer_dirty(leaf);
if (inode)
btrfs_add_delayed_iput(inode);
- return 0;
+ return ret;
}
static noinline_for_stack
err = ret;
goto out;
}
- btrfs_wait_all_ordered_extents(fs_info, 0);
+ btrfs_wait_all_ordered_extents(fs_info);
while (1) {
mutex_lock(&fs_info->cleaner_mutex);
return ret;
}
-void btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
- struct btrfs_root *root, struct extent_buffer *buf,
- struct extent_buffer *cow)
+int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
+ struct btrfs_root *root, struct extent_buffer *buf,
+ struct extent_buffer *cow)
{
struct reloc_control *rc;
struct backref_node *node;
int first_cow = 0;
int level;
- int ret;
+ int ret = 0;
rc = root->fs_info->reloc_ctl;
if (!rc)
- return;
+ return 0;
BUG_ON(rc->stage == UPDATE_DATA_PTRS &&
root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID);
rc->nodes_relocated += buf->len;
}
- if (level == 0 && first_cow && rc->stage == UPDATE_DATA_PTRS) {
+ if (level == 0 && first_cow && rc->stage == UPDATE_DATA_PTRS)
ret = replace_file_extents(trans, rc, root, cow);
- BUG_ON(ret);
- }
+ return ret;
}
/*
continue;
}
- if (btrfs_root_refs(&root->root_item) == 0) {
- btrfs_add_dead_root(root);
- continue;
- }
-
err = btrfs_init_fs_root(root);
if (err) {
btrfs_free_fs_root(root);
btrfs_free_fs_root(root);
break;
}
+
+ if (btrfs_root_refs(&root->root_item) == 0)
+ btrfs_add_dead_root(root);
}
btrfs_free_path(path);
int mirror_num;
};
+struct scrub_nocow_inode {
+ u64 inum;
+ u64 offset;
+ u64 root;
+ struct list_head list;
+};
+
struct scrub_copy_nocow_ctx {
struct scrub_ctx *sctx;
u64 logical;
u64 len;
int mirror_num;
u64 physical_for_dev_replace;
+ struct list_head inodes;
struct btrfs_work work;
};
static int write_page_nocow(struct scrub_ctx *sctx,
u64 physical_for_dev_replace, struct page *page);
static int copy_nocow_pages_for_inode(u64 inum, u64 offset, u64 root,
- void *ctx);
+ struct scrub_copy_nocow_ctx *ctx);
static int copy_nocow_pages(struct scrub_ctx *sctx, u64 logical, u64 len,
int mirror_num, u64 physical_for_dev_replace);
static void copy_nocow_pages_worker(struct btrfs_work *work);
nocow_ctx->mirror_num = mirror_num;
nocow_ctx->physical_for_dev_replace = physical_for_dev_replace;
nocow_ctx->work.func = copy_nocow_pages_worker;
+ INIT_LIST_HEAD(&nocow_ctx->inodes);
btrfs_queue_worker(&fs_info->scrub_nocow_workers,
&nocow_ctx->work);
return 0;
}
+static int record_inode_for_nocow(u64 inum, u64 offset, u64 root, void *ctx)
+{
+ struct scrub_copy_nocow_ctx *nocow_ctx = ctx;
+ struct scrub_nocow_inode *nocow_inode;
+
+ nocow_inode = kzalloc(sizeof(*nocow_inode), GFP_NOFS);
+ if (!nocow_inode)
+ return -ENOMEM;
+ nocow_inode->inum = inum;
+ nocow_inode->offset = offset;
+ nocow_inode->root = root;
+ list_add_tail(&nocow_inode->list, &nocow_ctx->inodes);
+ return 0;
+}
+
+#define COPY_COMPLETE 1
+
static void copy_nocow_pages_worker(struct btrfs_work *work)
{
struct scrub_copy_nocow_ctx *nocow_ctx =
}
ret = iterate_inodes_from_logical(logical, fs_info, path,
- copy_nocow_pages_for_inode,
- nocow_ctx);
+ record_inode_for_nocow, nocow_ctx);
if (ret != 0 && ret != -ENOENT) {
pr_warn("iterate_inodes_from_logical() failed: log %llu, phys %llu, len %llu, mir %u, ret %d\n",
logical, physical_for_dev_replace, len, mirror_num,
goto out;
}
+ btrfs_end_transaction(trans, root);
+ trans = NULL;
+ while (!list_empty(&nocow_ctx->inodes)) {
+ struct scrub_nocow_inode *entry;
+ entry = list_first_entry(&nocow_ctx->inodes,
+ struct scrub_nocow_inode,
+ list);
+ list_del_init(&entry->list);
+ ret = copy_nocow_pages_for_inode(entry->inum, entry->offset,
+ entry->root, nocow_ctx);
+ kfree(entry);
+ if (ret == COPY_COMPLETE) {
+ ret = 0;
+ break;
+ } else if (ret) {
+ break;
+ }
+ }
out:
+ while (!list_empty(&nocow_ctx->inodes)) {
+ struct scrub_nocow_inode *entry;
+ entry = list_first_entry(&nocow_ctx->inodes,
+ struct scrub_nocow_inode,
+ list);
+ list_del_init(&entry->list);
+ kfree(entry);
+ }
if (trans && !IS_ERR(trans))
btrfs_end_transaction(trans, root);
if (not_written)
scrub_pending_trans_workers_dec(sctx);
}
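Editor's note: rather than doing the copy work inside the backref iteration (with a transaction still held), the worker now just records each hit on a local list, ends the transaction, walks the list afterwards, and frees every entry on both the success and error paths. A small sketch of that collect-then-process pattern using a plain singly linked list (illustrative, not the kernel list API):

#include <stdio.h>
#include <stdlib.h>

struct hit {
	unsigned long long inum;
	unsigned long long offset;
	struct hit *next;
};

/* Phase 1: record a hit; cheap, no heavy work done here. */
static int record_hit(struct hit **head, unsigned long long inum,
		      unsigned long long offset)
{
	struct hit *h = malloc(sizeof(*h));

	if (!h)
		return -1;
	h->inum = inum;
	h->offset = offset;
	h->next = *head;
	*head = h;
	return 0;
}

/* Phase 2: the expensive per-inode work, done after the scan finished. */
static int process_hit(const struct hit *h)
{
	printf("copying for inode %llu at offset %llu\n", h->inum, h->offset);
	return 0;
}

int main(void)
{
	struct hit *head = NULL, *h;
	int ret = 0;

	record_hit(&head, 257, 0);
	record_hit(&head, 258, 4096);

	/* Process everything we collected, stopping on the first error. */
	while ((h = head) != NULL && ret == 0) {
		head = h->next;
		ret = process_hit(h);
		free(h);
	}
	/* Error path: drop whatever is still queued. */
	while ((h = head) != NULL) {
		head = h->next;
		free(h);
	}
	return ret;
}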
-static int copy_nocow_pages_for_inode(u64 inum, u64 offset, u64 root, void *ctx)
+static int copy_nocow_pages_for_inode(u64 inum, u64 offset, u64 root,
+ struct scrub_copy_nocow_ctx *nocow_ctx)
{
- struct scrub_copy_nocow_ctx *nocow_ctx = ctx;
struct btrfs_fs_info *fs_info = nocow_ctx->sctx->dev_root->fs_info;
struct btrfs_key key;
struct inode *inode;
struct page *page;
struct btrfs_root *local_root;
+ struct btrfs_ordered_extent *ordered;
+ struct extent_map *em;
+ struct extent_state *cached_state = NULL;
+ struct extent_io_tree *io_tree;
u64 physical_for_dev_replace;
- u64 len;
+ u64 len = nocow_ctx->len;
+ u64 lockstart = offset, lockend = offset + len - 1;
unsigned long index;
int srcu_index;
- int ret;
- int err;
+ int ret = 0;
+ int err = 0;
key.objectid = root;
key.type = BTRFS_ROOT_ITEM_KEY;
mutex_lock(&inode->i_mutex);
inode_dio_wait(inode);
- ret = 0;
physical_for_dev_replace = nocow_ctx->physical_for_dev_replace;
- len = nocow_ctx->len;
+ io_tree = &BTRFS_I(inode)->io_tree;
+
+ lock_extent_bits(io_tree, lockstart, lockend, 0, &cached_state);
+ ordered = btrfs_lookup_ordered_range(inode, lockstart, len);
+ if (ordered) {
+ btrfs_put_ordered_extent(ordered);
+ goto out_unlock;
+ }
+
+ em = btrfs_get_extent(inode, NULL, 0, lockstart, len, 0);
+ if (IS_ERR(em)) {
+ ret = PTR_ERR(em);
+ goto out_unlock;
+ }
+
+ /*
+ * This extent does not actually cover the logical extent anymore,
+ * move on to the next inode.
+ */
+ if (em->block_start > nocow_ctx->logical ||
+ em->block_start + em->block_len < nocow_ctx->logical + len) {
+ free_extent_map(em);
+ goto out_unlock;
+ }
+ free_extent_map(em);
+
while (len >= PAGE_CACHE_SIZE) {
index = offset >> PAGE_CACHE_SHIFT;
again:
goto next_page;
} else {
ClearPageError(page);
- err = extent_read_full_page(&BTRFS_I(inode)->
- io_tree,
- page, btrfs_get_extent,
- nocow_ctx->mirror_num);
+ err = extent_read_full_page_nolock(io_tree, page,
+ btrfs_get_extent,
+ nocow_ctx->mirror_num);
if (err) {
ret = err;
goto next_page;
* page in the page cache.
*/
if (page->mapping != inode->i_mapping) {
+ unlock_page(page);
page_cache_release(page);
goto again;
}
physical_for_dev_replace += PAGE_CACHE_SIZE;
len -= PAGE_CACHE_SIZE;
}
+ ret = COPY_COMPLETE;
+out_unlock:
+ unlock_extent_cached(io_tree, lockstart, lockend, &cached_state,
+ GFP_NOFS);
out:
mutex_unlock(&inode->i_mutex);
iput(inode);
return 0;
}
- btrfs_wait_all_ordered_extents(fs_info, 1);
+ btrfs_wait_all_ordered_extents(fs_info);
trans = btrfs_attach_transaction_barrier(root);
if (IS_ERR(trans)) {
if (ret)
goto restore;
} else {
+ if (test_bit(BTRFS_FS_STATE_ERROR, &root->fs_info->fs_state)) {
+ btrfs_err(fs_info,
+ "Remounting read-write after error is not allowed\n");
+ ret = -EINVAL;
+ goto restore;
+ }
if (fs_info->fs_devices->rw_devices == 0) {
ret = -EACCES;
goto restore;
pr_warn("btrfs: failed to resume dev_replace\n");
goto restore;
}
+
+ if (!fs_info->uuid_root) {
+ pr_info("btrfs: creating UUID tree\n");
+ ret = btrfs_create_uuid_tree(fs_info);
+ if (ret) {
+				pr_warn("btrfs: failed to create the uuid tree "
+					"%d\n", ret);
+ goto restore;
+ }
+ }
sb->s_flags &= ~MS_RDONLY;
}
out:
#ifdef CONFIG_BTRFS_DEBUG
", debug=on"
#endif
+#ifdef CONFIG_BTRFS_ASSERT
+ ", assert=on"
+#endif
#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
", integrity-checker=on"
#endif
static inline void btrfs_wait_delalloc_flush(struct btrfs_fs_info *fs_info)
{
if (btrfs_test_opt(fs_info->tree_root, FLUSHONCOMMIT))
- btrfs_wait_all_ordered_extents(fs_info, 1);
+ btrfs_wait_all_ordered_extents(fs_info);
}
int btrfs_commit_transaction(struct btrfs_trans_handle *trans,
assert_qgroups_uptodate(trans);
update_super_roots(root);
- if (!root->fs_info->log_root_recovering) {
- btrfs_set_super_log_root(root->fs_info->super_copy, 0);
- btrfs_set_super_log_root_level(root->fs_info->super_copy, 0);
- }
-
+ btrfs_set_super_log_root(root->fs_info->super_copy, 0);
+ btrfs_set_super_log_root_level(root->fs_info->super_copy, 0);
memcpy(root->fs_info->super_for_commit, root->fs_info->super_copy,
sizeof(*root->fs_info->super_copy));
*/
#define LOG_WALK_PIN_ONLY 0
#define LOG_WALK_REPLAY_INODES 1
-#define LOG_WALK_REPLAY_ALL 2
+#define LOG_WALK_REPLAY_DIR_INDEX 2
+#define LOG_WALK_REPLAY_ALL 3
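Editor's note: splitting replay into an extra LOG_WALK_REPLAY_DIR_INDEX pass means dir-index items are replayed on their own walk, after inodes but before everything else, so directory sizes are set up before the remaining items land. A toy sketch of that kind of staged dispatch over logged items (stages and key types are invented for illustration, not the btrfs structures):

#include <stdio.h>

enum stage { PIN_ONLY, REPLAY_INODES, REPLAY_DIR_INDEX, REPLAY_ALL };
enum key_type { KEY_INODE, KEY_DIR_INDEX, KEY_DIR_ITEM, KEY_EXTENT };

struct item { enum key_type type; const char *name; };

/* Walk the same item list once per stage, handling only that stage's keys. */
static void replay(const struct item *items, int n)
{
	enum stage stage;
	int i;

	for (stage = REPLAY_INODES; stage <= REPLAY_ALL; stage++) {
		for (i = 0; i < n; i++) {
			const struct item *it = &items[i];

			if (stage == REPLAY_INODES && it->type == KEY_INODE)
				printf("pass %d: inode %s\n", stage, it->name);
			else if (stage == REPLAY_DIR_INDEX &&
				 it->type == KEY_DIR_INDEX)
				printf("pass %d: dir index %s\n", stage, it->name);
			else if (stage == REPLAY_ALL &&
				 (it->type == KEY_DIR_ITEM ||
				  it->type == KEY_EXTENT))
				printf("pass %d: other %s\n", stage, it->name);
		}
	}
}

int main(void)
{
	const struct item items[] = {
		{ KEY_DIR_ITEM,  "dirent a" },
		{ KEY_INODE,     "inode 300" },
		{ KEY_DIR_INDEX, "index 2" },
	};

	replay(items, 3);
	return 0;
}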
static int btrfs_log_inode(struct btrfs_trans_handle *trans,
struct btrfs_root *root, struct inode *inode,
if (inode_item) {
struct btrfs_inode_item *item;
u64 nbytes;
+ u32 mode;
item = btrfs_item_ptr(path->nodes[0], path->slots[0],
struct btrfs_inode_item);
item = btrfs_item_ptr(eb, slot,
struct btrfs_inode_item);
btrfs_set_inode_nbytes(eb, item, nbytes);
+
+ /*
+ * If this is a directory we need to reset the i_size to
+ * 0 so that we can set it up properly when replaying
+ * the rest of the items in this log.
+ */
+ mode = btrfs_inode_mode(eb, item);
+ if (S_ISDIR(mode))
+ btrfs_set_inode_size(eb, item, 0);
}
} else if (inode_item) {
struct btrfs_inode_item *item;
+ u32 mode;
/*
* New inode, set nbytes to 0 so that the nbytes comes out
*/
item = btrfs_item_ptr(eb, slot, struct btrfs_inode_item);
btrfs_set_inode_nbytes(eb, item, 0);
+
+ /*
+ * If this is a directory we need to reset the i_size to 0 so
+ * that we can set it up properly when replaying the rest of
+ * the items in this log.
+ */
+ mode = btrfs_inode_mode(eb, item);
+ if (S_ISDIR(mode))
+ btrfs_set_inode_size(eb, item, 0);
}
insert:
btrfs_release_path(path);
iput(inode);
return -EIO;
}
+
ret = btrfs_add_link(trans, dir, inode, name, name_len, 1, index);
/* FIXME, put inode into FIXUP list */
u8 log_type;
int exists;
int ret = 0;
+ bool update_size = (key->type == BTRFS_DIR_INDEX_KEY);
dir = read_one_inode(root, key->objectid);
if (!dir)
goto insert;
out:
btrfs_release_path(path);
+ if (!ret && update_size) {
+ btrfs_i_size_write(dir, dir->i_size + name_len * 2);
+ ret = btrfs_update_inode(trans, root, dir);
+ }
kfree(name);
iput(dir);
return ret;
name, name_len, log_type, &log_key);
if (ret && ret != -ENOENT)
goto out;
+ update_size = false;
ret = 0;
goto out;
}
if (ret)
break;
}
+
+ if (key.type == BTRFS_DIR_INDEX_KEY &&
+ wc->stage == LOG_WALK_REPLAY_DIR_INDEX) {
+ ret = replay_one_dir_item(wc->trans, root, path,
+ eb, i, &key);
+ if (ret)
+ break;
+ }
+
if (wc->stage < LOG_WALK_REPLAY_ALL)
continue;
eb, i, &key);
if (ret)
break;
- } else if (key.type == BTRFS_DIR_ITEM_KEY ||
- key.type == BTRFS_DIR_INDEX_KEY) {
+ } else if (key.type == BTRFS_DIR_ITEM_KEY) {
ret = replay_one_dir_item(wc->trans, root, path,
eb, i, &key);
if (ret)
int ret = 0;
struct btrfs_root *root;
struct dentry *old_parent = NULL;
+ struct inode *orig_inode = inode;
/*
* for regular files, if its inode is already on disk, we don't
}
while (1) {
- BTRFS_I(inode)->logged_trans = trans->transid;
+ /*
+ * If we are logging a directory then we start with our inode,
+	 * not our parent's inode, so we need to skip setting the
+ * logged_trans so that further down in the log code we don't
+ * think this inode has already been logged.
+ */
+ if (inode != orig_inode)
+ BTRFS_I(inode)->logged_trans = trans->transid;
smp_mb();
if (BTRFS_I(inode)->last_unlink_trans > last_committed) {
fs_devices->rotating = 1;
fs_devices->open_devices++;
- if (device->writeable && !device->is_tgtdev_for_dev_replace) {
+ if (device->writeable &&
+ device->devid != BTRFS_DEV_REPLACE_DEVID) {
fs_devices->rw_devices++;
list_add(&device->dev_alloc_list,
&fs_devices->alloc_list);
if (disk_super->label[0]) {
if (disk_super->label[BTRFS_LABEL_SIZE - 1])
disk_super->label[BTRFS_LABEL_SIZE - 1] = '\0';
- printk(KERN_INFO "device label %s ", disk_super->label);
+ printk(KERN_INFO "btrfs: device label %s ", disk_super->label);
} else {
- printk(KERN_INFO "device fsid %pU ", disk_super->fsid);
+ printk(KERN_INFO "btrfs: device fsid %pU ", disk_super->fsid);
}
printk(KERN_CONT "devid %llu transid %llu %s\n", devid, transid, path);
struct btrfs_device *srcdev)
{
WARN_ON(!mutex_is_locked(&fs_info->fs_devices->device_list_mutex));
+
list_del_rcu(&srcdev->dev_list);
list_del_rcu(&srcdev->dev_alloc_list);
fs_info->fs_devices->num_devices--;
}
if (srcdev->can_discard)
fs_info->fs_devices->num_can_discard--;
- if (srcdev->bdev)
+ if (srcdev->bdev) {
fs_info->fs_devices->open_devices--;
+ /* zero out the old super */
+ btrfs_scratch_superblock(srcdev);
+ }
+
call_rcu(&srcdev->rcu, free_device);
}
struct buffer_head *bh;
sector_t end_block;
int ret = 0; /* Will call free_more_memory() */
+ gfp_t gfp_mask;
- page = find_or_create_page(inode->i_mapping, index,
- (mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
+ gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
+ gfp_mask |= __GFP_MOVABLE;
+ /*
+ * XXX: __getblk_slow() can not really deal with failure and
+ * will endlessly loop on improvised global reclaim. Prefer
+ * looping in the allocator rather than here, at least that
+ * code knows what it's doing.
+ */
+ gfp_mask |= __GFP_NOFAIL;
+
+ page = find_or_create_page(inode->i_mapping, index, gfp_mask);
if (!page)
return ret;
object->fscache.cookie->parent,
object->fscache.cookie->netfs_data,
object->fscache.cookie->flags);
- if (keybuf)
+ if (keybuf && cookie->def)
keylen = cookie->def->get_key(cookie->netfs_data, keybuf,
CACHEFILES_KEYBUF_SIZE);
else
int cachefiles_check_auxdata(struct cachefiles_object *object)
{
struct cachefiles_xattr *auxbuf;
+ enum fscache_checkaux validity;
struct dentry *dentry = object->dentry;
- unsigned int dlen;
+ ssize_t xlen;
int ret;
ASSERT(dentry);
if (!auxbuf)
return -ENOMEM;
- auxbuf->len = vfs_getxattr(dentry, cachefiles_xattr_cache,
- &auxbuf->type, 512 + 1);
- if (auxbuf->len < 1)
- return -ESTALE;
-
- if (auxbuf->type != object->fscache.cookie->def->type)
- return -ESTALE;
+ xlen = vfs_getxattr(dentry, cachefiles_xattr_cache,
+ &auxbuf->type, 512 + 1);
+ ret = -ESTALE;
+ if (xlen < 1 ||
+ auxbuf->type != object->fscache.cookie->def->type)
+ goto error;
- dlen = auxbuf->len - 1;
- ret = fscache_check_aux(&object->fscache, &auxbuf->data, dlen);
+ xlen--;
+ validity = fscache_check_aux(&object->fscache, &auxbuf->data, xlen);
+ if (validity != FSCACHE_CHECKAUX_OKAY)
+ goto error;
+ ret = 0;
+error:
kfree(auxbuf);
- if (ret != FSCACHE_CHECKAUX_OKAY)
- return -ESTALE;
-
- return 0;
+ return ret;
}
/*
{
struct inode *inode;
struct cifs_sb_info *cifs_sb;
+ struct cifs_tcon *tcon;
int rc = 0;
cifs_sb = CIFS_SB(sb);
+ tcon = cifs_sb_master_tcon(cifs_sb);
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIXACL)
sb->s_flags |= MS_POSIXACL;
- if (cifs_sb_master_tcon(cifs_sb)->ses->capabilities & CAP_LARGE_FILES)
+ if (tcon->ses->capabilities & tcon->ses->server->vals->cap_large_files)
sb->s_maxbytes = MAX_LFS_FILESIZE;
else
sb->s_maxbytes = MAX_NON_LFS;
goto out_no_root;
}
- if (cifs_sb_master_tcon(cifs_sb)->nocase)
+ if (tcon->nocase)
sb->s_d_op = &cifs_ci_dentry_ops;
else
sb->s_d_op = &cifs_dentry_ops;
extern const struct export_operations cifs_export_ops;
#endif /* CONFIG_CIFS_NFSD_EXPORT */
-#define CIFS_VERSION "2.01"
+#define CIFS_VERSION "2.02"
#endif /* _CIFSFS_H */
unsigned int max_rw; /* maxRw specifies the maximum */
/* message size the server can send or receive for */
/* SMB_COM_WRITE_RAW or SMB_COM_READ_RAW. */
- unsigned int max_vcs; /* maximum number of smb sessions, at least
- those that can be specified uniquely with
- vcnumbers */
unsigned int capabilities; /* selective disabling of caps by smb sess */
int timeAdj; /* Adjust for difference in server time zone in sec */
__u64 CurrentMid; /* multiplex id - rotating counter */
enum statusEnum status;
unsigned overrideSecFlg; /* if non-zero override global sec flags */
__u16 ipc_tid; /* special tid for connection to IPC share */
- __u16 vcnum;
char *serverOS; /* name of operating system underlying server */
char *serverNOS; /* name of network operating system of server */
char *serverDomain; /* security realm of server */
#define CIFS_FATTR_DELETE_PENDING 0x2
#define CIFS_FATTR_NEED_REVAL 0x4
#define CIFS_FATTR_INO_COLLISION 0x8
+#define CIFS_FATTR_UNKNOWN_NLINK 0x10
struct cifs_fattr {
u32 cf_flags;
__u8 FileName[0];
} __attribute__((packed));
-struct reparse_data {
- __u32 ReparseTag;
- __u16 ReparseDataLength;
+/* For IO_REPARSE_TAG_SYMLINK */
+struct reparse_symlink_data {
+ __le32 ReparseTag;
+ __le16 ReparseDataLength;
__u16 Reserved;
- __u16 SubstituteNameOffset;
- __u16 SubstituteNameLength;
- __u16 PrintNameOffset;
- __u16 PrintNameLength;
- __u32 Flags;
+ __le16 SubstituteNameOffset;
+ __le16 SubstituteNameLength;
+ __le16 PrintNameOffset;
+ __le16 PrintNameLength;
+ __le32 Flags;
+ char PathBuffer[0];
+} __attribute__((packed));
+
+/* For IO_REPARSE_TAG_NFS */
+#define NFS_SPECFILE_LNK 0x00000000014B4E4C
+#define NFS_SPECFILE_CHR 0x0000000000524843
+#define NFS_SPECFILE_BLK 0x00000000004B4C42
+#define NFS_SPECFILE_FIFO 0x000000004F464946
+#define NFS_SPECFILE_SOCK 0x000000004B434F53
+struct reparse_posix_data {
+ __le32 ReparseTag;
+ __le16 ReparseDataLength;
+ __u16 Reserved;
+ __le64 InodeType; /* LNK, FIFO, CHR etc. */
char PathBuffer[0];
} __attribute__((packed));
} __attribute__((packed)) FILE_XATTR_INFO; /* extended attribute info
level 0x205 */
-
-/* flags for chattr command */
-#define EXT_SECURE_DELETE 0x00000001 /* EXT3_SECRM_FL */
-#define EXT_ENABLE_UNDELETE 0x00000002 /* EXT3_UNRM_FL */
-/* Reserved for compress file 0x4 */
-#define EXT_SYNCHRONOUS 0x00000008 /* EXT3_SYNC_FL */
-#define EXT_IMMUTABLE_FL 0x00000010 /* EXT3_IMMUTABLE_FL */
-#define EXT_OPEN_APPEND_ONLY 0x00000020 /* EXT3_APPEND_FL */
-#define EXT_DO_NOT_BACKUP 0x00000040 /* EXT3_NODUMP_FL */
-#define EXT_NO_UPDATE_ATIME 0x00000080 /* EXT3_NOATIME_FL */
-/* 0x100 through 0x800 reserved for compression flags and are GET-ONLY */
-#define EXT_HASH_TREE_INDEXED_DIR 0x00001000 /* GET-ONLY EXT3_INDEX_FL */
-/* 0x2000 reserved for IMAGIC_FL */
-#define EXT_JOURNAL_THIS_FILE 0x00004000 /* GET-ONLY EXT3_JOURNAL_DATA_FL */
-/* 0x8000 reserved for EXT3_NOTAIL_FL */
-#define EXT_SYNCHRONOUS_DIR 0x00010000 /* EXT3_DIRSYNC_FL */
-#define EXT_TOPDIR 0x00020000 /* EXT3_TOPDIR_FL */
-
-#define EXT_SET_MASK 0x000300FF
-#define EXT_GET_MASK 0x0003DFFF
+/* flags for lsattr and chflags commands moved to uapi/linux/fs.h */
typedef struct file_chattr_info {
__le64 mask; /* list of all possible attribute bits */
cifs_max_pending);
set_credits(server, server->maxReq);
server->maxBuf = le16_to_cpu(rsp->MaxBufSize);
- server->max_vcs = le16_to_cpu(rsp->MaxNumberVcs);
/* even though we do not use raw we might as well set this
accurately, in case we ever find a need for it */
if ((le16_to_cpu(rsp->RawMode) & RAW_ENABLE) == RAW_ENABLE) {
bool is_unicode;
unsigned int sub_len;
char *sub_start;
- struct reparse_data *reparse_buf;
+ struct reparse_symlink_data *reparse_buf;
+ struct reparse_posix_data *posix_buf;
__u32 data_offset, data_count;
char *end_of_smb;
goto qreparse_out;
}
end_of_smb = 2 + get_bcc(&pSMBr->hdr) + (char *)&pSMBr->ByteCount;
- reparse_buf = (struct reparse_data *)
+ reparse_buf = (struct reparse_symlink_data *)
((char *)&pSMBr->hdr.Protocol + data_offset);
if ((char *)reparse_buf >= end_of_smb) {
rc = -EIO;
goto qreparse_out;
}
- if ((reparse_buf->PathBuffer + reparse_buf->PrintNameOffset +
- reparse_buf->PrintNameLength) > end_of_smb) {
+ if (reparse_buf->ReparseTag == cpu_to_le32(IO_REPARSE_TAG_NFS)) {
+ cifs_dbg(FYI, "NFS style reparse tag\n");
+ posix_buf = (struct reparse_posix_data *)reparse_buf;
+
+ if (posix_buf->InodeType != cpu_to_le64(NFS_SPECFILE_LNK)) {
+ cifs_dbg(FYI, "unsupported file type 0x%llx\n",
+ le64_to_cpu(posix_buf->InodeType));
+ rc = -EOPNOTSUPP;
+ goto qreparse_out;
+ }
+ is_unicode = true;
+ sub_len = le16_to_cpu(reparse_buf->ReparseDataLength);
+ if (posix_buf->PathBuffer + sub_len > end_of_smb) {
+ cifs_dbg(FYI, "reparse buf beyond SMB\n");
+ rc = -EIO;
+ goto qreparse_out;
+ }
+ *symlinkinfo = cifs_strndup_from_utf16(posix_buf->PathBuffer,
+ sub_len, is_unicode, nls_codepage);
+ goto qreparse_out;
+ } else if (reparse_buf->ReparseTag !=
+ cpu_to_le32(IO_REPARSE_TAG_SYMLINK)) {
+ rc = -EOPNOTSUPP;
+ goto qreparse_out;
+ }
+
+ /* Reparse tag is NTFS symlink */
+ sub_start = le16_to_cpu(reparse_buf->SubstituteNameOffset) +
+ reparse_buf->PathBuffer;
+ sub_len = le16_to_cpu(reparse_buf->SubstituteNameLength);
+ if (sub_start + sub_len > end_of_smb) {
cifs_dbg(FYI, "reparse buf beyond SMB\n");
rc = -EIO;
goto qreparse_out;
}
- sub_start = reparse_buf->SubstituteNameOffset + reparse_buf->PathBuffer;
- sub_len = reparse_buf->SubstituteNameLength;
if (pSMBr->hdr.Flags2 & SMBFLG2_UNICODE)
is_unicode = true;
else
if (server->ops->close)
server->ops->close(xid, tcon, &fid);
cifs_del_pending_open(&open);
+ fput(file);
rc = -ENOMEM;
}
/*
* Reads as many pages as possible from fscache. Returns -ENOBUFS
* immediately if the cookie is negative
+ *
+ * After this point, every page in the list might have PG_fscache set,
+ * so we will need to clean that up off of every page we don't use.
*/
rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list,
&num_pages);
kref_put(&rdata->refcount, cifs_readdata_release);
}
+ /* Any pages that have been shown to fscache but didn't get added to
+ * the pagecache must be uncached before they get returned to the
+ * allocator.
+ */
+ cifs_fscache_readpages_cancel(mapping->host, page_list);
return rc;
}
fscache_uncache_page(CIFS_I(inode)->fscache, page);
}
+void __cifs_fscache_readpages_cancel(struct inode *inode, struct list_head *pages)
+{
+ cifs_dbg(FYI, "%s: (fsc: %p, i: %p)\n",
+ __func__, CIFS_I(inode)->fscache, inode);
+ fscache_readpages_cancel(CIFS_I(inode)->fscache, pages);
+}
+
void __cifs_fscache_invalidate_page(struct page *page, struct inode *inode)
{
struct cifsInodeInfo *cifsi = CIFS_I(inode);
struct address_space *,
struct list_head *,
unsigned *);
+extern void __cifs_fscache_readpages_cancel(struct inode *, struct list_head *);
extern void __cifs_readpage_to_fscache(struct inode *, struct page *);
__cifs_readpage_to_fscache(inode, page);
}
+static inline void cifs_fscache_readpages_cancel(struct inode *inode,
+ struct list_head *pages)
+{
+ if (CIFS_I(inode)->fscache)
+ return __cifs_fscache_readpages_cancel(inode, pages);
+}
+
#else /* CONFIG_CIFS_FSCACHE */
static inline int cifs_fscache_register(void) { return 0; }
static inline void cifs_fscache_unregister(void) {}
static inline void cifs_readpage_to_fscache(struct inode *inode,
struct page *page) {}
+static inline void cifs_fscache_readpages_cancel(struct inode *inode,
+ struct list_head *pages)
+{
+}
+
#endif /* CONFIG_CIFS_FSCACHE */
#endif /* _CIFS_FSCACHE_H */
cifs_i->invalid_mapping = true;
}
+/*
+ * copy nlink to the inode, unless it wasn't provided. Provide
+ * sane values if we don't have an existing one and none was provided
+ */
+static void
+cifs_nlink_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
+{
+ /*
+ * if we're in a situation where we can't trust what we
+ * got from the server (readdir, some non-unix cases)
+ * fake reasonable values
+ */
+ if (fattr->cf_flags & CIFS_FATTR_UNKNOWN_NLINK) {
+ /* only provide fake values on a new inode */
+ if (inode->i_state & I_NEW) {
+ if (fattr->cf_cifsattrs & ATTR_DIRECTORY)
+ set_nlink(inode, 2);
+ else
+ set_nlink(inode, 1);
+ }
+ return;
+ }
+
+ /* we trust the server, so update it */
+ set_nlink(inode, fattr->cf_nlink);
+}
+
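Editor's note: the helper only trusts the server-supplied link count when the unknown-nlink flag is clear; otherwise it leaves an existing inode's value alone and fabricates a sane default (2 for directories, 1 for files) only on freshly created inodes. A condensed sketch of that decision (field and flag names are simplified stand-ins, not the cifs structures):

#include <stdbool.h>
#include <stdio.h>

struct fake_inode {
	unsigned int nlink;
	bool is_new;		/* stands in for I_NEW */
	bool is_dir;
};

/* Apply an nlink value we may or may not be able to trust. */
static void apply_nlink(struct fake_inode *inode, unsigned int server_nlink,
			bool nlink_unknown)
{
	if (nlink_unknown) {
		/* Only invent a value for brand-new inodes. */
		if (inode->is_new)
			inode->nlink = inode->is_dir ? 2 : 1;
		return;
	}
	/* The server value is trustworthy: use it. */
	inode->nlink = server_nlink;
}

int main(void)
{
	struct fake_inode dir  = { .nlink = 0, .is_new = true,  .is_dir = true };
	struct fake_inode file = { .nlink = 5, .is_new = false, .is_dir = false };

	apply_nlink(&dir, 0, true);	/* unknown: fake 2 for a new directory */
	apply_nlink(&file, 3, false);	/* known: take the server's 3 */
	printf("dir nlink=%u file nlink=%u\n", dir.nlink, file.nlink);
	return 0;
}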
/* populate an inode with info from a cifs_fattr struct */
void
cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr)
inode->i_mtime = fattr->cf_mtime;
inode->i_ctime = fattr->cf_ctime;
inode->i_rdev = fattr->cf_rdev;
- set_nlink(inode, fattr->cf_nlink);
+ cifs_nlink_fattr_to_inode(inode, fattr);
inode->i_uid = fattr->cf_uid;
inode->i_gid = fattr->cf_gid;
fattr->cf_bytes = le64_to_cpu(info->AllocationSize);
fattr->cf_createtime = le64_to_cpu(info->CreationTime);
+ fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks);
if (fattr->cf_cifsattrs & ATTR_DIRECTORY) {
fattr->cf_mode = S_IFDIR | cifs_sb->mnt_dir_mode;
fattr->cf_dtype = DT_DIR;
* Server can return wrong NumberOfLinks value for directories
* when Unix extensions are disabled - fake it.
*/
- fattr->cf_nlink = 2;
+ if (!tcon->unix_ext)
+ fattr->cf_flags |= CIFS_FATTR_UNKNOWN_NLINK;
} else if (fattr->cf_cifsattrs & ATTR_REPARSE) {
fattr->cf_mode = S_IFLNK;
fattr->cf_dtype = DT_LNK;
if (fattr->cf_cifsattrs & ATTR_READONLY)
fattr->cf_mode &= ~(S_IWUGO);
- fattr->cf_nlink = le32_to_cpu(info->NumberOfLinks);
- if (fattr->cf_nlink < 1) {
- cifs_dbg(1, "replacing bogus file nlink value %u\n",
+ /*
+ * Don't accept zero nlink from non-unix servers unless
+ * delete is pending. Instead mark it as unknown.
+ */
+ if ((fattr->cf_nlink < 1) && !tcon->unix_ext &&
+ !info->DeletePending) {
+ cifs_dbg(1, "bogus file nlink value %u\n",
fattr->cf_nlink);
- fattr->cf_nlink = 1;
+ fattr->cf_flags |= CIFS_FATTR_UNKNOWN_NLINK;
}
}
ERRDOS, ERRnoaccess, 0xc0000290}, {
ERRDOS, ERRbadfunc, 0xc000029c}, {
ERRDOS, ERRsymlink, NT_STATUS_STOPPED_ON_SYMLINK}, {
- ERRDOS, ERRinvlevel, 0x007c0001}, };
+ ERRDOS, ERRinvlevel, 0x007c0001}, {
+ 0, 0, 0 }
+};
/*****************************************************************************
Print an error message from the status code
fattr->cf_dtype = DT_REG;
}
+ /* non-unix readdir doesn't provide nlink */
+ fattr->cf_flags |= CIFS_FATTR_UNKNOWN_NLINK;
+
if (fattr->cf_cifsattrs & ATTR_READONLY)
fattr->cf_mode &= ~S_IWUGO;
#include <linux/slab.h>
#include "cifs_spnego.h"
-/*
- * Checks if this is the first smb session to be reconnected after
- * the socket has been reestablished (so we know whether to use vc 0).
- * Called while holding the cifs_tcp_ses_lock, so do not block
- */
-static bool is_first_ses_reconnect(struct cifs_ses *ses)
-{
- struct list_head *tmp;
- struct cifs_ses *tmp_ses;
-
- list_for_each(tmp, &ses->server->smb_ses_list) {
- tmp_ses = list_entry(tmp, struct cifs_ses,
- smb_ses_list);
- if (tmp_ses->need_reconnect == false)
- return false;
- }
- /* could not find a session that was already connected,
- this must be the first one we are reconnecting */
- return true;
-}
-
-/*
- * vc number 0 is treated specially by some servers, and should be the
- * first one we request. After that we can use vcnumbers up to maxvcs,
- * one for each smb session (some Windows versions set maxvcs incorrectly
- * so maxvc=1 can be ignored). If we have too many vcs, we can reuse
- * any vc but zero (some servers reset the connection on vcnum zero)
- *
- */
-static __le16 get_next_vcnum(struct cifs_ses *ses)
-{
- __u16 vcnum = 0;
- struct list_head *tmp;
- struct cifs_ses *tmp_ses;
- __u16 max_vcs = ses->server->max_vcs;
- __u16 i;
- int free_vc_found = 0;
-
- /* Quoting the MS-SMB specification: "Windows-based SMB servers set this
- field to one but do not enforce this limit, which allows an SMB client
- to establish more virtual circuits than allowed by this value ... but
- other server implementations can enforce this limit." */
- if (max_vcs < 2)
- max_vcs = 0xFFFF;
-
- spin_lock(&cifs_tcp_ses_lock);
- if ((ses->need_reconnect) && is_first_ses_reconnect(ses))
- goto get_vc_num_exit; /* vcnum will be zero */
- for (i = ses->server->srv_count - 1; i < max_vcs; i++) {
- if (i == 0) /* this is the only connection, use vc 0 */
- break;
-
- free_vc_found = 1;
-
- list_for_each(tmp, &ses->server->smb_ses_list) {
- tmp_ses = list_entry(tmp, struct cifs_ses,
- smb_ses_list);
- if (tmp_ses->vcnum == i) {
- free_vc_found = 0;
- break; /* found duplicate, try next vcnum */
- }
- }
- if (free_vc_found)
- break; /* we found a vcnumber that will work - use it */
- }
-
- if (i == 0)
- vcnum = 0; /* for most common case, ie if one smb session, use
- vc zero. Also for case when no free vcnum, zero
- is safest to send (some clients only send zero) */
- else if (free_vc_found == 0)
- vcnum = 1; /* we can not reuse vc=0 safely, since some servers
- reset all uids on that, but 1 is ok. */
- else
- vcnum = i;
- ses->vcnum = vcnum;
-get_vc_num_exit:
- spin_unlock(&cifs_tcp_ses_lock);
-
- return cpu_to_le16(vcnum);
-}
-
static __u32 cifs_ssetup_hdr(struct cifs_ses *ses, SESSION_SETUP_ANDX *pSMB)
{
__u32 capabilities = 0;
CIFSMaxBufSize + MAX_CIFS_HDR_SIZE - 4,
USHRT_MAX));
pSMB->req.MaxMpxCount = cpu_to_le16(ses->server->maxReq);
- pSMB->req.VcNumber = get_next_vcnum(ses);
+ pSMB->req.VcNumber = __constant_cpu_to_le16(1);
/* Now no need to set SMBFLG_CASELESS or obsolete CANONICAL PATH */
return NTLMv2;
if (global_secflags & CIFSSEC_MAY_NTLM)
return NTLM;
- /* Fallthrough */
default:
- return Unspecified;
+ /* Fallthrough to attempt LANMAN authentication next */
+ break;
}
case CIFS_NEGFLAVOR_LANMAN:
switch (requested) {
else
return -EIO;
+ /* no need to send SMB logoff if uid already closed due to reconnect */
+ if (ses->need_reconnect)
+ goto smb2_session_already_dead;
+
rc = small_smb2_init(SMB2_LOGOFF, NULL, (void **) &req);
if (rc)
return rc;
* No tcon so can't do
* cifs_stats_inc(&tcon->stats.smb2_stats.smb2_com_fail[SMB2...]);
*/
+
+smb2_session_already_dead:
return rc;
}
#define FSCTL_QUERY_NETWORK_INTERFACE_INFO 0x001401FC /* BB add struct */
#define FSCTL_SRV_READ_HASH 0x001441BB /* BB add struct */
+/* See MS-FSCC 2.1.2.5 */
#define IO_REPARSE_TAG_MOUNT_POINT 0xA0000003
#define IO_REPARSE_TAG_HSM 0xC0000004
#define IO_REPARSE_TAG_SIS 0x80000007
+#define IO_REPARSE_TAG_HSM2 0x80000006
+#define IO_REPARSE_TAG_DRIVER_EXTENDER 0x80000005
+/* Used by the DFS filter. See MS-DFSC */
+#define IO_REPARSE_TAG_DFS 0x8000000A
+/* Used by the DFS filter. See MS-DFSC */
+#define IO_REPARSE_TAG_DFSR 0x80000012
+#define IO_REPARSE_TAG_FILTER_MANAGER 0x8000000B
+/* See section MS-FSCC 2.1.2.4 */
+#define IO_REPARSE_TAG_SYMLINK 0xA000000C
+#define IO_REPARSE_TAG_DEDUP 0x80000013
+#define IO_REPARSE_APPXSTREAM 0xC0000014
+/* NFS symlinks, Win 8/SMB3 and later */
+#define IO_REPARSE_TAG_NFS 0x80000014
/* fsctl flags */
/* If Flags is set to this value, the request is an FSCTL not ioctl request */
wait_for_free_request(struct TCP_Server_Info *server, const int timeout,
const int optype)
{
- return wait_for_free_credits(server, timeout,
- server->ops->get_credits_field(server, optype));
+ int *val;
+
+ val = server->ops->get_credits_field(server, optype);
+ /* Since an echo is already inflight, no need to wait to send another */
+ if (*val <= 0 && optype == CIFS_ECHO_OP)
+ return -EAGAIN;
+ return wait_for_free_credits(server, timeout, val);
}
static int allocate_mid(struct cifs_ses *ses, struct smb_hdr *in_buf,
* If ref is non-zero, then decrement the refcount too.
* Returns dentry requiring refcount drop, or NULL if we're done.
*/
-static inline struct dentry *
+static struct dentry *
dentry_kill(struct dentry *dentry, int unlock_on_failure)
__releases(dentry->d_lock)
{
goto kill_it;
}
- dentry->d_flags |= DCACHE_REFERENCED;
+ if (!(dentry->d_flags & DCACHE_REFERENCED))
+ dentry->d_flags |= DCACHE_REFERENCED;
dentry_lru_add(dentry);
dentry->d_lockref.count--;
* list is non-empty and continue searching.
*/
-/**
- * have_submounts - check for mounts over a dentry
- * @parent: dentry to check.
- *
- * Return true if the parent or its subdirectories contain
- * a mount point
- */
-
static enum d_walk_ret check_mount(void *data, struct dentry *dentry)
{
int *ret = data;
return D_WALK_CONTINUE;
}
+/**
+ * have_submounts - check for mounts over a dentry
+ * @parent: dentry to check.
+ *
+ * Return true if the parent or its subdirectories contain
+ * a mount point
+ */
int have_submounts(struct dentry *parent)
{
int ret = 0;
struct page *page)
{
return ecryptfs_lower_header_size(crypt_stat) +
- (page->index << PAGE_CACHE_SHIFT);
+ ((loff_t)page->index << PAGE_CACHE_SHIFT);
}
/**
struct ecryptfs_msg_ctx *msg_ctx;
struct ecryptfs_message *msg = NULL;
char *auth_tok_sig;
- char *payload;
+ char *payload = NULL;
size_t payload_len = 0;
int rc;
}
out:
kfree(msg);
+ kfree(payload);
return rc;
}
#include <linux/mutex.h>
#include <linux/anon_inodes.h>
#include <linux/device.h>
-#include <linux/freezer.h>
#include <asm/uaccess.h>
#include <asm/io.h>
#include <asm/mman.h>
}
spin_unlock_irqrestore(&ep->lock, flags);
- if (!freezable_schedule_hrtimeout_range(to, slack,
- HRTIMER_MODE_ABS))
+ if (!schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS))
timed_out = 1;
spin_lock_irqsave(&ep->lock, flags);
d_tmpfile(dentry, inode);
err = ext3_orphan_add(handle, inode);
if (err)
- goto err_drop_inode;
+ goto err_unlock_inode;
mark_inode_dirty(inode);
unlock_new_inode(inode);
}
if (err == -ENOSPC && ext3_should_retry_alloc(dir->i_sb, &retries))
goto retry;
return err;
-err_drop_inode:
+err_unlock_inode:
ext3_journal_stop(handle);
unlock_new_inode(inode);
- iput(inode);
return err;
}
break;
}
blk_finish_plug(&plug);
- if (!ret && !cycled) {
+ if (!ret && !cycled && wbc->nr_to_write > 0) {
cycled = 1;
mpd.last_page = writeback_index - 1;
mpd.first_page = 0;
d_tmpfile(dentry, inode);
err = ext4_orphan_add(handle, inode);
if (err)
- goto err_drop_inode;
+ goto err_unlock_inode;
mark_inode_dirty(inode);
unlock_new_inode(inode);
}
if (err == -ENOSPC && ext4_should_retry_alloc(dir->i_sb, &retries))
goto retry;
return err;
-err_drop_inode:
+err_unlock_inode:
ext4_journal_stop(handle);
unlock_new_inode(inode);
- iput(inode);
return err;
}
s_min_extra_isize) {
tried_min_extra_isize++;
new_extra_isize = s_min_extra_isize;
+ kfree(is); is = NULL;
+ kfree(bs); bs = NULL;
goto retry;
}
error = -1;
delayed_fput(NULL);
}
-static DECLARE_WORK(delayed_fput_work, delayed_fput);
+static DECLARE_DELAYED_WORK(delayed_fput_work, delayed_fput);
void fput(struct file *file)
{
}
if (llist_add(&file->f_u.fu_llist, &delayed_fput_list))
- schedule_work(&delayed_fput_work);
+ schedule_delayed_work(&delayed_fput_work, 1);
}
}
fscache_operation_init(op, NULL, NULL);
op->flags = FSCACHE_OP_MYTHREAD |
- (1 << FSCACHE_OP_WAITING);
+ (1 << FSCACHE_OP_WAITING) |
+ (1 << FSCACHE_OP_UNUSE_COOKIE);
spin_lock(&cookie->lock);
struct inode *inode;
struct dentry *parent;
struct fuse_conn *fc;
+ struct fuse_inode *fi;
int ret;
inode = ACCESS_ONCE(entry->d_inode);
if (!err && !outarg.nodeid)
err = -ENOENT;
if (!err) {
- struct fuse_inode *fi = get_fuse_inode(inode);
+ fi = get_fuse_inode(inode);
if (outarg.nodeid != get_node_id(inode)) {
fuse_queue_forget(fc, forget, outarg.nodeid, 1);
goto invalid;
attr_version);
fuse_change_entry_timeout(entry, &outarg);
} else if (inode) {
- fc = get_fuse_conn(inode);
- if (fc->readdirplus_auto) {
+ fi = get_fuse_inode(inode);
+ if (flags & LOOKUP_RCU) {
+ if (test_bit(FUSE_I_INIT_RDPLUS, &fi->state))
+ return -ECHILD;
+ } else if (test_and_clear_bit(FUSE_I_INIT_RDPLUS, &fi->state)) {
parent = dget_parent(entry);
fuse_advise_use_readdirplus(parent->d_inode);
dput(parent);
invalid:
ret = 0;
- if (check_submounts_and_drop(entry) != 0)
+
+ if (!(flags & LOOKUP_RCU) && check_submounts_and_drop(entry) != 0)
ret = 1;
goto out;
}
struct fuse_access_in inarg;
int err;
+ BUG_ON(mask & MAY_NOT_BLOCK);
+
if (fc->no_access)
return 0;
noticed immediately, only after the attribute
timeout has expired */
} else if (mask & (MAY_ACCESS | MAY_CHDIR)) {
- if (mask & MAY_NOT_BLOCK)
- return -ECHILD;
-
err = fuse_access(inode, mask);
} else if ((mask & MAY_EXEC) && S_ISREG(inode->i_mode)) {
if (!(inode->i_mode & S_IXUGO)) {
}
found:
+ if (fc->readdirplus_auto)
+ set_bit(FUSE_I_INIT_RDPLUS, &get_fuse_inode(inode)->state);
fuse_change_entry_timeout(dentry, o);
err = 0;
{
struct fuse_file *ff = file->private_data;
struct inode *inode = file->f_inode;
+ struct fuse_inode *fi = get_fuse_inode(inode);
struct fuse_conn *fc = ff->fc;
struct fuse_req *req;
struct fuse_fallocate_in inarg = {
if (lock_inode) {
mutex_lock(&inode->i_mutex);
- if (mode & FALLOC_FL_PUNCH_HOLE)
- fuse_set_nowrite(inode);
+ if (mode & FALLOC_FL_PUNCH_HOLE) {
+ loff_t endbyte = offset + length - 1;
+ err = filemap_write_and_wait_range(inode->i_mapping,
+ offset, endbyte);
+ if (err)
+ goto out;
+
+ fuse_sync_writes(inode);
+ }
}
+ if (!(mode & FALLOC_FL_KEEP_SIZE))
+ set_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
+
req = fuse_get_req_nopages(fc);
if (IS_ERR(req)) {
err = PTR_ERR(req);
fuse_invalidate_attr(inode);
out:
- if (lock_inode) {
- if (mode & FALLOC_FL_PUNCH_HOLE)
- fuse_release_nowrite(inode);
+ if (!(mode & FALLOC_FL_KEEP_SIZE))
+ clear_bit(FUSE_I_SIZE_UNSTABLE, &fi->state);
+
+ if (lock_inode)
mutex_unlock(&inode->i_mutex);
- }
return err;
}
enum {
/** Advise readdirplus */
FUSE_I_ADVISE_RDPLUS,
+ /** Initialized with readdirplus */
+ FUSE_I_INIT_RDPLUS,
/** An operation changing file size is in progress */
FUSE_I_SIZE_UNSTABLE,
};
mark_inode_dirty(inode);
d_instantiate(dentry, inode);
- if (file)
+ if (file) {
+ *opened |= FILE_CREATED;
error = finish_open(file, dentry, gfs2_open_common, opened);
+ }
gfs2_glock_dq_uninit(ghs);
gfs2_glock_dq_uninit(ghs + 1);
return error;
if (insert_inode_locked(inode) < 0) {
rc = -EINVAL;
- goto fail_unlock;
+ goto fail_put;
}
inode_init_owner(inode, parent, mode);
fail_drop:
dquot_drop(inode);
inode->i_flags |= S_NOQUOTA;
-fail_unlock:
clear_nlink(inode);
unlock_new_inode(inode);
fail_put:
* path_mountpoint - look up a path to be umounted
* @dfd: directory file descriptor to start walk from
* @name: full pathname to walk
+ * @path: pointer to container for result
* @flags: lookup flags
*
* Look up the given name, but don't attempt to revalidate the last component.
- * Returns 0 and "path" will be valid on success; Retuns error otherwise.
+ * Returns 0 and "path" will be valid on success; Returns error otherwise.
*/
static int
path_mountpoint(int dfd, const char *name, struct path *path, unsigned int flags)
int acc_mode;
int create_error = 0;
struct dentry *const DENTRY_NOT_SET = (void *) -1UL;
+ bool excl;
BUG_ON(dentry->d_inode);
if ((open_flag & O_CREAT) && !IS_POSIXACL(dir))
mode &= ~current_umask();
- if ((open_flag & (O_EXCL | O_CREAT)) == (O_EXCL | O_CREAT)) {
+ excl = (open_flag & (O_EXCL | O_CREAT)) == (O_EXCL | O_CREAT);
+ if (excl)
open_flag &= ~O_TRUNC;
- *opened |= FILE_CREATED;
- }
/*
	 * Checking write permission is tricky, because we don't know if we are
goto out;
}
- acc_mode = op->acc_mode;
- if (*opened & FILE_CREATED) {
- fsnotify_create(dir, dentry);
- acc_mode = MAY_OPEN;
- }
-
if (error) { /* returned 1, that is */
if (WARN_ON(file->f_path.dentry == DENTRY_NOT_SET)) {
error = -EIO;
dput(dentry);
dentry = file->f_path.dentry;
}
- if (create_error && dentry->d_inode == NULL) {
- error = create_error;
- goto out;
+ if (*opened & FILE_CREATED)
+ fsnotify_create(dir, dentry);
+ if (!dentry->d_inode) {
+ WARN_ON(*opened & FILE_CREATED);
+ if (create_error) {
+ error = create_error;
+ goto out;
+ }
+ } else {
+ if (excl && !(*opened & FILE_CREATED)) {
+ error = -EEXIST;
+ goto out;
+ }
}
goto looked_up;
}
* We didn't have the inode before the open, so check open permission
* here.
*/
+ acc_mode = op->acc_mode;
+ if (*opened & FILE_CREATED) {
+ WARN_ON(!(open_flag & O_CREAT));
+ fsnotify_create(dir, dentry);
+ acc_mode = MAY_OPEN;
+ }
error = may_open(&file->f_path, acc_mode, open_flag);
if (error)
fput(file);
{
int err;
+ if ((open_flags & (O_CREAT | O_EXCL)) == (O_CREAT | O_EXCL))
+ *opened |= FILE_CREATED;
+
err = finish_open(file, dentry, do_open, opened);
if (err)
goto out;
trace_nfs_atomic_open_enter(dir, ctx, open_flags);
nfs_block_sillyrename(dentry->d_parent);
- inode = NFS_PROTO(dir)->open_context(dir, ctx, open_flags, &attr);
+ inode = NFS_PROTO(dir)->open_context(dir, ctx, open_flags, &attr, opened);
nfs_unblock_sillyrename(dentry->d_parent);
if (IS_ERR(inode)) {
err = PTR_ERR(inode);
struct inode *dir;
unsigned openflags = filp->f_flags;
struct iattr attr;
+ int opened = 0;
int err;
/*
nfs_wb_all(inode);
}
- inode = NFS_PROTO(dir)->open_context(dir, ctx, openflags, &attr);
+ inode = NFS_PROTO(dir)->open_context(dir, ctx, openflags, &attr, &opened);
if (IS_ERR(inode)) {
err = PTR_ERR(inode);
switch (err) {
if (status)
goto out_put;
+ smp_wmb();
ds->ds_clp = clp;
dprintk("%s [new] addr: %s\n", __func__, ds->ds_remotestr);
out:
struct nfs4_file_layout_dsaddr *dsaddr = FILELAYOUT_LSEG(lseg)->dsaddr;
struct nfs4_pnfs_ds *ds = dsaddr->ds_list[ds_idx];
struct nfs4_deviceid_node *devid = FILELAYOUT_DEVID_NODE(lseg);
-
- if (filelayout_test_devid_unavailable(devid))
- return NULL;
+ struct nfs4_pnfs_ds *ret = ds;
if (ds == NULL) {
printk(KERN_ERR "NFS: %s: No data server for offset index %d\n",
__func__, ds_idx);
filelayout_mark_devid_invalid(devid);
- return NULL;
+ goto out;
}
+ smp_rmb();
if (ds->ds_clp)
- return ds;
+ goto out_test_devid;
if (test_and_set_bit(NFS4DS_CONNECTING, &ds->ds_state) == 0) {
struct nfs_server *s = NFS_SERVER(lseg->pls_layout->plh_inode);
int err;
err = nfs4_ds_connect(s, ds);
- if (err) {
+ if (err)
nfs4_mark_deviceid_unavailable(devid);
- ds = NULL;
- }
nfs4_clear_ds_conn_bit(ds);
} else {
/* Either ds is connected, or ds is NULL */
nfs4_wait_ds_connect(ds);
}
- return ds;
+out_test_devid:
+ if (filelayout_test_devid_unavailable(devid))
+ ret = NULL;
+out:
+ return ret;
}
module_param(dataserver_retrans, uint, 0644);
struct iattr attrs;
unsigned long timestamp;
unsigned int rpc_done : 1;
+ unsigned int file_created : 1;
unsigned int is_recover : 1;
int rpc_status;
int cancelled;
nfs_fattr_map_and_free_names(server, &data->f_attr);
- if (o_arg->open_flags & O_CREAT)
+ if (o_arg->open_flags & O_CREAT) {
update_changeattr(dir, &o_res->cinfo);
+ if (o_arg->open_flags & O_EXCL)
+ data->file_created = 1;
+ else if (o_res->cinfo.before != o_res->cinfo.after)
+ data->file_created = 1;
+ }
if ((o_res->rflags & NFS4_OPEN_RESULT_LOCKTYPE_POSIX) == 0)
server->caps &= ~NFS_CAP_POSIX_LOCK;
if(o_res->rflags & NFS4_OPEN_RESULT_CONFIRM) {
struct nfs_open_context *ctx,
int flags,
struct iattr *sattr,
- struct nfs4_label *label)
+ struct nfs4_label *label,
+ int *opened)
{
struct nfs4_state_owner *sp;
struct nfs4_state *state = NULL;
nfs_setsecurity(state->inode, opendata->o_res.f_attr, olabel);
}
}
+ if (opendata->file_created)
+ *opened |= FILE_CREATED;
if (pnfs_use_threshold(ctx_th, opendata->f_attr.mdsthreshold, server))
*ctx_th = opendata->f_attr.mdsthreshold;
struct nfs_open_context *ctx,
int flags,
struct iattr *sattr,
- struct nfs4_label *label)
+ struct nfs4_label *label,
+ int *opened)
{
struct nfs_server *server = NFS_SERVER(dir);
struct nfs4_exception exception = { };
int status;
do {
- status = _nfs4_do_open(dir, ctx, flags, sattr, label);
+ status = _nfs4_do_open(dir, ctx, flags, sattr, label, opened);
res = ctx->state;
trace_nfs4_open_file(ctx, flags, status);
if (status == 0)
}
static struct inode *
-nfs4_atomic_open(struct inode *dir, struct nfs_open_context *ctx, int open_flags, struct iattr *attr)
+nfs4_atomic_open(struct inode *dir, struct nfs_open_context *ctx,
+ int open_flags, struct iattr *attr, int *opened)
{
struct nfs4_state *state;
struct nfs4_label l = {0, 0, 0, NULL}, *label = NULL;
label = nfs4_label_init_security(dir, ctx->dentry, attr, &l);
/* Protect against concurrent sillydeletes */
- state = nfs4_do_open(dir, ctx, open_flags, attr, label);
+ state = nfs4_do_open(dir, ctx, open_flags, attr, label, opened);
nfs4_label_release_security(label);
struct nfs4_label l, *ilabel = NULL;
struct nfs_open_context *ctx;
struct nfs4_state *state;
+ int opened = 0;
int status = 0;
ctx = alloc_nfs_open_context(dentry, FMODE_READ);
ilabel = nfs4_label_init_security(dir, dentry, sattr, &l);
sattr->ia_mode &= ~current_umask();
- state = nfs4_do_open(dir, ctx, flags, sattr, ilabel);
+ state = nfs4_do_open(dir, ctx, flags, sattr, ilabel, &opened);
if (IS_ERR(state)) {
status = PTR_ERR(state);
goto out;
{
int err;
struct page *page;
- rpc_authflavor_t flavor;
+ rpc_authflavor_t flavor = RPC_AUTH_MAXFLAVOR;
struct nfs4_secinfo_flavors *flavors;
+ struct nfs4_secinfo4 *secinfo;
+ int i;
page = alloc_page(GFP_KERNEL);
if (!page) {
if (err)
goto out_freepage;
- flavor = nfs_find_best_sec(flavors);
- if (err == 0)
- err = nfs4_lookup_root_sec(server, fhandle, info, flavor);
+ for (i = 0; i < flavors->num_flavors; i++) {
+ secinfo = &flavors->flavors[i];
+
+ switch (secinfo->flavor) {
+ case RPC_AUTH_NULL:
+ case RPC_AUTH_UNIX:
+ case RPC_AUTH_GSS:
+ flavor = rpcauth_get_pseudoflavor(secinfo->flavor,
+ &secinfo->flavor_info);
+ break;
+ default:
+ flavor = RPC_AUTH_MAXFLAVOR;
+ break;
+ }
+
+ if (flavor != RPC_AUTH_MAXFLAVOR) {
+ err = nfs4_lookup_root_sec(server, fhandle,
+ info, flavor);
+ if (!err)
+ break;
+ }
+ }
+
+ if (flavor == RPC_AUTH_MAXFLAVOR)
+ err = -EPERM;
out_freepage:
put_page(page);
clear_buffer_nilfs_volatile(bh);
clear_buffer_nilfs_checked(bh);
clear_buffer_nilfs_redirected(bh);
+ clear_buffer_async_write(bh);
clear_buffer_dirty(bh);
if (nilfs_page_buffers_clean(page))
__nilfs_clear_page_dirty(page);
"discard block %llu, size %zu",
(u64)bh->b_blocknr, bh->b_size);
}
+ clear_buffer_async_write(bh);
clear_buffer_dirty(bh);
clear_buffer_nilfs_volatile(bh);
clear_buffer_nilfs_checked(bh);
bh = head = page_buffers(page);
do {
- if (!buffer_dirty(bh))
+ if (!buffer_dirty(bh) || buffer_async_write(bh))
continue;
get_bh(bh);
list_add_tail(&bh->b_assoc_buffers, listp);
for (i = 0; i < pagevec_count(&pvec); i++) {
bh = head = page_buffers(pvec.pages[i]);
do {
- if (buffer_dirty(bh)) {
+ if (buffer_dirty(bh) &&
+ !buffer_async_write(bh)) {
get_bh(bh);
list_add_tail(&bh->b_assoc_buffers,
listp);
list_for_each_entry(bh, &segbuf->sb_segsum_buffers,
b_assoc_buffers) {
+ set_buffer_async_write(bh);
if (bh->b_page != bd_page) {
if (bd_page) {
lock_page(bd_page);
list_for_each_entry(bh, &segbuf->sb_payload_buffers,
b_assoc_buffers) {
+ set_buffer_async_write(bh);
if (bh == segbuf->sb_super_root) {
if (bh->b_page != bd_page) {
lock_page(bd_page);
list_for_each_entry(segbuf, logs, sb_list) {
list_for_each_entry(bh, &segbuf->sb_segsum_buffers,
b_assoc_buffers) {
+ clear_buffer_async_write(bh);
if (bh->b_page != bd_page) {
if (bd_page)
end_page_writeback(bd_page);
list_for_each_entry(bh, &segbuf->sb_payload_buffers,
b_assoc_buffers) {
+ clear_buffer_async_write(bh);
if (bh == segbuf->sb_super_root) {
if (bh->b_page != bd_page) {
end_page_writeback(bd_page);
b_assoc_buffers) {
set_buffer_uptodate(bh);
clear_buffer_dirty(bh);
+ clear_buffer_async_write(bh);
if (bh->b_page != bd_page) {
if (bd_page)
end_page_writeback(bd_page);
b_assoc_buffers) {
set_buffer_uptodate(bh);
clear_buffer_dirty(bh);
+ clear_buffer_async_write(bh);
clear_buffer_delay(bh);
clear_buffer_nilfs_volatile(bh);
clear_buffer_nilfs_redirected(bh);
*/
if (inode == NULL) {
unsigned long gen = (unsigned long) dentry->d_fsdata;
- unsigned long pgen =
- OCFS2_I(dentry->d_parent->d_inode)->ip_dir_lock_gen;
-
+ unsigned long pgen;
+ spin_lock(&dentry->d_lock);
+ pgen = OCFS2_I(dentry->d_parent->d_inode)->ip_dir_lock_gen;
+ spin_unlock(&dentry->d_lock);
trace_ocfs2_dentry_revalidate_negative(dentry->d_name.len,
dentry->d_name.name,
pgen, gen);
{
int tmp, hangup_needed = 0;
struct ocfs2_super *osb = NULL;
- char nodestr[8];
+ char nodestr[12];
trace_ocfs2_dismount_volume(sb);
/**
* finish_open - finish opening a file
- * @od: opaque open data
+ * @file: file pointer
* @dentry: pointer to dentry
* @open: open callback
+ * @opened: state of open
*
* This can be used to finish opening a file passed to i_op->atomic_open().
*
* If the open callback is set to NULL, then the standard f_op->open()
* filesystem callback is substituted.
+ *
+ * NB: the dentry reference is _not_ consumed. If, for example, the dentry is
+ * the return value of d_splice_alias(), then the caller needs to perform dput()
+ * on it after finish_open().
+ *
+ * On successful return @file is a fully instantiated open file. After this, if
+ * an error occurs in ->atomic_open(), it needs to clean up with fput().
+ *
+ * Returns zero on success or -errno if the open failed.
*/
int finish_open(struct file *file, struct dentry *dentry,
int (*open)(struct inode *, struct file *),
/**
* finish_no_open - finish ->atomic_open() without opening the file
*
- * @od: opaque open data
+ * @file: file pointer
* @dentry: dentry or NULL (as returned from ->lookup())
*
* This can be used to set the result of a successful lookup in ->atomic_open().
- * The filesystem's atomic_open() method shall return NULL after calling this.
+ *
+ * NB: unlike finish_open() this function does consume the dentry reference and
+ * the caller need not dput() it.
+ *
+ * Returns "1" which must be the return value of ->atomic_open() after having
+ * called this function.
*/
int finish_no_open(struct file *file, struct dentry *dentry)
{
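
For context, a minimal sketch of how an ->atomic_open() implementation typically ties finish_open(), finish_no_open() and the *opened flag together (the same flag the NFS hunks above thread through nfs4_do_open()). The myfs_* helpers are hypothetical placeholders, not kernel API:

	static int myfs_atomic_open(struct inode *dir, struct dentry *dentry,
				    struct file *file, unsigned open_flags,
				    umode_t mode, int *opened)
	{
		struct dentry *res;
		int err;

		/* Hypothetical helper: returns a referenced dentry and sets
		 * FILE_CREATED in *opened if it created the file. */
		res = myfs_lookup_or_create(dir, dentry, open_flags, mode, opened);
		if (IS_ERR(res))
			return PTR_ERR(res);

		if (!myfs_can_open_atomically(res, open_flags)) {
			/* finish_no_open() consumes the dentry reference and
			 * returns 1, which must be propagated to the VFS. */
			return finish_no_open(file, res);
		}

		/* finish_open() does not consume the dentry reference. */
		err = finish_open(file, res, NULL, opened);
		dput(res);
		return err;
	}
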
static unsigned long proc_reg_get_unmapped_area(struct file *file, unsigned long orig_addr, unsigned long len, unsigned long pgoff, unsigned long flags)
{
struct proc_dir_entry *pde = PDE(file_inode(file));
- int rv = -EIO;
- unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
+ unsigned long rv = -EIO;
+ unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long) = NULL;
if (use_pde(pde)) {
- get_unmapped_area = pde->proc_fops->get_unmapped_area;
+#ifdef CONFIG_MMU
+ get_unmapped_area = current->mm->get_unmapped_area;
+#endif
+ if (pde->proc_fops->get_unmapped_area)
+ get_unmapped_area = pde->proc_fops->get_unmapped_area;
if (get_unmapped_area)
rv = get_unmapped_area(file, orig_addr, len, pgoff, flags);
unuse_pde(pde);
frame = pte_pfn(pte);
flags = PM_PRESENT;
page = vm_normal_page(vma, addr, pte);
+ if (pte_soft_dirty(pte))
+ flags2 |= __PM_SOFT_DIRTY;
} else if (is_swap_pte(pte)) {
swp_entry_t entry;
if (pte_swp_soft_dirty(pte))
if (page && !PageAnon(page))
flags |= PM_FILE;
- if ((vma->vm_flags & VM_SOFTDIRTY) || pte_soft_dirty(pte))
+ if ((vma->vm_flags & VM_SOFTDIRTY))
flags2 |= __PM_SOFT_DIRTY;
*pme = make_pme(PM_PFRAME(frame) | PM_STATUS2(pm->v2, flags2) | flags);
int err, ret;
ret = -EIO;
- err = zlib_inflateInit(&stream);
+ err = zlib_inflateInit2(&stream, WINDOW_BITS);
if (err != Z_OK)
goto error;
static void allocate_buf_for_compression(void)
{
size_t size;
+ size_t cmpr;
+
+ switch (psinfo->bufsize) {
+ /* buffer range for efivars */
+ case 1000 ... 2000:
+ cmpr = 56;
+ break;
+ case 2001 ... 3000:
+ cmpr = 54;
+ break;
+ case 3001 ... 3999:
+ cmpr = 52;
+ break;
+ /* buffer range for nvram, erst */
+ case 4000 ... 10000:
+ cmpr = 45;
+ break;
+ default:
+ cmpr = 60;
+ break;
+ }
- big_oops_buf_sz = (psinfo->bufsize * 100) / 45;
+ big_oops_buf_sz = (psinfo->bufsize * 100) / cmpr;
big_oops_buf = kmalloc(big_oops_buf_sz, GFP_KERNEL);
if (big_oops_buf) {
size = max(zlib_deflate_workspacesize(WINDOW_BITS, MEM_LEVEL),
compressed = true;
total_len = zipped_len;
} else {
- pr_err("pstore: compression failed for Part %d"
- " returned %d\n", part, zipped_len);
- pr_err("pstore: Capture uncompressed"
- " oops/panic report of Part %d\n", part);
compressed = false;
total_len = copy_kmsg_to_buffer(hsize, len);
}
return NULL;
}
-static int newer_jl_done(struct reiserfs_journal_cnode *cn)
-{
- struct super_block *sb = cn->sb;
- b_blocknr_t blocknr = cn->blocknr;
-
- cn = cn->hprev;
- while (cn) {
- if (cn->sb == sb && cn->blocknr == blocknr && cn->jlist &&
- atomic_read(&cn->jlist->j_commit_left) != 0)
- return 0;
- cn = cn->hprev;
- }
- return 1;
-}
-
static void remove_journal_hash(struct super_block *,
struct reiserfs_journal_cnode **,
struct reiserfs_journal_list *, unsigned long,
reiserfs_warning(s, "clm-2048", "called with wcount %d",
atomic_read(&journal->j_wcount));
}
- BUG_ON(jl->j_trans_id == 0);
/* if flushall == 0, the lock is already held */
if (flushall) {
return err;
}
-static int test_transaction(struct super_block *s,
- struct reiserfs_journal_list *jl)
-{
- struct reiserfs_journal_cnode *cn;
-
- if (jl->j_len == 0 || atomic_read(&jl->j_nonzerolen) == 0)
- return 1;
-
- cn = jl->j_realblock;
- while (cn) {
- /* if the blocknr == 0, this has been cleared from the hash,
- ** skip it
- */
- if (cn->blocknr == 0) {
- goto next;
- }
- if (cn->bh && !newer_jl_done(cn))
- return 0;
- next:
- cn = cn->next;
- cond_resched();
- }
- return 0;
-}
-
static int write_one_transaction(struct super_block *s,
struct reiserfs_journal_list *jl,
struct buffer_chunk *chunk)
break;
tjl = JOURNAL_LIST_ENTRY(tjl->j_list.next);
}
+ get_journal_list(jl);
+ get_journal_list(flush_jl);
/* try to find a group of blocks we can flush across all the
** transactions, but only bother if we've actually spanned
** across multiple lists
ret = kupdate_transactions(s, jl, &tjl, &trans_id, len, i);
}
flush_journal_list(s, flush_jl, 1);
+ put_journal_list(s, flush_jl);
+ put_journal_list(s, jl);
return 0;
}
return 1;
}
-static void flush_old_journal_lists(struct super_block *s)
-{
- struct reiserfs_journal *journal = SB_JOURNAL(s);
- struct reiserfs_journal_list *jl;
- struct list_head *entry;
- time_t now = get_seconds();
-
- while (!list_empty(&journal->j_journal_list)) {
- entry = journal->j_journal_list.next;
- jl = JOURNAL_LIST_ENTRY(entry);
- /* this check should always be run, to send old lists to disk */
- if (jl->j_timestamp < (now - (JOURNAL_MAX_TRANS_AGE * 4)) &&
- atomic_read(&jl->j_commit_left) == 0 &&
- test_transaction(s, jl)) {
- flush_used_journal_lists(s, jl);
- } else {
- break;
- }
- }
-}
-
/*
** long and ugly. If flush, will not return until all commit
** blocks and all real buffers in the trans are on disk.
}
}
}
- flush_old_journal_lists(sb);
journal->j_current_jl->j_list_bitmap =
get_list_bitmap(sb, journal->j_current_jl);
set_current_state(state);
if (!pwq->triggered)
- rc = freezable_schedule_hrtimeout_range(expires, slack,
- HRTIMER_MODE_ABS);
+ rc = schedule_hrtimeout_range(expires, slack, HRTIMER_MODE_ABS);
__set_current_state(TASK_RUNNING);
/*
m->read_pos = offset;
retval = file->f_pos = offset;
}
+ } else {
+ file->f_pos = offset;
}
}
file->f_version = m->version;
int fd_statfs(int fd, struct kstatfs *st)
{
- struct fd f = fdget(fd);
+ struct fd f = fdget_raw(fd);
int error = -EBADF;
if (f.file) {
error = vfs_statfs(&f.file->f_path, st);
*/
static inline void destroy_super(struct super_block *s)
{
+ list_lru_destroy(&s->s_dentry_lru);
+ list_lru_destroy(&s->s_inode_lru);
#ifdef CONFIG_SMP
free_percpu(s->s_files);
#endif
/* caches are now gone, we can safely kill the shrinker now */
unregister_shrinker(&s->s_shrink);
- list_lru_destroy(&s->s_dentry_lru);
- list_lru_destroy(&s->s_inode_lru);
put_filesystem(fs);
put_super(s);
sbi->s_sb = sb;
sbi->s_block_base = 0;
sbi->s_type = FSTYPE_V7;
+ mutex_init(&sbi->s_lock);
sb->s_fs_info = sbi;
sb_set_blocksize(sb, 512);
{
struct super_block *sb = inode->i_sb;
struct udf_sb_info *sbi = UDF_SB(sb);
+ struct logicalVolIntegrityDescImpUse *lvidiu = udf_sb_lvidiu(sb);
- mutex_lock(&sbi->s_alloc_mutex);
- if (sbi->s_lvid_bh) {
- struct logicalVolIntegrityDescImpUse *lvidiu =
- udf_sb_lvidiu(sbi);
+ if (lvidiu) {
+ mutex_lock(&sbi->s_alloc_mutex);
if (S_ISDIR(inode->i_mode))
le32_add_cpu(&lvidiu->numDirs, -1);
else
le32_add_cpu(&lvidiu->numFiles, -1);
udf_updated_lvid(sb);
+ mutex_unlock(&sbi->s_alloc_mutex);
}
- mutex_unlock(&sbi->s_alloc_mutex);
udf_free_blocks(sb, NULL, &UDF_I(inode)->i_location, 0, 1);
}
uint32_t start = UDF_I(dir)->i_location.logicalBlockNum;
struct udf_inode_info *iinfo;
struct udf_inode_info *dinfo = UDF_I(dir);
+ struct logicalVolIntegrityDescImpUse *lvidiu;
inode = new_inode(sb);
return NULL;
}
- if (sbi->s_lvid_bh) {
- struct logicalVolIntegrityDescImpUse *lvidiu;
-
+ lvidiu = udf_sb_lvidiu(sb);
+ if (lvidiu) {
iinfo->i_unique = lvid_get_unique_id(sb);
mutex_lock(&sbi->s_alloc_mutex);
- lvidiu = udf_sb_lvidiu(sbi);
if (S_ISDIR(mode))
le32_add_cpu(&lvidiu->numDirs, 1);
else
static int udf_statfs(struct dentry *, struct kstatfs *);
static int udf_show_options(struct seq_file *, struct dentry *);
-struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct udf_sb_info *sbi)
+struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct super_block *sb)
{
- struct logicalVolIntegrityDesc *lvid =
- (struct logicalVolIntegrityDesc *)sbi->s_lvid_bh->b_data;
- __u32 number_of_partitions = le32_to_cpu(lvid->numOfPartitions);
- __u32 offset = number_of_partitions * 2 *
- sizeof(uint32_t)/sizeof(uint8_t);
+ struct logicalVolIntegrityDesc *lvid;
+ unsigned int partnum;
+ unsigned int offset;
+
+ if (!UDF_SB(sb)->s_lvid_bh)
+ return NULL;
+ lvid = (struct logicalVolIntegrityDesc *)UDF_SB(sb)->s_lvid_bh->b_data;
+ partnum = le32_to_cpu(lvid->numOfPartitions);
+ if ((sb->s_blocksize - sizeof(struct logicalVolIntegrityDescImpUse) -
+ offsetof(struct logicalVolIntegrityDesc, impUse)) /
+ (2 * sizeof(uint32_t)) < partnum) {
+ udf_err(sb, "Logical volume integrity descriptor corrupted "
+ "(numOfPartitions = %u)!\n", partnum);
+ return NULL;
+ }
+ /* The offset is to skip freeSpaceTable and sizeTable arrays */
+ offset = partnum * 2 * sizeof(uint32_t);
return (struct logicalVolIntegrityDescImpUse *)&(lvid->impUse[offset]);
}
struct udf_options uopt;
struct udf_sb_info *sbi = UDF_SB(sb);
int error = 0;
+ struct logicalVolIntegrityDescImpUse *lvidiu = udf_sb_lvidiu(sb);
- if (sbi->s_lvid_bh) {
- int write_rev = le16_to_cpu(udf_sb_lvidiu(sbi)->minUDFWriteRev);
+ if (lvidiu) {
+ int write_rev = le16_to_cpu(lvidiu->minUDFWriteRev);
if (write_rev > UDF_MAX_WRITE_VERSION && !(*flags & MS_RDONLY))
return -EACCES;
}
if (!bh)
return;
-
- mutex_lock(&sbi->s_alloc_mutex);
lvid = (struct logicalVolIntegrityDesc *)bh->b_data;
- lvidiu = udf_sb_lvidiu(sbi);
+ lvidiu = udf_sb_lvidiu(sb);
+ if (!lvidiu)
+ return;
+ mutex_lock(&sbi->s_alloc_mutex);
lvidiu->impIdent.identSuffix[0] = UDF_OS_CLASS_UNIX;
lvidiu->impIdent.identSuffix[1] = UDF_OS_ID_LINUX;
udf_time_to_disk_stamp(&lvid->recordingDateAndTime,
if (!bh)
return;
+ lvid = (struct logicalVolIntegrityDesc *)bh->b_data;
+ lvidiu = udf_sb_lvidiu(sb);
+ if (!lvidiu)
+ return;
mutex_lock(&sbi->s_alloc_mutex);
- lvid = (struct logicalVolIntegrityDesc *)bh->b_data;
- lvidiu = udf_sb_lvidiu(sbi);
lvidiu->impIdent.identSuffix[0] = UDF_OS_CLASS_UNIX;
lvidiu->impIdent.identSuffix[1] = UDF_OS_ID_LINUX;
udf_time_to_disk_stamp(&lvid->recordingDateAndTime, CURRENT_TIME);
if (sbi->s_lvid_bh) {
struct logicalVolIntegrityDescImpUse *lvidiu =
- udf_sb_lvidiu(sbi);
- uint16_t minUDFReadRev = le16_to_cpu(lvidiu->minUDFReadRev);
- uint16_t minUDFWriteRev = le16_to_cpu(lvidiu->minUDFWriteRev);
- /* uint16_t maxUDFWriteRev =
- le16_to_cpu(lvidiu->maxUDFWriteRev); */
+ udf_sb_lvidiu(sb);
+ uint16_t minUDFReadRev;
+ uint16_t minUDFWriteRev;
+ if (!lvidiu) {
+ ret = -EINVAL;
+ goto error_out;
+ }
+ minUDFReadRev = le16_to_cpu(lvidiu->minUDFReadRev);
+ minUDFWriteRev = le16_to_cpu(lvidiu->minUDFWriteRev);
if (minUDFReadRev > UDF_MAX_READ_VERSION) {
udf_err(sb, "minUDFReadRev=%x (max is %x)\n",
- le16_to_cpu(lvidiu->minUDFReadRev),
+ minUDFReadRev,
UDF_MAX_READ_VERSION);
ret = -EINVAL;
goto error_out;
struct logicalVolIntegrityDescImpUse *lvidiu;
u64 id = huge_encode_dev(sb->s_bdev->bd_dev);
- if (sbi->s_lvid_bh != NULL)
- lvidiu = udf_sb_lvidiu(sbi);
- else
- lvidiu = NULL;
-
+ lvidiu = udf_sb_lvidiu(sb);
buf->f_type = UDF_SUPER_MAGIC;
buf->f_bsize = sb->s_blocksize;
buf->f_blocks = sbi->s_partmaps[sbi->s_partition].s_partition_len;
return sb->s_fs_info;
}
-struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct udf_sb_info *sbi);
+struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct super_block *sb);
int udf_compute_nr_groups(struct super_block *sb, u32 partition);
else if (aborted) {
ASSERT(XFS_FORCED_SHUTDOWN(lip->li_mountp));
if (lip->li_flags & XFS_LI_IN_AIL) {
+ spin_lock(&lip->li_ailp->xa_lock);
xfs_trans_ail_delete(lip->li_ailp, lip,
SHUTDOWN_LOG_IO_ERROR);
}
/* start with smaller blk num */
forward = nodehdr.forw < nodehdr.back;
for (i = 0; i < 2; forward = !forward, i++) {
+ struct xfs_da3_icnode_hdr thdr;
if (forward)
blkno = nodehdr.forw;
else
return(error);
node = bp->b_addr;
- xfs_da3_node_hdr_from_disk(&nodehdr, node);
+ xfs_da3_node_hdr_from_disk(&thdr, node);
xfs_trans_brelse(state->args->trans, bp);
- if (count - nodehdr.count >= 0)
+ if (count - thdr.count >= 0)
break; /* fits with at least 25% to spare */
}
if (i >= 2) {
/*
* Create entry for .
*/
- dep = xfs_dir3_data_dot_entry_p(hdr);
+ dep = xfs_dir3_data_dot_entry_p(mp, hdr);
dep->inumber = cpu_to_be64(dp->i_ino);
dep->namelen = 1;
dep->name[0] = '.';
/*
* Create entry for ..
*/
- dep = xfs_dir3_data_dotdot_entry_p(hdr);
+ dep = xfs_dir3_data_dotdot_entry_p(mp, hdr);
dep->inumber = cpu_to_be64(xfs_dir2_sf_get_parent_ino(sfp));
dep->namelen = 2;
dep->name[0] = dep->name[1] = '.';
blp[1].hashval = cpu_to_be32(xfs_dir_hash_dotdot);
blp[1].address = cpu_to_be32(xfs_dir2_byte_to_dataptr(mp,
(char *)dep - (char *)hdr));
- offset = xfs_dir3_data_first_offset(hdr);
+ offset = xfs_dir3_data_first_offset(mp);
/*
* Loop over existing entries, stuff them in.
*/
/*
* Offsets of . and .. in data space (always block 0)
*
- * The macros are used for shortform directories as they have no headers to read
- * the magic number out of. Shortform directories need to know the size of the
- * data block header because the sfe embeds the block offset of the entry into
- * it so that it doesn't change when format conversion occurs. Bad Things Happen
- * if we don't follow this rule.
- *
* XXX: there is scope for significant optimisation of the logic here. Right
* now we are checking for "dir3 format" over and over again. Ideally we should
* only do it once for each operation.
*/
-#define XFS_DIR3_DATA_DOT_OFFSET(mp) \
- xfs_dir3_data_hdr_size(xfs_sb_version_hascrc(&(mp)->m_sb))
-#define XFS_DIR3_DATA_DOTDOT_OFFSET(mp) \
- (XFS_DIR3_DATA_DOT_OFFSET(mp) + xfs_dir3_data_entsize(mp, 1))
-#define XFS_DIR3_DATA_FIRST_OFFSET(mp) \
- (XFS_DIR3_DATA_DOTDOT_OFFSET(mp) + xfs_dir3_data_entsize(mp, 2))
-
static inline xfs_dir2_data_aoff_t
-xfs_dir3_data_dot_offset(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_dot_offset(struct xfs_mount *mp)
{
- return xfs_dir3_data_entry_offset(hdr);
+ return xfs_dir3_data_hdr_size(xfs_sb_version_hascrc(&mp->m_sb));
}
static inline xfs_dir2_data_aoff_t
-xfs_dir3_data_dotdot_offset(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_dotdot_offset(struct xfs_mount *mp)
{
- bool dir3 = hdr->magic == cpu_to_be32(XFS_DIR3_DATA_MAGIC) ||
- hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC);
- return xfs_dir3_data_dot_offset(hdr) +
- __xfs_dir3_data_entsize(dir3, 1);
+ return xfs_dir3_data_dot_offset(mp) +
+ xfs_dir3_data_entsize(mp, 1);
}
static inline xfs_dir2_data_aoff_t
-xfs_dir3_data_first_offset(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_first_offset(struct xfs_mount *mp)
{
- bool dir3 = hdr->magic == cpu_to_be32(XFS_DIR3_DATA_MAGIC) ||
- hdr->magic == cpu_to_be32(XFS_DIR3_BLOCK_MAGIC);
- return xfs_dir3_data_dotdot_offset(hdr) +
- __xfs_dir3_data_entsize(dir3, 2);
+ return xfs_dir3_data_dotdot_offset(mp) +
+ xfs_dir3_data_entsize(mp, 2);
}
/*
* location of . and .. in data space (always block 0)
*/
static inline struct xfs_dir2_data_entry *
-xfs_dir3_data_dot_entry_p(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_dot_entry_p(
+ struct xfs_mount *mp,
+ struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
- ((char *)hdr + xfs_dir3_data_dot_offset(hdr));
+ ((char *)hdr + xfs_dir3_data_dot_offset(mp));
}
static inline struct xfs_dir2_data_entry *
-xfs_dir3_data_dotdot_entry_p(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_dotdot_entry_p(
+ struct xfs_mount *mp,
+ struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
- ((char *)hdr + xfs_dir3_data_dotdot_offset(hdr));
+ ((char *)hdr + xfs_dir3_data_dotdot_offset(mp));
}
static inline struct xfs_dir2_data_entry *
-xfs_dir3_data_first_entry_p(struct xfs_dir2_data_hdr *hdr)
+xfs_dir3_data_first_entry_p(
+ struct xfs_mount *mp,
+ struct xfs_dir2_data_hdr *hdr)
{
return (struct xfs_dir2_data_entry *)
- ((char *)hdr + xfs_dir3_data_first_offset(hdr));
+ ((char *)hdr + xfs_dir3_data_first_offset(mp));
}
/*
* mp->m_dirdatablk.
*/
dot_offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk,
- XFS_DIR3_DATA_DOT_OFFSET(mp));
+ xfs_dir3_data_dot_offset(mp));
dotdot_offset = xfs_dir2_db_off_to_dataptr(mp, mp->m_dirdatablk,
- XFS_DIR3_DATA_DOTDOT_OFFSET(mp));
+ xfs_dir3_data_dotdot_offset(mp));
/*
* Put . entry unless we're starting past it.
* to insert the new entry.
* If it's going to end up at the end then oldsfep will point there.
*/
- for (offset = XFS_DIR3_DATA_FIRST_OFFSET(mp),
+ for (offset = xfs_dir3_data_first_offset(mp),
oldsfep = xfs_dir2_sf_firstentry(oldsfp),
add_datasize = xfs_dir3_data_entsize(mp, args->namelen),
eof = (char *)oldsfep == &buf[old_isize];
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
size = xfs_dir3_data_entsize(mp, args->namelen);
- offset = XFS_DIR3_DATA_FIRST_OFFSET(mp);
+ offset = xfs_dir3_data_first_offset(mp);
sfep = xfs_dir2_sf_firstentry(sfp);
holefit = 0;
/*
mp = dp->i_mount;
sfp = (xfs_dir2_sf_hdr_t *)dp->i_df.if_u1.if_data;
- offset = XFS_DIR3_DATA_FIRST_OFFSET(mp);
+ offset = xfs_dir3_data_first_offset(mp);
ino = xfs_dir2_sf_get_parent_ino(sfp);
i8count = ino > XFS_DIR2_MAX_SHORT_INUM;
struct kmem_zone *xfs_qm_dqtrxzone;
static struct kmem_zone *xfs_qm_dqzone;
-static struct lock_class_key xfs_dquot_other_class;
+static struct lock_class_key xfs_dquot_group_class;
+static struct lock_class_key xfs_dquot_project_class;
/*
* This is called to free all the memory associated with a dquot
* Make sure group quotas have a different lock class than user
* quotas.
*/
- if (!(type & XFS_DQ_USER))
- lockdep_set_class(&dqp->q_qlock, &xfs_dquot_other_class);
+ switch (type) {
+ case XFS_DQ_USER:
+ /* uses the default lock class */
+ break;
+ case XFS_DQ_GROUP:
+ lockdep_set_class(&dqp->q_qlock, &xfs_dquot_group_class);
+ break;
+ case XFS_DQ_PROJ:
+ lockdep_set_class(&dqp->q_qlock, &xfs_dquot_project_class);
+ break;
+ default:
+ ASSERT(0);
+ break;
+ }
XFS_STATS_INC(xs_qm_dquot);
/* XFS_IOC_GETBIOSIZE ---- deprecated 47 */
#define XFS_IOC_GETBMAPX _IOWR('X', 56, struct getbmap)
#define XFS_IOC_ZERO_RANGE _IOW ('X', 57, struct xfs_flock64)
-#define XFS_IOC_FREE_EOFBLOCKS _IOR ('X', 58, struct xfs_eofblocks)
+#define XFS_IOC_FREE_EOFBLOCKS _IOR ('X', 58, struct xfs_fs_eofblocks)
/*
* ioctl commands that replace IRIX syssgi()'s
ip->i_itemp = NULL;
}
- /* asserts to verify all state is correct here */
- ASSERT(atomic_read(&ip->i_pincount) == 0);
- ASSERT(!spin_is_locked(&ip->i_flags_lock));
- ASSERT(!xfs_isiflocked(ip));
-
/*
* Because we use RCU freeing we need to ensure the inode always
* appears to be reclaimed with an invalid inode number when in the
ip->i_ino = 0;
spin_unlock(&ip->i_flags_lock);
+ /* asserts to verify all state is correct here */
+ ASSERT(atomic_read(&ip->i_pincount) == 0);
+ ASSERT(!xfs_isiflocked(ip));
+
call_rcu(&VFS_I(ip)->i_rcu, xfs_inode_free_callback);
}
"bad number of regions (%d) in inode log format",
in_f->ilf_size);
ASSERT(0);
+ kmem_free(ptr);
return XFS_ERROR(EIO);
}
* magic number. If we don't recognise the magic number in the buffer, then
* return a LSN of -1 so that the caller knows it was an unrecognised block and
* so can recover the buffer.
+ *
+ * Note: we cannot rely solely on magic number matches to determine that the
+ * buffer has a valid LSN - we also need to verify that it belongs to this
+ * filesystem, so we need to extract the object's UUID and compare it to that
+ * which we read from the superblock. If the UUIDs don't match, then we've got a
+ * stale metadata block from an old filesystem instance that we need to recover
+ * over the top of.
*/
static xfs_lsn_t
xlog_recover_get_buf_lsn(
__uint16_t magic16;
__uint16_t magicda;
void *blk = bp->b_addr;
+ uuid_t *uuid;
+ xfs_lsn_t lsn = -1;
/* v4 filesystems always recover immediately */
if (!xfs_sb_version_hascrc(&mp->m_sb))
case XFS_ABTB_MAGIC:
case XFS_ABTC_MAGIC:
case XFS_IBT_CRC_MAGIC:
- case XFS_IBT_MAGIC:
- return be64_to_cpu(
- ((struct xfs_btree_block *)blk)->bb_u.s.bb_lsn);
+ case XFS_IBT_MAGIC: {
+ struct xfs_btree_block *btb = blk;
+
+ lsn = be64_to_cpu(btb->bb_u.s.bb_lsn);
+ uuid = &btb->bb_u.s.bb_uuid;
+ break;
+ }
case XFS_BMAP_CRC_MAGIC:
- case XFS_BMAP_MAGIC:
- return be64_to_cpu(
- ((struct xfs_btree_block *)blk)->bb_u.l.bb_lsn);
+ case XFS_BMAP_MAGIC: {
+ struct xfs_btree_block *btb = blk;
+
+ lsn = be64_to_cpu(btb->bb_u.l.bb_lsn);
+ uuid = &btb->bb_u.l.bb_uuid;
+ break;
+ }
case XFS_AGF_MAGIC:
- return be64_to_cpu(((struct xfs_agf *)blk)->agf_lsn);
+ lsn = be64_to_cpu(((struct xfs_agf *)blk)->agf_lsn);
+ uuid = &((struct xfs_agf *)blk)->agf_uuid;
+ break;
case XFS_AGFL_MAGIC:
- return be64_to_cpu(((struct xfs_agfl *)blk)->agfl_lsn);
+ lsn = be64_to_cpu(((struct xfs_agfl *)blk)->agfl_lsn);
+ uuid = &((struct xfs_agfl *)blk)->agfl_uuid;
+ break;
case XFS_AGI_MAGIC:
- return be64_to_cpu(((struct xfs_agi *)blk)->agi_lsn);
+ lsn = be64_to_cpu(((struct xfs_agi *)blk)->agi_lsn);
+ uuid = &((struct xfs_agi *)blk)->agi_uuid;
+ break;
case XFS_SYMLINK_MAGIC:
- return be64_to_cpu(((struct xfs_dsymlink_hdr *)blk)->sl_lsn);
+ lsn = be64_to_cpu(((struct xfs_dsymlink_hdr *)blk)->sl_lsn);
+ uuid = &((struct xfs_dsymlink_hdr *)blk)->sl_uuid;
+ break;
case XFS_DIR3_BLOCK_MAGIC:
case XFS_DIR3_DATA_MAGIC:
case XFS_DIR3_FREE_MAGIC:
- return be64_to_cpu(((struct xfs_dir3_blk_hdr *)blk)->lsn);
+ lsn = be64_to_cpu(((struct xfs_dir3_blk_hdr *)blk)->lsn);
+ uuid = &((struct xfs_dir3_blk_hdr *)blk)->uuid;
+ break;
case XFS_ATTR3_RMT_MAGIC:
- return be64_to_cpu(((struct xfs_attr3_rmt_hdr *)blk)->rm_lsn);
+ lsn = be64_to_cpu(((struct xfs_attr3_rmt_hdr *)blk)->rm_lsn);
+ uuid = &((struct xfs_attr3_rmt_hdr *)blk)->rm_uuid;
+ break;
case XFS_SB_MAGIC:
- return be64_to_cpu(((struct xfs_dsb *)blk)->sb_lsn);
+ lsn = be64_to_cpu(((struct xfs_dsb *)blk)->sb_lsn);
+ uuid = &((struct xfs_dsb *)blk)->sb_uuid;
+ break;
default:
break;
}
+ if (lsn != (xfs_lsn_t)-1) {
+ if (!uuid_equal(&mp->m_sb.sb_uuid, uuid))
+ goto recover_immediately;
+ return lsn;
+ }
+
magicda = be16_to_cpu(((struct xfs_da_blkinfo *)blk)->magic);
switch (magicda) {
case XFS_DIR3_LEAF1_MAGIC:
case XFS_DIR3_LEAFN_MAGIC:
case XFS_DA3_NODE_MAGIC:
- return be64_to_cpu(((struct xfs_da3_blkinfo *)blk)->lsn);
+ lsn = be64_to_cpu(((struct xfs_da3_blkinfo *)blk)->lsn);
+ uuid = &((struct xfs_da3_blkinfo *)blk)->uuid;
+ break;
default:
break;
}
+ if (lsn != (xfs_lsn_t)-1) {
+ if (!uuid_equal(&mp->m_sb.sb_uuid, uuid))
+ goto recover_immediately;
+ return lsn;
+ }
+
/*
* We do individual object checks on dquot and inode buffers as they
* have their own individual LSN records. Also, we could have a stale
unsigned int physical_node_count;
struct list_head physical_node_list;
struct mutex physical_node_lock;
- struct list_head power_dependent;
void (*remove)(struct acpi_device *);
};
acpi_status acpi_remove_pm_notifier(struct acpi_device *adev,
acpi_notify_handler handler);
int acpi_pm_device_sleep_state(struct device *, int *, int);
-void acpi_dev_pm_add_dependent(acpi_handle handle, struct device *depdev);
-void acpi_dev_pm_remove_dependent(acpi_handle handle, struct device *depdev);
#else
static inline acpi_status acpi_add_pm_notifier(struct acpi_device *adev,
acpi_notify_handler handler,
return (m >= ACPI_STATE_D0 && m <= ACPI_STATE_D3_COLD) ?
m : ACPI_STATE_D0;
}
-static inline void acpi_dev_pm_add_dependent(acpi_handle handle,
- struct device *depdev) {}
-static inline void acpi_dev_pm_remove_dependent(acpi_handle handle,
- struct device *depdev) {}
#endif
#ifdef CONFIG_PM_RUNTIME
return mk_pte(page, pgprot);
}
-static inline int huge_pte_write(pte_t pte)
+static inline unsigned long huge_pte_write(pte_t pte)
{
return pte_write(pte);
}
-static inline int huge_pte_dirty(pte_t pte)
+static inline unsigned long huge_pte_dirty(pte_t pte)
{
return pte_dirty(pte);
}
+/* no content, but patch(1) dislikes empty files */
extern int drm_rmctx(struct drm_device *dev, void *data,
struct drm_file *file_priv);
-extern void drm_legacy_ctxbitmap_init(struct drm_device *dev);
-extern void drm_legacy_ctxbitmap_cleanup(struct drm_device *dev);
-extern void drm_legacy_ctxbitmap_release(struct drm_device *dev,
- struct drm_file *file_priv);
+extern int drm_ctxbitmap_init(struct drm_device *dev);
+extern void drm_ctxbitmap_cleanup(struct drm_device *dev);
+extern void drm_ctxbitmap_free(struct drm_device *dev, int ctx_handle);
extern int drm_setsareactx(struct drm_device *dev, void *data,
struct drm_file *file_priv);
{0x1002, 0x130F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x1310, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x1311, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x1312, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x1313, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x1315, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x1316, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x1317, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x131B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x131C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x131D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x3150, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY}, \
{0x1002, 0x3151, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x3152, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV380|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
#define PULL_UP (1 << 4)
#define ALTELECTRICALSEL (1 << 5)
-/* 34xx specific mux bit defines */
+/* omap3/4/5 specific mux bit defines */
#define INPUT_EN (1 << 8)
#define OFF_EN (1 << 9)
#define OFFOUT_EN (1 << 10)
#define OFF_PULL_EN (1 << 12)
#define OFF_PULL_UP (1 << 13)
#define WAKEUP_EN (1 << 14)
-
-/* 44xx specific mux bit defines */
#define WAKEUP_EVENT (1 << 15)
/* Active pin states */
}
/*
+ * isolated_balloon_page - identify an isolated balloon page on private
+ * compaction/migration page lists.
+ *
+ * After a compaction thread isolates a balloon page for migration, it raises
+ * the page refcount to prevent concurrent compaction threads from re-isolating
+ * the same page. For that reason putback_movable_pages(), or other routines
+ * that need to identify isolated balloon pages on private pagelists, cannot
+ * rely on balloon_page_movable() to accomplish the task.
+ */
+static inline bool isolated_balloon_page(struct page *page)
+{
+ /* Already isolated balloon pages, by default, have a raised refcount */
+ if (page_flags_cleared(page) && !page_mapped(page) &&
+ page_count(page) >= 2)
+ return __is_movable_balloon_page(page);
+
+ return false;
+}
+
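
A hedged sketch of the kind of caller the new helper targets: a putback loop over a private isolation list that must not feed an already-isolated balloon page back onto the LRU. balloon_page_putback() and putback_lru_page() are existing helpers; my_putback_isolated() is a placeholder name.

	static void my_putback_isolated(struct list_head *pages)
	{
		struct page *page, *next;

		list_for_each_entry_safe(page, next, pages, lru) {
			list_del(&page->lru);
			if (unlikely(isolated_balloon_page(page)))
				balloon_page_putback(page);
			else
				putback_lru_page(page);
		}
	}
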
+/*
* balloon_page_insert - insert a page into the balloon's page list and make
* the page->mapping assignment accordingly.
* @page : page to be assigned as a 'balloon page'
return false;
}
+static inline bool isolated_balloon_page(struct page *page)
+{
+ return false;
+}
+
static inline bool balloon_page_isolate(struct page *page)
{
return false;
struct bcma_device *core, bool enable);
extern void bcma_core_pci_up(struct bcma_bus *bus);
extern void bcma_core_pci_down(struct bcma_bus *bus);
+extern void bcma_core_pci_power_save(struct bcma_bus *bus, bool up);
extern int bcma_core_pci_pcibios_map_irq(const struct pci_dev *dev);
extern int bcma_core_pci_plat_dev_init(struct pci_dev *dev);
return blk_queue_get_max_sectors(q, rq->cmd_flags);
}
+static inline unsigned int blk_rq_count_bios(struct request *rq)
+{
+ unsigned int nr_bios = 0;
+ struct bio *bio;
+
+ __rq_for_each_bio(bio, rq)
+ nr_bios++;
+
+ return nr_bios;
+}
+
/*
* Request issue related functions.
*/
struct ceph_osd_request *req);
extern void ceph_osdc_sync(struct ceph_osd_client *osdc);
+extern void ceph_osdc_flush_notifies(struct ceph_osd_client *osdc);
+
extern int ceph_osdc_readpages(struct ceph_osd_client *osdc,
struct ceph_vino vino,
struct ceph_file_layout *layout,
#define __visible __attribute__((externally_visible))
#endif
+/*
+ * GCC 'asm goto' miscompiles certain code sequences:
+ *
+ * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670
+ *
+ * Work it around via a compiler barrier quirk suggested by Jakub Jelinek.
+ * Fixed in GCC 4.8.2 and later versions.
+ *
+ * (asm goto is automatically volatile - the naming reflects this.)
+ */
+#if GCC_VERSION <= 40801
+# define asm_volatile_goto(x...) do { asm goto(x); asm (""); } while (0)
+#else
+# define asm_volatile_goto(x...) do { asm goto(x); } while (0)
+#endif
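
As an illustration of the construct being quirked, a static-key style user written against the new macro; the asm body and section name here are placeholders, not the real arch implementation:

	static __always_inline bool my_static_branch(struct static_key *key)
	{
		asm_volatile_goto("1: nop\n\t"
				  ".pushsection __my_jump_table, \"aw\"\n\t"
				  ".long 1b, %l[l_yes], %c0\n\t"
				  ".popsection\n\t"
				  : : "i" (key) : : l_yes);
		return false;
	l_yes:
		return true;
	}
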
#ifdef CONFIG_ARCH_USE_BUILTIN_BSWAP
#if GCC_VERSION >= 40400
union map_info *dm_get_mapinfo(struct bio *bio);
union map_info *dm_get_rq_mapinfo(struct request *rq);
+struct queue_limits *dm_get_queue_limits(struct mapped_device *md);
+
/*
* Geometry functions.
*/
int dm_get_geometry(struct mapped_device *md, struct hd_geometry *geo);
int dm_set_geometry(struct mapped_device *md, struct hd_geometry *geo);
-
/*-----------------------------------------------------------------
* Functions for manipulating device-mapper tables.
*---------------------------------------------------------------*/
#include <linux/atomic.h>
#include <linux/compat.h>
+#include <linux/workqueue.h>
#include <uapi/linux/filter.h>
#ifdef CONFIG_COMPAT
{
atomic_t refcnt;
unsigned int len; /* Number of filter blocks */
+ struct rcu_head rcu;
unsigned int (*bpf_func)(const struct sk_buff *skb,
const struct sock_filter *filter);
- struct rcu_head rcu;
- struct sock_filter insns[0];
+ union {
+ struct sock_filter insns[0];
+ struct work_struct work;
+ };
};
-static inline unsigned int sk_filter_len(const struct sk_filter *fp)
+static inline unsigned int sk_filter_size(unsigned int proglen)
{
- return fp->len * sizeof(struct sock_filter) + sizeof(*fp);
+ return max(sizeof(struct sk_filter),
+ offsetof(struct sk_filter, insns[proglen]));
}
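
A minimal sketch of how an allocation is expected to be sized with the new helper, assuming fprog is a struct sock_fprog already copied into kernel memory; the union with struct work_struct is why the allocation must never fall below sizeof(struct sk_filter):

	struct sk_filter *fp;
	unsigned int fsize = fprog->len * sizeof(struct sock_filter);

	fp = kmalloc(sk_filter_size(fprog->len), GFP_KERNEL);
	if (!fp)
		return -ENOMEM;
	fp->len = fprog->len;
	memcpy(fp->insns, fprog->filter, fsize);
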
extern int sk_filter(struct sock *sk, struct sk_buff *skb);
}
#define SK_RUN_FILTER(FILTER, SKB) (*FILTER->bpf_func)(SKB, FILTER->insns)
#else
+#include <linux/slab.h>
static inline void bpf_jit_compile(struct sk_filter *fp)
{
}
static inline void bpf_jit_free(struct sk_filter *fp)
{
+ kfree(fp);
}
#define SK_RUN_FILTER(FILTER, SKB) sk_run_filter(SKB, FILTER->insns)
#endif
struct hid_device *hid_allocate_device(void);
struct hid_report *hid_register_report(struct hid_device *device, unsigned type, unsigned id);
int hid_parse_report(struct hid_device *hid, __u8 *start, unsigned size);
+struct hid_report *hid_validate_values(struct hid_device *hid,
+ unsigned int type, unsigned int id,
+ unsigned int field_index,
+ unsigned int report_counts);
int hid_open_report(struct hid_device *device);
int hid_check_keys_pressed(struct hid_device *hid);
int hid_connect(struct hid_device *hid, unsigned int connect_mask);
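
A hedged example of how a force-feedback driver might use the new helper during probe to reject devices whose output report lacks the expected number of values; the report id, field index and count are illustrative:

	/* Require output report 0, field 0, with at least 7 values. */
	if (!hid_validate_values(hdev, HID_OUTPUT_REPORT, 0, 0, 7))
		return -ENODEV;
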
/*
* Framework version for util services.
*/
+#define UTIL_FW_MINOR 0
+
+#define UTIL_WS2K8_FW_MAJOR 1
+#define UTIL_WS2K8_FW_VERSION (UTIL_WS2K8_FW_MAJOR << 16 | UTIL_FW_MINOR)
#define UTIL_FW_MAJOR 3
-#define UTIL_FW_MINOR 0
-#define UTIL_FW_MAJOR_MINOR (UTIL_FW_MAJOR << 16 | UTIL_FW_MINOR)
+#define UTIL_FW_VERSION (UTIL_FW_MAJOR << 16 | UTIL_FW_MINOR)
/*
#define DMAR_IQT_REG 0x88 /* Invalidation queue tail register */
#define DMAR_IQ_SHIFT 4 /* Invalidation queue head/tail shift */
#define DMAR_IQA_REG 0x90 /* Invalidation queue addr register */
-#define DMAR_ICS_REG 0x98 /* Invalidation complete status register */
+#define DMAR_ICS_REG 0x9c /* Invalidation complete status register */
#define DMAR_IRTA_REG 0xb8 /* Interrupt remapping table addr register */
#define OFFSET_STRIDE (9)
int sem_ctls[4];
int used_sems;
- int msg_ctlmax;
- int msg_ctlmnb;
- int msg_ctlmni;
+ unsigned int msg_ctlmax;
+ unsigned int msg_ctlmnb;
+ unsigned int msg_ctlmni;
atomic_t msg_bytes;
atomic_t msg_hdrs;
int auto_msgmni;
return buf;
}
+extern const char hex_asc_upper[];
+#define hex_asc_upper_lo(x) hex_asc_upper[((x) & 0x0f)]
+#define hex_asc_upper_hi(x) hex_asc_upper[((x) & 0xf0) >> 4]
+
+static inline char *hex_byte_pack_upper(char *buf, u8 byte)
+{
+ *buf++ = hex_asc_upper_hi(byte);
+ *buf++ = hex_asc_upper_lo(byte);
+ return buf;
+}
+
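
A small usage sketch, mirroring the existing hex_byte_pack() pattern; format_id_upper() is a hypothetical helper:

	static void format_id_upper(const u8 *id, size_t len, char *buf)
	{
		size_t i;

		for (i = 0; i < len; i++)
			buf = hex_byte_pack_upper(buf, id[i]);
		*buf = '\0';
	}
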
static inline char * __deprecated pack_hex_byte(char *buf, u8 byte)
{
return hex_byte_pack(buf, byte);
struct page *gfn_to_page(struct kvm *kvm, gfn_t gfn);
unsigned long gfn_to_hva(struct kvm *kvm, gfn_t gfn);
+unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable);
unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
void kvm_release_page_clean(struct page *page);
void kvm_release_page_dirty(struct page *page);
unsigned int generation;
};
-enum mem_cgroup_filter_t {
- VISIT, /* visit current node */
- SKIP, /* skip the current node and continue traversal */
- SKIP_TREE, /* skip the whole subtree and continue traversal */
-};
-
-/*
- * mem_cgroup_filter_t predicate might instruct mem_cgroup_iter_cond how to
- * iterate through the hierarchy tree. Each tree element is checked by the
- * predicate before it is returned by the iterator. If a filter returns
- * SKIP or SKIP_TREE then the iterator code continues traversal (with the
- * next node down the hierarchy or the next node that doesn't belong under the
- * memcg's subtree).
- */
-typedef enum mem_cgroup_filter_t
-(*mem_cgroup_iter_filter)(struct mem_cgroup *memcg, struct mem_cgroup *root);
-
#ifdef CONFIG_MEMCG
/*
* All "charge" functions with gfp_mask should use GFP_KERNEL or
extern void mem_cgroup_end_migration(struct mem_cgroup *memcg,
struct page *oldpage, struct page *newpage, bool migration_ok);
-struct mem_cgroup *mem_cgroup_iter_cond(struct mem_cgroup *root,
- struct mem_cgroup *prev,
- struct mem_cgroup_reclaim_cookie *reclaim,
- mem_cgroup_iter_filter cond);
-
-static inline struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
- struct mem_cgroup *prev,
- struct mem_cgroup_reclaim_cookie *reclaim)
-{
- return mem_cgroup_iter_cond(root, prev, reclaim, NULL);
-}
-
+struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *,
+ struct mem_cgroup *,
+ struct mem_cgroup_reclaim_cookie *);
void mem_cgroup_iter_break(struct mem_cgroup *, struct mem_cgroup *);
/*
extern void mem_cgroup_replace_page_cache(struct page *oldpage,
struct page *newpage);
-/**
- * mem_cgroup_toggle_oom - toggle the memcg OOM killer for the current task
- * @new: true to enable, false to disable
- *
- * Toggle whether a failed memcg charge should invoke the OOM killer
- * or just return -ENOMEM. Returns the previous toggle state.
- *
- * NOTE: Any path that enables the OOM killer before charging must
- * call mem_cgroup_oom_synchronize() afterward to finalize the
- * OOM handling and clean up.
- */
-static inline bool mem_cgroup_toggle_oom(bool new)
+static inline void mem_cgroup_oom_enable(void)
{
- bool old;
-
- old = current->memcg_oom.may_oom;
- current->memcg_oom.may_oom = new;
-
- return old;
+ WARN_ON(current->memcg_oom.may_oom);
+ current->memcg_oom.may_oom = 1;
}
-static inline void mem_cgroup_enable_oom(void)
+static inline void mem_cgroup_oom_disable(void)
{
- bool old = mem_cgroup_toggle_oom(true);
-
- WARN_ON(old == true);
-}
-
-static inline void mem_cgroup_disable_oom(void)
-{
- bool old = mem_cgroup_toggle_oom(false);
-
- WARN_ON(old == false);
+ WARN_ON(!current->memcg_oom.may_oom);
+ current->memcg_oom.may_oom = 0;
}
static inline bool task_in_memcg_oom(struct task_struct *p)
{
- return p->memcg_oom.in_memcg_oom;
+ return p->memcg_oom.memcg;
}
-bool mem_cgroup_oom_synchronize(void);
+bool mem_cgroup_oom_synchronize(bool wait);
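
The new enable/disable pair is meant to bracket a userspace fault; a hedged sketch of the expected call pattern in an arch fault handler (the FAULT_FLAG_USER gating is assumed from the surrounding series and simplified here):

	if (flags & FAULT_FLAG_USER)
		mem_cgroup_oom_enable();

	fault = handle_mm_fault(mm, vma, address, flags);

	if (flags & FAULT_FLAG_USER) {
		mem_cgroup_oom_disable();
		/* The fault is over; finish any pending memcg OOM handling. */
		if (task_in_memcg_oom(current) && !(fault & VM_FAULT_OOM))
			mem_cgroup_oom_synchronize(false);
	}
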
#ifdef CONFIG_MEMCG_SWAP
extern int do_swap_account;
mem_cgroup_update_page_stat(page, idx, -1);
}
-enum mem_cgroup_filter_t
-mem_cgroup_soft_reclaim_eligible(struct mem_cgroup *memcg,
- struct mem_cgroup *root);
+unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order,
+ gfp_t gfp_mask,
+ unsigned long *total_scanned);
void __mem_cgroup_count_vm_event(struct mm_struct *mm, enum vm_event_item idx);
static inline void mem_cgroup_count_vm_event(struct mm_struct *mm,
struct page *oldpage, struct page *newpage, bool migration_ok)
{
}
-static inline struct mem_cgroup *
-mem_cgroup_iter_cond(struct mem_cgroup *root,
- struct mem_cgroup *prev,
- struct mem_cgroup_reclaim_cookie *reclaim,
- mem_cgroup_iter_filter cond)
-{
- /* first call must return non-NULL, second return NULL */
- return (struct mem_cgroup *)(unsigned long)!prev;
-}
static inline struct mem_cgroup *
mem_cgroup_iter(struct mem_cgroup *root,
{
}
-static inline bool mem_cgroup_toggle_oom(bool new)
+static inline void mem_cgroup_oom_enable(void)
{
- return false;
}
-static inline void mem_cgroup_enable_oom(void)
-{
-}
-
-static inline void mem_cgroup_disable_oom(void)
+static inline void mem_cgroup_oom_disable(void)
{
}
return false;
}
-static inline bool mem_cgroup_oom_synchronize(void)
+static inline bool mem_cgroup_oom_synchronize(bool wait)
{
return false;
}
}
static inline
-enum mem_cgroup_filter_t
-mem_cgroup_soft_reclaim_eligible(struct mem_cgroup *memcg,
- struct mem_cgroup *root)
+unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order,
+ gfp_t gfp_mask,
+ unsigned long *total_scanned)
{
- return VISIT;
+ return 0;
}
static inline void mem_cgroup_split_huge_fixup(struct page *head)
unsigned int mode, unsigned int channel,
u8 ato, bool atox, unsigned int *sample);
+#define MC13783_AUDIO_RX0 36
+#define MC13783_AUDIO_RX1 37
+#define MC13783_AUDIO_TX 38
+#define MC13783_SSI_NETWORK 39
+#define MC13783_AUDIO_CODEC 40
+#define MC13783_AUDIO_DAC 41
+
#define MC13XXX_IRQ_ADCDONE 0
#define MC13XXX_IRQ_ADCBISDONE 1
#define MC13XXX_IRQ_TS 2
#define MAPPER_CTRL_MINOR 236
#define LOOP_CTRL_MINOR 237
#define VHOST_NET_MINOR 238
+#define UHID_MINOR 239
#define MISC_DYNAMIC_MINOR 255
struct device;
MLX5_DEV_CAP_FLAG_TLP_HINTS = 1LL << 39,
MLX5_DEV_CAP_FLAG_SIG_HAND_OVER = 1LL << 40,
MLX5_DEV_CAP_FLAG_DCT = 1LL << 41,
- MLX5_DEV_CAP_FLAG_CMDIF_CSUM = 1LL << 46,
+ MLX5_DEV_CAP_FLAG_CMDIF_CSUM = 3LL << 46,
};
enum {
struct health_buffer health;
__be32 rsvd2[884];
__be32 health_counter;
- __be32 rsvd3[1023];
+ __be32 rsvd3[1019];
__be64 ieee1588_clk;
__be32 ieee1588_clk_type;
__be32 clr_intx;
};
enum {
- MLX5_MAX_EQ_NAME = 20
+ MLX5_MAX_EQ_NAME = 32
};
enum {
enum {
MLX5_PROF_MASK_QP_SIZE = (u64)1 << 0,
- MLX5_PROF_MASK_CMDIF_CSUM = (u64)1 << 1,
- MLX5_PROF_MASK_MR_CACHE = (u64)1 << 2,
+ MLX5_PROF_MASK_MR_CACHE = (u64)1 << 1,
};
enum {
struct mlx5_profile {
u64 mask;
u32 log_max_qp;
- int cmdif_csum;
struct {
int size;
int limit;
#include <linux/spinlock_types.h>
#include <linux/linkage.h>
#include <linux/lockdep.h>
-
#include <linux/atomic.h>
+#include <asm/processor.h>
/*
* Simple, straightforward mutexes with strict semantics:
extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock);
-#ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX
-#define arch_mutex_cpu_relax() cpu_relax()
+#ifndef arch_mutex_cpu_relax
+# define arch_mutex_cpu_relax() cpu_relax()
#endif
#endif
* multiple net devices on single physical port.
*
* void (*ndo_add_vxlan_port)(struct net_device *dev,
- * sa_family_t sa_family, __u16 port);
+ * sa_family_t sa_family, __be16 port);
 * Called by vxlan to notify a driver about the UDP port and socket
 * address family that vxlan is listening to. It is called only when
* a new port starts listening. The operation is protected by the
* vxlan_net->sock_lock.
*
* void (*ndo_del_vxlan_port)(struct net_device *dev,
- * sa_family_t sa_family, __u16 port);
+ * sa_family_t sa_family, __be16 port);
* Called by vxlan to notify the driver about a UDP port and socket
* address family that vxlan is not listening to anymore. The operation
* is protected by the vxlan_net->sock_lock.
struct netdev_phys_port_id *ppid);
void (*ndo_add_vxlan_port)(struct net_device *dev,
sa_family_t sa_family,
- __u16 port);
+ __be16 port);
void (*ndo_del_vxlan_port)(struct net_device *dev,
sa_family_t sa_family,
- __u16 port);
+ __be16 port);
};
/*
}
#ifdef CONFIG_XPS
-extern int netif_set_xps_queue(struct net_device *dev, struct cpumask *mask,
+extern int netif_set_xps_queue(struct net_device *dev,
+ const struct cpumask *mask,
u16 index);
#else
static inline int netif_set_xps_queue(struct net_device *dev,
- struct cpumask *mask,
+ const struct cpumask *mask,
u16 index)
{
return 0;
/* Match elements marked with nomatch */
static inline bool
-ip_set_enomatch(int ret, u32 flags, enum ipset_adt adt)
+ip_set_enomatch(int ret, u32 flags, enum ipset_adt adt, struct ip_set *set)
{
return adt == IPSET_TEST &&
- ret == -ENOTEMPTY && ((flags >> 16) & IPSET_FLAG_NOMATCH);
+ (set->type->features & IPSET_TYPE_NOMATCH) &&
+ ((flags >> 16) & IPSET_FLAG_NOMATCH) &&
+ (ret > 0 || ret == -ENOTEMPTY);
}
/* Check the NLA_F_NET_BYTEORDER flag */
struct inode * (*open_context) (struct inode *dir,
struct nfs_open_context *ctx,
int open_flags,
- struct iattr *iattr);
+ struct iattr *iattr,
+ int *);
int (*have_delegation)(struct inode *, fmode_t);
int (*return_delegation)(struct inode *);
struct nfs_client *(*alloc_client) (const struct nfs_client_initdata *);
#ifndef __OF_IRQ_H
#define __OF_IRQ_H
-#if defined(CONFIG_OF)
-struct of_irq;
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/irq.h>
#include <linux/ioport.h>
#include <linux/of.h>
-/*
- * irq_of_parse_and_map() is used by all OF enabled platforms; but SPARC
- * implements it differently. However, the prototype is the same for all,
- * so declare it here regardless of the CONFIG_OF_IRQ setting.
- */
-extern unsigned int irq_of_parse_and_map(struct device_node *node, int index);
-
-#if defined(CONFIG_OF_IRQ)
/**
* of_irq - container for device_node/irq_specifier pair for an irq controller
* @controller: pointer to interrupt controller device tree node
extern int of_irq_count(struct device_node *dev);
extern int of_irq_to_resource_table(struct device_node *dev,
struct resource *res, int nr_irqs);
-extern struct device_node *of_irq_find_parent(struct device_node *child);
extern void of_irq_init(const struct of_device_id *matches);
-#endif /* CONFIG_OF_IRQ */
+#if defined(CONFIG_OF)
+/*
+ * irq_of_parse_and_map() is used by all OF enabled platforms; but SPARC
+ * implements it differently. However, the prototype is the same for all,
+ * so declare it here regardless of the CONFIG_OF_IRQ setting.
+ */
+extern unsigned int irq_of_parse_and_map(struct device_node *node, int index);
+extern struct device_node *of_irq_find_parent(struct device_node *child);
#else /* !CONFIG_OF */
static inline unsigned int irq_of_parse_and_map(struct device_node *dev,
+++ /dev/null
-#ifndef __OF_RESERVED_MEM_H
-#define __OF_RESERVED_MEM_H
-
-#ifdef CONFIG_OF_RESERVED_MEM
-void of_reserved_mem_device_init(struct device *dev);
-void of_reserved_mem_device_release(struct device *dev);
-void early_init_dt_scan_reserved_mem(void);
-#else
-static inline void of_reserved_mem_device_init(struct device *dev) { }
-static inline void of_reserved_mem_device_release(struct device *dev) { }
-static inline void early_init_dt_scan_reserved_mem(void) { }
-#endif
-
-#endif /* __OF_RESERVED_MEM_H */
#endif
#ifndef this_cpu_sub
-# define this_cpu_sub(pcp, val) this_cpu_add((pcp), -(val))
+# define this_cpu_sub(pcp, val) this_cpu_add((pcp), -(typeof(pcp))(val))
#endif
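
The typeof() cast matters whenever val has an unsigned type different from the per-cpu variable. As a worked example (names illustrative):

	/* my_counter is a per-cpu 'long'; nbytes is an unsigned int equal to 4. */
	this_cpu_sub(my_counter, nbytes);

Without the cast this expands to this_cpu_add(my_counter, -(nbytes)); -(nbytes) is the unsigned int value 0xfffffffc, which widens to +4294967292 when converted to long, so the counter grows instead of shrinking. Casting to typeof(pcp) first yields -(long)4 == -4.
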
#ifndef this_cpu_inc
# define this_cpu_add_return(pcp, val) __pcpu_size_call_return2(this_cpu_add_return_, pcp, val)
#endif
-#define this_cpu_sub_return(pcp, val) this_cpu_add_return(pcp, -(val))
+#define this_cpu_sub_return(pcp, val) this_cpu_add_return(pcp, -(typeof(pcp))(val))
#define this_cpu_inc_return(pcp) this_cpu_add_return(pcp, 1)
#define this_cpu_dec_return(pcp) this_cpu_add_return(pcp, -1)
#endif
#ifndef __this_cpu_sub
-# define __this_cpu_sub(pcp, val) __this_cpu_add((pcp), -(val))
+# define __this_cpu_sub(pcp, val) __this_cpu_add((pcp), -(typeof(pcp))(val))
#endif
#ifndef __this_cpu_inc
__pcpu_size_call_return2(__this_cpu_add_return_, pcp, val)
#endif
-#define __this_cpu_sub_return(pcp, val) __this_cpu_add_return(pcp, -(val))
+#define __this_cpu_sub_return(pcp, val) __this_cpu_add_return(pcp, -(typeof(pcp))(val))
#define __this_cpu_inc_return(pcp) __this_cpu_add_return(pcp, 1)
#define __this_cpu_dec_return(pcp) __this_cpu_add_return(pcp, -1)
*/
struct perf_event {
#ifdef CONFIG_PERF_EVENTS
- struct list_head group_entry;
+ /*
+ * entry onto perf_event_context::event_list;
+ * modifications require ctx->lock
+ * RCU safe iterations.
+ */
struct list_head event_entry;
+
+ /*
+ * XXX: group_entry and sibling_list should be mutually exclusive;
+ * either you're a sibling on a group, or you're the group leader.
+ * Rework the code to always use the same list element.
+ *
+ * Locked for modification by both ctx->mutex and ctx->lock; holding
+ * either suffices for read.
+ */
+ struct list_head group_entry;
struct list_head sibling_list;
+
+ /*
+ * We need storage to track the entries in perf_pmu_migrate_context; we
+ * cannot use the event_entry because of RCU and we want to keep the
+ * group intact which avoids us using the other two entries.
+ */
+ struct list_head migrate_entry;
+
struct hlist_node hlist_entry;
int nr_siblings;
int group_flags;
u8 version;
u8 txnumevt;
u8 rxnumevt;
+ int tx_dma_channel;
+ int rx_dma_channel;
};
enum {
extern void get_random_bytes(void *buf, int nbytes);
extern void get_random_bytes_arch(void *buf, int nbytes);
void generate_random_uuid(unsigned char uuid_out[16]);
+extern int random_int_secret_init(void);
#ifndef MODULE
extern const struct file_operations random_fops, urandom_fops;
* @reg: Offset of the register within the regmap bank
* @lsb: lsb of the register field.
 * @msb: msb of the register field.
+ * @id_size: number of ports, if the field is replicated across several ports
+ * @id_offset: register address offset between successive ports
*/
struct reg_field {
unsigned int reg;
unsigned int lsb;
unsigned int msb;
+ unsigned int id_size;
+ unsigned int id_offset;
};
#define REG_FIELD(_reg, _lsb, _msb) { \
int regmap_field_read(struct regmap_field *field, unsigned int *val);
int regmap_field_write(struct regmap_field *field, unsigned int val);
+int regmap_field_update_bits(struct regmap_field *field,
+ unsigned int mask, unsigned int val);
+
+int regmap_fields_write(struct regmap_field *field, unsigned int id,
+ unsigned int val);
+int regmap_fields_read(struct regmap_field *field, unsigned int id,
+ unsigned int *val);
+int regmap_fields_update_bits(struct regmap_field *field, unsigned int id,
+ unsigned int mask, unsigned int val);
/**
* Description of an IRQ for the generic regmap irq_chip.
};
/**
+ * struct regulator_linear_range - specify linear voltage ranges
+ *
 * Specify a range of voltages for regulator_map_linear_range() and
* regulator_list_linear_range().
*
} memcg_batch;
unsigned int memcg_kmem_skip_account;
struct memcg_oom_info {
+ struct mem_cgroup *memcg;
+ gfp_t gfp_mask;
+ int order;
unsigned int may_oom:1;
- unsigned int in_memcg_oom:1;
- unsigned int oom_locked:1;
- int wakeups;
- struct mem_cgroup *wait_on_memcg;
} memcg_oom;
#endif
#ifdef CONFIG_UPROBES
* headers if needed
*/
__u8 encapsulation:1;
- /* 7/9 bit hole (depending on ndisc_nodetype presence) */
+ /* 6/8 bit hole (depending on ndisc_nodetype presence) */
kmemcheck_bitfield_end(flags2);
#if defined CONFIG_NET_DMA || defined CONFIG_NET_RX_BUSY_POLL
static inline void kick_all_cpus_sync(void) { }
+static inline void __smp_call_function_single(int cpuid,
+ struct call_single_data *data, int wait)
+{
+ on_each_cpu(data->func, data->info, wait);
+}
+
#endif /* !SMP */
/*
#include <asm/timex.h>
+#ifndef random_get_entropy
+/*
+ * The random_get_entropy() function is used by the /dev/random driver
+ * in order to extract entropy via the relative unpredictability of
+ * when an interrupt takes place versus a high speed, fine-grained
+ * timing source or cycle counter. Since it is called on every
+ * single interrupt, it must have a very low cost/overhead.
+ *
+ * By default we use get_cycles() for this purpose, but individual
+ * architectures may override this in their asm/timex.h header file.
+ */
+#define random_get_entropy() get_cycles()
+#endif
+
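
For architectures where get_cycles() is unsuitable (for instance, where it always returns 0), the override lives in that architecture's asm/timex.h; a hedged sketch with a placeholder register read:

	/* In <asm/timex.h>; my_arch_read_cycles() is a placeholder. */
	#define random_get_entropy()	((unsigned long)my_arch_read_cycles())
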
/*
* SHIFT_PLL is used as a dampening factor to define how much we
* adjust the frequency correction for a given offset in PLL mode.
extern void hardpps(const struct timespec *, const struct timespec *);
int read_current_timer(unsigned long *timer_val);
+void ntp_notify_cmos_timer(void);
/* The clock frequency of the i8253/i8254 PIT */
#define PIT_TICK_RATE 1193182ul
unsigned int needs_reset:1;
};
-#if IS_ENABLED(CONFIG_NOP_USB_XCEIV)
+#if defined(CONFIG_NOP_USB_XCEIV) || (defined(CONFIG_NOP_USB_XCEIV_MODULE) && defined(MODULE))
/* sometimes transceivers are accessed only through e.g. ULPI */
extern void usb_nop_xceiv_register(void);
extern void usb_nop_xceiv_unregister(void);
struct usb_host_endpoint *status;
unsigned maxpacket;
struct timer_list delay;
+ const char *padding_pkt;
/* protocol/interface state */
struct net_device *net;
US_FLAG(INITIAL_READ10, 0x00100000) \
/* Initial READ(10) (and others) must be retried */ \
US_FLAG(WRITE_CACHE, 0x00200000) \
- /* Write Cache status is not available */
+ /* Write Cache status is not available */ \
+ US_FLAG(NEEDS_CAP16, 0x00400000)
+ /* cannot handle READ_CAPACITY_10 */
#define US_FLAG(name, value) US_FL_##name = value ,
enum { US_DO_ALL_FLAGS };
 * out of the arbitration process (and it is then safe to take
 * interrupts at any time).
*/
-#if defined(CONFIG_VGA_ARB)
extern void vga_set_legacy_decoding(struct pci_dev *pdev,
unsigned int decodes);
-#else
-static inline void vga_set_legacy_decoding(struct pci_dev *pdev,
- unsigned int decodes)
-{
-}
-#endif
/**
* vga_get - acquire & locks VGA resources
struct yamdrv_ioctl_mcs {
int cmd;
- int bitrate;
+ unsigned int bitrate;
unsigned char bits[YAM_FPGA_SIZE];
};
int ipv6_chk_home_addr(struct net *net, const struct in6_addr *addr);
#endif
+bool ipv6_chk_custom_prefix(const struct in6_addr *addr,
+ const unsigned int prefix_len,
+ struct net_device *dev);
+
int ipv6_chk_prefix(const struct in6_addr *addr, struct net_device *dev);
struct inet6_ifaddr *ipv6_get_ifaddr(struct net *net,
enum {
HCI_SETUP,
HCI_AUTO_OFF,
+ HCI_RFKILLED,
HCI_MGMT,
HCI_PAIRABLE,
HCI_SERVICE_CACHE,
unsigned char err_offset = 0;
u8 opt_len = opt[1];
u8 opt_iter;
+ u8 tag_len;
if (opt_len < 8) {
err_offset = 1;
}
for (opt_iter = 6; opt_iter < opt_len;) {
- if (opt[opt_iter + 1] > (opt_len - opt_iter)) {
+ tag_len = opt[opt_iter + 1];
+ if ((tag_len == 0) || (opt[opt_iter + 1] > (opt_len - opt_iter))) {
err_offset = opt_iter + 1;
goto out;
}
- opt_iter += opt[opt_iter + 1];
+ opt_iter += tag_len;
}
out:
{
return dst_orig;
}
+
+static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst)
+{
+ return NULL;
+}
+
#else
extern struct dst_entry *xfrm_lookup(struct net *net, struct dst_entry *dst_orig,
const struct flowi *fl, struct sock *sk,
int flags);
+
+/* skb attached with this dst needs transformation if dst->xfrm is valid */
+static inline struct xfrm_state *dst_xfrm(const struct dst_entry *dst)
+{
+ return dst->xfrm;
+}
#endif
#endif /* _NET_DST_H */
extern void __ip_select_ident(struct iphdr *iph, struct dst_entry *dst, int more);
-static inline void ip_select_ident(struct iphdr *iph, struct dst_entry *dst, struct sock *sk)
+static inline void ip_select_ident(struct sk_buff *skb, struct dst_entry *dst, struct sock *sk)
{
- if (iph->frag_off & htons(IP_DF)) {
+ struct iphdr *iph = ip_hdr(skb);
+
+ if ((iph->frag_off & htons(IP_DF)) && !skb->local_df) {
/* This is only to work around buggy Windows95/2000
* VJ compression implementations. If the ID field
* does not change, they drop every other packet in
__ip_select_ident(iph, dst, 0);
}
-static inline void ip_select_ident_more(struct iphdr *iph, struct dst_entry *dst, struct sock *sk, int more)
+static inline void ip_select_ident_more(struct sk_buff *skb, struct dst_entry *dst, struct sock *sk, int more)
{
- if (iph->frag_off & htons(IP_DF)) {
+ struct iphdr *iph = ip_hdr(skb);
+
+ if ((iph->frag_off & htons(IP_DF)) && !skb->local_df) {
if (sk && inet_sk(sk)->inet_daddr) {
iph->id = htons(inet_sk(sk)->inet_id);
inet_sk(sk)->inet_id += 1 + more;
skb_dst(skb)->dev->mtu : dst_mtu(skb_dst(skb));
}
-static inline struct in6_addr *rt6_nexthop(struct rt6_info *rt, struct in6_addr *dest)
+static inline struct in6_addr *rt6_nexthop(struct rt6_info *rt)
{
- if (rt->rt6i_flags & RTF_GATEWAY)
- return &rt->rt6i_gateway;
- return dest;
+ return &rt->rt6i_gateway;
}
#endif
struct rcu_head rcu_head;
};
-/* In grace period after removing */
-#define IP_VS_DEST_STATE_REMOVING 0x01
/*
* The real server destination forwarding entry
* with ip address, port number, and so on.
atomic_t refcnt; /* reference counter */
struct ip_vs_stats stats; /* statistics */
- unsigned long state; /* state flags */
+ unsigned long idle_start; /* start time, jiffies */
/* connection counters and thresholds */
atomic_t activeconns; /* active connections */
struct ip_vs_dest_dst __rcu *dest_dst; /* cached dst info */
/* for virtual service */
- struct ip_vs_service *svc; /* service it belongs to */
+ struct ip_vs_service __rcu *svc; /* service it belongs to */
__u16 protocol; /* which protocol (TCP/UDP) */
__be16 vport; /* virtual port number */
union nf_inet_addr vaddr; /* virtual IP address */
__u32 vfwmark; /* firewall mark of service */
struct list_head t_list; /* in dest_trash */
- struct rcu_head rcu_head;
unsigned int in_rs_table:1; /* we are in rs_table */
};
/* CONFIG_IP_VS_NFCT */
#endif
-static inline unsigned int
+static inline int
ip_vs_dest_conn_overhead(struct ip_vs_dest *dest)
{
/*
/* Basic interface to register ieee802154 device */
struct ieee802154_dev *
-ieee802154_alloc_device(size_t priv_data_lex, struct ieee802154_ops *ops);
+ieee802154_alloc_device(size_t priv_data_len, struct ieee802154_ops *ops);
void ieee802154_free_device(struct ieee802154_dev *dev);
int ieee802154_register_device(struct ieee802154_dev *dev);
void ieee802154_unregister_device(struct ieee802154_dev *dev);
struct mrp_application *app;
struct net_device *dev;
struct timer_list join_timer;
+ struct timer_list periodic_timer;
spinlock_t lock;
struct sk_buff_head queue;
struct hlist_head *dev_index_head;
unsigned int dev_base_seq; /* protected by rtnl_mutex */
int ifindex;
+ unsigned int dev_unreg_count;
/* core fib_rules */
struct list_head rules_ops;
static inline void nf_ct_ext_free(struct nf_conn *ct)
{
if (ct->ext)
- kfree(ct->ext);
+ kfree_rcu(ct->ext, rcu);
}
/* Add this type, returns pointer to data or NULL. */
struct tcphdr;
struct xt_synproxy_info;
-extern void synproxy_parse_options(const struct sk_buff *skb, unsigned int doff,
+extern bool synproxy_parse_options(const struct sk_buff *skb, unsigned int doff,
const struct tcphdr *th,
struct synproxy_options *opts);
extern unsigned int synproxy_options_size(const struct synproxy_options *opts);
#include <linux/types.h>
-extern void net_secret_init(void);
extern __u32 secure_ip_id(__be32 daddr);
extern __u32 secure_ipv6_id(const __be32 daddr[4]);
extern u32 secure_ipv4_port_ephemeral(__be32 saddr, __be32 daddr, __be16 dport);
void (*sk_destruct)(struct sock *sk);
};
+#define __sk_user_data(sk) ((*((void __rcu **)&(sk)->sk_user_data)))
+
+#define rcu_dereference_sk_user_data(sk) rcu_dereference(__sk_user_data((sk)))
+#define rcu_assign_sk_user_data(sk, ptr) rcu_assign_pointer(__sk_user_data((sk)), ptr)
+
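/*
 * Illustrative sketch, not part of the patch: how a protocol module might
 * use the accessors above to attach per-socket state under RCU.  The
 * struct example_state and the two helpers are invented names.
 */
struct example_state {
	int example_field;
};

static struct example_state *example_lookup(struct sock *sk)
{
	/* read side: caller must be inside rcu_read_lock()/rcu_read_unlock() */
	return rcu_dereference_sk_user_data(sk);
}

static void example_attach(struct sock *sk, struct example_state *st)
{
	/* write side: publishes the pointer with the required memory barrier */
	rcu_assign_sk_user_data(sk, st);
}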
/*
* SK_CAN_REUSE and SK_NO_REUSE on a socket mean that the socket is OK
* or not whether his port will be reused by someone else. SK_FORCE_REUSE
static inline void sk_filter_uncharge(struct sock *sk, struct sk_filter *fp)
{
- unsigned int size = sk_filter_len(fp);
-
- atomic_sub(size, &sk->sk_omem_alloc);
+ atomic_sub(sk_filter_size(fp->len), &sk->sk_omem_alloc);
sk_filter_release(fp);
}
static inline void sk_filter_charge(struct sock *sk, struct sk_filter *fp)
{
atomic_inc(&fp->refcnt);
- atomic_add(sk_filter_len(fp), &sk->sk_omem_alloc);
+ atomic_add(sk_filter_size(fp->len), &sk->sk_omem_alloc);
}
/*
/* Charge Pump Freq. Check datasheet Pg73 */
unsigned int chgfreq;
+ /* Reset GPIO */
+ unsigned int reset_gpio;
};
#endif /* __CS42L52_H */
--- /dev/null
+/*
+ * linux/sound/cs42l73.h -- Platform data for CS42L73
+ *
+ * Copyright (c) 2012 Cirrus Logic Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __CS42L73_H
+#define __CS42L73_H
+
+struct cs42l73_platform_data {
+ /* RST GPIO */
+ unsigned int reset_gpio;
+ unsigned int chgfreq;
+ int jack_detection;
+ unsigned int mclk_freq;
+};
+
+#endif /* __CS42L73_H */
* @slave_id: Slave requester id for the DMA channel.
* @filter_data: Custom DMA channel filter data, this will usually be used when
* requesting the DMA channel.
+ * @chan_name: Custom channel name to use when requesting DMA channel.
+ * @fifo_size: FIFO size of the DAI controller in bytes
*/
struct snd_dmaengine_dai_dma_data {
dma_addr_t addr;
u32 maxburst;
unsigned int slave_id;
void *filter_data;
+ const char *chan_name;
+ unsigned int fifo_size;
};
void snd_dmaengine_pcm_set_config_from_dai_data(
* playback.
*/
#define SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX BIT(3)
+/*
+ * The PCM streams have custom channel names specified.
+ */
+#define SND_DMAENGINE_PCM_FLAG_CUSTOM_CHANNEL_NAME BIT(4)
/**
* struct snd_dmaengine_pcm_config - Configuration data for dmaengine based PCM
#define RSND_SSI_CLK_PIN_SHARE (1 << 31)
#define RSND_SSI_CLK_FROM_ADG (1 << 30) /* clock parent is master */
#define RSND_SSI_SYNC (1 << 29) /* SSI34_sync etc */
-#define RSND_SSI_DEPENDENT (1 << 28) /* SSI needs SRU/SCU */
#define RSND_SSI_PLAY (1 << 24)
*
* A : generation
*/
+#define RSND_GEN_MASK (0xF << 0)
#define RSND_GEN1 (1 << 0) /* fixme */
#define RSND_GEN2 (2 << 0) /* fixme */
int snd_soc_dai_set_pll(struct snd_soc_dai *dai,
int pll_id, int source, unsigned int freq_in, unsigned int freq_out);
+int snd_soc_dai_set_bclk_ratio(struct snd_soc_dai *dai, unsigned int ratio);
+
/* Digital Audio interface formatting */
int snd_soc_dai_set_fmt(struct snd_soc_dai *dai, unsigned int fmt);
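/*
 * Illustrative sketch, not part of the patch: a machine driver's hw_params
 * callback could use the new snd_soc_dai_set_bclk_ratio() helper to request
 * a fixed bit-clock ratio.  The 64fs value is only an example.
 */
static int example_hw_params(struct snd_pcm_substream *substream,
			     struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;

	return snd_soc_dai_set_bclk_ratio(rtd->cpu_dai, 64);
}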
int (*set_pll)(struct snd_soc_dai *dai, int pll_id, int source,
unsigned int freq_in, unsigned int freq_out);
int (*set_clkdiv)(struct snd_soc_dai *dai, int div_id, int div);
+ int (*set_bclk_ratio)(struct snd_soc_dai *dai, unsigned int ratio);
/*
* DAI format configuration
struct snd_soc_dai *);
int (*prepare)(struct snd_pcm_substream *,
struct snd_soc_dai *);
+ /*
+ * NOTE: Commands passed to the trigger function are not necessarily
+ * compatible with the current state of the dai. For example this
+ * sequence of commands is possible: START STOP STOP.
+ * So do not unconditionally use refcounting functions in the trigger
+ * function, e.g. clk_enable/disable.
+ */
int (*trigger)(struct snd_pcm_substream *, int,
struct snd_soc_dai *);
int (*bespoke_trigger)(struct snd_pcm_substream *, int,
dai->capture_dma_data = data;
}
+static inline void snd_soc_dai_init_dma_data(struct snd_soc_dai *dai,
+ void *playback, void *capture)
+{
+ dai->playback_dma_data = playback;
+ dai->capture_dma_data = capture;
+}
+
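/*
 * Illustrative sketch, not part of the patch: with the new helper a DAI
 * driver probe can set both directions in one call.  struct example_priv
 * and example_dai_probe() are invented for the example.
 */
struct example_priv {
	struct snd_dmaengine_dai_dma_data playback_dma_data;
	struct snd_dmaengine_dai_dma_data capture_dma_data;
};

static int example_dai_probe(struct snd_soc_dai *dai)
{
	struct example_priv *priv = snd_soc_dai_get_drvdata(dai);

	snd_soc_dai_init_dma_data(dai, &priv->playback_dma_data,
				  &priv->capture_dma_data);
	return 0;
}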
static inline void snd_soc_dai_set_drvdata(struct snd_soc_dai *dai,
void *data)
{
#ifndef __LINUX_SND_SOC_H
#define __LINUX_SND_SOC_H
+#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/types.h>
#include <linux/notifier.h>
struct snd_soc_jack;
struct snd_soc_jack_zone;
struct snd_soc_jack_pin;
-struct snd_soc_cache_ops;
#include <sound/soc-dapm.h>
#include <sound/soc-dpcm.h>
SND_SOC_REGMAP,
};
-enum snd_soc_compress_type {
- SND_SOC_FLAT_COMPRESSION = 1,
-};
-
enum snd_soc_pcm_subclass {
SND_SOC_PCM_CLASS_PCM = 0,
SND_SOC_PCM_CLASS_BE = 1,
unsigned int reg, unsigned int value);
int snd_soc_cache_read(struct snd_soc_codec *codec,
unsigned int reg, unsigned int *value);
-int snd_soc_default_volatile_register(struct snd_soc_codec *codec,
- unsigned int reg);
-int snd_soc_default_readable_register(struct snd_soc_codec *codec,
- unsigned int reg);
-int snd_soc_default_writable_register(struct snd_soc_codec *codec,
- unsigned int reg);
int snd_soc_platform_read(struct snd_soc_platform *platform,
unsigned int reg);
int snd_soc_platform_write(struct snd_soc_platform *platform,
struct snd_ctl_elem_value *ucontrol);
/**
- * struct snd_soc_reg_access - Describes whether a given register is
- * readable, writable or volatile.
- *
- * @reg: the register number
- * @read: whether this register is readable
- * @write: whether this register is writable
- * @vol: whether this register is volatile
- */
-struct snd_soc_reg_access {
- u16 reg;
- u16 read;
- u16 write;
- u16 vol;
-};
-
-/**
* struct snd_soc_jack_pin - Describes a pin to update based on jack detection
*
* @pin: name of the pin to update
int (*trigger)(struct snd_compr_stream *);
};
-/* SoC cache ops */
-struct snd_soc_cache_ops {
+/* component interface */
+struct snd_soc_component_driver {
const char *name;
- enum snd_soc_compress_type id;
- int (*init)(struct snd_soc_codec *codec);
- int (*exit)(struct snd_soc_codec *codec);
- int (*read)(struct snd_soc_codec *codec, unsigned int reg,
- unsigned int *value);
- int (*write)(struct snd_soc_codec *codec, unsigned int reg,
- unsigned int value);
- int (*sync)(struct snd_soc_codec *codec);
+
+ /* DT */
+ int (*of_xlate_dai_name)(struct snd_soc_component *component,
+ struct of_phandle_args *args,
+ const char **dai_name);
+};
+
+struct snd_soc_component {
+ const char *name;
+ int id;
+ struct device *dev;
+ struct list_head list;
+
+ struct snd_soc_dai_driver *dai_drv;
+ int num_dai;
+
+ const struct snd_soc_component_driver *driver;
};
/* SoC Audio Codec device */
struct list_head list;
struct list_head card_list;
int num_dai;
- enum snd_soc_compress_type compress_type;
- size_t reg_size; /* reg_cache_size * reg_word_size */
int (*volatile_register)(struct snd_soc_codec *, unsigned int);
int (*readable_register)(struct snd_soc_codec *, unsigned int);
int (*writable_register)(struct snd_soc_codec *, unsigned int);
unsigned int (*hw_read)(struct snd_soc_codec *, unsigned int);
unsigned int (*read)(struct snd_soc_codec *, unsigned int);
int (*write)(struct snd_soc_codec *, unsigned int, unsigned int);
- int (*bulk_write_raw)(struct snd_soc_codec *, unsigned int, const void *, size_t);
void *reg_cache;
- const void *reg_def_copy;
- const struct snd_soc_cache_ops *cache_ops;
struct mutex cache_rw_mutex;
int val_bytes;
+ /* component */
+ struct snd_soc_component component;
+
/* dapm */
struct snd_soc_dapm_context dapm;
unsigned int ignore_pmdown_time:1; /* pmdown_time is ignored at stop */
int (*remove)(struct snd_soc_codec *);
int (*suspend)(struct snd_soc_codec *);
int (*resume)(struct snd_soc_codec *);
+ struct snd_soc_component_driver component_driver;
/* Default control and setup, added after probe() is run */
const struct snd_kcontrol_new *controls;
short reg_cache_step;
short reg_word_size;
const void *reg_cache_default;
- short reg_access_size;
- const struct snd_soc_reg_access *reg_access_default;
- enum snd_soc_compress_type compress_type;
/* codec bias level */
int (*set_bias_level)(struct snd_soc_codec *,
#endif
};
-struct snd_soc_component_driver {
- const char *name;
-};
-
-struct snd_soc_component {
- const char *name;
- int id;
- int num_dai;
- struct device *dev;
- struct list_head list;
-
- const struct snd_soc_component_driver *driver;
-};
-
struct snd_soc_dai_link {
/* config - must be set by machine driver */
const char *name; /* Codec name */
* associated per device
*/
const char *name_prefix;
-
- /*
- * set this to the desired compression type if you want to
- * override the one supplied in codec->driver->compress_type
- */
- enum snd_soc_compress_type compress_type;
};
struct snd_soc_aux_dev {
unsigned int snd_soc_read(struct snd_soc_codec *codec, unsigned int reg);
unsigned int snd_soc_write(struct snd_soc_codec *codec,
unsigned int reg, unsigned int val);
-unsigned int snd_soc_bulk_write_raw(struct snd_soc_codec *codec,
- unsigned int reg, const void *data, size_t len);
/* device driver data */
const char *propname);
unsigned int snd_soc_of_parse_daifmt(struct device_node *np,
const char *prefix);
+int snd_soc_of_get_dai_name(struct device_node *of_node,
+ const char **dai_name);
#include <sound/soc-dai.h>
struct snd_soc_platform;
struct snd_soc_card;
struct snd_soc_dapm_widget;
+struct snd_soc_dapm_path;
/*
* Log register events
__field( unsigned int, nr_sector )
__field( dev_t, old_dev )
__field( sector_t, old_sector )
+ __field( unsigned int, nr_bios )
__array( char, rwbs, RWBS_LEN)
),
__entry->nr_sector = blk_rq_sectors(rq);
__entry->old_dev = dev;
__entry->old_sector = from;
+ __entry->nr_bios = blk_rq_count_bios(rq);
blk_fill_rwbs(__entry->rwbs, rq->cmd_flags, blk_rq_bytes(rq));
),
- TP_printk("%d,%d %s %llu + %u <- (%d,%d) %llu",
+ TP_printk("%d,%d %s %llu + %u <- (%d,%d) %llu %u",
MAJOR(__entry->dev), MINOR(__entry->dev), __entry->rwbs,
(unsigned long long)__entry->sector,
__entry->nr_sector,
MAJOR(__entry->old_dev), MINOR(__entry->old_dev),
- (unsigned long long)__entry->old_sector)
+ (unsigned long long)__entry->old_sector, __entry->nr_bios)
);
#endif /* _TRACE_BLOCK_H */
{ BTRFS_TREE_LOG_OBJECTID, "TREE_LOG" }, \
{ BTRFS_QUOTA_TREE_OBJECTID, "QUOTA_TREE" }, \
{ BTRFS_TREE_RELOC_OBJECTID, "TREE_RELOC" }, \
+ { BTRFS_UUID_TREE_OBJECTID, "UUID_RELOC" }, \
{ BTRFS_DATA_RELOC_TREE_OBJECTID, "DATA_RELOC_TREE" })
#define show_root_type(obj) \
),
TP_fast_assign(
- __entry->unpacked_lun = cmd->se_lun->unpacked_lun;
+ __entry->unpacked_lun = cmd->orig_fe_lun;
__entry->opcode = cmd->t_task_cdb[0];
__entry->data_length = cmd->data_length;
__entry->task_attribute = cmd->sam_task_attr;
),
TP_fast_assign(
- __entry->unpacked_lun = cmd->se_lun->unpacked_lun;
+ __entry->unpacked_lun = cmd->orig_fe_lun;
__entry->opcode = cmd->t_task_cdb[0];
__entry->data_length = cmd->data_length;
__entry->task_attribute = cmd->sam_task_attr;
__u32 connection;
__u32 mm_width, mm_height; /**< HxW in millimeters */
__u32 subpixel;
+
+ __u32 pad;
};
#define DRM_MODE_PROP_PENDING (1<<0)
#define SI_TILE_MODE_DEPTH_STENCIL_2D_4AA 3
#define SI_TILE_MODE_DEPTH_STENCIL_2D_8AA 2
+#define CIK_TILE_MODE_DEPTH_STENCIL_1D 5
+
#endif
#define PERF_EVENT_IOC_PERIOD _IOW('$', 4, __u64)
#define PERF_EVENT_IOC_SET_OUTPUT _IO ('$', 5)
#define PERF_EVENT_IOC_SET_FILTER _IOW('$', 6, char *)
-#define PERF_EVENT_IOC_ID _IOR('$', 7, u64 *)
+#define PERF_EVENT_IOC_ID _IOR('$', 7, __u64 *)
enum perf_event_ioc_flags {
PERF_IOC_FLAG_GROUP = 1U << 0,
union {
__u64 capabilities;
struct {
- __u64 cap_usr_time : 1,
- cap_usr_rdpmc : 1,
- cap_usr_time_zero : 1,
- cap_____res : 61;
+ __u64 cap_bit0 : 1, /* Always 0, deprecated, see commit 860f085b74e9 */
+ cap_bit0_is_deprecated : 1, /* Always 1, signals that bit 0 is zero */
+
+ cap_user_rdpmc : 1, /* The RDPMC instruction can be used to read counts */
+ cap_user_time : 1, /* The time_* fields are used */
+ cap_user_time_zero : 1, /* The time_zero field is used */
+ cap_____res : 59;
};
};
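/*
 * Illustrative sketch, not part of the patch: how new userspace can tell
 * the capability layouts apart.  Since bit 0 meant different things on
 * older kernels, the cap_user_* bits should only be trusted when
 * cap_bit0_is_deprecated reads as 1.
 */
static int example_rdpmc_usable(const struct perf_event_mmap_page *pc)
{
	if (pc->cap_bit0_is_deprecated)
		return pc->cap_user_rdpmc;
	/* old kernel: bit 0 is ambiguous, be conservative */
	return 0;
}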
* ((rem * time_mult) >> time_shift);
*/
__u64 time_zero;
+ __u32 size; /* Header size up to __reserved[] fields. */
/*
* Hole for extension of the self monitor capabilities
*/
- __u64 __reserved[119]; /* align to 1k */
+ __u8 __reserved[118*8+4]; /* align to 1k. */
/*
* Control data for the mmap() data buffer.
*
- * User-space reading the @data_head value should issue an rmb(), on
- * SMP capable platforms, after reading this value -- see
- * perf_event_wakeup().
+ * User-space reading the @data_head value should issue an smp_rmb(),
+ * after reading this value.
*
* When the mapping is PROT_WRITE the @data_tail value should be
- * written by userspace to reflect the last read data. In this case
- * the kernel will not over-write unread data.
+ * written by userspace to reflect the last read data, after issuing
+ * an smp_mb() to separate the data read from the ->data_tail store.
+ * In this case the kernel will not over-write unread data.
+ *
+ * See perf_output_put_handle() for the data ordering.
*/
__u64 data_head; /* head in the data section */
__u64 data_tail; /* user-space written tail */
* u64 len;
* u64 pgoff;
* char filename[];
+ * struct sample_id sample_id;
* };
*/
PERF_RECORD_MMAP = 1,
# UAPI Header export list
header-y += tc_csum.h
+header-y += tc_defact.h
header-y += tc_gact.h
header-y += tc_ipt.h
header-y += tc_mirred.h
struct tc_defact {
tc_gen;
};
-
+
enum {
TCA_DEF_UNSPEC,
TCA_DEF_TM,
IB_USER_VERBS_CMD_CLOSE_XRCD,
IB_USER_VERBS_CMD_CREATE_XSRQ,
IB_USER_VERBS_CMD_OPEN_QP,
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
IB_USER_VERBS_CMD_CREATE_FLOW = IB_USER_VERBS_CMD_THRESHOLD,
IB_USER_VERBS_CMD_DESTROY_FLOW
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
};
/*
__u16 out_words;
};
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
struct ib_uverbs_cmd_hdr_ex {
__u32 command;
__u16 in_words;
__u16 provider_out_words;
__u32 cmd_hdr_reserved;
};
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
struct ib_uverbs_get_context {
__u64 response;
__u64 driver_data[0];
};
+#ifdef CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING
struct ib_kern_eth_filter {
__u8 dst_mac[6];
__u8 src_mac[6];
__u32 comp_mask;
__u32 flow_handle;
};
+#endif /* CONFIG_INFINIBAND_EXPERIMENTAL_UVERBS_FLOW_STEERING */
struct ib_uverbs_create_srq {
__u64 response;
#include <linux/elevator.h>
#include <linux/sched_clock.h>
#include <linux/context_tracking.h>
+#include <linux/random.h>
#include <asm/io.h>
#include <asm/bugs.h>
do_ctors();
usermodehelper_enable();
do_initcalls();
+ random_int_secret_init();
}
static void __init do_pre_smp_initcalls(void)
return err;
}
-static int proc_ipc_callback_dointvec(ctl_table *table, int write,
+static int proc_ipc_callback_dointvec_minmax(ctl_table *table, int write,
void __user *buffer, size_t *lenp, loff_t *ppos)
{
struct ctl_table ipc_table;
memcpy(&ipc_table, table, sizeof(ipc_table));
ipc_table.data = get_ipc(table);
- rc = proc_dointvec(&ipc_table, write, buffer, lenp, ppos);
+ rc = proc_dointvec_minmax(&ipc_table, write, buffer, lenp, ppos);
if (write && !rc && lenp_bef == *lenp)
/*
#define proc_ipc_dointvec NULL
#define proc_ipc_dointvec_minmax NULL
#define proc_ipc_dointvec_minmax_orphans NULL
-#define proc_ipc_callback_dointvec NULL
+#define proc_ipc_callback_dointvec_minmax NULL
#define proc_ipcauto_dointvec_minmax NULL
#endif
static int zero;
static int one = 1;
-#ifdef CONFIG_CHECKPOINT_RESTORE
static int int_max = INT_MAX;
-#endif
static struct ctl_table ipc_kern_table[] = {
{
.data = &init_ipc_ns.msg_ctlmax,
.maxlen = sizeof (init_ipc_ns.msg_ctlmax),
.mode = 0644,
- .proc_handler = proc_ipc_dointvec,
+ .proc_handler = proc_ipc_dointvec_minmax,
+ .extra1 = &zero,
+ .extra2 = &int_max,
},
{
.procname = "msgmni",
.data = &init_ipc_ns.msg_ctlmni,
.maxlen = sizeof (init_ipc_ns.msg_ctlmni),
.mode = 0644,
- .proc_handler = proc_ipc_callback_dointvec,
+ .proc_handler = proc_ipc_callback_dointvec_minmax,
+ .extra1 = &zero,
+ .extra2 = &int_max,
},
{
.procname = "msgmnb",
.data = &init_ipc_ns.msg_ctlmnb,
.maxlen = sizeof (init_ipc_ns.msg_ctlmnb),
.mode = 0644,
- .proc_handler = proc_ipc_dointvec,
+ .proc_handler = proc_ipc_dointvec_minmax,
+ .extra1 = &zero,
+ .extra2 = &int_max,
},
{
.procname = "sem",
ipc_rmid(&msg_ids(ns), &s->q_perm);
}
+static void msg_rcu_free(struct rcu_head *head)
+{
+ struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
+ struct msg_queue *msq = ipc_rcu_to_struct(p);
+
+ security_msg_queue_free(msq);
+ ipc_rcu_free(head);
+}
+
/**
* newque - Create a new msg queue
* @ns: namespace
msq->q_perm.security = NULL;
retval = security_msg_queue_alloc(msq);
if (retval) {
- ipc_rcu_putref(msq);
+ ipc_rcu_putref(msq, ipc_rcu_free);
return retval;
}
/* ipc_addid() locks msq upon success. */
id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni);
if (id < 0) {
- security_msg_queue_free(msq);
- ipc_rcu_putref(msq);
+ ipc_rcu_putref(msq, msg_rcu_free);
return id;
}
free_msg(msg);
}
atomic_sub(msq->q_cbytes, &ns->msg_bytes);
- security_msg_queue_free(msq);
- ipc_rcu_putref(msq);
+ ipc_rcu_putref(msq, msg_rcu_free);
}
/*
if (ipcperms(ns, &msq->q_perm, S_IWUGO))
goto out_unlock0;
+ /* raced with RMID? */
+ if (msq->q_perm.deleted) {
+ err = -EIDRM;
+ goto out_unlock0;
+ }
+
err = security_msg_queue_msgsnd(msq, msg, msgflg);
if (err)
goto out_unlock0;
rcu_read_lock();
ipc_lock_object(&msq->q_perm);
- ipc_rcu_putref(msq);
+ ipc_rcu_putref(msq, ipc_rcu_free);
if (msq->q_perm.deleted) {
err = -EIDRM;
goto out_unlock0;
goto out_unlock1;
ipc_lock_object(&msq->q_perm);
+
+ /* raced with RMID? */
+ if (msq->q_perm.deleted) {
+ msg = ERR_PTR(-EIDRM);
+ goto out_unlock0;
+ }
+
msg = find_msg(msq, &msgtyp, mode);
if (!IS_ERR(msg)) {
/*
}
}
+static void sem_rcu_free(struct rcu_head *head)
+{
+ struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
+ struct sem_array *sma = ipc_rcu_to_struct(p);
+
+ security_sem_free(sma);
+ ipc_rcu_free(head);
+}
+
+/*
+ * Wait until all currently ongoing simple ops have completed.
+ * Caller must own sem_perm.lock.
+ * New simple ops cannot start, because simple ops first check
+ * that a) sem_perm.lock is free and b) complex_count is 0.
+ */
+static void sem_wait_array(struct sem_array *sma)
+{
+ int i;
+ struct sem *sem;
+
+ if (sma->complex_count) {
+ /* The thread that increased sma->complex_count waited on
+ * all sem->lock locks. Thus we don't need to wait again.
+ */
+ return;
+ }
+
+ for (i = 0; i < sma->sem_nsems; i++) {
+ sem = sma->sem_base + i;
+ spin_unlock_wait(&sem->lock);
+ }
+}
+
/*
* If the request contains only one semaphore operation, and there are
* no complex transactions pending, lock only the semaphore involved.
* Otherwise, lock the entire semaphore array, since we either have
* multiple semaphores in our own semops, or we need to look at
* semaphores from other pending complex operations.
- *
- * Carefully guard against sma->complex_count changing between zero
- * and non-zero while we are spinning for the lock. The value of
- * sma->complex_count cannot change while we are holding the lock,
- * so sem_unlock should be fine.
- *
- * The global lock path checks that all the local locks have been released,
- * checking each local lock once. This means that the local lock paths
- * cannot start their critical sections while the global lock is held.
*/
static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
int nsops)
{
- int locknum;
- again:
- if (nsops == 1 && !sma->complex_count) {
- struct sem *sem = sma->sem_base + sops->sem_num;
+ struct sem *sem;
- /* Lock just the semaphore we are interested in. */
- spin_lock(&sem->lock);
+ if (nsops != 1) {
+ /* Complex operation - acquire a full lock */
+ ipc_lock_object(&sma->sem_perm);
- /*
- * If sma->complex_count was set while we were spinning,
- * we may need to look at things we did not lock here.
+ /* And wait until all simple ops that are processed
+ * right now have dropped their locks.
*/
- if (unlikely(sma->complex_count)) {
- spin_unlock(&sem->lock);
- goto lock_array;
- }
+ sem_wait_array(sma);
+ return -1;
+ }
+
+ /*
+ * Only one semaphore affected - try to optimize locking.
+ * The rules are:
+ * - optimized locking is possible if no complex operation
+ * is either enqueued or processed right now.
+ * - The test for enqueued complex ops is simple:
+ * sma->complex_count != 0
+ * - Testing for complex ops that are processed right now is
+ * a bit more difficult. Complex ops acquire the full lock
+ * and first wait that the running simple ops have completed.
+ * (see above)
+ * Thus: If we own a simple lock and the global lock is free
+ * and complex_count is now 0, then it will stay 0 and
+ * thus just locking sem->lock is sufficient.
+ */
+ sem = sma->sem_base + sops->sem_num;
+ if (sma->complex_count == 0) {
/*
- * Another process is holding the global lock on the
- * sem_array; we cannot enter our critical section,
- * but have to wait for the global lock to be released.
+ * It appears that no complex operation is around.
+ * Acquire the per-semaphore lock.
*/
- if (unlikely(spin_is_locked(&sma->sem_perm.lock))) {
- spin_unlock(&sem->lock);
- spin_unlock_wait(&sma->sem_perm.lock);
- goto again;
+ spin_lock(&sem->lock);
+
+ /* Then check that the global lock is free */
+ if (!spin_is_locked(&sma->sem_perm.lock)) {
+ /* spin_is_locked() is not a memory barrier */
+ smp_mb();
+
+ /* Now repeat the test of complex_count:
+ * It can't change anymore until we drop sem->lock.
+ * Thus: if it is now 0, then it will stay 0.
+ */
+ if (sma->complex_count == 0) {
+ /* fast path successful! */
+ return sops->sem_num;
+ }
}
+ spin_unlock(&sem->lock);
+ }
- locknum = sops->sem_num;
+ /* slow path: acquire the full lock */
+ ipc_lock_object(&sma->sem_perm);
+
+ if (sma->complex_count == 0) {
+ /* False alarm:
+ * There is no complex operation, thus we can switch
+ * back to the fast path.
+ */
+ spin_lock(&sem->lock);
+ ipc_unlock_object(&sma->sem_perm);
+ return sops->sem_num;
} else {
- int i;
- /*
- * Lock the semaphore array, and wait for all of the
- * individual semaphore locks to go away. The code
- * above ensures no new single-lock holders will enter
- * their critical section while the array lock is held.
+ /* Not a false alarm, thus complete the sequence for a
+ * full lock.
*/
- lock_array:
- ipc_lock_object(&sma->sem_perm);
- for (i = 0; i < sma->sem_nsems; i++) {
- struct sem *sem = sma->sem_base + i;
- spin_unlock_wait(&sem->lock);
- }
- locknum = -1;
+ sem_wait_array(sma);
+ return -1;
}
- return locknum;
}
static inline void sem_unlock(struct sem_array *sma, int locknum)
static inline void sem_lock_and_putref(struct sem_array *sma)
{
sem_lock(sma, NULL, -1);
- ipc_rcu_putref(sma);
-}
-
-static inline void sem_putref(struct sem_array *sma)
-{
- ipc_rcu_putref(sma);
+ ipc_rcu_putref(sma, ipc_rcu_free);
}
static inline void sem_rmid(struct ipc_namespace *ns, struct sem_array *s)
sma->sem_perm.security = NULL;
retval = security_sem_alloc(sma);
if (retval) {
- ipc_rcu_putref(sma);
+ ipc_rcu_putref(sma, ipc_rcu_free);
return retval;
}
id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
if (id < 0) {
- security_sem_free(sma);
- ipc_rcu_putref(sma);
+ ipc_rcu_putref(sma, sem_rcu_free);
return id;
}
ns->used_sems += nsems;
}
/**
+ * set_semotime(sma, sops) - set sem_otime
+ * @sma: semaphore array
+ * @sops: operations that modified the array, may be NULL
+ *
+ * sem_otime is replicated to avoid cache line thrashing.
+ * This function sets one instance to the current time.
+ */
+static void set_semotime(struct sem_array *sma, struct sembuf *sops)
+{
+ if (sops == NULL) {
+ sma->sem_base[0].sem_otime = get_seconds();
+ } else {
+ sma->sem_base[sops[0].sem_num].sem_otime =
+ get_seconds();
+ }
+}
+
+/**
* do_smart_update(sma, sops, nsops, otime, pt) - optimized update_queue
* @sma: semaphore array
* @sops: operations that were performed
}
}
}
- if (otime) {
- if (sops == NULL) {
- sma->sem_base[0].sem_otime = get_seconds();
- } else {
- sma->sem_base[sops[0].sem_num].sem_otime =
- get_seconds();
- }
- }
+ if (otime)
+ set_semotime(sma, sops);
}
-
/* The following counts are associated to each semaphore:
* semncnt number of tasks waiting on semval being nonzero
* semzcnt number of tasks waiting on semval being zero
wake_up_sem_queue_do(&tasks);
ns->used_sems -= sma->sem_nsems;
- security_sem_free(sma);
- ipc_rcu_putref(sma);
+ ipc_rcu_putref(sma, sem_rcu_free);
}
static unsigned long copy_semid_to_user(void __user *buf, struct semid64_ds *in, int version)
sem_lock(sma, NULL, -1);
+ if (sma->sem_perm.deleted) {
+ sem_unlock(sma, -1);
+ rcu_read_unlock();
+ return -EIDRM;
+ }
+
curr = &sma->sem_base[semnum];
ipc_assert_locked_object(&sma->sem_perm);
int i;
sem_lock(sma, NULL, -1);
+ if (sma->sem_perm.deleted) {
+ err = -EIDRM;
+ goto out_unlock;
+ }
if(nsems > SEMMSL_FAST) {
if (!ipc_rcu_getref(sma)) {
- sem_unlock(sma, -1);
- rcu_read_unlock();
err = -EIDRM;
- goto out_free;
+ goto out_unlock;
}
sem_unlock(sma, -1);
rcu_read_unlock();
sem_io = ipc_alloc(sizeof(ushort)*nsems);
if(sem_io == NULL) {
- sem_putref(sma);
+ ipc_rcu_putref(sma, ipc_rcu_free);
return -ENOMEM;
}
rcu_read_lock();
sem_lock_and_putref(sma);
if (sma->sem_perm.deleted) {
- sem_unlock(sma, -1);
- rcu_read_unlock();
err = -EIDRM;
- goto out_free;
+ goto out_unlock;
}
}
for (i = 0; i < sma->sem_nsems; i++)
struct sem_undo *un;
if (!ipc_rcu_getref(sma)) {
- rcu_read_unlock();
- return -EIDRM;
+ err = -EIDRM;
+ goto out_rcu_wakeup;
}
rcu_read_unlock();
if(nsems > SEMMSL_FAST) {
sem_io = ipc_alloc(sizeof(ushort)*nsems);
if(sem_io == NULL) {
- sem_putref(sma);
+ ipc_rcu_putref(sma, ipc_rcu_free);
return -ENOMEM;
}
}
if (copy_from_user (sem_io, p, nsems*sizeof(ushort))) {
- sem_putref(sma);
+ ipc_rcu_putref(sma, ipc_rcu_free);
err = -EFAULT;
goto out_free;
}
for (i = 0; i < nsems; i++) {
if (sem_io[i] > SEMVMX) {
- sem_putref(sma);
+ ipc_rcu_putref(sma, ipc_rcu_free);
err = -ERANGE;
goto out_free;
}
rcu_read_lock();
sem_lock_and_putref(sma);
if (sma->sem_perm.deleted) {
- sem_unlock(sma, -1);
- rcu_read_unlock();
err = -EIDRM;
- goto out_free;
+ goto out_unlock;
}
for (i = 0; i < nsems; i++)
goto out_rcu_wakeup;
sem_lock(sma, NULL, -1);
+ if (sma->sem_perm.deleted) {
+ err = -EIDRM;
+ goto out_unlock;
+ }
curr = &sma->sem_base[semnum];
switch (cmd) {
/* step 2: allocate new undo structure */
new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems, GFP_KERNEL);
if (!new) {
- sem_putref(sma);
+ ipc_rcu_putref(sma, ipc_rcu_free);
return ERR_PTR(-ENOMEM);
}
if (error)
goto out_rcu_wakeup;
+ error = -EIDRM;
+ locknum = sem_lock(sma, sops, nsops);
+ if (sma->sem_perm.deleted)
+ goto out_unlock_free;
/*
* semid identifiers are not unique - find_alloc_undo may have
* allocated an undo structure, it was invalidated by an RMID
* This case can be detected checking un->semid. The existence of
* "un" itself is guaranteed by rcu.
*/
- error = -EIDRM;
- locknum = sem_lock(sma, sops, nsops);
if (un && un->semid == -1)
goto out_unlock_free;
error = perform_atomic_semop(sma, sops, nsops, un,
task_tgid_vnr(current));
- if (error <= 0) {
- if (alter && error == 0)
+ if (error == 0) {
+ /* If the operation was successful, then do
+ * the required updates.
+ */
+ if (alter)
do_smart_update(sma, sops, nsops, 1, &tasks);
-
- goto out_unlock_free;
+ else
+ set_semotime(sma, sops);
}
+ if (error <= 0)
+ goto out_unlock_free;
/* We need to sleep on this operation, so we put the current
* task into the pending queue and go to sleep.
}
sem_lock(sma, NULL, -1);
+ /* exit_sem raced with IPC_RMID, nothing to do */
+ if (sma->sem_perm.deleted) {
+ sem_unlock(sma, -1);
+ rcu_read_unlock();
+ continue;
+ }
un = __lookup_undo(ulp, semid);
if (un == NULL) {
/* exit_sem raced with IPC_RMID+semget() that created
struct sem_array *sma = it;
time_t sem_otime;
+ /*
+ * The proc interface isn't aware of sem_lock(), it calls
+ * ipc_lock_object() directly (in sysvipc_find_ipc).
+ * In order to stay compatible with sem_lock(), we must wait until
+ * all simple semop() calls have left their critical regions.
+ */
+ sem_wait_array(sma);
+
sem_otime = get_semotime(sma);
return seq_printf(s,
ipc_lock_object(&ipcp->shm_perm);
}
+static void shm_rcu_free(struct rcu_head *head)
+{
+ struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
+ struct shmid_kernel *shp = ipc_rcu_to_struct(p);
+
+ security_shm_free(shp);
+ ipc_rcu_free(head);
+}
+
static inline void shm_rmid(struct ipc_namespace *ns, struct shmid_kernel *s)
{
ipc_rmid(&shm_ids(ns), &s->shm_perm);
user_shm_unlock(file_inode(shp->shm_file)->i_size,
shp->mlock_user);
fput (shp->shm_file);
- security_shm_free(shp);
- ipc_rcu_putref(shp);
+ ipc_rcu_putref(shp, shm_rcu_free);
}
/*
shp->shm_perm.security = NULL;
error = security_shm_alloc(shp);
if (error) {
- ipc_rcu_putref(shp);
+ ipc_rcu_putref(shp, ipc_rcu_free);
return error;
}
user_shm_unlock(size, shp->mlock_user);
fput(file);
no_file:
- security_shm_free(shp);
- ipc_rcu_putref(shp);
+ ipc_rcu_putref(shp, shm_rcu_free);
return error;
}
* Pavel Emelianov <xemul@openvz.org>
*
* General sysv ipc locking scheme:
- * when doing ipc id lookups, take the ids->rwsem
- * rcu_read_lock()
- * obtain the ipc object (kern_ipc_perm)
- * perform security, capabilities, auditing and permission checks, etc.
- * acquire the ipc lock (kern_ipc_perm.lock) throught ipc_lock_object()
- * perform data updates (ie: SET, RMID, LOCK/UNLOCK commands)
+ * rcu_read_lock()
+ * obtain the ipc object (kern_ipc_perm) by looking up the id in an idr
+ * tree.
+ * - perform initial checks (capabilities, auditing and permission,
+ * etc).
+ * - perform read-only operations, such as STAT, INFO commands.
+ * acquire the ipc lock (kern_ipc_perm.lock) through
+ * ipc_lock_object()
+ * - perform data updates, such as SET, RMID commands and
+ * mechanism-specific operations (semop/semtimedop,
+ * msgsnd/msgrcv, shmat/shmdt).
+ * drop the ipc lock, through ipc_unlock_object().
+ * rcu_read_unlock()
+ *
+ * The ids->rwsem must be taken when:
+ * - creating, removing and iterating the existing entries in ipc
+ * identifier sets.
+ * - iterating through files under /proc/sysvipc/
+ *
+ * Note that sems have a special fast path that avoids kern_ipc_perm.lock -
+ * see sem_lock().
*/
#include <linux/mm.h>
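/*
 * Illustrative sketch, not part of the patch: the access pattern described
 * in the comment above, using the helpers declared in ipc/util.h.  The
 * update itself is elided; example_ipc_update() is an invented name.
 */
static int example_ipc_update(struct ipc_namespace *ns, struct ipc_ids *ids,
			      int id)
{
	struct kern_ipc_perm *ipcp;
	int err = 0;

	rcu_read_lock();
	ipcp = ipc_obtain_object(ids, id);		/* idr lookup */
	if (IS_ERR(ipcp)) {
		err = PTR_ERR(ipcp);
		goto out_rcu;
	}
	if (ipcperms(ns, ipcp, S_IWUGO)) {		/* read-only checks */
		err = -EACCES;
		goto out_rcu;
	}
	ipc_lock_object(ipcp);				/* kern_ipc_perm.lock */
	/* ... SET/RMID-style data updates would go here ... */
	ipc_unlock_object(ipcp);
out_rcu:
	rcu_read_unlock();
	return err;
}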
kfree(ptr);
}
-struct ipc_rcu {
- struct rcu_head rcu;
- atomic_t refcount;
-} ____cacheline_aligned_in_smp;
-
/**
* ipc_rcu_alloc - allocate ipc and rcu space
* @size: size desired
return atomic_inc_not_zero(&p->refcount);
}
-/**
- * ipc_schedule_free - free ipc + rcu space
- * @head: RCU callback structure for queued work
- */
-static void ipc_schedule_free(struct rcu_head *head)
-{
- vfree(container_of(head, struct ipc_rcu, rcu));
-}
-
-void ipc_rcu_putref(void *ptr)
+void ipc_rcu_putref(void *ptr, void (*func)(struct rcu_head *head))
{
struct ipc_rcu *p = ((struct ipc_rcu *)ptr) - 1;
if (!atomic_dec_and_test(&p->refcount))
return;
- if (is_vmalloc_addr(ptr)) {
- call_rcu(&p->rcu, ipc_schedule_free);
- } else {
- kfree_rcu(p, rcu);
- }
+ call_rcu(&p->rcu, func);
+}
+
+void ipc_rcu_free(struct rcu_head *head)
+{
+ struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu);
+
+ if (is_vmalloc_addr(p))
+ vfree(p);
+ else
+ kfree(p);
}
/**
static inline void shm_exit_ns(struct ipc_namespace *ns) { }
#endif
+struct ipc_rcu {
+ struct rcu_head rcu;
+ atomic_t refcount;
+} ____cacheline_aligned_in_smp;
+
+#define ipc_rcu_to_struct(p) ((void *)(p+1))
+
/*
* Structure that holds the parameters needed by the ipc operations
* (see after)
*/
void* ipc_rcu_alloc(int size);
int ipc_rcu_getref(void *ptr);
-void ipc_rcu_putref(void *ptr);
+void ipc_rcu_putref(void *ptr, void (*func)(struct rcu_head *head));
+void ipc_rcu_free(struct rcu_head *head);
struct kern_ipc_perm *ipc_lock(struct ipc_ids *, int);
struct kern_ipc_perm *ipc_obtain_object(struct ipc_ids *ids, int id);
sleep_time = timeout_start + audit_backlog_wait_time -
jiffies;
- if ((long)sleep_time > 0)
+ if ((long)sleep_time > 0) {
wait_for_auditd(sleep_time);
- continue;
+ continue;
+ }
}
if (audit_rate_check() && printk_ratelimit())
printk(KERN_WARNING
/* @tsk either already exited or can't exit until the end */
if (tsk->flags & PF_EXITING)
- continue;
+ goto next;
/* as per above, nr_threads may decrease, but not increase. */
BUG_ON(i >= group_size);
ent.cgrp = task_cgroup_from_root(tsk, root);
/* nothing to do if this task is already in the cgroup */
if (ent.cgrp == cgrp)
- continue;
+ goto next;
/*
* saying GFP_ATOMIC has no effect here because we did prealloc
* earlier, but it's good form to communicate our expectations.
retval = flex_array_put(group, i, &ent, GFP_ATOMIC);
BUG_ON(retval != 0);
i++;
-
+ next:
if (!threadgroup)
break;
} while_each_thread(leader, tsk);
WARN_ON_ONCE(!rcu_read_lock_held());
- /* if first iteration, visit the leftmost descendant */
- if (!pos) {
- next = css_leftmost_descendant(root);
- return next != root ? next : NULL;
- }
+ /* if first iteration, visit leftmost descendant which may be @root */
+ if (!pos)
+ return css_leftmost_descendant(root);
/* if we visited @root, we're done */
if (pos == root)
unsigned long flags;
/*
+ * Repeat the user_enter() check here because some archs may be calling
+ * this from asm and if no CPU needs context tracking, they shouldn't
+ * go further. Repeat the check here until they support the static key
+ * check.
+ */
+ if (!static_key_false(&context_tracking_enabled))
+ return;
+
+ /*
* Some contexts may involve an exception occurring in an irq,
* leading to that nesting:
* rcu_irq_enter() rcu_user_exit() rcu_user_exit() rcu_irq_exit()
{
unsigned long flags;
+ if (!static_key_false(&context_tracking_enabled))
+ return;
+
if (in_interrupt())
return;
*running = ctx_time - event->tstamp_running;
}
+static void perf_event_init_userpage(struct perf_event *event)
+{
+ struct perf_event_mmap_page *userpg;
+ struct ring_buffer *rb;
+
+ rcu_read_lock();
+ rb = rcu_dereference(event->rb);
+ if (!rb)
+ goto unlock;
+
+ userpg = rb->user_page;
+
+ /* Allow new userspace to detect that bit 0 is deprecated */
+ userpg->cap_bit0_is_deprecated = 1;
+ userpg->size = offsetof(struct perf_event_mmap_page, __reserved);
+
+unlock:
+ rcu_read_unlock();
+}
+
void __weak arch_perf_update_userpage(struct perf_event_mmap_page *userpg, u64 now)
{
}
ring_buffer_attach(event, rb);
rcu_assign_pointer(event->rb, rb);
+ perf_event_init_userpage(event);
perf_event_update_userpage(event);
unlock:
if (ret)
return -EFAULT;
+ /* disabled for now */
+ if (attr->mmap2)
+ return -EINVAL;
+
if (attr->__reserved_1)
return -EINVAL;
perf_remove_from_context(event);
unaccount_event_cpu(event, src_cpu);
put_ctx(src_ctx);
- list_add(&event->event_entry, &events);
+ list_add(&event->migrate_entry, &events);
}
mutex_unlock(&src_ctx->mutex);
synchronize_rcu();
mutex_lock(&dst_ctx->mutex);
- list_for_each_entry_safe(event, tmp, &events, event_entry) {
- list_del(&event->event_entry);
+ list_for_each_entry_safe(event, tmp, &events, migrate_entry) {
+ list_del(&event->migrate_entry);
if (event->state >= PERF_EVENT_STATE_OFF)
event->state = PERF_EVENT_STATE_INACTIVE;
account_event_cpu(event, dst_cpu);
goto out;
/*
- * Publish the known good head. Rely on the full barrier implied
- * by atomic_dec_and_test() order the rb->head read and this
- * write.
+ * Since the mmap() consumer (userspace) can run on a different CPU:
+ *
+ * kernel user
+ *
+ * READ ->data_tail READ ->data_head
+ * smp_mb() (A) smp_rmb() (C)
+ * WRITE $data READ $data
+ * smp_wmb() (B) smp_mb() (D)
+ * STORE ->data_head WRITE ->data_tail
+ *
+ * Where A pairs with D, and B pairs with C.
+ *
+ * I don't think A needs to be a full barrier because we won't in fact
+ * write data until we see the store from userspace. So we simply don't
+ * issue the data WRITE until we observe it. Be conservative for now.
+ *
+ * OTOH, D needs to be a full barrier since it separates the data READ
+ * from the tail WRITE.
+ *
+ * For B a WMB is sufficient since it separates two WRITEs, and for C
+ * an RMB is sufficient since it separates two READs.
+ *
+ * See perf_output_begin().
*/
+ smp_wmb();
rb->user_page->data_head = head;
/*
* Userspace could choose to issue a mb() before updating the
* tail pointer. So that all reads will be completed before the
* write is issued.
+ *
+ * See perf_output_put_handle().
*/
tail = ACCESS_ONCE(rb->user_page->data_tail);
- smp_rmb();
+ smp_mb();
offset = head = local_read(&rb->head);
head += size;
if (unlikely(!perf_output_space(rb, tail, offset, head)))
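/*
 * Illustrative sketch, not part of the patch: the matching userspace
 * consumer side of the ordering table above (barriers C and D).  Kernel
 * style barrier names are used for symmetry; a real tool would provide
 * its own equivalents, and wrap-around handling is omitted.
 */
static void example_consume(struct perf_event_mmap_page *pc, char *data,
			    unsigned long mask)
{
	__u64 head = pc->data_head;
	__u64 tail = pc->data_tail;

	smp_rmb();	/* (C): order the ->data_head read before data reads */

	while (tail < head) {
		struct perf_event_header *hdr = (void *)(data + (tail & mask));

		/* ... decode the record here ... */
		tail += hdr->size;
	}

	smp_mb();	/* (D): order data reads before the ->data_tail write */
	pc->data_tail = tail;
}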
DECLARE_COMPLETION_ONSTACK(done);
int retval = 0;
+ if (!sub_info->path) {
+ call_usermodehelper_freeinfo(sub_info);
+ return -EINVAL;
+ }
helper_lock();
if (!khelper_wq || usermodehelper_disabled) {
retval = -EBUSY;
static __always_inline int __sched
__mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip,
- struct ww_acquire_ctx *ww_ctx)
+ struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
{
struct task_struct *task = current;
struct mutex_waiter waiter;
struct task_struct *owner;
struct mspin_node node;
- if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+ if (use_ww_ctx && ww_ctx->acquired > 0) {
struct ww_mutex *ww;
ww = container_of(lock, struct ww_mutex, base);
if ((atomic_read(&lock->count) == 1) &&
(atomic_cmpxchg(&lock->count, 1, 0) == 1)) {
lock_acquired(&lock->dep_map, ip);
- if (!__builtin_constant_p(ww_ctx == NULL)) {
+ if (use_ww_ctx) {
struct ww_mutex *ww;
ww = container_of(lock, struct ww_mutex, base);
goto err;
}
- if (!__builtin_constant_p(ww_ctx == NULL) && ww_ctx->acquired > 0) {
+ if (use_ww_ctx && ww_ctx->acquired > 0) {
ret = __mutex_lock_check_stamp(lock, ww_ctx);
if (ret)
goto err;
lock_acquired(&lock->dep_map, ip);
mutex_set_owner(lock);
- if (!__builtin_constant_p(ww_ctx == NULL)) {
+ if (use_ww_ctx) {
struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
struct mutex_waiter *cur;
{
might_sleep();
__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
- subclass, NULL, _RET_IP_, NULL);
+ subclass, NULL, _RET_IP_, NULL, 0);
}
EXPORT_SYMBOL_GPL(mutex_lock_nested);
{
might_sleep();
__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
- 0, nest, _RET_IP_, NULL);
+ 0, nest, _RET_IP_, NULL, 0);
}
EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock);
{
might_sleep();
return __mutex_lock_common(lock, TASK_KILLABLE,
- subclass, NULL, _RET_IP_, NULL);
+ subclass, NULL, _RET_IP_, NULL, 0);
}
EXPORT_SYMBOL_GPL(mutex_lock_killable_nested);
{
might_sleep();
return __mutex_lock_common(lock, TASK_INTERRUPTIBLE,
- subclass, NULL, _RET_IP_, NULL);
+ subclass, NULL, _RET_IP_, NULL, 0);
}
EXPORT_SYMBOL_GPL(mutex_lock_interruptible_nested);
might_sleep();
ret = __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE,
- 0, &ctx->dep_map, _RET_IP_, ctx);
+ 0, &ctx->dep_map, _RET_IP_, ctx, 1);
if (!ret && ctx->acquired > 1)
return ww_mutex_deadlock_injection(lock, ctx);
might_sleep();
ret = __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE,
- 0, &ctx->dep_map, _RET_IP_, ctx);
+ 0, &ctx->dep_map, _RET_IP_, ctx, 1);
if (!ret && ctx->acquired > 1)
return ww_mutex_deadlock_injection(lock, ctx);
struct mutex *lock = container_of(lock_count, struct mutex, count);
__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, 0,
- NULL, _RET_IP_, NULL);
+ NULL, _RET_IP_, NULL, 0);
}
static noinline int __sched
__mutex_lock_killable_slowpath(struct mutex *lock)
{
return __mutex_lock_common(lock, TASK_KILLABLE, 0,
- NULL, _RET_IP_, NULL);
+ NULL, _RET_IP_, NULL, 0);
}
static noinline int __sched
__mutex_lock_interruptible_slowpath(struct mutex *lock)
{
return __mutex_lock_common(lock, TASK_INTERRUPTIBLE, 0,
- NULL, _RET_IP_, NULL);
+ NULL, _RET_IP_, NULL, 0);
}
static noinline int __sched
__ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
{
return __mutex_lock_common(&lock->base, TASK_UNINTERRUPTIBLE, 0,
- NULL, _RET_IP_, ctx);
+ NULL, _RET_IP_, ctx, 1);
}
static noinline int __sched
struct ww_acquire_ctx *ctx)
{
return __mutex_lock_common(&lock->base, TASK_INTERRUPTIBLE, 0,
- NULL, _RET_IP_, ctx);
+ NULL, _RET_IP_, ctx, 1);
}
#endif
STANDARD_PARAM_DEF(byte, unsigned char, "%hhu", unsigned long, kstrtoul);
-STANDARD_PARAM_DEF(short, short, "%hi", long, kstrtoul);
+STANDARD_PARAM_DEF(short, short, "%hi", long, kstrtol);
STANDARD_PARAM_DEF(ushort, unsigned short, "%hu", unsigned long, kstrtoul);
-STANDARD_PARAM_DEF(int, int, "%i", long, kstrtoul);
+STANDARD_PARAM_DEF(int, int, "%i", long, kstrtol);
STANDARD_PARAM_DEF(uint, unsigned int, "%u", unsigned long, kstrtoul);
-STANDARD_PARAM_DEF(long, long, "%li", long, kstrtoul);
+STANDARD_PARAM_DEF(long, long, "%li", long, kstrtol);
STANDARD_PARAM_DEF(ulong, unsigned long, "%lu", unsigned long, kstrtoul);
int param_set_charp(const char *val, const struct kernel_param *kp)
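/*
 * Illustrative sketch, not part of the patch: why the switch to kstrtol for
 * the signed types matters.  With kstrtoul a negative value such as
 *
 *	insmod example.ko example_level=-1
 *
 * was rejected at parse time; with kstrtol it is accepted.  The parameter
 * below is invented for the example.
 */
static int example_level;
module_param(example_level, int, 0644);
MODULE_PARM_DESC(example_level, "example signed parameter (-1 selects auto)");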
*/
wake_up_process(ns->child_reaper);
break;
+ case PIDNS_HASH_ADDING:
+ /* Handle a fork failure of the first process */
+ WARN_ON(ns->child_reaper);
+ ns->nr_hashed = 0;
+ /* fall through */
case 0:
schedule_work(&ns->proc_work);
break;
goto Finish;
}
-late_initcall(software_resume);
+late_initcall_sync(software_resume);
static const char * const hibernation_modes[] = {
struct memory_bitmap *bm1, *bm2;
int error = 0;
- BUG_ON(forbidden_pages_map || free_pages_map);
+ if (forbidden_pages_map && free_pages_map)
+ return 0;
+ else
+ BUG_ON(forbidden_pages_map || free_pages_map);
bm1 = kzalloc(sizeof(struct memory_bitmap), GFP_KERNEL);
if (!bm1)
char frozen;
char ready;
char platform_support;
+ bool free_bitmaps;
} snapshot_state;
atomic_t snapshot_device_available = ATOMIC_INIT(1);
data->swap = -1;
data->mode = O_WRONLY;
error = pm_notifier_call_chain(PM_RESTORE_PREPARE);
+ if (!error) {
+ error = create_basic_memory_bitmaps();
+ data->free_bitmaps = !error;
+ }
if (error)
pm_notifier_call_chain(PM_POST_RESTORE);
}
pm_restore_gfp_mask();
free_basic_memory_bitmaps();
thaw_processes();
+ } else if (data->free_bitmaps) {
+ free_basic_memory_bitmaps();
}
pm_notifier_call_chain(data->mode == O_RDONLY ?
PM_POST_HIBERNATION : PM_POST_RESTORE);
break;
pm_restore_gfp_mask();
free_basic_memory_bitmaps();
+ data->free_bitmaps = false;
thaw_processes();
data->frozen = 0;
break;
#endif
enum reboot_mode reboot_mode DEFAULT_REBOOT_MODE;
-int reboot_default;
+/*
+ * This variable is used privately to keep track of whether or not
+ * reboot_type is still set to its default value (i.e., reboot= hasn't
+ * been set on the command line). This is needed so that we can
+ * suppress DMI scanning for reboot quirks. Without it, it's
+ * impossible to override a faulty reboot quirk without recompiling.
+ */
+int reboot_default = 1;
int reboot_cpu;
enum reboot_type reboot_type = BOOT_ACPI;
int reboot_force;
SEQ_printf(m, " ");
SEQ_printf(m, "%15s %5d %9Ld.%06ld %9Ld %5d ",
- p->comm, p->pid,
+ p->comm, task_pid_nr(p),
SPLIT_NS(p->se.vruntime),
(long long)(p->nvcsw + p->nivcsw),
p->prio);
P(nr_load_updates);
P(nr_uninterruptible);
PN(next_balance);
- P(curr->pid);
+ SEQ_printf(m, " .%-30s: %ld\n", "curr->pid", (long)(task_pid_nr(rq->curr)));
PN(clock);
P(cpu_load[0]);
P(cpu_load[1]);
{
unsigned long nr_switches;
- SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, p->pid,
+ SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr(p),
get_nr_threads(p));
SEQ_printf(m,
"---------------------------------------------------------"
}
if (!se) {
- cfs_rq->h_load = rq->avg.load_avg_contrib;
+ cfs_rq->h_load = cfs_rq->runnable_load_avg;
cfs_rq->last_h_load_update = now;
}
(busiest->load_per_task * SCHED_POWER_SCALE) /
busiest->group_power;
- if (busiest->avg_load - local->avg_load + scaled_busy_load_per_task >=
- (scaled_busy_load_per_task * imbn)) {
+ if (busiest->avg_load + scaled_busy_load_per_task >=
+ local->avg_load + (scaled_busy_load_per_task * imbn)) {
env->imbalance = busiest->load_per_task;
return;
}
* max load less than avg load(as we skip the groups at or below
* its cpu_power, while calculating max_load..)
*/
- if (busiest->avg_load < sds->avg_load) {
+ if (busiest->avg_load <= sds->avg_load ||
+ local->avg_load >= sds->avg_load) {
env->imbalance = 0;
return fix_small_imbalance(env, sds);
}
cfs_rq = task_cfs_rq(current);
curr = cfs_rq->curr;
- if (unlikely(task_cpu(p) != this_cpu)) {
- rcu_read_lock();
- __set_task_cpu(p, this_cpu);
- rcu_read_unlock();
- }
+ /*
+ * Not only the cpu but also the task_group of the parent might have
+ * been changed after parent->se.parent,cfs_rq were copied to
+ * child->se.parent,cfs_rq. So call __set_task_cpu() to make the
+ * child's copies point to valid ones.
+ */
+ rcu_read_lock();
+ __set_task_cpu(p, this_cpu);
+ rcu_read_unlock();
update_curr(cfs_rq);
}
/*
- * Called when a process ceases being the active-running process, either
- * voluntarily or involuntarily. Now we can calculate how long we ran.
+ * Called when a process ceases being the active-running process involuntarily
+ * due, typically, to expiring its time slice (this may also be called when
+ * switching to the idle task). Now we can calculate how long we ran.
* Also, if the process is still in the TASK_RUNNING state, call
* sched_info_queued() to mark that it has now again started waiting on
* the runqueue.
static inline void invoke_softirq(void)
{
- if (!force_irqthreads)
- __do_softirq();
- else
+ if (!force_irqthreads) {
+ /*
+ * We can safely execute softirq on the current stack if
+ * it is the irq stack, because it should be near empty
+ * at this stage. But we have no way to know if the arch
+ * calls irq_exit() on the irq stack. So call softirq
+ * in its own stack to prevent any overrun on top
+ * of a potentially deep task stack.
+ */
+ do_softirq();
+ } else {
wakeup_softirqd();
+ }
}
static inline void tick_irq_exit(void)
int res;
};
-/**
- * clockevents_delta2ns - Convert a latch value (device ticks) to nanoseconds
- * @latch: value to convert
- * @evt: pointer to clock event device descriptor
- *
- * Math helper, returns latch value converted to nanoseconds (bound checked)
- */
-u64 clockevent_delta2ns(unsigned long latch, struct clock_event_device *evt)
+static u64 cev_delta2ns(unsigned long latch, struct clock_event_device *evt,
+ bool ismax)
{
u64 clc = (u64) latch << evt->shift;
+ u64 rnd;
if (unlikely(!evt->mult)) {
evt->mult = 1;
WARN_ON(1);
}
+ rnd = (u64) evt->mult - 1;
+
+ /*
+ * Upper bound sanity check. If the backwards conversion is
+ * not equal latch, we know that the above shift overflowed.
+ */
+ if ((clc >> evt->shift) != (u64)latch)
+ clc = ~0ULL;
+
+ /*
+ * Scaled math oddities:
+ *
+ * For mult <= (1 << shift) we can safely add mult - 1 to
+ * prevent integer rounding loss. So the backwards conversion
+ * from nsec to device ticks will be correct.
+ *
+ * For mult > (1 << shift), i.e. device frequency is > 1GHz we
+ * need to be careful. Adding mult - 1 will result in a value
+ * which when converted back to device ticks can be larger
+ * than latch by up to (mult - 1) >> shift. For the min_delta
+ * calculation we still want to apply this in order to stay
+ * above the minimum device ticks limit. For the upper limit
+ * we would end up with a latch value larger than the upper
+ * limit of the device, so we omit the add to stay below the
+ * device upper boundary.
+ *
+ * Also omit the add if it would overflow the u64 boundary.
+ */
+ if ((~0ULL - clc > rnd) &&
+ (!ismax || evt->mult <= (1U << evt->shift)))
+ clc += rnd;
do_div(clc, evt->mult);
- if (clc < 1000)
- clc = 1000;
- if (clc > KTIME_MAX)
- clc = KTIME_MAX;
- return clc;
+ /* Deltas less than 1usec are pointless noise */
+ return clc > 1000 ? clc : 1000;
+}
+
+/**
+ * clockevents_delta2ns - Convert a latch value (device ticks) to nanoseconds
+ * @latch: value to convert
+ * @evt: pointer to clock event device descriptor
+ *
+ * Math helper, returns latch value converted to nanoseconds (bound checked)
+ */
+u64 clockevent_delta2ns(unsigned long latch, struct clock_event_device *evt)
+{
+ return cev_delta2ns(latch, evt, false);
}
EXPORT_SYMBOL_GPL(clockevent_delta2ns);
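/*
 * Illustrative worked example, not part of the patch: take a device with
 * mult = 3, shift = 1 (roughly 1.5 device ticks per nanosecond) and
 * latch = 5000 ticks, so clc = 5000 << 1 = 10000 and rnd = mult - 1 = 2.
 *
 * min_delta (ismax == false): (10000 + 2) / 3 = 3334 ns.  Converting back,
 * (3334 * 3) >> 1 = 5001 ticks, still above the device minimum.  Without
 * the rounding add the result would be 3333 ns -> 4999 ticks, i.e. below
 * the minimum the device can be programmed with.
 *
 * max_delta (ismax == true): the add is skipped because mult > (1 << shift),
 * giving 10000 / 3 = 3333 ns -> 4999 ticks, safely below the device limit;
 * adding rnd would overshoot it (3334 ns -> 5001 ticks).
 */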
sec = 600;
clockevents_calc_mult_shift(dev, freq, sec);
- dev->min_delta_ns = clockevent_delta2ns(dev->min_delta_ticks, dev);
- dev->max_delta_ns = clockevent_delta2ns(dev->max_delta_ticks, dev);
+ dev->min_delta_ns = cev_delta2ns(dev->min_delta_ticks, dev, false);
+ dev->max_delta_ns = cev_delta2ns(dev->max_delta_ticks, dev, true);
}
/**
schedule_delayed_work(&sync_cmos_work, timespec_to_jiffies(&next));
}
-static void notify_cmos_timer(void)
+void ntp_notify_cmos_timer(void)
{
schedule_delayed_work(&sync_cmos_work, 0);
}
#else
-static inline void notify_cmos_timer(void) { }
+void ntp_notify_cmos_timer(void) { }
#endif
if (!(time_status & STA_NANO))
txc->time.tv_usec /= NSEC_PER_USEC;
- notify_cmos_timer();
-
return result;
}
write_seqcount_end(&timekeeper_seq);
raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
+ ntp_notify_cmos_timer();
+
return ret;
}
.unpark = watchdog_enable,
};
-static int watchdog_enable_all_cpus(void)
+static void restart_watchdog_hrtimer(void *info)
+{
+ struct hrtimer *hrtimer = &__raw_get_cpu_var(watchdog_hrtimer);
+ int ret;
+
+ /*
+ * No need to cancel and restart hrtimer if it is currently executing
+ * because it will reprogram itself with the new period now.
+ * We should never see it unqueued here because we are running per-cpu
+ * with interrupts disabled.
+ */
+ ret = hrtimer_try_to_cancel(hrtimer);
+ if (ret == 1)
+ hrtimer_start(hrtimer, ns_to_ktime(sample_period),
+ HRTIMER_MODE_REL_PINNED);
+}
+
+static void update_timers(int cpu)
+{
+ struct call_single_data data = {.func = restart_watchdog_hrtimer};
+ /*
+ * Make sure that the perf event counter will adapt to a new
+ * sampling period. Updating the sampling period directly would
+ * be much nicer but we do not have an API for that now so
+ * let's use a big hammer.
+ * Hrtimer will adopt the new period on the next tick but this
+ * might be late already so we have to restart the timer as well.
+ */
+ watchdog_nmi_disable(cpu);
+ __smp_call_function_single(cpu, &data, 1);
+ watchdog_nmi_enable(cpu);
+}
+
+static void update_timers_all_cpus(void)
+{
+ int cpu;
+
+ get_online_cpus();
+ preempt_disable();
+ for_each_online_cpu(cpu)
+ update_timers(cpu);
+ preempt_enable();
+ put_online_cpus();
+}
+
+static int watchdog_enable_all_cpus(bool sample_period_changed)
{
int err = 0;
pr_err("Failed to create watchdog threads, disabled\n");
else
watchdog_running = 1;
+ } else if (sample_period_changed) {
+ update_timers_all_cpus();
}
return err;
void __user *buffer, size_t *lenp, loff_t *ppos)
{
int err, old_thresh, old_enabled;
+ static DEFINE_MUTEX(watchdog_proc_mutex);
+ mutex_lock(&watchdog_proc_mutex);
old_thresh = ACCESS_ONCE(watchdog_thresh);
old_enabled = ACCESS_ONCE(watchdog_user_enabled);
err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
if (err || !write)
- return err;
+ goto out;
set_sample_period();
/*
* watchdog_*_all_cpus() function takes care of this.
*/
if (watchdog_user_enabled && watchdog_thresh)
- err = watchdog_enable_all_cpus();
+ err = watchdog_enable_all_cpus(old_thresh != watchdog_thresh);
else
watchdog_disable_all_cpus();
watchdog_thresh = old_thresh;
watchdog_user_enabled = old_enabled;
}
-
+out:
+ mutex_unlock(&watchdog_proc_mutex);
return err;
}
#endif /* CONFIG_SYSCTL */
set_sample_period();
if (watchdog_user_enabled)
- watchdog_enable_all_cpus();
+ watchdog_enable_all_cpus(false);
}
config DEBUG_KOBJECT_RELEASE
bool "kobject release debugging"
- depends on DEBUG_KERNEL
+ depends on DEBUG_OBJECTS_TIMERS
help
kobjects are reference counted objects. This means that their
last reference count put is not predictable, and the kobject can
const char hex_asc[] = "0123456789abcdef";
EXPORT_SYMBOL(hex_asc);
+const char hex_asc_upper[] = "0123456789ABCDEF";
+EXPORT_SYMBOL(hex_asc_upper);
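/*
 * Illustrative sketch, not part of the patch: packing one byte as
 * upper-case hex with the newly exported table, mirroring what
 * hex_byte_pack() does for the lower-case hex_asc[] table.
 */
static inline char *example_byte_pack_upper(char *buf, u8 byte)
{
	*buf++ = hex_asc_upper[(byte >> 4) & 0x0f];
	*buf++ = hex_asc_upper[byte & 0x0f];
	return buf;
}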
/**
* hex_to_bin - convert a hex digit to its real value
{
struct kobject *kobj = container_of(kref, struct kobject, kref);
#ifdef CONFIG_DEBUG_KOBJECT_RELEASE
- pr_debug("kobject: '%s' (%p): %s, parent %p (delayed)\n",
+ pr_info("kobject: '%s' (%p): %s, parent %p (delayed)\n",
kobject_name(kobj), kobj, __func__, kobj->parent);
INIT_DELAYED_WORK(&kobj->release, kobject_delayed_cleanup);
schedule_delayed_work(&kobj->release, HZ);
bool kobj_ns_current_may_mount(enum kobj_ns_type type)
{
- bool may_mount = false;
-
- if (type == KOBJ_NS_TYPE_NONE)
- return true;
+ bool may_mount = true;
spin_lock(&kobj_ns_type_lock);
if ((type > KOBJ_NS_TYPE_NONE) && (type < KOBJ_NS_TYPES) &&
#ifdef CONFIG_CMPXCHG_LOCKREF
/*
+ * Allow weakly-ordered memory architectures to provide barrier-less
+ * cmpxchg semantics for lockref updates.
+ */
+#ifndef cmpxchg64_relaxed
+# define cmpxchg64_relaxed cmpxchg64
+#endif
+
+/*
+ * Allow architectures to override the default cpu_relax() within CMPXCHG_LOOP.
+ * This is useful for architectures with an expensive cpu_relax().
+ */
+#ifndef arch_mutex_cpu_relax
+# define arch_mutex_cpu_relax() cpu_relax()
+#endif
+
+/*
* Note that the "cmpxchg()" reloads the "old" value for the
* failure case.
*/
while (likely(arch_spin_value_unlocked(old.lock.rlock.raw_lock))) { \
struct lockref new = old, prev = old; \
CODE \
- old.lock_count = cmpxchg(&lockref->lock_count, \
- old.lock_count, new.lock_count); \
+ old.lock_count = cmpxchg64_relaxed(&lockref->lock_count, \
+ old.lock_count, \
+ new.lock_count); \
if (likely(old.lock_count == prev.lock_count)) { \
SUCCESS; \
} \
- cpu_relax(); \
+ arch_mutex_cpu_relax(); \
} \
} while (0)
ref->release = release;
return 0;
}
+EXPORT_SYMBOL_GPL(percpu_ref_init);
/**
* percpu_ref_cancel_init - cancel percpu_ref_init()
free_percpu(ref->pcpu_count);
}
}
+EXPORT_SYMBOL_GPL(percpu_ref_cancel_init);
static void percpu_ref_kill_rcu(struct rcu_head *rcu)
{
call_rcu_sched(&ref->rcu, percpu_ref_kill_rcu);
}
+EXPORT_SYMBOL_GPL(percpu_ref_kill_and_confirm);
miter->__offset += miter->consumed;
miter->__remaining -= miter->consumed;
- if (miter->__flags & SG_MITER_TO_SG)
+ if ((miter->__flags & SG_MITER_TO_SG) &&
+ !PageSlab(miter->page))
flush_kernel_dcache_page(miter->page);
if (miter->__flags & SG_MITER_ATOMIC) {
config MEMORY_HOTREMOVE
bool "Allow for memory hot remove"
select MEMORY_ISOLATION
- select HAVE_BOOTMEM_INFO_NODE if X86_64
+ select HAVE_BOOTMEM_INFO_NODE if (X86_64 || PPC64)
depends on MEMORY_HOTPLUG && ARCH_ENABLE_MEMORY_HOTREMOVE
depends on MIGRATION
struct bio_vec *to, *from;
unsigned i;
+ if (force)
+ goto bounce;
bio_for_each_segment(from, *bio_orig, i)
if (page_to_pfn(from->bv_page) > queue_bounce_pfn(q))
goto bounce;
pfn -= pageblock_nr_pages) {
unsigned long isolated;
+ /*
+ * This can iterate a massively long zone without finding any
+ * suitable migration targets, so periodically check if we need
+ * to schedule.
+ */
+ cond_resched();
+
if (!pfn_valid(pfn))
continue;
struct inode *inode = mapping->host;
pgoff_t offset = vmf->pgoff;
struct page *page;
- bool memcg_oom;
pgoff_t size;
int ret = 0;
return VM_FAULT_SIGBUS;
/*
- * Do we have something in the page cache already? Either
- * way, try readahead, but disable the memcg OOM killer for it
- * as readahead is optional and no errors are propagated up
- * the fault stack. The OOM killer is enabled while trying to
- * instantiate the faulting page individually below.
+ * Do we have something in the page cache already?
*/
page = find_get_page(mapping, offset);
if (likely(page) && !(vmf->flags & FAULT_FLAG_TRIED)) {
* We found the page, so try async readahead before
* waiting for the lock.
*/
- memcg_oom = mem_cgroup_toggle_oom(false);
do_async_mmap_readahead(vma, ra, file, page, offset);
- mem_cgroup_toggle_oom(memcg_oom);
} else if (!page) {
/* No page in the page cache at all */
- memcg_oom = mem_cgroup_toggle_oom(false);
do_sync_mmap_readahead(vma, ra, file, offset);
- mem_cgroup_toggle_oom(memcg_oom);
count_vm_event(PGMAJFAULT);
mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
ret = VM_FAULT_MAJOR;
int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
unsigned long addr, pmd_t pmd, pmd_t *pmdp)
{
+ struct anon_vma *anon_vma = NULL;
struct page *page;
unsigned long haddr = addr & HPAGE_PMD_MASK;
+ int page_nid = -1, this_nid = numa_node_id();
int target_nid;
- int current_nid = -1;
- bool migrated;
+ bool page_locked;
+ bool migrated = false;
spin_lock(&mm->page_table_lock);
if (unlikely(!pmd_same(pmd, *pmdp)))
goto out_unlock;
page = pmd_page(pmd);
- get_page(page);
- current_nid = page_to_nid(page);
+ page_nid = page_to_nid(page);
count_vm_numa_event(NUMA_HINT_FAULTS);
- if (current_nid == numa_node_id())
+ if (page_nid == this_nid)
count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
+ /*
+ * Acquire the page lock to serialise THP migrations but avoid dropping
+ * page_table_lock if at all possible
+ */
+ page_locked = trylock_page(page);
target_nid = mpol_misplaced(page, vma, haddr);
if (target_nid == -1) {
- put_page(page);
- goto clear_pmdnuma;
+ /* If the page was locked, there are no parallel migrations */
+ if (page_locked)
+ goto clear_pmdnuma;
+
+ /*
+ * Otherwise wait for potential migrations and retry. We do
+ * relock and check_same as the page may no longer be mapped.
+ * As the fault is being retried, do not account for it.
+ */
+ spin_unlock(&mm->page_table_lock);
+ wait_on_page_locked(page);
+ page_nid = -1;
+ goto out;
}
- /* Acquire the page lock to serialise THP migrations */
+ /* Page is misplaced, serialise migrations and parallel THP splits */
+ get_page(page);
spin_unlock(&mm->page_table_lock);
- lock_page(page);
+ if (!page_locked)
+ lock_page(page);
+ anon_vma = page_lock_anon_vma_read(page);
/* Confirm the PTE did not change while locked */
spin_lock(&mm->page_table_lock);
if (unlikely(!pmd_same(pmd, *pmdp))) {
unlock_page(page);
put_page(page);
+ page_nid = -1;
goto out_unlock;
}
- spin_unlock(&mm->page_table_lock);
- /* Migrate the THP to the requested node */
+ /*
+ * Migrate the THP to the requested node, returns with page unlocked
+ * and pmd_numa cleared.
+ */
+ spin_unlock(&mm->page_table_lock);
migrated = migrate_misplaced_transhuge_page(mm, vma,
pmdp, pmd, addr, page, target_nid);
- if (!migrated)
- goto check_same;
+ if (migrated)
+ page_nid = target_nid;
- task_numa_fault(target_nid, HPAGE_PMD_NR, true);
- return 0;
-
-check_same:
- spin_lock(&mm->page_table_lock);
- if (unlikely(!pmd_same(pmd, *pmdp)))
- goto out_unlock;
+ goto out;
clear_pmdnuma:
+ BUG_ON(!PageLocked(page));
pmd = pmd_mknonnuma(pmd);
set_pmd_at(mm, haddr, pmdp, pmd);
VM_BUG_ON(pmd_numa(*pmdp));
update_mmu_cache_pmd(vma, addr, pmdp);
+ unlock_page(page);
out_unlock:
spin_unlock(&mm->page_table_lock);
- if (current_nid != -1)
- task_numa_fault(current_nid, HPAGE_PMD_NR, false);
+
+out:
+ if (anon_vma)
+ page_unlock_anon_vma_read(anon_vma);
+
+ if (page_nid != -1)
+ task_numa_fault(page_nid, HPAGE_PMD_NR, migrated);
+
return 0;
}
mmun_start = haddr;
mmun_end = haddr + HPAGE_PMD_SIZE;
+again:
mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
spin_lock(&mm->page_table_lock);
if (unlikely(!pmd_trans_huge(*pmd))) {
split_huge_page(page);
put_page(page);
- BUG_ON(pmd_trans_huge(*pmd));
+
+ /*
+ * We don't always have down_write of mmap_sem here: a racing
+ * do_huge_pmd_wp_page() might have copied-on-write to another
+ * huge page before our split_huge_page() got the anon_vma lock.
+ */
+ if (unlikely(pmd_trans_huge(*pmd)))
+ goto again;
}
void split_huge_page_pmd_mm(struct mm_struct *mm, unsigned long address,
BUG_ON(page_count(page));
BUG_ON(page_mapcount(page));
restore_reserve = PagePrivate(page);
+ ClearPagePrivate(page);
spin_lock(&hugetlb_lock);
hugetlb_cgroup_uncharge_page(hstate_index(h),
/* we rely on prep_new_huge_page to set the destructor */
set_compound_order(page, order);
__SetPageHead(page);
+ __ClearPageReserved(page);
for (i = 1; i < nr_pages; i++, p = mem_map_next(p, page, i)) {
__SetPageTail(p);
+ /*
+ * For gigantic hugepages allocated through bootmem at
+ * boot, it's safer to be consistent with the not-gigantic
+ * hugepages and clear the PG_reserved bit from all tail pages
+ * too. Otherwise drivers using get_user_pages() to access tail
+ * pages may get the reference counting wrong if they see
+ * PG_reserved set on a tail page (despite the head page not
+ * having PG_reserved set). Enforcing this consistency between
+ * head and tail pages allows drivers to optimize away a check
+ * on the head page when they need to know if put_page() is needed
+ * after get_user_pages().
+ */
+ __ClearPageReserved(p);
set_page_count(p, 0);
p->first_page = page;
}
#else
page = virt_to_page(m);
#endif
- __ClearPageReserved(page);
WARN_ON(page_count(page) != 1);
prep_compound_huge_page(page, h->order);
+ WARN_ON(PageReserved(page));
prep_new_huge_page(h, page, page_to_nid(page));
/*
* If we had gigantic hugepages allocated at boot time, we need
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
- if (!hwpoison_filter_enable)
- goto inject;
if (!pfn_valid(pfn))
return -ENXIO;
if (!get_page_unless_zero(hpage))
return 0;
+ if (!hwpoison_filter_enable)
+ goto inject;
+
if (!PageLRU(p) && !PageHuge(p))
shake_page(p, 0);
/*
* decrement nr_to_walk first so that we don't livelock if we
* get stuck on large numbers of LRU_RETRY items
*/
- if (--(*nr_to_walk) == 0)
+ if (!*nr_to_walk)
break;
+ --*nr_to_walk;
ret = isolate(item, &nlru->lock, cb_arg);
switch (ret) {
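
A tiny sketch of the off-by-one behaviour the hunk above changes, in plain userspace C (walk_old/walk_new are illustrative names, not kernel functions): "if (--n == 0) break;" placed before the work stops one item short of the budget, whereas checking first and decrementing afterwards processes exactly n items.

#include <stdio.h>

static int walk_old(int n)
{
	int visited = 0;

	for (int i = 0; i < 10; i++) {
		if (--n == 0)
			break;
		visited++;		/* "isolate" the item */
	}
	return visited;
}

static int walk_new(int n)
{
	int visited = 0;

	for (int i = 0; i < 10; i++) {
		if (!n)
			break;
		--n;
		visited++;		/* "isolate" the item */
	}
	return visited;
}

int main(void)
{
	/* budget 3: old form visits 2 items, new form visits 3 */
	printf("old: %d items, new: %d items\n", walk_old(3), walk_new(3));
	return 0;
}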
*/
static int madvise_hwpoison(int bhv, unsigned long start, unsigned long end)
{
+ struct page *p;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
- for (; start < end; start += PAGE_SIZE) {
- struct page *p;
+ for (; start < end; start += PAGE_SIZE <<
+ compound_order(compound_head(p))) {
int ret;
ret = get_user_pages_fast(start, 1, 0, &p);
#include <linux/limits.h>
#include <linux/export.h>
#include <linux/mutex.h>
+#include <linux/rbtree.h>
#include <linux/slab.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/page_cgroup.h>
#include <linux/cpu.h>
#include <linux/oom.h>
+#include <linux/lockdep.h>
#include "internal.h"
#include <net/sock.h>
#include <net/ip.h>
struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
+ struct rb_node tree_node; /* RB tree node */
+ unsigned long long usage_in_excess;/* Set to the value by which */
+ /* the soft limit is exceeded*/
+ bool on_tree;
struct mem_cgroup *memcg; /* Back pointer, we cannot */
/* use container_of */
};
struct mem_cgroup_per_zone zoneinfo[MAX_NR_ZONES];
};
+/*
+ * Cgroups above their limits are maintained in a RB-Tree, independent of
+ * their hierarchy representation
+ */
+
+struct mem_cgroup_tree_per_zone {
+ struct rb_root rb_root;
+ spinlock_t lock;
+};
+
+struct mem_cgroup_tree_per_node {
+ struct mem_cgroup_tree_per_zone rb_tree_per_zone[MAX_NR_ZONES];
+};
+
+struct mem_cgroup_tree {
+ struct mem_cgroup_tree_per_node *rb_tree_per_node[MAX_NUMNODES];
+};
+
+static struct mem_cgroup_tree soft_limit_tree __read_mostly;
+
struct mem_cgroup_threshold {
struct eventfd_ctx *eventfd;
u64 threshold;
atomic_t numainfo_events;
atomic_t numainfo_updating;
#endif
- /*
- * Protects soft_contributed transitions.
- * See mem_cgroup_update_soft_limit
- */
- spinlock_t soft_lock;
-
- /*
- * If true then this group has increased parents' children_in_excess
- * when it got over the soft limit.
- * When a group falls bellow the soft limit, parents' children_in_excess
- * is decreased and soft_contributed changed to false.
- */
- bool soft_contributed;
-
- /* Number of children that are in soft limit excess */
- atomic_t children_in_excess;
struct mem_cgroup_per_node *nodeinfo[0];
/* WARNING: nodeinfo must be the last member here */
* limit reclaim to prevent infinite loops, if they ever occur.
*/
#define MEM_CGROUP_MAX_RECLAIM_LOOPS 100
+#define MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS 2
enum charge_type {
MEM_CGROUP_CHARGE_TYPE_CACHE = 0,
return mem_cgroup_zoneinfo(memcg, nid, zid);
}
+static struct mem_cgroup_tree_per_zone *
+soft_limit_tree_node_zone(int nid, int zid)
+{
+ return &soft_limit_tree.rb_tree_per_node[nid]->rb_tree_per_zone[zid];
+}
+
+static struct mem_cgroup_tree_per_zone *
+soft_limit_tree_from_page(struct page *page)
+{
+ int nid = page_to_nid(page);
+ int zid = page_zonenum(page);
+
+ return &soft_limit_tree.rb_tree_per_node[nid]->rb_tree_per_zone[zid];
+}
+
+static void
+__mem_cgroup_insert_exceeded(struct mem_cgroup *memcg,
+ struct mem_cgroup_per_zone *mz,
+ struct mem_cgroup_tree_per_zone *mctz,
+ unsigned long long new_usage_in_excess)
+{
+ struct rb_node **p = &mctz->rb_root.rb_node;
+ struct rb_node *parent = NULL;
+ struct mem_cgroup_per_zone *mz_node;
+
+ if (mz->on_tree)
+ return;
+
+ mz->usage_in_excess = new_usage_in_excess;
+ if (!mz->usage_in_excess)
+ return;
+ while (*p) {
+ parent = *p;
+ mz_node = rb_entry(parent, struct mem_cgroup_per_zone,
+ tree_node);
+ if (mz->usage_in_excess < mz_node->usage_in_excess)
+ p = &(*p)->rb_left;
+ /*
+ * We can't avoid mem cgroups that are over their soft
+ * limit by the same amount
+ */
+ else if (mz->usage_in_excess >= mz_node->usage_in_excess)
+ p = &(*p)->rb_right;
+ }
+ rb_link_node(&mz->tree_node, parent, p);
+ rb_insert_color(&mz->tree_node, &mctz->rb_root);
+ mz->on_tree = true;
+}
+
+static void
+__mem_cgroup_remove_exceeded(struct mem_cgroup *memcg,
+ struct mem_cgroup_per_zone *mz,
+ struct mem_cgroup_tree_per_zone *mctz)
+{
+ if (!mz->on_tree)
+ return;
+ rb_erase(&mz->tree_node, &mctz->rb_root);
+ mz->on_tree = false;
+}
+
+static void
+mem_cgroup_remove_exceeded(struct mem_cgroup *memcg,
+ struct mem_cgroup_per_zone *mz,
+ struct mem_cgroup_tree_per_zone *mctz)
+{
+ spin_lock(&mctz->lock);
+ __mem_cgroup_remove_exceeded(memcg, mz, mctz);
+ spin_unlock(&mctz->lock);
+}
+
+
+static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page)
+{
+ unsigned long long excess;
+ struct mem_cgroup_per_zone *mz;
+ struct mem_cgroup_tree_per_zone *mctz;
+ int nid = page_to_nid(page);
+ int zid = page_zonenum(page);
+ mctz = soft_limit_tree_from_page(page);
+
+ /*
+ * Necessary to update all ancestors when hierarchy is used,
+ * because their event counter is not touched.
+ */
+ for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+ mz = mem_cgroup_zoneinfo(memcg, nid, zid);
+ excess = res_counter_soft_limit_excess(&memcg->res);
+ /*
+ * We have to update the tree if mz is on RB-tree or
+ * mem is over its softlimit.
+ */
+ if (excess || mz->on_tree) {
+ spin_lock(&mctz->lock);
+ /* if on-tree, remove it */
+ if (mz->on_tree)
+ __mem_cgroup_remove_exceeded(memcg, mz, mctz);
+ /*
+ * Insert again. mz->usage_in_excess will be updated.
+ * If excess is 0, no tree ops.
+ */
+ __mem_cgroup_insert_exceeded(memcg, mz, mctz, excess);
+ spin_unlock(&mctz->lock);
+ }
+ }
+}
+
+static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg)
+{
+ int node, zone;
+ struct mem_cgroup_per_zone *mz;
+ struct mem_cgroup_tree_per_zone *mctz;
+
+ for_each_node(node) {
+ for (zone = 0; zone < MAX_NR_ZONES; zone++) {
+ mz = mem_cgroup_zoneinfo(memcg, node, zone);
+ mctz = soft_limit_tree_node_zone(node, zone);
+ mem_cgroup_remove_exceeded(memcg, mz, mctz);
+ }
+ }
+}
+
+static struct mem_cgroup_per_zone *
+__mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz)
+{
+ struct rb_node *rightmost = NULL;
+ struct mem_cgroup_per_zone *mz;
+
+retry:
+ mz = NULL;
+ rightmost = rb_last(&mctz->rb_root);
+ if (!rightmost)
+ goto done; /* Nothing to reclaim from */
+
+ mz = rb_entry(rightmost, struct mem_cgroup_per_zone, tree_node);
+ /*
+ * Remove the node now but someone else can add it back,
+ * we will add it back at the end of reclaim to its correct
+ * position in the tree.
+ */
+ __mem_cgroup_remove_exceeded(mz->memcg, mz, mctz);
+ if (!res_counter_soft_limit_excess(&mz->memcg->res) ||
+ !css_tryget(&mz->memcg->css))
+ goto retry;
+done:
+ return mz;
+}
+
+static struct mem_cgroup_per_zone *
+mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz)
+{
+ struct mem_cgroup_per_zone *mz;
+
+ spin_lock(&mctz->lock);
+ mz = __mem_cgroup_largest_soft_limit_node(mctz);
+ spin_unlock(&mctz->lock);
+ return mz;
+}
+
/*
* Implementation Note: reading percpu statistics for memcg.
*
unsigned long val = 0;
int cpu;
+ get_online_cpus();
for_each_online_cpu(cpu)
val += per_cpu(memcg->stat->events[idx], cpu);
#ifdef CONFIG_HOTPLUG_CPU
val += memcg->nocpu_base.events[idx];
spin_unlock(&memcg->pcp_counter_lock);
#endif
+ put_online_cpus();
return val;
}
}
/*
- * Called from rate-limited memcg_check_events when enough
- * MEM_CGROUP_TARGET_SOFTLIMIT events are accumulated and it makes sure
- * that all the parents up the hierarchy will be notified that this group
- * is in excess or that it is not in excess anymore. mmecg->soft_contributed
- * makes the transition a single action whenever the state flips from one to
- * the other.
- */
-static void mem_cgroup_update_soft_limit(struct mem_cgroup *memcg)
-{
- unsigned long long excess = res_counter_soft_limit_excess(&memcg->res);
- struct mem_cgroup *parent = memcg;
- int delta = 0;
-
- spin_lock(&memcg->soft_lock);
- if (excess) {
- if (!memcg->soft_contributed) {
- delta = 1;
- memcg->soft_contributed = true;
- }
- } else {
- if (memcg->soft_contributed) {
- delta = -1;
- memcg->soft_contributed = false;
- }
- }
-
- /*
- * Necessary to update all ancestors when hierarchy is used
- * because their event counter is not touched.
- * We track children even outside the hierarchy for the root
- * cgroup because tree walk starting at root should visit
- * all cgroups and we want to prevent from pointless tree
- * walk if no children is below the limit.
- */
- while (delta && (parent = parent_mem_cgroup(parent)))
- atomic_add(delta, &parent->children_in_excess);
- if (memcg != root_mem_cgroup && !root_mem_cgroup->use_hierarchy)
- atomic_add(delta, &root_mem_cgroup->children_in_excess);
- spin_unlock(&memcg->soft_lock);
-}
-
-/*
* Check events in order.
*
*/
mem_cgroup_threshold(memcg);
if (unlikely(do_softlimit))
- mem_cgroup_update_soft_limit(memcg);
+ mem_cgroup_update_tree(memcg, page);
#if MAX_NUMNODES > 1
if (unlikely(do_numainfo))
atomic_inc(&memcg->numainfo_events);
return memcg;
}
-static enum mem_cgroup_filter_t
-mem_cgroup_filter(struct mem_cgroup *memcg, struct mem_cgroup *root,
- mem_cgroup_iter_filter cond)
-{
- if (!cond)
- return VISIT;
- return cond(memcg, root);
-}
-
/*
* Returns a next (in a pre-order walk) alive memcg (with elevated css
* ref. count) or NULL if the whole root's subtree has been visited.
* helper function to be used by mem_cgroup_iter
*/
static struct mem_cgroup *__mem_cgroup_iter_next(struct mem_cgroup *root,
- struct mem_cgroup *last_visited, mem_cgroup_iter_filter cond)
+ struct mem_cgroup *last_visited)
{
struct cgroup_subsys_state *prev_css, *next_css;
if (next_css) {
struct mem_cgroup *mem = mem_cgroup_from_css(next_css);
- switch (mem_cgroup_filter(mem, root, cond)) {
- case SKIP:
+ if (css_tryget(&mem->css))
+ return mem;
+ else {
prev_css = next_css;
goto skip_node;
- case SKIP_TREE:
- if (mem == root)
- return NULL;
- /*
- * css_rightmost_descendant is not an optimal way to
- * skip through a subtree (especially for imbalanced
- * trees leaning to right) but that's what we have right
- * now. More effective solution would be traversing
- * right-up for first non-NULL without calling
- * css_next_descendant_pre afterwards.
- */
- prev_css = css_rightmost_descendant(next_css);
- goto skip_node;
- case VISIT:
- if (css_tryget(&mem->css))
- return mem;
- else {
- prev_css = next_css;
- goto skip_node;
- }
- break;
}
}
* @root: hierarchy root
* @prev: previously returned memcg, NULL on first invocation
* @reclaim: cookie for shared reclaim walks, NULL for full walks
- * @cond: filter for visited nodes, NULL for no filter
*
* Returns references to children of the hierarchy below @root, or
* @root itself, or %NULL after a full round-trip.
* divide up the memcgs in the hierarchy among all concurrent
* reclaimers operating on the same zone and priority.
*/
-struct mem_cgroup *mem_cgroup_iter_cond(struct mem_cgroup *root,
+struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
struct mem_cgroup *prev,
- struct mem_cgroup_reclaim_cookie *reclaim,
- mem_cgroup_iter_filter cond)
+ struct mem_cgroup_reclaim_cookie *reclaim)
{
struct mem_cgroup *memcg = NULL;
struct mem_cgroup *last_visited = NULL;
- if (mem_cgroup_disabled()) {
- /* first call must return non-NULL, second return NULL */
- return (struct mem_cgroup *)(unsigned long)!prev;
- }
+ if (mem_cgroup_disabled())
+ return NULL;
if (!root)
root = root_mem_cgroup;
if (!root->use_hierarchy && root != root_mem_cgroup) {
if (prev)
goto out_css_put;
- if (mem_cgroup_filter(root, root, cond) == VISIT)
- return root;
- return NULL;
+ return root;
}
rcu_read_lock();
last_visited = mem_cgroup_iter_load(iter, root, &seq);
}
- memcg = __mem_cgroup_iter_next(root, last_visited, cond);
+ memcg = __mem_cgroup_iter_next(root, last_visited);
if (reclaim) {
mem_cgroup_iter_update(iter, last_visited, memcg, seq);
reclaim->generation = iter->generation;
}
- /*
- * We have finished the whole tree walk or no group has been
- * visited because filter told us to skip the root node.
- */
- if (!memcg && (prev || (cond && !last_visited)))
+ if (prev && !memcg)
goto out_unlock;
}
out_unlock:
return total;
}
-#if MAX_NUMNODES > 1
/**
* test_mem_cgroup_node_reclaimable
* @memcg: the target memcg
return false;
}
+#if MAX_NUMNODES > 1
/*
* Always updating the nodemask is not very good - even if we have an empty
return node;
}
+/*
+ * Check all nodes whether it contains reclaimable pages or not.
+ * For quick scan, we make use of scan_nodes. This will allow us to skip
+ * unused nodes. But scan_nodes is lazily updated and may not contain
+ * enough new information. We need to double check.
+ */
+static bool mem_cgroup_reclaimable(struct mem_cgroup *memcg, bool noswap)
+{
+ int nid;
+
+ /*
+ * quick check...making use of scan_node.
+ * We can skip unused nodes.
+ */
+ if (!nodes_empty(memcg->scan_nodes)) {
+ for (nid = first_node(memcg->scan_nodes);
+ nid < MAX_NUMNODES;
+ nid = next_node(nid, memcg->scan_nodes)) {
+
+ if (test_mem_cgroup_node_reclaimable(memcg, nid, noswap))
+ return true;
+ }
+ }
+ /*
+ * Check rest of nodes.
+ */
+ for_each_node_state(nid, N_MEMORY) {
+ if (node_isset(nid, memcg->scan_nodes))
+ continue;
+ if (test_mem_cgroup_node_reclaimable(memcg, nid, noswap))
+ return true;
+ }
+ return false;
+}
+
#else
int mem_cgroup_select_victim_node(struct mem_cgroup *memcg)
{
return 0;
}
-#endif
-
-/*
- * A group is eligible for the soft limit reclaim under the given root
- * hierarchy if
- * a) it is over its soft limit
- * b) any parent up the hierarchy is over its soft limit
- *
- * If the given group doesn't have any children over the limit then it
- * doesn't make any sense to iterate its subtree.
- */
-enum mem_cgroup_filter_t
-mem_cgroup_soft_reclaim_eligible(struct mem_cgroup *memcg,
- struct mem_cgroup *root)
+static bool mem_cgroup_reclaimable(struct mem_cgroup *memcg, bool noswap)
{
- struct mem_cgroup *parent;
-
- if (!memcg)
- memcg = root_mem_cgroup;
- parent = memcg;
-
- if (res_counter_soft_limit_excess(&memcg->res))
- return VISIT;
+ return test_mem_cgroup_node_reclaimable(memcg, 0, noswap);
+}
+#endif
- /*
- * If any parent up to the root in the hierarchy is over its soft limit
- * then we have to obey and reclaim from this group as well.
- */
- while ((parent = parent_mem_cgroup(parent))) {
- if (res_counter_soft_limit_excess(&parent->res))
- return VISIT;
- if (parent == root)
+static int mem_cgroup_soft_reclaim(struct mem_cgroup *root_memcg,
+ struct zone *zone,
+ gfp_t gfp_mask,
+ unsigned long *total_scanned)
+{
+ struct mem_cgroup *victim = NULL;
+ int total = 0;
+ int loop = 0;
+ unsigned long excess;
+ unsigned long nr_scanned;
+ struct mem_cgroup_reclaim_cookie reclaim = {
+ .zone = zone,
+ .priority = 0,
+ };
+
+ excess = res_counter_soft_limit_excess(&root_memcg->res) >> PAGE_SHIFT;
+
+ while (1) {
+ victim = mem_cgroup_iter(root_memcg, victim, &reclaim);
+ if (!victim) {
+ loop++;
+ if (loop >= 2) {
+ /*
+ * If we have not been able to reclaim
+ * anything, it might be because there are
+ * no reclaimable pages under this hierarchy
+ */
+ if (!total)
+ break;
+ /*
+ * We want to do more targeted reclaim.
+ * excess >> 2 is not too excessive, so we do not
+ * reclaim too much, nor too little such that we keep
+ * coming back to reclaim from this cgroup
+ */
+ if (total >= (excess >> 2) ||
+ (loop > MEM_CGROUP_MAX_RECLAIM_LOOPS))
+ break;
+ }
+ continue;
+ }
+ if (!mem_cgroup_reclaimable(victim, false))
+ continue;
+ total += mem_cgroup_shrink_node_zone(victim, gfp_mask, false,
+ zone, &nr_scanned);
+ *total_scanned += nr_scanned;
+ if (!res_counter_soft_limit_excess(&root_memcg->res))
break;
}
-
- if (!atomic_read(&memcg->children_in_excess))
- return SKIP_TREE;
- return SKIP;
+ mem_cgroup_iter_break(root_memcg, victim);
+ return total;
}
+#ifdef CONFIG_LOCKDEP
+static struct lockdep_map memcg_oom_lock_dep_map = {
+ .name = "memcg_oom_lock",
+};
+#endif
+
static DEFINE_SPINLOCK(memcg_oom_lock);
/*
}
iter->oom_lock = false;
}
- }
+ } else
+ mutex_acquire(&memcg_oom_lock_dep_map, 0, 1, _RET_IP_);
spin_unlock(&memcg_oom_lock);
struct mem_cgroup *iter;
spin_lock(&memcg_oom_lock);
+ mutex_release(&memcg_oom_lock_dep_map, 1, _RET_IP_);
for_each_mem_cgroup_tree(iter, memcg)
iter->oom_lock = false;
spin_unlock(&memcg_oom_lock);
memcg_wakeup_oom(memcg);
}
-/*
- * try to call OOM killer
- */
static void mem_cgroup_oom(struct mem_cgroup *memcg, gfp_t mask, int order)
{
- bool locked;
- int wakeups;
-
if (!current->memcg_oom.may_oom)
return;
-
- current->memcg_oom.in_memcg_oom = 1;
-
/*
- * As with any blocking lock, a contender needs to start
- * listening for wakeups before attempting the trylock,
- * otherwise it can miss the wakeup from the unlock and sleep
- * indefinitely. This is just open-coded because our locking
- * is so particular to memcg hierarchies.
+ * We are in the middle of the charge context here, so we
+ * don't want to block when potentially sitting on a callstack
+ * that holds all kinds of filesystem and mm locks.
+ *
+ * Also, the caller may handle a failed allocation gracefully
+ * (like optional page cache readahead) and so an OOM killer
+ * invocation might not even be necessary.
+ *
+ * That's why we don't do anything here except remember the
+ * OOM context and then deal with it at the end of the page
+ * fault when the stack is unwound, the locks are released,
+ * and when we know whether the fault was overall successful.
*/
- wakeups = atomic_read(&memcg->oom_wakeups);
- mem_cgroup_mark_under_oom(memcg);
-
- locked = mem_cgroup_oom_trylock(memcg);
-
- if (locked)
- mem_cgroup_oom_notify(memcg);
-
- if (locked && !memcg->oom_kill_disable) {
- mem_cgroup_unmark_under_oom(memcg);
- mem_cgroup_out_of_memory(memcg, mask, order);
- mem_cgroup_oom_unlock(memcg);
- /*
- * There is no guarantee that an OOM-lock contender
- * sees the wakeups triggered by the OOM kill
- * uncharges. Wake any sleepers explicitely.
- */
- memcg_oom_recover(memcg);
- } else {
- /*
- * A system call can just return -ENOMEM, but if this
- * is a page fault and somebody else is handling the
- * OOM already, we need to sleep on the OOM waitqueue
- * for this memcg until the situation is resolved.
- * Which can take some time because it might be
- * handled by a userspace task.
- *
- * However, this is the charge context, which means
- * that we may sit on a large call stack and hold
- * various filesystem locks, the mmap_sem etc. and we
- * don't want the OOM handler to deadlock on them
- * while we sit here and wait. Store the current OOM
- * context in the task_struct, then return -ENOMEM.
- * At the end of the page fault handler, with the
- * stack unwound, pagefault_out_of_memory() will check
- * back with us by calling
- * mem_cgroup_oom_synchronize(), possibly putting the
- * task to sleep.
- */
- current->memcg_oom.oom_locked = locked;
- current->memcg_oom.wakeups = wakeups;
- css_get(&memcg->css);
- current->memcg_oom.wait_on_memcg = memcg;
- }
+ css_get(&memcg->css);
+ current->memcg_oom.memcg = memcg;
+ current->memcg_oom.gfp_mask = mask;
+ current->memcg_oom.order = order;
}
/**
* mem_cgroup_oom_synchronize - complete memcg OOM handling
+ * @handle: actually kill/wait or just clean up the OOM state
*
- * This has to be called at the end of a page fault if the the memcg
- * OOM handler was enabled and the fault is returning %VM_FAULT_OOM.
+ * This has to be called at the end of a page fault if the memcg OOM
+ * handler was enabled.
*
- * Memcg supports userspace OOM handling, so failed allocations must
+ * Memcg supports userspace OOM handling where failed allocations must
* sleep on a waitqueue until the userspace task resolves the
* situation. Sleeping directly in the charge context with all kinds
* of locks held is not a good idea, instead we remember an OOM state
* in the task and mem_cgroup_oom_synchronize() has to be called at
- * the end of the page fault to put the task to sleep and clean up the
- * OOM state.
+ * the end of the page fault to complete the OOM handling.
*
* Returns %true if an ongoing memcg OOM situation was detected and
- * finalized, %false otherwise.
+ * completed, %false otherwise.
*/
-bool mem_cgroup_oom_synchronize(void)
+bool mem_cgroup_oom_synchronize(bool handle)
{
+ struct mem_cgroup *memcg = current->memcg_oom.memcg;
struct oom_wait_info owait;
- struct mem_cgroup *memcg;
+ bool locked;
/* OOM is global, do not handle */
- if (!current->memcg_oom.in_memcg_oom)
- return false;
-
- /*
- * We invoked the OOM killer but there is a chance that a kill
- * did not free up any charges. Everybody else might already
- * be sleeping, so restart the fault and keep the rampage
- * going until some charges are released.
- */
- memcg = current->memcg_oom.wait_on_memcg;
if (!memcg)
- goto out;
+ return false;
- if (test_thread_flag(TIF_MEMDIE) || fatal_signal_pending(current))
- goto out_memcg;
+ if (!handle)
+ goto cleanup;
owait.memcg = memcg;
owait.wait.flags = 0;
INIT_LIST_HEAD(&owait.wait.task_list);
prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
- /* Only sleep if we didn't miss any wakeups since OOM */
- if (atomic_read(&memcg->oom_wakeups) == current->memcg_oom.wakeups)
+ mem_cgroup_mark_under_oom(memcg);
+
+ locked = mem_cgroup_oom_trylock(memcg);
+
+ if (locked)
+ mem_cgroup_oom_notify(memcg);
+
+ if (locked && !memcg->oom_kill_disable) {
+ mem_cgroup_unmark_under_oom(memcg);
+ finish_wait(&memcg_oom_waitq, &owait.wait);
+ mem_cgroup_out_of_memory(memcg, current->memcg_oom.gfp_mask,
+ current->memcg_oom.order);
+ } else {
schedule();
- finish_wait(&memcg_oom_waitq, &owait.wait);
-out_memcg:
- mem_cgroup_unmark_under_oom(memcg);
- if (current->memcg_oom.oom_locked) {
+ mem_cgroup_unmark_under_oom(memcg);
+ finish_wait(&memcg_oom_waitq, &owait.wait);
+ }
+
+ if (locked) {
mem_cgroup_oom_unlock(memcg);
/*
* There is no guarantee that an OOM-lock contender
*/
memcg_oom_recover(memcg);
}
+cleanup:
+ current->memcg_oom.memcg = NULL;
css_put(&memcg->css);
- current->memcg_oom.wait_on_memcg = NULL;
-out:
- current->memcg_oom.in_memcg_oom = 0;
return true;
}
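
A userspace sketch (assumptions only; task_oom_state, try_charge and fault_return are illustrative names, not the kernel API) of the "record now, handle later" pattern described in the comments above: the deep charge path only notes that an OOM situation was hit, and the outermost fault handler decides after unwinding whether anything must actually be done.

#include <stdbool.h>
#include <stdio.h>

struct task_oom_state {
	bool pending;		/* set where the failure is detected */
	int order;		/* context needed to handle it later */
};

static struct task_oom_state current_oom;

/* deep call path: may hold "locks", so only record the situation */
static int try_charge(int order)
{
	/* pretend the charge failed */
	current_oom.pending = true;
	current_oom.order = order;
	return -1;
}

/* outermost fault handler: locks dropped, safe to act on the state */
static void fault_return(bool fault_failed)
{
	if (!current_oom.pending)
		return;
	if (fault_failed)
		printf("handle OOM for order %d now\n", current_oom.order);
	else
		printf("fault recovered, just clear the OOM state\n");
	current_oom.pending = false;
}

int main(void)
{
	int ret = try_charge(0);

	fault_return(ret < 0);	/* pretend the whole fault failed too */
	return 0;
}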
|| fatal_signal_pending(current)))
goto bypass;
+ if (unlikely(task_in_memcg_oom(current)))
+ goto bypass;
+
/*
* We always charge the cgroup the mm_struct belongs to.
* The mm_struct's mem_cgroup changes on task migration if the
*ptr = memcg;
return 0;
nomem:
- *ptr = NULL;
- return -ENOMEM;
+ if (!(gfp_mask & __GFP_NOFAIL)) {
+ *ptr = NULL;
+ return -ENOMEM;
+ }
bypass:
*ptr = root_mem_cgroup;
return -EINTR;
unlock_page_cgroup(pc);
/*
- * "charge_statistics" updated event counter.
+ * "charge_statistics" updated event counter. Then, check it.
+ * Insert the ancestors (and the ancestors' ancestors) into the
+ * softlimit RB-tree if they exceed the softlimit.
*/
memcg_check_events(memcg, page);
}
{
/* Update stat data for mem_cgroup */
preempt_disable();
- WARN_ON_ONCE(from->stat->count[idx] < nr_pages);
- __this_cpu_add(from->stat->count[idx], -nr_pages);
+ __this_cpu_sub(from->stat->count[idx], nr_pages);
__this_cpu_add(to->stat->count[idx], nr_pages);
preempt_enable();
}
return ret;
}
+unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order,
+ gfp_t gfp_mask,
+ unsigned long *total_scanned)
+{
+ unsigned long nr_reclaimed = 0;
+ struct mem_cgroup_per_zone *mz, *next_mz = NULL;
+ unsigned long reclaimed;
+ int loop = 0;
+ struct mem_cgroup_tree_per_zone *mctz;
+ unsigned long long excess;
+ unsigned long nr_scanned;
+
+ if (order > 0)
+ return 0;
+
+ mctz = soft_limit_tree_node_zone(zone_to_nid(zone), zone_idx(zone));
+ /*
+ * This loop can run for a while, especially if mem_cgroups continuously
+ * keep exceeding their soft limit and putting the system under
+ * pressure
+ */
+ do {
+ if (next_mz)
+ mz = next_mz;
+ else
+ mz = mem_cgroup_largest_soft_limit_node(mctz);
+ if (!mz)
+ break;
+
+ nr_scanned = 0;
+ reclaimed = mem_cgroup_soft_reclaim(mz->memcg, zone,
+ gfp_mask, &nr_scanned);
+ nr_reclaimed += reclaimed;
+ *total_scanned += nr_scanned;
+ spin_lock(&mctz->lock);
+
+ /*
+ * If we failed to reclaim anything from this memory cgroup
+ * it is time to move on to the next cgroup
+ */
+ next_mz = NULL;
+ if (!reclaimed) {
+ do {
+ /*
+ * Loop until we find yet another one.
+ *
+ * By the time we get the soft_limit lock
+ * again, someone might have added the
+ * group back on the RB tree. Iterate to
+ * make sure we get a different mem.
+ * mem_cgroup_largest_soft_limit_node returns
+ * NULL if no other cgroup is present on
+ * the tree
+ */
+ next_mz =
+ __mem_cgroup_largest_soft_limit_node(mctz);
+ if (next_mz == mz)
+ css_put(&next_mz->memcg->css);
+ else /* next_mz == NULL or other memcg */
+ break;
+ } while (1);
+ }
+ __mem_cgroup_remove_exceeded(mz->memcg, mz, mctz);
+ excess = res_counter_soft_limit_excess(&mz->memcg->res);
+ /*
+ * One school of thought says that we should not add
+ * back the node to the tree if reclaim returns 0.
+ * But our reclaim could return 0 simply because, due
+ * to the priority, we are exposing a smaller subset of
+ * memory to reclaim from. Consider this as a longer
+ * term TODO.
+ */
+ /* If excess == 0, no tree ops */
+ __mem_cgroup_insert_exceeded(mz->memcg, mz, mctz, excess);
+ spin_unlock(&mctz->lock);
+ css_put(&mz->memcg->css);
+ loop++;
+ /*
+ * Could not reclaim anything and there are no more
+ * mem cgroups to try or we seem to be looping without
+ * reclaiming anything.
+ */
+ if (!nr_reclaimed &&
+ (next_mz == NULL ||
+ loop > MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS))
+ break;
+ } while (!nr_reclaimed);
+ if (next_mz)
+ css_put(&next_mz->memcg->css);
+ return nr_reclaimed;
+}
+
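
A conceptual userspace sketch (not kernel code; the linear scan stands in for the RB-tree and all names are illustrative) of the soft-limit policy reinstated above: groups are kept ordered by how far they exceed their soft limit, reclaim always picks the worst offender, and the group is put back with its updated excess until it drops off the tree.

#include <stdio.h>

struct group {
	const char *name;
	unsigned long long usage_in_excess;
};

/* stand-in for __mem_cgroup_largest_soft_limit_node(): linear scan here */
static struct group *largest_excess(struct group *g, int n)
{
	struct group *best = NULL;

	for (int i = 0; i < n; i++)
		if (g[i].usage_in_excess &&
		    (!best || g[i].usage_in_excess > best->usage_in_excess))
			best = &g[i];
	return best;
}

int main(void)
{
	struct group groups[] = {
		{ "A", 4096 }, { "B", 1048576 }, { "C", 0 },
	};
	struct group *victim;

	while ((victim = largest_excess(groups, 3))) {
		printf("reclaim from %s (excess %llu)\n",
		       victim->name, victim->usage_in_excess);
		/* pretend reclaim halved the excess; 0 means "off the tree" */
		victim->usage_in_excess /= 2;
		if (victim->usage_in_excess < 1024)
			victim->usage_in_excess = 0;
	}
	return 0;
}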
/**
* mem_cgroup_force_empty_list - clears LRU of a group
* @memcg: group to clear
} while (usage > 0);
}
-/*
- * This mainly exists for tests during the setting of set of use_hierarchy.
- * Since this is the very setting we are changing, the current hierarchy value
- * is meaningless
- */
-static inline bool __memcg_has_children(struct mem_cgroup *memcg)
-{
- struct cgroup_subsys_state *pos;
-
- /* bounce at first found */
- css_for_each_child(pos, &memcg->css)
- return true;
- return false;
-}
-
-/*
- * Must be called with memcg_create_mutex held, unless the cgroup is guaranteed
- * to be already dead (as in mem_cgroup_force_empty, for instance). This is
- * from mem_cgroup_count_children(), in the sense that we don't really care how
- * many children we have; we only need to know if we have any. It also counts
- * any memcg without hierarchy as infertile.
- */
static inline bool memcg_has_children(struct mem_cgroup *memcg)
{
- return memcg->use_hierarchy && __memcg_has_children(memcg);
+ lockdep_assert_held(&memcg_create_mutex);
+ /*
+ * The lock does not prevent addition or deletion to the list
+ * of children, but it prevents a new child from being
+ * initialized based on this parent in css_online(), so it's
+ * enough to decide whether hierarchically inherited
+ * attributes can still be changed or not.
+ */
+ return memcg->use_hierarchy &&
+ !list_empty(&memcg->css.cgroup->children);
}
/*
*/
if ((!parent_memcg || !parent_memcg->use_hierarchy) &&
(val == 1 || val == 0)) {
- if (!__memcg_has_children(memcg))
+ if (list_empty(&memcg->css.cgroup->children))
memcg->use_hierarchy = val;
else
retval = -EBUSY;
for (zone = 0; zone < MAX_NR_ZONES; zone++) {
mz = &pn->zoneinfo[zone];
lruvec_init(&mz->lruvec);
+ mz->usage_in_excess = 0;
+ mz->on_tree = false;
mz->memcg = memcg;
}
memcg->nodeinfo[node] = pn;
int node;
size_t size = memcg_size();
+ mem_cgroup_remove_from_trees(memcg);
free_css_id(&mem_cgroup_subsys, &memcg->css);
for_each_node(node)
}
EXPORT_SYMBOL(parent_mem_cgroup);
+static void __init mem_cgroup_soft_limit_tree_init(void)
+{
+ struct mem_cgroup_tree_per_node *rtpn;
+ struct mem_cgroup_tree_per_zone *rtpz;
+ int tmp, node, zone;
+
+ for_each_node(node) {
+ tmp = node;
+ if (!node_state(node, N_NORMAL_MEMORY))
+ tmp = -1;
+ rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL, tmp);
+ BUG_ON(!rtpn);
+
+ soft_limit_tree.rb_tree_per_node[node] = rtpn;
+
+ for (zone = 0; zone < MAX_NR_ZONES; zone++) {
+ rtpz = &rtpn->rb_tree_per_zone[zone];
+ rtpz->rb_root = RB_ROOT;
+ spin_lock_init(&rtpz->lock);
+ }
+ }
+}
+
static struct cgroup_subsys_state * __ref
mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
{
mutex_init(&memcg->thresholds_lock);
spin_lock_init(&memcg->move_lock);
vmpressure_init(&memcg->vmpressure);
- spin_lock_init(&memcg->soft_lock);
return &memcg->css;
mem_cgroup_invalidate_reclaim_iterators(memcg);
mem_cgroup_reparent_charges(memcg);
- if (memcg->soft_contributed) {
- while ((memcg = parent_mem_cgroup(memcg)))
- atomic_dec(&memcg->children_in_excess);
-
- if (memcg != root_mem_cgroup && !root_mem_cgroup->use_hierarchy)
- atomic_dec(&root_mem_cgroup->children_in_excess);
- }
mem_cgroup_destroy_all_caches(memcg);
vmpressure_cleanup(&memcg->vmpressure);
}
{
hotcpu_notifier(memcg_cpu_hotplug_callback, 0);
enable_swap_cgroup();
+ mem_cgroup_soft_limit_tree_init();
memcg_stock_init();
return 0;
}
* shake_page could have turned it free.
*/
if (is_free_buddy_page(p)) {
- action_result(pfn, "free buddy, 2nd try",
- DELAYED);
+ if (flags & MF_COUNT_INCREASED)
+ action_result(pfn, "free buddy", DELAYED);
+ else
+ action_result(pfn, "free buddy, 2nd try", DELAYED);
return 0;
}
action_result(pfn, "non LRU", IGNORED);
* worked by memory_failure() and the page lock is not held yet.
* In such case, we yield to memory_failure() and make unpoison fail.
*/
- if (PageTransHuge(page)) {
+ if (!PageHuge(page) && PageTransHuge(page)) {
pr_info("MCE: Memory failure is now running on %#lx\n", pfn);
return 0;
}
*/
make_migration_entry_read(&entry);
pte = swp_entry_to_pte(entry);
+ if (pte_swp_soft_dirty(*src_pte))
+ pte = pte_swp_mksoft_dirty(pte);
set_pte_at(src_mm, addr, src_pte, pte);
}
}
}
int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
- unsigned long addr, int current_nid)
+ unsigned long addr, int page_nid)
{
get_page(page);
count_vm_numa_event(NUMA_HINT_FAULTS);
- if (current_nid == numa_node_id())
+ if (page_nid == numa_node_id())
count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
return mpol_misplaced(page, vma, addr);
{
struct page *page = NULL;
spinlock_t *ptl;
- int current_nid = -1;
+ int page_nid = -1;
int target_nid;
bool migrated = false;
return 0;
}
- current_nid = page_to_nid(page);
- target_nid = numa_migrate_prep(page, vma, addr, current_nid);
+ page_nid = page_to_nid(page);
+ target_nid = numa_migrate_prep(page, vma, addr, page_nid);
pte_unmap_unlock(ptep, ptl);
if (target_nid == -1) {
- /*
- * Account for the fault against the current node if it not
- * being replaced regardless of where the page is located.
- */
- current_nid = numa_node_id();
put_page(page);
goto out;
}
/* Migrate to the requested node */
migrated = migrate_misplaced_page(page, target_nid);
if (migrated)
- current_nid = target_nid;
+ page_nid = target_nid;
out:
- if (current_nid != -1)
- task_numa_fault(current_nid, 1, migrated);
+ if (page_nid != -1)
+ task_numa_fault(page_nid, 1, migrated);
return 0;
}
unsigned long offset;
spinlock_t *ptl;
bool numa = false;
- int local_nid = numa_node_id();
spin_lock(&mm->page_table_lock);
pmd = *pmdp;
for (addr = _addr + offset; addr < _addr + PMD_SIZE; pte++, addr += PAGE_SIZE) {
pte_t pteval = *pte;
struct page *page;
- int curr_nid = local_nid;
+ int page_nid = -1;
int target_nid;
- bool migrated;
+ bool migrated = false;
+
if (!pte_present(pteval))
continue;
if (!pte_numa(pteval))
if (unlikely(page_mapcount(page) != 1))
continue;
- /*
- * Note that the NUMA fault is later accounted to either
- * the node that is currently running or where the page is
- * migrated to.
- */
- curr_nid = local_nid;
- target_nid = numa_migrate_prep(page, vma, addr,
- page_to_nid(page));
- if (target_nid == -1) {
+ page_nid = page_to_nid(page);
+ target_nid = numa_migrate_prep(page, vma, addr, page_nid);
+ pte_unmap_unlock(pte, ptl);
+ if (target_nid != -1) {
+ migrated = migrate_misplaced_page(page, target_nid);
+ if (migrated)
+ page_nid = target_nid;
+ } else {
put_page(page);
- continue;
}
- /* Migrate to the requested node */
- pte_unmap_unlock(pte, ptl);
- migrated = migrate_misplaced_page(page, target_nid);
- if (migrated)
- curr_nid = target_nid;
- task_numa_fault(curr_nid, 1, migrated);
+ if (page_nid != -1)
+ task_numa_fault(page_nid, 1, migrated);
pte = pte_offset_map_lock(mm, pmdp, addr, &ptl);
}
* space. Kernel faults are handled more gracefully.
*/
if (flags & FAULT_FLAG_USER)
- mem_cgroup_enable_oom();
+ mem_cgroup_oom_enable();
ret = __handle_mm_fault(mm, vma, address, flags);
- if (flags & FAULT_FLAG_USER)
- mem_cgroup_disable_oom();
-
- if (WARN_ON(task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM)))
- mem_cgroup_oom_synchronize();
+ if (flags & FAULT_FLAG_USER) {
+ mem_cgroup_oom_disable();
+ /*
+ * The task may have entered a memcg OOM situation but
+ * if the allocation error was handled gracefully (no
+ * VM_FAULT_OOM), there is no need to kill anything.
+ * Just clean up the OOM state peacefully.
+ */
+ if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))
+ mem_cgroup_oom_synchronize(false);
+ }
return ret;
}
list_del(&page->lru);
dec_zone_page_state(page, NR_ISOLATED_ANON +
page_is_file_cache(page));
- if (unlikely(balloon_page_movable(page)))
+ if (unlikely(isolated_balloon_page(page)))
balloon_page_putback(page);
else
putback_lru_page(page);
get_page(new);
pte = pte_mkold(mk_pte(new, vma->vm_page_prot));
+ if (pte_swp_soft_dirty(*ptep))
+ pte = pte_mksoft_dirty(pte);
if (is_write_migration_entry(entry))
pte = pte_mkwrite(pte);
#ifdef CONFIG_HUGETLB_PAGE
unlock_page(new_page);
put_page(new_page); /* Free it */
- unlock_page(page);
+ /* Retake the callers reference and putback on LRU */
+ get_page(page);
putback_lru_page(page);
-
- count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR);
- isolated = 0;
- goto out;
+ mod_zone_page_state(page_zone(page),
+ NR_ISOLATED_ANON + page_lru, -HPAGE_PMD_NR);
+ goto out_fail;
}
/*
entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
entry = pmd_mkhuge(entry);
- page_add_new_anon_rmap(new_page, vma, haddr);
-
+ pmdp_clear_flush(vma, haddr, pmd);
set_pmd_at(mm, haddr, pmd, entry);
+ page_add_new_anon_rmap(new_page, vma, haddr);
update_mmu_cache_pmd(vma, address, &entry);
page_remove_rmap(page);
/*
count_vm_events(PGMIGRATE_SUCCESS, HPAGE_PMD_NR);
count_vm_numa_events(NUMA_PAGE_MIGRATE, HPAGE_PMD_NR);
-out:
mod_zone_page_state(page_zone(page),
NR_ISOLATED_ANON + page_lru,
-HPAGE_PMD_NR);
out_fail:
count_vm_events(PGMIGRATE_FAIL, HPAGE_PMD_NR);
out_dropref:
+ entry = pmd_mknonnuma(entry);
+ set_pmd_at(mm, haddr, pmd, entry);
+ update_mmu_cache_pmd(vma, address, &entry);
+
unlock_page(page);
put_page(page);
return 0;
/*
* Initialize pte walk starting at the already pinned page where we
- * are sure that there is a pte.
+ * are sure that there is a pte, as it was pinned under the same
+ * mmap_sem write op.
*/
pte = get_locked_pte(vma->vm_mm, start, &ptl);
- end = min(end, pmd_addr_end(start, end));
+ /* Make sure we do not cross the page table boundary */
+ end = pgd_addr_end(start, end);
+ end = pud_addr_end(start, end);
+ end = pmd_addr_end(start, end);
/* The page next to the pinned page is the first we will try to get */
start += PAGE_SIZE;
/* Ignore errors */
mlock_fixup(vma, &prev, vma->vm_start, vma->vm_end, newflags);
+ cond_resched();
}
out:
return 0;
swp_entry_t entry = pte_to_swp_entry(oldpte);
if (is_write_migration_entry(entry)) {
+ pte_t newpte;
/*
* A protection check is difficult so
* just be safe and disable write
*/
make_migration_entry_read(&entry);
- set_pte_at(mm, addr, pte,
- swp_entry_to_pte(entry));
+ newpte = swp_entry_to_pte(entry);
+ if (pte_swp_soft_dirty(oldpte))
+ newpte = pte_swp_mksoft_dirty(newpte);
+ set_pte_at(mm, addr, pte, newpte);
}
pages++;
}
split_huge_page_pmd(vma, addr, pmd);
else if (change_huge_pmd(vma, pmd, addr, newprot,
prot_numa)) {
- pages += HPAGE_PMD_NR;
+ pages++;
continue;
}
/* fall through */
#include <asm/uaccess.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
-#include <asm/pgalloc.h>
#include "internal.h"
return NULL;
pmd = pmd_alloc(mm, pud, addr);
- if (!pmd) {
- pud_free(mm, pud);
+ if (!pmd)
return NULL;
- }
VM_BUG_ON(pmd_trans_huge(*pmd));
{
struct zonelist *zonelist;
- if (mem_cgroup_oom_synchronize())
+ if (mem_cgroup_oom_synchronize(true))
return;
zonelist = node_zonelist(first_online_node, GFP_KERNEL);
return 1;
}
-static long bdi_max_pause(struct backing_dev_info *bdi,
- unsigned long bdi_dirty)
+static unsigned long bdi_max_pause(struct backing_dev_info *bdi,
+ unsigned long bdi_dirty)
{
- long bw = bdi->avg_write_bandwidth;
- long t;
+ unsigned long bw = bdi->avg_write_bandwidth;
+ unsigned long t;
/*
* Limit pause time for small memory systems. If sleeping for too long
t = bdi_dirty / (1 + bw / roundup_pow_of_two(1 + HZ / 8));
t++;
- return min_t(long, t, MAX_PAUSE);
+ return min_t(unsigned long, t, MAX_PAUSE);
}
static long bdi_min_pause(struct backing_dev_info *bdi,
list_del(&page->lru);
rmv_page_order(page);
zone->free_area[order].nr_free--;
-#ifdef CONFIG_HIGHMEM
- if (PageHighMem(page))
- totalhigh_pages -= 1 << order;
-#endif
for (i = 0; i < (1 << order); i++)
SetPageReserved((page+i));
pfn += (1 << order);
if (err)
break;
pgd++;
- } while (addr = next, addr != end);
+ } while (addr = next, addr < end);
return err;
}
continue;
}
+#if !defined(CONFIG_SLUB) || !defined(CONFIG_SLUB_DEBUG_ON)
/*
* For simplicity, we won't check this in the list of memcg
* caches. We have control over memcg naming, and if there
s = NULL;
return -EINVAL;
}
+#endif
}
WARN_ON(strchr(name, ' ')); /* It confuses parsers */
struct filename *pathname;
int i, type, prev;
int err;
+ unsigned int old_block_size;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
}
swap_file = p->swap_file;
+ old_block_size = p->old_block_size;
p->swap_file = NULL;
p->max = 0;
swap_map = p->swap_map;
inode = mapping->host;
if (S_ISBLK(inode->i_mode)) {
struct block_device *bdev = I_BDEV(inode);
- set_blocksize(bdev, p->old_block_size);
+ set_blocksize(bdev, old_block_size);
blkdev_put(bdev, FMODE_READ | FMODE_WRITE | FMODE_EXCL);
} else {
mutex_lock(&inode->i_mutex);
#include <asm/div64.h>
#include <linux/swapops.h>
+#include <linux/balloon_compaction.h>
#include "internal.h"
{
return !sc->target_mem_cgroup;
}
-
-static bool mem_cgroup_should_soft_reclaim(struct scan_control *sc)
-{
- struct mem_cgroup *root = sc->target_mem_cgroup;
- return !mem_cgroup_disabled() &&
- mem_cgroup_soft_reclaim_eligible(root, root) != SKIP_TREE;
-}
#else
static bool global_reclaim(struct scan_control *sc)
{
return true;
}
-
-static bool mem_cgroup_should_soft_reclaim(struct scan_control *sc)
-{
- return false;
-}
#endif
unsigned long zone_reclaimable_pages(struct zone *zone)
down_write(&shrinker_rwsem);
list_del(&shrinker->list);
up_write(&shrinker_rwsem);
+ kfree(shrinker->nr_deferred);
}
EXPORT_SYMBOL(unregister_shrinker);
LIST_HEAD(clean_pages);
list_for_each_entry_safe(page, next, page_list, lru) {
- if (page_is_file_cache(page) && !PageDirty(page)) {
+ if (page_is_file_cache(page) && !PageDirty(page) &&
+ !isolated_balloon_page(page)) {
ClearPageActive(page);
list_move(&page->lru, &clean_pages);
}
}
}
-static int
-__shrink_zone(struct zone *zone, struct scan_control *sc, bool soft_reclaim)
+static void shrink_zone(struct zone *zone, struct scan_control *sc)
{
unsigned long nr_reclaimed, nr_scanned;
- int groups_scanned = 0;
do {
struct mem_cgroup *root = sc->target_mem_cgroup;
.zone = zone,
.priority = sc->priority,
};
- struct mem_cgroup *memcg = NULL;
- mem_cgroup_iter_filter filter = (soft_reclaim) ?
- mem_cgroup_soft_reclaim_eligible : NULL;
+ struct mem_cgroup *memcg;
nr_reclaimed = sc->nr_reclaimed;
nr_scanned = sc->nr_scanned;
- while ((memcg = mem_cgroup_iter_cond(root, memcg, &reclaim, filter))) {
+ memcg = mem_cgroup_iter(root, NULL, &reclaim);
+ do {
struct lruvec *lruvec;
- groups_scanned++;
lruvec = mem_cgroup_zone_lruvec(zone, memcg);
shrink_lruvec(lruvec, sc);
mem_cgroup_iter_break(root, memcg);
break;
}
- }
+ memcg = mem_cgroup_iter(root, memcg, &reclaim);
+ } while (memcg);
vmpressure(sc->gfp_mask, sc->target_mem_cgroup,
sc->nr_scanned - nr_scanned,
} while (should_continue_reclaim(zone, sc->nr_reclaimed - nr_reclaimed,
sc->nr_scanned - nr_scanned, sc));
-
- return groups_scanned;
-}
-
-
-static void shrink_zone(struct zone *zone, struct scan_control *sc)
-{
- bool do_soft_reclaim = mem_cgroup_should_soft_reclaim(sc);
- unsigned long nr_scanned = sc->nr_scanned;
- int scanned_groups;
-
- scanned_groups = __shrink_zone(zone, sc, do_soft_reclaim);
- /*
- * memcg iterator might race with other reclaimer or start from
- * a incomplete tree walk so the tree walk in __shrink_zone
- * might have missed groups that are above the soft limit. Try
- * another loop to catch up with others. Do it just once to
- * prevent from reclaim latencies when other reclaimers always
- * preempt this one.
- */
- if (do_soft_reclaim && !scanned_groups)
- __shrink_zone(zone, sc, do_soft_reclaim);
-
- /*
- * No group is over the soft limit or those that are do not have
- * pages in the zone we are reclaiming so we have to reclaim everybody
- */
- if (do_soft_reclaim && (sc->nr_scanned == nr_scanned)) {
- __shrink_zone(zone, sc, false);
- return;
- }
}
/* Returns true if compaction should go ahead for a high-order request */
{
struct zoneref *z;
struct zone *zone;
+ unsigned long nr_soft_reclaimed;
+ unsigned long nr_soft_scanned;
bool aborted_reclaim = false;
/*
continue;
}
}
+ /*
+ * This steals pages from memory cgroups over softlimit
+ * and returns the number of reclaimed pages and
+ * scanned pages. This works for global memory pressure
+ * and balancing, not for a memcg's limit.
+ */
+ nr_soft_scanned = 0;
+ nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone,
+ sc->order, sc->gfp_mask,
+ &nr_soft_scanned);
+ sc->nr_reclaimed += nr_soft_reclaimed;
+ sc->nr_scanned += nr_soft_scanned;
/* need some check for avoid more shrink_zone() */
}
{
int i;
int end_zone = 0; /* Inclusive. 0 = ZONE_DMA */
+ unsigned long nr_soft_reclaimed;
+ unsigned long nr_soft_scanned;
struct scan_control sc = {
.gfp_mask = GFP_KERNEL,
.priority = DEF_PRIORITY,
sc.nr_scanned = 0;
+ nr_soft_scanned = 0;
+ /*
+ * Call soft limit reclaim before calling shrink_zone.
+ */
+ nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone,
+ order, sc.gfp_mask,
+ &nr_soft_scanned);
+ sc.nr_reclaimed += nr_soft_reclaimed;
+
/*
* There should be no need to raise the scanning
* priority if enough pages are already being scanned
}
tree->rbroot = RB_ROOT;
spin_unlock(&tree->lock);
+
+ zbud_destroy_pool(tree->pool);
+ kfree(tree);
+ zswap_trees[type] = NULL;
}
static struct zbud_ops zswap_zbud_ops = {
static unsigned int mrp_join_time __read_mostly = 200;
module_param(mrp_join_time, uint, 0644);
MODULE_PARM_DESC(mrp_join_time, "Join time in ms (default 200ms)");
+
+static unsigned int mrp_periodic_time __read_mostly = 1000;
+module_param(mrp_periodic_time, uint, 0644);
+MODULE_PARM_DESC(mrp_periodic_time, "Periodic time in ms (default 1s)");
+
MODULE_LICENSE("GPL");
static const u8
mrp_join_timer_arm(app);
}
+static void mrp_periodic_timer_arm(struct mrp_applicant *app)
+{
+ mod_timer(&app->periodic_timer,
+ jiffies + msecs_to_jiffies(mrp_periodic_time));
+}
+
+static void mrp_periodic_timer(unsigned long data)
+{
+ struct mrp_applicant *app = (struct mrp_applicant *)data;
+
+ spin_lock(&app->lock);
+ mrp_mad_event(app, MRP_EVENT_PERIODIC);
+ mrp_pdu_queue(app);
+ spin_unlock(&app->lock);
+
+ mrp_periodic_timer_arm(app);
+}
+
static int mrp_pdu_parse_end_mark(struct sk_buff *skb, int *offset)
{
__be16 endmark;
rcu_assign_pointer(dev->mrp_port->applicants[appl->type], app);
setup_timer(&app->join_timer, mrp_join_timer, (unsigned long)app);
mrp_join_timer_arm(app);
+ setup_timer(&app->periodic_timer, mrp_periodic_timer,
+ (unsigned long)app);
+ mrp_periodic_timer_arm(app);
return 0;
err3:
* all pending messages before the applicant is gone.
*/
del_timer_sync(&app->join_timer);
+ del_timer_sync(&app->periodic_timer);
spin_lock_bh(&app->lock);
mrp_mad_event(app, MRP_EVENT_TX);
return nla_total_size(2) + /* IFLA_VLAN_PROTOCOL */
nla_total_size(2) + /* IFLA_VLAN_ID */
- sizeof(struct ifla_vlan_flags) + /* IFLA_VLAN_FLAGS */
+ nla_total_size(sizeof(struct ifla_vlan_flags)) + /* IFLA_VLAN_FLAGS */
vlan_qos_map_size(vlan->nr_ingress_mappings) +
vlan_qos_map_size(vlan->nr_egress_mappings);
}
batadv_recv_handler_init();
batadv_iv_init();
+ batadv_nc_init();
batadv_event_workqueue = create_singlethread_workqueue("bat_events");
if (ret < 0)
goto err;
- ret = batadv_nc_init(bat_priv);
+ ret = batadv_nc_mesh_init(bat_priv);
if (ret < 0)
goto err;
batadv_vis_quit(bat_priv);
batadv_gw_node_purge(bat_priv);
- batadv_nc_free(bat_priv);
+ batadv_nc_mesh_free(bat_priv);
batadv_dat_free(bat_priv);
batadv_bla_free(bat_priv);
struct batadv_hard_iface *recv_if);
/**
+ * batadv_nc_init - one-time initialization for network coding
+ */
+int __init batadv_nc_init(void)
+{
+ int ret;
+
+ /* Register our packet type */
+ ret = batadv_recv_handler_register(BATADV_CODED,
+ batadv_nc_recv_coded_packet);
+
+ return ret;
+}
+
+/**
* batadv_nc_start_timer - initialise the nc periodic worker
* @bat_priv: the bat priv with all the soft interface information
*/
}
/**
- * batadv_nc_init - initialise coding hash table and start house keeping
+ * batadv_nc_mesh_init - initialise coding hash table and start house keeping
* @bat_priv: the bat priv with all the soft interface information
*/
-int batadv_nc_init(struct batadv_priv *bat_priv)
+int batadv_nc_mesh_init(struct batadv_priv *bat_priv)
{
bat_priv->nc.timestamp_fwd_flush = jiffies;
bat_priv->nc.timestamp_sniffed_purge = jiffies;
batadv_hash_set_lock_class(bat_priv->nc.coding_hash,
&batadv_nc_decoding_hash_lock_class_key);
- /* Register our packet type */
- if (batadv_recv_handler_register(BATADV_CODED,
- batadv_nc_recv_coded_packet) < 0)
- goto err;
-
INIT_DELAYED_WORK(&bat_priv->nc.work, batadv_nc_worker);
batadv_nc_start_timer(bat_priv);
}
/**
- * batadv_nc_free - clean up network coding memory
+ * batadv_nc_mesh_free - clean up network coding memory
* @bat_priv: the bat priv with all the soft interface information
*/
-void batadv_nc_free(struct batadv_priv *bat_priv)
+void batadv_nc_mesh_free(struct batadv_priv *bat_priv)
{
- batadv_recv_handler_unregister(BATADV_CODED);
cancel_delayed_work_sync(&bat_priv->nc.work);
batadv_nc_purge_paths(bat_priv, bat_priv->nc.coding_hash, NULL);
#ifdef CONFIG_BATMAN_ADV_NC
-int batadv_nc_init(struct batadv_priv *bat_priv);
-void batadv_nc_free(struct batadv_priv *bat_priv);
+int batadv_nc_init(void);
+int batadv_nc_mesh_init(struct batadv_priv *bat_priv);
+void batadv_nc_mesh_free(struct batadv_priv *bat_priv);
void batadv_nc_update_nc_node(struct batadv_priv *bat_priv,
struct batadv_orig_node *orig_node,
struct batadv_orig_node *orig_neigh_node,
#else /* ifdef CONFIG_BATMAN_ADV_NC */
-static inline int batadv_nc_init(struct batadv_priv *bat_priv)
+static inline int batadv_nc_init(void)
{
return 0;
}
-static inline void batadv_nc_free(struct batadv_priv *bat_priv)
+static inline int batadv_nc_mesh_init(struct batadv_priv *bat_priv)
+{
+ return 0;
+}
+
+static inline void batadv_nc_mesh_free(struct batadv_priv *bat_priv)
{
return;
}
case ETH_P_8021Q:
vhdr = (struct vlan_ethhdr *)skb->data;
vid = ntohs(vhdr->h_vlan_TCI) & VLAN_VID_MASK;
+ vid |= BATADV_VLAN_HAS_TAG;
if (vhdr->h_vlan_encapsulated_proto != ethertype)
break;
case ETH_P_8021Q:
vhdr = (struct vlan_ethhdr *)skb->data;
vid = ntohs(vhdr->h_vlan_TCI) & VLAN_VID_MASK;
+ vid |= BATADV_VLAN_HAS_TAG;
if (vhdr->h_vlan_encapsulated_proto != ethertype)
break;
goto done;
}
- if (hdev->rfkill && rfkill_blocked(hdev->rfkill)) {
+ /* Check for rfkill but allow the HCI setup stage to proceed
+ * (which in itself doesn't cause any RF activity).
+ */
+ if (test_bit(HCI_RFKILLED, &hdev->dev_flags) &&
+ !test_bit(HCI_SETUP, &hdev->dev_flags)) {
ret = -ERFKILL;
goto done;
}
BT_DBG("%p name %s blocked %d", hdev, hdev->name, blocked);
- if (!blocked)
- return 0;
-
- hci_dev_do_close(hdev);
+ if (blocked) {
+ set_bit(HCI_RFKILLED, &hdev->dev_flags);
+ if (!test_bit(HCI_SETUP, &hdev->dev_flags))
+ hci_dev_do_close(hdev);
+ } else {
+ clear_bit(HCI_RFKILLED, &hdev->dev_flags);
+ }
return 0;
}
return;
}
- if (test_bit(HCI_AUTO_OFF, &hdev->dev_flags))
+ if (test_bit(HCI_RFKILLED, &hdev->dev_flags)) {
+ clear_bit(HCI_AUTO_OFF, &hdev->dev_flags);
+ hci_dev_do_close(hdev);
+ } else if (test_bit(HCI_AUTO_OFF, &hdev->dev_flags)) {
queue_delayed_work(hdev->req_workqueue, &hdev->power_off,
HCI_AUTO_OFF_TIMEOUT);
+ }
if (test_and_clear_bit(HCI_SETUP, &hdev->dev_flags))
mgmt_index_added(hdev);
}
}
+ if (hdev->rfkill && rfkill_blocked(hdev->rfkill))
+ set_bit(HCI_RFKILLED, &hdev->dev_flags);
+
set_bit(HCI_SETUP, &hdev->dev_flags);
if (hdev->dev_type != HCI_AMP)
cp.handle = cpu_to_le16(conn->handle);
if (ltk->authenticated)
- conn->sec_level = BT_SECURITY_HIGH;
+ conn->pending_sec_level = BT_SECURITY_HIGH;
+ else
+ conn->pending_sec_level = BT_SECURITY_MEDIUM;
+
+ conn->enc_key_size = ltk->enc_size;
hci_send_cmd(hdev, HCI_OP_LE_LTK_REPLY, sizeof(cp), &cp);
sk = chan->sk;
+ /* For certain devices (e.g. a HID mouse), support for authentication,
+ * pairing and bonding is optional. For such devices, in order to avoid
+ * keeping the ACL alive for too long after L2CAP disconnection, reset the ACL
+ * disc_timeout back to HCI_DISCONN_TIMEOUT during L2CAP connect.
+ */
+ conn->hcon->disc_timeout = HCI_DISCONN_TIMEOUT;
+
bacpy(&bt_sk(sk)->src, conn->src);
bacpy(&bt_sk(sk)->dst, conn->dst);
chan->psm = psm;
static void rfcomm_dev_state_change(struct rfcomm_dlc *dlc, int err)
{
struct rfcomm_dev *dev = dlc->owner;
- struct tty_struct *tty;
if (!dev)
return;
DPM_ORDER_DEV_AFTER_PARENT);
wake_up_interruptible(&dev->port.open_wait);
- } else if (dlc->state == BT_CLOSED) {
- tty = tty_port_tty_get(&dev->port);
- if (!tty) {
- if (test_bit(RFCOMM_RELEASE_ONHUP, &dev->flags)) {
- /* Drop DLC lock here to avoid deadlock
- * 1. rfcomm_dev_get will take rfcomm_dev_lock
- * but in rfcomm_dev_add there's lock order:
- * rfcomm_dev_lock -> dlc lock
- * 2. tty_port_put will deadlock if it's
- * the last reference
- *
- * FIXME: when we release the lock anything
- * could happen to dev, even its destruction
- */
- rfcomm_dlc_unlock(dlc);
- if (rfcomm_dev_get(dev->id) == NULL) {
- rfcomm_dlc_lock(dlc);
- return;
- }
-
- if (!test_and_set_bit(RFCOMM_TTY_RELEASED,
- &dev->flags))
- tty_port_put(&dev->port);
-
- tty_port_put(&dev->port);
- rfcomm_dlc_lock(dlc);
- }
- } else {
- tty_hangup(tty);
- tty_kref_put(tty);
- }
- }
+ } else if (dlc->state == BT_CLOSED)
+ tty_port_tty_hangup(&dev->port, false);
}
static void rfcomm_dev_modem_status(struct rfcomm_dlc *dlc, u8 v24_sig)
vid = nla_get_u16(tb[NDA_VLAN]);
- if (vid >= VLAN_N_VID) {
+ if (!vid || vid >= VLAN_VID_MASK) {
pr_info("bridge: RTM_NEWNEIGH with invalid vlan id %d\n",
vid);
return -EINVAL;
vid = nla_get_u16(tb[NDA_VLAN]);
- if (vid >= VLAN_N_VID) {
+ if (!vid || vid >= VLAN_VID_MASK) {
pr_info("bridge: RTM_NEWNEIGH with invalid vlan id %d\n",
vid);
return -EINVAL;
call_rcu_bh(&p->rcu, br_multicast_free_pg);
err = 0;
- if (!mp->ports && !mp->mglist && mp->timer_armed &&
+ if (!mp->ports && !mp->mglist &&
netif_running(br->dev))
mod_timer(&mp->timer, jiffies);
break;
del_timer(&p->timer);
call_rcu_bh(&p->rcu, br_multicast_free_pg);
- if (!mp->ports && !mp->mglist && mp->timer_armed &&
+ if (!mp->ports && !mp->mglist &&
netif_running(br->dev))
mod_timer(&mp->timer, jiffies);
mp->br = br;
mp->addr = *group;
-
setup_timer(&mp->timer, br_multicast_group_expired,
(unsigned long)mp);
struct net_bridge_mdb_entry *mp;
struct net_bridge_port_group *p;
struct net_bridge_port_group __rcu **pp;
+ unsigned long now = jiffies;
int err;
spin_lock(&br->multicast_lock);
if (!port) {
mp->mglist = true;
+ mod_timer(&mp->timer, now + br->multicast_membership_interval);
goto out;
}
(p = mlock_dereference(*pp, br)) != NULL;
pp = &p->next) {
if (p->port == port)
- goto out;
+ goto found;
if ((unsigned long)p->port < (unsigned long)port)
break;
}
rcu_assign_pointer(*pp, p);
br_mdb_notify(br->dev, port, group, RTM_NEWMDB);
+found:
+ mod_timer(&p->timer, now + br->multicast_membership_interval);
out:
err = 0;
if (!mp)
goto out;
- mod_timer(&mp->timer, now + br->multicast_membership_interval);
- mp->timer_armed = true;
-
max_delay *= br->multicast_last_member_count;
if (mp->mglist &&
if (!mp)
goto out;
- mod_timer(&mp->timer, now + br->multicast_membership_interval);
- mp->timer_armed = true;
-
max_delay *= br->multicast_last_member_count;
if (mp->mglist &&
(timer_pending(&mp->timer) ?
call_rcu_bh(&p->rcu, br_multicast_free_pg);
br_mdb_notify(br->dev, port, group, RTM_DELMDB);
- if (!mp->ports && !mp->mglist && mp->timer_armed &&
+ if (!mp->ports && !mp->mglist &&
netif_running(br->dev))
mod_timer(&mp->timer, jiffies);
}
br->multicast_last_member_interval;
if (!port) {
- if (mp->mglist && mp->timer_armed &&
+ if (mp->mglist &&
(timer_pending(&mp->timer) ?
time_after(mp->timer.expires, time) :
try_to_del_timer_sync(&mp->timer) >= 0)) {
mod_timer(&mp->timer, time);
}
+
+ goto out;
+ }
+
+ for (p = mlock_dereference(mp->ports, br);
+ p != NULL;
+ p = mlock_dereference(p->next, br)) {
+ if (p->port != port)
+ continue;
+
+ if (!hlist_unhashed(&p->mglist) &&
+ (timer_pending(&p->timer) ?
+ time_after(p->timer.expires, time) :
+ try_to_del_timer_sync(&p->timer) >= 0)) {
+ mod_timer(&p->timer, time);
+ }
+
+ break;
}
out:
spin_unlock(&br->multicast_lock);
hlist_for_each_entry_safe(mp, n, &mdb->mhash[i],
hlist[ver]) {
del_timer(&mp->timer);
- mp->timer_armed = false;
call_rcu_bh(&mp->rcu, br_multicast_free_group);
}
}
struct net_device *dev, u32 filter_mask)
{
int err = 0;
- struct net_bridge_port *port = br_port_get_rcu(dev);
+ struct net_bridge_port *port = br_port_get_rtnl(dev);
/* not a bridge port and */
if (!port && !(filter_mask & RTEXT_FILTER_BRVLAN))
vinfo = nla_data(tb[IFLA_BRIDGE_VLAN_INFO]);
- if (vinfo->vid >= VLAN_N_VID)
+ if (!vinfo->vid || vinfo->vid >= VLAN_VID_MASK)
return -EINVAL;
switch (cmd) {
struct net_port_vlans *pv;
if (br_port_exists(dev))
- pv = nbp_get_vlan_info(br_port_get_rcu(dev));
+ pv = nbp_get_vlan_info(br_port_get_rtnl(dev));
else if (dev->priv_flags & IFF_EBRIDGE)
pv = br_get_vlan_info((struct net_bridge *)netdev_priv(dev));
else
struct timer_list timer;
struct br_ip addr;
bool mglist;
- bool timer_armed;
};
struct net_bridge_mdb_htable
static inline struct net_bridge_port *br_port_get_rcu(const struct net_device *dev)
{
- struct net_bridge_port *port =
- rcu_dereference_rtnl(dev->rx_handler_data);
-
- return br_port_exists(dev) ? port : NULL;
+ return rcu_dereference(dev->rx_handler_data);
}
-static inline struct net_bridge_port *br_port_get_rtnl(struct net_device *dev)
+static inline struct net_bridge_port *br_port_get_rtnl(const struct net_device *dev)
{
return br_port_exists(dev) ?
rtnl_dereference(dev->rx_handler_data) : NULL;
* vid wasn't set
*/
smp_rmb();
- return (v->pvid & VLAN_TAG_PRESENT) ?
- (v->pvid & ~VLAN_TAG_PRESENT) :
- VLAN_N_VID;
+ return v->pvid ?: VLAN_N_VID;
}
#else
extern void br_init_port(struct net_bridge_port *p);
extern void br_become_designated_port(struct net_bridge_port *p);
+extern void __br_set_forward_delay(struct net_bridge *br, unsigned long t);
extern int br_set_forward_delay(struct net_bridge *br, unsigned long x);
extern int br_set_hello_time(struct net_bridge *br, unsigned long x);
extern int br_set_max_age(struct net_bridge *br, unsigned long x);
p->designated_age = jiffies - bpdu->message_age;
mod_timer(&p->message_age_timer, jiffies
- + (p->br->max_age - bpdu->message_age));
+ + (bpdu->max_age - bpdu->message_age));
}
/* called under bridge lock */
}
+void __br_set_forward_delay(struct net_bridge *br, unsigned long t)
+{
+ br->bridge_forward_delay = t;
+ if (br_is_root_bridge(br))
+ br->forward_delay = br->bridge_forward_delay;
+}
+
int br_set_forward_delay(struct net_bridge *br, unsigned long val)
{
unsigned long t = clock_t_to_jiffies(val);
+ int err = -ERANGE;
+ spin_lock_bh(&br->lock);
if (br->stp_enabled != BR_NO_STP &&
(t < BR_MIN_FORWARD_DELAY || t > BR_MAX_FORWARD_DELAY))
- return -ERANGE;
+ goto unlock;
- spin_lock_bh(&br->lock);
- br->bridge_forward_delay = t;
- if (br_is_root_bridge(br))
- br->forward_delay = br->bridge_forward_delay;
+ __br_set_forward_delay(br, t);
+ err = 0;
+
+unlock:
spin_unlock_bh(&br->lock);
- return 0;
+ return err;
}
char *envp[] = { NULL };
r = call_usermodehelper(BR_STP_PROG, argv, envp, UMH_WAIT_PROC);
+
+ spin_lock_bh(&br->lock);
+
+ if (br->bridge_forward_delay < BR_MIN_FORWARD_DELAY)
+ __br_set_forward_delay(br, BR_MIN_FORWARD_DELAY);
+ else if (br->bridge_forward_delay > BR_MAX_FORWARD_DELAY)
+ __br_set_forward_delay(br, BR_MAX_FORWARD_DELAY);
+
if (r == 0) {
br->stp_enabled = BR_USER_STP;
br_debug(br, "userspace STP started\n");
br_debug(br, "using kernel STP\n");
/* To start timers on any ports left in blocking */
- spin_lock_bh(&br->lock);
br_port_state_selection(br);
- spin_unlock_bh(&br->lock);
}
+
+ spin_unlock_bh(&br->lock);
}
static void br_stp_stop(struct net_bridge *br)
return 0;
}
- if (vid) {
- if (v->port_idx) {
- p = v->parent.port;
- br = p->br;
- dev = p->dev;
- } else {
- br = v->parent.br;
- dev = br->dev;
- }
- ops = dev->netdev_ops;
-
- if (p && (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) {
- /* Add VLAN to the device filter if it is supported.
- * Stricly speaking, this is not necessary now, since
- * devices are made promiscuous by the bridge, but if
- * that ever changes this code will allow tagged
- * traffic to enter the bridge.
- */
- err = ops->ndo_vlan_rx_add_vid(dev, htons(ETH_P_8021Q),
- vid);
- if (err)
- return err;
- }
-
- err = br_fdb_insert(br, p, dev->dev_addr, vid);
- if (err) {
- br_err(br, "failed insert local address into bridge "
- "forwarding table\n");
- goto out_filt;
- }
+ if (v->port_idx) {
+ p = v->parent.port;
+ br = p->br;
+ dev = p->dev;
+ } else {
+ br = v->parent.br;
+ dev = br->dev;
+ }
+ ops = dev->netdev_ops;
+
+ if (p && (dev->features & NETIF_F_HW_VLAN_CTAG_FILTER)) {
+ /* Add VLAN to the device filter if it is supported.
+ * Strictly speaking, this is not necessary now, since
+ * devices are made promiscuous by the bridge, but if
+ * that ever changes this code will allow tagged
+ * traffic to enter the bridge.
+ */
+ err = ops->ndo_vlan_rx_add_vid(dev, htons(ETH_P_8021Q),
+ vid);
+ if (err)
+ return err;
+ }
+ err = br_fdb_insert(br, p, dev->dev_addr, vid);
+ if (err) {
+ br_err(br, "failed insert local address into bridge "
+ "forwarding table\n");
+ goto out_filt;
}
set_bit(vid, v->vlan_bitmap);
__vlan_delete_pvid(v, vid);
clear_bit(vid, v->untagged_bitmap);
- if (v->port_idx && vid) {
+ if (v->port_idx) {
struct net_device *dev = v->parent.port->dev;
const struct net_device_ops *ops = dev->netdev_ops;
bool br_allowed_ingress(struct net_bridge *br, struct net_port_vlans *v,
struct sk_buff *skb, u16 *vid)
{
+ int err;
+
/* If VLAN filtering is disabled on the bridge, all packets are
* permitted.
*/
if (!v)
return false;
- if (br_vlan_get_tag(skb, vid)) {
+ err = br_vlan_get_tag(skb, vid);
+ if (!*vid) {
u16 pvid = br_get_pvid(v);
- /* Frame did not have a tag. See if pvid is set
- * on this port. That tells us which vlan untagged
- * traffic belongs to.
+ /* Frame had a tag with VID 0 or did not have a tag.
+ * See if pvid is set on this port. That tells us which
+ * vlan untagged or priority-tagged traffic belongs to.
*/
if (pvid == VLAN_N_VID)
return false;
- /* PVID is set on this port. Any untagged ingress
- * frame is considered to belong to this vlan.
+ /* PVID is set on this port. Any untagged or priority-tagged
+ * ingress frame is considered to belong to this vlan.
*/
- __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), pvid);
+ *vid = pvid;
+ if (likely(err))
+ /* Untagged Frame. */
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), pvid);
+ else
+ /* Priority-tagged Frame.
+ * At this point, we know that skb->vlan_tci had the
+ * VLAN_TAG_PRESENT bit set and its VID field was 0x000.
+ * We update only the VID field and preserve the PCP field.
+ */
+ skb->vlan_tci |= pvid;
+
return true;
}
return false;
}
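The priority-tagged branch above relies on the 802.1Q TCI layout: the low 12 bits of vlan_tci carry the VLAN ID while the top bits carry PCP/DEI, so ORing the port's PVID into a TCI whose VID field is already zero changes the VLAN ID without disturbing the priority bits. A minimal standalone sketch of that bit manipulation, with hypothetical values standing in for the bridge's skb and port state:

#include <stdint.h>
#include <stdio.h>

#define VLAN_VID_MASK	0x0fff	/* low 12 bits of the TCI: VLAN ID */

int main(void)
{
	/* Hypothetical priority-tagged frame: PCP = 5, DEI = 0, VID = 0 */
	uint16_t vlan_tci = 5u << 13;
	uint16_t pvid = 100;		/* hypothetical PVID of the port */

	/* General form: clear the VID bits, then set the PVID.  The patch
	 * uses a plain OR because the VID bits are known to be zero on the
	 * priority-tagged path. */
	vlan_tci = (vlan_tci & ~VLAN_VID_MASK) | (pvid & VLAN_VID_MASK);

	printf("tci=0x%04x  vid=%u  pcp=%u\n",
	       vlan_tci, vlan_tci & VLAN_VID_MASK, vlan_tci >> 13);
	return 0;
}

With these numbers the TCI goes from 0xa000 to 0xa064: VID 100, PCP still 5.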
-/* Must be protected by RTNL */
+/* Must be protected by RTNL.
+ * Must be called with vid in range from 1 to 4094 inclusive.
+ */
int br_vlan_add(struct net_bridge *br, u16 vid, u16 flags)
{
struct net_port_vlans *pv = NULL;
return err;
}
-/* Must be protected by RTNL */
+/* Must be protected by RTNL.
+ * Must be called with vid in range from 1 to 4094 inclusive.
+ */
int br_vlan_delete(struct net_bridge *br, u16 vid)
{
struct net_port_vlans *pv;
if (!pv)
return -EINVAL;
- if (vid) {
- /* If the VID !=0 remove fdb for this vid. VID 0 is special
- * in that it's the default and is always there in the fdb.
- */
- spin_lock_bh(&br->hash_lock);
- fdb_delete_by_addr(br, br->dev->dev_addr, vid);
- spin_unlock_bh(&br->hash_lock);
- }
+ spin_lock_bh(&br->hash_lock);
+ fdb_delete_by_addr(br, br->dev->dev_addr, vid);
+ spin_unlock_bh(&br->hash_lock);
__vlan_del(pv, vid);
return 0;
return 0;
}
-/* Must be protected by RTNL */
+/* Must be protected by RTNL.
+ * Must be called with vid in range from 1 to 4094 inclusive.
+ */
int nbp_vlan_add(struct net_bridge_port *port, u16 vid, u16 flags)
{
struct net_port_vlans *pv = NULL;
return err;
}
-/* Must be protected by RTNL */
+/* Must be protected by RTNL.
+ * Must be called with vid in range from 1 to 4094 inclusive.
+ */
int nbp_vlan_delete(struct net_bridge_port *port, u16 vid)
{
struct net_port_vlans *pv;
if (!pv)
return -EINVAL;
- if (vid) {
- /* If the VID !=0 remove fdb for this vid. VID 0 is special
- * in that it's the default and is always there in the fdb.
- */
- spin_lock_bh(&port->br->hash_lock);
- fdb_delete_by_addr(port->br, port->dev->dev_addr, vid);
- spin_unlock_bh(&port->br->hash_lock);
- }
+ spin_lock_bh(&port->br->hash_lock);
+ fdb_delete_by_addr(port->br, port->dev->dev_addr, vid);
+ spin_unlock_bh(&port->br->hash_lock);
return __vlan_del(pv, vid);
}
EXPORT_SYMBOL(ceph_osdc_sync);
/*
+ * Call all pending notify callbacks - for use after a watch is
+ * unregistered, to make sure no more callbacks for it will be invoked
+ */
+extern void ceph_osdc_flush_notifies(struct ceph_osd_client *osdc)
+{
+ flush_workqueue(osdc->notify_wq);
+}
+EXPORT_SYMBOL(ceph_osdc_flush_notifies);
+
+
+/*
* init, shutdown
*/
int ceph_osdc_init(struct ceph_osd_client *osdc, struct ceph_client *client)
__get_user(kmsg->msg_controllen, &umsg->msg_controllen) ||
__get_user(kmsg->msg_flags, &umsg->msg_flags))
return -EFAULT;
+ if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
+ return -EINVAL;
kmsg->msg_name = compat_ptr(tmp1);
kmsg->msg_iov = compat_ptr(tmp2);
kmsg->msg_control = compat_ptr(tmp3);
return new_map;
}
-int netif_set_xps_queue(struct net_device *dev, struct cpumask *mask, u16 index)
+int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask,
+ u16 index)
{
struct xps_dev_maps *dev_maps, *new_dev_maps = NULL;
struct xps_map *map, *new_map;
/* Delayed registration/unregisteration */
static LIST_HEAD(net_todo_list);
+static DECLARE_WAIT_QUEUE_HEAD(netdev_unregistering_wq);
static void net_set_todo(struct net_device *dev)
{
list_add_tail(&dev->todo_list, &net_todo_list);
+ dev_net(dev)->dev_unreg_count++;
}
static void rollback_registered_many(struct list_head *head)
if (dev->destructor)
dev->destructor(dev);
+ /* Report a network device has been unregistered */
+ rtnl_lock();
+ dev_net(dev)->dev_unreg_count--;
+ __rtnl_unlock();
+ wake_up(&netdev_unregistering_wq);
+
/* Free network device */
kobject_put(&dev->dev.kobj);
}
rtnl_unlock();
}
+static void __net_exit rtnl_lock_unregistering(struct list_head *net_list)
+{
+ /* Return with the rtnl_lock held when there are no network
+ * devices unregistering in any network namespace in net_list.
+ */
+ struct net *net;
+ bool unregistering;
+ DEFINE_WAIT(wait);
+
+ for (;;) {
+ prepare_to_wait(&netdev_unregistering_wq, &wait,
+ TASK_UNINTERRUPTIBLE);
+ unregistering = false;
+ rtnl_lock();
+ list_for_each_entry(net, net_list, exit_list) {
+ if (net->dev_unreg_count > 0) {
+ unregistering = true;
+ break;
+ }
+ }
+ if (!unregistering)
+ break;
+ __rtnl_unlock();
+ schedule();
+ }
+ finish_wait(&netdev_unregistering_wq, &wait);
+}
+
static void __net_exit default_device_exit_batch(struct list_head *net_list)
{
/* At exit all network devices must be removed from a network
struct net *net;
LIST_HEAD(dev_kill_list);
- rtnl_lock();
+ /* To prevent network device cleanup code from dereferencing
+ * loopback devices or network devices that have been freed,
+ * wait here for all pending unregistrations to complete
+ * before unregistering the loopback device and allowing the
+ * network namespace to be freed.
+ *
+ * The netdev todo list containing all network device
+ * unregistrations that happen in default_device_exit_batch
+ * will run in the rtnl_unlock() at the end of
+ * default_device_exit_batch.
+ */
+ rtnl_lock_unregistering(net_list);
list_for_each_entry(net, net_list, exit_list) {
for_each_netdev_reverse(net, dev) {
if (dev->rtnl_link_ops)
struct sk_filter *fp = container_of(rcu, struct sk_filter, rcu);
bpf_jit_free(fp);
- kfree(fp);
}
EXPORT_SYMBOL(sk_filter_release_rcu);
if (fprog->filter == NULL)
return -EINVAL;
- fp = kmalloc(fsize + sizeof(*fp), GFP_KERNEL);
+ fp = kmalloc(sk_filter_size(fprog->len), GFP_KERNEL);
if (!fp)
return -ENOMEM;
memcpy(fp->insns, fprog->filter, fsize);
{
struct sk_filter *fp, *old_fp;
unsigned int fsize = sizeof(struct sock_filter) * fprog->len;
+ unsigned int sk_fsize = sk_filter_size(fprog->len);
int err;
if (sock_flag(sk, SOCK_FILTER_LOCKED))
if (fprog->filter == NULL)
return -EINVAL;
- fp = sock_kmalloc(sk, fsize+sizeof(*fp), GFP_KERNEL);
+ fp = sock_kmalloc(sk, sk_fsize, GFP_KERNEL);
if (!fp)
return -ENOMEM;
if (copy_from_user(fp->insns, fprog->filter, fsize)) {
- sock_kfree_s(sk, fp, fsize+sizeof(*fp));
+ sock_kfree_s(sk, fp, sk_fsize);
return -EFAULT;
}
if (poff >= 0) {
__be32 *ports, _ports;
- nhoff += poff;
- ports = skb_header_pointer(skb, nhoff, sizeof(_ports), &_ports);
+ ports = skb_header_pointer(skb, nhoff + poff,
+ sizeof(_ports), &_ports);
if (ports)
flow->ports = *ports;
}
return;
proto = ntohs(eth_hdr(skb)->h_proto);
- if (proto == ETH_P_IP) {
+ if (proto == ETH_P_ARP) {
struct arphdr *arp;
unsigned char *arp_ptr;
/* No arp on this interface */
void netpoll_cleanup(struct netpoll *np)
{
- if (!np->dev)
- return;
-
rtnl_lock();
+ if (!np->dev)
+ goto out;
__netpoll_cleanup(np);
- rtnl_unlock();
-
dev_put(np->dev);
np->dev = NULL;
+out:
+ rtnl_unlock();
}
EXPORT_SYMBOL(netpoll_cleanup);
#include <net/secure_seq.h>
-static u32 net_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned;
+#if IS_ENABLED(CONFIG_IPV6) || IS_ENABLED(CONFIG_INET)
+#define NET_SECRET_SIZE (MD5_MESSAGE_BYTES / 4)
-void net_secret_init(void)
+static u32 net_secret[NET_SECRET_SIZE] ____cacheline_aligned;
+
+static void net_secret_init(void)
{
- get_random_bytes(net_secret, sizeof(net_secret));
+ u32 tmp;
+ int i;
+
+ if (likely(net_secret[0]))
+ return;
+
+ for (i = NET_SECRET_SIZE; i > 0;) {
+ do {
+ get_random_bytes(&tmp, sizeof(tmp));
+ } while (!tmp);
+ cmpxchg(&net_secret[--i], 0, tmp);
+ }
}
+#endif
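net_secret_init() above replaces one-shot seeding at boot with on-demand, lock-free initialization: every caller checks net_secret[0], the fill loop walks the array from the top down so that index 0 is written last, per-word cmpxchg lets racing first callers proceed without a lock, and zero values from the RNG are retried so a zero word can safely mean "not yet initialized". A minimal userspace sketch of the same idiom using C11 atomics and a stand-in random source (not the kernel's get_random_bytes()):

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SECRET_WORDS 4

static _Atomic uint32_t secret[SECRET_WORDS];

/* Stand-in for a real random source; only the init pattern matters here. */
static uint32_t random_u32(void)
{
	return ((uint32_t)rand() << 16) ^ (uint32_t)rand();
}

static void secret_init(void)
{
	/* Fast path: word 0 is written last and never set to zero. */
	if (atomic_load(&secret[0]))
		return;

	for (int i = SECRET_WORDS; i > 0;) {
		uint32_t tmp, zero = 0;

		do {
			tmp = random_u32();
		} while (!tmp);		/* zero is reserved for "uninitialized" */

		/* Only the first thread to see a still-zero slot wins. */
		atomic_compare_exchange_strong(&secret[--i], &zero, tmp);
	}
}

int main(void)
{
	secret_init();
	secret_init();		/* second call returns on the fast path */
	for (int i = 0; i < SECRET_WORDS; i++)
		printf("secret[%d] = 0x%08x\n", i, (unsigned)secret[i]);
	return 0;
}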
#ifdef CONFIG_INET
static u32 seq_scale(u32 seq)
u32 hash[MD5_DIGEST_WORDS];
u32 i;
+ net_secret_init();
memcpy(hash, saddr, 16);
for (i = 0; i < 4; i++)
secret[i] = net_secret[i] + (__force u32)daddr[i];
u32 hash[MD5_DIGEST_WORDS];
u32 i;
+ net_secret_init();
memcpy(hash, saddr, 16);
for (i = 0; i < 4; i++)
secret[i] = net_secret[i] + (__force u32) daddr[i];
{
u32 hash[MD5_DIGEST_WORDS];
+ net_secret_init();
hash[0] = (__force __u32) daddr;
hash[1] = net_secret[13];
hash[2] = net_secret[14];
{
__u32 hash[4];
+ net_secret_init();
memcpy(hash, daddr, 16);
md5_transform(hash, net_secret);
{
u32 hash[MD5_DIGEST_WORDS];
+ net_secret_init();
hash[0] = (__force u32)saddr;
hash[1] = (__force u32)daddr;
hash[2] = ((__force u16)sport << 16) + (__force u16)dport;
{
u32 hash[MD5_DIGEST_WORDS];
+ net_secret_init();
hash[0] = (__force u32)saddr;
hash[1] = (__force u32)daddr;
hash[2] = (__force u32)dport ^ net_secret[14];
u32 hash[MD5_DIGEST_WORDS];
u64 seq;
+ net_secret_init();
hash[0] = (__force u32)saddr;
hash[1] = (__force u32)daddr;
hash[2] = ((__force u16)sport << 16) + (__force u16)dport;
u64 seq;
u32 i;
+ net_secret_init();
memcpy(hash, saddr, 16);
for (i = 0; i < 4; i++)
secret[i] = net_secret[i] + daddr[i];
sk->sk_ll_usec = sysctl_net_busy_read;
#endif
+ sk->sk_pacing_rate = ~0U;
/*
* Before updating sk_refcnt, we must commit prior changes to memory
* (Documentation/RCU/rculist_nulls.txt for details)
if (dst)
dst->ops->redirect(dst, sk, skb);
+ goto out;
}
if (type == ICMPV6_PKT_TOOBIG) {
real_dev = dev_get_by_index(src_net, nla_get_u32(tb[IFLA_LINK]));
if (!real_dev)
return -ENODEV;
+ if (real_dev->type != ARPHRD_IEEE802154)
+ return -EINVAL;
lowpan_dev_info(dev)->real_dev = real_dev;
lowpan_dev_info(dev)->fragment_tag = 0;
entry->ldev = dev;
+ /* Set the lowpan hardware address to the wpan hardware address. */
+ memcpy(dev->dev_addr, real_dev->dev_addr, IEEE802154_ADDR_LEN);
+
mutex_lock(&lowpan_dev_info(dev)->dev_list_mtx);
INIT_LIST_HEAD(&entry->list);
list_add_tail(&entry->list, &lowpan_devices);
get_random_bytes(&rnd, sizeof(rnd));
} while (rnd == 0);
- if (cmpxchg(&inet_ehash_secret, 0, rnd) == 0) {
+ if (cmpxchg(&inet_ehash_secret, 0, rnd) == 0)
get_random_bytes(&ipv6_hash_secret, sizeof(ipv6_hash_secret));
- net_secret_init();
- }
}
EXPORT_SYMBOL(build_ehash_secret);
pip->saddr = fl4.saddr;
pip->protocol = IPPROTO_IGMP;
pip->tot_len = 0; /* filled in later */
- ip_select_ident(pip, &rt->dst, NULL);
+ ip_select_ident(skb, &rt->dst, NULL);
((u8 *)&pip[1])[0] = IPOPT_RA;
((u8 *)&pip[1])[1] = 4;
((u8 *)&pip[1])[2] = 0;
iph->daddr = dst;
iph->saddr = fl4.saddr;
iph->protocol = IPPROTO_IGMP;
- ip_select_ident(iph, &rt->dst, NULL);
+ ip_select_ident(skb, &rt->dst, NULL);
((u8 *)&iph[1])[0] = IPOPT_RA;
((u8 *)&iph[1])[1] = 4;
((u8 *)&iph[1])[2] = 0;
in_dev->mr_gq_running = 0;
igmpv3_send_report(in_dev, NULL);
- __in_dev_put(in_dev);
+ in_dev_put(in_dev);
}
static void igmp_ifc_timer_expire(unsigned long data)
igmp_ifc_start_timer(in_dev,
unsolicited_report_interval(in_dev));
}
- __in_dev_put(in_dev);
+ in_dev_put(in_dev);
}
static void igmp_ifc_event(struct in_device *in_dev)
if (unlikely(!INET_TW_MATCH(sk, net, acookie,
saddr, daddr, ports,
dif))) {
- sock_put(sk);
+ inet_twsk_put(inet_twsk(sk));
goto begintw;
}
goto out;
* At the moment of writing this notes identifier of IP packets is generated
* to be unpredictable using this code only for packets subjected
* (actually or potentially) to defragmentation. I.e. DF packets less than
- * PMTU in size uses a constant ID and do not use this code (see
- * ip_select_ident() in include/net/ip.h).
+ * PMTU in size when local fragmentation is disabled use a constant ID and do
+ * not use this code (see ip_select_ident() in include/net/ip.h).
*
* Route cache entries hold references to our nodes.
* New cache entries get references via lookup by destination IP address in
iph->daddr = (opt && opt->opt.srr ? opt->opt.faddr : daddr);
iph->saddr = saddr;
iph->protocol = sk->sk_protocol;
- ip_select_ident(iph, &rt->dst, sk);
+ ip_select_ident(skb, &rt->dst, sk);
if (opt && opt->opt.optlen) {
iph->ihl += opt->opt.optlen>>2;
ip_options_build(skb, &inet_opt->opt, inet->inet_daddr, rt, 0);
}
- ip_select_ident_more(iph, &rt->dst, sk,
+ ip_select_ident_more(skb, &rt->dst, sk,
(skb_shinfo(skb)->gso_segs ?: 1) - 1);
skb->priority = sk->sk_priority;
/* initialize protocol header pointer */
skb->transport_header = skb->network_header + fragheaderlen;
- skb->ip_summed = CHECKSUM_PARTIAL;
skb->csum = 0;
- /* specify the length of each IP datagram fragment */
- skb_shinfo(skb)->gso_size = maxfraglen - fragheaderlen;
- skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
+
__skb_queue_tail(queue, skb);
+ } else if (skb_is_gso(skb)) {
+ goto append;
}
+ skb->ip_summed = CHECKSUM_PARTIAL;
+ /* specify the length of each IP datagram fragment */
+ skb_shinfo(skb)->gso_size = maxfraglen - fragheaderlen;
+ skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
+
+append:
return skb_append_datato_frags(sk, skb, getfrag, from,
(length - transhdrlen));
}
else
ttl = ip_select_ttl(inet, &rt->dst);
- iph = (struct iphdr *)skb->data;
+ iph = ip_hdr(skb);
iph->version = 4;
iph->ihl = 5;
iph->tos = inet->tos;
iph->ttl = ttl;
iph->protocol = sk->sk_protocol;
ip_copy_addrs(iph, fl4);
- ip_select_ident(iph, &rt->dst, sk);
+ ip_select_ident(skb, &rt->dst, sk);
if (opt) {
iph->ihl += opt->optlen>>2;
tunnel->err_count = 0;
}
+ tos = ip_tunnel_ecn_encap(tos, inner_iph, skb);
ttl = tnl_params->ttl;
if (ttl == 0) {
if (skb->protocol == htons(ETH_P_IP))
max_headroom = LL_RESERVED_SPACE(rt->dst.dev) + sizeof(struct iphdr)
+ rt->dst.header_len;
- if (max_headroom > dev->needed_headroom) {
+ if (max_headroom > dev->needed_headroom)
dev->needed_headroom = max_headroom;
- if (skb_cow_head(skb, dev->needed_headroom)) {
- dev->stats.tx_dropped++;
- dev_kfree_skb(skb);
- return;
- }
+
+ if (skb_cow_head(skb, dev->needed_headroom)) {
+ dev->stats.tx_dropped++;
+ dev_kfree_skb(skb);
+ return;
}
err = iptunnel_xmit(rt, skb, fl4.saddr, fl4.daddr, protocol,
- ip_tunnel_ecn_encap(tos, inner_iph, skb), ttl, df,
- !net_eq(tunnel->net, dev_net(dev)));
+ tos, ttl, df, !net_eq(tunnel->net, dev_net(dev)));
iptunnel_xmit_stats(err, &dev->stats, dev->tstats);
return;
/* FB netdevice is special: we have one, and only one per netns.
* Allowing to move it to another netns is clearly unsafe.
*/
- if (!IS_ERR(itn->fb_tunnel_dev))
+ if (!IS_ERR(itn->fb_tunnel_dev)) {
itn->fb_tunnel_dev->features |= NETIF_F_NETNS_LOCAL;
+ ip_tunnel_add(itn, netdev_priv(itn->fb_tunnel_dev));
+ }
rtnl_unlock();
return PTR_RET(itn->fb_tunnel_dev);
if (!net_eq(dev_net(t->dev), net))
unregister_netdevice_queue(t->dev, head);
}
- if (itn->fb_tunnel_dev)
- unregister_netdevice_queue(itn->fb_tunnel_dev, head);
}
void ip_tunnel_delete_net(struct ip_tunnel_net *itn, struct rtnl_link_ops *ops)
memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
/* Push down and install the IP header. */
- __skb_push(skb, sizeof(struct iphdr));
+ skb_push(skb, sizeof(struct iphdr));
skb_reset_network_header(skb);
iph = ip_hdr(skb);
iph->saddr, iph->daddr, 0);
if (tunnel != NULL) {
struct pcpu_tstats *tstats;
+ u32 oldmark = skb->mark;
+ int ret;
- if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
+
+ /* temporarily mark the skb with the tunnel o_key, to
+ * only match policies with this mark.
+ */
+ skb->mark = be32_to_cpu(tunnel->parms.o_key);
+ ret = xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb);
+ skb->mark = oldmark;
+ if (!ret)
return -1;
tstats = this_cpu_ptr(tunnel->dev->tstats);
tstats->rx_bytes += skb->len;
u64_stats_update_end(&tstats->syncp);
- skb->mark = 0;
secpath_reset(skb);
skb->dev = tunnel->dev;
return 1;
memset(&fl4, 0, sizeof(fl4));
flowi4_init_output(&fl4, tunnel->parms.link,
- be32_to_cpu(tunnel->parms.i_key), RT_TOS(tos),
+ be32_to_cpu(tunnel->parms.o_key), RT_TOS(tos),
RT_SCOPE_UNIVERSE,
IPPROTO_IPIP, 0,
dst, tiph->saddr, 0, 0);
iph->protocol = IPPROTO_IPIP;
iph->ihl = 5;
iph->tot_len = htons(skb->len);
- ip_select_ident(iph, skb_dst(skb), NULL);
+ ip_select_ident(skb, skb_dst(skb), NULL);
ip_send_check(iph);
memset(&(IPCB(skb)->opt), 0, sizeof(IPCB(skb)->opt));
if (th == NULL)
return NF_DROP;
- synproxy_parse_options(skb, par->thoff, th, &opts);
+ if (!synproxy_parse_options(skb, par->thoff, th, &opts))
+ return NF_DROP;
if (th->syn && !(th->ack || th->fin || th->rst)) {
/* Initial SYN from client */
/* fall through */
case TCP_CONNTRACK_SYN_SENT:
- synproxy_parse_options(skb, thoff, th, &opts);
+ if (!synproxy_parse_options(skb, thoff, th, &opts))
+ return NF_DROP;
if (!th->syn && th->ack &&
CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) {
if (!th->syn || !th->ack)
break;
- synproxy_parse_options(skb, thoff, th, &opts);
+ if (!synproxy_parse_options(skb, thoff, th, &opts))
+ return NF_DROP;
+
if (opts.options & XT_SYNPROXY_OPT_TIMESTAMP)
synproxy->tsoff = opts.tsval - synproxy->its;
if (type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED)
ipv4_sk_update_pmtu(skb, sk, info);
- else if (type == ICMP_REDIRECT)
+ else if (type == ICMP_REDIRECT) {
ipv4_sk_redirect(skb, sk);
+ return;
+ }
/* Report error on raw socket, if:
1. User requested ip_recverr.
iph->check = 0;
iph->tot_len = htons(length);
if (!iph->id)
- ip_select_ident(iph, &rt->dst, NULL);
+ ip_select_ident(skb, &rt->dst, NULL);
iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
}
RT_SCOPE_LINK);
goto make_route;
}
- if (fl4->saddr) {
+ if (!fl4->saddr) {
if (ipv4_is_multicast(fl4->daddr))
fl4->saddr = inet_select_addr(dev_out, 0,
fl4->flowi4_scope);
tp->lost_cnt_hint -= tcp_skb_pcount(prev);
}
- TCP_SKB_CB(skb)->tcp_flags |= TCP_SKB_CB(prev)->tcp_flags;
+ TCP_SKB_CB(prev)->tcp_flags |= TCP_SKB_CB(skb)->tcp_flags;
+ if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
+ TCP_SKB_CB(prev)->end_seq++;
+
if (skb == tcp_highest_sack(sk))
tcp_advance_highest_sack(sk, skb);
tcp_init_cwnd_reduction(sk, true);
tcp_set_ca_state(sk, TCP_CA_CWR);
tcp_end_cwnd_reduction(sk);
- tcp_set_ca_state(sk, TCP_CA_Open);
+ tcp_try_keep_open(sk);
NET_INC_STATS_BH(sock_net(sk),
LINUX_MIB_TCPLOSSPROBERECOVERY);
}
} else
tcp_init_metrics(sk);
+ tcp_update_pacing_rate(sk);
+
/* Prevent spurious tcp_cwnd_restart() on first data packet */
tp->lsndtime = tcp_time_stamp;
* ACKs, wait for troubles.
*/
if (crtt > tp->srtt) {
- inet_csk(sk)->icsk_rto = crtt + max(crtt >> 2, tcp_rto_min(sk));
+ /* Set RTO like tcp_rtt_estimator(), but from cached RTT. */
+ crtt >>= 3;
+ inet_csk(sk)->icsk_rto = crtt + max(2 * crtt, tcp_rto_min(sk));
} else if (tp->srtt == 0) {
/* RFC6298: 5.7 We've failed to get a valid RTT sample from
* 3WHS. This is most likely due to retransmission,
unsigned int size = 0;
unsigned int eff_sacks;
+ opts->options = 0;
+
#ifdef CONFIG_TCP_MD5SIG
*md5 = tp->af_specific->md5_lookup(sk, sk);
if (unlikely(*md5)) {
skb_orphan(skb);
skb->sk = sk;
- skb->destructor = (sysctl_tcp_limit_output_bytes > 0) ?
- tcp_wfree : sock_wfree;
+ skb->destructor = tcp_wfree;
atomic_add(skb->truesize, &sk->sk_wmem_alloc);
/* Build TCP header and checksum it. */
static void tcp_set_skb_tso_segs(const struct sock *sk, struct sk_buff *skb,
unsigned int mss_now)
{
- if (skb->len <= mss_now || !sk_can_gso(sk) ||
- skb->ip_summed == CHECKSUM_NONE) {
+ /* Make sure we own this skb before messing gso_size/gso_segs */
+ WARN_ON_ONCE(skb_cloned(skb));
+
+ if (skb->len <= mss_now || skb->ip_summed == CHECKSUM_NONE) {
/* Avoid the costly divide in the normal
* non-TSO case.
*/
if (nsize < 0)
nsize = 0;
- if (skb_cloned(skb) &&
- skb_is_nonlinear(skb) &&
- pskb_expand_head(skb, 0, 0, GFP_ATOMIC))
+ if (skb_unclone(skb, GFP_ATOMIC))
return -ENOMEM;
/* Get a new skb... force flag on. */
while ((skb = tcp_send_head(sk))) {
unsigned int limit;
-
tso_segs = tcp_init_tso_segs(sk, skb, mss_now);
BUG_ON(!tso_segs);
break;
}
- /* TSQ : sk_wmem_alloc accounts skb truesize,
- * including skb overhead. But thats OK.
+ /* TCP Small Queues :
+ * Control number of packets in qdisc/devices to two packets / or ~1 ms.
+ * This allows for :
+ * - better RTT estimation and ACK scheduling
+ * - faster recovery
+ * - high rates
*/
- if (atomic_read(&sk->sk_wmem_alloc) >= sysctl_tcp_limit_output_bytes) {
+ limit = max(skb->truesize, sk->sk_pacing_rate >> 10);
+
+ if (atomic_read(&sk->sk_wmem_alloc) > limit) {
set_bit(TSQ_THROTTLED, &tp->tsq_flags);
break;
}
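The limit computed above replaces the global sysctl_tcp_limit_output_bytes cap with a per-socket budget: at least one skb's truesize, or roughly one millisecond of data at the socket's pacing rate, whichever is larger (sk_pacing_rate is in bytes per second, and the right shift by 10 approximates division by 1000). A small sketch of that arithmetic with hypothetical numbers:

#include <stdio.h>

int main(void)
{
	/* Hypothetical flow: ~10 Gbit/s pacing rate, one 64 KB TSO skb */
	unsigned long pacing_rate = 1250000000UL;	/* bytes per second */
	unsigned long skb_truesize = 65536UL + 1024UL;	/* payload + overhead */

	/* >> 10 divides by 1024, i.e. roughly one millisecond of data */
	unsigned long one_ms_budget = pacing_rate >> 10;

	unsigned long limit = skb_truesize > one_ms_budget ?
			      skb_truesize : one_ms_budget;

	printf("1 ms budget = %lu bytes, TSQ limit = %lu bytes\n",
	       one_ms_budget, limit);
	return 0;
}

For this flow the 1 ms budget (~1.2 MB) dominates; for a slow flow the single-skb term keeps at least one packet queued toward the qdisc.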
+
limit = mss_now;
if (tso_segs > 1 && !tcp_urg_mode(tp))
limit = tcp_mss_split_point(sk, skb, mss_now,
int oldpcount = tcp_skb_pcount(skb);
if (unlikely(oldpcount > 1)) {
+ if (skb_unclone(skb, GFP_ATOMIC))
+ return -ENOMEM;
tcp_init_tso_segs(sk, skb, cur_mss);
tcp_adjust_pcount(sk, skb, oldpcount - tcp_skb_pcount(skb));
}
break;
case ICMP_REDIRECT:
ipv4_sk_redirect(skb, sk);
- break;
+ goto out;
}
/*
top_iph->frag_off = (flags & XFRM_STATE_NOPMTUDISC) ?
0 : (XFRM_MODE_SKB_CB(skb)->frag_off & htons(IP_DF));
- ip_select_ident(top_iph, dst->child, NULL);
+ ip_select_ident(skb, dst->child, NULL);
top_iph->ttl = ip4_dst_hoplimit(dst->child);
memset(fl4, 0, sizeof(struct flowi4));
fl4->flowi4_mark = skb->mark;
+ fl4->flowi4_oif = skb_dst(skb)->dev->ifindex;
if (!ip_is_fragment(iph)) {
switch (iph->protocol) {
return false;
}
+/* Compares an address/prefix_len with addresses on device @dev.
+ * Returns true if a matching address is found.
+ */
+bool ipv6_chk_custom_prefix(const struct in6_addr *addr,
+ const unsigned int prefix_len, struct net_device *dev)
+{
+ struct inet6_dev *idev;
+ struct inet6_ifaddr *ifa;
+ bool ret = false;
+
+ rcu_read_lock();
+ idev = __in6_dev_get(dev);
+ if (idev) {
+ read_lock_bh(&idev->lock);
+ list_for_each_entry(ifa, &idev->addr_list, if_list) {
+ ret = ipv6_prefix_equal(addr, &ifa->addr, prefix_len);
+ if (ret)
+ break;
+ }
+ read_unlock_bh(&idev->lock);
+ }
+ rcu_read_unlock();
+
+ return ret;
+}
+EXPORT_SYMBOL(ipv6_chk_custom_prefix);
+
int ipv6_chk_prefix(const struct in6_addr *addr, struct net_device *dev)
{
struct inet6_dev *idev;
else
stored_lft = 0;
if (!update_lft && !create && stored_lft) {
- if (valid_lft > MIN_VALID_LIFETIME ||
- valid_lft > stored_lft)
- update_lft = 1;
- else if (stored_lft <= MIN_VALID_LIFETIME) {
- /* valid_lft <= stored_lft is always true */
- /*
- * RFC 4862 Section 5.5.3e:
- * "Note that the preferred lifetime of
- * the corresponding address is always
- * reset to the Preferred Lifetime in
- * the received Prefix Information
- * option, regardless of whether the
- * valid lifetime is also reset or
- * ignored."
- *
- * So if the preferred lifetime in
- * this advertisement is different
- * than what we have stored, but the
- * valid lifetime is invalid, just
- * reset prefered_lft.
- *
- * We must set the valid lifetime
- * to the stored lifetime since we'll
- * be updating the timestamp below,
- * else we'll set it back to the
- * minimum.
- */
- if (prefered_lft != ifp->prefered_lft) {
- valid_lft = stored_lft;
- update_lft = 1;
- }
- } else {
- valid_lft = MIN_VALID_LIFETIME;
- if (valid_lft < prefered_lft)
- prefered_lft = valid_lft;
- update_lft = 1;
- }
+ const u32 minimum_lft = min(
+ stored_lft, (u32)MIN_VALID_LIFETIME);
+ valid_lft = max(valid_lft, minimum_lft);
+
+ /* RFC4862 Section 5.5.3e:
+ * "Note that the preferred lifetime of the
+ * corresponding address is always reset to
+ * the Preferred Lifetime in the received
+ * Prefix Information option, regardless of
+ * whether the valid lifetime is also reset or
+ * ignored."
+ *
+ * So we should always update prefered_lft here.
+ */
+ update_lft = 1;
}
if (update_lft) {
struct ip_auth_hdr *ah = (struct ip_auth_hdr*)(skb->data+offset);
struct xfrm_state *x;
- if (type != ICMPV6_DEST_UNREACH &&
- type != ICMPV6_PKT_TOOBIG &&
+ if (type != ICMPV6_PKT_TOOBIG &&
type != NDISC_REDIRECT)
return;
struct ip_esp_hdr *esph = (struct ip_esp_hdr *)(skb->data + offset);
struct xfrm_state *x;
- if (type != ICMPV6_DEST_UNREACH &&
- type != ICMPV6_PKT_TOOBIG &&
+ if (type != ICMPV6_PKT_TOOBIG &&
type != NDISC_REDIRECT)
return;
}
if (unlikely(!INET6_TW_MATCH(sk, net, saddr, daddr,
ports, dif))) {
- sock_put(sk);
+ inet_twsk_put(inet_twsk(sk));
goto begintw;
}
goto out;
struct ip6_tnl *tunnel = netdev_priv(dev);
struct net_device *tdev; /* Device to other host */
struct ipv6hdr *ipv6h; /* Our new IP header */
- unsigned int max_headroom; /* The extra header space needed */
+ unsigned int max_headroom = 0; /* The extra header space needed */
int gre_hlen;
struct ipv6_tel_txoption opt;
int mtu;
skb_scrub_packet(skb, !net_eq(tunnel->net, dev_net(dev)));
- max_headroom = LL_RESERVED_SPACE(tdev) + gre_hlen + dst->header_len;
+ max_headroom += LL_RESERVED_SPACE(tdev) + gre_hlen + dst->header_len;
if (skb_headroom(skb) < max_headroom || skb_shared(skb) ||
(skb_cloned(skb) && !skb_clone_writable(skb, 0))) {
if (t->parms.o_flags&GRE_SEQ)
addend += 4;
}
+ t->hlen = addend;
if (p->flags & IP6_TNL_F_CAP_XMIT) {
int strict = (ipv6_addr_type(&p->raddr) &
}
ip6_rt_put(rt);
}
-
- t->hlen = addend;
}
static int ip6gre_tnl_change(struct ip6_tnl *t,
static int ip6gre_tunnel_change_mtu(struct net_device *dev, int new_mtu)
{
- struct ip6_tnl *tunnel = netdev_priv(dev);
if (new_mtu < 68 ||
- new_mtu > 0xFFF8 - dev->hard_header_len - tunnel->hlen)
+ new_mtu > 0xFFF8 - dev->hard_header_len)
return -EINVAL;
dev->mtu = new_mtu;
return 0;
}
rcu_read_lock_bh();
- nexthop = rt6_nexthop((struct rt6_info *)dst, &ipv6_hdr(skb)->daddr);
+ nexthop = rt6_nexthop((struct rt6_info *)dst);
neigh = __ipv6_neigh_lookup_noref(dst->dev, nexthop);
if (unlikely(!neigh))
neigh = __neigh_create(&nd_tbl, nexthop, dst->dev, false);
*/
rt = (struct rt6_info *) *dst;
rcu_read_lock_bh();
- n = __ipv6_neigh_lookup_noref(rt->dst.dev, rt6_nexthop(rt, &fl6->daddr));
+ n = __ipv6_neigh_lookup_noref(rt->dst.dev, rt6_nexthop(rt));
err = n && !(n->nud_state & NUD_VALID) ? -EINVAL : 0;
rcu_read_unlock_bh();
{
struct sk_buff *skb;
+ struct frag_hdr fhdr;
int err;
/* There is support for UDP large send offload by network
skb->transport_header = skb->network_header + fragheaderlen;
skb->protocol = htons(ETH_P_IPV6);
- skb->ip_summed = CHECKSUM_PARTIAL;
skb->csum = 0;
- }
-
- err = skb_append_datato_frags(sk,skb, getfrag, from,
- (length - transhdrlen));
- if (!err) {
- struct frag_hdr fhdr;
- /* Specify the length of each IPv6 datagram fragment.
- * It has to be a multiple of 8.
- */
- skb_shinfo(skb)->gso_size = (mtu - fragheaderlen -
- sizeof(struct frag_hdr)) & ~7;
- skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
- ipv6_select_ident(&fhdr, rt);
- skb_shinfo(skb)->ip6_frag_id = fhdr.identification;
__skb_queue_tail(&sk->sk_write_queue, skb);
-
- return 0;
+ } else if (skb_is_gso(skb)) {
+ goto append;
}
- /* There is not enough support do UPD LSO,
- * so follow normal path
- */
- kfree_skb(skb);
- return err;
+ skb->ip_summed = CHECKSUM_PARTIAL;
+ /* Specify the length of each IPv6 datagram fragment.
+ * It has to be a multiple of 8.
+ */
+ skb_shinfo(skb)->gso_size = (mtu - fragheaderlen -
+ sizeof(struct frag_hdr)) & ~7;
+ skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
+ ipv6_select_ident(&fhdr, rt);
+ skb_shinfo(skb)->ip6_frag_id = fhdr.identification;
+
+append:
+ return skb_append_datato_frags(sk, skb, getfrag, from,
+ (length - transhdrlen));
}
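ip6_ufo_append_data() above sizes each UDP fragment so that the fragmentable part is a multiple of 8 bytes, since IPv6 fragment offsets are expressed in 8-byte units: gso_size = (mtu - fragheaderlen - sizeof(struct frag_hdr)) & ~7. A worked example with hypothetical values (1500-byte MTU, a bare 40-byte IPv6 header, 8-byte fragment header):

#include <stdio.h>

int main(void)
{
	unsigned int mtu = 1500;		/* hypothetical path MTU */
	unsigned int fragheaderlen = 40;	/* bare IPv6 header, no ext hdrs */
	unsigned int frag_hdr_len = 8;		/* size of the fragment header */

	/* Round down to a multiple of 8: fragment offsets count 8-byte units */
	unsigned int gso_size = (mtu - fragheaderlen - frag_hdr_len) & ~7u;

	printf("gso_size = %u bytes of payload per fragment\n", gso_size);
	return 0;
}

1500 - 40 - 8 = 1452, rounded down to 1448.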
static inline struct ipv6_opt_hdr *ip6_opt_dup(struct ipv6_opt_hdr *src,
* --yoshfuji
*/
- cork->length += length;
- if (length > mtu) {
- int proto = sk->sk_protocol;
- if (dontfrag && (proto == IPPROTO_UDP || proto == IPPROTO_RAW)){
- ipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen);
- return -EMSGSIZE;
- }
-
- if (proto == IPPROTO_UDP &&
- (rt->dst.dev->features & NETIF_F_UFO)) {
+ if ((length > mtu) && dontfrag && (sk->sk_protocol == IPPROTO_UDP ||
+ sk->sk_protocol == IPPROTO_RAW)) {
+ ipv6_local_rxpmtu(sk, fl6, mtu-exthdrlen);
+ return -EMSGSIZE;
+ }
- err = ip6_ufo_append_data(sk, getfrag, from, length,
- hh_len, fragheaderlen,
- transhdrlen, mtu, flags, rt);
- if (err)
- goto error;
- return 0;
- }
+ skb = skb_peek_tail(&sk->sk_write_queue);
+ cork->length += length;
+ if (((length > mtu) ||
+ (skb && skb_is_gso(skb))) &&
+ (sk->sk_protocol == IPPROTO_UDP) &&
+ (rt->dst.dev->features & NETIF_F_UFO)) {
+ err = ip6_ufo_append_data(sk, getfrag, from, length,
+ hh_len, fragheaderlen,
+ transhdrlen, mtu, flags, rt);
+ if (err)
+ goto error;
+ return 0;
}
- if ((skb = skb_peek_tail(&sk->sk_write_queue)) == NULL)
+ if (!skb)
goto alloc_new_skb;
while (length > 0) {
static int
ip6_tnl_change_mtu(struct net_device *dev, int new_mtu)
{
- if (new_mtu < IPV6_MIN_MTU) {
- return -EINVAL;
+ struct ip6_tnl *tnl = netdev_priv(dev);
+
+ if (tnl->parms.proto == IPPROTO_IPIP) {
+ if (new_mtu < 68)
+ return -EINVAL;
+ } else {
+ if (new_mtu < IPV6_MIN_MTU)
+ return -EINVAL;
}
+ if (new_mtu > 0xFFF8 - dev->hard_header_len)
+ return -EINVAL;
dev->mtu = new_mtu;
return 0;
}
if (nla_put_u32(skb, IFLA_IPTUN_LINK, parm->link) ||
nla_put(skb, IFLA_IPTUN_LOCAL, sizeof(struct in6_addr),
- &parm->raddr) ||
- nla_put(skb, IFLA_IPTUN_REMOTE, sizeof(struct in6_addr),
&parm->laddr) ||
+ nla_put(skb, IFLA_IPTUN_REMOTE, sizeof(struct in6_addr),
+ &parm->raddr) ||
nla_put_u8(skb, IFLA_IPTUN_TTL, parm->hop_limit) ||
nla_put_u8(skb, IFLA_IPTUN_ENCAP_LIMIT, parm->encap_limit) ||
nla_put_be32(skb, IFLA_IPTUN_FLOWINFO, parm->flowinfo) ||
}
}
- t = rtnl_dereference(ip6n->tnls_wc[0]);
- unregister_netdevice_queue(t->dev, &list);
unregister_netdevice_many(&list);
}
if (!ip6n->fb_tnl_dev)
goto err_alloc_dev;
dev_net_set(ip6n->fb_tnl_dev, net);
+ ip6n->fb_tnl_dev->rtnl_link_ops = &ip6_link_ops;
/* FB netdevice is special: we have one, and only one per netns.
* Allowing to move it to another netns is clearly unsafe.
*/
(struct ip_comp_hdr *)(skb->data + offset);
struct xfrm_state *x;
- if (type != ICMPV6_DEST_UNREACH &&
- type != ICMPV6_PKT_TOOBIG &&
+ if (type != ICMPV6_PKT_TOOBIG &&
type != NDISC_REDIRECT)
return;
if (idev->mc_dad_count)
mld_dad_start_timer(idev, idev->mc_maxdelay);
}
- __in6_dev_put(idev);
+ in6_dev_put(idev);
}
static int ip6_mc_del1_src(struct ifmcaddr6 *pmc, int sfmode,
idev->mc_gq_running = 0;
mld_send_report(idev, NULL);
- __in6_dev_put(idev);
+ in6_dev_put(idev);
}
static void mld_ifc_timer_expire(unsigned long data)
if (idev->mc_ifc_count)
mld_ifc_start_timer(idev, idev->mc_maxdelay);
}
- __in6_dev_put(idev);
+ in6_dev_put(idev);
}
static void mld_ifc_event(struct inet6_dev *idev)
if (th == NULL)
return NF_DROP;
- synproxy_parse_options(skb, par->thoff, th, &opts);
+ if (!synproxy_parse_options(skb, par->thoff, th, &opts))
+ return NF_DROP;
if (th->syn && !(th->ack || th->fin || th->rst)) {
/* Initial SYN from client */
/* fall through */
case TCP_CONNTRACK_SYN_SENT:
- synproxy_parse_options(skb, thoff, th, &opts);
+ if (!synproxy_parse_options(skb, thoff, th, &opts))
+ return NF_DROP;
if (!th->syn && th->ack &&
CTINFO2DIR(ctinfo) == IP_CT_DIR_ORIGINAL) {
if (!th->syn || !th->ack)
break;
- synproxy_parse_options(skb, thoff, th, &opts);
+ if (!synproxy_parse_options(skb, thoff, th, &opts))
+ return NF_DROP;
+
if (opts.options & XT_SYNPROXY_OPT_TIMESTAMP)
synproxy->tsoff = opts.tsval - synproxy->its;
hdr = (struct icmp6hdr *)(skb->data + hdroff);
l3proto->csum_update(skb, iphdroff, &hdr->icmp6_cksum,
tuple, maniptype);
- if (hdr->icmp6_code == ICMPV6_ECHO_REQUEST ||
- hdr->icmp6_code == ICMPV6_ECHO_REPLY) {
+ if (hdr->icmp6_type == ICMPV6_ECHO_REQUEST ||
+ hdr->icmp6_type == ICMPV6_ECHO_REPLY) {
inet_proto_csum_replace2(&hdr->icmp6_cksum, skb,
hdr->icmp6_identifier,
tuple->src.u.icmp.id, 0);
ip6_sk_update_pmtu(skb, sk, info);
harderr = (np->pmtudisc == IPV6_PMTUDISC_DO);
}
- if (type == NDISC_REDIRECT)
+ if (type == NDISC_REDIRECT) {
ip6_sk_redirect(skb, sk);
+ return;
+ }
if (np->recverr) {
u8 *payload = skb->data;
if (!inet->hdrincl)
}
#ifdef CONFIG_IPV6_ROUTER_PREF
+struct __rt6_probe_work {
+ struct work_struct work;
+ struct in6_addr target;
+ struct net_device *dev;
+};
+
+static void rt6_probe_deferred(struct work_struct *w)
+{
+ struct in6_addr mcaddr;
+ struct __rt6_probe_work *work =
+ container_of(w, struct __rt6_probe_work, work);
+
+ addrconf_addr_solict_mult(&work->target, &mcaddr);
+ ndisc_send_ns(work->dev, NULL, &work->target, &mcaddr, NULL);
+ dev_put(work->dev);
+ kfree(w);
+}
+
static void rt6_probe(struct rt6_info *rt)
{
struct neighbour *neigh;
if (!neigh ||
time_after(jiffies, neigh->updated + rt->rt6i_idev->cnf.rtr_probe_interval)) {
- struct in6_addr mcaddr;
- struct in6_addr *target;
+ struct __rt6_probe_work *work;
- if (neigh) {
+ work = kmalloc(sizeof(*work), GFP_ATOMIC);
+
+ if (neigh && work)
neigh->updated = jiffies;
+
+ if (neigh)
write_unlock(&neigh->lock);
- }
- target = (struct in6_addr *)&rt->rt6i_gateway;
- addrconf_addr_solict_mult(target, &mcaddr);
- ndisc_send_ns(rt->dst.dev, NULL, target, &mcaddr, NULL);
+ if (work) {
+ INIT_WORK(&work->work, rt6_probe_deferred);
+ work->target = rt->rt6i_gateway;
+ dev_hold(rt->dst.dev);
+ work->dev = rt->dst.dev;
+ schedule_work(&work->work);
+ }
} else {
out:
write_unlock(&neigh->lock);
if (ort->rt6i_dst.plen != 128 &&
ipv6_addr_equal(&ort->rt6i_dst.addr, daddr))
rt->rt6i_flags |= RTF_ANYCAST;
- rt->rt6i_gateway = *daddr;
}
rt->rt6i_flags |= RTF_CACHE;
rt->dst.flags |= DST_HOST;
rt->dst.output = ip6_output;
atomic_set(&rt->dst.__refcnt, 1);
+ rt->rt6i_gateway = fl6->daddr;
rt->rt6i_dst.addr = fl6->daddr;
rt->rt6i_dst.plen = 128;
rt->rt6i_idev = idev;
in6_dev_hold(rt->rt6i_idev);
rt->dst.lastuse = jiffies;
- rt->rt6i_gateway = ort->rt6i_gateway;
+ if (ort->rt6i_flags & RTF_GATEWAY)
+ rt->rt6i_gateway = ort->rt6i_gateway;
+ else
+ rt->rt6i_gateway = *dest;
rt->rt6i_flags = ort->rt6i_flags;
if ((ort->rt6i_flags & (RTF_DEFAULT | RTF_ADDRCONF)) ==
(RTF_DEFAULT | RTF_ADDRCONF))
else
rt->rt6i_flags |= RTF_LOCAL;
+ rt->rt6i_gateway = *addr;
rt->rt6i_dst.addr = *addr;
rt->rt6i_dst.plen = 128;
rt->rt6i_table = fib6_get_table(net, RT6_TABLE_LOCAL);
return false;
}
+/* Checks if an address matches an address on the tunnel interface.
+ * Used to detect the NAT of proto 41 packets and let them pass the spoofing test.
+ * Long story:
+ * This function is called after we considered the packet as spoofed
+ * in is_spoofed_6rd.
+ * We may have a router that is doing NAT for proto 41 packets
+ * for an internal station. Destination a.a.a.a/PREFIX:bbbb:bbbb
+ * will be translated to n.n.n.n/PREFIX:bbbb:bbbb, and the is_spoofed_6rd
+ * function will return true, dropping the packet.
+ * But we can still check whether it is spoofed against the IP
+ * addresses associated with the interface.
+ */
+static bool only_dnatted(const struct ip_tunnel *tunnel,
+ const struct in6_addr *v6dst)
+{
+ int prefix_len;
+
+#ifdef CONFIG_IPV6_SIT_6RD
+ prefix_len = tunnel->ip6rd.prefixlen + 32
+ - tunnel->ip6rd.relay_prefixlen;
+#else
+ prefix_len = 48;
+#endif
+ return ipv6_chk_custom_prefix(v6dst, prefix_len, tunnel->dev);
+}
+
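only_dnatted() above reconstructs the length of the per-site delegated 6rd prefix: the configured 6rd IPv6 prefix length plus the IPv4 bits not already implied by the relay prefix (32 - relay_prefixlen); the non-6rd fallback of 48 matches the classic 6to4 layout, 2002::/16 plus 32 embedded IPv4 bits. A worked example with hypothetical 6rd parameters:

#include <stdio.h>

int main(void)
{
	/* Hypothetical 6rd domain: 2001:db8::/32 with relay prefix 10.0.0.0/8 */
	int ip6rd_prefixlen = 32;	/* length of the 6rd IPv6 prefix */
	int relay_prefixlen = 8;	/* length of the IPv4 relay prefix */

	/* IPv4 bits of the customer address embedded after the 6rd prefix */
	int embedded_v4_bits = 32 - relay_prefixlen;

	/* Length of the delegated prefix checked against the addresses
	 * configured on the tunnel interface */
	int prefix_len = ip6rd_prefixlen + embedded_v4_bits;

	printf("delegated prefix length = /%d\n", prefix_len);	/* /56 */
	return 0;
}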
+/* Returns true if a packet is spoofed */
+static bool packet_is_spoofed(struct sk_buff *skb,
+ const struct iphdr *iph,
+ struct ip_tunnel *tunnel)
+{
+ const struct ipv6hdr *ipv6h;
+
+ if (tunnel->dev->priv_flags & IFF_ISATAP) {
+ if (!isatap_chksrc(skb, iph, tunnel))
+ return true;
+
+ return false;
+ }
+
+ if (tunnel->dev->flags & IFF_POINTOPOINT)
+ return false;
+
+ ipv6h = ipv6_hdr(skb);
+
+ if (unlikely(is_spoofed_6rd(tunnel, iph->saddr, &ipv6h->saddr))) {
+ net_warn_ratelimited("Src spoofed %pI4/%pI6c -> %pI4/%pI6c\n",
+ &iph->saddr, &ipv6h->saddr,
+ &iph->daddr, &ipv6h->daddr);
+ return true;
+ }
+
+ if (likely(!is_spoofed_6rd(tunnel, iph->daddr, &ipv6h->daddr)))
+ return false;
+
+ if (only_dnatted(tunnel, &ipv6h->daddr))
+ return false;
+
+ net_warn_ratelimited("Dst spoofed %pI4/%pI6c -> %pI4/%pI6c\n",
+ &iph->saddr, &ipv6h->saddr,
+ &iph->daddr, &ipv6h->daddr);
+ return true;
+}
+
static int ipip6_rcv(struct sk_buff *skb)
{
const struct iphdr *iph = ip_hdr(skb);
IPCB(skb)->flags = 0;
skb->protocol = htons(ETH_P_IPV6);
- if (tunnel->dev->priv_flags & IFF_ISATAP) {
- if (!isatap_chksrc(skb, iph, tunnel)) {
- tunnel->dev->stats.rx_errors++;
- goto out;
- }
- } else if (!(tunnel->dev->flags&IFF_POINTOPOINT)) {
- if (is_spoofed_6rd(tunnel, iph->saddr,
- &ipv6_hdr(skb)->saddr) ||
- is_spoofed_6rd(tunnel, iph->daddr,
- &ipv6_hdr(skb)->daddr)) {
- tunnel->dev->stats.rx_errors++;
- goto out;
- }
+ if (packet_is_spoofed(skb, iph, tunnel)) {
+ tunnel->dev->stats.rx_errors++;
+ goto out;
}
__skb_tunnel_rx(skb, tunnel->dev, tunnel->net);
neigh = dst_neigh_lookup(skb_dst(skb), &iph6->daddr);
if (neigh == NULL) {
- net_dbg_ratelimited("sit: nexthop == NULL\n");
+ net_dbg_ratelimited("nexthop == NULL\n");
goto tx_error;
}
neigh = dst_neigh_lookup(skb_dst(skb), &iph6->daddr);
if (neigh == NULL) {
- net_dbg_ratelimited("sit: nexthop == NULL\n");
+ net_dbg_ratelimited("nexthop == NULL\n");
goto tx_error;
}
goto err_alloc_dev;
}
dev_net_set(sitn->fb_tunnel_dev, net);
+ sitn->fb_tunnel_dev->rtnl_link_ops = &sit_link_ops;
/* FB netdevice is special: we have one, and only one per netns.
* Allowing to move it to another netns is clearly unsafe.
*/
rtnl_lock();
sit_destroy_tunnels(sitn, &list);
- unregister_netdevice_queue(sitn->fb_tunnel_dev, &list);
unregister_netdevice_many(&list);
rtnl_unlock();
}
if (type == ICMPV6_PKT_TOOBIG)
ip6_sk_update_pmtu(skb, sk, info);
- if (type == NDISC_REDIRECT)
+ if (type == NDISC_REDIRECT) {
ip6_sk_redirect(skb, sk);
+ goto out;
+ }
np = inet6_sk(sk);
if (tclass < 0)
tclass = np->tclass;
- if (dontfrag < 0)
- dontfrag = np->dontfrag;
-
if (msg->msg_flags&MSG_CONFIRM)
goto do_confirm;
back_from_confirm:
up->pending = AF_INET6;
do_append_data:
+ if (dontfrag < 0)
+ dontfrag = np->dontfrag;
up->len += ulen;
getfrag = is_udplite ? udplite_getfrag : ip_generic_getfrag;
err = ip6_append_data(sk, getfrag, msg->msg_iov, ulen,
memset(fl6, 0, sizeof(struct flowi6));
fl6->flowi6_mark = skb->mark;
+ fl6->flowi6_oif = skb_dst(skb)->dev->ifindex;
fl6->daddr = reverse ? hdr->saddr : hdr->daddr;
fl6->saddr = reverse ? hdr->daddr : hdr->saddr;
x->id.proto = proto;
x->id.spi = sa->sadb_sa_spi;
- x->props.replay_window = sa->sadb_sa_replay;
+ x->props.replay_window = min_t(unsigned int, sa->sadb_sa_replay,
+ (sizeof(x->replay.bitmap) * 8));
if (sa->sadb_sa_flags & SADB_SAFLAGS_NOECN)
x->props.flags |= XFRM_STATE_NOECN;
if (sa->sadb_sa_flags & SADB_SAFLAGS_DECAP_DSCP)
static void l2tp_session_set_header_len(struct l2tp_session *session, int version);
static void l2tp_tunnel_free(struct l2tp_tunnel *tunnel);
+static inline struct l2tp_tunnel *l2tp_tunnel(struct sock *sk)
+{
+ return sk->sk_user_data;
+}
+
static inline struct l2tp_net *l2tp_pernet(struct net *net)
{
BUG_ON(!net);
return 0;
#if IS_ENABLED(CONFIG_IPV6)
- if (sk->sk_family == PF_INET6) {
+ if (sk->sk_family == PF_INET6 && !l2tp_tunnel(sk)->v4mapped) {
if (!uh->check) {
LIMIT_NETDEBUG(KERN_INFO "L2TP: IPv6: checksum is 0\n");
return 1;
/* Queue the packet to IP for output */
skb->local_df = 1;
#if IS_ENABLED(CONFIG_IPV6)
- if (skb->sk->sk_family == PF_INET6)
+ if (skb->sk->sk_family == PF_INET6 && !tunnel->v4mapped)
error = inet6_csk_xmit(skb, NULL);
else
#endif
/* Calculate UDP checksum if configured to do so */
#if IS_ENABLED(CONFIG_IPV6)
- if (sk->sk_family == PF_INET6)
+ if (sk->sk_family == PF_INET6 && !tunnel->v4mapped)
l2tp_xmit_ipv6_csum(sk, skb, udp_len);
else
#endif
*/
static void l2tp_tunnel_destruct(struct sock *sk)
{
- struct l2tp_tunnel *tunnel;
+ struct l2tp_tunnel *tunnel = l2tp_tunnel(sk);
struct l2tp_net *pn;
- tunnel = sk->sk_user_data;
if (tunnel == NULL)
goto end;
}
/* Check if this socket has already been prepped */
- tunnel = (struct l2tp_tunnel *)sk->sk_user_data;
+ tunnel = l2tp_tunnel(sk);
if (tunnel != NULL) {
/* This socket has already been prepped */
err = -EBUSY;
if (cfg != NULL)
tunnel->debug = cfg->debug;
+#if IS_ENABLED(CONFIG_IPV6)
+ if (sk->sk_family == PF_INET6) {
+ struct ipv6_pinfo *np = inet6_sk(sk);
+
+ if (ipv6_addr_v4mapped(&np->saddr) &&
+ ipv6_addr_v4mapped(&np->daddr)) {
+ struct inet_sock *inet = inet_sk(sk);
+
+ tunnel->v4mapped = true;
+ inet->inet_saddr = np->saddr.s6_addr32[3];
+ inet->inet_rcv_saddr = np->rcv_saddr.s6_addr32[3];
+ inet->inet_daddr = np->daddr.s6_addr32[3];
+ } else {
+ tunnel->v4mapped = false;
+ }
+ }
+#endif
+
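The v4mapped handling above treats an IPv6 tunnel socket whose endpoints are IPv4-mapped addresses (::ffff:a.b.c.d) as plain IPv4: in that encoding the IPv4 address occupies the last four bytes of the in6_addr, the word the patch copies via s6_addr32[3]. A small standalone sketch of that layout (hypothetical address, portable byte access instead of the kernel's union member):

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct in6_addr a6;
	struct in_addr a4;
	char buf[INET_ADDRSTRLEN];

	/* Hypothetical v4-mapped IPv6 address */
	inet_pton(AF_INET6, "::ffff:192.0.2.1", &a6);

	/* Bytes 0..9 are zero, bytes 10..11 are 0xff, and bytes 12..15 hold
	 * the IPv4 address in network byte order - the same word the patch
	 * reads as s6_addr32[3]. */
	memcpy(&a4.s_addr, &a6.s6_addr[12], sizeof(a4.s_addr));

	printf("embedded IPv4 address: %s\n",
	       inet_ntop(AF_INET, &a4, buf, sizeof(buf)));
	return 0;
}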
/* Mark socket as an encapsulation socket. See net/ipv4/udp.c */
tunnel->encap = encap;
if (encap == L2TP_ENCAPTYPE_UDP) {
udp_sk(sk)->encap_rcv = l2tp_udp_encap_recv;
udp_sk(sk)->encap_destroy = l2tp_udp_encap_destroy;
#if IS_ENABLED(CONFIG_IPV6)
- if (sk->sk_family == PF_INET6)
+ if (sk->sk_family == PF_INET6 && !tunnel->v4mapped)
udpv6_encap_enable();
else
#endif
struct sock *sock; /* Parent socket */
int fd; /* Parent fd, if tunnel socket
* was created by userspace */
+#if IS_ENABLED(CONFIG_IPV6)
+ bool v4mapped;
+#endif
struct work_struct del_work;
goto error_put_sess_tun;
}
+ local_bh_disable();
l2tp_xmit_skb(session, skb, session->hdr_len);
+ local_bh_enable();
sock_put(ps->tunnel_sock);
sock_put(sk);
skb->data[0] = ppph[0];
skb->data[1] = ppph[1];
+ local_bh_disable();
l2tp_xmit_skb(session, skb, session->hdr_len);
+ local_bh_enable();
sock_put(sk_tun);
sock_put(sk);
} else {
lapb->n2count++;
lapb_requeue_frames(lapb);
+ lapb_kick(lapb);
}
break;
return -EINVAL;
}
band = chanctx_conf->def.chan->band;
- sta = sta_info_get(sdata, peer);
+ sta = sta_info_get_bss(sdata, peer);
if (sta) {
qos = test_sta_flag(sta, WLAN_STA_WME);
} else {
* that the scan completed.
* @SCAN_ABORTED: Set for our scan work function when the driver reported
* a scan complete for an aborted scan.
+ * @SCAN_HW_CANCELLED: Set for our scan work function when the scan is being
+ * cancelled.
*/
enum {
SCAN_SW_SCANNING,
SCAN_ONCHANNEL_SCANNING,
SCAN_COMPLETED,
SCAN_ABORTED,
+ SCAN_HW_CANCELLED,
};
/**
if (started)
ieee80211_start_next_roc(local);
+ else if (list_empty(&local->roc_list))
+ ieee80211_run_deferred_scan(local);
}
out_unlock:
case NL80211_IFTYPE_ADHOC:
if (!bssid)
return 0;
+ if (ether_addr_equal(sdata->vif.addr, hdr->addr2) ||
+ ether_addr_equal(sdata->u.ibss.bssid, hdr->addr2))
+ return 0;
if (ieee80211_is_beacon(hdr->frame_control)) {
return 1;
} else if (!ieee80211_bssid_match(bssid, sdata->u.ibss.bssid)) {
enum ieee80211_band band;
int i, ielen, n_chans;
+ if (test_bit(SCAN_HW_CANCELLED, &local->scanning))
+ return false;
+
do {
if (local->hw_scan_band == IEEE80211_NUM_BANDS)
return false;
if (!local->scan_req)
goto out;
+ /*
+ * We have a scan running and the driver already reported completion,
+ * but the worker hasn't run yet or is stuck on the mutex - mark it as
+ * cancelled.
+ */
+ if (test_bit(SCAN_HW_SCANNING, &local->scanning) &&
+ test_bit(SCAN_COMPLETED, &local->scanning)) {
+ set_bit(SCAN_HW_CANCELLED, &local->scanning);
+ goto out;
+ }
+
if (test_bit(SCAN_HW_SCANNING, &local->scanning)) {
+ /*
+ * Make sure that __ieee80211_scan_completed doesn't trigger a
+ * scan on another band.
+ */
+ set_bit(SCAN_HW_CANCELLED, &local->scanning);
if (local->ops->cancel_hw_scan)
drv_cancel_hw_scan(local,
rcu_dereference_protected(local->scan_sdata,
struct ieee80211_local *local = sta->local;
struct ieee80211_sub_if_data *sdata = sta->sdata;
+ if (local->hw.flags & IEEE80211_HW_REPORTS_TX_ACK_STATUS)
+ sta->last_rx = jiffies;
+
if (ieee80211_is_data_qos(mgmt->frame_control)) {
struct ieee80211_hdr *hdr = (void *) skb->data;
u8 *qc = ieee80211_get_qos_ctl(hdr);
tx->sta = rcu_dereference(sdata->u.vlan.sta);
if (!tx->sta && sdata->dev->ieee80211_ptr->use_4addr)
return TX_DROP;
- } else if (info->flags & IEEE80211_TX_CTL_INJECTED ||
+ } else if (info->flags & (IEEE80211_TX_CTL_INJECTED |
+ IEEE80211_TX_INTFL_NL80211_FRAME_TX) ||
tx->sdata->control_port_protocol == tx->skb->protocol) {
tx->sta = sta_info_get_bss(sdata, hdr->addr1);
}
{
struct ieee80211_local *local = sdata->local;
struct ieee80211_supported_band *sband;
- int rate, skip, shift;
+ int rate, shift;
u8 i, exrates, *pos;
u32 basic_rates = sdata->vif.bss_conf.basic_rates;
u32 rate_flags;
pos = skb_put(skb, exrates + 2);
*pos++ = WLAN_EID_EXT_SUPP_RATES;
*pos++ = exrates;
- skip = 0;
for (i = 8; i < sband->n_bitrates; i++) {
u8 basic = 0;
if ((rate_flags & sband->bitrates[i].flags)
!= rate_flags)
continue;
- if (skip++ < 8)
- continue;
if (need_basic && basic_rates & BIT(i))
basic = 0x80;
rate = DIV_ROUND_UP(sband->bitrates[i].bitrate,
}
rate = cfg80211_calculate_bitrate(&ri);
+ if (WARN_ONCE(!rate,
+ "Invalid bitrate: flags=0x%x, idx=%d, vht_nss=%d\n",
+ status->flag, status->rate_idx, status->vht_nss))
+ return 0;
/* rewind from end of MPDU */
if (status->flag & RX_FLAG_MACTIME_END)
* Not an artificial restriction anymore, as we must prevent
* possible loops created by swapping in setlist type of sets. */
if (!(from->type->features == to->type->features &&
- from->type->family == to->type->family))
+ from->family == to->family))
return -IPSET_ERR_TYPE_MISMATCH;
strncpy(from_name, from->name, IPSET_MAXNAMELEN);
if (ret == -EAGAIN)
ret = 1;
- return (ret < 0 && ret != -ENOTEMPTY) ? ret :
- ret > 0 ? 0 : -IPSET_ERR_EXIST;
+ return ret > 0 ? 0 : -IPSET_ERR_EXIST;
}
/* Get headed data of a set */
{
int protoff;
u8 nexthdr;
- __be16 frag_off;
+ __be16 frag_off = 0;
nexthdr = ipv6_hdr(skb)->nexthdr;
protoff = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &nexthdr,
&frag_off);
- if (protoff < 0)
+ if (protoff < 0 || (frag_off & htons(~0x7)) != 0)
return false;
return get_port(skb, nexthdr, protoff, src, port, proto);
static void
mtype_del_cidr(struct htype *h, u8 cidr, u8 nets_length)
{
- u8 i, j;
-
- for (i = 0; i < nets_length - 1 && h->nets[i].cidr != cidr; i++)
- ;
- h->nets[i].nets--;
-
- if (h->nets[i].nets != 0)
- return;
-
- for (j = i; j < nets_length - 1 && h->nets[j].nets; j++) {
- h->nets[j].cidr = h->nets[j + 1].cidr;
- h->nets[j].nets = h->nets[j + 1].nets;
+ u8 i, j, net_end = nets_length - 1;
+
+ for (i = 0; i < nets_length; i++) {
+ if (h->nets[i].cidr != cidr)
+ continue;
+ if (h->nets[i].nets > 1 || i == net_end ||
+ h->nets[i + 1].nets == 0) {
+ h->nets[i].nets--;
+ return;
+ }
+ for (j = i; j < net_end && h->nets[j].nets; j++) {
+ h->nets[j].cidr = h->nets[j + 1].cidr;
+ h->nets[j].nets = h->nets[j + 1].nets;
+ }
+ h->nets[j].nets = 0;
+ return;
}
}
#endif
e.ip = htonl(ip);
e.ip2 = htonl(ip2_from & ip_set_hostmask(e.cidr + 1));
ret = adtfn(set, &e, &ext, &ext, flags);
- return ip_set_enomatch(ret, flags, adt) ? 1 :
+ return ip_set_enomatch(ret, flags, adt, set) ? -ret :
ip_set_eexist(ret, flags) ? 0 : ret;
}
if (adt == IPSET_TEST || !with_ports || !tb[IPSET_ATTR_PORT_TO]) {
ret = adtfn(set, &e, &ext, &ext, flags);
- return ip_set_enomatch(ret, flags, adt) ? 1 :
+ return ip_set_enomatch(ret, flags, adt, set) ? -ret :
ip_set_eexist(ret, flags) ? 0 : ret;
}
if (adt == IPSET_TEST || !tb[IPSET_ATTR_IP_TO]) {
e.ip = htonl(ip & ip_set_hostmask(e.cidr));
ret = adtfn(set, &e, &ext, &ext, flags);
- return ip_set_enomatch(ret, flags, adt) ? 1 :
+ return ip_set_enomatch(ret, flags, adt, set) ? -ret :
ip_set_eexist(ret, flags) ? 0 : ret;
}
ret = adtfn(set, &e, &ext, &ext, flags);
- return ip_set_enomatch(ret, flags, adt) ? 1 :
+ return ip_set_enomatch(ret, flags, adt, set) ? -ret :
ip_set_eexist(ret, flags) ? 0 : ret;
}
if (adt == IPSET_TEST || !tb[IPSET_ATTR_IP_TO]) {
e.ip = htonl(ip & ip_set_hostmask(e.cidr));
ret = adtfn(set, &e, &ext, &ext, flags);
- return ip_set_enomatch(ret, flags, adt) ? 1 :
+ return ip_set_enomatch(ret, flags, adt, set) ? -ret :
ip_set_eexist(ret, flags) ? 0 : ret;
}
ret = adtfn(set, &e, &ext, &ext, flags);
- return ip_set_enomatch(ret, flags, adt) ? 1 :
+ return ip_set_enomatch(ret, flags, adt, set) ? -ret :
ip_set_eexist(ret, flags) ? 0 : ret;
}
if (adt == IPSET_TEST || !(with_ports || tb[IPSET_ATTR_IP_TO])) {
e.ip = htonl(ip & ip_set_hostmask(e.cidr + 1));
ret = adtfn(set, &e, &ext, &ext, flags);
- return ip_set_enomatch(ret, flags, adt) ? 1 :
+ return ip_set_enomatch(ret, flags, adt, set) ? -ret :
ip_set_eexist(ret, flags) ? 0 : ret;
}
if (adt == IPSET_TEST || !with_ports || !tb[IPSET_ATTR_PORT_TO]) {
ret = adtfn(set, &e, &ext, &ext, flags);
- return ip_set_enomatch(ret, flags, adt) ? 1 :
+ return ip_set_enomatch(ret, flags, adt, set) ? -ret :
ip_set_eexist(ret, flags) ? 0 : ret;
}
if (dest && (dest->flags & IP_VS_DEST_F_AVAILABLE)) {
struct ip_vs_cpu_stats *s;
+ struct ip_vs_service *svc;
s = this_cpu_ptr(dest->stats.cpustats);
s->ustats.inpkts++;
s->ustats.inbytes += skb->len;
u64_stats_update_end(&s->syncp);
- s = this_cpu_ptr(dest->svc->stats.cpustats);
+ rcu_read_lock();
+ svc = rcu_dereference(dest->svc);
+ s = this_cpu_ptr(svc->stats.cpustats);
s->ustats.inpkts++;
u64_stats_update_begin(&s->syncp);
s->ustats.inbytes += skb->len;
u64_stats_update_end(&s->syncp);
+ rcu_read_unlock();
s = this_cpu_ptr(ipvs->tot_stats.cpustats);
s->ustats.inpkts++;
if (dest && (dest->flags & IP_VS_DEST_F_AVAILABLE)) {
struct ip_vs_cpu_stats *s;
+ struct ip_vs_service *svc;
s = this_cpu_ptr(dest->stats.cpustats);
s->ustats.outpkts++;
s->ustats.outbytes += skb->len;
u64_stats_update_end(&s->syncp);
- s = this_cpu_ptr(dest->svc->stats.cpustats);
+ rcu_read_lock();
+ svc = rcu_dereference(dest->svc);
+ s = this_cpu_ptr(svc->stats.cpustats);
s->ustats.outpkts++;
u64_stats_update_begin(&s->syncp);
s->ustats.outbytes += skb->len;
u64_stats_update_end(&s->syncp);
+ rcu_read_unlock();
s = this_cpu_ptr(ipvs->tot_stats.cpustats);
s->ustats.outpkts++;
__ip_vs_bind_svc(struct ip_vs_dest *dest, struct ip_vs_service *svc)
{
atomic_inc(&svc->refcnt);
- dest->svc = svc;
+ rcu_assign_pointer(dest->svc, svc);
}
static void ip_vs_service_free(struct ip_vs_service *svc)
kfree(svc);
}
-static void
-__ip_vs_unbind_svc(struct ip_vs_dest *dest)
+static void ip_vs_service_rcu_free(struct rcu_head *head)
{
- struct ip_vs_service *svc = dest->svc;
+ struct ip_vs_service *svc;
+
+ svc = container_of(head, struct ip_vs_service, rcu_head);
+ ip_vs_service_free(svc);
+}
- dest->svc = NULL;
+static void __ip_vs_svc_put(struct ip_vs_service *svc, bool do_delay)
+{
if (atomic_dec_and_test(&svc->refcnt)) {
IP_VS_DBG_BUF(3, "Removing service %u/%s:%u\n",
svc->fwmark,
IP_VS_DBG_ADDR(svc->af, &svc->addr),
ntohs(svc->port));
- ip_vs_service_free(svc);
+ if (do_delay)
+ call_rcu(&svc->rcu_head, ip_vs_service_rcu_free);
+ else
+ ip_vs_service_free(svc);
}
}
IP_VS_DBG_ADDR(svc->af, &dest->addr),
ntohs(dest->port),
atomic_read(&dest->refcnt));
- /* We can not reuse dest while in grace period
- * because conns still can use dest->svc
- */
- if (test_bit(IP_VS_DEST_STATE_REMOVING, &dest->state))
- continue;
if (dest->af == svc->af &&
ip_vs_addr_equal(svc->af, &dest->addr, daddr) &&
dest->port == dport &&
static void ip_vs_dest_free(struct ip_vs_dest *dest)
{
+ struct ip_vs_service *svc = rcu_dereference_protected(dest->svc, 1);
+
__ip_vs_dst_cache_reset(dest);
- __ip_vs_unbind_svc(dest);
+ __ip_vs_svc_put(svc, false);
free_percpu(dest->stats.cpustats);
kfree(dest);
}
struct ip_vs_dest_user_kern *udest, int add)
{
struct netns_ipvs *ipvs = net_ipvs(svc->net);
+ struct ip_vs_service *old_svc;
struct ip_vs_scheduler *sched;
int conn_flags;
atomic_set(&dest->conn_flags, conn_flags);
/* bind the service */
- if (!dest->svc) {
+ old_svc = rcu_dereference_protected(dest->svc, 1);
+ if (!old_svc) {
__ip_vs_bind_svc(dest, svc);
} else {
- if (dest->svc != svc) {
- __ip_vs_unbind_svc(dest);
+ if (old_svc != svc) {
ip_vs_zero_stats(&dest->stats);
__ip_vs_bind_svc(dest, svc);
+ __ip_vs_svc_put(old_svc, true);
}
}
return 0;
}
-static void ip_vs_dest_wait_readers(struct rcu_head *head)
-{
- struct ip_vs_dest *dest = container_of(head, struct ip_vs_dest,
- rcu_head);
-
- /* End of grace period after unlinking */
- clear_bit(IP_VS_DEST_STATE_REMOVING, &dest->state);
-}
-
-
/*
* Delete a destination (must be already unlinked from the service)
*/
*/
ip_vs_rs_unhash(dest);
- if (!cleanup) {
- set_bit(IP_VS_DEST_STATE_REMOVING, &dest->state);
- call_rcu(&dest->rcu_head, ip_vs_dest_wait_readers);
- }
-
spin_lock_bh(&ipvs->dest_trash_lock);
IP_VS_DBG_BUF(3, "Moving dest %s:%u into trash, dest->refcnt=%d\n",
IP_VS_DBG_ADDR(dest->af, &dest->addr), ntohs(dest->port),
atomic_read(&dest->refcnt));
if (list_empty(&ipvs->dest_trash) && !cleanup)
mod_timer(&ipvs->dest_trash_timer,
- jiffies + IP_VS_DEST_TRASH_PERIOD);
+ jiffies + (IP_VS_DEST_TRASH_PERIOD >> 1));
/* dest lives in trash without reference */
list_add(&dest->t_list, &ipvs->dest_trash);
+ dest->idle_start = 0;
spin_unlock_bh(&ipvs->dest_trash_lock);
ip_vs_dest_put(dest);
}
struct net *net = (struct net *) data;
struct netns_ipvs *ipvs = net_ipvs(net);
struct ip_vs_dest *dest, *next;
+ unsigned long now = jiffies;
spin_lock(&ipvs->dest_trash_lock);
list_for_each_entry_safe(dest, next, &ipvs->dest_trash, t_list) {
- /* Skip if dest is in grace period */
- if (test_bit(IP_VS_DEST_STATE_REMOVING, &dest->state))
- continue;
if (atomic_read(&dest->refcnt) > 0)
continue;
+ if (dest->idle_start) {
+ if (time_before(now, dest->idle_start +
+ IP_VS_DEST_TRASH_PERIOD))
+ continue;
+ } else {
+ dest->idle_start = max(1UL, now);
+ continue;
+ }
IP_VS_DBG_BUF(3, "Removing destination %u/%s:%u from trash\n",
dest->vfwmark,
- IP_VS_DBG_ADDR(dest->svc->af, &dest->addr),
+ IP_VS_DBG_ADDR(dest->af, &dest->addr),
ntohs(dest->port));
list_del(&dest->t_list);
ip_vs_dest_free(dest);
}
if (!list_empty(&ipvs->dest_trash))
mod_timer(&ipvs->dest_trash_timer,
- jiffies + IP_VS_DEST_TRASH_PERIOD);
+ jiffies + (IP_VS_DEST_TRASH_PERIOD >> 1));
spin_unlock(&ipvs->dest_trash_lock);
}
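/*
 * Descriptive note on the trash expiry rework above (summary only, not
 * extra patch content): the trash timer now fires every
 * IP_VS_DEST_TRASH_PERIOD / 2. On the first pass in which a destination is
 * found unreferenced, its idle_start stamp is recorded; it is freed only on
 * a later pass, once a full IP_VS_DEST_TRASH_PERIOD has elapsed since that
 * stamp and it is still unreferenced.
 */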
return ret;
}
-static void ip_vs_service_rcu_free(struct rcu_head *head)
-{
- struct ip_vs_service *svc;
-
- svc = container_of(head, struct ip_vs_service, rcu_head);
- ip_vs_service_free(svc);
-}
-
/*
* Delete a service from the service list
* - The service must be unlinked, unlocked and not referenced!
/*
* Free the service if nobody refers to it
*/
- if (atomic_dec_and_test(&svc->refcnt)) {
- IP_VS_DBG_BUF(3, "Removing service %u/%s:%u\n",
- svc->fwmark,
- IP_VS_DBG_ADDR(svc->af, &svc->addr),
- ntohs(svc->port));
- call_rcu(&svc->rcu_head, ip_vs_service_rcu_free);
- }
+ __ip_vs_svc_put(svc, true);
/* decrease the module use count */
ip_vs_use_count_dec();
struct ip_vs_cpu_stats __percpu *stats)
{
int i;
+ bool add = false;
for_each_possible_cpu(i) {
struct ip_vs_cpu_stats *s = per_cpu_ptr(stats, i);
unsigned int start;
__u64 inbytes, outbytes;
- if (i) {
+ if (add) {
sum->conns += s->ustats.conns;
sum->inpkts += s->ustats.inpkts;
sum->outpkts += s->ustats.outpkts;
sum->inbytes += inbytes;
sum->outbytes += outbytes;
} else {
+ add = true;
sum->conns = s->ustats.conns;
sum->inpkts = s->ustats.inpkts;
sum->outpkts = s->ustats.outpkts;
struct hlist_node list;
int af; /* address family */
union nf_inet_addr addr; /* destination IP address */
- struct ip_vs_dest __rcu *dest; /* real server (cache) */
+ struct ip_vs_dest *dest; /* real server (cache) */
unsigned long lastuse; /* last used time */
struct rcu_head rcu_head;
};
};
#endif
-static inline void ip_vs_lblc_free(struct ip_vs_lblc_entry *en)
+static void ip_vs_lblc_rcu_free(struct rcu_head *head)
{
- struct ip_vs_dest *dest;
+ struct ip_vs_lblc_entry *en = container_of(head,
+ struct ip_vs_lblc_entry,
+ rcu_head);
- hlist_del_rcu(&en->list);
- /*
- * We don't kfree dest because it is referred either by its service
- * or the trash dest list.
- */
- dest = rcu_dereference_protected(en->dest, 1);
- ip_vs_dest_put(dest);
- kfree_rcu(en, rcu_head);
+ ip_vs_dest_put(en->dest);
+ kfree(en);
}
+static inline void ip_vs_lblc_del(struct ip_vs_lblc_entry *en)
+{
+ hlist_del_rcu(&en->list);
+ call_rcu(&en->rcu_head, ip_vs_lblc_rcu_free);
+}
/*
* Returns hash value for IPVS LBLC entry
struct ip_vs_lblc_entry *en;
en = ip_vs_lblc_get(dest->af, tbl, daddr);
- if (!en) {
- en = kmalloc(sizeof(*en), GFP_ATOMIC);
- if (!en)
- return NULL;
-
- en->af = dest->af;
- ip_vs_addr_copy(dest->af, &en->addr, daddr);
- en->lastuse = jiffies;
+ if (en) {
+ if (en->dest == dest)
+ return en;
+ ip_vs_lblc_del(en);
+ }
+ en = kmalloc(sizeof(*en), GFP_ATOMIC);
+ if (!en)
+ return NULL;
- ip_vs_dest_hold(dest);
- RCU_INIT_POINTER(en->dest, dest);
+ en->af = dest->af;
+ ip_vs_addr_copy(dest->af, &en->addr, daddr);
+ en->lastuse = jiffies;
- ip_vs_lblc_hash(tbl, en);
- } else {
- struct ip_vs_dest *old_dest;
+ ip_vs_dest_hold(dest);
+ en->dest = dest;
- old_dest = rcu_dereference_protected(en->dest, 1);
- if (old_dest != dest) {
- ip_vs_dest_put(old_dest);
- ip_vs_dest_hold(dest);
- /* No ordering constraints for refcnt */
- RCU_INIT_POINTER(en->dest, dest);
- }
- }
+ ip_vs_lblc_hash(tbl, en);
return en;
}
tbl->dead = 1;
for (i=0; i<IP_VS_LBLC_TAB_SIZE; i++) {
hlist_for_each_entry_safe(en, next, &tbl->bucket[i], list) {
- ip_vs_lblc_free(en);
+ ip_vs_lblc_del(en);
atomic_dec(&tbl->entries);
}
}
sysctl_lblc_expiration(svc)))
continue;
- ip_vs_lblc_free(en);
+ ip_vs_lblc_del(en);
atomic_dec(&tbl->entries);
}
spin_unlock(&svc->sched_lock);
if (time_before(now, en->lastuse + ENTRY_TIMEOUT))
continue;
- ip_vs_lblc_free(en);
+ ip_vs_lblc_del(en);
atomic_dec(&tbl->entries);
goal--;
}
continue;
doh = ip_vs_dest_conn_overhead(dest);
- if (loh * atomic_read(&dest->weight) >
- doh * atomic_read(&least->weight)) {
+ if ((__s64)loh * atomic_read(&dest->weight) >
+ (__s64)doh * atomic_read(&least->weight)) {
least = dest;
loh = doh;
}
* free up entries from the trash at any time.
*/
- dest = rcu_dereference(en->dest);
+ dest = en->dest;
if ((dest->flags & IP_VS_DEST_F_AVAILABLE) &&
atomic_read(&dest->weight) > 0 && !is_overloaded(dest, svc))
goto out;
{
unregister_ip_vs_scheduler(&ip_vs_lblc_scheduler);
unregister_pernet_subsys(&ip_vs_lblc_ops);
- synchronize_rcu();
+ rcu_barrier();
}
*/
struct ip_vs_dest_set_elem {
struct list_head list; /* list link */
- struct ip_vs_dest __rcu *dest; /* destination server */
+ struct ip_vs_dest *dest; /* destination server */
struct rcu_head rcu_head;
};
if (check) {
list_for_each_entry(e, &set->list, list) {
- struct ip_vs_dest *d;
-
- d = rcu_dereference_protected(e->dest, 1);
- if (d == dest)
- /* already existed */
+ if (e->dest == dest)
return;
}
}
return;
ip_vs_dest_hold(dest);
- RCU_INIT_POINTER(e->dest, dest);
+ e->dest = dest;
list_add_rcu(&e->list, &set->list);
atomic_inc(&set->size);
set->lastmod = jiffies;
}
+static void ip_vs_lblcr_elem_rcu_free(struct rcu_head *head)
+{
+ struct ip_vs_dest_set_elem *e;
+
+ e = container_of(head, struct ip_vs_dest_set_elem, rcu_head);
+ ip_vs_dest_put(e->dest);
+ kfree(e);
+}
+
static void
ip_vs_dest_set_erase(struct ip_vs_dest_set *set, struct ip_vs_dest *dest)
{
struct ip_vs_dest_set_elem *e;
list_for_each_entry(e, &set->list, list) {
- struct ip_vs_dest *d;
-
- d = rcu_dereference_protected(e->dest, 1);
- if (d == dest) {
+ if (e->dest == dest) {
/* HIT */
atomic_dec(&set->size);
set->lastmod = jiffies;
- ip_vs_dest_put(dest);
list_del_rcu(&e->list);
- kfree_rcu(e, rcu_head);
+ call_rcu(&e->rcu_head, ip_vs_lblcr_elem_rcu_free);
break;
}
}
struct ip_vs_dest_set_elem *e, *ep;
list_for_each_entry_safe(e, ep, &set->list, list) {
- struct ip_vs_dest *d;
-
- d = rcu_dereference_protected(e->dest, 1);
- /*
- * We don't kfree dest because it is referred either
- * by its service or by the trash dest list.
- */
- ip_vs_dest_put(d);
list_del_rcu(&e->list);
- kfree_rcu(e, rcu_head);
+ call_rcu(&e->rcu_head, ip_vs_lblcr_elem_rcu_free);
}
}
struct ip_vs_dest *dest, *least;
int loh, doh;
- if (set == NULL)
- return NULL;
-
/* select the first destination server, whose weight > 0 */
list_for_each_entry_rcu(e, &set->list, list) {
- least = rcu_dereference(e->dest);
+ least = e->dest;
if (least->flags & IP_VS_DEST_F_OVERLOAD)
continue;
/* find the destination with the weighted least load */
nextstage:
list_for_each_entry_continue_rcu(e, &set->list, list) {
- dest = rcu_dereference(e->dest);
+ dest = e->dest;
if (dest->flags & IP_VS_DEST_F_OVERLOAD)
continue;
doh = ip_vs_dest_conn_overhead(dest);
- if ((loh * atomic_read(&dest->weight) >
- doh * atomic_read(&least->weight))
+ if (((__s64)loh * atomic_read(&dest->weight) >
+ (__s64)doh * atomic_read(&least->weight))
&& (dest->flags & IP_VS_DEST_F_AVAILABLE)) {
least = dest;
loh = doh;
/* select the first destination server, whose weight > 0 */
list_for_each_entry(e, &set->list, list) {
- most = rcu_dereference_protected(e->dest, 1);
+ most = e->dest;
if (atomic_read(&most->weight) > 0) {
moh = ip_vs_dest_conn_overhead(most);
goto nextstage;
/* find the destination with the weighted most load */
nextstage:
list_for_each_entry_continue(e, &set->list, list) {
- dest = rcu_dereference_protected(e->dest, 1);
+ dest = e->dest;
doh = ip_vs_dest_conn_overhead(dest);
/* moh/mw < doh/dw ==> moh*dw < doh*mw, where mw,dw>0 */
- if ((moh * atomic_read(&dest->weight) <
- doh * atomic_read(&most->weight))
+ if (((__s64)moh * atomic_read(&dest->weight) <
+ (__s64)doh * atomic_read(&most->weight))
&& (atomic_read(&dest->weight) > 0)) {
most = dest;
moh = doh;
continue;
doh = ip_vs_dest_conn_overhead(dest);
- if (loh * atomic_read(&dest->weight) >
- doh * atomic_read(&least->weight)) {
+ if ((__s64)loh * atomic_read(&dest->weight) >
+ (__s64)doh * atomic_read(&least->weight)) {
least = dest;
loh = doh;
}
{
unregister_ip_vs_scheduler(&ip_vs_lblcr_scheduler);
unregister_pernet_subsys(&ip_vs_lblcr_ops);
- synchronize_rcu();
+ rcu_barrier();
}
#include <net/ip_vs.h>
-static inline unsigned int
+static inline int
ip_vs_nq_dest_overhead(struct ip_vs_dest *dest)
{
/*
struct ip_vs_iphdr *iph)
{
struct ip_vs_dest *dest, *least = NULL;
- unsigned int loh = 0, doh;
+ int loh = 0, doh;
IP_VS_DBG(6, "%s(): Scheduling...\n", __func__);
}
if (!least ||
- (loh * atomic_read(&dest->weight) >
- doh * atomic_read(&least->weight))) {
+ ((__s64)loh * atomic_read(&dest->weight) >
+ (__s64)doh * atomic_read(&least->weight))) {
least = dest;
loh = doh;
}
#include <net/ip_vs.h>
-static inline unsigned int
+static inline int
ip_vs_sed_dest_overhead(struct ip_vs_dest *dest)
{
/*
struct ip_vs_iphdr *iph)
{
struct ip_vs_dest *dest, *least;
- unsigned int loh, doh;
+ int loh, doh;
IP_VS_DBG(6, "%s(): Scheduling...\n", __func__);
if (dest->flags & IP_VS_DEST_F_OVERLOAD)
continue;
doh = ip_vs_sed_dest_overhead(dest);
- if (loh * atomic_read(&dest->weight) >
- doh * atomic_read(&least->weight)) {
+ if ((__s64)loh * atomic_read(&dest->weight) >
+ (__s64)doh * atomic_read(&least->weight)) {
least = dest;
loh = doh;
}
struct ip_vs_iphdr *iph)
{
struct ip_vs_dest *dest, *least;
- unsigned int loh, doh;
+ int loh, doh;
IP_VS_DBG(6, "ip_vs_wlc_schedule(): Scheduling...\n");
if (dest->flags & IP_VS_DEST_F_OVERLOAD)
continue;
doh = ip_vs_dest_conn_overhead(dest);
- if (loh * atomic_read(&dest->weight) >
- doh * atomic_read(&least->weight)) {
+ if ((__s64)loh * atomic_read(&dest->weight) >
+ (__s64)doh * atomic_read(&least->weight)) {
least = dest;
loh = doh;
}
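/*
 * The casts added across these scheduler hunks all guard the same weighted
 * least-load comparison against 32-bit overflow. A minimal standalone
 * sketch of the test (helper name is illustrative, not part of IPVS):
 */
static inline bool dest_is_less_loaded(int doh, int dw, int loh, int lw)
{
	/* dest wins when doh/dw < loh/lw, compared without division */
	return (__s64)loh * dw > (__s64)doh * lw;
}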
iph->daddr = cp->daddr.ip;
iph->saddr = saddr;
iph->ttl = old_iph->ttl;
- ip_select_ident(iph, &rt->dst, NULL);
+ ip_select_ident(skb, &rt->dst, NULL);
/* Another hack: avoid icmp_send in ip_fragment */
skb->local_df = 1;
flowi6_to_flowi(&fl1), false)) {
if (!afinfo->route(&init_net, (struct dst_entry **)&rt2,
flowi6_to_flowi(&fl2), false)) {
- if (!memcmp(&rt1->rt6i_gateway, &rt2->rt6i_gateway,
- sizeof(rt1->rt6i_gateway)) &&
+ if (ipv6_addr_equal(rt6_nexthop(rt1),
+ rt6_nexthop(rt2)) &&
rt1->dst.dev == rt2->dst.dev)
ret = 1;
dst_release(&rt2->dst);
int synproxy_net_id;
EXPORT_SYMBOL_GPL(synproxy_net_id);
-void
+bool
synproxy_parse_options(const struct sk_buff *skb, unsigned int doff,
const struct tcphdr *th, struct synproxy_options *opts)
{
u8 buf[40], *ptr;
ptr = skb_header_pointer(skb, doff + sizeof(*th), length, buf);
- BUG_ON(ptr == NULL);
+ if (ptr == NULL)
+ return false;
opts->options = 0;
while (length > 0) {
switch (opcode) {
case TCPOPT_EOL:
- return;
+ return true;
case TCPOPT_NOP:
length--;
continue;
default:
opsize = *ptr++;
if (opsize < 2)
- return;
+ return true;
if (opsize > length)
- return;
+ return true;
switch (opcode) {
case TCPOPT_MSS:
length -= opsize;
}
}
+ return true;
}
EXPORT_SYMBOL_GPL(synproxy_parse_options);
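/*
 * With the bool return value, callers can reject a truncated or malformed
 * TCP header instead of hitting the old BUG_ON. Illustrative caller
 * pattern (a sketch; the caller's variables are assumptions, the
 * surrounding caller code is not shown in this hunk):
 */
	if (!synproxy_parse_options(skb, doff, th, &opts))
		return NF_DROP;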
verdict = NF_DROP;
if (ct)
- nfqnl_ct_seq_adjust(skb, ct, ctinfo, diff);
+ nfqnl_ct_seq_adjust(entry->skb, ct, ctinfo, diff);
}
if (nfqa[NFQA_MARK])
/* remove one skb from head of flow queue */
-static struct sk_buff *fq_dequeue_head(struct fq_flow *flow)
+static struct sk_buff *fq_dequeue_head(struct Qdisc *sch, struct fq_flow *flow)
{
struct sk_buff *skb = flow->head;
flow->head = skb->next;
skb->next = NULL;
flow->qlen--;
+ sch->qstats.backlog -= qdisc_pkt_len(skb);
+ sch->q.qlen--;
}
return skb;
}
struct fq_flow_head *head;
struct sk_buff *skb;
struct fq_flow *f;
+ u32 rate;
- skb = fq_dequeue_head(&q->internal);
+ skb = fq_dequeue_head(sch, &q->internal);
if (skb)
goto out;
fq_check_throttled(q, now);
goto begin;
}
- skb = fq_dequeue_head(f);
+ skb = fq_dequeue_head(sch, f);
if (!skb) {
head->first = f->next;
/* force a pass through old_flows to prevent starvation */
f->time_next_packet = now;
f->credit -= qdisc_pkt_len(skb);
- if (f->credit <= 0 &&
- q->rate_enable &&
- skb->sk && skb->sk->sk_state != TCP_TIME_WAIT) {
- u32 rate = skb->sk->sk_pacing_rate ?: q->flow_default_rate;
+ if (f->credit > 0 || !q->rate_enable)
+ goto out;
- rate = min(rate, q->flow_max_rate);
- if (rate) {
- u64 len = (u64)qdisc_pkt_len(skb) * NSEC_PER_SEC;
+ rate = q->flow_max_rate;
+ if (skb->sk && skb->sk->sk_state != TCP_TIME_WAIT)
+ rate = min(skb->sk->sk_pacing_rate, rate);
- do_div(len, rate);
- /* Since socket rate can change later,
- * clamp the delay to 125 ms.
- * TODO: maybe segment the too big skb, as in commit
- * e43ac79a4bc ("sch_tbf: segment too big GSO packets")
- */
- if (unlikely(len > 125 * NSEC_PER_MSEC)) {
- len = 125 * NSEC_PER_MSEC;
- q->stat_pkts_too_long++;
- }
+ if (rate != ~0U) {
+ u32 plen = max(qdisc_pkt_len(skb), q->quantum);
+ u64 len = (u64)plen * NSEC_PER_SEC;
- f->time_next_packet = now + len;
+ if (likely(rate))
+ do_div(len, rate);
+ /* Since socket rate can change later,
+ * clamp the delay to 125 ms.
+ * TODO: maybe segment the too big skb, as in commit
+ * e43ac79a4bc ("sch_tbf: segment too big GSO packets")
+ */
+ if (unlikely(len > 125 * NSEC_PER_MSEC)) {
+ len = 125 * NSEC_PER_MSEC;
+ q->stat_pkts_too_long++;
}
+
+ f->time_next_packet = now + len;
}
out:
- sch->qstats.backlog -= qdisc_pkt_len(skb);
qdisc_bstats_update(sch, skb);
- sch->q.qlen--;
qdisc_unthrottled(sch);
return skb;
}
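/*
 * The pacing branch above computes the gap to the next packet as
 * plen / rate seconds, clamped to 125 ms. Equivalent standalone sketch
 * (helper name is illustrative only):
 */
static inline u64 fq_pacing_delay_ns(u32 plen, u32 rate)
{
	u64 delay = (u64)plen * NSEC_PER_SEC;

	if (rate)
		do_div(delay, rate);	/* rate is in bytes per second */
	if (delay > 125 * NSEC_PER_MSEC)
		delay = 125 * NSEC_PER_MSEC;
	return delay;
}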
static void fq_reset(struct Qdisc *sch)
{
+ struct fq_sched_data *q = qdisc_priv(sch);
+ struct rb_root *root;
struct sk_buff *skb;
+ struct rb_node *p;
+ struct fq_flow *f;
+ unsigned int idx;
- while ((skb = fq_dequeue(sch)) != NULL)
+ while ((skb = fq_dequeue_head(sch, &q->internal)) != NULL)
kfree_skb(skb);
+
+ if (!q->fq_root)
+ return;
+
+ for (idx = 0; idx < (1U << q->fq_trees_log); idx++) {
+ root = &q->fq_root[idx];
+ while ((p = rb_first(root)) != NULL) {
+ f = container_of(p, struct fq_flow, fq_node);
+ rb_erase(p, root);
+
+ while ((skb = fq_dequeue_head(sch, f)) != NULL)
+ kfree_skb(skb);
+
+ kmem_cache_free(fq_flow_cachep, f);
+ }
+ }
+ q->new_flows.first = NULL;
+ q->old_flows.first = NULL;
+ q->delayed = RB_ROOT;
+ q->flows = 0;
+ q->inactive_flows = 0;
+ q->throttled_flows = 0;
}
static void fq_rehash(struct fq_sched_data *q,
q->quantum = nla_get_u32(tb[TCA_FQ_QUANTUM]);
if (tb[TCA_FQ_INITIAL_QUANTUM])
- q->quantum = nla_get_u32(tb[TCA_FQ_INITIAL_QUANTUM]);
+ q->initial_quantum = nla_get_u32(tb[TCA_FQ_INITIAL_QUANTUM]);
if (tb[TCA_FQ_FLOW_DEFAULT_RATE])
q->flow_default_rate = nla_get_u32(tb[TCA_FQ_FLOW_DEFAULT_RATE]);
while (sch->q.qlen > sch->limit) {
struct sk_buff *skb = fq_dequeue(sch);
+ if (!skb)
+ break;
kfree_skb(skb);
drop_count++;
}
static void fq_destroy(struct Qdisc *sch)
{
struct fq_sched_data *q = qdisc_priv(sch);
- struct rb_root *root;
- struct rb_node *p;
- unsigned int idx;
- if (q->fq_root) {
- for (idx = 0; idx < (1U << q->fq_trees_log); idx++) {
- root = &q->fq_root[idx];
- while ((p = rb_first(root)) != NULL) {
- rb_erase(p, root);
- kmem_cache_free(fq_flow_cachep,
- container_of(p, struct fq_flow, fq_node));
- }
- }
- kfree(q->fq_root);
- }
+ fq_reset(sch);
+ kfree(q->fq_root);
qdisc_watchdog_cancel(&q->watchdog);
}
if (opts == NULL)
goto nla_put_failure;
+ /* TCA_FQ_FLOW_DEFAULT_RATE is not used anymore,
+ * do not bother giving its value
+ */
if (nla_put_u32(skb, TCA_FQ_PLIMIT, sch->limit) ||
nla_put_u32(skb, TCA_FQ_FLOW_PLIMIT, q->flow_plimit) ||
nla_put_u32(skb, TCA_FQ_QUANTUM, q->quantum) ||
nla_put_u32(skb, TCA_FQ_INITIAL_QUANTUM, q->initial_quantum) ||
nla_put_u32(skb, TCA_FQ_RATE_ENABLE, q->rate_enable) ||
- nla_put_u32(skb, TCA_FQ_FLOW_DEFAULT_RATE, q->flow_default_rate) ||
nla_put_u32(skb, TCA_FQ_FLOW_MAX_RATE, q->flow_max_rate) ||
nla_put_u32(skb, TCA_FQ_BUCKETS_LOG, q->fq_trees_log))
goto nla_put_failure;
return PSCHED_NS2TICKS(ticks);
}
+static void tfifo_reset(struct Qdisc *sch)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ struct rb_node *p;
+
+ while ((p = rb_first(&q->t_root))) {
+ struct sk_buff *skb = netem_rb_to_skb(p);
+
+ rb_erase(p, &q->t_root);
+ skb->next = NULL;
+ skb->prev = NULL;
+ kfree_skb(skb);
+ }
+}
+
static void tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch)
{
struct netem_sched_data *q = qdisc_priv(sch);
skb->next = NULL;
skb->prev = NULL;
len = qdisc_pkt_len(skb);
+ sch->qstats.backlog -= len;
kfree_skb(skb);
}
}
struct netem_sched_data *q = qdisc_priv(sch);
qdisc_reset_queue(sch);
+ tfifo_reset(sch);
if (q->qdisc)
qdisc_reset(q->qdisc);
qdisc_watchdog_cancel(&q->watchdog);
break;
case ICMP_REDIRECT:
sctp_icmp_redirect(sk, transport, skb);
- err = 0;
- break;
+ /* Fall through to out_unlock. */
default:
goto out_unlock;
}
break;
case NDISC_REDIRECT:
sctp_icmp_redirect(sk, transport, skb);
- break;
+ goto out_unlock;
default:
break;
}
in6_dev_put(idev);
}
-/* Based on tcp_v6_xmit() in tcp_ipv6.c. */
static int sctp_v6_xmit(struct sk_buff *skb, struct sctp_transport *transport)
{
struct sock *sk = skb->sk;
struct ipv6_pinfo *np = inet6_sk(sk);
- struct flowi6 fl6;
-
- memset(&fl6, 0, sizeof(fl6));
-
- fl6.flowi6_proto = sk->sk_protocol;
-
- /* Fill in the dest address from the route entry passed with the skb
- * and the source address from the transport.
- */
- fl6.daddr = transport->ipaddr.v6.sin6_addr;
- fl6.saddr = transport->saddr.v6.sin6_addr;
-
- fl6.flowlabel = np->flow_label;
- IP6_ECN_flow_xmit(sk, fl6.flowlabel);
- if (ipv6_addr_type(&fl6.saddr) & IPV6_ADDR_LINKLOCAL)
- fl6.flowi6_oif = transport->saddr.v6.sin6_scope_id;
- else
- fl6.flowi6_oif = sk->sk_bound_dev_if;
-
- if (np->opt && np->opt->srcrt) {
- struct rt0_hdr *rt0 = (struct rt0_hdr *) np->opt->srcrt;
- fl6.daddr = *rt0->addr;
- }
+ struct flowi6 *fl6 = &transport->fl.u.ip6;
pr_debug("%s: skb:%p, len:%d, src:%pI6 dst:%pI6\n", __func__, skb,
- skb->len, &fl6.saddr, &fl6.daddr);
+ skb->len, &fl6->saddr, &fl6->daddr);
- SCTP_INC_STATS(sock_net(sk), SCTP_MIB_OUTSCTPPACKS);
+ IP6_ECN_flow_xmit(sk, fl6->flowlabel);
if (!(transport->param_flags & SPP_PMTUD_ENABLE))
skb->local_df = 1;
- return ip6_xmit(sk, skb, &fl6, np->opt, np->tclass);
+ SCTP_INC_STATS(sock_net(sk), SCTP_MIB_OUTSCTPPACKS);
+
+ return ip6_xmit(sk, skb, fl6, np->opt, np->tclass);
}
/* Returns the dst cache entry for the given source and destination ip
struct dst_entry *dst = NULL;
struct flowi6 *fl6 = &fl->u.ip6;
struct sctp_bind_addr *bp;
+ struct ipv6_pinfo *np = inet6_sk(sk);
struct sctp_sockaddr_entry *laddr;
union sctp_addr *baddr = NULL;
union sctp_addr *daddr = &t->ipaddr;
union sctp_addr dst_saddr;
+ struct in6_addr *final_p, final;
__u8 matchlen = 0;
__u8 bmatchlen;
sctp_scope_t scope;
pr_debug("src=%pI6 - ", &fl6->saddr);
}
- dst = ip6_dst_lookup_flow(sk, fl6, NULL, false);
+ final_p = fl6_update_dst(fl6, np->opt, &final);
+ dst = ip6_dst_lookup_flow(sk, fl6, final_p, false);
if (!asoc || saddr)
goto out;
}
}
rcu_read_unlock();
+
if (baddr) {
fl6->saddr = baddr->v6.sin6_addr;
fl6->fl6_sport = baddr->v6.sin6_port;
- dst = ip6_dst_lookup_flow(sk, fl6, NULL, false);
+ final_p = fl6_update_dst(fl6, np->opt, &final);
+ dst = ip6_dst_lookup_flow(sk, fl6, final_p, false);
}
out:
* by CRC32-C as described in <draft-ietf-tsvwg-sctpcsum-02.txt>.
*/
if (!sctp_checksum_disable) {
- if (!(dst->dev->features & NETIF_F_SCTP_CSUM)) {
+ if (!(dst->dev->features & NETIF_F_SCTP_CSUM) ||
+ (dst_xfrm(dst) != NULL) || packet->ipfragok) {
__u32 crc32 = sctp_start_cksum((__u8 *)sh, cksum_buf_len);
/* 3) Put the resultant value into the checksum field in the
unsigned int name_len;
};
+static int copy_msghdr_from_user(struct msghdr *kmsg,
+ struct msghdr __user *umsg)
+{
+ if (copy_from_user(kmsg, umsg, sizeof(struct msghdr)))
+ return -EFAULT;
+ if (kmsg->msg_namelen > sizeof(struct sockaddr_storage))
+ return -EINVAL;
+ return 0;
+}
+
static int ___sys_sendmsg(struct socket *sock, struct msghdr __user *msg,
struct msghdr *msg_sys, unsigned int flags,
struct used_address *used_address)
if (MSG_CMSG_COMPAT & flags) {
if (get_compat_msghdr(msg_sys, msg_compat))
return -EFAULT;
- } else if (copy_from_user(msg_sys, msg, sizeof(struct msghdr)))
- return -EFAULT;
+ } else {
+ err = copy_msghdr_from_user(msg_sys, msg);
+ if (err)
+ return err;
+ }
if (msg_sys->msg_iovlen > UIO_FASTIOV) {
err = -EMSGSIZE;
if (MSG_CMSG_COMPAT & flags) {
if (get_compat_msghdr(msg_sys, msg_compat))
return -EFAULT;
- } else if (copy_from_user(msg_sys, msg, sizeof(struct msghdr)))
- return -EFAULT;
+ } else {
+ err = copy_msghdr_from_user(msg_sys, msg);
+ if (err)
+ return err;
+ }
if (msg_sys->msg_iovlen > UIO_FASTIOV) {
err = -EMSGSIZE;
kref_put(&gss_auth->kref, gss_free_callback);
}
+/*
+ * Auths may be shared between rpc clients that were cloned from a
+ * common client with the same xprt, if they also share the flavor and
+ * target_name.
+ *
+ * The auth is looked up from the oldest parent sharing the same
+ * cl_xprt, and the auth itself references only that common parent
+ * (which is guaranteed to last as long as any of its descendants).
+ */
static struct gss_auth *
gss_auth_find_or_add_hashed(struct rpc_auth_create_args *args,
struct rpc_clnt *clnt,
gss_auth,
hash,
hashval) {
+ if (gss_auth->client != clnt)
+ continue;
if (gss_auth->rpc_auth.au_flavor != args->pseudoflavor)
continue;
if (gss_auth->target_name != args->target_name) {
/* Allow network administrator to have same access as root. */
if (ns_capable(net->user_ns, CAP_NET_ADMIN) ||
- uid_eq(root_uid, current_uid())) {
+ uid_eq(root_uid, current_euid())) {
int mode = (table->mode >> 6) & 7;
return (mode << 6) | (mode << 3) | mode;
}
/* Allow netns root group to have the same access as the root group */
- if (gid_eq(root_gid, current_gid())) {
+ if (in_egroup_p(root_gid)) {
int mode = (table->mode >> 3) & 7;
return (mode << 3) | mode;
}
return 0;
}
+static void unix_sock_inherit_flags(const struct socket *old,
+ struct socket *new)
+{
+ if (test_bit(SOCK_PASSCRED, &old->flags))
+ set_bit(SOCK_PASSCRED, &new->flags);
+ if (test_bit(SOCK_PASSSEC, &old->flags))
+ set_bit(SOCK_PASSSEC, &new->flags);
+}
+
static int unix_accept(struct socket *sock, struct socket *newsock, int flags)
{
struct sock *sk = sock->sk;
/* attach accepted sock to socket */
unix_state_lock(tsk);
newsock->state = SS_CONNECTED;
+ unix_sock_inherit_flags(sock, newsock);
sock_graft(tsk, newsock);
unix_state_unlock(tsk);
return 0;
rep->udiag_family = AF_UNIX;
rep->udiag_type = sk->sk_type;
rep->udiag_state = sk->sk_state;
+ rep->pad = 0;
rep->udiag_ino = sk_ino;
sock_diag_save_cookie(sk, rep->udiag_cookie);
/* check and set up bitrates */
ieee80211_set_bitrate_flags(wiphy);
-
+ rtnl_lock();
res = device_add(&rdev->wiphy.dev);
- if (res)
- return res;
-
- res = rfkill_register(rdev->rfkill);
if (res) {
- device_del(&rdev->wiphy.dev);
+ rtnl_unlock();
return res;
}
- rtnl_lock();
/* set up regulatory info */
wiphy_regulatory_register(wiphy);
rdev->wiphy.registered = true;
rtnl_unlock();
+
+ res = rfkill_register(rdev->rfkill);
+ if (res) {
+ rfkill_destroy(rdev->rfkill);
+ rdev->rfkill = NULL;
+ wiphy_unregister(&rdev->wiphy);
+ return res;
+ }
+
return 0;
}
EXPORT_SYMBOL(wiphy_register);
rtnl_unlock();
__count == 0; }));
- rfkill_unregister(rdev->rfkill);
+ if (rdev->rfkill)
+ rfkill_unregister(rdev->rfkill);
rtnl_lock();
rdev->wiphy.registered = false;
case NETDEV_PRE_UP:
if (!(wdev->wiphy->interface_modes & BIT(wdev->iftype)))
return notifier_from_errno(-EOPNOTSUPP);
- if (rfkill_blocked(rdev->rfkill))
- return notifier_from_errno(-ERFKILL);
ret = cfg80211_can_add_interface(rdev, wdev->iftype);
if (ret)
return notifier_from_errno(ret);
cfg80211_can_add_interface(struct cfg80211_registered_device *rdev,
enum nl80211_iftype iftype)
{
+ if (rfkill_blocked(rdev->rfkill))
+ return -ERFKILL;
+
return cfg80211_can_change_interface(rdev, NULL, iftype);
}
if (chan->flags & IEEE80211_CHAN_DISABLED)
continue;
wdev->wext.ibss.chandef.chan = chan;
+ wdev->wext.ibss.chandef.center_freq1 =
+ chan->center_freq;
break;
}
if (chan) {
wdev->wext.ibss.chandef.chan = chan;
wdev->wext.ibss.chandef.width = NL80211_CHAN_WIDTH_20_NOHT;
+ wdev->wext.ibss.chandef.center_freq1 = freq;
wdev->wext.ibss.channel_fixed = true;
} else {
/* cfg80211_ibss_wext_join will pick one if needed */
change = true;
}
- if (flags && (*flags & NL80211_MNTR_FLAG_ACTIVE) &&
+ if (flags && (*flags & MONITOR_FLAG_ACTIVE) &&
!(rdev->wiphy.features & NL80211_FEATURE_ACTIVE_MONITOR))
return -EOPNOTSUPP;
info->attrs[NL80211_ATTR_MNTR_FLAGS] : NULL,
&flags);
- if (!err && (flags & NL80211_MNTR_FLAG_ACTIVE) &&
+ if (!err && (flags & MONITOR_FLAG_ACTIVE) &&
!(rdev->wiphy.features & NL80211_FEATURE_ACTIVE_MONITOR))
return -EOPNOTSUPP;
struct ieee80211_radiotap_header *radiotap_header,
int max_length, const struct ieee80211_radiotap_vendor_namespaces *vns)
{
+ /* check the radiotap header can actually be present */
+ if (max_length < sizeof(struct ieee80211_radiotap_header))
+ return -EINVAL;
+
/* Linux only supports version 0 radiotap format */
if (radiotap_header->it_version)
return -EINVAL;
*/
if ((unsigned long)iterator->_arg -
- (unsigned long)iterator->_rtheader >
+ (unsigned long)iterator->_rtheader +
+ sizeof(uint32_t) >
(unsigned long)iterator->_max_length)
return -EINVAL;
}
atomic_inc(&policy->genid);
- del_timer(&policy->polq.hold_timer);
+ if (del_timer(&policy->polq.hold_timer))
+ xfrm_pol_put(policy);
xfrm_queue_purge(&policy->polq.hold_queue);
if (del_timer(&policy->timer))
spin_lock_bh(&pq->hold_queue.lock);
skb_queue_splice_init(&pq->hold_queue, &list);
- del_timer(&pq->hold_timer);
+ if (del_timer(&pq->hold_timer))
+ xfrm_pol_put(old);
spin_unlock_bh(&pq->hold_queue.lock);
if (skb_queue_empty(&list))
spin_lock_bh(&pq->hold_queue.lock);
skb_queue_splice(&list, &pq->hold_queue);
pq->timeout = XFRM_QUEUE_TMO_MIN;
- mod_timer(&pq->hold_timer, jiffies);
+ if (!mod_timer(&pq->hold_timer, jiffies))
+ xfrm_pol_hold(new);
spin_unlock_bh(&pq->hold_queue.lock);
}
spin_lock(&pq->hold_queue.lock);
skb = skb_peek(&pq->hold_queue);
+ if (!skb) {
+ spin_unlock(&pq->hold_queue.lock);
+ goto out;
+ }
dst = skb_dst(skb);
sk = skb->sk;
xfrm_decode_session(skb, &fl, dst->ops->family);
goto purge_queue;
pq->timeout = pq->timeout << 1;
- mod_timer(&pq->hold_timer, jiffies + pq->timeout);
- return;
+ if (!mod_timer(&pq->hold_timer, jiffies + pq->timeout))
+ xfrm_pol_hold(pol);
+ goto out;
}
dst_release(dst);
err = dst_output(skb);
}
+out:
+ xfrm_pol_put(pol);
return;
purge_queue:
pq->timeout = 0;
xfrm_queue_purge(&pq->hold_queue);
+ xfrm_pol_put(pol);
}
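/*
 * Descriptive note on the timer/refcount pairing used in these xfrm hunks
 * (a summary, not extra patch content): a pending hold_timer owns one
 * reference on its policy. mod_timer() returning 0 means the timer was not
 * previously pending, so the caller takes a reference with xfrm_pol_hold();
 * del_timer() returning nonzero means a pending timer was cancelled and
 * will never fire, so its reference is dropped with xfrm_pol_put().
 */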
static int xdst_queue_output(struct sk_buff *skb)
unsigned long sched_next;
struct dst_entry *dst = skb_dst(skb);
struct xfrm_dst *xdst = (struct xfrm_dst *) dst;
- struct xfrm_policy_queue *pq = &xdst->pols[0]->polq;
+ struct xfrm_policy *pol = xdst->pols[0];
+ struct xfrm_policy_queue *pq = &pol->polq;
if (pq->hold_queue.qlen > XFRM_MAX_QUEUE_LEN) {
kfree_skb(skb);
if (del_timer(&pq->hold_timer)) {
if (time_before(pq->hold_timer.expires, sched_next))
sched_next = pq->hold_timer.expires;
+ xfrm_pol_put(pol);
}
__skb_queue_tail(&pq->hold_queue, skb);
- mod_timer(&pq->hold_timer, sched_next);
+ if (!mod_timer(&pq->hold_timer, sched_next))
+ xfrm_pol_hold(pol);
spin_unlock_bh(&pq->hold_queue.lock);
switch (event) {
case XFRM_REPLAY_UPDATE:
- if (x->replay_maxdiff &&
- (x->replay.seq - x->preplay.seq < x->replay_maxdiff) &&
- (x->replay.oseq - x->preplay.oseq < x->replay_maxdiff)) {
+ if (!x->replay_maxdiff ||
+ ((x->replay.seq - x->preplay.seq < x->replay_maxdiff) &&
+ (x->replay.oseq - x->preplay.oseq < x->replay_maxdiff))) {
if (x->xflags & XFRM_TIME_DEFER)
event = XFRM_REPLAY_TIMEOUT;
else
return 0;
diff = x->replay.seq - seq;
- if (diff >= min_t(unsigned int, x->props.replay_window,
- sizeof(x->replay.bitmap) * 8)) {
+ if (diff >= x->props.replay_window) {
x->stats.replay_window++;
goto err;
}
switch (event) {
case XFRM_REPLAY_UPDATE:
- if (x->replay_maxdiff &&
- (replay_esn->seq - preplay_esn->seq < x->replay_maxdiff) &&
- (replay_esn->oseq - preplay_esn->oseq < x->replay_maxdiff)) {
+ if (!x->replay_maxdiff ||
+ ((replay_esn->seq - preplay_esn->seq < x->replay_maxdiff) &&
+ (replay_esn->oseq - preplay_esn->oseq
+ < x->replay_maxdiff))) {
if (x->xflags & XFRM_TIME_DEFER)
event = XFRM_REPLAY_TIMEOUT;
else
switch (event) {
case XFRM_REPLAY_UPDATE:
- if (!x->replay_maxdiff)
- break;
-
- if (replay_esn->seq_hi == preplay_esn->seq_hi)
- seq_diff = replay_esn->seq - preplay_esn->seq;
- else
- seq_diff = ~preplay_esn->seq + replay_esn->seq + 1;
-
- if (replay_esn->oseq_hi == preplay_esn->oseq_hi)
- oseq_diff = replay_esn->oseq - preplay_esn->oseq;
- else
- oseq_diff = ~preplay_esn->oseq + replay_esn->oseq + 1;
-
- if (seq_diff < x->replay_maxdiff &&
- oseq_diff < x->replay_maxdiff) {
+ if (x->replay_maxdiff) {
+ if (replay_esn->seq_hi == preplay_esn->seq_hi)
+ seq_diff = replay_esn->seq - preplay_esn->seq;
+ else
+ seq_diff = ~preplay_esn->seq + replay_esn->seq
+ + 1;
- if (x->xflags & XFRM_TIME_DEFER)
- event = XFRM_REPLAY_TIMEOUT;
+ if (replay_esn->oseq_hi == preplay_esn->oseq_hi)
+ oseq_diff = replay_esn->oseq
+ - preplay_esn->oseq;
else
- return;
+ oseq_diff = ~preplay_esn->oseq
+ + replay_esn->oseq + 1;
+
+ if (seq_diff >= x->replay_maxdiff ||
+ oseq_diff >= x->replay_maxdiff)
+ break;
}
+ if (x->xflags & XFRM_TIME_DEFER)
+ event = XFRM_REPLAY_TIMEOUT;
+ else
+ return;
+
break;
case XFRM_REPLAY_TIMEOUT:
memcpy(&x->sel, &p->sel, sizeof(x->sel));
memcpy(&x->lft, &p->lft, sizeof(x->lft));
x->props.mode = p->mode;
- x->props.replay_window = p->replay_window;
+ x->props.replay_window = min_t(unsigned int, p->replay_window,
+ sizeof(x->replay.bitmap) * 8);
x->props.reqid = p->reqid;
x->props.family = p->family;
memcpy(&x->props.saddr, &p->saddr, sizeof(x->props.saddr));
if (x->km.state != XFRM_STATE_VALID)
goto out;
- err = xfrm_replay_verify_len(x->replay_esn, rp);
+ err = xfrm_replay_verify_len(x->replay_esn, re);
if (err)
goto out;
# check for new externs in .h files.
if ($realfile =~ /\.h$/ &&
$line =~ /^\+\s*(extern\s+)$Type\s*$Ident\s*\(/s) {
- if (WARN("AVOID_EXTERNS",
- "extern prototypes should be avoided in .h files\n" . $herecurr) &&
+ if (CHK("AVOID_EXTERNS",
+ "extern prototypes should be avoided in .h files\n" . $herecurr) &&
$fix) {
$fixed[$linenr - 1] =~ s/(.*)\bextern\b\s*(.*)/$1$2/;
}
static unsigned int table_size, table_cnt;
static int all_symbols = 0;
static char symbol_prefix_char = '\0';
+static unsigned long long kernel_start_addr = 0;
int token_profit[0x10000];
static void usage(void)
{
- fprintf(stderr, "Usage: kallsyms [--all-symbols] [--symbol-prefix=<prefix char>] < in.map > out.S\n");
+ fprintf(stderr, "Usage: kallsyms [--all-symbols] "
+ "[--symbol-prefix=<prefix char>] "
+ "[--page-offset=<CONFIG_PAGE_OFFSET>] "
+ "< in.map > out.S\n");
exit(1);
}
int i;
int offset = 1;
+ if (s->addr < kernel_start_addr)
+ return 0;
+
/* skip prefix char */
if (symbol_prefix_char && *(s->sym + 1) == symbol_prefix_char)
offset++;
if ((*p == '"' && *(p+2) == '"') || (*p == '\'' && *(p+2) == '\''))
p++;
symbol_prefix_char = *p;
+ } else if (strncmp(argv[i], "--page-offset=", 14) == 0) {
+ const char *p = &argv[i][14];
+ kernel_start_addr = strtoull(p, NULL, 16);
} else
usage();
}
kallsymopt="${kallsymopt} --all-symbols"
fi
+ kallsymopt="${kallsymopt} --page-offset=$CONFIG_PAGE_OFFSET"
+
local aflags="${KBUILD_AFLAGS} ${KBUILD_AFLAGS_KERNEL} \
${NOSTDINC_FLAGS} ${LINUXINCLUDE} ${KBUILD_CPPFLAGS}"
/* check if the next ns is a sibling, parent, gp, .. */
parent = ns->parent;
- while (parent) {
+ while (ns != root) {
mutex_unlock(&ns->lock);
next = list_entry_next(ns, base.list);
if (!list_entry_is_head(next, &parent->sub_ns, base.list)) {
mutex_lock(&next->lock);
return next;
}
- if (parent == root)
- return NULL;
ns = parent;
parent = parent->parent;
}
* it should be.
*/
-#include <linux/crypto.h>
+#include <crypto/hash.h>
#include "include/apparmor.h"
#include "include/crypto.h"
static unsigned int apparmor_hash_size;
-static struct crypto_hash *apparmor_tfm;
+static struct crypto_shash *apparmor_tfm;
unsigned int aa_hash_size(void)
{
int aa_calc_profile_hash(struct aa_profile *profile, u32 version, void *start,
size_t len)
{
- struct scatterlist sg[2];
- struct hash_desc desc = {
- .tfm = apparmor_tfm,
- .flags = 0
- };
+ struct {
+ struct shash_desc shash;
+ char ctx[crypto_shash_descsize(apparmor_tfm)];
+ } desc;
int error = -ENOMEM;
u32 le32_version = cpu_to_le32(version);
if (!apparmor_tfm)
return 0;
- sg_init_table(sg, 2);
- sg_set_buf(&sg[0], &le32_version, 4);
- sg_set_buf(&sg[1], (u8 *) start, len);
-
profile->hash = kzalloc(apparmor_hash_size, GFP_KERNEL);
if (!profile->hash)
goto fail;
- error = crypto_hash_init(&desc);
+ desc.shash.tfm = apparmor_tfm;
+ desc.shash.flags = 0;
+
+ error = crypto_shash_init(&desc.shash);
if (error)
goto fail;
- error = crypto_hash_update(&desc, &sg[0], 4);
+ error = crypto_shash_update(&desc.shash, (u8 *) &le32_version, 4);
if (error)
goto fail;
- error = crypto_hash_update(&desc, &sg[1], len);
+ error = crypto_shash_update(&desc.shash, (u8 *) start, len);
if (error)
goto fail;
- error = crypto_hash_final(&desc, profile->hash);
+ error = crypto_shash_final(&desc.shash, profile->hash);
if (error)
goto fail;
static int __init init_profile_hash(void)
{
- struct crypto_hash *tfm;
+ struct crypto_shash *tfm;
if (!apparmor_initialized)
return 0;
- tfm = crypto_alloc_hash("sha1", 0, CRYPTO_ALG_ASYNC);
+ tfm = crypto_alloc_shash("sha1", 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm)) {
int error = PTR_ERR(tfm);
AA_ERROR("failed to setup profile sha1 hashing: %d\n", error);
return error;
}
apparmor_tfm = tfm;
- apparmor_hash_size = crypto_hash_digestsize(apparmor_tfm);
+ apparmor_hash_size = crypto_shash_digestsize(apparmor_tfm);
aa_info_message("AppArmor sha1 policy hashing enabled");
static inline void __aa_update_replacedby(struct aa_profile *orig,
struct aa_profile *new)
{
- struct aa_profile *tmp = rcu_dereference(orig->replacedby->profile);
+ struct aa_profile *tmp;
+ tmp = rcu_dereference_protected(orig->replacedby->profile,
+ mutex_is_locked(&orig->ns->lock));
rcu_assign_pointer(orig->replacedby->profile, aa_get_profile(new));
orig->flags |= PFLAG_INVALID;
aa_put_profile(tmp);
static void free_replacedby(struct aa_replacedby *r)
{
if (r) {
- aa_put_profile(rcu_dereference(r->profile));
+ /* r->profile will not be updated any more as r is dead */
+ aa_put_profile(rcu_dereference_protected(r->profile, true));
kzfree(r);
}
}
aa_put_dfa(profile->policy.dfa);
aa_put_replacedby(profile->replacedby);
+ kzfree(profile->hash);
kzfree(profile);
}
* @tclass: target security class
* @requested: requested permissions, interpreted based on @tclass
* @auditdata: auxiliary audit data
- * @flags: VFS walk flags
*
* Check the AVC to determine whether the @requested permissions are granted
* for the SID pair (@ssid, @tsid), interpreting the permissions
* permissions are granted, -%EACCES if any permissions are denied, or
* another -errno upon other errors.
*/
-int avc_has_perm_flags(u32 ssid, u32 tsid, u16 tclass,
- u32 requested, struct common_audit_data *auditdata,
- unsigned flags)
+int avc_has_perm(u32 ssid, u32 tsid, u16 tclass,
+ u32 requested, struct common_audit_data *auditdata)
{
struct av_decision avd;
int rc, rc2;
rc = avc_has_perm_noaudit(ssid, tsid, tclass, requested, 0, &avd);
- rc2 = avc_audit(ssid, tsid, tclass, requested, &avd, rc, auditdata,
- flags);
+ rc2 = avc_audit(ssid, tsid, tclass, requested, &avd, rc, auditdata);
if (rc2)
return rc2;
return rc;
rc = avc_has_perm_noaudit(sid, sid, sclass, av, 0, &avd);
if (audit == SECURITY_CAP_AUDIT) {
- int rc2 = avc_audit(sid, sid, sclass, av, &avd, rc, &ad, 0);
+ int rc2 = avc_audit(sid, sid, sclass, av, &avd, rc, &ad);
if (rc2)
return rc2;
}
static int inode_has_perm(const struct cred *cred,
struct inode *inode,
u32 perms,
- struct common_audit_data *adp,
- unsigned flags)
+ struct common_audit_data *adp)
{
struct inode_security_struct *isec;
u32 sid;
sid = cred_sid(cred);
isec = inode->i_security;
- return avc_has_perm_flags(sid, isec->sid, isec->sclass, perms, adp, flags);
+ return avc_has_perm(sid, isec->sid, isec->sclass, perms, adp);
}
/* Same as inode_has_perm, but pass explicit audit data containing
ad.type = LSM_AUDIT_DATA_DENTRY;
ad.u.dentry = dentry;
- return inode_has_perm(cred, inode, av, &ad, 0);
+ return inode_has_perm(cred, inode, av, &ad);
}
/* Same as inode_has_perm, but pass explicit audit data containing
ad.type = LSM_AUDIT_DATA_PATH;
ad.u.path = *path;
- return inode_has_perm(cred, inode, av, &ad, 0);
+ return inode_has_perm(cred, inode, av, &ad);
}
/* Same as path_has_perm, but uses the inode from the file struct. */
ad.type = LSM_AUDIT_DATA_PATH;
ad.u.path = file->f_path;
- return inode_has_perm(cred, file_inode(file), av, &ad, 0);
+ return inode_has_perm(cred, file_inode(file), av, &ad);
}
/* Check whether a task can use an open file descriptor to
/* av is zero if only checking access to the descriptor. */
rc = 0;
if (av)
- rc = inode_has_perm(cred, inode, av, &ad, 0);
+ rc = inode_has_perm(cred, inode, av, &ad);
out:
return rc;
u16 tclass, u32 requested,
struct av_decision *avd,
int result,
- struct common_audit_data *a, unsigned flags)
+ struct common_audit_data *a)
{
u32 audited, denied;
audited = avc_audit_required(requested, avd, result, 0, &denied);
return 0;
return slow_avc_audit(ssid, tsid, tclass,
requested, audited, denied,
- a, flags);
+ a, 0);
}
#define AVC_STRICT 1 /* Ignore permissive mode. */
unsigned flags,
struct av_decision *avd);
-int avc_has_perm_flags(u32 ssid, u32 tsid,
- u16 tclass, u32 requested,
- struct common_audit_data *auditdata,
- unsigned);
-
-static inline int avc_has_perm(u32 ssid, u32 tsid,
- u16 tclass, u32 requested,
- struct common_audit_data *auditdata)
-{
- return avc_has_perm_flags(ssid, tsid, tclass, requested, auditdata, 0);
-}
+int avc_has_perm(u32 ssid, u32 tsid,
+ u16 tclass, u32 requested,
+ struct common_audit_data *auditdata);
u32 avc_policy_seqno(void);
{
gsr_bits = 0;
- GCR |= GCR_WARM_RST | GCR_PRIRDY_IEN | GCR_SECRDY_IEN;
- wait_event_timeout(gsr_wq, gsr_bits & (GSR_PCR | GSR_SCR), 1);
+ GCR |= GCR_WARM_RST;
}
static inline void pxa_ac97_cold_pxa25x(void)
gsr_bits = 0;
GCR = GCR_COLD_RST;
- GCR |= GCR_CDONE_IE|GCR_SDONE_IE;
- wait_event_timeout(gsr_wq, gsr_bits & (GSR_PCR | GSR_SCR), 1);
}
#endif
static inline void pxa_ac97_cold_pxa27x(void)
{
- unsigned int timeout;
-
GCR &= GCR_COLD_RST; /* clear everything but nCRST */
GCR &= ~GCR_COLD_RST; /* then assert nCRST */
udelay(5);
clk_disable(ac97conf_clk);
GCR = GCR_COLD_RST | GCR_WARM_RST;
- timeout = 100; /* wait for the codec-ready bit to be set */
- while (!((GSR | gsr_bits) & (GSR_PCR | GSR_SCR)) && timeout--)
- mdelay(1);
}
#endif
#ifdef CONFIG_PXA3xx
static inline void pxa_ac97_warm_pxa3xx(void)
{
- int timeout = 100;
-
gsr_bits = 0;
/* Can't use interrupts */
GCR |= GCR_WARM_RST;
- while (!((GSR | gsr_bits) & (GSR_PCR | GSR_SCR)) && timeout--)
- mdelay(1);
}
static inline void pxa_ac97_cold_pxa3xx(void)
{
- int timeout = 1000;
-
/* Hold CLKBPB for 100us */
GCR = 0;
GCR = GCR_CLKBPB;
GCR &= ~(GCR_PRIRDY_IEN|GCR_SECRDY_IEN);
GCR = GCR_WARM_RST | GCR_COLD_RST;
- while (!(GSR & (GSR_PCR | GSR_SCR)) && timeout--)
- mdelay(10);
}
#endif
bool pxa2xx_ac97_try_warm_reset(struct snd_ac97 *ac97)
{
unsigned long gsr;
+ unsigned int timeout = 100;
#ifdef CONFIG_PXA25x
if (cpu_is_pxa25x())
else
#endif
BUG();
+
+ while (!((GSR | gsr_bits) & (GSR_PCR | GSR_SCR)) && timeout--)
+ mdelay(1);
+
gsr = GSR | gsr_bits;
if (!(gsr & (GSR_PCR | GSR_SCR))) {
printk(KERN_INFO "%s: warm reset timeout (GSR=%#lx)\n",
bool pxa2xx_ac97_try_cold_reset(struct snd_ac97 *ac97)
{
unsigned long gsr;
+ unsigned int timeout = 1000;
#ifdef CONFIG_PXA25x
if (cpu_is_pxa25x())
#endif
BUG();
+ while (!((GSR | gsr_bits) & (GSR_PCR | GSR_SCR)) && timeout--)
+ mdelay(1);
+
gsr = GSR | gsr_bits;
if (!(gsr & (GSR_PCR | GSR_SCR))) {
printk(KERN_INFO "%s: cold reset timeout (GSR=%#lx)\n",
static int snd_compr_free(struct inode *inode, struct file *f)
{
struct snd_compr_file *data = f->private_data;
+ struct snd_compr_runtime *runtime = data->stream.runtime;
+
+ switch (runtime->state) {
+ case SNDRV_PCM_STATE_RUNNING:
+ case SNDRV_PCM_STATE_DRAINING:
+ case SNDRV_PCM_STATE_PAUSED:
+ data->stream.ops->trigger(&data->stream, SNDRV_PCM_TRIGGER_STOP);
+ break;
+ default:
+ break;
+ }
+
data->stream.ops->free(&data->stream);
kfree(data->stream.runtime->buffer);
kfree(data->stream.runtime);
struct snd_compr *compr;
compr = device->device_data;
- snd_unregister_device(compr->direction, compr->card, compr->device);
+ snd_unregister_device(SNDRV_DEVICE_TYPE_COMPRESS, compr->card,
+ compr->device);
return 0;
}
struct snd_pcm *pcm;
list_for_each_entry(pcm, &snd_pcm_devices, list) {
+ if (pcm->internal)
+ continue;
if (pcm->card == card && pcm->device == device)
return pcm;
}
struct snd_pcm *pcm;
list_for_each_entry(pcm, &snd_pcm_devices, list) {
+ if (pcm->internal)
+ continue;
if (pcm->card == card && pcm->device > device)
return pcm->device;
else if (pcm->card->number > card->number)
{ 0x54524106, 0xffffffff, "TR28026", NULL, NULL },
{ 0x54524108, 0xffffffff, "TR28028", patch_tritech_tr28028, NULL }, // added by xin jin [07/09/99]
{ 0x54524123, 0xffffffff, "TR28602", NULL, NULL }, // only guess --jk [TR28023 = eMicro EM28023 (new CT1297)]
+{ 0x54584e03, 0xffffffff, "TLV320AIC27", NULL, NULL },
{ 0x54584e20, 0xffffffff, "TLC320AD9xC", NULL, NULL },
{ 0x56494161, 0xffffffff, "VIA1612A", NULL, NULL }, // modified ICE1232 with S/PDIF
{ 0x56494170, 0xffffffff, "VIA1617A", patch_vt1617a, NULL }, // modified VT1616 with S/PDIF
spin_unlock(&codec->power_lock);
state = hda_call_codec_suspend(codec, true);
- codec->pm_down_notified = 0;
- if (!bus->power_keep_link_on && (state & AC_PWRST_CLK_STOP_OK)) {
+ if (!codec->pm_down_notified &&
+ !bus->power_keep_link_on && (state & AC_PWRST_CLK_STOP_OK)) {
codec->pm_down_notified = 1;
hda_call_pm_notify(bus, false);
}
if (!multi)
err = create_single_cap_vol_ctl(codec, n, vol, sw,
inv_dmic);
- else if (!multi_cap_vol)
+ else if (!multi_cap_vol && !inv_dmic)
err = create_bind_cap_vol_ctl(codec, n, vol, sw);
else
err = create_multi_cap_vol_ctl(codec);
true, &spec->vmaster_mute.sw_kctl);
if (err < 0)
return err;
- if (spec->vmaster_mute.hook)
+ if (spec->vmaster_mute.hook) {
snd_hda_add_vmaster_hook(codec, &spec->vmaster_mute,
spec->vmaster_mute_enum);
+ snd_hda_sync_vmaster_hook(&spec->vmaster_mute);
+ }
}
free_kctls(spec); /* no longer needed */
}
}
+static void ad1884_fixup_thinkpad(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ struct ad198x_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE)
+ spec->gen.keep_eapd_on = 1;
+}
+
/* set magic COEFs for dmic */
static const struct hda_verb ad1884_dmic_init_verbs[] = {
{0x01, AC_VERB_SET_COEF_INDEX, 0x13f7},
AD1884_FIXUP_AMP_OVERRIDE,
AD1884_FIXUP_HP_EAPD,
AD1884_FIXUP_DMIC_COEF,
+ AD1884_FIXUP_THINKPAD,
AD1884_FIXUP_HP_TOUCHSMART,
};
.type = HDA_FIXUP_VERBS,
.v.verbs = ad1884_dmic_init_verbs,
},
+ [AD1884_FIXUP_THINKPAD] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = ad1884_fixup_thinkpad,
+ .chained = true,
+ .chain_id = AD1884_FIXUP_DMIC_COEF,
+ },
[AD1884_FIXUP_HP_TOUCHSMART] = {
.type = HDA_FIXUP_VERBS,
.v.verbs = ad1884_dmic_init_verbs,
static const struct snd_pci_quirk ad1884_fixup_tbl[] = {
SND_PCI_QUIRK(0x103c, 0x2a82, "HP Touchsmart", AD1884_FIXUP_HP_TOUCHSMART),
SND_PCI_QUIRK_VENDOR(0x103c, "HP", AD1884_FIXUP_HP_EAPD),
- SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo Thinkpad", AD1884_FIXUP_DMIC_COEF),
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo Thinkpad", AD1884_FIXUP_THINKPAD),
{}
};
/* 0x0009 - 0x0014 -> 12 test regs */
/* 0x0015 - visibility reg */
+/* Cirrus Logic CS4208 */
+#define CS4208_VENDOR_NID 0x24
+
/*
* Cirrus Logic CS4210
*
{} /* terminator */
};
+static const struct hda_verb cs4208_coef_init_verbs[] = {
+ {0x01, AC_VERB_SET_POWER_STATE, 0x00}, /* AFG: D0 */
+ {0x24, AC_VERB_SET_PROC_STATE, 0x01}, /* VPW: processing on */
+ {0x24, AC_VERB_SET_COEF_INDEX, 0x0033},
+ {0x24, AC_VERB_SET_PROC_COEF, 0x0001}, /* A1 ICS */
+ {0x24, AC_VERB_SET_COEF_INDEX, 0x0034},
+ {0x24, AC_VERB_SET_PROC_COEF, 0x1C01}, /* A1 Enable, A Thresh = 300mV */
+ {} /* terminator */
+};
+
/* Errata: CS4207 rev C0/C1/C2 Silicon
*
* http://www.cirrus.com/en/pubs/errata/ER880C3.pdf
/* init_verb sequence for C0/C1/C2 errata*/
snd_hda_sequence_write(codec, cs_errata_init_verbs);
snd_hda_sequence_write(codec, cs_coef_init_verbs);
+ } else if (spec->vendor_nid == CS4208_VENDOR_NID) {
+ snd_hda_sequence_write(codec, cs4208_coef_init_verbs);
}
snd_hda_gen_init(codec);
{} /* terminator */
};
+static const struct hda_pintbl mba6_pincfgs[] = {
+ { 0x10, 0x032120f0 }, /* HP */
+ { 0x11, 0x500000f0 },
+ { 0x12, 0x90100010 }, /* Speaker */
+ { 0x13, 0x500000f0 },
+ { 0x14, 0x500000f0 },
+ { 0x15, 0x770000f0 },
+ { 0x16, 0x770000f0 },
+ { 0x17, 0x430000f0 },
+ { 0x18, 0x43ab9030 }, /* Mic */
+ { 0x19, 0x770000f0 },
+ { 0x1a, 0x770000f0 },
+ { 0x1b, 0x770000f0 },
+ { 0x1c, 0x90a00090 },
+ { 0x1d, 0x500000f0 },
+ { 0x1e, 0x500000f0 },
+ { 0x1f, 0x500000f0 },
+ { 0x20, 0x500000f0 },
+ { 0x21, 0x430000f0 },
+ { 0x22, 0x430000f0 },
+ {} /* terminator */
+};
+
static void cs420x_fixup_gpio_13(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
/*
* CS4208 support:
- * Its layout is no longer compatible with CS4206/CS4207, and the generic
- * parser seems working fairly well, except for trivial fixups.
+ * Its layout is no longer compatible with CS4206/CS4207
*/
enum {
+ CS4208_MBA6,
CS4208_GPIO0,
};
static const struct hda_model_fixup cs4208_models[] = {
{ .id = CS4208_GPIO0, .name = "gpio0" },
+ { .id = CS4208_MBA6, .name = "mba6" },
{}
};
static const struct snd_pci_quirk cs4208_fixup_tbl[] = {
/* codec SSID */
- SND_PCI_QUIRK(0x106b, 0x7100, "MacBookPro 6,1", CS4208_GPIO0),
- SND_PCI_QUIRK(0x106b, 0x7200, "MacBookPro 6,2", CS4208_GPIO0),
+ SND_PCI_QUIRK(0x106b, 0x7100, "MacBookAir 6,1", CS4208_MBA6),
+ SND_PCI_QUIRK(0x106b, 0x7200, "MacBookAir 6,2", CS4208_MBA6),
{} /* terminator */
};
}
static const struct hda_fixup cs4208_fixups[] = {
+ [CS4208_MBA6] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = mba6_pincfgs,
+ .chained = true,
+ .chain_id = CS4208_GPIO0,
+ },
[CS4208_GPIO0] = {
.type = HDA_FIXUP_FUNC,
.v.func = cs4208_fixup_gpio0,
},
};
+/* correct the 0dB offset of input pins */
+static void cs4208_fix_amp_caps(struct hda_codec *codec, hda_nid_t adc)
+{
+ unsigned int caps;
+
+ caps = query_amp_caps(codec, adc, HDA_INPUT);
+ caps &= ~(AC_AMPCAP_OFFSET);
+ caps |= 0x02;
+ snd_hda_override_amp_caps(codec, adc, HDA_INPUT, caps);
+}
+
static int patch_cs4208(struct hda_codec *codec)
{
struct cs_spec *spec;
int err;
- spec = cs_alloc_spec(codec, 0); /* no specific w/a */
+ spec = cs_alloc_spec(codec, CS4208_VENDOR_NID);
if (!spec)
return -ENOMEM;
cs4208_fixups);
snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PRE_PROBE);
+ snd_hda_override_wcaps(codec, 0x18,
+ get_wcaps(codec, 0x18) | AC_WCAP_STEREO);
+ cs4208_fix_amp_caps(codec, 0x18);
+ cs4208_fix_amp_caps(codec, 0x1b);
+ cs4208_fix_amp_caps(codec, 0x1c);
+
err = cs_parse_auto_config(codec);
if (err < 0)
goto error;
CXT_FIXUP_INC_MIC_BOOST,
CXT_FIXUP_HEADPHONE_MIC_PIN,
CXT_FIXUP_HEADPHONE_MIC,
+ CXT_FIXUP_GPIO1,
};
static void cxt_fixup_stereo_dmic(struct hda_codec *codec,
.type = HDA_FIXUP_FUNC,
.v.func = cxt_fixup_headphone_mic,
},
+ [CXT_FIXUP_GPIO1] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ { 0x01, AC_VERB_SET_GPIO_MASK, 0x01 },
+ { 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x01 },
+ { 0x01, AC_VERB_SET_GPIO_DATA, 0x01 },
+ { }
+ },
+ },
};
static const struct snd_pci_quirk cxt5051_fixups[] = {
static const struct snd_pci_quirk cxt5066_fixups[] = {
SND_PCI_QUIRK(0x1025, 0x0543, "Acer Aspire One 522", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_GPIO1),
SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
SND_PCI_QUIRK(0x17aa, 0x215e, "Lenovo T410", CXT_PINCFG_LENOVO_TP410),
}
/*
+ * always configure channel mapping, it may have been changed by the
+ * user in the meantime
+ */
+ hdmi_setup_channel_mapping(codec, pin_nid, non_pcm, ca,
+ channels, per_pin->chmap,
+ per_pin->chmap_set);
+
+ /*
* sizeof(ai) is used instead of sizeof(*hdmi_ai) or
* sizeof(*dp_ai) to avoid partial match/update problems when
* the user switches between HDMI/DP monitors.
"pin=%d channels=%d\n",
pin_nid,
channels);
- hdmi_setup_channel_mapping(codec, pin_nid, non_pcm, ca,
- channels, per_pin->chmap,
- per_pin->chmap_set);
hdmi_stop_infoframe_trans(codec, pin_nid);
hdmi_fill_audio_infoframe(codec, pin_nid,
ai.bytes, sizeof(ai));
hdmi_start_infoframe_trans(codec, pin_nid);
- } else {
- /* For non-pcm audio switch, setup new channel mapping
- * accordingly */
- if (per_pin->non_pcm != non_pcm)
- hdmi_setup_channel_mapping(codec, pin_nid, non_pcm, ca,
- channels, per_pin->chmap,
- per_pin->chmap_set);
}
per_pin->non_pcm = non_pcm;
}
static void haswell_config_cvts(struct hda_codec *codec,
- int pin_id, int mux_id)
+ hda_nid_t pin_nid, int mux_idx)
{
struct hdmi_spec *spec = codec->spec;
- struct hdmi_spec_per_pin *per_pin;
- int pin_idx, mux_idx;
- int curr;
- int err;
+ hda_nid_t nid, end_nid;
+ int cvt_idx, curr;
+ struct hdmi_spec_per_cvt *per_cvt;
- for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
- per_pin = get_pin(spec, pin_idx);
+ /* configure all pins, including "no physical connection" ones */
+ end_nid = codec->start_nid + codec->num_nodes;
+ for (nid = codec->start_nid; nid < end_nid; nid++) {
+ unsigned int wid_caps = get_wcaps(codec, nid);
+ unsigned int wid_type = get_wcaps_type(wid_caps);
+
+ if (wid_type != AC_WID_PIN)
+ continue;
- if (pin_idx == pin_id)
+ if (nid == pin_nid)
continue;
- curr = snd_hda_codec_read(codec, per_pin->pin_nid, 0,
+ curr = snd_hda_codec_read(codec, nid, 0,
AC_VERB_GET_CONNECT_SEL, 0);
+ if (curr != mux_idx)
+ continue;
- /* Choose another unused converter */
- if (curr == mux_id) {
- err = hdmi_choose_cvt(codec, pin_idx, NULL, &mux_idx);
- if (err < 0)
- return;
- snd_printdd("HDMI: choose converter %d for pin %d\n", mux_idx, pin_idx);
- snd_hda_codec_write_cache(codec, per_pin->pin_nid, 0,
+ /* choose an unassigned converter. The converters in the
+ * connection list are in the same order as in the codec.
+ */
+ for (cvt_idx = 0; cvt_idx < spec->num_cvts; cvt_idx++) {
+ per_cvt = get_cvt(spec, cvt_idx);
+ if (!per_cvt->assigned) {
+ snd_printdd("choose cvt %d for pin nid %d\n",
+ cvt_idx, nid);
+ snd_hda_codec_write_cache(codec, nid, 0,
AC_VERB_SET_CONNECT_SEL,
- mux_idx);
+ cvt_idx);
+ break;
+ }
}
}
}
/* configure unused pins to choose other converters */
if (is_haswell(codec))
- haswell_config_cvts(codec, pin_idx, mux_idx);
+ haswell_config_cvts(codec, per_pin->pin_nid, mux_idx);
snd_hda_spdif_ctls_assign(codec, pin_idx, per_cvt->cvt_nid);
alc_write_coef_idx(codec, 0x1e, coef | 0x80);
}
+static void alc269_fixup_headset_mic(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ struct alc_spec *spec = codec->spec;
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE)
+ spec->parse_flags |= HDA_PINCFG_HEADSET_MIC;
+}
+
static void alc271_fixup_dmic(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
/* Set to manual mode */
val = alc_read_coef_idx(codec, 0x06);
alc_write_coef_idx(codec, 0x06, val & ~0x000c);
+ /* Enable Line1 input control by verb */
+ val = alc_read_coef_idx(codec, 0x1a);
+ alc_write_coef_idx(codec, 0x1a, val | (1 << 4));
break;
}
}
}
}
+static void alc290_fixup_mono_speakers(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ if (action == HDA_FIXUP_ACT_PRE_PROBE)
+ /* Remove DAC node 0x03, as it seems to be
+ giving mono output */
+ snd_hda_override_wcaps(codec, 0x03, 0);
+}
+
enum {
ALC269_FIXUP_SONY_VAIO,
ALC275_FIXUP_SONY_VAIO_GPIO2,
ALC271_FIXUP_DMIC,
ALC269_FIXUP_PCM_44K,
ALC269_FIXUP_STEREO_DMIC,
+ ALC269_FIXUP_HEADSET_MIC,
ALC269_FIXUP_QUANTA_MUTE,
ALC269_FIXUP_LIFEBOOK,
ALC269_FIXUP_AMIC,
ALC269_FIXUP_HP_GPIO_LED,
ALC269_FIXUP_INV_DMIC,
ALC269_FIXUP_LENOVO_DOCK,
+ ALC286_FIXUP_SONY_MIC_NO_PRESENCE,
ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT,
ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
ALC269_FIXUP_DELL2_MIC_NO_PRESENCE,
+ ALC269_FIXUP_DELL3_MIC_NO_PRESENCE,
ALC269_FIXUP_HEADSET_MODE,
ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC,
ALC269_FIXUP_ASUS_X101_FUNC,
ALC269VB_FIXUP_ORDISSIMO_EVE2,
ALC283_FIXUP_CHROME_BOOK,
ALC282_FIXUP_ASUS_TX300,
+ ALC283_FIXUP_INT_MIC,
+ ALC290_FIXUP_MONO_SPEAKERS,
};
static const struct hda_fixup alc269_fixups[] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc269_fixup_stereo_dmic,
},
+ [ALC269_FIXUP_HEADSET_MIC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_headset_mic,
+ },
[ALC269_FIXUP_QUANTA_MUTE] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc269_fixup_quanta_mute,
.chained = true,
.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
},
+ [ALC269_FIXUP_DELL3_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x1a, 0x01a1913c }, /* use as headset mic, without its own jack detect */
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
+ },
[ALC269_FIXUP_HEADSET_MODE] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc_fixup_headset_mode,
.type = HDA_FIXUP_FUNC,
.v.func = alc_fixup_headset_mode_no_hp_mic,
},
+ [ALC286_FIXUP_SONY_MIC_NO_PRESENCE] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x18, 0x01a1913c }, /* use as headset mic, without its own jack detect */
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MIC
+ },
[ALC269_FIXUP_ASUS_X101_FUNC] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc269_fixup_x101_headset_mic,
.type = HDA_FIXUP_FUNC,
.v.func = alc282_fixup_asus_tx300,
},
+ [ALC283_FIXUP_INT_MIC] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
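+ /* select vendor COEF index 0x1a and write 0x0011 */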
+ {0x20, AC_VERB_SET_COEF_INDEX, 0x1a},
+ {0x20, AC_VERB_SET_PROC_COEF, 0x0011},
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
+ },
+ [ALC290_FIXUP_MONO_SPEAKERS] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc290_fixup_mono_speakers,
+ .chained = true,
+ .chain_id = ALC269_FIXUP_DELL3_MIC_NO_PRESENCE,
+ },
};
static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1028, 0x0608, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x0613, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_MONO_SPEAKERS),
SND_PCI_QUIRK(0x1028, 0x15cc, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x15cd, "Dell X5 Precision", ALC269_FIXUP_DELL2_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x1043, 0x83ce, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x1043, 0x8516, "ASUS X101CH", ALC269_FIXUP_ASUS_X101),
+ SND_PCI_QUIRK(0x104d, 0x90b6, "Sony VAIO Pro 13", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x104d, 0x9073, "Sony VAIO", ALC275_FIXUP_SONY_VAIO_GPIO2),
SND_PCI_QUIRK(0x104d, 0x907b, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
- SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC),
SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
{.id = ALC269_FIXUP_STEREO_DMIC, .name = "alc269-dmic"},
{.id = ALC271_FIXUP_DMIC, .name = "alc271-dmic"},
{.id = ALC269_FIXUP_INV_DMIC, .name = "inv-dmic"},
+ {.id = ALC269_FIXUP_HEADSET_MIC, .name = "headset-mic"},
{.id = ALC269_FIXUP_LENOVO_DOCK, .name = "lenovo-dock"},
{.id = ALC269_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"},
{.id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "dell-headset-multi"},
SND_PCI_QUIRK(0x1028, 0x05d8, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x1028, 0x05db, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE),
SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800),
+ SND_PCI_QUIRK(0x1043, 0x1477, "ASUS N56VZ", ALC662_FIXUP_ASUS_MODE4),
+ SND_PCI_QUIRK(0x1043, 0x1bf3, "ASUS N76VZ", ALC662_FIXUP_ASUS_MODE4),
SND_PCI_QUIRK(0x1043, 0x8469, "ASUS mobo", ALC662_FIXUP_NO_JACK_DETECT),
SND_PCI_QUIRK(0x105b, 0x0cd6, "Foxconn", ALC662_FIXUP_ASUS_MODE2),
SND_PCI_QUIRK(0x144d, 0xc051, "Samsung R720", ALC662_FIXUP_IDEAPAD),
if ((err = hdsp_get_iobox_version(hdsp)) < 0)
return err;
}
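+ /* zero the struct so stale stack data is not copied to user space below */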
+ memset(&hdsp_version, 0, sizeof(hdsp_version));
hdsp_version.io_type = hdsp->io_type;
hdsp_version.firmware_rev = hdsp->firmware_rev;
if ((err = copy_to_user(argp, &hdsp_version, sizeof(hdsp_version))))
buf->area = dma_alloc_coherent(pcm->card->dev, size,
&buf->addr, GFP_KERNEL);
pr_debug("atmel-pcm: alloc dma buffer: area=%p, addr=%p, size=%zu\n",
- (void *)buf->area, (void *)buf->addr, size);
+ (void *)buf->area, (void *)(long)buf->addr, size);
if (!buf->area)
return -ENOMEM;
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/pinctrl/consumer.h>
#include <sound/soc.h>
struct snd_soc_card *card = &atmel_asoc_wm8904_card;
struct snd_soc_dai_link *dailink = &atmel_asoc_wm8904_dailink;
struct clk *clk_src;
- struct pinctrl *pinctrl;
int id, ret;
- pinctrl = devm_pinctrl_get_select_default(&pdev->dev);
- if (IS_ERR(pinctrl)) {
- dev_err(&pdev->dev, "failed to request pinctrl\n");
- return PTR_ERR(pinctrl);
- }
-
card->dev = &pdev->dev;
ret = atmel_asoc_wm8904_dt_init(pdev);
if (ret) {
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/i2c.h>
+#include <linux/of.h>
#include <linux/atmel-ssc.h>
case SNDRV_PCM_FORMAT_S8:
param.spctl |= 0x70;
sport->wdsize = 1;
+ break;
case SNDRV_PCM_FORMAT_S16_LE:
param.spctl |= 0xf0;
sport->wdsize = 2;
config SND_EP93XX_SOC
tristate "SoC Audio support for the Cirrus Logic EP93xx series"
- depends on ARCH_EP93XX && SND_SOC
+ depends on (ARCH_EP93XX || COMPILE_TEST) && SND_SOC
select SND_SOC_GENERIC_DMAENGINE_PCM
help
Say Y or M if you want to add support for codecs attached to
return false;
}
+static struct dma_chan *ep93xx_compat_request_channel(
+ struct snd_soc_pcm_runtime *rtd,
+ struct snd_pcm_substream *substream)
+{
+ struct snd_dmaengine_dai_dma_data *dma_data;
+
+ dma_data = snd_soc_dai_get_dma_data(rtd->cpu_dai, substream);
+
+ return snd_dmaengine_pcm_request_channel(ep93xx_pcm_dma_filter,
+ dma_data);
+}
+
static const struct snd_dmaengine_pcm_config ep93xx_dmaengine_pcm_config = {
.pcm_hardware = &ep93xx_pcm_hardware,
.compat_filter_fn = ep93xx_pcm_dma_filter,
+ .compat_request_channel = ep93xx_compat_request_channel,
.prealloc_buffer_size = 131072,
};
#include <linux/mfd/88pm860x.h>
#include <linux/slab.h>
#include <linux/delay.h>
+#include <linux/regmap.h>
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/pcm_params.h>
unsigned int filter;
struct snd_soc_codec *codec;
struct i2c_client *i2c;
+ struct regmap *regmap;
struct pm860x_chip *chip;
struct pm860x_det det;
{ -86, 29, 0}, { -56, 30, 0}, { -28, 31, 0}, { 0, 0, 0},
};
-static int pm860x_volatile(unsigned int reg)
-{
- BUG_ON(reg >= REG_CACHE_SIZE);
-
- switch (reg) {
- case PM860X_AUDIO_SUPPLIES_2:
- return 1;
- }
-
- return 0;
-}
-
-static unsigned int pm860x_read_reg_cache(struct snd_soc_codec *codec,
- unsigned int reg)
-{
- unsigned char *cache = codec->reg_cache;
-
- BUG_ON(reg >= REG_CACHE_SIZE);
-
- if (pm860x_volatile(reg))
- return cache[reg];
-
- reg += REG_CACHE_BASE;
-
- return pm860x_reg_read(codec->control_data, reg);
-}
-
-static int pm860x_write_reg_cache(struct snd_soc_codec *codec,
- unsigned int reg, unsigned int value)
-{
- unsigned char *cache = codec->reg_cache;
-
- BUG_ON(reg >= REG_CACHE_SIZE);
-
- if (!pm860x_volatile(reg))
- cache[reg] = (unsigned char)value;
-
- reg += REG_CACHE_BASE;
-
- return pm860x_reg_write(codec->control_data, reg, value);
-}
-
static int snd_soc_get_volsw_2r_st(struct snd_kcontrol *kcontrol,
struct snd_ctl_elem_value *ucontrol)
{
val = ucontrol->value.integer.value[0];
val2 = ucontrol->value.integer.value[1];
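+ /* reject out-of-range values from user space before indexing st_table */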
+ if (val >= ARRAY_SIZE(st_table) || val2 >= ARRAY_SIZE(st_table))
+ return -EINVAL;
+
err = snd_soc_update_bits(codec, reg, 0x3f, st_table[val].m);
if (err < 0)
return err;
static int pm860x_set_bias_level(struct snd_soc_codec *codec,
enum snd_soc_bias_level level)
{
+ struct pm860x_priv *pm860x = snd_soc_codec_get_drvdata(codec);
int data;
switch (level) {
if (codec->dapm.bias_level == SND_SOC_BIAS_OFF) {
/* Enable Audio PLL & Audio section */
data = AUDIO_PLL | AUDIO_SECTION_ON;
- pm860x_reg_write(codec->control_data, REG_MISC2, data);
+ pm860x_reg_write(pm860x->i2c, REG_MISC2, data);
udelay(300);
data = AUDIO_PLL | AUDIO_SECTION_RESET
| AUDIO_SECTION_ON;
- pm860x_reg_write(codec->control_data, REG_MISC2, data);
+ pm860x_reg_write(pm860x->i2c, REG_MISC2, data);
}
break;
case SND_SOC_BIAS_OFF:
data = AUDIO_PLL | AUDIO_SECTION_RESET | AUDIO_SECTION_ON;
- pm860x_set_bits(codec->control_data, REG_MISC2, data, 0);
+ pm860x_set_bits(pm860x->i2c, REG_MISC2, data, 0);
break;
}
codec->dapm.bias_level = level;
pm860x->det.lo_shrt = lo_shrt;
if (det & SND_JACK_HEADPHONE)
- pm860x_set_bits(codec->control_data, REG_HS_DET,
+ pm860x_set_bits(pm860x->i2c, REG_HS_DET,
EN_HS_DET, EN_HS_DET);
/* headset short detect */
if (hs_shrt) {
data = CLR_SHORT_HS2 | CLR_SHORT_HS1;
- pm860x_set_bits(codec->control_data, REG_SHORTS, data, data);
+ pm860x_set_bits(pm860x->i2c, REG_SHORTS, data, data);
}
/* Lineout short detect */
if (lo_shrt) {
data = CLR_SHORT_LO2 | CLR_SHORT_LO1;
- pm860x_set_bits(codec->control_data, REG_SHORTS, data, data);
+ pm860x_set_bits(pm860x->i2c, REG_SHORTS, data, data);
}
/* sync status */
pm860x->det.mic_det = det;
if (det & SND_JACK_MICROPHONE)
- pm860x_set_bits(codec->control_data, REG_MIC_DET,
+ pm860x_set_bits(pm860x->i2c, REG_MIC_DET,
MICDET_MASK, MICDET_MASK);
/* sync status */
pm860x->codec = codec;
- codec->control_data = pm860x->i2c;
+ codec->control_data = pm860x->regmap;
for (i = 0; i < 4; i++) {
ret = request_threaded_irq(pm860x->irq[i], NULL,
pm860x_set_bias_level(codec, SND_SOC_BIAS_STANDBY);
- ret = pm860x_bulk_read(codec->control_data, REG_CACHE_BASE,
- REG_CACHE_SIZE, codec->reg_cache);
- if (ret < 0) {
- dev_err(codec->dev, "Failed to fill register cache: %d\n",
- ret);
- goto out;
- }
-
return 0;
out:
static struct snd_soc_codec_driver soc_codec_dev_pm860x = {
.probe = pm860x_probe,
.remove = pm860x_remove,
- .read = pm860x_read_reg_cache,
- .write = pm860x_write_reg_cache,
- .reg_cache_size = REG_CACHE_SIZE,
- .reg_word_size = sizeof(u8),
.set_bias_level = pm860x_set_bias_level,
.controls = pm860x_snd_controls,
pm860x->chip = chip;
pm860x->i2c = (chip->id == CHIP_PM8607) ? chip->client
: chip->companion;
+ pm860x->regmap = (chip->id == CHIP_PM8607) ? chip->regmap
+ : chip->regmap_companion;
platform_set_drvdata(pdev, pm860x);
for (i = 0; i < 4; i++) {
#ifndef __88PM860X_H
#define __88PM860X_H
-/* The offset of these registers are 0xb0 */
-#define PM860X_PCM_IFACE_1 0x00
-#define PM860X_PCM_IFACE_2 0x01
-#define PM860X_PCM_IFACE_3 0x02
-#define PM860X_PCM_RATE 0x03
-#define PM860X_EC_PATH 0x04
-#define PM860X_SIDETONE_L_GAIN 0x05
-#define PM860X_SIDETONE_R_GAIN 0x06
-#define PM860X_SIDETONE_SHIFT 0x07
-#define PM860X_ADC_OFFSET_1 0x08
-#define PM860X_ADC_OFFSET_2 0x09
-#define PM860X_DMIC_DELAY 0x0a
+#define PM860X_PCM_IFACE_1 0xb0
+#define PM860X_PCM_IFACE_2 0xb1
+#define PM860X_PCM_IFACE_3 0xb2
+#define PM860X_PCM_RATE 0xb3
+#define PM860X_EC_PATH 0xb4
+#define PM860X_SIDETONE_L_GAIN 0xb5
+#define PM860X_SIDETONE_R_GAIN 0xb6
+#define PM860X_SIDETONE_SHIFT 0xb7
+#define PM860X_ADC_OFFSET_1 0xb8
+#define PM860X_ADC_OFFSET_2 0xb9
+#define PM860X_DMIC_DELAY 0xba
-#define PM860X_I2S_IFACE_1 0x0b
-#define PM860X_I2S_IFACE_2 0x0c
-#define PM860X_I2S_IFACE_3 0x0d
-#define PM860X_I2S_IFACE_4 0x0e
-#define PM860X_EQUALIZER_N0_1 0x0f
-#define PM860X_EQUALIZER_N0_2 0x10
-#define PM860X_EQUALIZER_N1_1 0x11
-#define PM860X_EQUALIZER_N1_2 0x12
-#define PM860X_EQUALIZER_D1_1 0x13
-#define PM860X_EQUALIZER_D1_2 0x14
-#define PM860X_LOFI_GAIN_LEFT 0x15
-#define PM860X_LOFI_GAIN_RIGHT 0x16
-#define PM860X_HIFIL_GAIN_LEFT 0x17
-#define PM860X_HIFIL_GAIN_RIGHT 0x18
-#define PM860X_HIFIR_GAIN_LEFT 0x19
-#define PM860X_HIFIR_GAIN_RIGHT 0x1a
-#define PM860X_DAC_OFFSET 0x1b
-#define PM860X_OFFSET_LEFT_1 0x1c
-#define PM860X_OFFSET_LEFT_2 0x1d
-#define PM860X_OFFSET_RIGHT_1 0x1e
-#define PM860X_OFFSET_RIGHT_2 0x1f
-#define PM860X_ADC_ANA_1 0x20
-#define PM860X_ADC_ANA_2 0x21
-#define PM860X_ADC_ANA_3 0x22
-#define PM860X_ADC_ANA_4 0x23
-#define PM860X_ANA_TO_ANA 0x24
-#define PM860X_HS1_CTRL 0x25
-#define PM860X_HS2_CTRL 0x26
-#define PM860X_LO1_CTRL 0x27
-#define PM860X_LO2_CTRL 0x28
-#define PM860X_EAR_CTRL_1 0x29
-#define PM860X_EAR_CTRL_2 0x2a
-#define PM860X_AUDIO_SUPPLIES_1 0x2b
-#define PM860X_AUDIO_SUPPLIES_2 0x2c
-#define PM860X_ADC_EN_1 0x2d
-#define PM860X_ADC_EN_2 0x2e
-#define PM860X_DAC_EN_1 0x2f
-#define PM860X_DAC_EN_2 0x31
-#define PM860X_AUDIO_CAL_1 0x32
-#define PM860X_AUDIO_CAL_2 0x33
-#define PM860X_AUDIO_CAL_3 0x34
-#define PM860X_AUDIO_CAL_4 0x35
-#define PM860X_AUDIO_CAL_5 0x36
-#define PM860X_ANA_INPUT_SEL_1 0x37
-#define PM860X_ANA_INPUT_SEL_2 0x38
+#define PM860X_I2S_IFACE_1 0xbb
+#define PM860X_I2S_IFACE_2 0xbc
+#define PM860X_I2S_IFACE_3 0xbd
+#define PM860X_I2S_IFACE_4 0xbe
+#define PM860X_EQUALIZER_N0_1 0xbf
+#define PM860X_EQUALIZER_N0_2 0xc0
+#define PM860X_EQUALIZER_N1_1 0xc1
+#define PM860X_EQUALIZER_N1_2 0xc2
+#define PM860X_EQUALIZER_D1_1 0xc3
+#define PM860X_EQUALIZER_D1_2 0xc4
+#define PM860X_LOFI_GAIN_LEFT 0xc5
+#define PM860X_LOFI_GAIN_RIGHT 0xc6
+#define PM860X_HIFIL_GAIN_LEFT 0xc7
+#define PM860X_HIFIL_GAIN_RIGHT 0xc8
+#define PM860X_HIFIR_GAIN_LEFT 0xc9
+#define PM860X_HIFIR_GAIN_RIGHT 0xca
+#define PM860X_DAC_OFFSET 0xcb
+#define PM860X_OFFSET_LEFT_1 0xcc
+#define PM860X_OFFSET_LEFT_2 0xcd
+#define PM860X_OFFSET_RIGHT_1 0xce
+#define PM860X_OFFSET_RIGHT_2 0xcf
+#define PM860X_ADC_ANA_1 0xd0
+#define PM860X_ADC_ANA_2 0xd1
+#define PM860X_ADC_ANA_3 0xd2
+#define PM860X_ADC_ANA_4 0xd3
+#define PM860X_ANA_TO_ANA 0xd4
+#define PM860X_HS1_CTRL 0xd5
+#define PM860X_HS2_CTRL 0xd6
+#define PM860X_LO1_CTRL 0xd7
+#define PM860X_LO2_CTRL 0xd8
+#define PM860X_EAR_CTRL_1 0xd9
+#define PM860X_EAR_CTRL_2 0xda
+#define PM860X_AUDIO_SUPPLIES_1 0xdb
+#define PM860X_AUDIO_SUPPLIES_2 0xdc
+#define PM860X_ADC_EN_1 0xdd
+#define PM860X_ADC_EN_2 0xde
+#define PM860X_DAC_EN_1 0xdf
+#define PM860X_DAC_EN_2 0xe1
+#define PM860X_AUDIO_CAL_1 0xe2
+#define PM860X_AUDIO_CAL_2 0xe3
+#define PM860X_AUDIO_CAL_3 0xe4
+#define PM860X_AUDIO_CAL_4 0xe5
+#define PM860X_AUDIO_CAL_5 0xe6
+#define PM860X_ANA_INPUT_SEL_1 0xe7
+#define PM860X_ANA_INPUT_SEL_2 0xe8
-#define PM860X_PCM_IFACE_4 0x39
-#define PM860X_I2S_IFACE_5 0x3a
+#define PM860X_PCM_IFACE_4 0xe9
+#define PM860X_I2S_IFACE_5 0xea
#define PM860X_SHORTS 0x3b
#define PM860X_PLL_ADJ_1 0x3c
/* Private data for AB8500 device-driver */
struct ab8500_codec_drvdata {
+ struct regmap *regmap;
+
/* Sidetone */
long *sid_fir_values;
enum sid_state sid_status;
*/
/* Read a register from the audio-bank of AB8500 */
-static unsigned int ab8500_codec_read_reg(struct snd_soc_codec *codec,
- unsigned int reg)
+static int ab8500_codec_read_reg(void *context, unsigned int reg,
+ unsigned int *value)
{
+ struct device *dev = context;
int status;
- unsigned int value = 0;
u8 value8;
- status = abx500_get_register_interruptible(codec->dev, AB8500_AUDIO,
- reg, &value8);
- if (status < 0) {
- dev_err(codec->dev,
- "%s: ERROR: Register (0x%02x:0x%02x) read failed (%d).\n",
- __func__, (u8)AB8500_AUDIO, (u8)reg, status);
- } else {
- dev_dbg(codec->dev,
- "%s: Read 0x%02x from register 0x%02x:0x%02x\n",
- __func__, value8, (u8)AB8500_AUDIO, (u8)reg);
- value = (unsigned int)value8;
- }
+ status = abx500_get_register_interruptible(dev, AB8500_AUDIO,
+ reg, &value8);
+ *value = (unsigned int)value8;
- return value;
+ return status;
}
/* Write to a register in the audio-bank of AB8500 */
-static int ab8500_codec_write_reg(struct snd_soc_codec *codec,
- unsigned int reg, unsigned int value)
+static int ab8500_codec_write_reg(void *context, unsigned int reg,
+ unsigned int value)
{
- int status;
-
- status = abx500_set_register_interruptible(codec->dev, AB8500_AUDIO,
- reg, value);
- if (status < 0)
- dev_err(codec->dev,
- "%s: ERROR: Register (%02x:%02x) write failed (%d).\n",
- __func__, (u8)AB8500_AUDIO, (u8)reg, status);
- else
- dev_dbg(codec->dev,
- "%s: Wrote 0x%02x into register %02x:%02x\n",
- __func__, (u8)value, (u8)AB8500_AUDIO, (u8)reg);
+ struct device *dev = context;
- return status;
+ return abx500_set_register_interruptible(dev, AB8500_AUDIO,
+ reg, value);
}
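+/* no-bus regmap: register I/O goes through the abx500 helpers above */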
+static const struct regmap_config ab8500_codec_regmap = {
+ .reg_read = ab8500_codec_read_reg,
+ .reg_write = ab8500_codec_write_reg,
+};
+
/*
* Controls - DAPM
*/
struct ab8500_codec_drvdata *drvdata = dev_get_drvdata(codec->dev);
struct device *dev = codec->dev;
bool apply_fir, apply_iir;
- int req, status;
+ unsigned int req;
+ int status;
dev_dbg(dev, "%s: Enter.\n", __func__);
mutex_lock(&drvdata->anc_lock);
req = ucontrol->value.integer.value[0];
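+ /* reject values outside the enum_anc_state table */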
+ if (req >= ARRAY_SIZE(enum_anc_state)) {
+ status = -EINVAL;
+ goto cleanup;
+ }
if (req != ANC_APPLY_FIR_IIR && req != ANC_APPLY_FIR &&
req != ANC_APPLY_IIR) {
dev_err(dev, "%s: ERROR: Unsupported status to set '%s'!\n",
case 0:
break;
case 1:
- slot = find_first_bit((unsigned long *)&tx_mask, 32);
+ slot = ffs(tx_mask);
snd_soc_update_bits(codec, AB8500_DASLOTCONF1, mask, slot);
snd_soc_update_bits(codec, AB8500_DASLOTCONF3, mask, slot);
snd_soc_update_bits(codec, AB8500_DASLOTCONF2, mask, slot);
snd_soc_update_bits(codec, AB8500_DASLOTCONF4, mask, slot);
break;
case 2:
- slot = find_first_bit((unsigned long *)&tx_mask, 32);
+ slot = ffs(tx_mask);
snd_soc_update_bits(codec, AB8500_DASLOTCONF1, mask, slot);
snd_soc_update_bits(codec, AB8500_DASLOTCONF3, mask, slot);
- slot = find_next_bit((unsigned long *)&tx_mask, 32, slot + 1);
+ slot = fls(tx_mask);
snd_soc_update_bits(codec, AB8500_DASLOTCONF2, mask, slot);
snd_soc_update_bits(codec, AB8500_DASLOTCONF4, mask, slot);
break;
case 0:
break;
case 1:
- slot = find_first_bit((unsigned long *)&rx_mask, 32);
+ slot = ffs(rx_mask);
snd_soc_update_bits(codec, AB8500_ADSLOTSEL(slot),
AB8500_MASK_SLOT(slot),
AB8500_ADSLOTSELX_AD_OUT_TO_SLOT(AB8500_AD_OUT3, slot));
break;
case 2:
- slot = find_first_bit((unsigned long *)&rx_mask, 32);
+ slot = ffs(rx_mask);
snd_soc_update_bits(codec,
AB8500_ADSLOTSEL(slot),
AB8500_MASK_SLOT(slot),
AB8500_ADSLOTSELX_AD_OUT_TO_SLOT(AB8500_AD_OUT3, slot));
- slot = find_next_bit((unsigned long *)&rx_mask, 32, slot + 1);
+ slot = fls(rx_mask);
snd_soc_update_bits(codec,
AB8500_ADSLOTSEL(slot),
AB8500_MASK_SLOT(slot),
dev_dbg(dev, "%s: Enter.\n", __func__);
+ snd_soc_codec_set_cache_io(codec, 0, 0, SND_SOC_REGMAP);
+
/* Setup AB8500 according to board-settings */
pdata = dev_get_platdata(dev->parent);
+ codec->control_data = drvdata->regmap;
+
if (np) {
if (!pdata)
pdata = devm_kzalloc(dev,
}
/* Override HW-defaults */
- ab8500_codec_write_reg(codec,
- AB8500_ANACONF5,
- BIT(AB8500_ANACONF5_HSAUTOEN));
- ab8500_codec_write_reg(codec,
- AB8500_SHORTCIRCONF,
- BIT(AB8500_SHORTCIRCONF_HSZCDDIS));
+ snd_soc_write(codec, AB8500_ANACONF5,
+ BIT(AB8500_ANACONF5_HSAUTOEN));
+ snd_soc_write(codec, AB8500_SHORTCIRCONF,
+ BIT(AB8500_SHORTCIRCONF_HSZCDDIS));
/* Add filter controls */
status = snd_soc_add_codec_controls(codec, ab8500_filter_controls,
static struct snd_soc_codec_driver ab8500_codec_driver = {
.probe = ab8500_codec_probe,
- .read = ab8500_codec_read_reg,
- .write = ab8500_codec_write_reg,
- .reg_word_size = sizeof(u8),
.controls = ab8500_ctrls,
.num_controls = ARRAY_SIZE(ab8500_ctrls),
.dapm_widgets = ab8500_dapm_widgets,
/* Create driver private-data struct */
drvdata = devm_kzalloc(&pdev->dev, sizeof(struct ab8500_codec_drvdata),
GFP_KERNEL);
+ if (!drvdata)
+ return -ENOMEM;
drvdata->sid_status = SID_UNCONFIGURED;
drvdata->anc_status = ANC_UNCONFIGURED;
dev_set_drvdata(&pdev->dev, drvdata);
+ drvdata->regmap = devm_regmap_init(&pdev->dev, NULL, &pdev->dev,
+ &ab8500_codec_regmap);
+ if (IS_ERR(drvdata->regmap)) {
+ status = PTR_ERR(drvdata->regmap);
+ dev_err(&pdev->dev, "%s: Failed to allocate regmap: %d\n",
+ __func__, status);
+ return status;
+ }
+
dev_dbg(&pdev->dev, "%s: Register codec.\n", __func__);
status = snd_soc_register_codec(&pdev->dev, &ab8500_codec_driver,
ab8500_codec_dai,
static int ab8500_codec_driver_remove(struct platform_device *pdev)
{
- dev_info(&pdev->dev, "%s Enter.\n", __func__);
+ dev_dbg(&pdev->dev, "%s Enter.\n", __func__);
snd_soc_unregister_codec(&pdev->dev);
};
struct adau1373 {
+ struct regmap *regmap;
struct adau1373_dai dais[3];
};
#define ADAU1373_PLL_CTRL4(x) (0x2c + (x) * 7)
#define ADAU1373_PLL_CTRL5(x) (0x2d + (x) * 7)
#define ADAU1373_PLL_CTRL6(x) (0x2e + (x) * 7)
-#define ADAU1373_PLL_CTRL7(x) (0x2f + (x) * 7)
#define ADAU1373_HEADDECT 0x36
#define ADAU1373_ADC_DAC_STATUS 0x37
#define ADAU1373_ADC_CTRL 0x3c
#define ADAU1373_EP_CTRL_MICBIAS1_OFFSET 4
#define ADAU1373_EP_CTRL_MICBIAS2_OFFSET 2
-static const uint8_t adau1373_default_regs[] = {
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 0x00 */
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 0x10 */
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 0x20 */
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, /* 0x30 */
- 0x00, 0x00, 0x00, 0x80, 0x00, 0x01, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x0a, 0x0a, 0x0a, 0x00, /* 0x40 */
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x08, 0x08, 0x08, 0x00, 0x00, 0x00, 0x00, /* 0x50 */
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 0x60 */
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 0x70 */
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x78, 0x18, 0x00, 0x00, 0x00, 0xc0, 0x00, 0x00, /* 0x80 */
- 0x00, 0xc0, 0x88, 0x7a, 0xdf, 0x20, 0x00, 0x00,
- 0x78, 0x18, 0x00, 0x00, 0x00, 0xc0, 0x00, 0x00, /* 0x90 */
- 0x00, 0xc0, 0x88, 0x7a, 0xdf, 0x20, 0x00, 0x00,
- 0x78, 0x18, 0x00, 0x00, 0x00, 0xc0, 0x00, 0x00, /* 0xa0 */
- 0x00, 0xc0, 0x88, 0x7a, 0xdf, 0x20, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff, 0xff, /* 0xb0 */
- 0xff, 0xff, 0xff, 0xff, 0xff, 0x1f, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 0xc0 */
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 0xd0 */
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, /* 0xe0 */
- 0x00, 0x1f, 0x0f, 0x00, 0x00,
+static const struct reg_default adau1373_reg_defaults[] = {
+ { ADAU1373_INPUT_MODE, 0x00 },
+ { ADAU1373_AINL_CTRL(0), 0x00 },
+ { ADAU1373_AINR_CTRL(0), 0x00 },
+ { ADAU1373_AINL_CTRL(1), 0x00 },
+ { ADAU1373_AINR_CTRL(1), 0x00 },
+ { ADAU1373_AINL_CTRL(2), 0x00 },
+ { ADAU1373_AINR_CTRL(2), 0x00 },
+ { ADAU1373_AINL_CTRL(3), 0x00 },
+ { ADAU1373_AINR_CTRL(3), 0x00 },
+ { ADAU1373_LLINE_OUT(0), 0x00 },
+ { ADAU1373_RLINE_OUT(0), 0x00 },
+ { ADAU1373_LLINE_OUT(1), 0x00 },
+ { ADAU1373_RLINE_OUT(1), 0x00 },
+ { ADAU1373_LSPK_OUT, 0x00 },
+ { ADAU1373_RSPK_OUT, 0x00 },
+ { ADAU1373_LHP_OUT, 0x00 },
+ { ADAU1373_RHP_OUT, 0x00 },
+ { ADAU1373_ADC_GAIN, 0x00 },
+ { ADAU1373_LADC_MIXER, 0x00 },
+ { ADAU1373_RADC_MIXER, 0x00 },
+ { ADAU1373_LLINE1_MIX, 0x00 },
+ { ADAU1373_RLINE1_MIX, 0x00 },
+ { ADAU1373_LLINE2_MIX, 0x00 },
+ { ADAU1373_RLINE2_MIX, 0x00 },
+ { ADAU1373_LSPK_MIX, 0x00 },
+ { ADAU1373_RSPK_MIX, 0x00 },
+ { ADAU1373_LHP_MIX, 0x00 },
+ { ADAU1373_RHP_MIX, 0x00 },
+ { ADAU1373_EP_MIX, 0x00 },
+ { ADAU1373_HP_CTRL, 0x00 },
+ { ADAU1373_HP_CTRL2, 0x00 },
+ { ADAU1373_LS_CTRL, 0x00 },
+ { ADAU1373_EP_CTRL, 0x00 },
+ { ADAU1373_MICBIAS_CTRL1, 0x00 },
+ { ADAU1373_MICBIAS_CTRL2, 0x00 },
+ { ADAU1373_OUTPUT_CTRL, 0x00 },
+ { ADAU1373_PWDN_CTRL1, 0x00 },
+ { ADAU1373_PWDN_CTRL2, 0x00 },
+ { ADAU1373_PWDN_CTRL3, 0x00 },
+ { ADAU1373_DPLL_CTRL(0), 0x00 },
+ { ADAU1373_PLL_CTRL1(0), 0x00 },
+ { ADAU1373_PLL_CTRL2(0), 0x00 },
+ { ADAU1373_PLL_CTRL3(0), 0x00 },
+ { ADAU1373_PLL_CTRL4(0), 0x00 },
+ { ADAU1373_PLL_CTRL5(0), 0x00 },
+ { ADAU1373_PLL_CTRL6(0), 0x02 },
+ { ADAU1373_DPLL_CTRL(1), 0x00 },
+ { ADAU1373_PLL_CTRL1(1), 0x00 },
+ { ADAU1373_PLL_CTRL2(1), 0x00 },
+ { ADAU1373_PLL_CTRL3(1), 0x00 },
+ { ADAU1373_PLL_CTRL4(1), 0x00 },
+ { ADAU1373_PLL_CTRL5(1), 0x00 },
+ { ADAU1373_PLL_CTRL6(1), 0x02 },
+ { ADAU1373_HEADDECT, 0x00 },
+ { ADAU1373_ADC_CTRL, 0x00 },
+ { ADAU1373_CLK_SRC_DIV(0), 0x00 },
+ { ADAU1373_CLK_SRC_DIV(1), 0x00 },
+ { ADAU1373_DAI(0), 0x0a },
+ { ADAU1373_DAI(1), 0x0a },
+ { ADAU1373_DAI(2), 0x0a },
+ { ADAU1373_BCLKDIV(0), 0x00 },
+ { ADAU1373_BCLKDIV(1), 0x00 },
+ { ADAU1373_BCLKDIV(2), 0x00 },
+ { ADAU1373_SRC_RATIOA(0), 0x00 },
+ { ADAU1373_SRC_RATIOB(0), 0x00 },
+ { ADAU1373_SRC_RATIOA(1), 0x00 },
+ { ADAU1373_SRC_RATIOB(1), 0x00 },
+ { ADAU1373_SRC_RATIOA(2), 0x00 },
+ { ADAU1373_SRC_RATIOB(2), 0x00 },
+ { ADAU1373_DEEMP_CTRL, 0x00 },
+ { ADAU1373_SRC_DAI_CTRL(0), 0x08 },
+ { ADAU1373_SRC_DAI_CTRL(1), 0x08 },
+ { ADAU1373_SRC_DAI_CTRL(2), 0x08 },
+ { ADAU1373_DIN_MIX_CTRL(0), 0x00 },
+ { ADAU1373_DIN_MIX_CTRL(1), 0x00 },
+ { ADAU1373_DIN_MIX_CTRL(2), 0x00 },
+ { ADAU1373_DIN_MIX_CTRL(3), 0x00 },
+ { ADAU1373_DIN_MIX_CTRL(4), 0x00 },
+ { ADAU1373_DOUT_MIX_CTRL(0), 0x00 },
+ { ADAU1373_DOUT_MIX_CTRL(1), 0x00 },
+ { ADAU1373_DOUT_MIX_CTRL(2), 0x00 },
+ { ADAU1373_DOUT_MIX_CTRL(3), 0x00 },
+ { ADAU1373_DOUT_MIX_CTRL(4), 0x00 },
+ { ADAU1373_DAI_PBL_VOL(0), 0x00 },
+ { ADAU1373_DAI_PBR_VOL(0), 0x00 },
+ { ADAU1373_DAI_PBL_VOL(1), 0x00 },
+ { ADAU1373_DAI_PBR_VOL(1), 0x00 },
+ { ADAU1373_DAI_PBL_VOL(2), 0x00 },
+ { ADAU1373_DAI_PBR_VOL(2), 0x00 },
+ { ADAU1373_DAI_RECL_VOL(0), 0x00 },
+ { ADAU1373_DAI_RECR_VOL(0), 0x00 },
+ { ADAU1373_DAI_RECL_VOL(1), 0x00 },
+ { ADAU1373_DAI_RECR_VOL(1), 0x00 },
+ { ADAU1373_DAI_RECL_VOL(2), 0x00 },
+ { ADAU1373_DAI_RECR_VOL(2), 0x00 },
+ { ADAU1373_DAC1_PBL_VOL, 0x00 },
+ { ADAU1373_DAC1_PBR_VOL, 0x00 },
+ { ADAU1373_DAC2_PBL_VOL, 0x00 },
+ { ADAU1373_DAC2_PBR_VOL, 0x00 },
+ { ADAU1373_ADC_RECL_VOL, 0x00 },
+ { ADAU1373_ADC_RECR_VOL, 0x00 },
+ { ADAU1373_DMIC_RECL_VOL, 0x00 },
+ { ADAU1373_DMIC_RECR_VOL, 0x00 },
+ { ADAU1373_VOL_GAIN1, 0x00 },
+ { ADAU1373_VOL_GAIN2, 0x00 },
+ { ADAU1373_VOL_GAIN3, 0x00 },
+ { ADAU1373_HPF_CTRL, 0x00 },
+ { ADAU1373_BASS1, 0x00 },
+ { ADAU1373_BASS2, 0x00 },
+ { ADAU1373_DRC(0) + 0x0, 0x78 },
+ { ADAU1373_DRC(0) + 0x1, 0x18 },
+ { ADAU1373_DRC(0) + 0x2, 0x00 },
+ { ADAU1373_DRC(0) + 0x3, 0x00 },
+ { ADAU1373_DRC(0) + 0x4, 0x00 },
+ { ADAU1373_DRC(0) + 0x5, 0xc0 },
+ { ADAU1373_DRC(0) + 0x6, 0x00 },
+ { ADAU1373_DRC(0) + 0x7, 0x00 },
+ { ADAU1373_DRC(0) + 0x8, 0x00 },
+ { ADAU1373_DRC(0) + 0x9, 0xc0 },
+ { ADAU1373_DRC(0) + 0xa, 0x88 },
+ { ADAU1373_DRC(0) + 0xb, 0x7a },
+ { ADAU1373_DRC(0) + 0xc, 0xdf },
+ { ADAU1373_DRC(0) + 0xd, 0x20 },
+ { ADAU1373_DRC(0) + 0xe, 0x00 },
+ { ADAU1373_DRC(0) + 0xf, 0x00 },
+ { ADAU1373_DRC(1) + 0x0, 0x78 },
+ { ADAU1373_DRC(1) + 0x1, 0x18 },
+ { ADAU1373_DRC(1) + 0x2, 0x00 },
+ { ADAU1373_DRC(1) + 0x3, 0x00 },
+ { ADAU1373_DRC(1) + 0x4, 0x00 },
+ { ADAU1373_DRC(1) + 0x5, 0xc0 },
+ { ADAU1373_DRC(1) + 0x6, 0x00 },
+ { ADAU1373_DRC(1) + 0x7, 0x00 },
+ { ADAU1373_DRC(1) + 0x8, 0x00 },
+ { ADAU1373_DRC(1) + 0x9, 0xc0 },
+ { ADAU1373_DRC(1) + 0xa, 0x88 },
+ { ADAU1373_DRC(1) + 0xb, 0x7a },
+ { ADAU1373_DRC(1) + 0xc, 0xdf },
+ { ADAU1373_DRC(1) + 0xd, 0x20 },
+ { ADAU1373_DRC(1) + 0xe, 0x00 },
+ { ADAU1373_DRC(1) + 0xf, 0x00 },
+ { ADAU1373_DRC(2) + 0x0, 0x78 },
+ { ADAU1373_DRC(2) + 0x1, 0x18 },
+ { ADAU1373_DRC(2) + 0x2, 0x00 },
+ { ADAU1373_DRC(2) + 0x3, 0x00 },
+ { ADAU1373_DRC(2) + 0x4, 0x00 },
+ { ADAU1373_DRC(2) + 0x5, 0xc0 },
+ { ADAU1373_DRC(2) + 0x6, 0x00 },
+ { ADAU1373_DRC(2) + 0x7, 0x00 },
+ { ADAU1373_DRC(2) + 0x8, 0x00 },
+ { ADAU1373_DRC(2) + 0x9, 0xc0 },
+ { ADAU1373_DRC(2) + 0xa, 0x88 },
+ { ADAU1373_DRC(2) + 0xb, 0x7a },
+ { ADAU1373_DRC(2) + 0xc, 0xdf },
+ { ADAU1373_DRC(2) + 0xd, 0x20 },
+ { ADAU1373_DRC(2) + 0xe, 0x00 },
+ { ADAU1373_DRC(2) + 0xf, 0x00 },
+ { ADAU1373_3D_CTRL1, 0x00 },
+ { ADAU1373_3D_CTRL2, 0x00 },
+ { ADAU1373_FDSP_SEL1, 0x00 },
+ { ADAU1373_FDSP_SEL2, 0x00 },
+ { ADAU1373_FDSP_SEL3, 0x00 },
+ { ADAU1373_FDSP_SEL4, 0x00 },
+ { ADAU1373_DIGMICCTRL, 0x00 },
+ { ADAU1373_DIGEN, 0x00 },
};
static const unsigned int adau1373_out_tlv[] = {
struct snd_kcontrol *kcontrol, int event)
{
struct snd_soc_codec *codec = w->codec;
+ struct adau1373 *adau1373 = snd_soc_codec_get_drvdata(codec);
unsigned int pll_id = w->name[3] - '1';
unsigned int val;
else
val = 0;
- snd_soc_update_bits(codec, ADAU1373_PLL_CTRL6(pll_id),
+ regmap_update_bits(adau1373->regmap, ADAU1373_PLL_CTRL6(pll_id),
ADAU1373_PLL_CTRL6_PLL_EN, val);
if (SND_SOC_DAPM_EVENT_ON(event))
adau1373_dai->enable_src = (div != 0);
- snd_soc_update_bits(codec, ADAU1373_BCLKDIV(dai->id),
+ regmap_update_bits(adau1373->regmap, ADAU1373_BCLKDIV(dai->id),
ADAU1373_BCLKDIV_SR_MASK | ADAU1373_BCLKDIV_BCLK_MASK,
(div << 2) | ADAU1373_BCLKDIV_64);
return -EINVAL;
}
- return snd_soc_update_bits(codec, ADAU1373_DAI(dai->id),
+ return regmap_update_bits(adau1373->regmap, ADAU1373_DAI(dai->id),
ADAU1373_DAI_WLEN_MASK, ctrl);
}
return -EINVAL;
}
- snd_soc_update_bits(codec, ADAU1373_DAI(dai->id),
+ regmap_update_bits(adau1373->regmap, ADAU1373_DAI(dai->id),
~ADAU1373_DAI_WLEN_MASK, ctrl);
return 0;
adau1373_dai->sysclk = freq;
adau1373_dai->clk_src = clk_id;
- snd_soc_update_bits(dai->codec, ADAU1373_BCLKDIV(dai->id),
+ regmap_update_bits(adau1373->regmap, ADAU1373_BCLKDIV(dai->id),
ADAU1373_BCLKDIV_SOURCE, clk_id << 5);
return 0;
static int adau1373_set_pll(struct snd_soc_codec *codec, int pll_id,
int source, unsigned int freq_in, unsigned int freq_out)
{
+ struct adau1373 *adau1373 = snd_soc_codec_get_drvdata(codec);
unsigned int dpll_div = 0;
unsigned int x, r, n, m, i, j, mode;
if (dpll_div) {
dpll_div = 11 - dpll_div;
- snd_soc_update_bits(codec, ADAU1373_PLL_CTRL6(pll_id),
+ regmap_update_bits(adau1373->regmap, ADAU1373_PLL_CTRL6(pll_id),
ADAU1373_PLL_CTRL6_DPLL_BYPASS, 0);
} else {
- snd_soc_update_bits(codec, ADAU1373_PLL_CTRL6(pll_id),
+ regmap_update_bits(adau1373->regmap, ADAU1373_PLL_CTRL6(pll_id),
ADAU1373_PLL_CTRL6_DPLL_BYPASS,
ADAU1373_PLL_CTRL6_DPLL_BYPASS);
}
- snd_soc_write(codec, ADAU1373_DPLL_CTRL(pll_id),
+ regmap_write(adau1373->regmap, ADAU1373_DPLL_CTRL(pll_id),
(source << 4) | dpll_div);
- snd_soc_write(codec, ADAU1373_PLL_CTRL1(pll_id), (m >> 8) & 0xff);
- snd_soc_write(codec, ADAU1373_PLL_CTRL2(pll_id), m & 0xff);
- snd_soc_write(codec, ADAU1373_PLL_CTRL3(pll_id), (n >> 8) & 0xff);
- snd_soc_write(codec, ADAU1373_PLL_CTRL4(pll_id), n & 0xff);
- snd_soc_write(codec, ADAU1373_PLL_CTRL5(pll_id),
+ regmap_write(adau1373->regmap, ADAU1373_PLL_CTRL1(pll_id), (m >> 8) & 0xff);
+ regmap_write(adau1373->regmap, ADAU1373_PLL_CTRL2(pll_id), m & 0xff);
+ regmap_write(adau1373->regmap, ADAU1373_PLL_CTRL3(pll_id), (n >> 8) & 0xff);
+ regmap_write(adau1373->regmap, ADAU1373_PLL_CTRL4(pll_id), n & 0xff);
+ regmap_write(adau1373->regmap, ADAU1373_PLL_CTRL5(pll_id),
(r << 3) | (x << 1) | mode);
/* Set sysclk to pll_rate / 4 */
- snd_soc_update_bits(codec, ADAU1373_CLK_SRC_DIV(pll_id), 0x3f, 0x09);
+ regmap_update_bits(adau1373->regmap, ADAU1373_CLK_SRC_DIV(pll_id), 0x3f, 0x09);
return 0;
}
-static void adau1373_load_drc_settings(struct snd_soc_codec *codec,
+static void adau1373_load_drc_settings(struct adau1373 *adau1373,
unsigned int nr, uint8_t *drc)
{
unsigned int i;
for (i = 0; i < ADAU1373_DRC_SIZE; ++i)
- snd_soc_write(codec, ADAU1373_DRC(nr) + i, drc[i]);
+ regmap_write(adau1373->regmap, ADAU1373_DRC(nr) + i, drc[i]);
}
static bool adau1373_valid_micbias(enum adau1373_micbias_voltage micbias)
static int adau1373_probe(struct snd_soc_codec *codec)
{
+ struct adau1373 *adau1373 = snd_soc_codec_get_drvdata(codec);
struct adau1373_platform_data *pdata = codec->dev->platform_data;
bool lineout_differential = false;
unsigned int val;
int ret;
int i;
- ret = snd_soc_codec_set_cache_io(codec, 8, 8, SND_SOC_I2C);
+ ret = snd_soc_codec_set_cache_io(codec, 0, 0, SND_SOC_REGMAP);
if (ret) {
dev_err(codec->dev, "failed to set cache I/O: %d\n", ret);
return ret;
return -EINVAL;
for (i = 0; i < pdata->num_drc; ++i) {
- adau1373_load_drc_settings(codec, i,
+ adau1373_load_drc_settings(adau1373, i,
pdata->drc_setting[i]);
}
if (pdata->input_differential[i])
val |= BIT(i);
}
- snd_soc_write(codec, ADAU1373_INPUT_MODE, val);
+ regmap_write(adau1373->regmap, ADAU1373_INPUT_MODE, val);
val = 0;
if (pdata->lineout_differential)
val |= ADAU1373_OUTPUT_CTRL_LDIFF;
if (pdata->lineout_ground_sense)
val |= ADAU1373_OUTPUT_CTRL_LNFBEN;
- snd_soc_write(codec, ADAU1373_OUTPUT_CTRL, val);
+ regmap_write(adau1373->regmap, ADAU1373_OUTPUT_CTRL, val);
lineout_differential = pdata->lineout_differential;
- snd_soc_write(codec, ADAU1373_EP_CTRL,
+ regmap_write(adau1373->regmap, ADAU1373_EP_CTRL,
(pdata->micbias1 << ADAU1373_EP_CTRL_MICBIAS1_OFFSET) |
(pdata->micbias2 << ADAU1373_EP_CTRL_MICBIAS2_OFFSET));
}
ARRAY_SIZE(adau1373_lineout2_controls));
}
- snd_soc_write(codec, ADAU1373_ADC_CTRL,
+ regmap_write(adau1373->regmap, ADAU1373_ADC_CTRL,
ADAU1373_ADC_CTRL_RESET_FORCE | ADAU1373_ADC_CTRL_PEAK_DETECT);
return 0;
static int adau1373_set_bias_level(struct snd_soc_codec *codec,
enum snd_soc_bias_level level)
{
+ struct adau1373 *adau1373 = snd_soc_codec_get_drvdata(codec);
+
switch (level) {
case SND_SOC_BIAS_ON:
break;
case SND_SOC_BIAS_PREPARE:
break;
case SND_SOC_BIAS_STANDBY:
- snd_soc_update_bits(codec, ADAU1373_PWDN_CTRL3,
+ regmap_update_bits(adau1373->regmap, ADAU1373_PWDN_CTRL3,
ADAU1373_PWDN_CTRL3_PWR_EN, ADAU1373_PWDN_CTRL3_PWR_EN);
break;
case SND_SOC_BIAS_OFF:
- snd_soc_update_bits(codec, ADAU1373_PWDN_CTRL3,
+ regmap_update_bits(adau1373->regmap, ADAU1373_PWDN_CTRL3,
ADAU1373_PWDN_CTRL3_PWR_EN, 0);
break;
}
static int adau1373_suspend(struct snd_soc_codec *codec)
{
- return adau1373_set_bias_level(codec, SND_SOC_BIAS_OFF);
+ struct adau1373 *adau1373 = snd_soc_codec_get_drvdata(codec);
+ int ret;
+
+ ret = adau1373_set_bias_level(codec, SND_SOC_BIAS_OFF);
+ regcache_cache_only(adau1373->regmap, true);
+
+ return ret;
}
static int adau1373_resume(struct snd_soc_codec *codec)
{
+ struct adau1373 *adau1373 = snd_soc_codec_get_drvdata(codec);
+
+ regcache_cache_only(adau1373->regmap, false);
adau1373_set_bias_level(codec, SND_SOC_BIAS_STANDBY);
- snd_soc_cache_sync(codec);
+ regcache_sync(adau1373->regmap);
return 0;
}
+static bool adau1373_register_volatile(struct device *dev, unsigned int reg)
+{
+ switch (reg) {
+ case ADAU1373_SOFT_RESET:
+ case ADAU1373_ADC_DAC_STATUS:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static const struct regmap_config adau1373_regmap_config = {
+ .val_bits = 8,
+ .reg_bits = 8,
+
+ .volatile_reg = adau1373_register_volatile,
+ .max_register = ADAU1373_SOFT_RESET,
+
+ .cache_type = REGCACHE_RBTREE,
+ .reg_defaults = adau1373_reg_defaults,
+ .num_reg_defaults = ARRAY_SIZE(adau1373_reg_defaults),
+};
+
static struct snd_soc_codec_driver adau1373_codec_driver = {
.probe = adau1373_probe,
.remove = adau1373_remove,
.resume = adau1373_resume,
.set_bias_level = adau1373_set_bias_level,
.idle_bias_off = true,
- .reg_cache_size = ARRAY_SIZE(adau1373_default_regs),
- .reg_cache_default = adau1373_default_regs,
- .reg_word_size = sizeof(uint8_t),
.set_pll = adau1373_set_pll,
if (!adau1373)
return -ENOMEM;
+ adau1373->regmap = devm_regmap_init_i2c(client,
+ &adau1373_regmap_config);
+ if (IS_ERR(adau1373->regmap))
+ return PTR_ERR(adau1373->regmap);
+
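+ /* soft-reset the codec so the hardware matches the regmap defaults */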
+ regmap_write(adau1373->regmap, ADAU1373_SOFT_RESET, 0x00);
+
dev_set_drvdata(&client->dev, adau1373);
ret = snd_soc_register_codec(&client->dev, &adau1373_codec_driver,
#define ADAV80X_PLL_OUTE_SYSCLKPD(x) BIT(2 - (x))
-static u8 adav80x_default_regs[] = {
- 0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x02, 0x01, 0x80, 0x26, 0x00, 0x00,
- 0x02, 0x40, 0x20, 0x00, 0x09, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x04, 0x00, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0xd1, 0x92, 0xb1, 0x37,
- 0x48, 0xd2, 0xfb, 0xca, 0xd2, 0x15, 0xe8, 0x29, 0xb9, 0x6a, 0xda, 0x2b,
- 0xb7, 0xc0, 0x11, 0x65, 0x5c, 0xf6, 0xff, 0x8d, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xa5, 0x00, 0x00,
- 0x00, 0xe8, 0x46, 0xe1, 0x5b, 0xd3, 0x43, 0x77, 0x93, 0xa7, 0x44, 0xee,
- 0x32, 0x12, 0xc0, 0x11, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x3f, 0x3f,
- 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x1d, 0x00, 0x00, 0x00, 0x00,
- 0x00, 0x00, 0x00, 0x00, 0x52, 0x00,
+static struct reg_default adav80x_reg_defaults[] = {
+ { ADAV80X_PLAYBACK_CTRL, 0x01 },
+ { ADAV80X_AUX_IN_CTRL, 0x01 },
+ { ADAV80X_REC_CTRL, 0x02 },
+ { ADAV80X_AUX_OUT_CTRL, 0x01 },
+ { ADAV80X_DPATH_CTRL1, 0xc0 },
+ { ADAV80X_DPATH_CTRL2, 0x11 },
+ { ADAV80X_DAC_CTRL1, 0x00 },
+ { ADAV80X_DAC_CTRL2, 0x00 },
+ { ADAV80X_DAC_CTRL3, 0x00 },
+ { ADAV80X_DAC_L_VOL, 0xff },
+ { ADAV80X_DAC_R_VOL, 0xff },
+ { ADAV80X_PGA_L_VOL, 0x00 },
+ { ADAV80X_PGA_R_VOL, 0x00 },
+ { ADAV80X_ADC_CTRL1, 0x00 },
+ { ADAV80X_ADC_CTRL2, 0x00 },
+ { ADAV80X_ADC_L_VOL, 0xff },
+ { ADAV80X_ADC_R_VOL, 0xff },
+ { ADAV80X_PLL_CTRL1, 0x00 },
+ { ADAV80X_PLL_CTRL2, 0x00 },
+ { ADAV80X_ICLK_CTRL1, 0x00 },
+ { ADAV80X_ICLK_CTRL2, 0x00 },
+ { ADAV80X_PLL_CLK_SRC, 0x00 },
+ { ADAV80X_PLL_OUTE, 0x00 },
};
struct adav80x {
- enum snd_soc_control_type control_type;
+ struct regmap *regmap;
enum adav80x_clk_src clk_src;
unsigned int sysclk;
val = ADAV80X_DAC_CTRL2_DEEMPH_NONE;
}
- return snd_soc_update_bits(codec, ADAV80X_DAC_CTRL2,
+ return regmap_update_bits(adav80x->regmap, ADAV80X_DAC_CTRL2,
ADAV80X_DAC_CTRL2_DEEMPH_MASK, val);
}
return -EINVAL;
}
- snd_soc_update_bits(codec, adav80x_port_ctrl_regs[dai->id][0],
+ regmap_update_bits(adav80x->regmap, adav80x_port_ctrl_regs[dai->id][0],
ADAV80X_CAPTURE_MODE_MASK | ADAV80X_CAPTURE_MODE_MASTER,
capture);
- snd_soc_write(codec, adav80x_port_ctrl_regs[dai->id][1], playback);
+ regmap_write(adav80x->regmap, adav80x_port_ctrl_regs[dai->id][1],
+ playback);
adav80x->dai_fmt[dai->id] = fmt & SND_SOC_DAIFMT_FORMAT_MASK;
static int adav80x_set_adc_clock(struct snd_soc_codec *codec,
unsigned int sample_rate)
{
+ struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec);
unsigned int val;
if (sample_rate <= 48000)
else
val = ADAV80X_ADC_CTRL1_MODULATOR_64FS;
- snd_soc_update_bits(codec, ADAV80X_ADC_CTRL1,
+ regmap_update_bits(adav80x->regmap, ADAV80X_ADC_CTRL1,
ADAV80X_ADC_CTRL1_MODULATOR_MASK, val);
return 0;
static int adav80x_set_dac_clock(struct snd_soc_codec *codec,
unsigned int sample_rate)
{
+ struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec);
unsigned int val;
if (sample_rate <= 48000)
else
val = ADAV80X_DAC_CTRL2_DIV2 | ADAV80X_DAC_CTRL2_INTERPOL_128FS;
- snd_soc_update_bits(codec, ADAV80X_DAC_CTRL2,
+ regmap_update_bits(adav80x->regmap, ADAV80X_DAC_CTRL2,
ADAV80X_DAC_CTRL2_DIV_MASK | ADAV80X_DAC_CTRL2_INTERPOL_MASK,
val);
static int adav80x_set_capture_pcm_format(struct snd_soc_codec *codec,
struct snd_soc_dai *dai, snd_pcm_format_t format)
{
+ struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec);
unsigned int val;
switch (format) {
return -EINVAL;
}
- snd_soc_update_bits(codec, adav80x_port_ctrl_regs[dai->id][0],
+ regmap_update_bits(adav80x->regmap, adav80x_port_ctrl_regs[dai->id][0],
ADAV80X_CAPTURE_WORD_LEN_MASK, val);
return 0;
return -EINVAL;
}
- snd_soc_update_bits(codec, adav80x_port_ctrl_regs[dai->id][1],
+ regmap_update_bits(adav80x->regmap, adav80x_port_ctrl_regs[dai->id][1],
ADAV80X_PLAYBACK_MODE_MASK, val);
return 0;
ADAV80X_ICLK_CTRL1_ICLK2_SRC(clk_id);
iclk_ctrl2 = ADAV80X_ICLK_CTRL2_ICLK1_SRC(clk_id);
- snd_soc_write(codec, ADAV80X_ICLK_CTRL1, iclk_ctrl1);
- snd_soc_write(codec, ADAV80X_ICLK_CTRL2, iclk_ctrl2);
+ regmap_write(adav80x->regmap, ADAV80X_ICLK_CTRL1,
+ iclk_ctrl1);
+ regmap_write(adav80x->regmap, ADAV80X_ICLK_CTRL2,
+ iclk_ctrl2);
snd_soc_dapm_sync(&codec->dapm);
}
mask = ADAV80X_PLL_OUTE_SYSCLKPD(clk_id);
if (freq == 0) {
- snd_soc_update_bits(codec, ADAV80X_PLL_OUTE, mask, mask);
+ regmap_update_bits(adav80x->regmap, ADAV80X_PLL_OUTE,
+ mask, mask);
adav80x->sysclk_pd[clk_id] = true;
} else {
- snd_soc_update_bits(codec, ADAV80X_PLL_OUTE, mask, 0);
+ regmap_update_bits(adav80x->regmap, ADAV80X_PLL_OUTE,
+ mask, 0);
adav80x->sysclk_pd[clk_id] = false;
}
return -EINVAL;
}
- snd_soc_update_bits(codec, ADAV80X_PLL_CTRL1, ADAV80X_PLL_CTRL1_PLLDIV,
- pll_ctrl1);
- snd_soc_update_bits(codec, ADAV80X_PLL_CTRL2,
+ regmap_update_bits(adav80x->regmap, ADAV80X_PLL_CTRL1,
+ ADAV80X_PLL_CTRL1_PLLDIV, pll_ctrl1);
+ regmap_update_bits(adav80x->regmap, ADAV80X_PLL_CTRL2,
ADAV80X_PLL_CTRL2_PLL_MASK(pll_id), pll_ctrl2);
if (source != adav80x->pll_src) {
else
pll_src = ADAV80X_PLL_CLK_SRC_PLL_XIN(pll_id);
- snd_soc_update_bits(codec, ADAV80X_PLL_CLK_SRC,
+ regmap_update_bits(adav80x->regmap, ADAV80X_PLL_CLK_SRC,
ADAV80X_PLL_CLK_SRC_PLL_MASK(pll_id), pll_src);
adav80x->pll_src = source;
static int adav80x_set_bias_level(struct snd_soc_codec *codec,
enum snd_soc_bias_level level)
{
+ struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec);
unsigned int mask = ADAV80X_DAC_CTRL1_PD;
switch (level) {
case SND_SOC_BIAS_PREPARE:
break;
case SND_SOC_BIAS_STANDBY:
- snd_soc_update_bits(codec, ADAV80X_DAC_CTRL1, mask, 0x00);
+ regmap_update_bits(adav80x->regmap, ADAV80X_DAC_CTRL1, mask,
+ 0x00);
break;
case SND_SOC_BIAS_OFF:
- snd_soc_update_bits(codec, ADAV80X_DAC_CTRL1, mask, mask);
+ regmap_update_bits(adav80x->regmap, ADAV80X_DAC_CTRL1, mask,
+ mask);
break;
}
int ret;
struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec);
- ret = snd_soc_codec_set_cache_io(codec, 7, 9, adav80x->control_type);
+ ret = snd_soc_codec_set_cache_io(codec, 0, 0, SND_SOC_REGMAP);
if (ret) {
dev_err(codec->dev, "failed to set cache I/O: %d\n", ret);
return ret;
snd_soc_dapm_force_enable_pin(&codec->dapm, "PLL2");
/* Power down S/PDIF receiver, since it is currently not supported */
- snd_soc_write(codec, ADAV80X_PLL_OUTE, 0x20);
+ regmap_write(adav80x->regmap, ADAV80X_PLL_OUTE, 0x20);
/* Disable DAC zero flag */
- snd_soc_write(codec, ADAV80X_DAC_CTRL3, 0x6);
+ regmap_write(adav80x->regmap, ADAV80X_DAC_CTRL3, 0x6);
return adav80x_set_bias_level(codec, SND_SOC_BIAS_STANDBY);
}
static int adav80x_suspend(struct snd_soc_codec *codec)
{
- return adav80x_set_bias_level(codec, SND_SOC_BIAS_OFF);
+ struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec);
+ int ret;
+
+ ret = adav80x_set_bias_level(codec, SND_SOC_BIAS_OFF);
+ regcache_cache_only(adav80x->regmap, true);
+
+ return ret;
}
static int adav80x_resume(struct snd_soc_codec *codec)
{
+ struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec);
+
+ regcache_cache_only(adav80x->regmap, false);
adav80x_set_bias_level(codec, SND_SOC_BIAS_STANDBY);
- codec->cache_sync = 1;
- snd_soc_cache_sync(codec);
+ regcache_sync(adav80x->regmap);
return 0;
}
.set_pll = adav80x_set_pll,
.set_sysclk = adav80x_set_sysclk,
- .reg_word_size = sizeof(u8),
- .reg_cache_size = ARRAY_SIZE(adav80x_default_regs),
- .reg_cache_default = adav80x_default_regs,
-
.controls = adav80x_controls,
.num_controls = ARRAY_SIZE(adav80x_controls),
.dapm_widgets = adav80x_dapm_widgets,
.num_dapm_routes = ARRAY_SIZE(adav80x_dapm_routes),
};
-static int adav80x_bus_probe(struct device *dev,
- enum snd_soc_control_type control_type)
+static int adav80x_bus_probe(struct device *dev, struct regmap *regmap)
{
struct adav80x *adav80x;
int ret;
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
adav80x = kzalloc(sizeof(*adav80x), GFP_KERNEL);
if (!adav80x)
return -ENOMEM;
+
dev_set_drvdata(dev, adav80x);
- adav80x->control_type = control_type;
+ adav80x->regmap = regmap;
ret = snd_soc_register_codec(dev, &adav80x_codec_driver,
adav80x_dais, ARRAY_SIZE(adav80x_dais));
}
#if defined(CONFIG_SPI_MASTER)
+static const struct regmap_config adav80x_spi_regmap_config = {
+ .val_bits = 8,
+ .pad_bits = 1,
+ .reg_bits = 7,
+ .read_flag_mask = 0x01,
+
+ .max_register = ADAV80X_PLL_OUTE,
+
+ .cache_type = REGCACHE_RBTREE,
+ .reg_defaults = adav80x_reg_defaults,
+ .num_reg_defaults = ARRAY_SIZE(adav80x_reg_defaults),
+};
+
static const struct spi_device_id adav80x_spi_id[] = {
{ "adav801", 0 },
{ }
static int adav80x_spi_probe(struct spi_device *spi)
{
- return adav80x_bus_probe(&spi->dev, SND_SOC_SPI);
+ return adav80x_bus_probe(&spi->dev,
+ devm_regmap_init_spi(spi, &adav80x_spi_regmap_config));
}
static int adav80x_spi_remove(struct spi_device *spi)
#endif
#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+static const struct regmap_config adav80x_i2c_regmap_config = {
+ .val_bits = 8,
+ .pad_bits = 1,
+ .reg_bits = 7,
+
+ .max_register = ADAV80X_PLL_OUTE,
+
+ .cache_type = REGCACHE_RBTREE,
+ .reg_defaults = adav80x_reg_defaults,
+ .num_reg_defaults = ARRAY_SIZE(adav80x_reg_defaults),
+};
+
static const struct i2c_device_id adav80x_i2c_id[] = {
{ "adav803", 0 },
{ }
static int adav80x_i2c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
- return adav80x_bus_probe(&client->dev, SND_SOC_I2C);
+ return adav80x_bus_probe(&client->dev,
+ devm_regmap_init_i2c(client, &adav80x_i2c_regmap_config));
}
static int adav80x_i2c_remove(struct i2c_client *client)
#define AK4104_TX_TXE (1 << 0)
#define AK4104_TX_V (1 << 1)
-#define DRV_NAME "ak4104-codec"
-
struct ak4104_private {
struct regmap *regmap;
};
};
MODULE_DEVICE_TABLE(of, ak4104_of_match);
+static const struct spi_device_id ak4104_id_table[] = {
+ { "ak4104", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(spi, ak4104_id_table);
+
static struct spi_driver ak4104_spi_driver = {
.driver = {
- .name = DRV_NAME,
+ .name = "ak4104",
.owner = THIS_MODULE,
.of_match_table = ak4104_of_match,
},
+ .id_table = ak4104_id_table,
.probe = ak4104_spi_probe,
.remove = ak4104_spi_remove,
};
* This operation came from example code of
* "ASAHI KASEI AK4642" (japanese) manual p94.
*/
- snd_soc_write(codec, SG_SL1, PMMP | MGAIN0);
+ snd_soc_update_bits(codec, SG_SL1, PMMP | MGAIN0, PMMP | MGAIN0);
snd_soc_write(codec, TIMER, ZTM(0x3) | WTM(0x3));
snd_soc_write(codec, ALC_CTL1, ALC | LMTH0);
snd_soc_update_bits(codec, PW_MGMT1, PMADL, PMADL);
*/
default:
return -EINVAL;
- break;
}
snd_soc_update_bits(codec, MD_CTL1, DIF_MASK, data);
break;
default:
return -EINVAL;
- break;
}
snd_soc_update_bits(codec, MD_CTL2, FS_MASK, rate);
{
struct arizona *arizona = fll->arizona;
int ret;
+ bool use_sync = false;
/*
* If we have both REFCLK and SYNCCLK then enable both,
* otherwise apply the SYNCCLK settings to REFCLK.
*/
- if (fll->ref_src >= 0 && fll->ref_src != fll->sync_src) {
+ if (fll->ref_src >= 0 && fll->ref_freq &&
+ fll->ref_src != fll->sync_src) {
regmap_update_bits(arizona->regmap, fll->base + 5,
ARIZONA_FLL1_OUTDIV_MASK,
ref->outdiv << ARIZONA_FLL1_OUTDIV_SHIFT);
arizona_apply_fll(arizona, fll->base, ref, fll->ref_src,
false);
- if (fll->sync_src >= 0)
+ if (fll->sync_src >= 0) {
arizona_apply_fll(arizona, fll->base + 0x10, sync,
fll->sync_src, true);
+ use_sync = true;
+ }
} else if (fll->sync_src >= 0) {
regmap_update_bits(arizona->regmap, fll->base + 5,
ARIZONA_FLL1_OUTDIV_MASK,
* Increase the bandwidth if we're not using a low frequency
* sync source.
*/
- if (fll->sync_src >= 0 && fll->sync_freq > 100000)
+ if (use_sync && fll->sync_freq > 100000)
regmap_update_bits(arizona->regmap, fll->base + 0x17,
ARIZONA_FLL1_SYNC_BW, 0);
else
regmap_update_bits(arizona->regmap, fll->base + 1,
ARIZONA_FLL1_ENA, ARIZONA_FLL1_ENA);
- if (fll->ref_src >= 0 && fll->sync_src >= 0 &&
- fll->ref_src != fll->sync_src)
+ if (use_sync)
regmap_update_bits(arizona->regmap, fll->base + 0x11,
ARIZONA_FLL1_SYNC_ENA,
ARIZONA_FLL1_SYNC_ENA);
if (fll->ref_src == source && fll->ref_freq == Fref)
return 0;
- if (fll->fout && Fref > 0) {
- ret = arizona_calc_fll(fll, &ref, Fref, fll->fout);
- if (ret != 0)
- return ret;
+ if (fll->fout) {
+ if (Fref > 0) {
+ ret = arizona_calc_fll(fll, &ref, Fref, fll->fout);
+ if (ret != 0)
+ return ret;
+ }
if (fll->sync_src >= 0) {
ret = arizona_calc_fll(fll, &sync, fll->sync_freq,
#include <sound/soc.h>
#include <sound/initval.h>
-static inline unsigned int cq93vc_read(struct snd_soc_codec *codec,
- unsigned int reg)
-{
- struct davinci_vc *davinci_vc = codec->control_data;
-
- return readl(davinci_vc->base + reg);
-}
-
-static inline int cq93vc_write(struct snd_soc_codec *codec, unsigned int reg,
- unsigned int value)
-{
- struct davinci_vc *davinci_vc = codec->control_data;
-
- writel(value, davinci_vc->base + reg);
-
- return 0;
-}
-
static const struct snd_kcontrol_new cq93vc_snd_controls[] = {
SOC_SINGLE("PGA Capture Volume", DAVINCI_VC_REG05, 0, 0x03, 0),
SOC_SINGLE("Mono DAC Playback Volume", DAVINCI_VC_REG09, 0, 0x3f, 0),
static int cq93vc_mute(struct snd_soc_dai *dai, int mute)
{
struct snd_soc_codec *codec = dai->codec;
- u8 reg = cq93vc_read(codec, DAVINCI_VC_REG09) & ~DAVINCI_VC_REG09_MUTE;
+ u8 reg;
if (mute)
- cq93vc_write(codec, DAVINCI_VC_REG09,
- reg | DAVINCI_VC_REG09_MUTE);
+ reg = DAVINCI_VC_REG09_MUTE;
else
- cq93vc_write(codec, DAVINCI_VC_REG09, reg);
+ reg = 0;
+
+ snd_soc_update_bits(codec, DAVINCI_VC_REG09, DAVINCI_VC_REG09_MUTE,
+ reg);
return 0;
}
int clk_id, unsigned int freq, int dir)
{
struct snd_soc_codec *codec = codec_dai->codec;
- struct davinci_vc *davinci_vc = codec->control_data;
+ struct davinci_vc *davinci_vc = codec->dev->platform_data;
switch (freq) {
case 22579200:
{
switch (level) {
case SND_SOC_BIAS_ON:
- cq93vc_write(codec, DAVINCI_VC_REG12,
+ snd_soc_write(codec, DAVINCI_VC_REG12,
DAVINCI_VC_REG12_POWER_ALL_ON);
break;
case SND_SOC_BIAS_PREPARE:
break;
case SND_SOC_BIAS_STANDBY:
- cq93vc_write(codec, DAVINCI_VC_REG12,
+ snd_soc_write(codec, DAVINCI_VC_REG12,
DAVINCI_VC_REG12_POWER_ALL_OFF);
break;
case SND_SOC_BIAS_OFF:
/* force all power off */
- cq93vc_write(codec, DAVINCI_VC_REG12,
+ snd_soc_write(codec, DAVINCI_VC_REG12,
DAVINCI_VC_REG12_POWER_ALL_OFF);
break;
}
struct davinci_vc *davinci_vc = codec->dev->platform_data;
davinci_vc->cq93vc.codec = codec;
- codec->control_data = davinci_vc;
+ codec->control_data = davinci_vc->regmap;
- /* Set controls */
- snd_soc_add_codec_controls(codec, cq93vc_snd_controls,
- ARRAY_SIZE(cq93vc_snd_controls));
+ snd_soc_codec_set_cache_io(codec, 32, 32, SND_SOC_REGMAP);
/* Off, with power on */
cq93vc_set_bias_level(codec, SND_SOC_BIAS_STANDBY);
}
static struct snd_soc_codec_driver soc_codec_dev_cq93vc = {
- .read = cq93vc_read,
- .write = cq93vc_write,
.set_bias_level = cq93vc_set_bias_level,
.probe = cq93vc_probe,
.remove = cq93vc_remove,
.resume = cq93vc_resume,
+ .controls = cq93vc_snd_controls,
+ .num_controls = ARRAY_SIZE(cq93vc_snd_controls),
};
static int cq93vc_platform_probe(struct platform_device *pdev)
#include <linux/gpio.h>
#include <linux/i2c.h>
#include <linux/spi/spi.h>
+#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <sound/pcm.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/delay.h>
+#include <linux/gpio.h>
#include <linux/pm.h>
#include <linux/i2c.h>
#include <linux/input.h>
cs42l52->sysclk = CS42L52_DEFAULT_CLK;
cs42l52->config.format = CS42L52_DEFAULT_FORMAT;
- /* Set Platform MICx CFG */
- snd_soc_update_bits(codec, CS42L52_MICA_CTL,
- CS42L52_MIC_CTL_TYPE_MASK,
- cs42l52->pdata.mica_cfg <<
- CS42L52_MIC_CTL_TYPE_SHIFT);
-
- snd_soc_update_bits(codec, CS42L52_MICB_CTL,
- CS42L52_MIC_CTL_TYPE_MASK,
- cs42l52->pdata.micb_cfg <<
- CS42L52_MIC_CTL_TYPE_SHIFT);
-
- /* if Single Ended, Get Mic_Select */
- if (cs42l52->pdata.mica_cfg)
- snd_soc_update_bits(codec, CS42L52_MICA_CTL,
- CS42L52_MIC_CTL_MIC_SEL_MASK,
- cs42l52->pdata.mica_sel <<
- CS42L52_MIC_CTL_MIC_SEL_SHIFT);
- if (cs42l52->pdata.micb_cfg)
- snd_soc_update_bits(codec, CS42L52_MICB_CTL,
- CS42L52_MIC_CTL_MIC_SEL_MASK,
- cs42l52->pdata.micb_sel <<
- CS42L52_MIC_CTL_MIC_SEL_SHIFT);
-
- /* Set Platform Charge Pump Freq */
- snd_soc_update_bits(codec, CS42L52_CHARGE_PUMP,
- CS42L52_CHARGE_PUMP_MASK,
- cs42l52->pdata.chgfreq <<
- CS42L52_CHARGE_PUMP_SHIFT);
-
- /* Set Platform Bias Level */
- snd_soc_update_bits(codec, CS42L52_IFACE_CTL2,
- CS42L52_IFACE_CTL2_BIAS_LVL,
- cs42l52->pdata.micbias_lvl);
-
return ret;
}
const struct i2c_device_id *id)
{
struct cs42l52_private *cs42l52;
+ struct cs42l52_platform_data *pdata = dev_get_platdata(&i2c_client->dev);
int ret;
unsigned int devid = 0;
unsigned int reg;
return ret;
}
- i2c_set_clientdata(i2c_client, cs42l52);
+ if (pdata)
+ cs42l52->pdata = *pdata;
+
+ if (cs42l52->pdata.reset_gpio) {
+ ret = gpio_request_one(cs42l52->pdata.reset_gpio,
+ GPIOF_OUT_INIT_HIGH, "CS42L52 /RST");
+ if (ret < 0) {
+ dev_err(&i2c_client->dev, "Failed to request /RST %d: %d\n",
+ cs42l52->pdata.reset_gpio, ret);
+ return ret;
+ }
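+ /* pulse /RST low then high to reset the codec */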
+ gpio_set_value_cansleep(cs42l52->pdata.reset_gpio, 0);
+ gpio_set_value_cansleep(cs42l52->pdata.reset_gpio, 1);
+ }
- if (dev_get_platdata(&i2c_client->dev))
- memcpy(&cs42l52->pdata, dev_get_platdata(&i2c_client->dev),
- sizeof(cs42l52->pdata));
+ i2c_set_clientdata(i2c_client, cs42l52);
ret = regmap_register_patch(cs42l52->regmap, cs42l52_threshold_patch,
ARRAY_SIZE(cs42l52_threshold_patch));
return ret;
}
- regcache_cache_only(cs42l52->regmap, true);
+ dev_info(&i2c_client->dev, "Cirrus Logic CS42L52, Revision: %02X\n",
+ reg & 0xFF);
+
+ /* Set Platform Data */
+ if (cs42l52->pdata.mica_cfg)
+ regmap_update_bits(cs42l52->regmap, CS42L52_MICA_CTL,
+ CS42L52_MIC_CTL_TYPE_MASK,
+ cs42l52->pdata.mica_cfg <<
+ CS42L52_MIC_CTL_TYPE_SHIFT);
+
+ if (cs42l52->pdata.micb_cfg)
+ regmap_update_bits(cs42l52->regmap, CS42L52_MICB_CTL,
+ CS42L52_MIC_CTL_TYPE_MASK,
+ cs42l52->pdata.micb_cfg <<
+ CS42L52_MIC_CTL_TYPE_SHIFT);
+
+ if (cs42l52->pdata.mica_sel)
+ regmap_update_bits(cs42l52->regmap, CS42L52_MICA_CTL,
+ CS42L52_MIC_CTL_MIC_SEL_MASK,
+ cs42l52->pdata.mica_sel <<
+ CS42L52_MIC_CTL_MIC_SEL_SHIFT);
+ if (cs42l52->pdata.micb_sel)
+ regmap_update_bits(cs42l52->regmap, CS42L52_MICB_CTL,
+ CS42L52_MIC_CTL_MIC_SEL_MASK,
+ cs42l52->pdata.micb_sel <<
+ CS42L52_MIC_CTL_MIC_SEL_SHIFT);
+
+ if (cs42l52->pdata.chgfreq)
+ regmap_update_bits(cs42l52->regmap, CS42L52_CHARGE_PUMP,
+ CS42L52_CHARGE_PUMP_MASK,
+ cs42l52->pdata.chgfreq <<
+ CS42L52_CHARGE_PUMP_SHIFT);
+
+ if (cs42l52->pdata.micbias_lvl)
+ regmap_update_bits(cs42l52->regmap, CS42L52_IFACE_CTL2,
+ CS42L52_IFACE_CTL2_BIAS_LVL,
+ cs42l52->pdata.micbias_lvl);
ret = snd_soc_register_codec(&i2c_client->dev,
&soc_codec_dev_cs42l52, &cs42l52_dai, 1);
#define CS42L52_FIX_BITS1 0x3E
#define CS42L52_FIX_BITS2 0x47
-#define CS42L52_MAX_REGISTER 0x34
+#define CS42L52_MAX_REGISTER 0x47
#endif
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/delay.h>
+#include <linux/of_gpio.h>
#include <linux/pm.h>
#include <linux/i2c.h>
#include <linux/regmap.h>
#include <sound/soc-dapm.h>
#include <sound/initval.h>
#include <sound/tlv.h>
+#include <sound/cs42l73.h>
#include "cs42l73.h"
struct sp_config {
u32 srate;
};
struct cs42l73_private {
+ struct cs42l73_platform_data pdata;
struct sp_config config[3];
struct regmap *regmap;
u32 sysclk;
SOC_ENUM_SINGLE(CS42L73_NGCAB, 0,
ARRAY_SIZE(cs42l73_ng_delay_text), cs42l73_ng_delay_text);
-static const char * const charge_pump_freq_text[] = {
- "0", "1", "2", "3", "4",
- "5", "6", "7", "8", "9",
- "10", "11", "12", "13", "14", "15" };
-
-static const struct soc_enum charge_pump_enum =
- SOC_ENUM_SINGLE(CS42L73_CPFCHC, 4,
- ARRAY_SIZE(charge_pump_freq_text), charge_pump_freq_text);
-
static const char * const cs42l73_mono_mix_texts[] = {
"Left", "Right", "Mono Mix"};
SOC_SINGLE("NG Threshold", CS42L73_NGCAB, 2, 7, 0),
SOC_ENUM("NG Delay", ng_delay_enum),
- SOC_ENUM("Charge Pump Frequency", charge_pump_enum),
-
SOC_DOUBLE_R_TLV("XSP-IP Volume",
CS42L73_XSPAIPAA, CS42L73_XSPBIPBA, 0, 0x3F, 1,
attn_tlv),
switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
case SND_SOC_DAIFMT_CBM_CFM:
- mmcc |= MS_MASTER;
+ mmcc |= CS42L73_MS_MASTER;
break;
case SND_SOC_DAIFMT_CBS_CFS:
- mmcc &= ~MS_MASTER;
+ mmcc &= ~CS42L73_MS_MASTER;
break;
default:
switch (format) {
case SND_SOC_DAIFMT_I2S:
- spc &= ~SPDIF_PCM;
+ spc &= ~CS42L73_SPDIF_PCM;
break;
case SND_SOC_DAIFMT_DSP_A:
case SND_SOC_DAIFMT_DSP_B:
- if (mmcc & MS_MASTER) {
+ if (mmcc & CS42L73_MS_MASTER) {
dev_err(codec->dev,
"PCM format in slave mode only\n");
return -EINVAL;
"PCM format is not supported on ASP port\n");
return -EINVAL;
}
- spc |= SPDIF_PCM;
+ spc |= CS42L73_SPDIF_PCM;
break;
default:
return -EINVAL;
}
- if (spc & SPDIF_PCM) {
+ if (spc & CS42L73_SPDIF_PCM) {
/* Clear PCM mode, clear PCM_BIT_ORDER bit for MSB->LSB */
- spc &= ~(PCM_MODE_MASK | PCM_BIT_ORDER);
+ spc &= ~(CS42L73_PCM_MODE_MASK | CS42L73_PCM_BIT_ORDER);
switch (format) {
case SND_SOC_DAIFMT_DSP_B:
if (inv == SND_SOC_DAIFMT_IB_IF)
- spc |= PCM_MODE0;
+ spc |= CS42L73_PCM_MODE0;
if (inv == SND_SOC_DAIFMT_IB_NF)
- spc |= PCM_MODE1;
+ spc |= CS42L73_PCM_MODE1;
break;
case SND_SOC_DAIFMT_DSP_A:
if (inv == SND_SOC_DAIFMT_IB_IF)
- spc |= PCM_MODE1;
+ spc |= CS42L73_PCM_MODE1;
break;
default:
return -EINVAL;
int mclk_coeff;
int srate = params_rate(params);
- if (priv->config[id].mmcc & MS_MASTER) {
+ if (priv->config[id].mmcc & CS42L73_MS_MASTER) {
/* CS42L73 Master */
/* MCLK -> srate */
mclk_coeff =
priv->config[id].spc &= 0xFC;
/* Use SCLK=64*Fs if internal MCLK >= 6.4MHz */
if (priv->mclk >= 6400000)
- priv->config[id].spc |= MCK_SCLK_64FS;
+ priv->config[id].spc |= CS42L73_MCK_SCLK_64FS;
else
- priv->config[id].spc |= MCK_SCLK_MCLK;
+ priv->config[id].spc |= CS42L73_MCK_SCLK_MCLK;
} else {
/* CS42L73 Slave */
priv->config[id].spc &= 0xFC;
- priv->config[id].spc |= MCK_SCLK_64FS;
+ priv->config[id].spc |= CS42L73_MCK_SCLK_64FS;
}
/* Update ASRCs */
priv->config[id].srate = srate;
switch (level) {
case SND_SOC_BIAS_ON:
- snd_soc_update_bits(codec, CS42L73_DMMCC, MCLKDIS, 0);
- snd_soc_update_bits(codec, CS42L73_PWRCTL1, PDN, 0);
+ snd_soc_update_bits(codec, CS42L73_DMMCC, CS42L73_MCLKDIS, 0);
+ snd_soc_update_bits(codec, CS42L73_PWRCTL1, CS42L73_PDN, 0);
break;
case SND_SOC_BIAS_PREPARE:
regcache_cache_only(cs42l73->regmap, false);
regcache_sync(cs42l73->regmap);
}
- snd_soc_update_bits(codec, CS42L73_PWRCTL1, PDN, 1);
+ snd_soc_update_bits(codec, CS42L73_PWRCTL1, CS42L73_PDN, 1);
break;
case SND_SOC_BIAS_OFF:
- snd_soc_update_bits(codec, CS42L73_PWRCTL1, PDN, 1);
+ snd_soc_update_bits(codec, CS42L73_PWRCTL1, CS42L73_PDN, 1);
if (cs42l73->shutdwn_delay > 0) {
mdelay(cs42l73->shutdwn_delay);
cs42l73->shutdwn_delay = 0;
* down.
*/
}
- snd_soc_update_bits(codec, CS42L73_DMMCC, MCLKDIS, 1);
+ snd_soc_update_bits(codec, CS42L73_DMMCC, CS42L73_MCLKDIS, 1);
break;
}
codec->dapm.bias_level = level;
return ret;
}
- regcache_cache_only(cs42l73->regmap, true);
-
cs42l73_set_bias_level(codec, SND_SOC_BIAS_STANDBY);
- cs42l73->mclksel = CS42L73_CLKID_MCLK1; /* MCLK1 as master clk */
+ /* Set Charge Pump Frequency */
+ if (cs42l73->pdata.chgfreq)
+ snd_soc_update_bits(codec, CS42L73_CPFCHC,
+ CS42L73_CHARGEPUMP_MASK,
+ cs42l73->pdata.chgfreq << 4);
+
+ /* MCLK1 as master clk */
+ cs42l73->mclksel = CS42L73_CLKID_MCLK1;
cs42l73->mclk = 0;
return ret;
const struct i2c_device_id *id)
{
struct cs42l73_private *cs42l73;
+ struct cs42l73_platform_data *pdata = dev_get_platdata(&i2c_client->dev);
int ret;
unsigned int devid = 0;
unsigned int reg;
+ u32 val32;
cs42l73 = devm_kzalloc(&i2c_client->dev, sizeof(struct cs42l73_private),
GFP_KERNEL);
return -ENOMEM;
}
- i2c_set_clientdata(i2c_client, cs42l73);
-
cs42l73->regmap = devm_regmap_init_i2c(i2c_client, &cs42l73_regmap);
if (IS_ERR(cs42l73->regmap)) {
ret = PTR_ERR(cs42l73->regmap);
dev_err(&i2c_client->dev, "regmap_init() failed: %d\n", ret);
return ret;
}
+
+ if (pdata) {
+ cs42l73->pdata = *pdata;
+ } else {
+ pdata = devm_kzalloc(&i2c_client->dev,
+ sizeof(struct cs42l73_platform_data),
+ GFP_KERNEL);
+ if (!pdata) {
+ dev_err(&i2c_client->dev, "could not allocate pdata\n");
+ return -ENOMEM;
+ }
+ if (i2c_client->dev.of_node) {
+ if (of_property_read_u32(i2c_client->dev.of_node,
+ "chgfreq", &val32) >= 0)
+ pdata->chgfreq = val32;
+ }
+ pdata->reset_gpio = of_get_named_gpio(i2c_client->dev.of_node,
+ "reset-gpio", 0);
+ cs42l73->pdata = *pdata;
+ }
+
+ i2c_set_clientdata(i2c_client, cs42l73);
+
+ if (cs42l73->pdata.reset_gpio) {
+ ret = gpio_request_one(cs42l73->pdata.reset_gpio,
+ GPIOF_OUT_INIT_HIGH, "CS42L73 /RST");
+ if (ret < 0) {
+ dev_err(&i2c_client->dev, "Failed to request /RST %d: %d\n",
+ cs42l73->pdata.reset_gpio, ret);
+ return ret;
+ }
+ gpio_set_value_cansleep(cs42l73->pdata.reset_gpio, 0);
+ gpio_set_value_cansleep(cs42l73->pdata.reset_gpio, 1);
+ }
+
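+	/* Read the ID registers directly from the hardware, bypassing the register cache */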
+ regcache_cache_bypass(cs42l73->regmap, true);
+
/* initialize codec */
ret = regmap_read(cs42l73->regmap, CS42L73_DEVID_AB, &reg);
devid = (reg & 0xFF) << 12;
ret = regmap_read(cs42l73->regmap, CS42L73_DEVID_E, &reg);
devid |= (reg & 0xF0) >> 4;
-
if (devid != CS42L73_DEVID) {
ret = -ENODEV;
dev_err(&i2c_client->dev,
dev_info(&i2c_client->dev,
"Cirrus Logic CS42L73, Revision: %02X\n", reg & 0xFF);
- regcache_cache_only(cs42l73->regmap, true);
+ regcache_cache_bypass(cs42l73->regmap, false);
ret = snd_soc_register_codec(&i2c_client->dev,
&soc_codec_dev_cs42l73, cs42l73_dai,
return 0;
}
+static const struct of_device_id cs42l73_of_match[] = {
+ { .compatible = "cirrus,cs42l73", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, cs42l73_of_match);
+
static const struct i2c_device_id cs42l73_id[] = {
{"cs42l73", 0},
{}
.driver = {
.name = "cs42l73",
.owner = THIS_MODULE,
+ .of_match_table = cs42l73_of_match,
},
.id_table = cs42l73_id,
.probe = cs42l73_i2c_probe,
/* Bitfield Definitions */
/* CS42L73_PWRCTL1 */
-#define PDN_ADCB (1 << 7)
-#define PDN_DMICB (1 << 6)
-#define PDN_ADCA (1 << 5)
-#define PDN_DMICA (1 << 4)
-#define PDN_LDO (1 << 2)
-#define DISCHG_FILT (1 << 1)
-#define PDN (1 << 0)
+#define CS42L73_PDN_ADCB (1 << 7)
+#define CS42L73_PDN_DMICB (1 << 6)
+#define CS42L73_PDN_ADCA (1 << 5)
+#define CS42L73_PDN_DMICA (1 << 4)
+#define CS42L73_PDN_LDO (1 << 2)
+#define CS42L73_DISCHG_FILT (1 << 1)
+#define CS42L73_PDN (1 << 0)
/* CS42L73_PWRCTL2 */
-#define PDN_MIC2_BIAS (1 << 7)
-#define PDN_MIC1_BIAS (1 << 6)
-#define PDN_VSP (1 << 4)
-#define PDN_ASP_SDOUT (1 << 3)
-#define PDN_ASP_SDIN (1 << 2)
-#define PDN_XSP_SDOUT (1 << 1)
-#define PDN_XSP_SDIN (1 << 0)
+#define CS42L73_PDN_MIC2_BIAS (1 << 7)
+#define CS42L73_PDN_MIC1_BIAS (1 << 6)
+#define CS42L73_PDN_VSP (1 << 4)
+#define CS42L73_PDN_ASP_SDOUT (1 << 3)
+#define CS42L73_PDN_ASP_SDIN (1 << 2)
+#define CS42L73_PDN_XSP_SDOUT (1 << 1)
+#define CS42L73_PDN_XSP_SDIN (1 << 0)
/* CS42L73_PWRCTL3 */
-#define PDN_THMS (1 << 5)
-#define PDN_SPKLO (1 << 4)
-#define PDN_EAR (1 << 3)
-#define PDN_SPK (1 << 2)
-#define PDN_LO (1 << 1)
-#define PDN_HP (1 << 0)
+#define CS42L73_PDN_THMS (1 << 5)
+#define CS42L73_PDN_SPKLO (1 << 4)
+#define CS42L73_PDN_EAR (1 << 3)
+#define CS42L73_PDN_SPK (1 << 2)
+#define CS42L73_PDN_LO (1 << 1)
+#define CS42L73_PDN_HP (1 << 0)
/* Thermal Overload Detect. Requires interrupt ... */
-#define THMOVLD_150C 0
-#define THMOVLD_132C 1
-#define THMOVLD_115C 2
-#define THMOVLD_098C 3
+#define CS42L73_THMOVLD_150C 0
+#define CS42L73_THMOVLD_132C 1
+#define CS42L73_THMOVLD_115C 2
+#define CS42L73_THMOVLD_098C 3
+#define CS42L73_CHARGEPUMP_MASK (0xF0)
/* CS42L73_ASPC, CS42L73_XSPC, CS42L73_VSPC */
-#define SP_3ST (1 << 7)
-#define SPDIF_I2S (0 << 6)
-#define SPDIF_PCM (1 << 6)
-#define PCM_MODE0 (0 << 4)
-#define PCM_MODE1 (1 << 4)
-#define PCM_MODE2 (2 << 4)
-#define PCM_MODE_MASK (3 << 4)
-#define PCM_BIT_ORDER (1 << 3)
-#define MCK_SCLK_64FS (0 << 0)
-#define MCK_SCLK_MCLK (2 << 0)
-#define MCK_SCLK_PREMCLK (3 << 0)
+#define CS42L73_SP_3ST (1 << 7)
+#define CS42L73_SPDIF_I2S (0 << 6)
+#define CS42L73_SPDIF_PCM (1 << 6)
+#define CS42L73_PCM_MODE0 (0 << 4)
+#define CS42L73_PCM_MODE1 (1 << 4)
+#define CS42L73_PCM_MODE2 (2 << 4)
+#define CS42L73_PCM_MODE_MASK (3 << 4)
+#define CS42L73_PCM_BIT_ORDER (1 << 3)
+#define CS42L73_MCK_SCLK_64FS (0 << 0)
+#define CS42L73_MCK_SCLK_MCLK (2 << 0)
+#define CS42L73_MCK_SCLK_PREMCLK (3 << 0)
/* CS42L73_xSPMMCC */
-#define MS_MASTER (1 << 7)
+#define CS42L73_MS_MASTER (1 << 7)
/* CS42L73_DMMCC */
-#define MCLKDIS (1 << 0)
-#define MCLKSEL_MCLK2 (1 << 4)
-#define MCLKSEL_MCLK1 (0 << 4)
+#define CS42L73_MCLKDIS (1 << 0)
+#define CS42L73_MCLKSEL_MCLK2 (1 << 4)
+#define CS42L73_MCLKSEL_MCLK1 (0 << 4)
/* CS42L73 MCLK derived from MCLK1 or MCLK2 */
#define CS42L73_CLKID_MCLK1 0
#define CS42L73_VSP 2
/* IS1, IM1 */
-#define MIC2_SDET (1 << 6)
-#define THMOVLD (1 << 4)
-#define DIGMIXOVFL (1 << 3)
-#define IPBOVFL (1 << 1)
-#define IPAOVFL (1 << 0)
+#define CS42L73_MIC2_SDET (1 << 6)
+#define CS42L73_THMOVLD (1 << 4)
+#define CS42L73_DIGMIXOVFL (1 << 3)
+#define CS42L73_IPBOVFL (1 << 1)
+#define CS42L73_IPAOVFL (1 << 0)
/* Analog Softramp */
-#define ANLGOSFT (1 << 0)
+#define CS42L73_ANLGOSFT (1 << 0)
/* HP A/B Analog Mute */
-#define HPA_MUTE (1 << 7)
+#define CS42L73_HPA_MUTE (1 << 7)
/* LO A/B Analog Mute */
-#define LOA_MUTE (1 << 7)
+#define CS42L73_LOA_MUTE (1 << 7)
/* Digital Mute */
-#define HLAD_MUTE (1 << 0)
-#define HLBD_MUTE (1 << 1)
-#define SPKD_MUTE (1 << 2)
-#define ESLD_MUTE (1 << 3)
+#define CS42L73_HLAD_MUTE (1 << 0)
+#define CS42L73_HLBD_MUTE (1 << 1)
+#define CS42L73_SPKD_MUTE (1 << 2)
+#define CS42L73_ESLD_MUTE (1 << 3)
/* Misc defines for codec */
-#define CS42L73_RESET_GPIO 143
-
#define CS42L73_DEVID 0x00042A73
#define CS42L73_MCLKX_MIN 5644800
#define CS42L73_MCLKX_MAX 38400000
#include <linux/delay.h>
#include <linux/pm.h>
#include <linux/i2c.h>
+#include <linux/regmap.h>
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/pcm_params.h>
};
struct max98088_priv {
- enum max98088_type devtype;
- struct max98088_pdata *pdata;
- unsigned int sysclk;
- struct max98088_cdata dai[2];
- int eq_textcnt;
- const char **eq_texts;
- struct soc_enum eq_enum;
- u8 ina_state;
- u8 inb_state;
- unsigned int ex_mode;
- unsigned int digmic;
- unsigned int mic1pre;
- unsigned int mic2pre;
- unsigned int extmic_mode;
+ struct regmap *regmap;
+ enum max98088_type devtype;
+ struct max98088_pdata *pdata;
+ unsigned int sysclk;
+ struct max98088_cdata dai[2];
+ int eq_textcnt;
+ const char **eq_texts;
+ struct soc_enum eq_enum;
+ u8 ina_state;
+ u8 inb_state;
+ unsigned int ex_mode;
+ unsigned int digmic;
+ unsigned int mic1pre;
+ unsigned int mic2pre;
+ unsigned int extmic_mode;
};
-static const u8 max98088_reg[M98088_REG_CNT] = {
- 0x00, /* 00 IRQ status */
- 0x00, /* 01 MIC status */
- 0x00, /* 02 jack status */
- 0x00, /* 03 battery voltage */
- 0x00, /* 04 */
- 0x00, /* 05 */
- 0x00, /* 06 */
- 0x00, /* 07 */
- 0x00, /* 08 */
- 0x00, /* 09 */
- 0x00, /* 0A */
- 0x00, /* 0B */
- 0x00, /* 0C */
- 0x00, /* 0D */
- 0x00, /* 0E */
- 0x00, /* 0F interrupt enable */
-
- 0x00, /* 10 master clock */
- 0x00, /* 11 DAI1 clock mode */
- 0x00, /* 12 DAI1 clock control */
- 0x00, /* 13 DAI1 clock control */
- 0x00, /* 14 DAI1 format */
- 0x00, /* 15 DAI1 clock */
- 0x00, /* 16 DAI1 config */
- 0x00, /* 17 DAI1 TDM */
- 0x00, /* 18 DAI1 filters */
- 0x00, /* 19 DAI2 clock mode */
- 0x00, /* 1A DAI2 clock control */
- 0x00, /* 1B DAI2 clock control */
- 0x00, /* 1C DAI2 format */
- 0x00, /* 1D DAI2 clock */
- 0x00, /* 1E DAI2 config */
- 0x00, /* 1F DAI2 TDM */
-
- 0x00, /* 20 DAI2 filters */
- 0x00, /* 21 data config */
- 0x00, /* 22 DAC mixer */
- 0x00, /* 23 left ADC mixer */
- 0x00, /* 24 right ADC mixer */
- 0x00, /* 25 left HP mixer */
- 0x00, /* 26 right HP mixer */
- 0x00, /* 27 HP control */
- 0x00, /* 28 left REC mixer */
- 0x00, /* 29 right REC mixer */
- 0x00, /* 2A REC control */
- 0x00, /* 2B left SPK mixer */
- 0x00, /* 2C right SPK mixer */
- 0x00, /* 2D SPK control */
- 0x00, /* 2E sidetone */
- 0x00, /* 2F DAI1 playback level */
-
- 0x00, /* 30 DAI1 playback level */
- 0x00, /* 31 DAI2 playback level */
- 0x00, /* 32 DAI2 playbakc level */
- 0x00, /* 33 left ADC level */
- 0x00, /* 34 right ADC level */
- 0x00, /* 35 MIC1 level */
- 0x00, /* 36 MIC2 level */
- 0x00, /* 37 INA level */
- 0x00, /* 38 INB level */
- 0x00, /* 39 left HP volume */
- 0x00, /* 3A right HP volume */
- 0x00, /* 3B left REC volume */
- 0x00, /* 3C right REC volume */
- 0x00, /* 3D left SPK volume */
- 0x00, /* 3E right SPK volume */
- 0x00, /* 3F MIC config */
-
- 0x00, /* 40 MIC threshold */
- 0x00, /* 41 excursion limiter filter */
- 0x00, /* 42 excursion limiter threshold */
- 0x00, /* 43 ALC */
- 0x00, /* 44 power limiter threshold */
- 0x00, /* 45 power limiter config */
- 0x00, /* 46 distortion limiter config */
- 0x00, /* 47 audio input */
- 0x00, /* 48 microphone */
- 0x00, /* 49 level control */
- 0x00, /* 4A bypass switches */
- 0x00, /* 4B jack detect */
- 0x00, /* 4C input enable */
- 0x00, /* 4D output enable */
- 0xF0, /* 4E bias control */
- 0x00, /* 4F DAC power */
-
- 0x0F, /* 50 DAC power */
- 0x00, /* 51 system */
- 0x00, /* 52 DAI1 EQ1 */
- 0x00, /* 53 DAI1 EQ1 */
- 0x00, /* 54 DAI1 EQ1 */
- 0x00, /* 55 DAI1 EQ1 */
- 0x00, /* 56 DAI1 EQ1 */
- 0x00, /* 57 DAI1 EQ1 */
- 0x00, /* 58 DAI1 EQ1 */
- 0x00, /* 59 DAI1 EQ1 */
- 0x00, /* 5A DAI1 EQ1 */
- 0x00, /* 5B DAI1 EQ1 */
- 0x00, /* 5C DAI1 EQ2 */
- 0x00, /* 5D DAI1 EQ2 */
- 0x00, /* 5E DAI1 EQ2 */
- 0x00, /* 5F DAI1 EQ2 */
-
- 0x00, /* 60 DAI1 EQ2 */
- 0x00, /* 61 DAI1 EQ2 */
- 0x00, /* 62 DAI1 EQ2 */
- 0x00, /* 63 DAI1 EQ2 */
- 0x00, /* 64 DAI1 EQ2 */
- 0x00, /* 65 DAI1 EQ2 */
- 0x00, /* 66 DAI1 EQ3 */
- 0x00, /* 67 DAI1 EQ3 */
- 0x00, /* 68 DAI1 EQ3 */
- 0x00, /* 69 DAI1 EQ3 */
- 0x00, /* 6A DAI1 EQ3 */
- 0x00, /* 6B DAI1 EQ3 */
- 0x00, /* 6C DAI1 EQ3 */
- 0x00, /* 6D DAI1 EQ3 */
- 0x00, /* 6E DAI1 EQ3 */
- 0x00, /* 6F DAI1 EQ3 */
-
- 0x00, /* 70 DAI1 EQ4 */
- 0x00, /* 71 DAI1 EQ4 */
- 0x00, /* 72 DAI1 EQ4 */
- 0x00, /* 73 DAI1 EQ4 */
- 0x00, /* 74 DAI1 EQ4 */
- 0x00, /* 75 DAI1 EQ4 */
- 0x00, /* 76 DAI1 EQ4 */
- 0x00, /* 77 DAI1 EQ4 */
- 0x00, /* 78 DAI1 EQ4 */
- 0x00, /* 79 DAI1 EQ4 */
- 0x00, /* 7A DAI1 EQ5 */
- 0x00, /* 7B DAI1 EQ5 */
- 0x00, /* 7C DAI1 EQ5 */
- 0x00, /* 7D DAI1 EQ5 */
- 0x00, /* 7E DAI1 EQ5 */
- 0x00, /* 7F DAI1 EQ5 */
-
- 0x00, /* 80 DAI1 EQ5 */
- 0x00, /* 81 DAI1 EQ5 */
- 0x00, /* 82 DAI1 EQ5 */
- 0x00, /* 83 DAI1 EQ5 */
- 0x00, /* 84 DAI2 EQ1 */
- 0x00, /* 85 DAI2 EQ1 */
- 0x00, /* 86 DAI2 EQ1 */
- 0x00, /* 87 DAI2 EQ1 */
- 0x00, /* 88 DAI2 EQ1 */
- 0x00, /* 89 DAI2 EQ1 */
- 0x00, /* 8A DAI2 EQ1 */
- 0x00, /* 8B DAI2 EQ1 */
- 0x00, /* 8C DAI2 EQ1 */
- 0x00, /* 8D DAI2 EQ1 */
- 0x00, /* 8E DAI2 EQ2 */
- 0x00, /* 8F DAI2 EQ2 */
-
- 0x00, /* 90 DAI2 EQ2 */
- 0x00, /* 91 DAI2 EQ2 */
- 0x00, /* 92 DAI2 EQ2 */
- 0x00, /* 93 DAI2 EQ2 */
- 0x00, /* 94 DAI2 EQ2 */
- 0x00, /* 95 DAI2 EQ2 */
- 0x00, /* 96 DAI2 EQ2 */
- 0x00, /* 97 DAI2 EQ2 */
- 0x00, /* 98 DAI2 EQ3 */
- 0x00, /* 99 DAI2 EQ3 */
- 0x00, /* 9A DAI2 EQ3 */
- 0x00, /* 9B DAI2 EQ3 */
- 0x00, /* 9C DAI2 EQ3 */
- 0x00, /* 9D DAI2 EQ3 */
- 0x00, /* 9E DAI2 EQ3 */
- 0x00, /* 9F DAI2 EQ3 */
-
- 0x00, /* A0 DAI2 EQ3 */
- 0x00, /* A1 DAI2 EQ3 */
- 0x00, /* A2 DAI2 EQ4 */
- 0x00, /* A3 DAI2 EQ4 */
- 0x00, /* A4 DAI2 EQ4 */
- 0x00, /* A5 DAI2 EQ4 */
- 0x00, /* A6 DAI2 EQ4 */
- 0x00, /* A7 DAI2 EQ4 */
- 0x00, /* A8 DAI2 EQ4 */
- 0x00, /* A9 DAI2 EQ4 */
- 0x00, /* AA DAI2 EQ4 */
- 0x00, /* AB DAI2 EQ4 */
- 0x00, /* AC DAI2 EQ5 */
- 0x00, /* AD DAI2 EQ5 */
- 0x00, /* AE DAI2 EQ5 */
- 0x00, /* AF DAI2 EQ5 */
-
- 0x00, /* B0 DAI2 EQ5 */
- 0x00, /* B1 DAI2 EQ5 */
- 0x00, /* B2 DAI2 EQ5 */
- 0x00, /* B3 DAI2 EQ5 */
- 0x00, /* B4 DAI2 EQ5 */
- 0x00, /* B5 DAI2 EQ5 */
- 0x00, /* B6 DAI1 biquad */
- 0x00, /* B7 DAI1 biquad */
- 0x00, /* B8 DAI1 biquad */
- 0x00, /* B9 DAI1 biquad */
- 0x00, /* BA DAI1 biquad */
- 0x00, /* BB DAI1 biquad */
- 0x00, /* BC DAI1 biquad */
- 0x00, /* BD DAI1 biquad */
- 0x00, /* BE DAI1 biquad */
- 0x00, /* BF DAI1 biquad */
-
- 0x00, /* C0 DAI2 biquad */
- 0x00, /* C1 DAI2 biquad */
- 0x00, /* C2 DAI2 biquad */
- 0x00, /* C3 DAI2 biquad */
- 0x00, /* C4 DAI2 biquad */
- 0x00, /* C5 DAI2 biquad */
- 0x00, /* C6 DAI2 biquad */
- 0x00, /* C7 DAI2 biquad */
- 0x00, /* C8 DAI2 biquad */
- 0x00, /* C9 DAI2 biquad */
- 0x00, /* CA */
- 0x00, /* CB */
- 0x00, /* CC */
- 0x00, /* CD */
- 0x00, /* CE */
- 0x00, /* CF */
-
- 0x00, /* D0 */
- 0x00, /* D1 */
- 0x00, /* D2 */
- 0x00, /* D3 */
- 0x00, /* D4 */
- 0x00, /* D5 */
- 0x00, /* D6 */
- 0x00, /* D7 */
- 0x00, /* D8 */
- 0x00, /* D9 */
- 0x00, /* DA */
- 0x70, /* DB */
- 0x00, /* DC */
- 0x00, /* DD */
- 0x00, /* DE */
- 0x00, /* DF */
-
- 0x00, /* E0 */
- 0x00, /* E1 */
- 0x00, /* E2 */
- 0x00, /* E3 */
- 0x00, /* E4 */
- 0x00, /* E5 */
- 0x00, /* E6 */
- 0x00, /* E7 */
- 0x00, /* E8 */
- 0x00, /* E9 */
- 0x00, /* EA */
- 0x00, /* EB */
- 0x00, /* EC */
- 0x00, /* ED */
- 0x00, /* EE */
- 0x00, /* EF */
-
- 0x00, /* F0 */
- 0x00, /* F1 */
- 0x00, /* F2 */
- 0x00, /* F3 */
- 0x00, /* F4 */
- 0x00, /* F5 */
- 0x00, /* F6 */
- 0x00, /* F7 */
- 0x00, /* F8 */
- 0x00, /* F9 */
- 0x00, /* FA */
- 0x00, /* FB */
- 0x00, /* FC */
- 0x00, /* FD */
- 0x00, /* FE */
- 0x00, /* FF */
+static const struct reg_default max98088_reg[] = {
+ { 0xf, 0x00 }, /* 0F interrupt enable */
+
+ { 0x10, 0x00 }, /* 10 master clock */
+ { 0x11, 0x00 }, /* 11 DAI1 clock mode */
+ { 0x12, 0x00 }, /* 12 DAI1 clock control */
+ { 0x13, 0x00 }, /* 13 DAI1 clock control */
+ { 0x14, 0x00 }, /* 14 DAI1 format */
+ { 0x15, 0x00 }, /* 15 DAI1 clock */
+ { 0x16, 0x00 }, /* 16 DAI1 config */
+ { 0x17, 0x00 }, /* 17 DAI1 TDM */
+ { 0x18, 0x00 }, /* 18 DAI1 filters */
+ { 0x19, 0x00 }, /* 19 DAI2 clock mode */
+ { 0x1a, 0x00 }, /* 1A DAI2 clock control */
+ { 0x1b, 0x00 }, /* 1B DAI2 clock control */
+ { 0x1c, 0x00 }, /* 1C DAI2 format */
+ { 0x1d, 0x00 }, /* 1D DAI2 clock */
+ { 0x1e, 0x00 }, /* 1E DAI2 config */
+ { 0x1f, 0x00 }, /* 1F DAI2 TDM */
+
+ { 0x20, 0x00 }, /* 20 DAI2 filters */
+ { 0x21, 0x00 }, /* 21 data config */
+ { 0x22, 0x00 }, /* 22 DAC mixer */
+ { 0x23, 0x00 }, /* 23 left ADC mixer */
+ { 0x24, 0x00 }, /* 24 right ADC mixer */
+ { 0x25, 0x00 }, /* 25 left HP mixer */
+ { 0x26, 0x00 }, /* 26 right HP mixer */
+ { 0x27, 0x00 }, /* 27 HP control */
+ { 0x28, 0x00 }, /* 28 left REC mixer */
+ { 0x29, 0x00 }, /* 29 right REC mixer */
+ { 0x2a, 0x00 }, /* 2A REC control */
+ { 0x2b, 0x00 }, /* 2B left SPK mixer */
+ { 0x2c, 0x00 }, /* 2C right SPK mixer */
+ { 0x2d, 0x00 }, /* 2D SPK control */
+ { 0x2e, 0x00 }, /* 2E sidetone */
+ { 0x2f, 0x00 }, /* 2F DAI1 playback level */
+
+ { 0x30, 0x00 }, /* 30 DAI1 playback level */
+ { 0x31, 0x00 }, /* 31 DAI2 playback level */
+	{ 0x32, 0x00 }, /* 32 DAI2 playback level */
+ { 0x33, 0x00 }, /* 33 left ADC level */
+ { 0x34, 0x00 }, /* 34 right ADC level */
+ { 0x35, 0x00 }, /* 35 MIC1 level */
+ { 0x36, 0x00 }, /* 36 MIC2 level */
+ { 0x37, 0x00 }, /* 37 INA level */
+ { 0x38, 0x00 }, /* 38 INB level */
+ { 0x39, 0x00 }, /* 39 left HP volume */
+ { 0x3a, 0x00 }, /* 3A right HP volume */
+ { 0x3b, 0x00 }, /* 3B left REC volume */
+ { 0x3c, 0x00 }, /* 3C right REC volume */
+ { 0x3d, 0x00 }, /* 3D left SPK volume */
+ { 0x3e, 0x00 }, /* 3E right SPK volume */
+ { 0x3f, 0x00 }, /* 3F MIC config */
+
+ { 0x40, 0x00 }, /* 40 MIC threshold */
+ { 0x41, 0x00 }, /* 41 excursion limiter filter */
+ { 0x42, 0x00 }, /* 42 excursion limiter threshold */
+ { 0x43, 0x00 }, /* 43 ALC */
+ { 0x44, 0x00 }, /* 44 power limiter threshold */
+ { 0x45, 0x00 }, /* 45 power limiter config */
+ { 0x46, 0x00 }, /* 46 distortion limiter config */
+ { 0x47, 0x00 }, /* 47 audio input */
+ { 0x48, 0x00 }, /* 48 microphone */
+ { 0x49, 0x00 }, /* 49 level control */
+ { 0x4a, 0x00 }, /* 4A bypass switches */
+ { 0x4b, 0x00 }, /* 4B jack detect */
+ { 0x4c, 0x00 }, /* 4C input enable */
+ { 0x4d, 0x00 }, /* 4D output enable */
+ { 0x4e, 0xF0 }, /* 4E bias control */
+ { 0x4f, 0x00 }, /* 4F DAC power */
+
+ { 0x50, 0x0F }, /* 50 DAC power */
+ { 0x51, 0x00 }, /* 51 system */
+ { 0x52, 0x00 }, /* 52 DAI1 EQ1 */
+ { 0x53, 0x00 }, /* 53 DAI1 EQ1 */
+ { 0x54, 0x00 }, /* 54 DAI1 EQ1 */
+ { 0x55, 0x00 }, /* 55 DAI1 EQ1 */
+ { 0x56, 0x00 }, /* 56 DAI1 EQ1 */
+ { 0x57, 0x00 }, /* 57 DAI1 EQ1 */
+ { 0x58, 0x00 }, /* 58 DAI1 EQ1 */
+ { 0x59, 0x00 }, /* 59 DAI1 EQ1 */
+ { 0x5a, 0x00 }, /* 5A DAI1 EQ1 */
+ { 0x5b, 0x00 }, /* 5B DAI1 EQ1 */
+ { 0x5c, 0x00 }, /* 5C DAI1 EQ2 */
+ { 0x5d, 0x00 }, /* 5D DAI1 EQ2 */
+ { 0x5e, 0x00 }, /* 5E DAI1 EQ2 */
+ { 0x5f, 0x00 }, /* 5F DAI1 EQ2 */
+
+ { 0x60, 0x00 }, /* 60 DAI1 EQ2 */
+ { 0x61, 0x00 }, /* 61 DAI1 EQ2 */
+ { 0x62, 0x00 }, /* 62 DAI1 EQ2 */
+ { 0x63, 0x00 }, /* 63 DAI1 EQ2 */
+ { 0x64, 0x00 }, /* 64 DAI1 EQ2 */
+ { 0x65, 0x00 }, /* 65 DAI1 EQ2 */
+ { 0x66, 0x00 }, /* 66 DAI1 EQ3 */
+ { 0x67, 0x00 }, /* 67 DAI1 EQ3 */
+ { 0x68, 0x00 }, /* 68 DAI1 EQ3 */
+ { 0x69, 0x00 }, /* 69 DAI1 EQ3 */
+ { 0x6a, 0x00 }, /* 6A DAI1 EQ3 */
+ { 0x6b, 0x00 }, /* 6B DAI1 EQ3 */
+ { 0x6c, 0x00 }, /* 6C DAI1 EQ3 */
+ { 0x6d, 0x00 }, /* 6D DAI1 EQ3 */
+ { 0x6e, 0x00 }, /* 6E DAI1 EQ3 */
+ { 0x6f, 0x00 }, /* 6F DAI1 EQ3 */
+
+ { 0x70, 0x00 }, /* 70 DAI1 EQ4 */
+ { 0x71, 0x00 }, /* 71 DAI1 EQ4 */
+ { 0x72, 0x00 }, /* 72 DAI1 EQ4 */
+ { 0x73, 0x00 }, /* 73 DAI1 EQ4 */
+ { 0x74, 0x00 }, /* 74 DAI1 EQ4 */
+ { 0x75, 0x00 }, /* 75 DAI1 EQ4 */
+ { 0x76, 0x00 }, /* 76 DAI1 EQ4 */
+ { 0x77, 0x00 }, /* 77 DAI1 EQ4 */
+ { 0x78, 0x00 }, /* 78 DAI1 EQ4 */
+ { 0x79, 0x00 }, /* 79 DAI1 EQ4 */
+ { 0x7a, 0x00 }, /* 7A DAI1 EQ5 */
+ { 0x7b, 0x00 }, /* 7B DAI1 EQ5 */
+ { 0x7c, 0x00 }, /* 7C DAI1 EQ5 */
+ { 0x7d, 0x00 }, /* 7D DAI1 EQ5 */
+ { 0x7e, 0x00 }, /* 7E DAI1 EQ5 */
+ { 0x7f, 0x00 }, /* 7F DAI1 EQ5 */
+
+ { 0x80, 0x00 }, /* 80 DAI1 EQ5 */
+ { 0x81, 0x00 }, /* 81 DAI1 EQ5 */
+ { 0x82, 0x00 }, /* 82 DAI1 EQ5 */
+ { 0x83, 0x00 }, /* 83 DAI1 EQ5 */
+ { 0x84, 0x00 }, /* 84 DAI2 EQ1 */
+ { 0x85, 0x00 }, /* 85 DAI2 EQ1 */
+ { 0x86, 0x00 }, /* 86 DAI2 EQ1 */
+ { 0x87, 0x00 }, /* 87 DAI2 EQ1 */
+ { 0x88, 0x00 }, /* 88 DAI2 EQ1 */
+ { 0x89, 0x00 }, /* 89 DAI2 EQ1 */
+ { 0x8a, 0x00 }, /* 8A DAI2 EQ1 */
+ { 0x8b, 0x00 }, /* 8B DAI2 EQ1 */
+ { 0x8c, 0x00 }, /* 8C DAI2 EQ1 */
+ { 0x8d, 0x00 }, /* 8D DAI2 EQ1 */
+ { 0x8e, 0x00 }, /* 8E DAI2 EQ2 */
+ { 0x8f, 0x00 }, /* 8F DAI2 EQ2 */
+
+ { 0x90, 0x00 }, /* 90 DAI2 EQ2 */
+ { 0x91, 0x00 }, /* 91 DAI2 EQ2 */
+ { 0x92, 0x00 }, /* 92 DAI2 EQ2 */
+ { 0x93, 0x00 }, /* 93 DAI2 EQ2 */
+ { 0x94, 0x00 }, /* 94 DAI2 EQ2 */
+ { 0x95, 0x00 }, /* 95 DAI2 EQ2 */
+ { 0x96, 0x00 }, /* 96 DAI2 EQ2 */
+ { 0x97, 0x00 }, /* 97 DAI2 EQ2 */
+ { 0x98, 0x00 }, /* 98 DAI2 EQ3 */
+ { 0x99, 0x00 }, /* 99 DAI2 EQ3 */
+ { 0x9a, 0x00 }, /* 9A DAI2 EQ3 */
+ { 0x9b, 0x00 }, /* 9B DAI2 EQ3 */
+ { 0x9c, 0x00 }, /* 9C DAI2 EQ3 */
+ { 0x9d, 0x00 }, /* 9D DAI2 EQ3 */
+ { 0x9e, 0x00 }, /* 9E DAI2 EQ3 */
+ { 0x9f, 0x00 }, /* 9F DAI2 EQ3 */
+
+ { 0xa0, 0x00 }, /* A0 DAI2 EQ3 */
+ { 0xa1, 0x00 }, /* A1 DAI2 EQ3 */
+ { 0xa2, 0x00 }, /* A2 DAI2 EQ4 */
+ { 0xa3, 0x00 }, /* A3 DAI2 EQ4 */
+ { 0xa4, 0x00 }, /* A4 DAI2 EQ4 */
+ { 0xa5, 0x00 }, /* A5 DAI2 EQ4 */
+ { 0xa6, 0x00 }, /* A6 DAI2 EQ4 */
+ { 0xa7, 0x00 }, /* A7 DAI2 EQ4 */
+ { 0xa8, 0x00 }, /* A8 DAI2 EQ4 */
+ { 0xa9, 0x00 }, /* A9 DAI2 EQ4 */
+ { 0xaa, 0x00 }, /* AA DAI2 EQ4 */
+ { 0xab, 0x00 }, /* AB DAI2 EQ4 */
+ { 0xac, 0x00 }, /* AC DAI2 EQ5 */
+ { 0xad, 0x00 }, /* AD DAI2 EQ5 */
+ { 0xae, 0x00 }, /* AE DAI2 EQ5 */
+ { 0xaf, 0x00 }, /* AF DAI2 EQ5 */
+
+ { 0xb0, 0x00 }, /* B0 DAI2 EQ5 */
+ { 0xb1, 0x00 }, /* B1 DAI2 EQ5 */
+ { 0xb2, 0x00 }, /* B2 DAI2 EQ5 */
+ { 0xb3, 0x00 }, /* B3 DAI2 EQ5 */
+ { 0xb4, 0x00 }, /* B4 DAI2 EQ5 */
+ { 0xb5, 0x00 }, /* B5 DAI2 EQ5 */
+ { 0xb6, 0x00 }, /* B6 DAI1 biquad */
+ { 0xb7, 0x00 }, /* B7 DAI1 biquad */
+	{ 0xb8, 0x00 }, /* B8 DAI1 biquad */
+ { 0xb9, 0x00 }, /* B9 DAI1 biquad */
+ { 0xba, 0x00 }, /* BA DAI1 biquad */
+ { 0xbb, 0x00 }, /* BB DAI1 biquad */
+ { 0xbc, 0x00 }, /* BC DAI1 biquad */
+ { 0xbd, 0x00 }, /* BD DAI1 biquad */
+ { 0xbe, 0x00 }, /* BE DAI1 biquad */
+ { 0xbf, 0x00 }, /* BF DAI1 biquad */
+
+ { 0xc0, 0x00 }, /* C0 DAI2 biquad */
+ { 0xc1, 0x00 }, /* C1 DAI2 biquad */
+ { 0xc2, 0x00 }, /* C2 DAI2 biquad */
+ { 0xc3, 0x00 }, /* C3 DAI2 biquad */
+ { 0xc4, 0x00 }, /* C4 DAI2 biquad */
+ { 0xc5, 0x00 }, /* C5 DAI2 biquad */
+ { 0xc6, 0x00 }, /* C6 DAI2 biquad */
+ { 0xc7, 0x00 }, /* C7 DAI2 biquad */
+ { 0xc8, 0x00 }, /* C8 DAI2 biquad */
+ { 0xc9, 0x00 }, /* C9 DAI2 biquad */
};
static struct {
{ 0xFF, 0x00, 1 }, /* FF */
};
-static int max98088_volatile_register(struct snd_soc_codec *codec, unsigned int reg)
+static bool max98088_readable_register(struct device *dev, unsigned int reg)
+{
+ return max98088_access[reg].readable;
+}
+
+static bool max98088_volatile_register(struct device *dev, unsigned int reg)
{
return max98088_access[reg].vol;
}
+static const struct regmap_config max98088_regmap = {
+ .reg_bits = 8,
+ .val_bits = 8,
+
+ .readable_reg = max98088_readable_register,
+ .volatile_reg = max98088_volatile_register,
+ .max_register = 0xff,
+
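+	/* Defaults seed the register cache; regcache_sync() only writes back values that differ */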
+ .reg_defaults = max98088_reg,
+ .num_reg_defaults = ARRAY_SIZE(max98088_reg),
+ .cache_type = REGCACHE_RBTREE,
+};
/*
* Load equalizer DSP coefficient configurations registers
return 0;
}
-static void max98088_sync_cache(struct snd_soc_codec *codec)
-{
- u8 *reg_cache = codec->reg_cache;
- int i;
-
- if (!codec->cache_sync)
- return;
-
- codec->cache_only = 0;
-
- /* write back cached values if they're writeable and
- * different from the hardware default.
- */
- for (i = 1; i < codec->driver->reg_cache_size; i++) {
- if (!max98088_access[i].writable)
- continue;
-
- if (reg_cache[i] == max98088_reg[i])
- continue;
-
- snd_soc_write(codec, i, reg_cache[i]);
- }
-
- codec->cache_sync = 0;
-}
-
static int max98088_set_bias_level(struct snd_soc_codec *codec,
enum snd_soc_bias_level level)
{
- switch (level) {
- case SND_SOC_BIAS_ON:
- break;
-
- case SND_SOC_BIAS_PREPARE:
- break;
-
- case SND_SOC_BIAS_STANDBY:
- if (codec->dapm.bias_level == SND_SOC_BIAS_OFF)
- max98088_sync_cache(codec);
-
- snd_soc_update_bits(codec, M98088_REG_4C_PWR_EN_IN,
- M98088_MBEN, M98088_MBEN);
- break;
-
- case SND_SOC_BIAS_OFF:
- snd_soc_update_bits(codec, M98088_REG_4C_PWR_EN_IN,
- M98088_MBEN, 0);
- codec->cache_sync = 1;
- break;
- }
- codec->dapm.bias_level = level;
- return 0;
+ struct max98088_priv *max98088 = snd_soc_codec_get_drvdata(codec);
+
+ switch (level) {
+ case SND_SOC_BIAS_ON:
+ break;
+
+ case SND_SOC_BIAS_PREPARE:
+ break;
+
+ case SND_SOC_BIAS_STANDBY:
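+		/* Leaving BIAS_OFF: restore the cached register contents to the device */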
+ if (codec->dapm.bias_level == SND_SOC_BIAS_OFF)
+ regcache_sync(max98088->regmap);
+
+ snd_soc_update_bits(codec, M98088_REG_4C_PWR_EN_IN,
+ M98088_MBEN, M98088_MBEN);
+ break;
+
+ case SND_SOC_BIAS_OFF:
+ snd_soc_update_bits(codec, M98088_REG_4C_PWR_EN_IN,
+ M98088_MBEN, 0);
+ regcache_mark_dirty(max98088->regmap);
+ break;
+ }
+ codec->dapm.bias_level = level;
+ return 0;
}
#define MAX98088_RATES SNDRV_PCM_RATE_8000_96000
struct max98088_cdata *cdata;
int ret = 0;
- codec->cache_sync = 1;
+ regcache_mark_dirty(max98088->regmap);
- ret = snd_soc_codec_set_cache_io(codec, 8, 8, SND_SOC_I2C);
+ ret = snd_soc_codec_set_cache_io(codec, 8, 8, SND_SOC_REGMAP);
if (ret != 0) {
dev_err(codec->dev, "Failed to set cache I/O: %d\n", ret);
return ret;
max98088_handle_pdata(codec);
- snd_soc_add_codec_controls(codec, max98088_snd_controls,
- ARRAY_SIZE(max98088_snd_controls));
-
err_access:
return ret;
}
}
static struct snd_soc_codec_driver soc_codec_dev_max98088 = {
- .probe = max98088_probe,
- .remove = max98088_remove,
- .suspend = max98088_suspend,
- .resume = max98088_resume,
- .set_bias_level = max98088_set_bias_level,
- .reg_cache_size = ARRAY_SIZE(max98088_reg),
- .reg_word_size = sizeof(u8),
- .reg_cache_default = max98088_reg,
- .volatile_register = max98088_volatile_register,
+ .probe = max98088_probe,
+ .remove = max98088_remove,
+ .suspend = max98088_suspend,
+ .resume = max98088_resume,
+ .set_bias_level = max98088_set_bias_level,
+ .controls = max98088_snd_controls,
+ .num_controls = ARRAY_SIZE(max98088_snd_controls),
.dapm_widgets = max98088_dapm_widgets,
.num_dapm_widgets = ARRAY_SIZE(max98088_dapm_widgets),
.dapm_routes = max98088_audio_map,
};
static int max98088_i2c_probe(struct i2c_client *i2c,
- const struct i2c_device_id *id)
+ const struct i2c_device_id *id)
{
struct max98088_priv *max98088;
int ret;
if (max98088 == NULL)
return -ENOMEM;
+ max98088->regmap = devm_regmap_init_i2c(i2c, &max98088_regmap);
+ if (IS_ERR(max98088->regmap))
+ return PTR_ERR(max98088->regmap);
+
max98088->devtype = id->driver_data;
i2c_set_clientdata(i2c, max98088);
};
struct max98095_priv {
+ struct regmap *regmap;
enum max98095_type devtype;
struct max98095_pdata *pdata;
unsigned int sysclk;
struct snd_soc_jack *mic_jack;
};
-static const u8 max98095_reg_def[M98095_REG_CNT] = {
- 0x00, /* 00 */
- 0x00, /* 01 */
- 0x00, /* 02 */
- 0x00, /* 03 */
- 0x00, /* 04 */
- 0x00, /* 05 */
- 0x00, /* 06 */
- 0x00, /* 07 */
- 0x00, /* 08 */
- 0x00, /* 09 */
- 0x00, /* 0A */
- 0x00, /* 0B */
- 0x00, /* 0C */
- 0x00, /* 0D */
- 0x00, /* 0E */
- 0x00, /* 0F */
- 0x00, /* 10 */
- 0x00, /* 11 */
- 0x00, /* 12 */
- 0x00, /* 13 */
- 0x00, /* 14 */
- 0x00, /* 15 */
- 0x00, /* 16 */
- 0x00, /* 17 */
- 0x00, /* 18 */
- 0x00, /* 19 */
- 0x00, /* 1A */
- 0x00, /* 1B */
- 0x00, /* 1C */
- 0x00, /* 1D */
- 0x00, /* 1E */
- 0x00, /* 1F */
- 0x00, /* 20 */
- 0x00, /* 21 */
- 0x00, /* 22 */
- 0x00, /* 23 */
- 0x00, /* 24 */
- 0x00, /* 25 */
- 0x00, /* 26 */
- 0x00, /* 27 */
- 0x00, /* 28 */
- 0x00, /* 29 */
- 0x00, /* 2A */
- 0x00, /* 2B */
- 0x00, /* 2C */
- 0x00, /* 2D */
- 0x00, /* 2E */
- 0x00, /* 2F */
- 0x00, /* 30 */
- 0x00, /* 31 */
- 0x00, /* 32 */
- 0x00, /* 33 */
- 0x00, /* 34 */
- 0x00, /* 35 */
- 0x00, /* 36 */
- 0x00, /* 37 */
- 0x00, /* 38 */
- 0x00, /* 39 */
- 0x00, /* 3A */
- 0x00, /* 3B */
- 0x00, /* 3C */
- 0x00, /* 3D */
- 0x00, /* 3E */
- 0x00, /* 3F */
- 0x00, /* 40 */
- 0x00, /* 41 */
- 0x00, /* 42 */
- 0x00, /* 43 */
- 0x00, /* 44 */
- 0x00, /* 45 */
- 0x00, /* 46 */
- 0x00, /* 47 */
- 0x00, /* 48 */
- 0x00, /* 49 */
- 0x00, /* 4A */
- 0x00, /* 4B */
- 0x00, /* 4C */
- 0x00, /* 4D */
- 0x00, /* 4E */
- 0x00, /* 4F */
- 0x00, /* 50 */
- 0x00, /* 51 */
- 0x00, /* 52 */
- 0x00, /* 53 */
- 0x00, /* 54 */
- 0x00, /* 55 */
- 0x00, /* 56 */
- 0x00, /* 57 */
- 0x00, /* 58 */
- 0x00, /* 59 */
- 0x00, /* 5A */
- 0x00, /* 5B */
- 0x00, /* 5C */
- 0x00, /* 5D */
- 0x00, /* 5E */
- 0x00, /* 5F */
- 0x00, /* 60 */
- 0x00, /* 61 */
- 0x00, /* 62 */
- 0x00, /* 63 */
- 0x00, /* 64 */
- 0x00, /* 65 */
- 0x00, /* 66 */
- 0x00, /* 67 */
- 0x00, /* 68 */
- 0x00, /* 69 */
- 0x00, /* 6A */
- 0x00, /* 6B */
- 0x00, /* 6C */
- 0x00, /* 6D */
- 0x00, /* 6E */
- 0x00, /* 6F */
- 0x00, /* 70 */
- 0x00, /* 71 */
- 0x00, /* 72 */
- 0x00, /* 73 */
- 0x00, /* 74 */
- 0x00, /* 75 */
- 0x00, /* 76 */
- 0x00, /* 77 */
- 0x00, /* 78 */
- 0x00, /* 79 */
- 0x00, /* 7A */
- 0x00, /* 7B */
- 0x00, /* 7C */
- 0x00, /* 7D */
- 0x00, /* 7E */
- 0x00, /* 7F */
- 0x00, /* 80 */
- 0x00, /* 81 */
- 0x00, /* 82 */
- 0x00, /* 83 */
- 0x00, /* 84 */
- 0x00, /* 85 */
- 0x00, /* 86 */
- 0x00, /* 87 */
- 0x00, /* 88 */
- 0x00, /* 89 */
- 0x00, /* 8A */
- 0x00, /* 8B */
- 0x00, /* 8C */
- 0x00, /* 8D */
- 0x00, /* 8E */
- 0x00, /* 8F */
- 0x00, /* 90 */
- 0x00, /* 91 */
- 0x30, /* 92 */
- 0xF0, /* 93 */
- 0x00, /* 94 */
- 0x00, /* 95 */
- 0x3F, /* 96 */
- 0x00, /* 97 */
- 0x00, /* 98 */
- 0x00, /* 99 */
- 0x00, /* 9A */
- 0x00, /* 9B */
- 0x00, /* 9C */
- 0x00, /* 9D */
- 0x00, /* 9E */
- 0x00, /* 9F */
- 0x00, /* A0 */
- 0x00, /* A1 */
- 0x00, /* A2 */
- 0x00, /* A3 */
- 0x00, /* A4 */
- 0x00, /* A5 */
- 0x00, /* A6 */
- 0x00, /* A7 */
- 0x00, /* A8 */
- 0x00, /* A9 */
- 0x00, /* AA */
- 0x00, /* AB */
- 0x00, /* AC */
- 0x00, /* AD */
- 0x00, /* AE */
- 0x00, /* AF */
- 0x00, /* B0 */
- 0x00, /* B1 */
- 0x00, /* B2 */
- 0x00, /* B3 */
- 0x00, /* B4 */
- 0x00, /* B5 */
- 0x00, /* B6 */
- 0x00, /* B7 */
- 0x00, /* B8 */
- 0x00, /* B9 */
- 0x00, /* BA */
- 0x00, /* BB */
- 0x00, /* BC */
- 0x00, /* BD */
- 0x00, /* BE */
- 0x00, /* BF */
- 0x00, /* C0 */
- 0x00, /* C1 */
- 0x00, /* C2 */
- 0x00, /* C3 */
- 0x00, /* C4 */
- 0x00, /* C5 */
- 0x00, /* C6 */
- 0x00, /* C7 */
- 0x00, /* C8 */
- 0x00, /* C9 */
- 0x00, /* CA */
- 0x00, /* CB */
- 0x00, /* CC */
- 0x00, /* CD */
- 0x00, /* CE */
- 0x00, /* CF */
- 0x00, /* D0 */
- 0x00, /* D1 */
- 0x00, /* D2 */
- 0x00, /* D3 */
- 0x00, /* D4 */
- 0x00, /* D5 */
- 0x00, /* D6 */
- 0x00, /* D7 */
- 0x00, /* D8 */
- 0x00, /* D9 */
- 0x00, /* DA */
- 0x00, /* DB */
- 0x00, /* DC */
- 0x00, /* DD */
- 0x00, /* DE */
- 0x00, /* DF */
- 0x00, /* E0 */
- 0x00, /* E1 */
- 0x00, /* E2 */
- 0x00, /* E3 */
- 0x00, /* E4 */
- 0x00, /* E5 */
- 0x00, /* E6 */
- 0x00, /* E7 */
- 0x00, /* E8 */
- 0x00, /* E9 */
- 0x00, /* EA */
- 0x00, /* EB */
- 0x00, /* EC */
- 0x00, /* ED */
- 0x00, /* EE */
- 0x00, /* EF */
- 0x00, /* F0 */
- 0x00, /* F1 */
- 0x00, /* F2 */
- 0x00, /* F3 */
- 0x00, /* F4 */
- 0x00, /* F5 */
- 0x00, /* F6 */
- 0x00, /* F7 */
- 0x00, /* F8 */
- 0x00, /* F9 */
- 0x00, /* FA */
- 0x00, /* FB */
- 0x00, /* FC */
- 0x00, /* FD */
- 0x00, /* FE */
- 0x00, /* FF */
+static const struct reg_default max98095_reg_def[] = {
+ { 0xf, 0x00 }, /* 0F */
+ { 0x10, 0x00 }, /* 10 */
+ { 0x11, 0x00 }, /* 11 */
+ { 0x12, 0x00 }, /* 12 */
+ { 0x13, 0x00 }, /* 13 */
+ { 0x14, 0x00 }, /* 14 */
+ { 0x15, 0x00 }, /* 15 */
+ { 0x16, 0x00 }, /* 16 */
+ { 0x17, 0x00 }, /* 17 */
+ { 0x18, 0x00 }, /* 18 */
+ { 0x19, 0x00 }, /* 19 */
+ { 0x1a, 0x00 }, /* 1A */
+ { 0x1b, 0x00 }, /* 1B */
+ { 0x1c, 0x00 }, /* 1C */
+ { 0x1d, 0x00 }, /* 1D */
+ { 0x1e, 0x00 }, /* 1E */
+ { 0x1f, 0x00 }, /* 1F */
+ { 0x20, 0x00 }, /* 20 */
+ { 0x21, 0x00 }, /* 21 */
+ { 0x22, 0x00 }, /* 22 */
+ { 0x23, 0x00 }, /* 23 */
+ { 0x24, 0x00 }, /* 24 */
+ { 0x25, 0x00 }, /* 25 */
+ { 0x26, 0x00 }, /* 26 */
+ { 0x27, 0x00 }, /* 27 */
+ { 0x28, 0x00 }, /* 28 */
+ { 0x29, 0x00 }, /* 29 */
+ { 0x2a, 0x00 }, /* 2A */
+ { 0x2b, 0x00 }, /* 2B */
+ { 0x2c, 0x00 }, /* 2C */
+ { 0x2d, 0x00 }, /* 2D */
+ { 0x2e, 0x00 }, /* 2E */
+ { 0x2f, 0x00 }, /* 2F */
+ { 0x30, 0x00 }, /* 30 */
+ { 0x31, 0x00 }, /* 31 */
+ { 0x32, 0x00 }, /* 32 */
+ { 0x33, 0x00 }, /* 33 */
+ { 0x34, 0x00 }, /* 34 */
+ { 0x35, 0x00 }, /* 35 */
+ { 0x36, 0x00 }, /* 36 */
+ { 0x37, 0x00 }, /* 37 */
+ { 0x38, 0x00 }, /* 38 */
+ { 0x39, 0x00 }, /* 39 */
+ { 0x3a, 0x00 }, /* 3A */
+ { 0x3b, 0x00 }, /* 3B */
+ { 0x3c, 0x00 }, /* 3C */
+ { 0x3d, 0x00 }, /* 3D */
+ { 0x3e, 0x00 }, /* 3E */
+ { 0x3f, 0x00 }, /* 3F */
+ { 0x40, 0x00 }, /* 40 */
+ { 0x41, 0x00 }, /* 41 */
+ { 0x42, 0x00 }, /* 42 */
+ { 0x43, 0x00 }, /* 43 */
+ { 0x44, 0x00 }, /* 44 */
+ { 0x45, 0x00 }, /* 45 */
+ { 0x46, 0x00 }, /* 46 */
+ { 0x47, 0x00 }, /* 47 */
+ { 0x48, 0x00 }, /* 48 */
+ { 0x49, 0x00 }, /* 49 */
+ { 0x4a, 0x00 }, /* 4A */
+ { 0x4b, 0x00 }, /* 4B */
+ { 0x4c, 0x00 }, /* 4C */
+ { 0x4d, 0x00 }, /* 4D */
+ { 0x4e, 0x00 }, /* 4E */
+ { 0x4f, 0x00 }, /* 4F */
+ { 0x50, 0x00 }, /* 50 */
+ { 0x51, 0x00 }, /* 51 */
+ { 0x52, 0x00 }, /* 52 */
+ { 0x53, 0x00 }, /* 53 */
+ { 0x54, 0x00 }, /* 54 */
+ { 0x55, 0x00 }, /* 55 */
+ { 0x56, 0x00 }, /* 56 */
+ { 0x57, 0x00 }, /* 57 */
+ { 0x58, 0x00 }, /* 58 */
+ { 0x59, 0x00 }, /* 59 */
+ { 0x5a, 0x00 }, /* 5A */
+ { 0x5b, 0x00 }, /* 5B */
+ { 0x5c, 0x00 }, /* 5C */
+ { 0x5d, 0x00 }, /* 5D */
+ { 0x5e, 0x00 }, /* 5E */
+ { 0x5f, 0x00 }, /* 5F */
+ { 0x60, 0x00 }, /* 60 */
+ { 0x61, 0x00 }, /* 61 */
+ { 0x62, 0x00 }, /* 62 */
+ { 0x63, 0x00 }, /* 63 */
+ { 0x64, 0x00 }, /* 64 */
+ { 0x65, 0x00 }, /* 65 */
+ { 0x66, 0x00 }, /* 66 */
+ { 0x67, 0x00 }, /* 67 */
+ { 0x68, 0x00 }, /* 68 */
+ { 0x69, 0x00 }, /* 69 */
+ { 0x6a, 0x00 }, /* 6A */
+ { 0x6b, 0x00 }, /* 6B */
+ { 0x6c, 0x00 }, /* 6C */
+ { 0x6d, 0x00 }, /* 6D */
+ { 0x6e, 0x00 }, /* 6E */
+ { 0x6f, 0x00 }, /* 6F */
+ { 0x70, 0x00 }, /* 70 */
+ { 0x71, 0x00 }, /* 71 */
+ { 0x72, 0x00 }, /* 72 */
+ { 0x73, 0x00 }, /* 73 */
+ { 0x74, 0x00 }, /* 74 */
+ { 0x75, 0x00 }, /* 75 */
+ { 0x76, 0x00 }, /* 76 */
+ { 0x77, 0x00 }, /* 77 */
+ { 0x78, 0x00 }, /* 78 */
+ { 0x79, 0x00 }, /* 79 */
+ { 0x7a, 0x00 }, /* 7A */
+ { 0x7b, 0x00 }, /* 7B */
+ { 0x7c, 0x00 }, /* 7C */
+ { 0x7d, 0x00 }, /* 7D */
+ { 0x7e, 0x00 }, /* 7E */
+ { 0x7f, 0x00 }, /* 7F */
+ { 0x80, 0x00 }, /* 80 */
+ { 0x81, 0x00 }, /* 81 */
+ { 0x82, 0x00 }, /* 82 */
+ { 0x83, 0x00 }, /* 83 */
+ { 0x84, 0x00 }, /* 84 */
+ { 0x85, 0x00 }, /* 85 */
+ { 0x86, 0x00 }, /* 86 */
+ { 0x87, 0x00 }, /* 87 */
+ { 0x88, 0x00 }, /* 88 */
+ { 0x89, 0x00 }, /* 89 */
+ { 0x8a, 0x00 }, /* 8A */
+ { 0x8b, 0x00 }, /* 8B */
+ { 0x8c, 0x00 }, /* 8C */
+ { 0x8d, 0x00 }, /* 8D */
+ { 0x8e, 0x00 }, /* 8E */
+ { 0x8f, 0x00 }, /* 8F */
+ { 0x90, 0x00 }, /* 90 */
+ { 0x91, 0x00 }, /* 91 */
+ { 0x92, 0x30 }, /* 92 */
+ { 0x93, 0xF0 }, /* 93 */
+ { 0x94, 0x00 }, /* 94 */
+ { 0x95, 0x00 }, /* 95 */
+ { 0x96, 0x3F }, /* 96 */
+ { 0x97, 0x00 }, /* 97 */
+ { 0xff, 0x00 }, /* FF */
};
static struct {
{ 0xFF, 0x00 }, /* FF */
};
-static int max98095_readable(struct snd_soc_codec *codec, unsigned int reg)
+static bool max98095_readable(struct device *dev, unsigned int reg)
{
if (reg >= M98095_REG_CNT)
return 0;
return max98095_access[reg].readable != 0;
}
-static int max98095_volatile(struct snd_soc_codec *codec, unsigned int reg)
+static bool max98095_volatile(struct device *dev, unsigned int reg)
{
if (reg > M98095_REG_MAX_CACHED)
return 1;
return 0;
}
-/*
- * Filter coefficients are in a separate register segment
- * and they share the address space of the normal registers.
- * The coefficient registers do not need or share the cache.
- */
-static int max98095_hw_write(struct snd_soc_codec *codec, unsigned int reg,
- unsigned int value)
-{
- int ret;
+static const struct regmap_config max98095_regmap = {
+ .reg_bits = 8,
+ .val_bits = 8,
- codec->cache_bypass = 1;
- ret = snd_soc_write(codec, reg, value);
- codec->cache_bypass = 0;
+ .reg_defaults = max98095_reg_def,
+ .num_reg_defaults = ARRAY_SIZE(max98095_reg_def),
+ .max_register = M98095_0FF_REV_ID,
+ .cache_type = REGCACHE_RBTREE,
- return ret ? -EIO : 0;
-}
+ .readable_reg = max98095_readable,
+ .volatile_reg = max98095_volatile,
+};
/*
* Load equalizer DSP coefficient configurations registers
/* Step through the registers and coefs */
for (i = 0; i < M98095_COEFS_PER_BAND; i++) {
- max98095_hw_write(codec, eq_reg++, M98095_BYTE1(coefs[i]));
- max98095_hw_write(codec, eq_reg++, M98095_BYTE0(coefs[i]));
+ snd_soc_write(codec, eq_reg++, M98095_BYTE1(coefs[i]));
+ snd_soc_write(codec, eq_reg++, M98095_BYTE0(coefs[i]));
}
}
/* Step through the registers and coefs */
for (i = 0; i < M98095_COEFS_PER_BAND; i++) {
- max98095_hw_write(codec, bq_reg++, M98095_BYTE1(coefs[i]));
- max98095_hw_write(codec, bq_reg++, M98095_BYTE0(coefs[i]));
+ snd_soc_write(codec, bq_reg++, M98095_BYTE1(coefs[i]));
+ snd_soc_write(codec, bq_reg++, M98095_BYTE0(coefs[i]));
}
}
{"MIC2 Input", NULL, "MIC2"},
};
-static int max98095_add_widgets(struct snd_soc_codec *codec)
-{
- snd_soc_add_codec_controls(codec, max98095_snd_controls,
- ARRAY_SIZE(max98095_snd_controls));
-
- return 0;
-}
-
/* codec mclk clock divider coefficients */
static const struct {
u32 rate;
static int max98095_set_bias_level(struct snd_soc_codec *codec,
enum snd_soc_bias_level level)
{
+ struct max98095_priv *max98095 = snd_soc_codec_get_drvdata(codec);
int ret;
switch (level) {
case SND_SOC_BIAS_STANDBY:
if (codec->dapm.bias_level == SND_SOC_BIAS_OFF) {
- ret = snd_soc_cache_sync(codec);
+ ret = regcache_sync(max98095->regmap);
if (ret != 0) {
dev_err(codec->dev, "Failed to sync cache: %d\n", ret);
case SND_SOC_BIAS_OFF:
snd_soc_update_bits(codec, M98095_090_PWR_EN_IN,
M98095_MBEN, 0);
- codec->cache_sync = 1;
+ regcache_mark_dirty(max98095->regmap);
break;
}
codec->dapm.bias_level = level;
struct max98095_pdata *pdata = max98095->pdata;
int channel = max98095_get_eq_channel(kcontrol->id.name);
struct max98095_cdata *cdata;
- int sel = ucontrol->value.integer.value[0];
+ unsigned int sel = ucontrol->value.integer.value[0];
struct max98095_eq_cfg *coef_set;
int fs, best, best_val, i;
int regmask, regsave;
struct max98095_pdata *pdata = max98095->pdata;
int channel = max98095_get_bq_channel(codec, kcontrol->id.name);
struct max98095_cdata *cdata;
- int sel = ucontrol->value.integer.value[0];
+ unsigned int sel = ucontrol->value.integer.value[0];
struct max98095_biquad_cfg *coef_set;
int fs, best, best_val, i;
int regmask, regsave;
/* Reset to hardware default for registers, as there is not
* a soft reset hardware control register */
for (i = M98095_010_HOST_INT_CFG; i < M98095_REG_MAX_CACHED; i++) {
- ret = snd_soc_write(codec, i, max98095_reg_def[i]);
+ ret = snd_soc_write(codec, i, snd_soc_read(codec, i));
if (ret < 0) {
dev_err(codec->dev, "Failed to reset: %d\n", ret);
return ret;
struct i2c_client *client;
int ret = 0;
- ret = snd_soc_codec_set_cache_io(codec, 8, 8, SND_SOC_I2C);
+ ret = snd_soc_codec_set_cache_io(codec, 8, 8, SND_SOC_REGMAP);
if (ret != 0) {
dev_err(codec->dev, "Failed to set cache I/O: %d\n", ret);
return ret;
snd_soc_update_bits(codec, M98095_097_PWR_SYS, M98095_SHDNRUN,
M98095_SHDNRUN);
- max98095_add_widgets(codec);
-
return 0;
err_irq:
.suspend = max98095_suspend,
.resume = max98095_resume,
.set_bias_level = max98095_set_bias_level,
- .reg_cache_size = ARRAY_SIZE(max98095_reg_def),
- .reg_word_size = sizeof(u8),
- .reg_cache_default = max98095_reg_def,
- .readable_register = max98095_readable,
- .volatile_register = max98095_volatile,
+ .controls = max98095_snd_controls,
+ .num_controls = ARRAY_SIZE(max98095_snd_controls),
.dapm_widgets = max98095_dapm_widgets,
.num_dapm_widgets = ARRAY_SIZE(max98095_dapm_widgets),
.dapm_routes = max98095_audio_map,
if (max98095 == NULL)
return -ENOMEM;
+ max98095->regmap = devm_regmap_init_i2c(i2c, &max98095_regmap);
+ if (IS_ERR(max98095->regmap)) {
+ ret = PTR_ERR(max98095->regmap);
+ dev_err(&i2c->dev, "Failed to allocate regmap: %d\n", ret);
+ return ret;
+ }
+
max98095->devtype = id->driver_data;
i2c_set_clientdata(i2c, max98095);
max98095->pdata = i2c->dev.platform_data;
#include <linux/module.h>
#include <linux/init.h>
#include <linux/i2c.h>
+#include <linux/regmap.h>
#include <linux/slab.h>
#include <sound/pcm.h>
#include <sound/pcm_params.h>
#include "max9850.h"
struct max9850_priv {
+ struct regmap *regmap;
unsigned int sysclk;
};
/* max9850 register cache */
-static const u8 max9850_reg[MAX9850_CACHEREGNUM] = {
- 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+static const struct reg_default max9850_reg[] = {
+ { 2, 0x0c },
+ { 3, 0x00 },
+ { 4, 0x00 },
+ { 5, 0x00 },
+ { 6, 0x00 },
+ { 7, 0x00 },
+ { 8, 0x00 },
+ { 9, 0x00 },
+ { 10, 0x00 },
};
/* these registers are not used at the moment but provided for the sake of
* completeness */
-static int max9850_volatile_register(struct snd_soc_codec *codec,
- unsigned int reg)
+static bool max9850_volatile_register(struct device *dev, unsigned int reg)
{
switch (reg) {
case MAX9850_STATUSA:
}
}
+static const struct regmap_config max9850_regmap = {
+ .reg_bits = 8,
+ .val_bits = 8,
+
+ .max_register = MAX9850_DIGITAL_AUDIO,
+ .volatile_reg = max9850_volatile_register,
+ .cache_type = REGCACHE_RBTREE,
+};
+
static const unsigned int max9850_tlv[] = {
TLV_DB_RANGE_HEAD(4),
0x18, 0x1f, TLV_DB_SCALE_ITEM(-7450, 400, 0),
static int max9850_set_bias_level(struct snd_soc_codec *codec,
enum snd_soc_bias_level level)
{
+ struct max9850_priv *max9850 = snd_soc_codec_get_drvdata(codec);
int ret;
switch (level) {
break;
case SND_SOC_BIAS_STANDBY:
if (codec->dapm.bias_level == SND_SOC_BIAS_OFF) {
- ret = snd_soc_cache_sync(codec);
+ ret = regcache_sync(max9850->regmap);
if (ret) {
dev_err(codec->dev,
"Failed to sync cache: %d\n", ret);
{
int ret;
- ret = snd_soc_codec_set_cache_io(codec, 8, 8, SND_SOC_I2C);
+ ret = snd_soc_codec_set_cache_io(codec, 8, 8, SND_SOC_REGMAP);
if (ret < 0) {
dev_err(codec->dev, "Failed to set cache I/O: %d\n", ret);
return ret;
.suspend = max9850_suspend,
.resume = max9850_resume,
.set_bias_level = max9850_set_bias_level,
- .reg_cache_size = ARRAY_SIZE(max9850_reg),
- .reg_word_size = sizeof(u8),
- .reg_cache_default = max9850_reg,
- .volatile_register = max9850_volatile_register,
.controls = max9850_controls,
.num_controls = ARRAY_SIZE(max9850_controls),
if (max9850 == NULL)
return -ENOMEM;
+ max9850->regmap = devm_regmap_init_i2c(i2c, &max9850_regmap);
+ if (IS_ERR(max9850->regmap))
+ return PTR_ERR(max9850->regmap);
+
i2c_set_clientdata(i2c, max9850);
ret = snd_soc_register_codec(&i2c->dev,
#include <sound/soc.h>
#include <sound/initval.h>
#include <sound/soc-dapm.h>
+#include <linux/regmap.h>
#include "mc13783.h"
-#define MC13783_AUDIO_RX0 36
-#define MC13783_AUDIO_RX1 37
-#define MC13783_AUDIO_TX 38
-#define MC13783_SSI_NETWORK 39
-#define MC13783_AUDIO_CODEC 40
-#define MC13783_AUDIO_DAC 41
-
#define AUDIO_RX0_ALSPEN (1 << 5)
#define AUDIO_RX0_ALSPSEL (1 << 7)
#define AUDIO_RX0_ADDCDC (1 << 21)
struct mc13783_priv {
struct mc13xxx *mc13xxx;
+ struct regmap *regmap;
enum mc13783_ssi_port adc_ssi_port;
enum mc13783_ssi_port dac_ssi_port;
};
-static unsigned int mc13783_read(struct snd_soc_codec *codec,
- unsigned int reg)
-{
- struct mc13783_priv *priv = snd_soc_codec_get_drvdata(codec);
- unsigned int value = 0;
-
- mc13xxx_lock(priv->mc13xxx);
-
- mc13xxx_reg_read(priv->mc13xxx, reg, &value);
-
- mc13xxx_unlock(priv->mc13xxx);
-
- return value;
-}
-
-static int mc13783_write(struct snd_soc_codec *codec,
- unsigned int reg, unsigned int value)
-{
- struct mc13783_priv *priv = snd_soc_codec_get_drvdata(codec);
- int ret;
-
- mc13xxx_lock(priv->mc13xxx);
-
- ret = mc13xxx_reg_write(priv->mc13xxx, reg, value);
-
- /* include errata fix for spi audio problems */
- if (reg == MC13783_AUDIO_CODEC || reg == MC13783_AUDIO_DAC)
- ret = mc13xxx_reg_write(priv->mc13xxx, reg, value);
-
- mc13xxx_unlock(priv->mc13xxx);
-
- return ret;
-}
-
/* Mapping between sample rates and register value */
static unsigned int mc13783_rates[] = {
8000, 11025, 12000, 16000,
static const struct snd_kcontrol_new samp_ctl =
SOC_DAPM_SINGLE("Switch", MC13783_AUDIO_RX0, 3, 1, 0);
+static const char * const speaker_amp_source_text[] = {
+ "CODEC", "Right"
+};
+static const SOC_ENUM_SINGLE_DECL(speaker_amp_source, MC13783_AUDIO_RX0, 4,
+ speaker_amp_source_text);
+static const struct snd_kcontrol_new speaker_amp_source_mux =
+ SOC_DAPM_ENUM("Speaker Amp Source MUX", speaker_amp_source);
+
+static const char * const headset_amp_source_text[] = {
+ "CODEC", "Mixer"
+};
+
+static const SOC_ENUM_SINGLE_DECL(headset_amp_source, MC13783_AUDIO_RX0, 11,
+ headset_amp_source_text);
+static const struct snd_kcontrol_new headset_amp_source_mux =
+ SOC_DAPM_ENUM("Headset Amp Source MUX", headset_amp_source);
+
+static const struct snd_kcontrol_new cdcout_ctl =
+ SOC_DAPM_SINGLE("Switch", MC13783_AUDIO_RX0, 18, 1, 0);
+
+static const struct snd_kcontrol_new adc_bypass_ctl =
+ SOC_DAPM_SINGLE("Switch", MC13783_AUDIO_CODEC, 16, 1, 0);
+
static const struct snd_kcontrol_new lamp_ctl =
SOC_DAPM_SINGLE("Switch", MC13783_AUDIO_RX0, 5, 1, 0);
SND_SOC_DAPM_VIRT_MUX("PGA Right Input Mux", SND_SOC_NOPM, 0, 0,
&right_input_mux),
+ SND_SOC_DAPM_MUX("Speaker Amp Source MUX", SND_SOC_NOPM, 0, 0,
+ &speaker_amp_source_mux),
+
+ SND_SOC_DAPM_MUX("Headset Amp Source MUX", SND_SOC_NOPM, 0, 0,
+ &headset_amp_source_mux),
+
SND_SOC_DAPM_PGA("PGA Left Input", SND_SOC_NOPM, 0, 0, NULL, 0),
SND_SOC_DAPM_PGA("PGA Right Input", SND_SOC_NOPM, 0, 0, NULL, 0),
SND_SOC_DAPM_ADC("ADC", "Capture", MC13783_AUDIO_CODEC, 11, 0),
SND_SOC_DAPM_SUPPLY("ADC_Reset", MC13783_AUDIO_CODEC, 15, 0, NULL, 0),
+ SND_SOC_DAPM_PGA("Voice CODEC PGA", MC13783_AUDIO_RX1, 0, 0, NULL, 0),
+ SND_SOC_DAPM_SWITCH("Voice CODEC Bypass", MC13783_AUDIO_CODEC, 16, 0,
+ &adc_bypass_ctl),
+
/* Output */
SND_SOC_DAPM_SUPPLY("DAC_E", MC13783_AUDIO_DAC, 11, 0, NULL, 0),
SND_SOC_DAPM_SUPPLY("DAC_Reset", MC13783_AUDIO_DAC, 15, 0, NULL, 0),
SND_SOC_DAPM_OUTPUT("RXOUTR"),
SND_SOC_DAPM_OUTPUT("HSL"),
SND_SOC_DAPM_OUTPUT("HSR"),
+ SND_SOC_DAPM_OUTPUT("LSPL"),
SND_SOC_DAPM_OUTPUT("LSP"),
SND_SOC_DAPM_OUTPUT("SP"),
+ SND_SOC_DAPM_OUTPUT("CDCOUT"),
- SND_SOC_DAPM_SWITCH("Speaker Amp", MC13783_AUDIO_RX0, 3, 0, &samp_ctl),
+ SND_SOC_DAPM_SWITCH("CDCOUT Switch", MC13783_AUDIO_RX0, 18, 0,
+ &cdcout_ctl),
+ SND_SOC_DAPM_SWITCH("Speaker Amp Switch", MC13783_AUDIO_RX0, 3, 0,
+ &samp_ctl),
SND_SOC_DAPM_SWITCH("Loudspeaker Amp", SND_SOC_NOPM, 0, 0, &lamp_ctl),
SND_SOC_DAPM_SWITCH("Headset Amp Left", MC13783_AUDIO_RX0, 10, 0,
&hlamp_ctl),
{ "ADC", NULL, "PGA Right Input"},
{ "ADC", NULL, "ADC_Reset"},
+ { "Voice CODEC PGA", "Voice CODEC Bypass", "ADC" },
+
+ { "Speaker Amp Source MUX", "CODEC", "Voice CODEC PGA"},
+ { "Speaker Amp Source MUX", "Right", "DAC PGA"},
+
+ { "Headset Amp Source MUX", "CODEC", "Voice CODEC PGA"},
+ { "Headset Amp Source MUX", "Mixer", "DAC PGA"},
+
/* Output */
{ "HSL", NULL, "Headset Amp Left" },
{ "HSR", NULL, "Headset Amp Right"},
{ "RXOUTL", NULL, "Line out Amp Left"},
{ "RXOUTR", NULL, "Line out Amp Right"},
- { "SP", NULL, "Speaker Amp"},
- { "Speaker Amp", NULL, "DAC PGA"},
- { "LSP", NULL, "DAC PGA"},
- { "Headset Amp Left", NULL, "DAC PGA"},
- { "Headset Amp Right", NULL, "DAC PGA"},
+ { "SP", "Speaker Amp Switch", "Speaker Amp Source MUX"},
+ { "LSP", "Loudspeaker Amp", "Speaker Amp Source MUX"},
+ { "HSL", "Headset Amp Left", "Headset Amp Source MUX"},
+ { "HSR", "Headset Amp Right", "Headset Amp Source MUX"},
{ "Line out Amp Left", NULL, "DAC PGA"},
{ "Line out Amp Right", NULL, "DAC PGA"},
{ "DAC PGA", NULL, "DAC"},
{ "DAC", NULL, "DAC_E"},
+ { "CDCOUT", "CDCOUT Switch", "Voice CODEC PGA"},
};
static const char * const mc13783_3d_mixer[] = {"Stereo", "Phase Mix",
static struct snd_kcontrol_new mc13783_control_list[] = {
SOC_SINGLE("Loudspeaker enable", MC13783_AUDIO_RX0, 5, 1, 0),
SOC_SINGLE("PCM Playback Volume", MC13783_AUDIO_RX1, 6, 15, 0),
+ SOC_SINGLE("PCM Playback Switch", MC13783_AUDIO_RX1, 5, 1, 0),
SOC_DOUBLE("PCM Capture Volume", MC13783_AUDIO_TX, 19, 14, 31, 0),
SOC_ENUM("3D Control", mc13783_enum_3d_mixer),
+
+ SOC_SINGLE("CDCOUT Switch", MC13783_AUDIO_RX0, 18, 1, 0),
+ SOC_SINGLE("Earpiece Amp Switch", MC13783_AUDIO_RX0, 3, 1, 0),
+ SOC_DOUBLE("Headset Amp Switch", MC13783_AUDIO_RX0, 10, 9, 1, 0),
+ SOC_DOUBLE("Line out Amp Switch", MC13783_AUDIO_RX0, 16, 15, 1, 0),
+
+ SOC_SINGLE("PCM Capture Mixin Switch", MC13783_AUDIO_RX0, 22, 1, 0),
+ SOC_SINGLE("Line in Capture Mixin Switch", MC13783_AUDIO_RX0, 23, 1, 0),
+
+ SOC_SINGLE("CODEC Capture Volume", MC13783_AUDIO_RX1, 1, 15, 0),
+ SOC_SINGLE("CODEC Capture Mixin Switch", MC13783_AUDIO_RX0, 21, 1, 0),
+
+ SOC_SINGLE("Line in Capture Volume", MC13783_AUDIO_RX1, 12, 15, 0),
+ SOC_SINGLE("Line in Capture Switch", MC13783_AUDIO_RX1, 10, 1, 0),
+
+ SOC_SINGLE("MC1 Capture Bias Switch", MC13783_AUDIO_TX, 0, 1, 0),
+ SOC_SINGLE("MC2 Capture Bias Switch", MC13783_AUDIO_TX, 1, 1, 0),
};
static int mc13783_probe(struct snd_soc_codec *codec)
{
struct mc13783_priv *priv = snd_soc_codec_get_drvdata(codec);
+ int ret;
- mc13xxx_lock(priv->mc13xxx);
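+	/* Register I/O goes through the parent MC13xxx MFD device's regmap */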
+ codec->control_data = dev_get_regmap(codec->dev->parent, NULL);
+ ret = snd_soc_codec_set_cache_io(codec, 8, 24, SND_SOC_REGMAP);
+ if (ret != 0) {
+ dev_err(codec->dev, "Failed to set cache I/O: %d\n", ret);
+ return ret;
+ }
/* these are the reset values */
mc13xxx_reg_write(priv->mc13xxx, MC13783_AUDIO_RX0, 0x25893);
mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_DAC,
0, AUDIO_SSI_SEL);
- mc13xxx_unlock(priv->mc13xxx);
-
return 0;
}
{
struct mc13783_priv *priv = snd_soc_codec_get_drvdata(codec);
- mc13xxx_lock(priv->mc13xxx);
-
/* Make sure VAUDIOON is off */
mc13xxx_reg_rmw(priv->mc13xxx, MC13783_AUDIO_RX0, 0x3, 0);
- mc13xxx_unlock(priv->mc13xxx);
-
return 0;
}
static struct snd_soc_codec_driver soc_codec_dev_mc13783 = {
.probe = mc13783_probe,
.remove = mc13783_remove,
- .read = mc13783_read,
- .write = mc13783_write,
.controls = mc13783_control_list,
.num_controls = ARRAY_SIZE(mc13783_control_list),
.dapm_widgets = mc13783_dapm_widgets,
struct ml26124_priv *priv = snd_soc_codec_get_drvdata(codec);
int i = get_coeff(priv->mclk, params_rate(hw_params));
+ if (i < 0)
+ return i;
priv->substream = substream;
priv->rate = params_rate(hw_params);
#include <linux/gpio.h>
#include <linux/i2c.h>
#include <linux/regmap.h>
+#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <sound/pcm.h>
static const struct regmap_config pcm1681_regmap = {
.reg_bits = 8,
.val_bits = 8,
- .max_register = ARRAY_SIZE(pcm1681_reg_defaults) + 1,
+ .max_register = 0x13,
.reg_defaults = pcm1681_reg_defaults,
.num_reg_defaults = ARRAY_SIZE(pcm1681_reg_defaults),
.writeable_reg = pcm1681_writeable_reg,
#include <sound/initval.h>
#include <sound/soc.h>
#include <sound/tlv.h>
+#include <linux/of.h>
#include <linux/of_device.h>
#include "pcm1792a.h"
static const struct regmap_config pcm1792a_regmap = {
.reg_bits = 8,
.val_bits = 8,
- .max_register = 24,
+ .max_register = 23,
.reg_defaults = pcm1792a_reg_defaults,
.num_reg_defaults = ARRAY_SIZE(pcm1792a_reg_defaults),
.writeable_reg = pcm1792a_writeable_reg,
#include <linux/of_gpio.h>
#include <linux/platform_device.h>
#include <linux/spi/spi.h>
+#include <linux/acpi.h>
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/pcm_params.h>
return 0;
}
-void hp_amp_power_on(struct snd_soc_codec *codec)
+static void hp_amp_power_on(struct snd_soc_codec *codec)
{
struct rt5640_priv *rt5640 = snd_soc_codec_get_drvdata(codec);
struct snd_soc_pcm_runtime *rtd = substream->private_data;
struct snd_soc_codec *codec = rtd->codec;
struct rt5640_priv *rt5640 = snd_soc_codec_get_drvdata(codec);
- unsigned int val_len = 0, val_clk, mask_clk, dai_sel;
- int pre_div, bclk_ms, frame_size;
+ unsigned int val_len = 0, val_clk, mask_clk;
+ int dai_sel, pre_div, bclk_ms, frame_size;
rt5640->lrck[dai->id] = params_rate(params);
pre_div = get_clk_info(rt5640->sysclk, rt5640->lrck[dai->id]);
if (pre_div < 0) {
- dev_err(codec->dev, "Unsupported clock setting\n");
+ dev_err(codec->dev, "Unsupported clock setting %d for DAI %d\n",
+ rt5640->lrck[dai->id], dai->id);
return -EINVAL;
}
frame_size = snd_soc_params_to_frame_size(params);
{
struct snd_soc_codec *codec = dai->codec;
struct rt5640_priv *rt5640 = snd_soc_codec_get_drvdata(codec);
- unsigned int reg_val = 0, dai_sel;
+ unsigned int reg_val = 0;
+ int dai_sel;
switch (fmt & SND_SOC_DAIFMT_MASTER_MASK) {
case SND_SOC_DAIFMT_CBM_CFM:
rt5640_reset(codec);
regcache_cache_only(rt5640->regmap, true);
regcache_mark_dirty(rt5640->regmap);
+ if (gpio_is_valid(rt5640->pdata.ldo1_en))
+ gpio_set_value_cansleep(rt5640->pdata.ldo1_en, 0);
return 0;
}
static int rt5640_resume(struct snd_soc_codec *codec)
{
- rt5640_set_bias_level(codec, SND_SOC_BIAS_STANDBY);
+ struct rt5640_priv *rt5640 = snd_soc_codec_get_drvdata(codec);
+
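+	/* Power the LDO1 supply back up and wait for it to settle */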
+ if (gpio_is_valid(rt5640->pdata.ldo1_en)) {
+ gpio_set_value_cansleep(rt5640->pdata.ldo1_en, 1);
+ msleep(400);
+ }
return 0;
}
};
MODULE_DEVICE_TABLE(i2c, rt5640_i2c_id);
+#ifdef CONFIG_ACPI
+static struct acpi_device_id rt5640_acpi_match[] = {
+ { "INT33CA", 0 },
+ { },
+};
+MODULE_DEVICE_TABLE(acpi, rt5640_acpi_match);
+#endif
+
static int rt5640_parse_dt(struct rt5640_priv *rt5640, struct device_node *np)
{
rt5640->pdata.in1_diff = of_property_read_bool(np,
.driver = {
.name = "rt5640",
.owner = THIS_MODULE,
+ .acpi_match_table = ACPI_PTR(rt5640_acpi_match),
},
.probe = rt5640_i2c_probe,
.remove = rt5640_i2c_remove,
/* Left Input */
{"Left Line1L Mux", "single-ended", "LINE1L"},
{"Left Line1L Mux", "differential", "LINE1L"},
+ {"Left Line1R Mux", "single-ended", "LINE1R"},
+ {"Left Line1R Mux", "differential", "LINE1R"},
{"Left Line2L Mux", "single-ended", "LINE2L"},
{"Left Line2L Mux", "differential", "LINE2L"},
/* Right Input */
{"Right Line1R Mux", "single-ended", "LINE1R"},
{"Right Line1R Mux", "differential", "LINE1R"},
+ {"Right Line1L Mux", "single-ended", "LINE1L"},
+ {"Right Line1L Mux", "differential", "LINE1L"},
{"Right Line2R Mux", "single-ended", "LINE2R"},
{"Right Line2R Mux", "differential", "LINE2R"},
ARIZONA_MUX_ROUTES("ASRC2L", "ASRC2L"),
ARIZONA_MUX_ROUTES("ASRC2R", "ASRC2R"),
+ { "AEC Loopback", "HPOUT1L", "OUT1L" },
+ { "AEC Loopback", "HPOUT1R", "OUT1R" },
{ "HPOUT1L", NULL, "OUT1L" },
{ "HPOUT1R", NULL, "OUT1R" },
+ { "AEC Loopback", "HPOUT2L", "OUT2L" },
+ { "AEC Loopback", "HPOUT2R", "OUT2R" },
{ "HPOUT2L", NULL, "OUT2L" },
{ "HPOUT2R", NULL, "OUT2R" },
+ { "AEC Loopback", "HPOUT3L", "OUT3L" },
+ { "AEC Loopback", "HPOUT3R", "OUT3R" },
{ "HPOUT3L", NULL, "OUT3L" },
{ "HPOUT3R", NULL, "OUT3L" },
+ { "AEC Loopback", "SPKOUTL", "OUT4L" },
{ "SPKOUTLN", NULL, "OUT4L" },
{ "SPKOUTLP", NULL, "OUT4L" },
+ { "AEC Loopback", "SPKOUTR", "OUT4R" },
{ "SPKOUTRN", NULL, "OUT4R" },
{ "SPKOUTRP", NULL, "OUT4R" },
+ { "AEC Loopback", "SPKDAT1L", "OUT5L" },
+ { "AEC Loopback", "SPKDAT1R", "OUT5R" },
{ "SPKDAT1L", NULL, "OUT5L" },
{ "SPKDAT1R", NULL, "OUT5R" },
+ { "AEC Loopback", "SPKDAT2L", "OUT6L" },
+ { "AEC Loopback", "SPKDAT2R", "OUT6R" },
{ "SPKDAT2L", NULL, "OUT6L" },
{ "SPKDAT2R", NULL, "OUT6R" },
ret = regmap_raw_write(adsp->regmap, reg, scratch,
ctl->len);
if (ret) {
- adsp_err(adsp, "Failed to write %zu bytes to %x\n",
- ctl->len, reg);
+ adsp_err(adsp, "Failed to write %zu bytes to %x: %d\n",
+ ctl->len, reg, ret);
kfree(scratch);
return ret;
}
+ adsp_dbg(adsp, "Wrote %zu bytes to %x\n", ctl->len, reg);
kfree(scratch);
ret = regmap_raw_read(adsp->regmap, reg, scratch, ctl->len);
if (ret) {
- adsp_err(adsp, "Failed to read %zu bytes from %x\n",
- ctl->len, reg);
+ adsp_err(adsp, "Failed to read %zu bytes from %x: %d\n",
+ ctl->len, reg, ret);
kfree(scratch);
return ret;
}
+ adsp_dbg(adsp, "Read %zu bytes from %x\n", ctl->len, reg);
memcpy(buf, scratch, ctl->len);
kfree(scratch);
file, header->ver);
goto out_fw;
}
+ adsp_info(dsp, "Firmware version: %d\n", header->ver);
if (header->core != dsp->type) {
adsp_err(dsp, "%s: invalid core %d != %d\n",
&buf_list);
if (!buf) {
adsp_err(dsp, "Out of memory\n");
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out_fw;
}
ret = regmap_raw_write_async(regmap, reg, buf->buf,
if (i + 1 < algs) {
region->len = be32_to_cpu(adsp1_alg[i + 1].dm);
region->len -= be32_to_cpu(adsp1_alg[i].dm);
+ region->len *= 4;
wm_adsp_create_control(dsp, region);
} else {
adsp_warn(dsp, "Missing length info for region DM with ID %x\n",
if (i + 1 < algs) {
region->len = be32_to_cpu(adsp1_alg[i + 1].zm);
region->len -= be32_to_cpu(adsp1_alg[i].zm);
+ region->len *= 4;
wm_adsp_create_control(dsp, region);
} else {
adsp_warn(dsp, "Missing length info for region ZM with ID %x\n",
if (i + 1 < algs) {
region->len = be32_to_cpu(adsp2_alg[i + 1].xm);
region->len -= be32_to_cpu(adsp2_alg[i].xm);
+ region->len *= 4;
wm_adsp_create_control(dsp, region);
} else {
adsp_warn(dsp, "Missing length info for region XM with ID %x\n",
if (i + 1 < algs) {
region->len = be32_to_cpu(adsp2_alg[i + 1].ym);
region->len -= be32_to_cpu(adsp2_alg[i].ym);
+ region->len *= 4;
wm_adsp_create_control(dsp, region);
} else {
adsp_warn(dsp, "Missing length info for region YM with ID %x\n",
if (i + 1 < algs) {
region->len = be32_to_cpu(adsp2_alg[i + 1].zm);
region->len -= be32_to_cpu(adsp2_alg[i].zm);
+ region->len *= 4;
wm_adsp_create_control(dsp, region);
} else {
adsp_warn(dsp, "Missing length info for region ZM with ID %x\n",
le32_to_cpu(blk->len));
if (ret != 0) {
adsp_err(dsp,
- "%s.%d: Failed to write to %x in %s\n",
- file, blocks, reg, region_name);
+ "%s.%d: Failed to write to %x in %s: %d\n",
+ file, blocks, reg, region_name, ret);
}
}
struct snd_soc_codec *codec = w->codec;
struct wm_adsp *dsps = snd_soc_codec_get_drvdata(codec);
struct wm_adsp *dsp = &dsps[w->shift];
+ struct wm_adsp_alg_region *alg_region;
struct wm_coeff_ctl *ctl;
int ret;
int val;
list_for_each_entry(ctl, &dsp->ctl_list, list)
ctl->enabled = 0;
+
+ while (!list_empty(&dsp->alg_regions)) {
+ alg_region = list_first_entry(&dsp->alg_regions,
+ struct wm_adsp_alg_region,
+ list);
+ list_del(&alg_region->list);
+ kfree(alg_region);
+ }
break;
default:
hubs->hp_startup_mode);
break;
}
+ break;
case SND_SOC_DAPM_PRE_PMD:
snd_soc_update_bits(codec, WM8993_CHARGE_PUMP_1,
config SND_DAVINCI_SOC
- tristate "SoC Audio for the TI DAVINCI chip"
- depends on ARCH_DAVINCI
+ tristate "SoC Audio for the TI DAVINCI or AM33XX chip"
+ depends on ARCH_DAVINCI || SOC_AM33XX
help
+ Platform driver for daVinci or AM33xx
Say Y or M if you want to add support for codecs attached to
- the DAVINCI AC97 or I2S interface. You will also need
+ the DAVINCI AC97, I2S, or McASP interface. You will also need
to select the audio interfaces to support below.
config SND_DAVINCI_SOC_I2S
config SND_DAVINCI_SOC_VCIF
tristate
+config SND_AM33XX_SOC_EVM
+ tristate "SoC Audio for the AM33XX chip based boards"
+ depends on SND_DAVINCI_SOC && SOC_AM33XX
+ select SND_SOC_TLV320AIC3X
+ select SND_DAVINCI_SOC_MCASP
+ help
+ Say Y or M if you want to add support for SoC audio on AM33XX
+ boards using McASP and TLV320AIC3X codec. For example, AM335X-EVM,
+ AM335X-EVMSK, and BeagleBone with AudioCape boards have this
+ setup.
+
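For reference, a minimal sketch of the .config fragment this new option implies on an AM33xx build; the module/built-in choice is arbitrary here, and SND_SOC_TLV320AIC3X and SND_DAVINCI_SOC_MCASP are pulled in automatically by the select statements above:

	CONFIG_SND_DAVINCI_SOC=m
	CONFIG_SND_AM33XX_SOC_EVM=m
	# selected by SND_AM33XX_SOC_EVM:
	CONFIG_SND_DAVINCI_SOC_MCASP=m
	CONFIG_SND_SOC_TLV320AIC3X=m
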
config SND_DAVINCI_SOC_EVM
tristate "SoC Audio support for DaVinci DM6446, DM355 or DM365 EVM"
depends on SND_DAVINCI_SOC
snd-soc-evm-objs := davinci-evm.o
obj-$(CONFIG_SND_DAVINCI_SOC_EVM) += snd-soc-evm.o
+obj-$(CONFIG_SND_AM33XX_SOC_EVM) += snd-soc-evm.o
obj-$(CONFIG_SND_DM6467_SOC_EVM) += snd-soc-evm.o
obj-$(CONFIG_SND_DA830_SOC_EVM) += snd-soc-evm.o
obj-$(CONFIG_SND_DA850_SOC_EVM) += snd-soc-evm.o
#include <linux/platform_device.h>
#include <linux/platform_data/edma.h>
#include <linux/i2c.h>
+#include <linux/of_platform.h>
#include <sound/core.h>
#include <sound/pcm.h>
#include <sound/soc.h>
#include <asm/dma.h>
#include <asm/mach-types.h>
+#include <linux/edma.h>
+
#include "davinci-pcm.h"
#include "davinci-i2s.h"
#include "davinci-mcasp.h"
+struct snd_soc_card_drvdata_davinci {
+ unsigned sysclk;
+};
+
#define AUDIO_FORMAT (SND_SOC_DAIFMT_DSP_B | \
SND_SOC_DAIFMT_CBM_CFM | SND_SOC_DAIFMT_IB_NF)
static int evm_hw_params(struct snd_pcm_substream *substream,
struct snd_soc_pcm_runtime *rtd = substream->private_data;
struct snd_soc_dai *codec_dai = rtd->codec_dai;
struct snd_soc_dai *cpu_dai = rtd->cpu_dai;
+ struct snd_soc_codec *codec = rtd->codec;
+ struct snd_soc_card *soc_card = codec->card;
int ret = 0;
- unsigned sysclk;
-
- /* ASP1 on DM355 EVM is clocked by an external oscillator */
- if (machine_is_davinci_dm355_evm() || machine_is_davinci_dm6467_evm() ||
- machine_is_davinci_dm365_evm())
- sysclk = 27000000;
-
- /* ASP0 in DM6446 EVM is clocked by U55, as configured by
- * board-dm644x-evm.c using GPIOs from U18. There are six
- * options; here we "know" we use a 48 KHz sample rate.
- */
- else if (machine_is_davinci_evm())
- sysclk = 12288000;
-
- else if (machine_is_davinci_da830_evm() ||
- machine_is_davinci_da850_evm())
- sysclk = 24576000;
-
- else
- return -EINVAL;
+ unsigned sysclk = ((struct snd_soc_card_drvdata_davinci *)
+ snd_soc_card_get_drvdata(soc_card))->sysclk;
/* set codec DAI configuration */
ret = snd_soc_dai_set_fmt(codec_dai, AUDIO_FORMAT);
{
struct snd_soc_codec *codec = rtd->codec;
struct snd_soc_dapm_context *dapm = &codec->dapm;
+ struct device_node *np = codec->card->dev->of_node;
+ int ret;
/* Add davinci-evm specific widgets */
snd_soc_dapm_new_controls(dapm, aic3x_dapm_widgets,
ARRAY_SIZE(aic3x_dapm_widgets));
- /* Set up davinci-evm specific audio path audio_map */
- snd_soc_dapm_add_routes(dapm, audio_map, ARRAY_SIZE(audio_map));
+ if (np) {
+ ret = snd_soc_of_parse_audio_routing(codec->card,
+ "ti,audio-routing");
+ if (ret)
+ return ret;
+ } else {
+ /* Set up davinci-evm specific audio path audio_map */
+ snd_soc_dapm_add_routes(dapm, audio_map, ARRAY_SIZE(audio_map));
+ }
/* not connected */
snd_soc_dapm_disable_pin(dapm, "MONO_LOUT");
};
/* davinci dm6446 evm audio machine driver */
+/*
+ * ASP0 in DM6446 EVM is clocked by U55, as configured by
+ * board-dm644x-evm.c using GPIOs from U18. There are six
+ * options; here we "know" we use a 48 KHz sample rate.
+ */
+static struct snd_soc_card_drvdata_davinci dm6446_snd_soc_card_drvdata = {
+ .sysclk = 12288000,
+};
+
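(For reference, 12.288 MHz is 256 x 48 kHz, which is why the comment above can "know" a 48 kHz sample rate for the DM6446 EVM.)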
static struct snd_soc_card dm6446_snd_soc_card_evm = {
.name = "DaVinci DM6446 EVM",
.owner = THIS_MODULE,
.dai_link = &dm6446_evm_dai,
.num_links = 1,
+ .drvdata = &dm6446_snd_soc_card_drvdata,
};
/* davinci dm355 evm audio machine driver */
+/* ASP1 on DM355 EVM is clocked by an external oscillator */
+static struct snd_soc_card_drvdata_davinci dm355_snd_soc_card_drvdata = {
+ .sysclk = 27000000,
+};
+
static struct snd_soc_card dm355_snd_soc_card_evm = {
.name = "DaVinci DM355 EVM",
.owner = THIS_MODULE,
.dai_link = &dm355_evm_dai,
.num_links = 1,
+ .drvdata = &dm355_snd_soc_card_drvdata,
};
/* davinci dm365 evm audio machine driver */
+static struct snd_soc_card_drvdata_davinci dm365_snd_soc_card_drvdata = {
+ .sysclk = 27000000,
+};
+
static struct snd_soc_card dm365_snd_soc_card_evm = {
.name = "DaVinci DM365 EVM",
.owner = THIS_MODULE,
.dai_link = &dm365_evm_dai,
.num_links = 1,
+ .drvdata = &dm365_snd_soc_card_drvdata,
};
/* davinci dm6467 evm audio machine driver */
+static struct snd_soc_card_drvdata_davinci dm6467_snd_soc_card_drvdata = {
+ .sysclk = 27000000,
+};
+
static struct snd_soc_card dm6467_snd_soc_card_evm = {
.name = "DaVinci DM6467 EVM",
.owner = THIS_MODULE,
.dai_link = dm6467_evm_dai,
.num_links = ARRAY_SIZE(dm6467_evm_dai),
+ .drvdata = &dm6467_snd_soc_card_drvdata,
+};
+
+static struct snd_soc_card_drvdata_davinci da830_snd_soc_card_drvdata = {
+ .sysclk = 24576000,
};
static struct snd_soc_card da830_snd_soc_card = {
.owner = THIS_MODULE,
.dai_link = &da830_evm_dai,
.num_links = 1,
+ .drvdata = &da830_snd_soc_card_drvdata,
+};
+
+static struct snd_soc_card_drvdata_davinci da850_snd_soc_card_drvdata = {
+ .sysclk = 24576000,
};
static struct snd_soc_card da850_snd_soc_card = {
.owner = THIS_MODULE,
.dai_link = &da850_evm_dai,
.num_links = 1,
+ .drvdata = &da850_snd_soc_card_drvdata,
+};
+
+#if defined(CONFIG_OF)
+
+/*
+ * The struct is used as a placeholder. It will be completely
+ * filled with data from the DT node.
+ */
+static struct snd_soc_dai_link evm_dai_tlv320aic3x = {
+ .name = "TLV320AIC3X",
+ .stream_name = "AIC3X",
+ .codec_dai_name = "tlv320aic3x-hifi",
+ .ops = &evm_ops,
+ .init = evm_aic3x_init,
+};
+
+static const struct of_device_id davinci_evm_dt_ids[] = {
+ {
+ .compatible = "ti,da830-evm-audio",
+ .data = (void *) &evm_dai_tlv320aic3x,
+ },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, davinci_evm_dt_ids);
+
+/* davinci evm audio machine driver */
+static struct snd_soc_card evm_soc_card = {
+ .owner = THIS_MODULE,
+ .num_links = 1,
};
+static int davinci_evm_probe(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+ const struct of_device_id *match =
+ of_match_device(of_match_ptr(davinci_evm_dt_ids), &pdev->dev);
+ struct snd_soc_dai_link *dai = (struct snd_soc_dai_link *) match->data;
+ struct snd_soc_card_drvdata_davinci *drvdata = NULL;
+ int ret = 0;
+
+ evm_soc_card.dai_link = dai;
+
+ dai->codec_of_node = of_parse_phandle(np, "ti,audio-codec", 0);
+ if (!dai->codec_of_node)
+ return -EINVAL;
+
+ dai->cpu_of_node = of_parse_phandle(np, "ti,mcasp-controller", 0);
+ if (!dai->cpu_of_node)
+ return -EINVAL;
+
+ dai->platform_of_node = dai->cpu_of_node;
+
+ evm_soc_card.dev = &pdev->dev;
+ ret = snd_soc_of_parse_card_name(&evm_soc_card, "ti,model");
+ if (ret)
+ return ret;
+
+ drvdata = devm_kzalloc(&pdev->dev, sizeof(*drvdata), GFP_KERNEL);
+ if (!drvdata)
+ return -ENOMEM;
+
+ ret = of_property_read_u32(np, "ti,codec-clock-rate", &drvdata->sysclk);
+ if (ret < 0)
+ return -EINVAL;
+
+ snd_soc_card_set_drvdata(&evm_soc_card, drvdata);
+ ret = devm_snd_soc_register_card(&pdev->dev, &evm_soc_card);
+
+ if (ret)
+ dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", ret);
+
+ return ret;
+}
+
+static int davinci_evm_remove(struct platform_device *pdev)
+{
+ struct snd_soc_card *card = platform_get_drvdata(pdev);
+
+ snd_soc_unregister_card(card);
+
+ return 0;
+}
+
+static struct platform_driver davinci_evm_driver = {
+ .probe = davinci_evm_probe,
+ .remove = davinci_evm_remove,
+ .driver = {
+ .name = "davinci_evm",
+ .owner = THIS_MODULE,
+ .of_match_table = of_match_ptr(davinci_evm_dt_ids),
+ },
+};
+#endif
+
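For reference, a sketch of the card node this probe routine parses. The node name, phandle labels, model string, clock rate, and routing entries are illustrative placeholders; only the property names ("ti,model", "ti,audio-codec", "ti,mcasp-controller", "ti,codec-clock-rate", "ti,audio-routing") come from the code above:

	sound {
		compatible = "ti,da830-evm-audio";
		ti,model = "DA830 EVM";			/* placeholder model string */
		ti,audio-codec = <&tlv320aic3x>;	/* phandle labels are illustrative */
		ti,mcasp-controller = <&mcasp0>;
		ti,codec-clock-rate = <12000000>;	/* example rate only */
		ti,audio-routing =	/* parsed by snd_soc_of_parse_audio_routing() */
			"Headphone Jack", "HPLOUT",
			"Headphone Jack", "HPROUT";
	};
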
static struct platform_device *evm_snd_device;
static int __init evm_init(void)
int index;
int ret;
+ /*
+ * If a device tree is present, the devices will be created dynamically.
+ * Only register the platform driver structure.
+ */
+#if defined(CONFIG_OF)
+ if (of_have_populated_dt())
+ return platform_driver_register(&davinci_evm_driver);
+#endif
+
if (machine_is_davinci_evm()) {
evm_snd_dev_data = &dm6446_snd_soc_card_evm;
index = 0;
static void __exit evm_exit(void)
{
+#if defined(CONFIG_OF)
+ if (of_have_populated_dt()) {
+ platform_driver_unregister(&davinci_evm_driver);
+ return;
+ }
+#endif
+
platform_device_unregister(evm_snd_device);
}
.name = "davinci-mcasp",
};
+/* Some HW specific values and defaults. The rest is filled in from DT. */
+static struct snd_platform_data dm646x_mcasp_pdata = {
+ .tx_dma_offset = 0x400,
+ .rx_dma_offset = 0x400,
+ .asp_chan_q = EVENTQ_0,
+ .version = MCASP_VERSION_1,
+};
+
+static struct snd_platform_data da830_mcasp_pdata = {
+ .tx_dma_offset = 0x2000,
+ .rx_dma_offset = 0x2000,
+ .asp_chan_q = EVENTQ_0,
+ .version = MCASP_VERSION_2,
+};
+
+static struct snd_platform_data omap2_mcasp_pdata = {
+ .tx_dma_offset = 0,
+ .rx_dma_offset = 0,
+ .asp_chan_q = EVENTQ_0,
+ .version = MCASP_VERSION_3,
+};
+
static const struct of_device_id mcasp_dt_ids[] = {
{
.compatible = "ti,dm646x-mcasp-audio",
- .data = (void *)MCASP_VERSION_1,
+ .data = &dm646x_mcasp_pdata,
},
{
.compatible = "ti,da830-mcasp-audio",
- .data = (void *)MCASP_VERSION_2,
+ .data = &da830_mcasp_pdata,
},
{
- .compatible = "ti,omap2-mcasp-audio",
- .data = (void *)MCASP_VERSION_3,
+ .compatible = "ti,am33xx-mcasp-audio",
+ .data = &omap2_mcasp_pdata,
},
{ /* sentinel */ }
};
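For reference, a sketch of a McASP node matching the "ti,am33xx-mcasp-audio" entry above. Register addresses, DMA specifiers, and serializer directions are placeholders; the property names ("op-mode", "tdm-slots", "serial-dir", "dmas"/"dma-names", "tx-num-evt") and the "mpu"/"dat" resource names are the ones read by the probe code below:

	mcasp1: mcasp@4803c000 {		/* unit address is a placeholder */
		compatible = "ti,am33xx-mcasp-audio";
		reg = <0x4803c000 0x2000>, <0x46400000 0x400000>;
		reg-names = "mpu", "dat";
		op-mode = <0>;			/* example: I2S rather than DIT operation */
		tdm-slots = <2>;
		serial-dir = <0 0 1 2>;		/* per-serializer direction; values are examples */
		dmas = <&edma 10>, <&edma 9>;	/* request numbers are examples */
		dma-names = "tx", "rx";
		tx-num-evt = <1>;
	};
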
struct snd_platform_data *pdata = NULL;
const struct of_device_id *match =
of_match_device(mcasp_dt_ids, &pdev->dev);
+ struct of_phandle_args dma_spec;
const u32 *of_serial_dir32;
- u8 *of_serial_dir;
u32 val;
int i, ret = 0;
pdata = pdev->dev.platform_data;
return pdata;
} else if (match) {
- pdata = devm_kzalloc(&pdev->dev, sizeof(*pdata), GFP_KERNEL);
- if (!pdata) {
- ret = -ENOMEM;
- goto nodata;
- }
+ pdata = (struct snd_platform_data *) match->data;
} else {
/* control shouldn't reach here. something is wrong */
ret = -EINVAL;
goto nodata;
}
- if (match->data)
- pdata->version = (u8)((int)match->data);
-
ret = of_property_read_u32(np, "op-mode", &val);
if (ret >= 0)
pdata->op_mode = val;
pdata->tdm_slots = val;
}
- ret = of_property_read_u32(np, "num-serializer", &val);
- if (ret >= 0)
- pdata->num_serializer = val;
-
of_serial_dir32 = of_get_property(np, "serial-dir", &val);
val /= sizeof(u32);
- if (val != pdata->num_serializer) {
- dev_err(&pdev->dev,
- "num-serializer(%d) != serial-dir size(%d)\n",
- pdata->num_serializer, val);
- ret = -EINVAL;
- goto nodata;
- }
-
if (of_serial_dir32) {
- of_serial_dir = devm_kzalloc(&pdev->dev,
- (sizeof(*of_serial_dir) * val),
- GFP_KERNEL);
+ u8 *of_serial_dir = devm_kzalloc(&pdev->dev,
+ (sizeof(*of_serial_dir) * val),
+ GFP_KERNEL);
if (!of_serial_dir) {
ret = -ENOMEM;
goto nodata;
}
- for (i = 0; i < pdata->num_serializer; i++)
+ for (i = 0; i < val; i++)
of_serial_dir[i] = be32_to_cpup(&of_serial_dir32[i]);
+ pdata->num_serializer = val;
pdata->serial_dir = of_serial_dir;
}
+ ret = of_property_match_string(np, "dma-names", "tx");
+ if (ret < 0)
+ goto nodata;
+
+ ret = of_parse_phandle_with_args(np, "dmas", "#dma-cells", ret,
+ &dma_spec);
+ if (ret < 0)
+ goto nodata;
+
+ pdata->tx_dma_channel = dma_spec.args[0];
+
+ ret = of_property_match_string(np, "dma-names", "rx");
+ if (ret < 0)
+ goto nodata;
+
+ ret = of_parse_phandle_with_args(np, "dmas", "#dma-cells", ret,
+ &dma_spec);
+ if (ret < 0)
+ goto nodata;
+
+ pdata->rx_dma_channel = dma_spec.args[0];
+
ret = of_property_read_u32(np, "tx-num-evt", &val);
if (ret >= 0)
pdata->txnumevt = val;
static int davinci_mcasp_probe(struct platform_device *pdev)
{
struct davinci_pcm_dma_params *dma_data;
- struct resource *mem, *ioarea, *res;
+ struct resource *mem, *ioarea, *res, *dat;
struct snd_platform_data *pdata;
struct davinci_audio_dev *dev;
int ret;
return -EINVAL;
}
- mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ mem = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mpu");
if (!mem) {
- dev_err(&pdev->dev, "no mem resource?\n");
- return -ENODEV;
+ dev_warn(&pdev->dev,
+ "\"mpu\" mem resource not found, using index 0\n");
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!mem) {
+ dev_err(&pdev->dev, "no mem resource?\n");
+ return -ENODEV;
+ }
}
ioarea = devm_request_mem_region(&pdev->dev, mem->start,
dev->rxnumevt = pdata->rxnumevt;
dev->dev = &pdev->dev;
+ dat = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dat");
+ if (!dat)
+ dat = mem;
+
dma_data = &dev->dma_params[SNDRV_PCM_STREAM_PLAYBACK];
dma_data->asp_chan_q = pdata->asp_chan_q;
dma_data->ram_chan_q = pdata->ram_chan_q;
dma_data->sram_pool = pdata->sram_pool;
dma_data->sram_size = pdata->sram_size_playback;
- dma_data->dma_addr = (dma_addr_t) (pdata->tx_dma_offset +
- mem->start);
+ dma_data->dma_addr = dat->start + pdata->tx_dma_offset;
- /* first TX, then RX */
res = platform_get_resource(pdev, IORESOURCE_DMA, 0);
- if (!res) {
- dev_err(&pdev->dev, "no DMA resource\n");
- ret = -ENODEV;
- goto err_release_clk;
- }
-
- dma_data->channel = res->start;
+ if (res)
+ dma_data->channel = res->start;
+ else
+ dma_data->channel = pdata->tx_dma_channel;
dma_data = &dev->dma_params[SNDRV_PCM_STREAM_CAPTURE];
dma_data->asp_chan_q = pdata->asp_chan_q;
dma_data->ram_chan_q = pdata->ram_chan_q;
dma_data->sram_pool = pdata->sram_pool;
dma_data->sram_size = pdata->sram_size_capture;
- dma_data->dma_addr = (dma_addr_t)(pdata->rx_dma_offset +
- mem->start);
+ dma_data->dma_addr = dat->start + pdata->rx_dma_offset;
res = platform_get_resource(pdev, IORESOURCE_DMA, 1);
- if (!res) {
- dev_err(&pdev->dev, "no DMA resource\n");
- ret = -ENODEV;
- goto err_release_clk;
- }
+ if (res)
+ dma_data->channel = res->start;
+ else
+ dma_data->channel = pdata->rx_dma_channel;
- dma_data->channel = res->start;
dev_set_drvdata(&pdev->dev, dev);
ret = snd_soc_register_component(&pdev->dev, &davinci_mcasp_component,
&davinci_mcasp_dai[pdata->op_mode], 1);
return 0;
}
+#ifdef CONFIG_PM_SLEEP
+static int davinci_mcasp_suspend(struct device *dev)
+{
+ struct davinci_audio_dev *a = dev_get_drvdata(dev);
+ void __iomem *base = a->base;
+
+ a->context.txfmtctl = mcasp_get_reg(base + DAVINCI_MCASP_TXFMCTL_REG);
+ a->context.rxfmtctl = mcasp_get_reg(base + DAVINCI_MCASP_RXFMCTL_REG);
+ a->context.txfmt = mcasp_get_reg(base + DAVINCI_MCASP_TXFMT_REG);
+ a->context.rxfmt = mcasp_get_reg(base + DAVINCI_MCASP_RXFMT_REG);
+ a->context.aclkxctl = mcasp_get_reg(base + DAVINCI_MCASP_ACLKXCTL_REG);
+ a->context.aclkrctl = mcasp_get_reg(base + DAVINCI_MCASP_ACLKRCTL_REG);
+ a->context.pdir = mcasp_get_reg(base + DAVINCI_MCASP_PDIR_REG);
+
+ return 0;
+}
+
+static int davinci_mcasp_resume(struct device *dev)
+{
+ struct davinci_audio_dev *a = dev_get_drvdata(dev);
+ void __iomem *base = a->base;
+
+ mcasp_set_reg(base + DAVINCI_MCASP_TXFMCTL_REG, a->context.txfmtctl);
+ mcasp_set_reg(base + DAVINCI_MCASP_RXFMCTL_REG, a->context.rxfmtctl);
+ mcasp_set_reg(base + DAVINCI_MCASP_TXFMT_REG, a->context.txfmt);
+ mcasp_set_reg(base + DAVINCI_MCASP_RXFMT_REG, a->context.rxfmt);
+ mcasp_set_reg(base + DAVINCI_MCASP_ACLKXCTL_REG, a->context.aclkxctl);
+ mcasp_set_reg(base + DAVINCI_MCASP_ACLKRCTL_REG, a->context.aclkrctl);
+ mcasp_set_reg(base + DAVINCI_MCASP_PDIR_REG, a->context.pdir);
+
+ return 0;
+}
+#endif
+
+static SIMPLE_DEV_PM_OPS(davinci_mcasp_pm_ops,
+ davinci_mcasp_suspend,
+ davinci_mcasp_resume);
+
static struct platform_driver davinci_mcasp_driver = {
.probe = davinci_mcasp_probe,
.remove = davinci_mcasp_remove,
.driver = {
.name = "davinci-mcasp",
.owner = THIS_MODULE,
+ .pm = &davinci_mcasp_pm_ops,
.of_match_table = mcasp_dt_ids,
},
};
MODULE_AUTHOR("Steve Chen");
MODULE_DESCRIPTION("TI DAVINCI McASP SoC Interface");
MODULE_LICENSE("GPL");
-
/* McASP FIFO related */
u8 txnumevt;
u8 rxnumevt;
+
+#ifdef CONFIG_PM_SLEEP
+ struct {
+ u32 txfmtctl;
+ u32 rxfmtctl;
+ u32 txfmt;
+ u32 rxfmt;
+ u32 aclkxctl;
+ u32 aclkrctl;
+ u32 pdir;
+ } context;
+#endif
};
#endif /* DAVINCI_MCASP_H */
SND_SOC_DAIFMT_NB_NF |
SND_SOC_DAIFMT_CBM_CFM);
if (ret) {
- pr_err("%s: failed set cpu dai format\n", __func__);
+ dev_err(cpu_dai->dev,
+ "Failed to set the cpu dai format.\n");
return ret;
}
SND_SOC_DAIFMT_NB_NF |
SND_SOC_DAIFMT_CBM_CFM);
if (ret) {
- pr_err("%s: failed set codec dai format\n", __func__);
+ dev_err(cpu_dai->dev,
+ "Failed to set the codec format.\n");
return ret;
}
ret = snd_soc_dai_set_sysclk(codec_dai, 0,
CODEC_CLOCK, SND_SOC_CLOCK_OUT);
if (ret) {
- pr_err("%s: failed setting codec sysclk\n", __func__);
+ dev_err(cpu_dai->dev,
+ "Failed to set the codec sysclk.\n");
return ret;
}
snd_soc_dai_set_tdm_slot(cpu_dai, 0xffffffc, 0xffffffc, 2, 0);
ret = snd_soc_dai_set_sysclk(cpu_dai, IMX_SSP_SYS_CLK, 0,
SND_SOC_CLOCK_IN);
if (ret) {
- pr_err("can't set CPU system clock IMX_SSP_SYS_CLK\n");
+ dev_err(cpu_dai->dev,
+ "Can't set the IMX_SSP_SYS_CLK CPU system clock.\n");
return ret;
}
.owner = THIS_MODULE,
},
.probe = eukrea_tlv320_probe,
- .remove = eukrea_tlv320_remove,};
+ .remove = eukrea_tlv320_remove,
+};
module_platform_driver(eukrea_tlv320_driver);
return true;
default:
return false;
- };
+ }
}
static bool fsl_spdif_writeable_reg(struct device *dev, unsigned int reg)
return true;
default:
return false;
- };
+ }
}
static const struct regmap_config fsl_spdif_regmap_config = {
/* Get the addresses and IRQ */
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (IS_ERR(res)) {
- dev_err(&pdev->dev, "could not determine device resources\n");
- return PTR_ERR(res);
- }
-
regs = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(regs))
return PTR_ERR(regs);
/* Register with ASoC */
dev_set_drvdata(&pdev->dev, spdif_priv);
- ret = snd_soc_register_component(&pdev->dev, &fsl_spdif_component,
- &spdif_priv->cpu_dai_drv, 1);
+ ret = devm_snd_soc_register_component(&pdev->dev, &fsl_spdif_component,
+ &spdif_priv->cpu_dai_drv, 1);
if (ret) {
dev_err(&pdev->dev, "failed to register DAI: %d\n", ret);
return ret;
}
ret = imx_pcm_dma_init(pdev);
- if (ret) {
+ if (ret)
dev_err(&pdev->dev, "imx_pcm_dma_init failed: %d\n", ret);
- goto error_component;
- }
-
- return ret;
-
-error_component:
- snd_soc_unregister_component(&pdev->dev);
return ret;
}
static int fsl_spdif_remove(struct platform_device *pdev)
{
imx_pcm_dma_exit(pdev);
- snd_soc_unregister_component(&pdev->dev);
return 0;
}
* parameters, then the second stream may be
* constrained to the wrong sample rate or size.
*/
- if (!first_runtime->sample_bits) {
- dev_err(substream->pcm->card->dev,
- "set sample size in %s stream first\n",
- substream->stream ==
- SNDRV_PCM_STREAM_PLAYBACK
- ? "capture" : "playback");
- return -EAGAIN;
- }
-
- snd_pcm_hw_constraint_minmax(substream->runtime,
- SNDRV_PCM_HW_PARAM_SAMPLE_BITS,
+ if (first_runtime->sample_bits) {
+ snd_pcm_hw_constraint_minmax(substream->runtime,
+ SNDRV_PCM_HW_PARAM_SAMPLE_BITS,
first_runtime->sample_bits,
first_runtime->sample_bits);
+ }
}
ssi_private->second_stream = substream;
fsl_ssi_setup(fsl_ac97_data);
}
-void fsl_ssi_ac97_write(struct snd_ac97 *ac97, unsigned short reg,
+static void fsl_ssi_ac97_write(struct snd_ac97 *ac97, unsigned short reg,
unsigned short val)
{
struct ccsr_ssi *ssi = fsl_ac97_data->ssi;
udelay(100);
}
-unsigned short fsl_ssi_ac97_read(struct snd_ac97 *ac97,
+static unsigned short fsl_ssi_ac97_read(struct snd_ac97 *ac97,
unsigned short reg)
{
struct ccsr_ssi *ssi = fsl_ac97_data->ssi;
ssi_private->ssi_phys = res.start;
ssi_private->irq = irq_of_parse_and_map(np, 0);
- if (ssi_private->irq == NO_IRQ) {
+ if (!ssi_private->irq) {
dev_err(&pdev->dev, "no irq for node %s\n", np->full_name);
return -ENXIO;
}
if (ssi_private->ssi_on_imx)
imx_pcm_dma_exit(pdev);
snd_soc_unregister_component(&pdev->dev);
- dev_set_drvdata(&pdev->dev, NULL);
device_remove_file(&pdev->dev, &ssi_private->dev_attr);
if (ssi_private->ssi_on_imx)
clk_disable_unprepare(ssi_private->clk);
size_t count, loff_t *ppos)
{
ssize_t ret;
- char *buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ char *buf;
int port = (int)file->private_data;
u32 pdcr, ptcr;
- if (!buf)
- return -ENOMEM;
-
if (audmux_clk) {
ret = clk_prepare_enable(audmux_clk);
if (ret)
if (audmux_clk)
clk_disable_unprepare(audmux_clk);
+ buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
ret = snprintf(buf, PAGE_SIZE, "PDCR: %08x\nPTCR: %08x\n",
pdcr, ptcr);
return ret;
}
- if (machine_is_mx31_3ds()) {
+ if (machine_is_mx31_3ds() || machine_is_mx31moboard()) {
imx_audmux_v2_configure_port(MX31_AUDMUX_PORT4_SSI_PINS_4,
IMX_AUDMUX_V2_PTCR_SYN,
IMX_AUDMUX_V2_PDCR_RXDSEL(MX31_AUDMUX_PORT1_SSI0) |
.driver = {
.name = "imx_mc13783",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = imx_mc13783_probe,
.remove = imx_mc13783_remove
static bool filter(struct dma_chan *chan, void *param)
{
- struct snd_dmaengine_dai_dma_data *dma_data = param;
-
if (!imx_dma_is_general_purpose(chan))
return false;
- chan->private = dma_data->filter_data;
+ chan->private = param;
return true;
}
unsigned int period;
int periods;
unsigned long offset;
- unsigned long last_offset;
- unsigned long size;
struct hrtimer hrt;
int poll_time_ns;
struct snd_pcm_substream *substream;
struct imx_pcm_runtime_data *iprtd =
container_of(hrt, struct imx_pcm_runtime_data, hrt);
struct snd_pcm_substream *substream = iprtd->substream;
- struct snd_pcm_runtime *runtime = substream->runtime;
struct pt_regs regs;
- unsigned long delta;
if (!atomic_read(&iprtd->running))
return HRTIMER_NORESTART;
else
iprtd->offset = regs.ARM_r9 & 0xffff;
- /* How much data have we transferred since the last period report? */
- if (iprtd->offset >= iprtd->last_offset)
- delta = iprtd->offset - iprtd->last_offset;
- else
- delta = runtime->buffer_size + iprtd->offset
- - iprtd->last_offset;
-
- /* If we've transferred at least a period then report it and
- * reset our poll time */
- if (delta >= iprtd->period) {
- snd_pcm_period_elapsed(substream);
- iprtd->last_offset = iprtd->offset;
- }
+ snd_pcm_period_elapsed(substream);
hrtimer_forward_now(hrt, ns_to_ktime(iprtd->poll_time_ns));
struct snd_pcm_runtime *runtime = substream->runtime;
struct imx_pcm_runtime_data *iprtd = runtime->private_data;
- iprtd->size = params_buffer_bytes(params);
iprtd->periods = params_periods(params);
- iprtd->period = params_period_bytes(params) ;
+ iprtd->period = params_period_bytes(params);
iprtd->offset = 0;
- iprtd->last_offset = 0;
iprtd->poll_time_ns = 1000000000 / params_rate(params) *
params_period_size(params);
snd_pcm_set_runtime_buffer(substream, &substream->dma_buffer);
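For reference, poll_time_ns above is simply one period expressed in nanoseconds: at a 48 kHz rate with a 1024-frame period it evaluates to (1000000000 / 48000) * 1024, roughly 21.3 ms between hrtimer callbacks.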
struct device_node *ssi_np, *codec_np;
struct platform_device *ssi_pdev;
struct i2c_client *codec_dev;
- struct imx_sgtl5000_data *data;
+ struct imx_sgtl5000_data *data = NULL;
int int_port, ext_port;
int ret;
goto fail;
}
- data->codec_clk = devm_clk_get(&codec_dev->dev, NULL);
+ data->codec_clk = clk_get(&codec_dev->dev, NULL);
if (IS_ERR(data->codec_clk)) {
ret = PTR_ERR(data->codec_clk);
goto fail;
data->card.dapm_widgets = imx_sgtl5000_dapm_widgets;
data->card.num_dapm_widgets = ARRAY_SIZE(imx_sgtl5000_dapm_widgets);
- ret = snd_soc_register_card(&data->card);
+ ret = devm_snd_soc_register_card(&pdev->dev, &data->card);
if (ret) {
dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", ret);
goto fail;
return 0;
fail:
+ if (data && !IS_ERR(data->codec_clk))
+ clk_put(data->codec_clk);
if (ssi_np)
of_node_put(ssi_np);
if (codec_np)
{
struct imx_sgtl5000_data *data = platform_get_drvdata(pdev);
- snd_soc_unregister_card(&data->card);
+ clk_put(data->codec_clk);
return 0;
}
.driver = {
.name = "imx-sgtl5000",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
.of_match_table = imx_sgtl5000_dt_ids,
},
.probe = imx_sgtl5000_probe,
if (ret)
goto error_dir;
- ret = snd_soc_register_card(&data->card);
+ ret = devm_snd_soc_register_card(&pdev->dev, &data->card);
if (ret) {
dev_err(&pdev->dev, "snd_soc_register_card failed: %d\n", ret);
goto error_dir;
if (data->txdev)
platform_device_unregister(data->txdev);
- snd_soc_unregister_card(&data->card);
-
return 0;
}
ssi->fiq_params.dma_params_rx = &ssi->dma_params_rx;
ssi->fiq_params.dma_params_tx = &ssi->dma_params_tx;
- ret = imx_pcm_fiq_init(pdev, &ssi->fiq_params);
- if (ret)
- goto failed_pcm_fiq;
+ ssi->fiq_init = imx_pcm_fiq_init(pdev, &ssi->fiq_params);
+ ssi->dma_init = imx_pcm_dma_init(pdev);
- ret = imx_pcm_dma_init(pdev);
- if (ret)
- goto failed_pcm_dma;
+ if (ssi->fiq_init && ssi->dma_init) {
+ ret = ssi->fiq_init;
+ goto failed_pcm;
+ }
return 0;
-failed_pcm_dma:
- imx_pcm_fiq_exit(pdev);
-failed_pcm_fiq:
+failed_pcm:
snd_soc_unregister_component(&pdev->dev);
failed_register:
- release_mem_region(res->start, resource_size(res));
clk_disable_unprepare(ssi->clk);
failed_clk:
snd_soc_set_ac97_ops(NULL);
static int imx_ssi_remove(struct platform_device *pdev)
{
- struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
struct imx_ssi *ssi = platform_get_drvdata(pdev);
- imx_pcm_dma_exit(pdev);
- imx_pcm_fiq_exit(pdev);
+ if (!ssi->dma_init)
+ imx_pcm_dma_exit(pdev);
+
+ if (!ssi->fiq_init)
+ imx_pcm_fiq_exit(pdev);
snd_soc_unregister_component(&pdev->dev);
if (ssi->flags & IMX_SSI_USE_AC97)
ac97_ssi = NULL;
- release_mem_region(res->start, resource_size(res));
clk_disable_unprepare(ssi->clk);
snd_soc_set_ac97_ops(NULL);
struct imx_dma_data filter_data_rx;
struct imx_pcm_fiq_params fiq_params;
+ int fiq_init;
+ int dma_init;
int enabled;
};
data->card.late_probe = imx_wm8962_late_probe;
data->card.set_bias_level = imx_wm8962_set_bias_level;
- ret = snd_soc_register_card(&data->card);
+ ret = devm_snd_soc_register_card(&pdev->dev, &data->card);
if (ret) {
dev_err(&pdev->dev, "snd_soc_register_card failed (%d)\n", ret);
goto clk_fail;
return 0;
clk_fail:
- if (!IS_ERR(data->codec_clk))
- clk_disable_unprepare(data->codec_clk);
+ clk_disable_unprepare(data->codec_clk);
fail:
if (ssi_np)
of_node_put(ssi_np);
if (!IS_ERR(data->codec_clk))
clk_disable_unprepare(data->codec_clk);
- snd_soc_unregister_card(&data->card);
return 0;
}
.driver = {
.name = "imx-wm8962",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
.of_match_table = imx_wm8962_dt_ids,
},
.probe = imx_wm8962_probe,
#define KIRKWOOD_FORMATS \
(SNDRV_PCM_FMTBIT_S16_LE | \
SNDRV_PCM_FMTBIT_S24_LE | \
- SNDRV_PCM_FMTBIT_S32_LE | \
- SNDRV_PCM_FMTBIT_IEC958_SUBFRAME_LE | \
- SNDRV_PCM_FMTBIT_IEC958_SUBFRAME_BE)
+ SNDRV_PCM_FMTBIT_S32_LE)
static struct kirkwood_dma_data *kirkwood_priv(struct snd_pcm_substream *subs)
{
* Enable Error interrupts. We're only ack'ing them but
* it's useful for diagnostics
*/
- writel((unsigned long)-1, priv->io + KIRKWOOD_ERR_MASK);
+ writel((unsigned int)-1, priv->io + KIRKWOOD_ERR_MASK);
}
dram = mv_mbus_dram_info();
{
uint32_t clks_ctrl;
- if (rate == 44100 || rate == 48000 || rate == 96000) {
+ if (IS_ERR(priv->extclk)) {
/* use internal dco for the supported rates
* defined in kirkwood_i2s_dai */
dev_dbg(dai->dev, "%s: dco set rate = %lu\n",
case SNDRV_PCM_FORMAT_S16_LE:
i2s_value |= KIRKWOOD_I2S_CTL_SIZE_16;
ctl_play = KIRKWOOD_PLAYCTL_SIZE_16_C |
- KIRKWOOD_PLAYCTL_I2S_EN;
+ KIRKWOOD_PLAYCTL_I2S_EN |
+ KIRKWOOD_PLAYCTL_SPDIF_EN;
ctl_rec = KIRKWOOD_RECCTL_SIZE_16_C |
- KIRKWOOD_RECCTL_I2S_EN;
+ KIRKWOOD_RECCTL_I2S_EN |
+ KIRKWOOD_RECCTL_SPDIF_EN;
break;
/*
* doesn't work... S20_3LE != kirkwood 20bit format ?
case SNDRV_PCM_FORMAT_S24_LE:
i2s_value |= KIRKWOOD_I2S_CTL_SIZE_24;
ctl_play = KIRKWOOD_PLAYCTL_SIZE_24 |
- KIRKWOOD_PLAYCTL_I2S_EN;
+ KIRKWOOD_PLAYCTL_I2S_EN |
+ KIRKWOOD_PLAYCTL_SPDIF_EN;
ctl_rec = KIRKWOOD_RECCTL_SIZE_24 |
- KIRKWOOD_RECCTL_I2S_EN;
+ KIRKWOOD_RECCTL_I2S_EN |
+ KIRKWOOD_RECCTL_SPDIF_EN;
break;
case SNDRV_PCM_FORMAT_S32_LE:
i2s_value |= KIRKWOOD_I2S_CTL_SIZE_32;
ctl);
}
+ if (dai->id == 0)
+ ctl &= ~KIRKWOOD_PLAYCTL_SPDIF_EN; /* i2s */
+ else
+ ctl &= ~KIRKWOOD_PLAYCTL_I2S_EN; /* spdif */
+
switch (cmd) {
case SNDRV_PCM_TRIGGER_START:
/* configure */
case SNDRV_PCM_TRIGGER_STOP:
/* stop audio, disable interrupts */
- ctl |= KIRKWOOD_PLAYCTL_PAUSE | KIRKWOOD_PLAYCTL_I2S_MUTE;
+ ctl |= KIRKWOOD_PLAYCTL_PAUSE | KIRKWOOD_PLAYCTL_I2S_MUTE |
+ KIRKWOOD_PLAYCTL_SPDIF_MUTE;
writel(ctl, priv->io + KIRKWOOD_PLAYCTL);
value = readl(priv->io + KIRKWOOD_INT_MASK);
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
case SNDRV_PCM_TRIGGER_SUSPEND:
- ctl |= KIRKWOOD_PLAYCTL_PAUSE | KIRKWOOD_PLAYCTL_I2S_MUTE;
+ ctl |= KIRKWOOD_PLAYCTL_PAUSE | KIRKWOOD_PLAYCTL_I2S_MUTE |
+ KIRKWOOD_PLAYCTL_SPDIF_MUTE;
writel(ctl, priv->io + KIRKWOOD_PLAYCTL);
break;
case SNDRV_PCM_TRIGGER_RESUME:
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
- ctl &= ~(KIRKWOOD_PLAYCTL_PAUSE | KIRKWOOD_PLAYCTL_I2S_MUTE);
+ ctl &= ~(KIRKWOOD_PLAYCTL_PAUSE | KIRKWOOD_PLAYCTL_I2S_MUTE |
+ KIRKWOOD_PLAYCTL_SPDIF_MUTE);
writel(ctl, priv->io + KIRKWOOD_PLAYCTL);
break;
case SNDRV_PCM_TRIGGER_START:
/* configure */
ctl = priv->ctl_rec;
- value = ctl & ~KIRKWOOD_RECCTL_I2S_EN;
+ if (dai->id == 0)
+ ctl &= ~KIRKWOOD_RECCTL_SPDIF_EN; /* i2s */
+ else
+ ctl &= ~KIRKWOOD_RECCTL_I2S_EN; /* spdif */
+
+ value = ctl & ~(KIRKWOOD_RECCTL_I2S_EN |
+ KIRKWOOD_RECCTL_SPDIF_EN);
writel(value, priv->io + KIRKWOOD_RECCTL);
/* enable interrupts */
return 0;
}
-static int kirkwood_i2s_probe(struct snd_soc_dai *dai)
+static int kirkwood_i2s_init(struct kirkwood_dma_data *priv)
{
- struct kirkwood_dma_data *priv = snd_soc_dai_get_drvdata(dai);
unsigned long value;
unsigned int reg_data;
.set_fmt = kirkwood_i2s_set_fmt,
};
-
-static struct snd_soc_dai_driver kirkwood_i2s_dai = {
- .probe = kirkwood_i2s_probe,
+static struct snd_soc_dai_driver kirkwood_i2s_dai[2] = {
+ {
+ .name = "i2s",
+ .id = 0,
.playback = {
.channels_min = 1,
.channels_max = 2,
.formats = KIRKWOOD_I2S_FORMATS,
},
.ops = &kirkwood_i2s_dai_ops,
+ },
+ {
+ .name = "spdif",
+ .id = 1,
+ .playback = {
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000 |
+ SNDRV_PCM_RATE_96000,
+ .formats = KIRKWOOD_I2S_FORMATS,
+ },
+ .capture = {
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = SNDRV_PCM_RATE_44100 | SNDRV_PCM_RATE_48000 |
+ SNDRV_PCM_RATE_96000,
+ .formats = KIRKWOOD_I2S_FORMATS,
+ },
+ .ops = &kirkwood_i2s_dai_ops,
+ },
};
-static struct snd_soc_dai_driver kirkwood_i2s_dai_extclk = {
- .probe = kirkwood_i2s_probe,
+static struct snd_soc_dai_driver kirkwood_i2s_dai_extclk[2] = {
+ {
+ .name = "i2s",
+ .id = 0,
+ .playback = {
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = SNDRV_PCM_RATE_8000_192000 |
+ SNDRV_PCM_RATE_CONTINUOUS |
+ SNDRV_PCM_RATE_KNOT,
+ .formats = KIRKWOOD_I2S_FORMATS,
+ },
+ .capture = {
+ .channels_min = 1,
+ .channels_max = 2,
+ .rates = SNDRV_PCM_RATE_8000_192000 |
+ SNDRV_PCM_RATE_CONTINUOUS |
+ SNDRV_PCM_RATE_KNOT,
+ .formats = KIRKWOOD_I2S_FORMATS,
+ },
+ .ops = &kirkwood_i2s_dai_ops,
+ },
+ {
+ .name = "spdif",
+ .id = 1,
.playback = {
.channels_min = 1,
.channels_max = 2,
.formats = KIRKWOOD_I2S_FORMATS,
},
.ops = &kirkwood_i2s_dai_ops,
+ },
};
static const struct snd_soc_component_driver kirkwood_i2s_component = {
static int kirkwood_i2s_dev_probe(struct platform_device *pdev)
{
struct kirkwood_asoc_platform_data *data = pdev->dev.platform_data;
- struct snd_soc_dai_driver *soc_dai = &kirkwood_i2s_dai;
+ struct snd_soc_dai_driver *soc_dai = kirkwood_i2s_dai;
struct kirkwood_dma_data *priv;
struct resource *mem;
struct device_node *np = pdev->dev.of_node;
return err;
priv->extclk = devm_clk_get(&pdev->dev, "extclk");
- if (!IS_ERR(priv->extclk)) {
+ if (IS_ERR(priv->extclk)) {
+ if (PTR_ERR(priv->extclk) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+ } else {
if (priv->extclk == priv->clk) {
devm_clk_put(&pdev->dev, priv->extclk);
priv->extclk = ERR_PTR(-EINVAL);
} else {
dev_info(&pdev->dev, "found external clock\n");
clk_prepare_enable(priv->extclk);
- soc_dai = &kirkwood_i2s_dai_extclk;
+ soc_dai = kirkwood_i2s_dai_extclk;
}
}
}
err = snd_soc_register_component(&pdev->dev, &kirkwood_i2s_component,
- soc_dai, 1);
+ soc_dai, 2);
if (err) {
dev_err(&pdev->dev, "snd_soc_register_component failed\n");
goto err_component;
dev_err(&pdev->dev, "snd_soc_register_platform failed\n");
goto err_platform;
}
+
+ kirkwood_i2s_init(priv);
+
return 0;
err_platform:
snd_soc_unregister_component(&pdev->dev);
{
.name = "CS42L51",
.stream_name = "CS42L51 HiFi",
- .cpu_dai_name = "mvebu-audio",
+ .cpu_dai_name = "i2s",
.platform_name = "mvebu-audio",
.codec_dai_name = "cs42l51-hifi",
.codec_name = "cs42l51-codec.0-004a",
{
.name = "ALC5621",
.stream_name = "ALC5621 HiFi",
- .cpu_dai_name = "mvebu-audio",
+ .cpu_dai_name = "i2s",
.platform_name = "mvebu-audio",
.codec_dai_name = "alc5621-hifi",
.codec_name = "alc562x-codec.0-001a",
/* need to find where they come from */
#define KIRKWOOD_SND_MIN_PERIODS 8
#define KIRKWOOD_SND_MAX_PERIODS 16
-#define KIRKWOOD_SND_MIN_PERIOD_BYTES 0x4000
-#define KIRKWOOD_SND_MAX_PERIOD_BYTES 0x4000
+#define KIRKWOOD_SND_MIN_PERIOD_BYTES 0x800
+#define KIRKWOOD_SND_MAX_PERIOD_BYTES 0x8000
#define KIRKWOOD_SND_MAX_BUFFER_BYTES (KIRKWOOD_SND_MAX_PERIOD_BYTES \
* KIRKWOOD_SND_MAX_PERIODS)
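For reference, the new limits allow periods from 0x800 (2 KiB) up to 0x8000 (32 KiB), so KIRKWOOD_SND_MAX_BUFFER_BYTES works out to 0x8000 * 16 = 0x80000 bytes (512 KiB).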
}
/* register the soc card */
snd_soc_card_mfld.dev = &pdev->dev;
- ret_val = snd_soc_register_card(&snd_soc_card_mfld);
+ ret_val = devm_snd_soc_register_card(&pdev->dev, &snd_soc_card_mfld);
if (ret_val) {
pr_debug("snd_soc_register_card failed %d\n", ret_val);
return ret_val;
return 0;
}
-static int snd_mfld_mc_remove(struct platform_device *pdev)
-{
- pr_debug("snd_mfld_mc_remove called\n");
- snd_soc_unregister_card(&snd_soc_card_mfld);
- return 0;
-}
-
static struct platform_driver snd_mfld_mc_driver = {
.driver = {
.owner = THIS_MODULE,
.name = "msic_audio",
},
.probe = snd_mfld_mc_probe,
- .remove = snd_mfld_mc_remove,
};
module_platform_driver(snd_mfld_mc_driver);
struct mxs_saif *saif = snd_soc_dai_get_drvdata(cpu_dai);
struct mxs_saif *master_saif;
u32 delay;
+ int ret;
master_saif = mxs_saif_get_master(saif);
if (!master_saif)
case SNDRV_PCM_TRIGGER_START:
case SNDRV_PCM_TRIGGER_RESUME:
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+ if (saif->state == MXS_SAIF_STATE_RUNNING)
+ return 0;
+
dev_dbg(cpu_dai->dev, "start\n");
- clk_enable(master_saif->clk);
- if (!master_saif->mclk_in_use)
- __raw_writel(BM_SAIF_CTRL_RUN,
- master_saif->base + SAIF_CTRL + MXS_SET_ADDR);
+ ret = clk_enable(master_saif->clk);
+ if (ret) {
+ dev_err(saif->dev, "Failed to enable master clock\n");
+ return ret;
+ }
/*
 * If the saif's master is not itself, we also need to enable
 * its own clk for its internal basic logic to work.
*/
if (saif != master_saif) {
- clk_enable(saif->clk);
+ ret = clk_enable(saif->clk);
+ if (ret) {
+ dev_err(saif->dev, "Failed to enable master clock\n");
+ clk_disable(master_saif->clk);
+ return ret;
+ }
+
__raw_writel(BM_SAIF_CTRL_RUN,
saif->base + SAIF_CTRL + MXS_SET_ADDR);
}
+ if (!master_saif->mclk_in_use)
+ __raw_writel(BM_SAIF_CTRL_RUN,
+ master_saif->base + SAIF_CTRL + MXS_SET_ADDR);
+
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
/*
* write data to saif data register to trigger
}
master_saif->ongoing = 1;
+ saif->state = MXS_SAIF_STATE_RUNNING;
dev_dbg(saif->dev, "CTRL 0x%x STAT 0x%x\n",
__raw_readl(saif->base + SAIF_CTRL),
case SNDRV_PCM_TRIGGER_SUSPEND:
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ if (saif->state == MXS_SAIF_STATE_STOPPED)
+ return 0;
+
dev_dbg(cpu_dai->dev, "stop\n");
/* wait a while for the current sample to complete */
}
master_saif->ongoing = 0;
+ saif->state = MXS_SAIF_STATE_STOPPED;
break;
default:
dev_warn(&pdev->dev, "failed to init clocks\n");
}
- ret = snd_soc_register_component(&pdev->dev, &mxs_saif_component,
- &mxs_saif_dai, 1);
+ ret = devm_snd_soc_register_component(&pdev->dev, &mxs_saif_component,
+ &mxs_saif_dai, 1);
if (ret) {
dev_err(&pdev->dev, "register DAI failed\n");
return ret;
ret = mxs_pcm_platform_register(&pdev->dev);
if (ret) {
dev_err(&pdev->dev, "register PCM failed: %d\n", ret);
- goto failed_pdev_alloc;
+ return ret;
}
return 0;
-
-failed_pdev_alloc:
- snd_soc_unregister_component(&pdev->dev);
-
- return ret;
}
static int mxs_saif_remove(struct platform_device *pdev)
{
mxs_pcm_platform_unregister(&pdev->dev);
- snd_soc_unregister_component(&pdev->dev);
return 0;
}
u32 fifo_underrun;
u32 fifo_overrun;
+
+ enum {
+ MXS_SAIF_STATE_STOPPED,
+ MXS_SAIF_STATE_RUNNING,
+ } state;
};
extern int mxs_saif_put_mclk(unsigned int saif_id);
.num_links = ARRAY_SIZE(mxs_sgtl5000_dai),
};
-static int mxs_sgtl5000_probe_dt(struct platform_device *pdev)
+static int mxs_sgtl5000_probe(struct platform_device *pdev)
{
+ struct snd_soc_card *card = &mxs_sgtl5000;
+ int ret, i;
struct device_node *np = pdev->dev.of_node;
struct device_node *saif_np[2], *codec_np;
- int i;
-
- if (!np)
- return 1; /* no device tree */
saif_np[0] = of_parse_phandle(np, "saif-controllers", 0);
saif_np[1] = of_parse_phandle(np, "saif-controllers", 1);
of_node_put(saif_np[0]);
of_node_put(saif_np[1]);
- return 0;
-}
-
-static int mxs_sgtl5000_probe(struct platform_device *pdev)
-{
- struct snd_soc_card *card = &mxs_sgtl5000;
- int ret;
-
- ret = mxs_sgtl5000_probe_dt(pdev);
- if (ret < 0)
- return ret;
-
/*
* Set an init clock(11.28Mhz) for sgtl5000 initialization(i2c r/w).
* The Sgtl5000 sysclk is derived from saif0 mclk and it's range
config SND_OMAP_SOC
tristate "SoC Audio for the Texas Instruments OMAP chips"
- depends on (ARCH_OMAP && DMA_OMAP) || (ARCH_ARM && COMPILE_TEST)
+ depends on (ARCH_OMAP && DMA_OMAP) || (ARM && COMPILE_TEST)
select SND_DMAENGINE_PCM
config SND_OMAP_SOC_DMIC
config SND_OMAP_SOC_RX51
tristate "SoC Audio support for Nokia RX-51"
- depends on SND_OMAP_SOC && ARCH_ARM && (MACH_NOKIA_RX51 || COMPILE_TEST)
+ depends on SND_OMAP_SOC && ARM && (MACH_NOKIA_RX51 || COMPILE_TEST)
select SND_OMAP_SOC_MCBSP
select SND_SOC_TLV320AIC3X
select SND_SOC_TPA6130A2
mcpdm->dev = &pdev->dev;
- return snd_soc_register_component(&pdev->dev, &omap_mcpdm_component,
- &omap_mcpdm_dai, 1);
-}
-
-static int asoc_mcpdm_remove(struct platform_device *pdev)
-{
- snd_soc_unregister_component(&pdev->dev);
- return 0;
+ return devm_snd_soc_register_component(&pdev->dev,
+ &omap_mcpdm_component,
+ &omap_mcpdm_dai, 1);
}
static const struct of_device_id omap_mcpdm_of_match[] = {
},
.probe = asoc_mcpdm_probe,
- .remove = asoc_mcpdm_remove,
};
module_platform_driver(asoc_mcpdm_driver);
}
snd_soc_card_set_drvdata(card, priv);
- ret = snd_soc_register_card(card);
+ ret = devm_snd_soc_register_card(&pdev->dev, card);
if (ret) {
- dev_err(&pdev->dev, "snd_soc_register_card() failed: %d\n",
+ dev_err(&pdev->dev, "devm_snd_soc_register_card() failed: %d\n",
ret);
return ret;
}
snd_soc_jack_free_gpios(&priv->hs_jack,
ARRAY_SIZE(hs_jack_gpios),
hs_jack_gpios);
- snd_soc_unregister_card(card);
return 0;
}
.driver = {
.name = "brownstone-audio",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = brownstone_probe,
.remove = brownstone_remove,
.driver = {
.name = "corgi-audio",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = corgi_probe,
.remove = corgi_remove,
.driver = {
.name = "e740-audio",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = e740_probe,
.remove = e740_remove,
.driver = {
.name = "e750-audio",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = e750_probe,
.remove = e750_remove,
.driver = {
.name = "e800-audio",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = e800_probe,
.remove = e800_remove,
.driver = {
.name = "imote2-audio",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = imote2_probe,
.remove = imote2_remove,
.driver = {
.name = "mioa701-wm9713",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
};
priv->dai_fmt = (unsigned int) -1;
platform_set_drvdata(pdev, priv);
- return snd_soc_register_component(&pdev->dev, &mmp_sspa_component,
- &mmp_sspa_dai, 1);
+ return devm_snd_soc_register_component(&pdev->dev, &mmp_sspa_component,
+ &mmp_sspa_dai, 1);
}
static int asoc_mmp_sspa_remove(struct platform_device *pdev)
clk_disable(priv->audio_clk);
clk_put(priv->audio_clk);
clk_put(priv->sysclk);
- snd_soc_unregister_component(&pdev->dev);
return 0;
}
.driver = {
.name = "palm27x-asoc",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
};
.driver = {
.name = "poodle-audio",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = poodle_probe,
.remove = poodle_remove,
.filter_data = &pxa2xx_ac97_pcm_aux_mic_mono_req,
};
-#ifdef CONFIG_PM
-static int pxa2xx_ac97_suspend(struct snd_soc_dai *dai)
-{
- return pxa2xx_ac97_hw_suspend();
-}
-
-static int pxa2xx_ac97_resume(struct snd_soc_dai *dai)
-{
- return pxa2xx_ac97_hw_resume();
-}
-
-#else
-#define pxa2xx_ac97_suspend NULL
-#define pxa2xx_ac97_resume NULL
-#endif
-
-static int pxa2xx_ac97_probe(struct snd_soc_dai *dai)
-{
- return pxa2xx_ac97_hw_probe(to_platform_device(dai->dev));
-}
-
-static int pxa2xx_ac97_remove(struct snd_soc_dai *dai)
-{
- pxa2xx_ac97_hw_remove(to_platform_device(dai->dev));
- return 0;
-}
-
static int pxa2xx_ac97_hw_params(struct snd_pcm_substream *substream,
struct snd_pcm_hw_params *params,
struct snd_soc_dai *cpu_dai)
{
.name = "pxa2xx-ac97",
.ac97_control = 1,
- .probe = pxa2xx_ac97_probe,
- .remove = pxa2xx_ac97_remove,
- .suspend = pxa2xx_ac97_suspend,
- .resume = pxa2xx_ac97_resume,
.playback = {
.stream_name = "AC97 Playback",
.channels_min = 2,
return -ENXIO;
}
+ ret = pxa2xx_ac97_hw_probe(pdev);
+ if (ret) {
+ dev_err(&pdev->dev, "PXA2xx AC97 hw probe error (%d)\n", ret);
+ return ret;
+ }
+
ret = snd_soc_set_ac97_ops(&pxa2xx_ac97_ops);
if (ret != 0)
return ret;
{
snd_soc_unregister_component(&pdev->dev);
snd_soc_set_ac97_ops(NULL);
+ pxa2xx_ac97_hw_remove(pdev);
return 0;
}
+#ifdef CONFIG_PM_SLEEP
+static int pxa2xx_ac97_dev_suspend(struct device *dev)
+{
+ return pxa2xx_ac97_hw_suspend();
+}
+
+static int pxa2xx_ac97_dev_resume(struct device *dev)
+{
+ return pxa2xx_ac97_hw_resume();
+}
+
+static SIMPLE_DEV_PM_OPS(pxa2xx_ac97_pm_ops,
+ pxa2xx_ac97_dev_suspend, pxa2xx_ac97_dev_resume);
+#endif
+
static struct platform_driver pxa2xx_ac97_driver = {
.probe = pxa2xx_ac97_dev_probe,
.remove = pxa2xx_ac97_dev_remove,
.driver = {
.name = "pxa2xx-ac97",
.owner = THIS_MODULE,
+#ifdef CONFIG_PM_SLEEP
+ .pm = &pxa2xx_ac97_pm_ops,
+#endif
},
};
.driver = {
.name = "tosa-audio",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = tosa_probe,
.remove = tosa_remove,
.driver = {
.name = "ttc-dkb-audio",
.owner = THIS_MODULE,
+ .pm = &snd_soc_pm_ops,
},
.probe = ttc_dkb_probe,
.remove = ttc_dkb_remove,
}
writel(mod, i2s->addr + I2SMOD);
- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
- snd_soc_dai_set_dma_data(dai, substream,
- (void *)&i2s->dma_playback);
- else
- snd_soc_dai_set_dma_data(dai, substream,
- (void *)&i2s->dma_capture);
-
i2s->frmclk = params_rate(params);
return 0;
}
clk_prepare_enable(i2s->clk);
+ snd_soc_dai_init_dma_data(dai, &i2s->dma_playback, &i2s->dma_capture);
+
if (other) {
other->addr = i2s->addr;
other->clk = i2s->clk;
config SND_SOC_RCAR
tristate "R-Car series SRU/SCU/SSIU/SSI support"
select SND_SIMPLE_CARD
- select RCAR_CLK_ADG
help
This option enables R-Car SUR/SCU/SSIU/SSI sound support
* for more details.
*/
#include <linux/sh_clk.h>
-#include <mach/clock.h>
#include "rsnd.h"
#define CLKA 0
int rate_of_441khz_div_6;
int rate_of_48khz_div_6;
+ u32 ckr;
};
#define for_each_rsnd_clk(pos, adg, i) \
found_clock:
+ /* see rsnd_adg_ssi_clk_init() */
+ rsnd_mod_bset(mod, SSICKR, 0x00FF0000, adg->ckr);
+ rsnd_mod_write(mod, BRRA, 0x00000002); /* 1/6 */
+ rsnd_mod_write(mod, BRRB, 0x00000002); /* 1/6 */
+
/*
* This "mod" = "ssi" here.
* we can get "ssi id" from mod
}
}
- rsnd_priv_bset(priv, SSICKR, 0x00FF0000, ckr);
- rsnd_priv_write(priv, BRRA, 0x00000002); /* 1/6 */
- rsnd_priv_write(priv, BRRB, 0x00000002); /* 1/6 */
+ adg->ckr = ckr;
}
int rsnd_adg_probe(struct platform_device *pdev,
*
*/
#include <linux/pm_runtime.h>
+#include <linux/shdma-base.h>
#include "rsnd.h"
#define RSND_RATES SNDRV_PCM_RATE_8000_96000
* rsnd_platform functions
*/
#define rsnd_platform_call(priv, dai, func, param...) \
- (!(priv->info->func) ? -ENODEV : \
+ (!(priv->info->func) ? 0 : \
priv->info->func(param))
-
-/*
- * basic function
- */
-u32 rsnd_read(struct rsnd_priv *priv,
- struct rsnd_mod *mod, enum rsnd_reg reg)
-{
- void __iomem *base = rsnd_gen_reg_get(priv, mod, reg);
-
- BUG_ON(!base);
-
- return ioread32(base);
-}
-
-void rsnd_write(struct rsnd_priv *priv,
- struct rsnd_mod *mod,
- enum rsnd_reg reg, u32 data)
-{
- void __iomem *base = rsnd_gen_reg_get(priv, mod, reg);
- struct device *dev = rsnd_priv_to_dev(priv);
-
- BUG_ON(!base);
-
- dev_dbg(dev, "w %p : %08x\n", base, data);
-
- iowrite32(data, base);
-}
-
-void rsnd_bset(struct rsnd_priv *priv, struct rsnd_mod *mod,
- enum rsnd_reg reg, u32 mask, u32 data)
-{
- void __iomem *base = rsnd_gen_reg_get(priv, mod, reg);
- struct device *dev = rsnd_priv_to_dev(priv);
- u32 val;
-
- BUG_ON(!base);
-
- val = ioread32(base);
- val &= ~mask;
- val |= data & mask;
- iowrite32(val, base);
-
- dev_dbg(dev, "s %p : %08x\n", base, val);
-}
-
/*
* rsnd_mod functions
*/
return !!dma->chan;
}
-static bool rsnd_dma_filter(struct dma_chan *chan, void *param)
-{
- chan->private = param;
-
- return true;
-}
-
int rsnd_dma_init(struct rsnd_priv *priv, struct rsnd_dma *dma,
int is_play, int id,
int (*inquiry)(struct rsnd_dma *dma,
int (*complete)(struct rsnd_dma *dma))
{
struct device *dev = rsnd_priv_to_dev(priv);
+ struct dma_slave_config cfg;
dma_cap_mask_t mask;
+ int ret;
if (dma->chan) {
dev_err(dev, "it already has dma channel\n");
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
- dma->slave.shdma_slave.slave_id = id;
-
- dma->chan = dma_request_channel(mask, rsnd_dma_filter,
- &dma->slave.shdma_slave);
+ dma->chan = dma_request_slave_channel_compat(mask, shdma_chan_filter,
+ (void *)id, dev,
+ is_play ? "tx" : "rx");
if (!dma->chan) {
dev_err(dev, "can't get dma channel\n");
return -EIO;
}
+ cfg.slave_id = id;
+ cfg.dst_addr = 0; /* use default addr when playback */
+ cfg.src_addr = 0; /* use default addr when capture */
+ cfg.direction = is_play ? DMA_MEM_TO_DEV : DMA_DEV_TO_MEM;
+
+ ret = dmaengine_slave_config(dma->chan, &cfg);
+ if (ret < 0)
+ goto rsnd_dma_init_err;
+
dma->dir = is_play ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
dma->priv = priv;
dma->inquiry = inquiry;
INIT_WORK(&dma->work, rsnd_dma_do_work);
return 0;
+
+rsnd_dma_init_err:
+ rsnd_dma_quit(priv, dma);
+
+ return ret;
}
void rsnd_dma_quit(struct rsnd_priv *priv,
struct rsnd_dai *rsnd_dai_get(struct rsnd_priv *priv, int id)
{
+ if ((id < 0) || (id >= rsnd_dai_nr(priv)))
+ return NULL;
+
return priv->rdai + id;
}
#include "rsnd.h"
struct rsnd_gen_ops {
+ int (*probe)(struct platform_device *pdev,
+ struct rcar_snd_info *info,
+ struct rsnd_priv *priv);
+ void (*remove)(struct platform_device *pdev,
+ struct rsnd_priv *priv);
int (*path_init)(struct rsnd_priv *priv,
struct rsnd_dai *rdai,
struct rsnd_dai_stream *io);
struct rsnd_dai_stream *io);
};
-struct rsnd_gen_reg_map {
- int index; /* -1 : not supported */
- u32 offset_id; /* offset of ssi0, ssi1, ssi2... */
- u32 offset_adr; /* offset of SSICR, SSISR, ... */
-};
-
struct rsnd_gen {
void __iomem *base[RSND_BASE_MAX];
- struct rsnd_gen_reg_map reg_map[RSND_REG_MAX];
struct rsnd_gen_ops *ops;
+
+ struct regmap *regmap;
+ struct regmap_field *regs[RSND_REG_MAX];
};
#define rsnd_priv_to_gen(p) ((struct rsnd_gen *)(p)->gen)
+#define RSND_REG_SET(gen, id, reg_id, offset, _id_offset, _id_size) \
+ [id] = { \
+ .reg = (unsigned int)gen->base[reg_id] + offset, \
+ .lsb = 0, \
+ .msb = 31, \
+ .id_size = _id_size, \
+ .id_offset = _id_offset, \
+ }
+
+/*
+ * basic function
+ */
+static int rsnd_regmap_write32(void *context, const void *_data, size_t count)
+{
+ struct rsnd_priv *priv = context;
+ struct device *dev = rsnd_priv_to_dev(priv);
+ u32 *data = (u32 *)_data;
+ u32 val = data[1];
+ void __iomem *reg = (void *)data[0];
+
+ iowrite32(val, reg);
+
+ dev_dbg(dev, "w %p : %08x\n", reg, val);
+
+ return 0;
+}
+
+static int rsnd_regmap_read32(void *context,
+ const void *_data, size_t reg_size,
+ void *_val, size_t val_size)
+{
+ struct rsnd_priv *priv = context;
+ struct device *dev = rsnd_priv_to_dev(priv);
+ u32 *data = (u32 *)_data;
+ u32 *val = (u32 *)_val;
+ void __iomem *reg = (void *)data[0];
+
+ *val = ioread32(reg);
+
+ dev_dbg(dev, "r %p : %08x\n", reg, *val);
+
+ return 0;
+}
+
+static struct regmap_bus rsnd_regmap_bus = {
+ .write = rsnd_regmap_write32,
+ .read = rsnd_regmap_read32,
+ .reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
+ .val_format_endian_default = REGMAP_ENDIAN_NATIVE,
+};
+
+u32 rsnd_read(struct rsnd_priv *priv,
+ struct rsnd_mod *mod, enum rsnd_reg reg)
+{
+ struct rsnd_gen *gen = rsnd_priv_to_gen(priv);
+ u32 val;
+
+ regmap_fields_read(gen->regs[reg], rsnd_mod_id(mod), &val);
+
+ return val;
+}
+
+void rsnd_write(struct rsnd_priv *priv,
+ struct rsnd_mod *mod,
+ enum rsnd_reg reg, u32 data)
+{
+ struct rsnd_gen *gen = rsnd_priv_to_gen(priv);
+
+ regmap_fields_write(gen->regs[reg], rsnd_mod_id(mod), data);
+}
+
+void rsnd_bset(struct rsnd_priv *priv, struct rsnd_mod *mod,
+ enum rsnd_reg reg, u32 mask, u32 data)
+{
+ struct rsnd_gen *gen = rsnd_priv_to_gen(priv);
+
+ regmap_fields_update_bits(gen->regs[reg], rsnd_mod_id(mod),
+ mask, data);
+}
+
/*
* Gen2
* will be filled in the future
return ret;
}
-static struct rsnd_gen_ops rsnd_gen1_ops = {
- .path_init = rsnd_gen1_path_init,
- .path_exit = rsnd_gen1_path_exit,
-};
+/* single address mapping */
+#define RSND_GEN1_S_REG(gen, reg, id, offset) \
+ RSND_REG_SET(gen, RSND_REG_##id, RSND_GEN1_##reg, offset, 0, 9)
-#define RSND_GEN1_REG_MAP(g, s, i, oi, oa) \
- do { \
- (g)->reg_map[RSND_REG_##i].index = RSND_GEN1_##s; \
- (g)->reg_map[RSND_REG_##i].offset_id = oi; \
- (g)->reg_map[RSND_REG_##i].offset_adr = oa; \
- } while (0)
+/* multi address mapping */
+#define RSND_GEN1_M_REG(gen, reg, id, offset, _id_offset) \
+ RSND_REG_SET(gen, RSND_REG_##id, RSND_GEN1_##reg, offset, _id_offset, 9)
-static void rsnd_gen1_reg_map_init(struct rsnd_gen *gen)
+static int rsnd_gen1_regmap_init(struct rsnd_priv *priv, struct rsnd_gen *gen)
{
- RSND_GEN1_REG_MAP(gen, SRU, SRC_ROUTE_SEL, 0x0, 0x00);
- RSND_GEN1_REG_MAP(gen, SRU, SRC_TMG_SEL0, 0x0, 0x08);
- RSND_GEN1_REG_MAP(gen, SRU, SRC_TMG_SEL1, 0x0, 0x0c);
- RSND_GEN1_REG_MAP(gen, SRU, SRC_TMG_SEL2, 0x0, 0x10);
- RSND_GEN1_REG_MAP(gen, SRU, SRC_CTRL, 0x0, 0xc0);
- RSND_GEN1_REG_MAP(gen, SRU, SSI_MODE0, 0x0, 0xD0);
- RSND_GEN1_REG_MAP(gen, SRU, SSI_MODE1, 0x0, 0xD4);
- RSND_GEN1_REG_MAP(gen, SRU, BUSIF_MODE, 0x4, 0x20);
- RSND_GEN1_REG_MAP(gen, SRU, BUSIF_ADINR, 0x40, 0x214);
-
- RSND_GEN1_REG_MAP(gen, ADG, BRRA, 0x0, 0x00);
- RSND_GEN1_REG_MAP(gen, ADG, BRRB, 0x0, 0x04);
- RSND_GEN1_REG_MAP(gen, ADG, SSICKR, 0x0, 0x08);
- RSND_GEN1_REG_MAP(gen, ADG, AUDIO_CLK_SEL0, 0x0, 0x0c);
- RSND_GEN1_REG_MAP(gen, ADG, AUDIO_CLK_SEL1, 0x0, 0x10);
- RSND_GEN1_REG_MAP(gen, ADG, AUDIO_CLK_SEL3, 0x0, 0x18);
- RSND_GEN1_REG_MAP(gen, ADG, AUDIO_CLK_SEL4, 0x0, 0x1c);
- RSND_GEN1_REG_MAP(gen, ADG, AUDIO_CLK_SEL5, 0x0, 0x20);
-
- RSND_GEN1_REG_MAP(gen, SSI, SSICR, 0x40, 0x00);
- RSND_GEN1_REG_MAP(gen, SSI, SSISR, 0x40, 0x04);
- RSND_GEN1_REG_MAP(gen, SSI, SSITDR, 0x40, 0x08);
- RSND_GEN1_REG_MAP(gen, SSI, SSIRDR, 0x40, 0x0c);
- RSND_GEN1_REG_MAP(gen, SSI, SSIWSR, 0x40, 0x20);
+ int i;
+ struct device *dev = rsnd_priv_to_dev(priv);
+ struct regmap_config regc;
+ struct reg_field regf[RSND_REG_MAX] = {
+ RSND_GEN1_S_REG(gen, SRU, SRC_ROUTE_SEL, 0x00),
+ RSND_GEN1_S_REG(gen, SRU, SRC_TMG_SEL0, 0x08),
+ RSND_GEN1_S_REG(gen, SRU, SRC_TMG_SEL1, 0x0c),
+ RSND_GEN1_S_REG(gen, SRU, SRC_TMG_SEL2, 0x10),
+ RSND_GEN1_S_REG(gen, SRU, SRC_CTRL, 0xc0),
+ RSND_GEN1_S_REG(gen, SRU, SSI_MODE0, 0xD0),
+ RSND_GEN1_S_REG(gen, SRU, SSI_MODE1, 0xD4),
+ RSND_GEN1_M_REG(gen, SRU, BUSIF_MODE, 0x20, 0x4),
+ RSND_GEN1_M_REG(gen, SRU, BUSIF_ADINR, 0x214, 0x40),
+
+ RSND_GEN1_S_REG(gen, ADG, BRRA, 0x00),
+ RSND_GEN1_S_REG(gen, ADG, BRRB, 0x04),
+ RSND_GEN1_S_REG(gen, ADG, SSICKR, 0x08),
+ RSND_GEN1_S_REG(gen, ADG, AUDIO_CLK_SEL0, 0x0c),
+ RSND_GEN1_S_REG(gen, ADG, AUDIO_CLK_SEL1, 0x10),
+ RSND_GEN1_S_REG(gen, ADG, AUDIO_CLK_SEL3, 0x18),
+ RSND_GEN1_S_REG(gen, ADG, AUDIO_CLK_SEL4, 0x1c),
+ RSND_GEN1_S_REG(gen, ADG, AUDIO_CLK_SEL5, 0x20),
+
+ RSND_GEN1_M_REG(gen, SSI, SSICR, 0x00, 0x40),
+ RSND_GEN1_M_REG(gen, SSI, SSISR, 0x04, 0x40),
+ RSND_GEN1_M_REG(gen, SSI, SSITDR, 0x08, 0x40),
+ RSND_GEN1_M_REG(gen, SSI, SSIRDR, 0x0c, 0x40),
+ RSND_GEN1_M_REG(gen, SSI, SSIWSR, 0x20, 0x40),
+ };
+
+ memset(&regc, 0, sizeof(regc));
+ regc.reg_bits = 32;
+ regc.val_bits = 32;
+
+ gen->regmap = devm_regmap_init(dev, &rsnd_regmap_bus, priv, &regc);
+ if (IS_ERR(gen->regmap)) {
+ dev_err(dev, "regmap error %ld\n", PTR_ERR(gen->regmap));
+ return PTR_ERR(gen->regmap);
+ }
+
+ for (i = 0; i < RSND_REG_MAX; i++) {
+ gen->regs[i] = devm_regmap_field_alloc(dev, gen->regmap, regf[i]);
+ if (IS_ERR(gen->regs[i]))
+ return PTR_ERR(gen->regs[i]);
+
+ }
+
+ return 0;
}
static int rsnd_gen1_probe(struct platform_device *pdev,
struct resource *sru_res;
struct resource *adg_res;
struct resource *ssi_res;
+ int ret;
/*
* map address
IS_ERR(gen->base[RSND_GEN1_SSI]))
return -ENODEV;
- gen->ops = &rsnd_gen1_ops;
- rsnd_gen1_reg_map_init(gen);
+ ret = rsnd_gen1_regmap_init(priv, gen);
+ if (ret < 0)
+ return ret;
dev_dbg(dev, "Gen1 device probed\n");
dev_dbg(dev, "SRU : %08x => %p\n", sru_res->start,
{
}
+static struct rsnd_gen_ops rsnd_gen1_ops = {
+ .probe = rsnd_gen1_probe,
+ .remove = rsnd_gen1_remove,
+ .path_init = rsnd_gen1_path_init,
+ .path_exit = rsnd_gen1_path_exit,
+};
+
/*
* Gen
*/
return gen->ops->path_exit(priv, rdai, io);
}
-void __iomem *rsnd_gen_reg_get(struct rsnd_priv *priv,
- struct rsnd_mod *mod,
- enum rsnd_reg reg)
-{
- struct rsnd_gen *gen = rsnd_priv_to_gen(priv);
- struct device *dev = rsnd_priv_to_dev(priv);
- int index;
- u32 offset_id, offset_adr;
-
- if (reg >= RSND_REG_MAX) {
- dev_err(dev, "rsnd_reg reg error\n");
- return NULL;
- }
-
- index = gen->reg_map[reg].index;
- offset_id = gen->reg_map[reg].offset_id;
- offset_adr = gen->reg_map[reg].offset_adr;
-
- if (index < 0) {
- dev_err(dev, "unsupported reg access %d\n", reg);
- return NULL;
- }
-
- if (offset_id && mod)
- offset_id *= rsnd_mod_id(mod);
-
- /*
- * index/offset were set on gen1/gen2
- */
-
- return gen->base[index] + offset_id + offset_adr;
-}
-
int rsnd_gen_probe(struct platform_device *pdev,
struct rcar_snd_info *info,
struct rsnd_priv *priv)
{
struct device *dev = rsnd_priv_to_dev(priv);
struct rsnd_gen *gen;
- int i;
gen = devm_kzalloc(dev, sizeof(*gen), GFP_KERNEL);
if (!gen) {
return -ENOMEM;
}
- priv->gen = gen;
-
- /*
- * see
- * rsnd_reg_get()
- * rsnd_gen_probe()
- */
- for (i = 0; i < RSND_REG_MAX; i++)
- gen->reg_map[i].index = -1;
-
- /*
- * init each module
- */
if (rsnd_is_gen1(priv))
- return rsnd_gen1_probe(pdev, info, priv);
+ gen->ops = &rsnd_gen1_ops;
- dev_err(dev, "unknown generation R-Car sound device\n");
+ if (!gen->ops) {
+ dev_err(dev, "unknown generation R-Car sound device\n");
+ return -ENODEV;
+ }
- return -ENODEV;
+ priv->gen = gen;
+
+ return gen->ops->probe(pdev, info, priv);
}
void rsnd_gen_remove(struct platform_device *pdev,
struct rsnd_priv *priv)
{
- if (rsnd_is_gen1(priv))
- rsnd_gen1_remove(pdev, priv);
+ struct rsnd_gen *gen = rsnd_priv_to_gen(priv);
+
+ gen->ops->remove(pdev, priv);
}
#define rsnd_mod_bset(m, r, s, d) \
rsnd_bset(rsnd_mod_to_priv(m), m, RSND_REG_##r, s, d)
-#define rsnd_priv_read(p, r) rsnd_read(p, NULL, RSND_REG_##r)
-#define rsnd_priv_write(p, r, d) rsnd_write(p, NULL, RSND_REG_##r, d)
-#define rsnd_priv_bset(p, r, s, d) rsnd_bset(p, NULL, RSND_REG_##r, s, d)
-
u32 rsnd_read(struct rsnd_priv *priv, struct rsnd_mod *mod, enum rsnd_reg reg);
void rsnd_write(struct rsnd_priv *priv, struct rsnd_mod *mod,
enum rsnd_reg reg, u32 data);
void __iomem *rsnd_gen_reg_get(struct rsnd_priv *priv,
struct rsnd_mod *mod,
enum rsnd_reg reg);
-#define rsnd_is_gen1(s) ((s)->info->flags & RSND_GEN1)
-#define rsnd_is_gen2(s) ((s)->info->flags & RSND_GEN2)
+#define rsnd_is_gen1(s) (((s)->info->flags & RSND_GEN_MASK) == RSND_GEN1)
+#define rsnd_is_gen2(s) (((s)->info->flags & RSND_GEN_MASK) == RSND_GEN2)
/*
* R-Car ADG
void rsnd_scu_remove(struct platform_device *pdev,
struct rsnd_priv *priv);
struct rsnd_mod *rsnd_scu_mod_get(struct rsnd_priv *priv, int id);
+bool rsnd_scu_hpbif_is_enable(struct rsnd_mod *mod);
#define rsnd_scu_nr(priv) ((priv)->scu_nr)
/*
return 0;
}
+bool rsnd_scu_hpbif_is_enable(struct rsnd_mod *mod)
+{
+ struct rsnd_scu *scu = rsnd_mod_to_scu(mod);
+ u32 flags = rsnd_scu_mode_flags(scu);
+
+ return !!(flags & RSND_SCU_USE_HPBIF);
+}
+
static int rsnd_scu_start(struct rsnd_mod *mod,
struct rsnd_dai *rdai,
struct rsnd_dai_stream *io)
{
struct rsnd_priv *priv = rsnd_mod_to_priv(mod);
- struct rsnd_scu *scu = rsnd_mod_to_scu(mod);
struct device *dev = rsnd_priv_to_dev(priv);
- u32 flags = rsnd_scu_mode_flags(scu);
int ret;
/*
* SCU will be used if it has RSND_SCU_USE_HPBIF flags
*/
- if (!(flags & RSND_SCU_USE_HPBIF)) {
+ if (!rsnd_scu_hpbif_is_enable(mod)) {
		/* it uses PIO transfer */
dev_dbg(dev, "%s%d is not used\n",
rsnd_mod_name(mod), rsnd_mod_id(mod));
#define rsnd_ssi_to_ssiu(ssi)\
(((struct rsnd_ssiu *)((ssi) - rsnd_mod_id(&(ssi)->mod))) - 1)
-static void rsnd_ssi_mode_init(struct rsnd_priv *priv,
- struct rsnd_ssiu *ssiu)
+static void rsnd_ssi_mode_set(struct rsnd_priv *priv,
+ struct rsnd_dai *rdai,
+ struct rsnd_ssi *ssi)
{
struct device *dev = rsnd_priv_to_dev(priv);
- struct rsnd_ssi *ssi;
+ struct rsnd_mod *scu;
+ struct rsnd_ssiu *ssiu = rsnd_ssi_to_ssiu(ssi);
+ int id = rsnd_mod_id(&ssi->mod);
u32 flags;
u32 val;
- int i;
+
+ scu = rsnd_scu_mod_get(priv, rsnd_mod_id(&ssi->mod));
/*
* SSI_MODE0
*/
- ssiu->ssi_mode0 = 0;
- for_each_rsnd_ssi(ssi, priv, i) {
- flags = rsnd_ssi_mode_flags(ssi);
-
- /* see also BUSIF_MODE */
- if (!(flags & RSND_SSI_DEPENDENT)) {
- ssiu->ssi_mode0 |= (1 << i);
- dev_dbg(dev, "SSI%d uses INDEPENDENT mode\n", i);
- } else {
- dev_dbg(dev, "SSI%d uses DEPENDENT mode\n", i);
- }
+
+ /* see also BUSIF_MODE */
+ if (rsnd_scu_hpbif_is_enable(scu)) {
+ ssiu->ssi_mode0 &= ~(1 << id);
+ dev_dbg(dev, "SSI%d uses DEPENDENT mode\n", id);
+ } else {
+ ssiu->ssi_mode0 |= (1 << id);
+ dev_dbg(dev, "SSI%d uses INDEPENDENT mode\n", id);
}
/*
#define ssi_parent_set(p, sync, adg, ext) \
do { \
ssi->parent = ssiu->ssi + p; \
- if (flags & RSND_SSI_CLK_FROM_ADG) \
+ if (rsnd_rdai_is_clk_master(rdai)) \
val = adg; \
else \
val = ext; \
val |= sync; \
} while (0)
- ssiu->ssi_mode1 = 0;
- for_each_rsnd_ssi(ssi, priv, i) {
- flags = rsnd_ssi_mode_flags(ssi);
-
- if (!(flags & RSND_SSI_CLK_PIN_SHARE))
- continue;
+ flags = rsnd_ssi_mode_flags(ssi);
+ if (flags & RSND_SSI_CLK_PIN_SHARE) {
val = 0;
- switch (i) {
+ switch (id) {
case 1:
ssi_parent_set(0, (1 << 4), (0x2 << 0), (0x1 << 0));
break;
ssiu->ssi_mode1 |= val;
}
-}
-
-static void rsnd_ssi_mode_set(struct rsnd_ssi *ssi)
-{
- struct rsnd_ssiu *ssiu = rsnd_ssi_to_ssiu(ssi);
rsnd_mod_write(&ssi->mod, SSI_MODE0, ssiu->ssi_mode0);
rsnd_mod_write(&ssi->mod, SSI_MODE1, ssiu->ssi_mode1);
ssi->cr_own = cr;
ssi->err = -1; /* ignore 1st error */
- rsnd_ssi_mode_set(ssi);
+ rsnd_ssi_mode_set(priv, rdai, ssi);
dev_dbg(dev, "%s.%d init\n", rsnd_mod_name(mod), rsnd_mod_id(mod));
rsnd_mod_init(priv, &ssi->mod, ops, i);
}
- rsnd_ssi_mode_init(priv, ssiu);
-
dev_dbg(dev, "ssi probed\n");
return 0;
* option) any later version.
*/
-#include <linux/i2c.h>
-#include <linux/spi/spi.h>
#include <sound/soc.h>
-#include <linux/bitmap.h>
-#include <linux/rbtree.h>
#include <linux/export.h>
+#include <linux/slab.h>
#include <trace/events/asoc.h>
return -1;
}
-static int snd_soc_flat_cache_sync(struct snd_soc_codec *codec)
+int snd_soc_cache_init(struct snd_soc_codec *codec)
{
- int i;
- int ret;
- const struct snd_soc_codec_driver *codec_drv;
- unsigned int val;
+ const struct snd_soc_codec_driver *codec_drv = codec->driver;
+ size_t reg_size;
- codec_drv = codec->driver;
- for (i = 0; i < codec_drv->reg_cache_size; ++i) {
- ret = snd_soc_cache_read(codec, i, &val);
- if (ret)
- return ret;
- if (codec->reg_def_copy)
- if (snd_soc_get_cache_val(codec->reg_def_copy,
- i, codec_drv->reg_word_size) == val)
- continue;
+ reg_size = codec_drv->reg_cache_size * codec_drv->reg_word_size;
- WARN_ON(!snd_soc_codec_writable_register(codec, i));
-
- ret = snd_soc_write(codec, i, val);
- if (ret)
- return ret;
- dev_dbg(codec->dev, "ASoC: Synced register %#x, value = %#x\n",
- i, val);
- }
- return 0;
-}
-
-static int snd_soc_flat_cache_write(struct snd_soc_codec *codec,
- unsigned int reg, unsigned int value)
-{
- snd_soc_set_cache_val(codec->reg_cache, reg, value,
- codec->driver->reg_word_size);
- return 0;
-}
-
-static int snd_soc_flat_cache_read(struct snd_soc_codec *codec,
- unsigned int reg, unsigned int *value)
-{
- *value = snd_soc_get_cache_val(codec->reg_cache, reg,
- codec->driver->reg_word_size);
- return 0;
-}
+ mutex_init(&codec->cache_rw_mutex);
-static int snd_soc_flat_cache_exit(struct snd_soc_codec *codec)
-{
- if (!codec->reg_cache)
- return 0;
- kfree(codec->reg_cache);
- codec->reg_cache = NULL;
- return 0;
-}
+ dev_dbg(codec->dev, "ASoC: Initializing cache for %s codec\n",
+ codec->name);
-static int snd_soc_flat_cache_init(struct snd_soc_codec *codec)
-{
- if (codec->reg_def_copy)
- codec->reg_cache = kmemdup(codec->reg_def_copy,
- codec->reg_size, GFP_KERNEL);
+ if (codec_drv->reg_cache_default)
+ codec->reg_cache = kmemdup(codec_drv->reg_cache_default,
+ reg_size, GFP_KERNEL);
else
- codec->reg_cache = kzalloc(codec->reg_size, GFP_KERNEL);
+ codec->reg_cache = kzalloc(reg_size, GFP_KERNEL);
if (!codec->reg_cache)
return -ENOMEM;
return 0;
}
-/* an array of all supported compression types */
-static const struct snd_soc_cache_ops cache_types[] = {
- /* Flat *must* be the first entry for fallback */
- {
- .id = SND_SOC_FLAT_COMPRESSION,
- .name = "flat",
- .init = snd_soc_flat_cache_init,
- .exit = snd_soc_flat_cache_exit,
- .read = snd_soc_flat_cache_read,
- .write = snd_soc_flat_cache_write,
- .sync = snd_soc_flat_cache_sync
- },
-};
-
-int snd_soc_cache_init(struct snd_soc_codec *codec)
-{
- int i;
-
- for (i = 0; i < ARRAY_SIZE(cache_types); ++i)
- if (cache_types[i].id == codec->compress_type)
- break;
-
- /* Fall back to flat compression */
- if (i == ARRAY_SIZE(cache_types)) {
- dev_warn(codec->dev, "ASoC: Could not match compress type: %d\n",
- codec->compress_type);
- i = 0;
- }
-
- mutex_init(&codec->cache_rw_mutex);
- codec->cache_ops = &cache_types[i];
-
- if (codec->cache_ops->init) {
- if (codec->cache_ops->name)
- dev_dbg(codec->dev, "ASoC: Initializing %s cache for %s codec\n",
- codec->cache_ops->name, codec->name);
- return codec->cache_ops->init(codec);
- }
- return -ENOSYS;
-}
-
/*
* NOTE: keep in mind that this function might be called
* multiple times.
*/
int snd_soc_cache_exit(struct snd_soc_codec *codec)
{
- if (codec->cache_ops && codec->cache_ops->exit) {
- if (codec->cache_ops->name)
- dev_dbg(codec->dev, "ASoC: Destroying %s cache for %s codec\n",
- codec->cache_ops->name, codec->name);
- return codec->cache_ops->exit(codec);
- }
- return -ENOSYS;
+ dev_dbg(codec->dev, "ASoC: Destroying cache for %s codec\n",
+ codec->name);
+ if (!codec->reg_cache)
+ return 0;
+ kfree(codec->reg_cache);
+ codec->reg_cache = NULL;
+ return 0;
}
/**
int snd_soc_cache_read(struct snd_soc_codec *codec,
unsigned int reg, unsigned int *value)
{
- int ret;
+ if (!value)
+ return -EINVAL;
mutex_lock(&codec->cache_rw_mutex);
-
- if (value && codec->cache_ops && codec->cache_ops->read) {
- ret = codec->cache_ops->read(codec, reg, value);
- mutex_unlock(&codec->cache_rw_mutex);
- return ret;
- }
-
+ *value = snd_soc_get_cache_val(codec->reg_cache, reg,
+ codec->driver->reg_word_size);
mutex_unlock(&codec->cache_rw_mutex);
- return -ENOSYS;
+
+ return 0;
}
EXPORT_SYMBOL_GPL(snd_soc_cache_read);
int snd_soc_cache_write(struct snd_soc_codec *codec,
unsigned int reg, unsigned int value)
{
+ mutex_lock(&codec->cache_rw_mutex);
+ snd_soc_set_cache_val(codec->reg_cache, reg, value,
+ codec->driver->reg_word_size);
+ mutex_unlock(&codec->cache_rw_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(snd_soc_cache_write);
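/*
 * Usage sketch (illustrative only, not part of this change): with the flat
 * cache the two exported helpers behave as a mutex-protected array get/set,
 * so a driver-side read-modify-write of a cached register looks like this.
 * "example_cache_toggle" and the bit being flipped are hypothetical.
 */
static int example_cache_toggle(struct snd_soc_codec *codec, unsigned int reg)
{
	unsigned int val;
	int ret;

	ret = snd_soc_cache_read(codec, reg, &val);
	if (ret)
		return ret;

	return snd_soc_cache_write(codec, reg, val ^ 0x1);
}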
+
+static int snd_soc_flat_cache_sync(struct snd_soc_codec *codec)
+{
+ int i;
int ret;
+ const struct snd_soc_codec_driver *codec_drv;
+ unsigned int val;
- mutex_lock(&codec->cache_rw_mutex);
+ codec_drv = codec->driver;
+ for (i = 0; i < codec_drv->reg_cache_size; ++i) {
+ ret = snd_soc_cache_read(codec, i, &val);
+ if (ret)
+ return ret;
+ if (codec_drv->reg_cache_default)
+ if (snd_soc_get_cache_val(codec_drv->reg_cache_default,
+ i, codec_drv->reg_word_size) == val)
+ continue;
- if (codec->cache_ops && codec->cache_ops->write) {
- ret = codec->cache_ops->write(codec, reg, value);
- mutex_unlock(&codec->cache_rw_mutex);
- return ret;
- }
+ WARN_ON(!snd_soc_codec_writable_register(codec, i));
- mutex_unlock(&codec->cache_rw_mutex);
- return -ENOSYS;
+ ret = snd_soc_write(codec, i, val);
+ if (ret)
+ return ret;
+ dev_dbg(codec->dev, "ASoC: Synced register %#x, value = %#x\n",
+ i, val);
+ }
+ return 0;
}
-EXPORT_SYMBOL_GPL(snd_soc_cache_write);
/**
* snd_soc_cache_sync: Sync the register cache with the hardware.
*/
int snd_soc_cache_sync(struct snd_soc_codec *codec)
{
+ const char *name = "flat";
int ret;
- const char *name;
- if (!codec->cache_sync) {
+ if (!codec->cache_sync)
return 0;
- }
-
- if (!codec->cache_ops || !codec->cache_ops->sync)
- return -ENOSYS;
- if (codec->cache_ops->name)
- name = codec->cache_ops->name;
- else
- name = "unknown";
-
- if (codec->cache_ops->name)
- dev_dbg(codec->dev, "ASoC: Syncing %s cache for %s codec\n",
- codec->cache_ops->name, codec->name);
+ dev_dbg(codec->dev, "ASoC: Syncing cache for %s codec\n",
+ codec->name);
trace_snd_soc_cache_sync(codec, name, "start");
- ret = codec->cache_ops->sync(codec);
+ ret = snd_soc_flat_cache_sync(codec);
if (!ret)
codec->cache_sync = 0;
trace_snd_soc_cache_sync(codec, name, "end");
return ret;
}
EXPORT_SYMBOL_GPL(snd_soc_cache_sync);
-
-static int snd_soc_get_reg_access_index(struct snd_soc_codec *codec,
- unsigned int reg)
-{
- const struct snd_soc_codec_driver *codec_drv;
- unsigned int min, max, index;
-
- codec_drv = codec->driver;
- min = 0;
- max = codec_drv->reg_access_size - 1;
- do {
- index = (min + max) / 2;
- if (codec_drv->reg_access_default[index].reg == reg)
- return index;
- if (codec_drv->reg_access_default[index].reg < reg)
- min = index + 1;
- else
- max = index;
- } while (min <= max);
- return -1;
-}
-
-int snd_soc_default_volatile_register(struct snd_soc_codec *codec,
- unsigned int reg)
-{
- int index;
-
- if (reg >= codec->driver->reg_cache_size)
- return 1;
- index = snd_soc_get_reg_access_index(codec, reg);
- if (index < 0)
- return 0;
- return codec->driver->reg_access_default[index].vol;
-}
-EXPORT_SYMBOL_GPL(snd_soc_default_volatile_register);
-
-int snd_soc_default_readable_register(struct snd_soc_codec *codec,
- unsigned int reg)
-{
- int index;
-
- if (reg >= codec->driver->reg_cache_size)
- return 1;
- index = snd_soc_get_reg_access_index(codec, reg);
- if (index < 0)
- return 0;
- return codec->driver->reg_access_default[index].read;
-}
-EXPORT_SYMBOL_GPL(snd_soc_default_readable_register);
-
-int snd_soc_default_writable_register(struct snd_soc_codec *codec,
- unsigned int reg)
-{
- int index;
-
- if (reg >= codec->driver->reg_cache_size)
- return 1;
- index = snd_soc_get_reg_access_index(codec, reg);
- if (index < 0)
- return 0;
- return codec->driver->reg_access_default[index].write;
-}
-EXPORT_SYMBOL_GPL(snd_soc_default_writable_register);
codec->cache_sync = 1;
if (codec->using_regmap)
regcache_mark_dirty(codec->control_data);
+ /* deactivate pins to sleep state */
+ pinctrl_pm_select_sleep_state(codec->dev);
break;
default:
dev_dbg(codec->dev,
if (cpu_dai->driver->suspend && cpu_dai->driver->ac97_control)
cpu_dai->driver->suspend(cpu_dai);
+
+ /* deactivate pins to sleep state */
+ pinctrl_pm_select_sleep_state(cpu_dai->dev);
}
if (card->suspend_post)
if (list_empty(&card->codec_dev_list))
return 0;
+ /* activate pins from sleep state */
+ for (i = 0; i < card->num_rtd; i++) {
+ struct snd_soc_dai *cpu_dai = card->rtd[i].cpu_dai;
+ struct snd_soc_dai *codec_dai = card->rtd[i].codec_dai;
+ if (cpu_dai->active)
+ pinctrl_pm_select_default_state(cpu_dai->dev);
+ if (codec_dai->active)
+ pinctrl_pm_select_default_state(codec_dai->dev);
+ }
+
/* AC97 devices might have other drivers hanging off them so
* need to resume immediately. Other drivers don't have that
* problem and may take a substantial amount of time to resume
return -ENODEV;
list_add(&cpu_dai->dapm.list, &card->dapm_list);
- snd_soc_dapm_new_dai_widgets(&cpu_dai->dapm, cpu_dai);
}
if (cpu_dai->driver->probe) {
soc_remove_codec(codec);
}
-static int snd_soc_init_codec_cache(struct snd_soc_codec *codec,
- enum snd_soc_compress_type compress_type)
+static int snd_soc_init_codec_cache(struct snd_soc_codec *codec)
{
int ret;
if (codec->cache_init)
return 0;
- /* override the compress_type if necessary */
- if (compress_type && codec->compress_type != compress_type)
- codec->compress_type = compress_type;
ret = snd_soc_cache_init(codec);
if (ret < 0) {
dev_err(codec->dev,
static int snd_soc_instantiate_card(struct snd_soc_card *card)
{
struct snd_soc_codec *codec;
- struct snd_soc_codec_conf *codec_conf;
- enum snd_soc_compress_type compress_type;
struct snd_soc_dai_link *dai_link;
int ret, i, order, dai_fmt;
list_for_each_entry(codec, &codec_list, list) {
if (codec->cache_init)
continue;
- /* by default we don't override the compress_type */
- compress_type = 0;
- /* check to see if we need to override the compress_type */
- for (i = 0; i < card->num_configs; ++i) {
- codec_conf = &card->codec_conf[i];
- if (!strcmp(codec->name, codec_conf->dev_name)) {
- compress_type = codec_conf->compress_type;
- if (compress_type && compress_type
- != codec->compress_type)
- break;
- }
- }
- ret = snd_soc_init_codec_cache(codec, compress_type);
+ ret = snd_soc_init_codec_cache(codec);
if (ret < 0)
goto base_error;
}
snd_soc_dapm_shutdown(card);
+ /* deactivate pins to sleep state */
+ for (i = 0; i < card->num_rtd; i++) {
+ struct snd_soc_dai *cpu_dai = card->rtd[i].cpu_dai;
+ struct snd_soc_dai *codec_dai = card->rtd[i].codec_dai;
+ pinctrl_pm_select_sleep_state(codec_dai->dev);
+ pinctrl_pm_select_sleep_state(cpu_dai->dev);
+ }
+
return 0;
}
EXPORT_SYMBOL_GPL(snd_soc_poweroff);
}
EXPORT_SYMBOL_GPL(snd_soc_write);
-unsigned int snd_soc_bulk_write_raw(struct snd_soc_codec *codec,
- unsigned int reg, const void *data, size_t len)
-{
- return codec->bulk_write_raw(codec, reg, data, len);
-}
-EXPORT_SYMBOL_GPL(snd_soc_bulk_write_raw);
-
/**
* snd_soc_update_bits - update codec register bits
* @codec: audio codec
if (uinfo->value.enumerated.item > e->max - 1)
uinfo->value.enumerated.item = e->max - 1;
- strcpy(uinfo->value.enumerated.name,
- e->texts[uinfo->value.enumerated.item]);
+ strlcpy(uinfo->value.enumerated.name,
+ e->texts[uinfo->value.enumerated.item],
+ sizeof(uinfo->value.enumerated.name));
return 0;
}
EXPORT_SYMBOL_GPL(snd_soc_info_enum_double);
EXPORT_SYMBOL_GPL(snd_soc_codec_set_pll);
/**
+ * snd_soc_dai_set_bclk_ratio - configure BCLK to sample rate ratio.
+ * @dai: DAI
+ * @ratio: Ratio of BCLK to sample rate.
+ *
+ * Configures the DAI for a preset BCLK to sample rate ratio.
+ */
+int snd_soc_dai_set_bclk_ratio(struct snd_soc_dai *dai, unsigned int ratio)
+{
+ if (dai->driver && dai->driver->ops->set_bclk_ratio)
+ return dai->driver->ops->set_bclk_ratio(dai, ratio);
+ else
+ return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(snd_soc_dai_set_bclk_ratio);
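/*
 * Usage sketch (hypothetical machine driver, not part of this change): a
 * hw_params callback could pick a fixed BCLK/LRCLK ratio for a codec that
 * needs, say, 64 bit clocks per frame. The function name and the ratio are
 * illustrative assumptions only.
 */
static int example_hw_params(struct snd_pcm_substream *substream,
			     struct snd_pcm_hw_params *params)
{
	struct snd_soc_pcm_runtime *rtd = substream->private_data;

	/* 64 is only an example; the required ratio is codec specific */
	return snd_soc_dai_set_bclk_ratio(rtd->codec_dai, 64);
}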
+
+/**
* snd_soc_dai_set_fmt - configure DAI hardware audio format.
* @dai: DAI
* @fmt: SND_SOC_DAIFMT_ format value.
if (ret != 0)
soc_cleanup_card_debugfs(card);
+ /* deactivate pins to sleep state */
+ for (i = 0; i < card->num_rtd; i++) {
+ struct snd_soc_dai *cpu_dai = card->rtd[i].cpu_dai;
+ struct snd_soc_dai *codec_dai = card->rtd[i].codec_dai;
+ if (!codec_dai->active)
+ pinctrl_pm_select_sleep_state(codec_dai->dev);
+ if (!cpu_dai->active)
+ pinctrl_pm_select_sleep_state(cpu_dai->dev);
+ }
+
return ret;
}
EXPORT_SYMBOL_GPL(snd_soc_register_card);
}
/**
+ * snd_soc_register_component - Register a component with the ASoC core
+ *
+ */
+static int
+__snd_soc_register_component(struct device *dev,
+ struct snd_soc_component *cmpnt,
+ const struct snd_soc_component_driver *cmpnt_drv,
+ struct snd_soc_dai_driver *dai_drv,
+ int num_dai, bool allow_single_dai)
+{
+ int ret;
+
+ dev_dbg(dev, "component register %s\n", dev_name(dev));
+
+ if (!cmpnt) {
+		dev_err(dev, "ASoC: Failed to connect component\n");
+ return -ENOMEM;
+ }
+
+ cmpnt->name = fmt_single_name(dev, &cmpnt->id);
+ if (!cmpnt->name) {
+		dev_err(dev, "ASoC: Failed to simplify name\n");
+ return -ENOMEM;
+ }
+
+ cmpnt->dev = dev;
+ cmpnt->driver = cmpnt_drv;
+ cmpnt->dai_drv = dai_drv;
+ cmpnt->num_dai = num_dai;
+
+ /*
+ * snd_soc_register_dai() uses fmt_single_name(), and
+ * snd_soc_register_dais() uses fmt_multiple_name()
+	 * for dai->name, which is used for name-based matching.
+	 *
+	 * This function is called from both CPU and CODEC drivers.
+	 * CODEC registration passes allow_single_dai = false so that
+	 * existing CODEC drivers, which have always gone through
+	 * snd_soc_register_dais(), do not need to be reworked yet.
+ */
+ if ((1 == num_dai) && allow_single_dai)
+ ret = snd_soc_register_dai(dev, dai_drv);
+ else
+ ret = snd_soc_register_dais(dev, dai_drv, num_dai);
+ if (ret < 0) {
+		dev_err(dev, "ASoC: Failed to register DAIs: %d\n", ret);
+ goto error_component_name;
+ }
+
+ mutex_lock(&client_mutex);
+ list_add(&cmpnt->list, &component_list);
+ mutex_unlock(&client_mutex);
+
+ dev_dbg(cmpnt->dev, "ASoC: Registered component '%s'\n", cmpnt->name);
+
+ return ret;
+
+error_component_name:
+ kfree(cmpnt->name);
+
+ return ret;
+}
+
+int snd_soc_register_component(struct device *dev,
+ const struct snd_soc_component_driver *cmpnt_drv,
+ struct snd_soc_dai_driver *dai_drv,
+ int num_dai)
+{
+ struct snd_soc_component *cmpnt;
+
+ cmpnt = devm_kzalloc(dev, sizeof(*cmpnt), GFP_KERNEL);
+ if (!cmpnt) {
+ dev_err(dev, "ASoC: Failed to allocate memory\n");
+ return -ENOMEM;
+ }
+
+ return __snd_soc_register_component(dev, cmpnt, cmpnt_drv,
+ dai_drv, num_dai, true);
+}
+EXPORT_SYMBOL_GPL(snd_soc_register_component);
+
+/**
+ * snd_soc_unregister_component - Unregister a component from the ASoC core
+ *
+ */
+void snd_soc_unregister_component(struct device *dev)
+{
+ struct snd_soc_component *cmpnt;
+
+ list_for_each_entry(cmpnt, &component_list, list) {
+ if (dev == cmpnt->dev)
+ goto found;
+ }
+ return;
+
+found:
+ snd_soc_unregister_dais(dev, cmpnt->num_dai);
+
+ mutex_lock(&client_mutex);
+ list_del(&cmpnt->list);
+ mutex_unlock(&client_mutex);
+
+ dev_dbg(dev, "ASoC: Unregistered component '%s'\n", cmpnt->name);
+ kfree(cmpnt->name);
+}
+EXPORT_SYMBOL_GPL(snd_soc_unregister_component);
+
+/**
* snd_soc_add_platform - Add a platform to the ASoC core
* @dev: The parent device for the platform
* @platform: The platform to add
struct snd_soc_dai_driver *dai_drv,
int num_dai)
{
- size_t reg_size;
struct snd_soc_codec *codec;
int ret, i;
goto fail_codec;
}
- if (codec_drv->compress_type)
- codec->compress_type = codec_drv->compress_type;
- else
- codec->compress_type = SND_SOC_FLAT_COMPRESSION;
-
codec->write = codec_drv->write;
codec->read = codec_drv->read;
codec->volatile_register = codec_drv->volatile_register;
codec->num_dai = num_dai;
mutex_init(&codec->mutex);
- /* allocate CODEC register cache */
- if (codec_drv->reg_cache_size && codec_drv->reg_word_size) {
- reg_size = codec_drv->reg_cache_size * codec_drv->reg_word_size;
- codec->reg_size = reg_size;
- /* it is necessary to make a copy of the default register cache
- * because in the case of using a compression type that requires
- * the default register cache to be marked as the
- * kernel might have freed the array by the time we initialize
- * the cache.
- */
- if (codec_drv->reg_cache_default) {
- codec->reg_def_copy = kmemdup(codec_drv->reg_cache_default,
- reg_size, GFP_KERNEL);
- if (!codec->reg_def_copy) {
- ret = -ENOMEM;
- goto fail_codec_name;
- }
- }
- }
-
- if (codec_drv->reg_access_size && codec_drv->reg_access_default) {
- if (!codec->volatile_register)
- codec->volatile_register = snd_soc_default_volatile_register;
- if (!codec->readable_register)
- codec->readable_register = snd_soc_default_readable_register;
- if (!codec->writable_register)
- codec->writable_register = snd_soc_default_writable_register;
- }
-
for (i = 0; i < num_dai; i++) {
fixup_codec_formats(&dai_drv[i].playback);
fixup_codec_formats(&dai_drv[i].capture);
list_add(&codec->list, &codec_list);
mutex_unlock(&client_mutex);
- /* register any DAIs */
- ret = snd_soc_register_dais(dev, dai_drv, num_dai);
+ /* register component */
+ ret = __snd_soc_register_component(dev, &codec->component,
+ &codec_drv->component_driver,
+ dai_drv, num_dai, false);
if (ret < 0) {
- dev_err(codec->dev, "ASoC: Failed to regster DAIs: %d\n", ret);
+		dev_err(codec->dev, "ASoC: Failed to register component: %d\n", ret);
goto fail_codec_name;
}
return;
found:
- snd_soc_unregister_dais(dev, codec->num_dai);
+ snd_soc_unregister_component(dev);
mutex_lock(&client_mutex);
list_del(&codec->list);
dev_dbg(codec->dev, "ASoC: Unregistered codec '%s'\n", codec->name);
snd_soc_cache_exit(codec);
- kfree(codec->reg_def_copy);
kfree(codec->name);
kfree(codec);
}
EXPORT_SYMBOL_GPL(snd_soc_unregister_codec);
-
-/**
- * snd_soc_register_component - Register a component with the ASoC core
- *
- */
-int snd_soc_register_component(struct device *dev,
- const struct snd_soc_component_driver *cmpnt_drv,
- struct snd_soc_dai_driver *dai_drv,
- int num_dai)
-{
- struct snd_soc_component *cmpnt;
- int ret;
-
- dev_dbg(dev, "component register %s\n", dev_name(dev));
-
- cmpnt = devm_kzalloc(dev, sizeof(*cmpnt), GFP_KERNEL);
- if (!cmpnt) {
- dev_err(dev, "ASoC: Failed to allocate memory\n");
- return -ENOMEM;
- }
-
- cmpnt->name = fmt_single_name(dev, &cmpnt->id);
- if (!cmpnt->name) {
- dev_err(dev, "ASoC: Failed to simplifying name\n");
- return -ENOMEM;
- }
-
- cmpnt->dev = dev;
- cmpnt->driver = cmpnt_drv;
- cmpnt->num_dai = num_dai;
-
- /*
- * snd_soc_register_dai() uses fmt_single_name(), and
- * snd_soc_register_dais() uses fmt_multiple_name()
- * for dai->name which is used for name based matching
- */
- if (1 == num_dai)
- ret = snd_soc_register_dai(dev, dai_drv);
- else
- ret = snd_soc_register_dais(dev, dai_drv, num_dai);
- if (ret < 0) {
- dev_err(dev, "ASoC: Failed to regster DAIs: %d\n", ret);
- goto error_component_name;
- }
-
- mutex_lock(&client_mutex);
- list_add(&cmpnt->list, &component_list);
- mutex_unlock(&client_mutex);
-
- dev_dbg(cmpnt->dev, "ASoC: Registered component '%s'\n", cmpnt->name);
-
- return ret;
-
-error_component_name:
- kfree(cmpnt->name);
-
- return ret;
-}
-EXPORT_SYMBOL_GPL(snd_soc_register_component);
-
-/**
- * snd_soc_unregister_component - Unregister a component from the ASoC core
- *
- */
-void snd_soc_unregister_component(struct device *dev)
-{
- struct snd_soc_component *cmpnt;
-
- list_for_each_entry(cmpnt, &component_list, list) {
- if (dev == cmpnt->dev)
- goto found;
- }
- return;
-
-found:
- snd_soc_unregister_dais(dev, cmpnt->num_dai);
-
- mutex_lock(&client_mutex);
- list_del(&cmpnt->list);
- mutex_unlock(&client_mutex);
-
- dev_dbg(dev, "ASoC: Unregistered component '%s'\n", cmpnt->name);
- kfree(cmpnt->name);
-}
-EXPORT_SYMBOL_GPL(snd_soc_unregister_component);
-
/* Retrieve a card's name from device tree */
int snd_soc_of_parse_card_name(struct snd_soc_card *card,
const char *propname)
}
EXPORT_SYMBOL_GPL(snd_soc_of_parse_daifmt);
+int snd_soc_of_get_dai_name(struct device_node *of_node,
+ const char **dai_name)
+{
+ struct snd_soc_component *pos;
+ struct of_phandle_args args;
+ int ret;
+
+ ret = of_parse_phandle_with_args(of_node, "sound-dai",
+ "#sound-dai-cells", 0, &args);
+ if (ret)
+ return ret;
+
+ ret = -EPROBE_DEFER;
+
+ mutex_lock(&client_mutex);
+ list_for_each_entry(pos, &component_list, list) {
+ if (pos->dev->of_node != args.np)
+ continue;
+
+ if (pos->driver->of_xlate_dai_name) {
+ ret = pos->driver->of_xlate_dai_name(pos, &args, dai_name);
+ } else {
+ int id = -1;
+
+ switch (args.args_count) {
+ case 0:
+ id = 0; /* same as dai_drv[0] */
+ break;
+ case 1:
+ id = args.args[0];
+ break;
+ default:
+ /* not supported */
+ break;
+ }
+
+ if (id < 0 || id >= pos->num_dai) {
+ ret = -EINVAL;
+ } else {
+ *dai_name = pos->dai_drv[id].name;
+ ret = 0;
+ }
+ }
+
+ break;
+ }
+ mutex_unlock(&client_mutex);
+
+ of_node_put(args.np);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(snd_soc_of_get_dai_name);
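/*
 * Caller sketch (illustrative only; "codec_np" and the dai_link wiring are
 * assumptions, not part of this change): resolve the DAI name behind a
 * "sound-dai" phandle, letting the caller defer probing until the owning
 * component has been registered.
 */
static int example_resolve_codec_dai(struct device_node *codec_np,
				     struct snd_soc_dai_link *dai_link)
{
	int ret;

	ret = snd_soc_of_get_dai_name(codec_np, &dai_link->codec_dai_name);
	if (ret)
		return ret;	/* -EPROBE_DEFER if the component is not registered yet */

	return 0;
}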
+
static int __init snd_soc_init(void)
{
#ifdef CONFIG_DEBUG_FS
w->active ? "active" : "inactive");
list_for_each_entry(p, &w->sources, list_sink) {
- if (p->connected && !p->connected(w, p->sink))
+ if (p->connected && !p->connected(w, p->source))
continue;
if (p->connect)
if (!w) {
dev_err(dapm->dev, "ASoC: Failed to create %s widget\n",
dai->driver->playback.stream_name);
+ return -ENOMEM;
}
w->priv = dai;
if (!w) {
dev_err(dapm->dev, "ASoC: Failed to create %s widget\n",
dai->driver->capture.stream_name);
+ return -ENOMEM;
}
w->priv = dai;
#include <sound/dmaengine_pcm.h>
struct dmaengine_pcm {
- struct dma_chan *chan[SNDRV_PCM_STREAM_CAPTURE + 1];
+ struct dma_chan *chan[SNDRV_PCM_STREAM_LAST + 1];
const struct snd_dmaengine_pcm_config *config;
struct snd_soc_platform platform;
unsigned int flags;
return container_of(p, struct dmaengine_pcm, platform);
}
+static struct device *dmaengine_dma_dev(struct dmaengine_pcm *pcm,
+ struct snd_pcm_substream *substream)
+{
+ if (!pcm->chan[substream->stream])
+ return NULL;
+
+ return pcm->chan[substream->stream]->device->dev;
+}
+
/**
* snd_dmaengine_pcm_prepare_slave_config() - Generic prepare_slave_config callback
* @substream: PCM substream
struct snd_soc_pcm_runtime *rtd = substream->private_data;
struct dmaengine_pcm *pcm = soc_platform_to_pcm(rtd->platform);
struct dma_chan *chan = snd_dmaengine_pcm_get_chan(substream);
+ int (*prepare_slave_config)(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params,
+ struct dma_slave_config *slave_config);
struct dma_slave_config slave_config;
int ret;
- if (pcm->config->prepare_slave_config) {
- ret = pcm->config->prepare_slave_config(substream, params,
- &slave_config);
+ memset(&slave_config, 0, sizeof(slave_config));
+
+ if (!pcm->config)
+ prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config;
+ else
+ prepare_slave_config = pcm->config->prepare_slave_config;
+
+ if (prepare_slave_config) {
+ ret = prepare_slave_config(substream, params, &slave_config);
if (ret)
return ret;
return snd_pcm_lib_malloc_pages(substream, params_buffer_bytes(params));
}
-static int dmaengine_pcm_open(struct snd_pcm_substream *substream)
+static int dmaengine_pcm_set_runtime_hwparams(struct snd_pcm_substream *substream)
{
struct snd_soc_pcm_runtime *rtd = substream->private_data;
struct dmaengine_pcm *pcm = soc_platform_to_pcm(rtd->platform);
+ struct device *dma_dev = dmaengine_dma_dev(pcm, substream);
struct dma_chan *chan = pcm->chan[substream->stream];
+ struct snd_dmaengine_dai_dma_data *dma_data;
+ struct dma_slave_caps dma_caps;
+ struct snd_pcm_hardware hw;
int ret;
- ret = snd_soc_set_runtime_hwparams(substream,
+ if (pcm->config && pcm->config->pcm_hardware)
+ return snd_soc_set_runtime_hwparams(substream,
pcm->config->pcm_hardware);
- if (ret)
- return ret;
- return snd_dmaengine_pcm_open(substream, chan);
+ dma_data = snd_soc_dai_get_dma_data(rtd->cpu_dai, substream);
+
+ memset(&hw, 0, sizeof(hw));
+ hw.info = SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_MMAP_VALID |
+ SNDRV_PCM_INFO_INTERLEAVED;
+ hw.periods_min = 2;
+ hw.periods_max = UINT_MAX;
+ hw.period_bytes_min = 256;
+ hw.period_bytes_max = dma_get_max_seg_size(dma_dev);
+ hw.buffer_bytes_max = SIZE_MAX;
+ hw.fifo_size = dma_data->fifo_size;
+
+ ret = dma_get_slave_caps(chan, &dma_caps);
+ if (ret == 0) {
+ if (dma_caps.cmd_pause)
+ hw.info |= SNDRV_PCM_INFO_PAUSE | SNDRV_PCM_INFO_RESUME;
+ }
+
+ return snd_soc_set_runtime_hwparams(substream, &hw);
}
-static struct device *dmaengine_dma_dev(struct dmaengine_pcm *pcm,
- struct snd_pcm_substream *substream)
+static int dmaengine_pcm_open(struct snd_pcm_substream *substream)
{
- if (!pcm->chan[substream->stream])
- return NULL;
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+ struct dmaengine_pcm *pcm = soc_platform_to_pcm(rtd->platform);
+ struct dma_chan *chan = pcm->chan[substream->stream];
+ int ret;
- return pcm->chan[substream->stream]->device->dev;
+ ret = dmaengine_pcm_set_runtime_hwparams(substream);
+ if (ret)
+ return ret;
+
+ return snd_dmaengine_pcm_open(substream, chan);
}
static void dmaengine_pcm_free(struct snd_pcm *pcm)
struct snd_pcm_substream *substream)
{
struct dmaengine_pcm *pcm = soc_platform_to_pcm(rtd->platform);
+ struct snd_dmaengine_dai_dma_data *dma_data;
+
+ dma_data = snd_soc_dai_get_dma_data(rtd->cpu_dai, substream);
if ((pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX) && pcm->chan[0])
return pcm->chan[0];
return pcm->config->compat_request_channel(rtd, substream);
return snd_dmaengine_pcm_request_channel(pcm->config->compat_filter_fn,
- snd_soc_dai_get_dma_data(rtd->cpu_dai, substream));
+ dma_data->filter_data);
}
static int dmaengine_pcm_new(struct snd_soc_pcm_runtime *rtd)
{
struct dmaengine_pcm *pcm = soc_platform_to_pcm(rtd->platform);
const struct snd_dmaengine_pcm_config *config = pcm->config;
+ struct device *dev = rtd->platform->dev;
+ struct snd_dmaengine_dai_dma_data *dma_data;
struct snd_pcm_substream *substream;
+ size_t prealloc_buffer_size;
+ size_t max_buffer_size;
unsigned int i;
int ret;
+ if (config && config->prealloc_buffer_size) {
+ prealloc_buffer_size = config->prealloc_buffer_size;
+ max_buffer_size = config->pcm_hardware->buffer_bytes_max;
+ } else {
+ prealloc_buffer_size = 512 * 1024;
+ max_buffer_size = SIZE_MAX;
+ }
+
for (i = SNDRV_PCM_STREAM_PLAYBACK; i <= SNDRV_PCM_STREAM_CAPTURE; i++) {
substream = rtd->pcm->streams[i].substream;
if (!substream)
continue;
+ dma_data = snd_soc_dai_get_dma_data(rtd->cpu_dai, substream);
+
+ if (!pcm->chan[i] &&
+ (pcm->flags & SND_DMAENGINE_PCM_FLAG_CUSTOM_CHANNEL_NAME))
+ pcm->chan[i] = dma_request_slave_channel(dev,
+ dma_data->chan_name);
+
if (!pcm->chan[i] && (pcm->flags & SND_DMAENGINE_PCM_FLAG_COMPAT)) {
pcm->chan[i] = dmaengine_pcm_compat_request_channel(rtd,
substream);
ret = snd_pcm_lib_preallocate_pages(substream,
SNDRV_DMA_TYPE_DEV,
dmaengine_dma_dev(pcm, substream),
- config->prealloc_buffer_size,
- config->pcm_hardware->buffer_bytes_max);
+ prealloc_buffer_size,
+ max_buffer_size);
if (ret)
goto err_free;
}
{
unsigned int i;
- if ((pcm->flags & SND_DMAENGINE_PCM_FLAG_NO_DT) || !dev->of_node)
+ if ((pcm->flags & (SND_DMAENGINE_PCM_FLAG_NO_DT |
+ SND_DMAENGINE_PCM_FLAG_CUSTOM_CHANNEL_NAME)) ||
+ !dev->of_node)
return;
if (pcm->flags & SND_DMAENGINE_PCM_FLAG_HALF_DUPLEX) {
return val;
}
-/* Primitive bulk write support for soc-cache. The data pointed to by
- * `data' needs to already be in the form the hardware expects. Any
- * data written through this function will not go through the cache as
- * it only handles writing to volatile or out of bounds registers.
- *
- * This is currently only supported for devices using the regmap API
- * wrappers.
- */
-static int snd_soc_hw_bulk_write_raw(struct snd_soc_codec *codec,
- unsigned int reg,
- const void *data, size_t len)
-{
- /* To ensure that we don't get out of sync with the cache, check
- * whether the base register is volatile or if we've directly asked
- * to bypass the cache. Out of bounds registers are considered
- * volatile.
- */
- if (!codec->cache_bypass
- && !snd_soc_codec_volatile_register(codec, reg)
- && reg < codec->driver->reg_cache_size)
- return -EINVAL;
-
- return regmap_raw_write(codec->control_data, reg, data, len);
-}
-
/**
* snd_soc_codec_set_cache_io: Set up standard I/O functions.
*
memset(&config, 0, sizeof(config));
codec->write = hw_write;
codec->read = hw_read;
- codec->bulk_write_raw = snd_soc_hw_bulk_write_raw;
config.reg_bits = addr_bits;
config.val_bits = data_bits;
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/delay.h>
+#include <linux/pinctrl/consumer.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/workqueue.h>
struct snd_soc_dai_driver *codec_dai_drv = codec_dai->driver;
int ret = 0;
+ pinctrl_pm_select_default_state(cpu_dai->dev);
+ pinctrl_pm_select_default_state(codec_dai->dev);
pm_runtime_get_sync(cpu_dai->dev);
pm_runtime_get_sync(codec_dai->dev);
pm_runtime_get_sync(platform->dev);
mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass);
/* startup the audio subsystem */
- if (cpu_dai->driver->ops->startup) {
+ if (cpu_dai->driver->ops && cpu_dai->driver->ops->startup) {
ret = cpu_dai->driver->ops->startup(substream, cpu_dai);
if (ret < 0) {
dev_err(cpu_dai->dev, "ASoC: can't open interface"
}
}
- if (codec_dai->driver->ops->startup) {
+ if (codec_dai->driver->ops && codec_dai->driver->ops->startup) {
ret = codec_dai->driver->ops->startup(substream, codec_dai);
if (ret < 0) {
dev_err(codec_dai->dev, "ASoC: can't open codec"
pm_runtime_put(platform->dev);
pm_runtime_put(codec_dai->dev);
pm_runtime_put(cpu_dai->dev);
+ if (!codec_dai->active)
+ pinctrl_pm_select_sleep_state(codec_dai->dev);
+ if (!cpu_dai->active)
+ pinctrl_pm_select_sleep_state(cpu_dai->dev);
return ret;
}
pm_runtime_put(platform->dev);
pm_runtime_put(codec_dai->dev);
pm_runtime_put(cpu_dai->dev);
+ if (!codec_dai->active)
+ pinctrl_pm_select_sleep_state(codec_dai->dev);
+ if (!cpu_dai->active)
+ pinctrl_pm_select_sleep_state(cpu_dai->dev);
return 0;
}
}
}
- if (codec_dai->driver->ops->prepare) {
+ if (codec_dai->driver->ops && codec_dai->driver->ops->prepare) {
ret = codec_dai->driver->ops->prepare(substream, codec_dai);
if (ret < 0) {
dev_err(codec_dai->dev, "ASoC: DAI prepare error: %d\n",
}
}
- if (cpu_dai->driver->ops->prepare) {
+ if (cpu_dai->driver->ops && cpu_dai->driver->ops->prepare) {
ret = cpu_dai->driver->ops->prepare(substream, cpu_dai);
if (ret < 0) {
dev_err(cpu_dai->dev, "ASoC: DAI prepare error: %d\n",
}
}
- if (codec_dai->driver->ops->hw_params) {
+ if (codec_dai->driver->ops && codec_dai->driver->ops->hw_params) {
ret = codec_dai->driver->ops->hw_params(substream, params, codec_dai);
if (ret < 0) {
dev_err(codec_dai->dev, "ASoC: can't set %s hw params:"
}
}
- if (cpu_dai->driver->ops->hw_params) {
+ if (cpu_dai->driver->ops && cpu_dai->driver->ops->hw_params) {
ret = cpu_dai->driver->ops->hw_params(substream, params, cpu_dai);
if (ret < 0) {
dev_err(cpu_dai->dev, "ASoC: %s hw params failed: %d\n",
return ret;
platform_err:
- if (cpu_dai->driver->ops->hw_free)
+ if (cpu_dai->driver->ops && cpu_dai->driver->ops->hw_free)
cpu_dai->driver->ops->hw_free(substream, cpu_dai);
interface_err:
- if (codec_dai->driver->ops->hw_free)
+ if (codec_dai->driver->ops && codec_dai->driver->ops->hw_free)
codec_dai->driver->ops->hw_free(substream, codec_dai);
codec_err:
platform->driver->ops->hw_free(substream);
/* now free hw params for the DAIs */
- if (codec_dai->driver->ops->hw_free)
+ if (codec_dai->driver->ops && codec_dai->driver->ops->hw_free)
codec_dai->driver->ops->hw_free(substream, codec_dai);
- if (cpu_dai->driver->ops->hw_free)
+ if (cpu_dai->driver->ops && cpu_dai->driver->ops->hw_free)
cpu_dai->driver->ops->hw_free(substream, cpu_dai);
mutex_unlock(&rtd->pcm_mutex);
struct snd_soc_dai *codec_dai = rtd->codec_dai;
int ret;
- if (codec_dai->driver->ops->trigger) {
+ if (codec_dai->driver->ops && codec_dai->driver->ops->trigger) {
ret = codec_dai->driver->ops->trigger(substream, cmd, codec_dai);
if (ret < 0)
return ret;
return ret;
}
- if (cpu_dai->driver->ops->trigger) {
+ if (cpu_dai->driver->ops && cpu_dai->driver->ops->trigger) {
ret = cpu_dai->driver->ops->trigger(substream, cmd, cpu_dai);
if (ret < 0)
return ret;
struct snd_soc_dai *codec_dai = rtd->codec_dai;
int ret;
- if (codec_dai->driver->ops->bespoke_trigger) {
+ if (codec_dai->driver->ops &&
+ codec_dai->driver->ops->bespoke_trigger) {
ret = codec_dai->driver->ops->bespoke_trigger(substream, cmd, codec_dai);
if (ret < 0)
return ret;
}
- if (platform->driver->bespoke_trigger) {
+ if (platform->driver->ops && platform->driver->bespoke_trigger) {
ret = platform->driver->bespoke_trigger(substream, cmd);
if (ret < 0)
return ret;
}
- if (cpu_dai->driver->ops->bespoke_trigger) {
+ if (cpu_dai->driver->ops && cpu_dai->driver->ops->bespoke_trigger) {
ret = cpu_dai->driver->ops->bespoke_trigger(substream, cmd, cpu_dai);
if (ret < 0)
return ret;
if (platform->driver->ops && platform->driver->ops->pointer)
offset = platform->driver->ops->pointer(substream);
- if (cpu_dai->driver->ops->delay)
+ if (cpu_dai->driver->ops && cpu_dai->driver->ops->delay)
delay += cpu_dai->driver->ops->delay(substream, cpu_dai);
- if (codec_dai->driver->ops->delay)
+ if (codec_dai->driver->ops && codec_dai->driver->ops->delay)
delay += codec_dai->driver->ops->delay(substream, codec_dai);
if (platform->driver->delay)
list_add(&dpcm->list_be, &fe->dpcm[stream].be_clients);
list_add(&dpcm->list_fe, &be->dpcm[stream].fe_clients);
- dev_dbg(fe->dev, " connected new DPCM %s path %s %s %s\n",
+ dev_dbg(fe->dev, "connected new DPCM %s path %s %s %s\n",
stream ? "capture" : "playback", fe->dai_link->name,
stream ? "<-" : "->", be->dai_link->name);
if (dpcm->fe == fe)
continue;
- dev_dbg(fe->dev, " reparent %s path %s %s %s\n",
+ dev_dbg(fe->dev, "reparent %s path %s %s %s\n",
stream ? "capture" : "playback",
dpcm->fe->dai_link->name,
stream ? "<-" : "->", dpcm->be->dai_link->name);
if (dpcm->state != SND_SOC_DPCM_LINK_STATE_FREE)
continue;
- dev_dbg(fe->dev, " freed DSP %s path %s %s %s\n",
+ dev_dbg(fe->dev, "freed DSP %s path %s %s %s\n",
stream ? "capture" : "playback", fe->dai_link->name,
stream ? "<-" : "->", dpcm->be->dai_link->name);
struct snd_pcm_substream *be_substream =
snd_soc_dpcm_get_substream(be, stream);
+ if (!be_substream) {
+ dev_err(be->dev, "ASoC: no backend %s stream\n",
+ stream ? "capture" : "playback");
+ continue;
+ }
+
/* is this op for this BE ? */
if (!snd_soc_dpcm_be_can_update(fe, be, stream))
continue;
(be->dpcm[stream].state != SND_SOC_DPCM_STATE_CLOSE))
continue;
- dev_dbg(be->dev, "ASoC: open BE %s\n", be->dai_link->name);
+ dev_dbg(be->dev, "ASoC: open %s BE %s\n",
+ stream ? "capture" : "playback", be->dai_link->name);
be_substream->runtime = be->dpcm[stream].runtime;
err = soc_pcm_open(be_substream);
struct snd_soc_pcm_runtime *rtd = substream->private_data;
struct snd_soc_platform *platform = rtd->platform;
- if (platform->driver->ops->ioctl)
+ if (platform->driver->ops && platform->driver->ops->ioctl)
return platform->driver->ops->ioctl(substream, cmd, arg);
return snd_pcm_lib_ioctl(substream, cmd, arg);
}
dev_dbg(be->dev, "ASoC: BE digital mute %s\n", be->dai_link->name);
- if (drv->ops->digital_mute && dai->playback_active)
- drv->ops->digital_mute(dai, mute);
+ if (drv->ops && drv->ops->digital_mute && dai->playback_active)
+ drv->ops->digital_mute(dai, mute);
}
return 0;
pcm->private_free = platform->driver->pcm_free;
out:
- dev_info(rtd->card->dev, " %s <-> %s mapping ok\n", codec_dai->name,
+ dev_info(rtd->card->dev, "%s <-> %s mapping ok\n", codec_dai->name,
cpu_dai->name);
return ret;
}
int snd_soc_platform_trigger(struct snd_pcm_substream *substream,
int cmd, struct snd_soc_platform *platform)
{
- if (platform->driver->ops->trigger)
+ if (platform->driver->ops && platform->driver->ops->trigger)
return platform->driver->ops->trigger(substream, cmd);
return 0;
}
static int dummy_dma_open(struct snd_pcm_substream *substream)
{
- snd_soc_set_runtime_hwparams(substream, &dummy_dma_hardware);
+ struct snd_soc_pcm_runtime *rtd = substream->private_data;
+
+	/* BEs don't need dummy params */
+ if (!rtd->dai_link->no_pcm)
+ snd_soc_set_runtime_hwparams(substream, &dummy_dma_hardware);
return 0;
}
static const struct snd_dmaengine_pcm_config tegra_dmaengine_pcm_config = {
.pcm_hardware = &tegra_pcm_hardware,
.prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config,
- .compat_filter_fn = NULL,
.prealloc_buffer_size = PAGE_SIZE * 8,
};
}
area->vm_ops = &usb_stream_hwdep_vm_ops;
- area->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
+ area->vm_flags |= VM_DONTDUMP;
+ if (!read)
+ area->vm_flags |= VM_DONTEXPAND;
area->vm_private_data = us122l;
atomic_inc(&us122l->mmap_count);
out:
usX2Y_clients_stop(usX2Y);
}
-static void usX2Y_error_sequence(struct usX2Ydev *usX2Y,
- struct snd_usX2Y_substream *subs, struct urb *urb)
-{
- snd_printk(KERN_ERR
-"Sequence Error!(hcd_frame=%i ep=%i%s;wait=%i,frame=%i).\n"
-"Most probably some urb of usb-frame %i is still missing.\n"
-"Cause could be too long delays in usb-hcd interrupt handling.\n",
- usb_get_current_frame_number(usX2Y->dev),
- subs->endpoint, usb_pipein(urb->pipe) ? "in" : "out",
- usX2Y->wait_iso_frame, urb->start_frame, usX2Y->wait_iso_frame);
- usX2Y_clients_stop(usX2Y);
-}
-
static void i_usX2Y_urb_complete(struct urb *urb)
{
struct snd_usX2Y_substream *subs = urb->context;
usX2Y_error_urb_status(usX2Y, subs, urb);
return;
}
- if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF)))
- subs->completed_urb = urb;
- else {
- usX2Y_error_sequence(usX2Y, subs, urb);
- return;
- }
+
+ subs->completed_urb = urb;
+
{
struct snd_usX2Y_substream *capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE],
*playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
usX2Y_error_urb_status(usX2Y, subs, urb);
return;
}
- if (likely((urb->start_frame & 0xFFFF) == (usX2Y->wait_iso_frame & 0xFFFF)))
- subs->completed_urb = urb;
- else {
- usX2Y_error_sequence(usX2Y, subs, urb);
- return;
- }
+ subs->completed_urb = urb;
capsubs = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE];
capsubs2 = usX2Y->subs[SNDRV_PCM_STREAM_CAPTURE + 2];
playbacksubs = usX2Y->subs[SNDRV_PCM_STREAM_PLAYBACK];
#include <stdbool.h>
#include <sys/vfs.h>
#include <sys/mount.h>
-#include <linux/magic.h>
#include <linux/kernel.h>
#include "debugfs.h"
Number of mmap data pages. Must be a power of two.
-g::
+ Enables call-graph (stack chain/backtrace) recording.
+
--call-graph::
- Do call-graph (stack chain/backtrace) recording.
+	Set up and enable call-graph (stack chain/backtrace) recording;
+	this implies -g.
+
+ Allows specifying "fp" (frame pointer) or "dwarf"
+ (DWARF's CFI - Call Frame Information) as the method to collect
+ the information used to show the call graphs.
+
+	On systems where binaries are built with gcc's
+	-fomit-frame-pointer, the "fp" method will produce bogus
+	call graphs; "dwarf", if available (perf tools linked against
+	the libunwind library), should be used instead.
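
	For example (illustrative command lines; "./workload" is only a
	placeholder for the profiled program):

	  perf record -g ./workload
	  perf record --call-graph dwarf,8192 ./workload

	The second form uses DWARF CFI unwinding with an 8 kB user stack
	dump recorded per sample.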
-q::
--quiet::
--asm-raw::
Show raw instruction encoding of assembly instructions.
--G [type,min,order]::
+-G::
+ Enables call-graph (stack chain/backtrace) recording.
+
--call-graph::
- Display call chains using type, min percent threshold and order.
- type can be either:
- - flat: single column, linear exposure of call chains.
- - graph: use a graph tree, displaying absolute overhead rates.
- - fractal: like graph, but displays relative rates. Each branch of
- the tree is considered as a new profiled object.
-
- order can be either:
- - callee: callee based call graph.
- - caller: inverted caller based call graph.
-
- Default: fractal,0.5,callee.
+	Set up and enable call-graph (stack chain/backtrace) recording;
+	this implies -G.
--ignore-callees=<regex>::
Ignore callees of the function(s) matching the given regex.
install-bin: all
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(bindir_SQ)'
$(INSTALL) $(OUTPUT)perf '$(DESTDIR_SQ)$(bindir_SQ)'
+ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)'
$(INSTALL) $(OUTPUT)perf-archive -t '$(DESTDIR_SQ)$(perfexec_instdir_SQ)'
ifndef NO_LIBPERL
$(INSTALL) -d -m 755 '$(DESTDIR_SQ)$(perfexec_instdir_SQ)/scripts/perl/Perf-Trace-Util/lib/Perf/Trace'
int perf_read_tsc_conversion(const struct perf_event_mmap_page *pc,
struct perf_tsc_conversion *tc)
{
- bool cap_usr_time_zero;
+ bool cap_user_time_zero;
u32 seq;
int i = 0;
tc->time_mult = pc->time_mult;
tc->time_shift = pc->time_shift;
tc->time_zero = pc->time_zero;
- cap_usr_time_zero = pc->cap_usr_time_zero;
+ cap_user_time_zero = pc->cap_user_time_zero;
rmb();
if (pc->lock == seq && !(seq & 1))
break;
}
}
- if (!cap_usr_time_zero)
+ if (!cap_user_time_zero)
return -EOPNOTSUPP;
return 0;
return perf_event__repipe(tool, event_sw, &sample_sw, machine);
}
-extern volatile int session_done;
-
static void sig_handler(int sig __maybe_unused)
{
session_done = 1;
dir1 = opendir(PATH_SYS_NODE);
if (!dir1)
- return -1;
+ return 0;
while ((dent1 = readdir(dir1)) != NULL) {
if (dent1->d_type != DT_DIR ||
while ((event = perf_evlist__mmap_read(kvm->evlist, idx)) != NULL) {
err = perf_evlist__parse_sample(kvm->evlist, event, &sample);
if (err) {
+ perf_evlist__mmap_consume(kvm->evlist, idx);
pr_err("Failed to parse sample\n");
return -1;
}
err = perf_session_queue_event(kvm->session, event, &sample, 0);
+ /*
+ * FIXME: Here we can't consume the event, as perf_session_queue_event will
+ * point to it, and it'll get possibly overwritten by the kernel.
+ */
+ perf_evlist__mmap_consume(kvm->evlist, idx);
+
if (err) {
pr_err("Failed to enqueue sample: %d\n", err);
return -1;
}
#endif /* LIBUNWIND_SUPPORT */
-int record_parse_callchain_opt(const struct option *opt,
- const char *arg, int unset)
+int record_parse_callchain(const char *arg, struct perf_record_opts *opts)
{
- struct perf_record_opts *opts = opt->value;
char *tok, *name, *saveptr = NULL;
char *buf;
int ret = -1;
- /* --no-call-graph */
- if (unset)
- return 0;
-
- /* We specified default option if none is provided. */
- BUG_ON(!arg);
-
/* We need buffer that we know we can write to. */
buf = malloc(strlen(arg) + 1);
if (!buf)
ret = get_stack_size(tok, &size);
opts->stack_dump_size = size;
}
-
- if (!ret)
- pr_debug("callchain: stack dump size %d\n",
- opts->stack_dump_size);
#endif /* LIBUNWIND_SUPPORT */
} else {
- pr_err("callchain: Unknown -g option "
+ pr_err("callchain: Unknown --call-graph option "
"value: %s\n", arg);
break;
}
} while (0);
free(buf);
+ return ret;
+}
+
+static void callchain_debug(struct perf_record_opts *opts)
+{
+ pr_debug("callchain: type %d\n", opts->call_graph);
+ if (opts->call_graph == CALLCHAIN_DWARF)
+ pr_debug("callchain: stack dump size %d\n",
+ opts->stack_dump_size);
+}
+
+int record_parse_callchain_opt(const struct option *opt,
+ const char *arg,
+ int unset)
+{
+ struct perf_record_opts *opts = opt->value;
+ int ret;
+
+ /* --no-call-graph */
+ if (unset) {
+ opts->call_graph = CALLCHAIN_NONE;
+ pr_debug("callchain: disabled\n");
+ return 0;
+ }
+
+ ret = record_parse_callchain(arg, opts);
if (!ret)
- pr_debug("callchain: type %d\n", opts->call_graph);
+ callchain_debug(opts);
return ret;
}
+int record_callchain_opt(const struct option *opt,
+ const char *arg __maybe_unused,
+ int unset __maybe_unused)
+{
+ struct perf_record_opts *opts = opt->value;
+
+ if (opts->call_graph == CALLCHAIN_NONE)
+ opts->call_graph = CALLCHAIN_FP;
+
+ callchain_debug(opts);
+ return 0;
+}
+
static const char * const record_usage[] = {
"perf record [<options>] [<command>]",
"perf record [<options>] -- <command> [<options>]",
},
};
-#define CALLCHAIN_HELP "do call-graph (stack chain/backtrace) recording: "
-#define CALLCHAIN_HELP "do call-graph (stack chain/backtrace) recording: "
+#define CALLCHAIN_HELP "setup and enable call-graph (stack chain/backtrace) recording: "
#ifdef LIBUNWIND_SUPPORT
-const char record_callchain_help[] = CALLCHAIN_HELP "[fp] dwarf";
+const char record_callchain_help[] = CALLCHAIN_HELP "fp dwarf";
#else
-const char record_callchain_help[] = CALLCHAIN_HELP "[fp]";
+const char record_callchain_help[] = CALLCHAIN_HELP "fp";
#endif
/*
"number of mmap data pages"),
OPT_BOOLEAN(0, "group", &record.opts.group,
"put the counters into a counter group"),
- OPT_CALLBACK_DEFAULT('g', "call-graph", &record.opts,
- "mode[,dump_size]", record_callchain_help,
- &record_parse_callchain_opt, "fp"),
+ OPT_CALLBACK_NOOPT('g', NULL, &record.opts,
+		     NULL, "enables call-graph recording",
+ &record_callchain_opt),
+ OPT_CALLBACK(0, "call-graph", &record.opts,
+ "mode[,dump_size]", record_callchain_help,
+ &record_parse_callchain_opt),
OPT_INCR('v', "verbose", &verbose,
"be more verbose (show counter open errors, etc)"),
OPT_BOOLEAN('q', "quiet", &quiet, "don't print any message"),
return 0;
}
-extern volatile int session_done;
-
static void sig_handler(int sig __maybe_unused)
{
session_done = 1;
}
}
+ if (session_done())
+ return 0;
+
if (nr_samples == 0) {
ui__error("The %s file has no samples!\n", session->filename);
return 0;
.ordering_requires_timestamps = true,
};
-extern volatile int session_done;
-
static void sig_handler(int sig __maybe_unused)
{
session_done = 1;
perror("failed to prepare workload");
return -1;
}
+ child_pid = evsel_list->workload.pid;
}
if (group)
ret = perf_evlist__parse_sample(top->evlist, event, &sample);
if (ret) {
pr_err("Can't parse sample, err = %d\n", ret);
- continue;
+ goto next_event;
}
evsel = perf_evlist__id2evsel(session->evlist, sample.id);
case PERF_RECORD_MISC_USER:
++top->us_samples;
if (top->hide_user_symbols)
- continue;
+ goto next_event;
machine = &session->machines.host;
break;
case PERF_RECORD_MISC_KERNEL:
++top->kernel_samples;
if (top->hide_kernel_symbols)
- continue;
+ goto next_event;
machine = &session->machines.host;
break;
case PERF_RECORD_MISC_GUEST_KERNEL:
*/
/* Fall thru */
default:
- continue;
+ goto next_event;
}
machine__process_event(machine, event);
} else
++session->stats.nr_unknown_events;
+next_event:
+ perf_evlist__mmap_consume(top->evlist, idx);
}
}
}
static int
-parse_callchain_opt(const struct option *opt, const char *arg, int unset)
+callchain_opt(const struct option *opt, const char *arg, int unset)
{
- /*
- * --no-call-graph
- */
- if (unset)
- return 0;
-
symbol_conf.use_callchain = true;
+ return record_callchain_opt(opt, arg, unset);
+}
+static int
+parse_callchain_opt(const struct option *opt, const char *arg, int unset)
+{
+ symbol_conf.use_callchain = true;
return record_parse_callchain_opt(opt, arg, unset);
}
"sort by key(s): pid, comm, dso, symbol, parent, weight, local_weight"),
OPT_BOOLEAN('n', "show-nr-samples", &symbol_conf.show_nr_samples,
"Show a column with the number of samples"),
- OPT_CALLBACK_DEFAULT('G', "call-graph", &top.record_opts,
- "mode[,dump_size]", record_callchain_help,
- &parse_callchain_opt, "fp"),
+ OPT_CALLBACK_NOOPT('G', NULL, &top.record_opts,
+ NULL, "enables call-graph recording",
+ &callchain_opt),
+ OPT_CALLBACK(0, "call-graph", &top.record_opts,
+ "mode[,dump_size]", record_callchain_help,
+ &parse_callchain_opt),
OPT_CALLBACK(0, "ignore-callees", NULL, "regex",
"ignore callees of these functions in call graphs",
report_parse_ignore_callees_opt),
#include <sys/mman.h>
#include <linux/futex.h>
+/* For older distros: */
+#ifndef MAP_STACK
+# define MAP_STACK 0x20000
+#endif
+
+#ifndef MADV_HWPOISON
+# define MADV_HWPOISON 100
+#endif
+
+#ifndef MADV_MERGEABLE
+# define MADV_MERGEABLE 12
+#endif
+
+#ifndef MADV_UNMERGEABLE
+# define MADV_UNMERGEABLE 13
+#endif
+
static size_t syscall_arg__scnprintf_hex(char *bf, size_t size,
unsigned long arg,
u8 arg_idx __maybe_unused,
err = perf_evlist__parse_sample(evlist, event, &sample);
if (err) {
fprintf(trace->output, "Can't parse sample, err = %d, skipping...\n", err);
- continue;
+ goto next_event;
}
if (trace->base_time == 0)
evsel = perf_evlist__id2evsel(evlist, sample.id);
if (evsel == NULL) {
fprintf(trace->output, "Unknown tp ID %" PRIu64 ", skipping...\n", sample.id);
- continue;
+ goto next_event;
}
if (sample.raw_data == NULL) {
fprintf(trace->output, "%s sample with no payload for tid: %d, cpu %d, raw_size=%d, skipping...\n",
perf_evsel__name(evsel), sample.tid,
sample.cpu, sample.raw_size);
- continue;
+ goto next_event;
}
handler = evsel->handler.func;
handler(trace, evsel, &sample);
+next_event:
+ perf_evlist__mmap_consume(evlist, i);
if (done)
goto out_unmap_evlist;
trace->tool.sample = trace__process_sample;
trace->tool.mmap = perf_event__process_mmap;
+ trace->tool.mmap2 = perf_event__process_mmap2;
trace->tool.comm = perf_event__process_comm;
trace->tool.exit = perf_event__process_exit;
trace->tool.fork = perf_event__process_fork;
CFLAGS += -Wextra
CFLAGS += -std=gnu99
-EXTLIBS = -lelf -lpthread -lrt -lm
+EXTLIBS = -lelf -lpthread -lrt -lm -ldl
ifeq ($(call try-cc,$(SOURCE_HELLO),$(CFLAGS) -Werror -fstack-protector-all,-fstack-protector-all),y)
CFLAGS += -fstack-protector-all
ifeq ($(call try-cc,$(SOURCE_ELF_MMAP),$(FLAGS_LIBELF),-DLIBELF_MMAP),y)
CFLAGS += -DLIBELF_MMAP
endif
+ifeq ($(call try-cc,$(SOURCE_ELF_GETPHDRNUM),$(FLAGS_LIBELF),-DHAVE_ELF_GETPHDRNUM),y)
+ CFLAGS += -DHAVE_ELF_GETPHDRNUM
+endif
# include ARCH specific config
-include $(src-perf)/arch/$(ARCH)/Makefile
}
endef
+define SOURCE_ELF_GETPHDRNUM
+#include <libelf.h>
+int main(void)
+{
+ size_t dst;
+ return elf_getphdrnum(0, &dst);
+}
+endef
+
ifndef NO_SLANG
define SOURCE_SLANG
#include <slang.h>
int main(void)
{
+ printf(\"error message: %s\", audit_errno_to_name(0));
return audit_open();
}
endef
for (i = 0; i < evlist->nr_mmaps; i++) {
while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
ret = process_event(machine, evlist, event, state);
+ perf_evlist__mmap_consume(evlist, i);
if (ret < 0)
return ret;
}
(pid_t)event->comm.tid == getpid() &&
strcmp(event->comm.comm, comm) == 0)
found += 1;
+ perf_evlist__mmap_consume(evlist, i);
}
}
return found;
goto out_munmap;
}
nr_events[evsel->idx]++;
+ perf_evlist__mmap_consume(evlist, 0);
}
err = 0;
++nr_events;
- if (type != PERF_RECORD_SAMPLE)
+ if (type != PERF_RECORD_SAMPLE) {
+ perf_evlist__mmap_consume(evlist, i);
continue;
+ }
err = perf_evsel__parse_sample(evsel, event, &sample);
if (err) {
type);
++errs;
}
+
+ perf_evlist__mmap_consume(evlist, i);
}
}
if (event->header.type != PERF_RECORD_COMM ||
(pid_t)event->comm.pid != getpid() ||
(pid_t)event->comm.tid != getpid())
- continue;
+ goto next_event;
if (strcmp(event->comm.comm, comm1) == 0) {
CHECK__(perf_evsel__parse_sample(evsel, event,
&sample));
comm2_time = sample.time;
}
+next_event:
+ perf_evlist__mmap_consume(evlist, i);
}
}
struct perf_sample sample;
if (event->header.type != PERF_RECORD_SAMPLE)
- continue;
+ goto next_event;
err = perf_evlist__parse_sample(evlist, event, &sample);
if (err < 0) {
total_periods += sample.period;
nr_samples++;
+next_event:
+ perf_evlist__mmap_consume(evlist, 0);
}
if ((u64) nr_samples == total_periods) {
retry:
while ((event = perf_evlist__mmap_read(evlist, 0)) != NULL) {
- if (event->header.type != PERF_RECORD_EXIT)
- continue;
+ if (event->header.type == PERF_RECORD_EXIT)
+ nr_exit++;
- nr_exit++;
+ perf_evlist__mmap_consume(evlist, 0);
}
if (!exited || !nr_exit) {
}
static int hist_entry__period_snprintf(struct perf_hpp *hpp,
- struct hist_entry *he,
- bool color)
+ struct hist_entry *he)
{
const char *sep = symbol_conf.field_sep;
struct perf_hpp_fmt *fmt;
} else
first = false;
- if (color && fmt->color)
+ if (perf_hpp__use_color() && fmt->color)
ret = fmt->color(fmt, hpp, he);
else
ret = fmt->entry(fmt, hpp, he);
.buf = bf,
.size = size,
};
- bool color = !symbol_conf.field_sep;
if (size == 0 || size > bfsz)
size = hpp.size = bfsz;
- ret = hist_entry__period_snprintf(&hpp, he, color);
+ ret = hist_entry__period_snprintf(&hpp, he);
hist_entry__sort_snprintf(he, bf + ret, size - ret, hists);
ret = fprintf(fp, "%s\n", bf);
print_entries:
linesz = hists__sort_list_width(hists) + 3 + 1;
+ linesz += perf_hpp__color_overhead();
line = malloc(linesz);
if (line == NULL) {
ret = -1;
end = map__rip_2objdump(map, sym->end);
offset = line_ip - start;
- if (offset < 0 || (u64)line_ip > end)
+ if ((u64)line_ip < start || (u64)line_ip > end)
offset = -1;
else
parsed_line = tmp2 + 1;
struct option;
+int record_parse_callchain(const char *arg, struct perf_record_opts *opts);
int record_parse_callchain_opt(const struct option *opt, const char *arg, int unset);
+int record_callchain_opt(const struct option *opt, const char *arg, int unset);
+
extern const char record_callchain_help[];
#endif /* __PERF_CALLCHAIN_H */
}
/**
+ * die_is_func_def - Ensure that this DIE is a subprogram and definition
+ * @dw_die: a DIE
+ *
+ * Ensure that this DIE is a subprogram and NOT a declaration. This
+ * returns true if @dw_die is a function definition.
+ **/
+bool die_is_func_def(Dwarf_Die *dw_die)
+{
+ Dwarf_Attribute attr;
+
+ return (dwarf_tag(dw_die) == DW_TAG_subprogram &&
+ dwarf_attr(dw_die, DW_AT_declaration, &attr) == NULL);
+}
+
+/**
* die_get_data_member_location - Get the data-member offset
* @mb_die: a DIE of a member of a data structure
* @offs: The offset of the member in the data structure
{
struct __addr_die_search_param *ad = data;
+ /*
+ * Since a declaration entry doesn't have a given pc, this always
+ * returns a function definition entry.
+ */
if (dwarf_tag(fn_die) == DW_TAG_subprogram &&
dwarf_haspc(fn_die, ad->addr)) {
memcpy(ad->die_mem, fn_die, sizeof(Dwarf_Die));
* @die_mem: a buffer for result DIE
*
* Search a non-inlined function DIE which includes @addr. Stores the
- * DIE to @die_mem and returns it if found. Returns NULl if failed.
+ * DIE to @die_mem and returns it if found. Returns NULL if failed.
*/
Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr,
Dwarf_Die *die_mem)
}
/**
+ * die_find_top_inlinefunc - Search the top inlined function at given address
+ * @sp_die: a subprogram DIE which includes @addr
+ * @addr: target address
+ * @die_mem: a buffer for result DIE
+ *
+ * Search an inlined function DIE which includes @addr. Stores the
+ * DIE to @die_mem and returns it if found. Returns NULL if failed.
+ * Even if several inlined functions are expanded recursively, this
+ * doesn't trace them down and only returns the topmost one.
+ */
+Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
+ Dwarf_Die *die_mem)
+{
+ return die_find_child(sp_die, __die_find_inline_cb, &addr, die_mem);
+}
+
+/**
* die_find_inlinefunc - Search an inlined function at given address
- * @cu_die: a CU DIE which including @addr
+ * @sp_die: a subprogram DIE which includes @addr
* @addr: target address
* @die_mem: a buffer for result DIE
*
* Search an inlined function DIE which includes @addr. Stores the
- * DIE to @die_mem and returns it if found. Returns NULl if failed.
+ * DIE to @die_mem and returns it if found. Returns NULL if failed.
* If several inlined functions are expanded recursively, this trace
- * it and returns deepest one.
+ * it down and returns the deepest one.
*/
Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
Dwarf_Die *die_mem)
extern int cu_walk_functions_at(Dwarf_Die *cu_die, Dwarf_Addr addr,
int (*callback)(Dwarf_Die *, void *), void *data);
+/* Ensure that this DIE is a subprogram and definition (not declaration) */
+extern bool die_is_func_def(Dwarf_Die *dw_die);
+
/* Compare diename and tname */
extern bool die_compare_name(Dwarf_Die *dw_die, const char *tname);
extern Dwarf_Die *die_find_realfunc(Dwarf_Die *cu_die, Dwarf_Addr addr,
Dwarf_Die *die_mem);
-/* Search an inlined function including given address */
+/* Search the top inlined function including given address */
+extern Dwarf_Die *die_find_top_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
+ Dwarf_Die *die_mem);
+
+/* Search the deepest inlined function including given address */
extern Dwarf_Die *die_find_inlinefunc(Dwarf_Die *sp_die, Dwarf_Addr addr,
Dwarf_Die *die_mem);
return -1;
}
- event->header.type = PERF_RECORD_MMAP2;
+ event->header.type = PERF_RECORD_MMAP;
/*
* Just like the kernel, see __perf_event_mmap in kernel/perf_event.c
*/
char prot[5];
char execname[PATH_MAX];
char anonstr[] = "//anon";
- unsigned int ino;
size_t size;
ssize_t n;
strcpy(execname, "");
/* 00400000-0040c000 r-xp 00000000 fd:01 41038 /bin/cat */
- n = sscanf(bf, "%"PRIx64"-%"PRIx64" %s %"PRIx64" %x:%x %u %s\n",
- &event->mmap2.start, &event->mmap2.len, prot,
- &event->mmap2.pgoff, &event->mmap2.maj,
- &event->mmap2.min,
- &ino, execname);
-
- event->mmap2.ino = (u64)ino;
+ n = sscanf(bf, "%"PRIx64"-%"PRIx64" %s %"PRIx64" %*x:%*x %*u %s\n",
+ &event->mmap.start, &event->mmap.len, prot,
+ &event->mmap.pgoff,
+ execname);
- if (n != 8)
+ if (n != 5)
continue;
if (prot[2] != 'x')
strcpy(execname, anonstr);
size = strlen(execname) + 1;
- memcpy(event->mmap2.filename, execname, size);
+ memcpy(event->mmap.filename, execname, size);
size = PERF_ALIGN(size, sizeof(u64));
- event->mmap2.len -= event->mmap.start;
- event->mmap2.header.size = (sizeof(event->mmap2) -
- (sizeof(event->mmap2.filename) - size));
- memset(event->mmap2.filename + size, 0, machine->id_hdr_size);
- event->mmap2.header.size += machine->id_hdr_size;
- event->mmap2.pid = tgid;
- event->mmap2.tid = pid;
+ event->mmap.len -= event->mmap.start;
+ event->mmap.header.size = (sizeof(event->mmap) -
+ (sizeof(event->mmap.filename) - size));
+ memset(event->mmap.filename + size, 0, machine->id_hdr_size);
+ event->mmap.header.size += machine->id_hdr_size;
+ event->mmap.pid = tgid;
+ event->mmap.tid = pid;
if (process(tool, event, &synth_sample, machine) != 0) {
rc = -1;
md->prev = old;
- if (!evlist->overwrite)
- perf_mmap__write_tail(md, old);
-
return event;
}
+void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx)
+{
+ if (!evlist->overwrite) {
+ struct perf_mmap *md = &evlist->mmap[idx];
+ unsigned int old = md->prev;
+
+ perf_mmap__write_tail(md, old);
+ }
+}
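+
A hedged caller-side sketch (drain_mmaps is illustrative, not from the patch) of the pattern the hunks above converge on: every event returned by perf_evlist__mmap_read() is paired with perf_evlist__mmap_consume(), including events that are skipped.

static int drain_mmaps(struct perf_evlist *evlist)
{
	union perf_event *event;
	int i, nr_samples = 0;

	for (i = 0; i < evlist->nr_mmaps; i++) {
		while ((event = perf_evlist__mmap_read(evlist, i)) != NULL) {
			if (event->header.type == PERF_RECORD_SAMPLE)
				nr_samples++;
			/* skipped events must advance the tail too */
			perf_evlist__mmap_consume(evlist, i);
		}
	}
	return nr_samples;
}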
+
static void __perf_evlist__munmap(struct perf_evlist *evlist, int idx)
{
if (evlist->mmap[idx].base != NULL) {
union perf_event *perf_evlist__mmap_read(struct perf_evlist *self, int idx);
+void perf_evlist__mmap_consume(struct perf_evlist *evlist, int idx);
+
int perf_evlist__open(struct perf_evlist *evlist);
void perf_evlist__close(struct perf_evlist *evlist);
attr->sample_type |= PERF_SAMPLE_WEIGHT;
attr->mmap = track;
- attr->mmap2 = track && !perf_missing_features.mmap2;
attr->comm = track;
/*
return write_padded(fd, name, name_len + 1, len);
}
-static int __dsos__write_buildid_table(struct list_head *head, pid_t pid,
- u16 misc, int fd)
+static int __dsos__write_buildid_table(struct list_head *head,
+ struct machine *machine,
+ pid_t pid, u16 misc, int fd)
{
+ char nm[PATH_MAX];
struct dso *pos;
dsos__for_each_with_build_id(pos, head) {
if (is_vdso_map(pos->short_name)) {
name = (char *) VDSO__MAP_NAME;
name_len = sizeof(VDSO__MAP_NAME) + 1;
+ } else if (dso__is_kcore(pos)) {
+ machine__mmap_name(machine, nm, sizeof(nm));
+ name = nm;
+ name_len = strlen(nm) + 1;
} else {
name = pos->long_name;
name_len = pos->long_name_len + 1;
umisc = PERF_RECORD_MISC_GUEST_USER;
}
- err = __dsos__write_buildid_table(&machine->kernel_dsos, machine->pid,
- kmisc, fd);
+ err = __dsos__write_buildid_table(&machine->kernel_dsos, machine,
+ machine->pid, kmisc, fd);
if (err == 0)
- err = __dsos__write_buildid_table(&machine->user_dsos,
+ err = __dsos__write_buildid_table(&machine->user_dsos, machine,
machine->pid, umisc, fd);
return err;
}
return err;
}
-static int dso__cache_build_id(struct dso *dso, const char *debugdir)
+static int dso__cache_build_id(struct dso *dso, struct machine *machine,
+ const char *debugdir)
{
bool is_kallsyms = dso->kernel && dso->long_name[0] != '/';
bool is_vdso = is_vdso_map(dso->short_name);
+ char *name = dso->long_name;
+ char nm[PATH_MAX];
- return build_id_cache__add_b(dso->build_id, sizeof(dso->build_id),
- dso->long_name, debugdir,
- is_kallsyms, is_vdso);
+ if (dso__is_kcore(dso)) {
+ is_kallsyms = true;
+ machine__mmap_name(machine, nm, sizeof(nm));
+ name = nm;
+ }
+ return build_id_cache__add_b(dso->build_id, sizeof(dso->build_id), name,
+ debugdir, is_kallsyms, is_vdso);
}
-static int __dsos__cache_build_ids(struct list_head *head, const char *debugdir)
+static int __dsos__cache_build_ids(struct list_head *head,
+ struct machine *machine, const char *debugdir)
{
struct dso *pos;
int err = 0;
dsos__for_each_with_build_id(pos, head)
- if (dso__cache_build_id(pos, debugdir))
+ if (dso__cache_build_id(pos, machine, debugdir))
err = -1;
return err;
static int machine__cache_build_ids(struct machine *machine, const char *debugdir)
{
- int ret = __dsos__cache_build_ids(&machine->kernel_dsos, debugdir);
- ret |= __dsos__cache_build_ids(&machine->user_dsos, debugdir);
+ int ret = __dsos__cache_build_ids(&machine->kernel_dsos, machine,
+ debugdir);
+ ret |= __dsos__cache_build_ids(&machine->user_dsos, machine, debugdir);
return ret;
}
if (perf_file_header__read(&f_header, header, fd) < 0)
return -EINVAL;
+ /*
+ * Sanity check that perf.data was written cleanly; data size is
+ * initialized to 0 and updated only if the on_exit function is run.
+ * If data size is still 0 then the file contains only partial
+ * information. Just warn the user and process as much of it as possible.
+ */
+ if (f_header.data.size == 0) {
+ pr_warning("WARNING: The %s file's data size field is 0 which is unexpected.\n"
+ "Was the 'perf record' command properly terminated?\n",
+ session->filename);
+ }
+
nr_attrs = f_header.attrs.size / f_header.attr_size;
lseek(fd, f_header.attrs.offset, SEEK_SET);
next = rb_first(root);
while (next) {
+ if (session_done())
+ break;
n = rb_entry(next, struct hist_entry, rb_node_in);
next = rb_next(&n->rb_node_in);
#include <pthread.h>
#include "callchain.h"
#include "header.h"
+#include "color.h"
extern struct callchain_param callchain_param;
void perf_hpp__column_register(struct perf_hpp_fmt *format);
void perf_hpp__column_enable(unsigned col);
+static inline size_t perf_hpp__use_color(void)
+{
+ return !symbol_conf.field_sep;
+}
+
+static inline size_t perf_hpp__color_overhead(void)
+{
+ return perf_hpp__use_color() ?
+ (COLOR_MAXLEN + sizeof(PERF_COLOR_RESET)) * PERF_HPP__MAX_INDEX
+ : 0;
+}
+
struct perf_evlist;
struct hist_browser_timer {
modules = path;
}
- if (symbol__restricted_filename(path, "/proc/modules"))
+ if (symbol__restricted_filename(modules, "/proc/modules"))
return -1;
file = fopen(modules, "r");
static int debuginfo__init_offline_dwarf(struct debuginfo *self,
const char *path)
{
- Dwfl_Module *mod;
int fd;
fd = open(path, O_RDONLY);
if (!self->dwfl)
goto error;
- mod = dwfl_report_offline(self->dwfl, "", "", fd);
- if (!mod)
+ self->mod = dwfl_report_offline(self->dwfl, "", "", fd);
+ if (!self->mod)
goto error;
- self->dbg = dwfl_module_getdwarf(mod, &self->bias);
+ self->dbg = dwfl_module_getdwarf(self->mod, &self->bias);
if (!self->dbg)
goto error;
}
/* Convert subprogram DIE to trace point */
-static int convert_to_trace_point(Dwarf_Die *sp_die, Dwarf_Addr paddr,
- bool retprobe, struct probe_trace_point *tp)
+static int convert_to_trace_point(Dwarf_Die *sp_die, Dwfl_Module *mod,
+ Dwarf_Addr paddr, bool retprobe,
+ struct probe_trace_point *tp)
{
Dwarf_Addr eaddr, highaddr;
- const char *name;
-
- /* Copy the name of probe point */
- name = dwarf_diename(sp_die);
- if (name) {
- if (dwarf_entrypc(sp_die, &eaddr) != 0) {
- pr_warning("Failed to get entry address of %s\n",
- dwarf_diename(sp_die));
- return -ENOENT;
- }
- if (dwarf_highpc(sp_die, &highaddr) != 0) {
- pr_warning("Failed to get end address of %s\n",
- dwarf_diename(sp_die));
- return -ENOENT;
- }
- if (paddr > highaddr) {
- pr_warning("Offset specified is greater than size of %s\n",
- dwarf_diename(sp_die));
- return -EINVAL;
- }
- tp->symbol = strdup(name);
- if (tp->symbol == NULL)
- return -ENOMEM;
- tp->offset = (unsigned long)(paddr - eaddr);
- } else
- /* This function has no name. */
- tp->offset = (unsigned long)paddr;
+ GElf_Sym sym;
+ const char *symbol;
+
+ /* Verify the address is correct */
+ if (dwarf_entrypc(sp_die, &eaddr) != 0) {
+ pr_warning("Failed to get entry address of %s\n",
+ dwarf_diename(sp_die));
+ return -ENOENT;
+ }
+ if (dwarf_highpc(sp_die, &highaddr) != 0) {
+ pr_warning("Failed to get end address of %s\n",
+ dwarf_diename(sp_die));
+ return -ENOENT;
+ }
+ if (paddr > highaddr) {
+ pr_warning("Offset specified is greater than size of %s\n",
+ dwarf_diename(sp_die));
+ return -EINVAL;
+ }
+
+ /* Get an appropriate symbol from symtab */
+ symbol = dwfl_module_addrsym(mod, paddr, &sym, NULL);
+ if (!symbol) {
+ pr_warning("Failed to find symbol at 0x%lx\n",
+ (unsigned long)paddr);
+ return -ENOENT;
+ }
+ tp->offset = (unsigned long)(paddr - sym.st_value);
+ tp->symbol = strdup(symbol);
+ if (!tp->symbol)
+ return -ENOMEM;
/* Return probe must be on the head of a subprogram */
if (retprobe) {
}
/* If not a real subprogram, find a real one */
- if (dwarf_tag(sc_die) != DW_TAG_subprogram) {
+ if (!die_is_func_def(sc_die)) {
if (!die_find_realfunc(&pf->cu_die, pf->addr, &pf->sp_die)) {
pr_warning("Failed to find probe point in any "
"functions.\n");
struct dwarf_callback_param *param = data;
struct probe_finder *pf = param->data;
struct perf_probe_point *pp = &pf->pev->point;
- Dwarf_Attribute attr;
/* Check tag and diename */
- if (dwarf_tag(sp_die) != DW_TAG_subprogram ||
- !die_compare_name(sp_die, pp->function) ||
- dwarf_attr(sp_die, DW_AT_declaration, &attr))
+ if (!die_is_func_def(sp_die) ||
+ !die_compare_name(sp_die, pp->function))
return DWARF_CB_OK;
/* Check declared file */
tev = &tf->tevs[tf->ntevs++];
/* Trace point should be converted from subprogram DIE */
- ret = convert_to_trace_point(&pf->sp_die, pf->addr,
+ ret = convert_to_trace_point(&pf->sp_die, tf->mod, pf->addr,
pf->pev->point.retprobe, &tev->point);
if (ret < 0)
return ret;
{
struct trace_event_finder tf = {
.pf = {.pev = pev, .callback = add_probe_trace_event},
- .max_tevs = max_tevs};
+ .mod = self->mod, .max_tevs = max_tevs};
int ret;
/* Allocate result tevs array */
vl = &af->vls[af->nvls++];
/* Trace point should be converted from subprogram DIE */
- ret = convert_to_trace_point(&pf->sp_die, pf->addr,
+ ret = convert_to_trace_point(&pf->sp_die, af->mod, pf->addr,
pf->pev->point.retprobe, &vl->point);
if (ret < 0)
return ret;
{
struct available_var_finder af = {
.pf = {.pev = pev, .callback = add_available_vars},
+ .mod = self->mod,
.max_vls = max_vls, .externs = externs};
int ret;
struct perf_probe_point *ppt)
{
Dwarf_Die cudie, spdie, indie;
- Dwarf_Addr _addr, baseaddr;
- const char *fname = NULL, *func = NULL, *tmp;
+ Dwarf_Addr _addr = 0, baseaddr = 0;
+ const char *fname = NULL, *func = NULL, *basefunc = NULL, *tmp;
int baseline = 0, lineno = 0, ret = 0;
/* Adjust address with bias */
/* Find a corresponding function (name, baseline and baseaddr) */
if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) {
/* Get function entry information */
- tmp = dwarf_diename(&spdie);
- if (!tmp ||
+ func = basefunc = dwarf_diename(&spdie);
+ if (!func ||
dwarf_entrypc(&spdie, &baseaddr) != 0 ||
- dwarf_decl_line(&spdie, &baseline) != 0)
+ dwarf_decl_line(&spdie, &baseline) != 0) {
+ lineno = 0;
goto post;
- func = tmp;
+ }
- if (addr == (unsigned long)baseaddr)
+ fname = dwarf_decl_file(&spdie);
+ if (addr == (unsigned long)baseaddr) {
/* Function entry - Relative line number is 0 */
lineno = baseline;
- else if (die_find_inlinefunc(&spdie, (Dwarf_Addr)addr,
- &indie)) {
+ goto post;
+ }
+
+ /* Track down the inline functions step by step */
+ while (die_find_top_inlinefunc(&spdie, (Dwarf_Addr)addr,
+ &indie)) {
+ /* There is an inline function */
if (dwarf_entrypc(&indie, &_addr) == 0 &&
- _addr == addr)
+ _addr == addr) {
/*
* addr is at an inline function entry.
* In this case, lineno should be the call-site
- * line number.
+ * line number. (overwrite lineinfo)
*/
lineno = die_get_call_lineno(&indie);
- else {
+ fname = die_get_call_file(&indie);
+ break;
+ } else {
/*
* addr is in an inline function body.
* Since lineno points one of the lines
* be the entry line of the inline function.
*/
tmp = dwarf_diename(&indie);
- if (tmp &&
- dwarf_decl_line(&spdie, &baseline) == 0)
- func = tmp;
+ if (!tmp ||
+ dwarf_decl_line(&indie, &baseline) != 0)
+ break;
+ func = tmp;
+ spdie = indie;
}
}
+ /* Verify that the lineno and baseline are in the same file */
+ tmp = dwarf_decl_file(&spdie);
+ if (!tmp || strcmp(tmp, fname) != 0)
+ lineno = 0;
}
post:
/* Make a relative line number or an offset */
if (lineno)
ppt->line = lineno - baseline;
- else if (func)
+ else if (basefunc) {
ppt->offset = addr - (unsigned long)baseaddr;
+ func = basefunc;
+ }
/* Duplicate strings */
if (func) {
return 0;
}
-/* Search function from function name */
+/* Search function definition from function name */
static int line_range_search_cb(Dwarf_Die *sp_die, void *data)
{
struct dwarf_callback_param *param = data;
if (lr->file && strtailcmp(lr->file, dwarf_decl_file(sp_die)))
return DWARF_CB_OK;
- if (dwarf_tag(sp_die) == DW_TAG_subprogram &&
+ if (die_is_func_def(sp_die) &&
die_compare_name(sp_die, lr->function)) {
lf->fname = dwarf_decl_file(sp_die);
dwarf_decl_line(sp_die, &lr->offset);
/* debug information structure */
struct debuginfo {
Dwarf *dbg;
+ Dwfl_Module *mod;
Dwfl *dwfl;
Dwarf_Addr bias;
};
struct trace_event_finder {
struct probe_finder pf;
+ Dwfl_Module *mod; /* For solving symbols */
struct probe_trace_event *tevs; /* Found trace events */
int ntevs; /* Number of trace events */
int max_tevs; /* Max number of trace events */
struct available_var_finder {
struct probe_finder pf;
+ Dwfl_Module *mod; /* For solving symbols */
struct variable_list *vls; /* Found variable lists */
int nvls; /* Number of variable lists */
int max_vls; /* Max no. of variable lists */
PyObject *pyevent = pyrf_event__new(event);
struct pyrf_event *pevent = (struct pyrf_event *)pyevent;
+ perf_evlist__mmap_consume(evlist, cpu);
+
if (pyevent == NULL)
return PyErr_NoMemory();
event = find_cache_event(evsel);
if (!event)
- die("ug! no event found for type %" PRIu64, evsel->attr.config);
+ die("ug! no event found for type %" PRIu64, (u64)evsel->attr.config);
pid = raw_field_value(event, "common_pid", data);
Py_FatalError("problem in Python trace event handler");
}
+/*
+ * Insert val into the dictionary and decrement the reference counter.
+ * This is necessary for dictionaries since PyDict_SetItemString() does not
+ * steal a reference, as opposed to PyTuple_SetItem().
+ */
+static void pydict_set_item_string_decref(PyObject *dict, const char *key, PyObject *val)
+{
+ PyDict_SetItemString(dict, key, val);
+ Py_DECREF(val);
+}
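+
A hedged stand-alone illustration (make_common_dict is hypothetical; Python 2 C API as in the surrounding code) of the leak the helper closes: PyDict_SetItemString() takes its own reference, so the caller's temporary reference must be dropped, whereas PyTuple_SetItem() steals it.

#include <Python.h>

static PyObject *make_common_dict(void)
{
	PyObject *dict = PyDict_New();
	PyObject *val = PyInt_FromLong(42);

	if (!dict || !val) {
		Py_XDECREF(dict);
		Py_XDECREF(val);
		return NULL;
	}
	/* the dict takes its own reference to val ... */
	PyDict_SetItemString(dict, "common_cpu", val);
	/* ... so the temporary one must be dropped, or val leaks */
	Py_DECREF(val);
	return dict;
}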
+
static void define_value(enum print_arg_type field_type,
const char *ev_name,
const char *field_name,
PyTuple_SetItem(t, n++, PyInt_FromLong(pid));
PyTuple_SetItem(t, n++, PyString_FromString(comm));
} else {
- PyDict_SetItemString(dict, "common_cpu", PyInt_FromLong(cpu));
- PyDict_SetItemString(dict, "common_s", PyInt_FromLong(s));
- PyDict_SetItemString(dict, "common_ns", PyInt_FromLong(ns));
- PyDict_SetItemString(dict, "common_pid", PyInt_FromLong(pid));
- PyDict_SetItemString(dict, "common_comm", PyString_FromString(comm));
+ pydict_set_item_string_decref(dict, "common_cpu", PyInt_FromLong(cpu));
+ pydict_set_item_string_decref(dict, "common_s", PyInt_FromLong(s));
+ pydict_set_item_string_decref(dict, "common_ns", PyInt_FromLong(ns));
+ pydict_set_item_string_decref(dict, "common_pid", PyInt_FromLong(pid));
+ pydict_set_item_string_decref(dict, "common_comm", PyString_FromString(comm));
}
for (field = event->format.fields; field; field = field->next) {
if (field->flags & FIELD_IS_STRING) {
if (handler)
PyTuple_SetItem(t, n++, obj);
else
- PyDict_SetItemString(dict, field->name, obj);
+ pydict_set_item_string_decref(dict, field->name, obj);
}
if (!handler)
if (!handler || !PyCallable_Check(handler))
goto exit;
- PyDict_SetItemString(dict, "ev_name", PyString_FromString(perf_evsel__name(evsel)));
- PyDict_SetItemString(dict, "attr", PyString_FromStringAndSize(
+ pydict_set_item_string_decref(dict, "ev_name", PyString_FromString(perf_evsel__name(evsel)));
+ pydict_set_item_string_decref(dict, "attr", PyString_FromStringAndSize(
(const char *)&evsel->attr, sizeof(evsel->attr)));
- PyDict_SetItemString(dict, "sample", PyString_FromStringAndSize(
+ pydict_set_item_string_decref(dict, "sample", PyString_FromStringAndSize(
(const char *)sample, sizeof(*sample)));
- PyDict_SetItemString(dict, "raw_buf", PyString_FromStringAndSize(
+ pydict_set_item_string_decref(dict, "raw_buf", PyString_FromStringAndSize(
(const char *)sample->raw_data, sample->raw_size));
- PyDict_SetItemString(dict, "comm",
+ pydict_set_item_string_decref(dict, "comm",
PyString_FromString(thread->comm));
if (al->map) {
- PyDict_SetItemString(dict, "dso",
+ pydict_set_item_string_decref(dict, "dso",
PyString_FromString(al->map->dso->name));
}
if (al->sym) {
- PyDict_SetItemString(dict, "symbol",
+ pydict_set_item_string_decref(dict, "symbol",
PyString_FromString(al->sym->name));
}
tool->sample = process_event_sample_stub;
if (tool->mmap == NULL)
tool->mmap = process_event_stub;
+ if (tool->mmap2 == NULL)
+ tool->mmap2 = process_event_stub;
if (tool->comm == NULL)
tool->comm = process_event_stub;
if (tool->fork == NULL)
return 0;
list_for_each_entry_safe(iter, tmp, head, list) {
+ if (session_done())
+ return 0;
+
if (iter->timestamp > limit)
break;
}
}
-#define session_done() (*(volatile int *)(&session_done))
volatile int session_done;
static int __perf_session__process_pipe_events(struct perf_session *self,
file_offset = page_offset;
head = data_offset - page_offset;
- if (data_offset + data_size < file_size)
+ if (data_size && (data_offset + data_size < file_size))
file_size = data_offset + data_size;
progress_next = file_size / 16;
"Processing events...");
}
+ err = 0;
+ if (session_done())
+ goto out_err;
+
if (file_pos < file_size)
goto more;
- err = 0;
/* do the final flush for ordered samples */
session->ordered_samples.next_flush = ULLONG_MAX;
err = flush_sample_queue(session, tool);
#define perf_session__set_tracepoints_handlers(session, array) \
__perf_session__set_tracepoints_handlers(session, array, ARRAY_SIZE(array))
+
+extern volatile int session_done;
+
+#define session_done() (*(volatile int *)(&session_done))
#endif /* __PERF_SESSION_H */
#include "symbol.h"
#include "debug.h"
+#ifndef HAVE_ELF_GETPHDRNUM
+static int elf_getphdrnum(Elf *elf, size_t *dst)
+{
+ GElf_Ehdr gehdr;
+ GElf_Ehdr *ehdr;
+
+ ehdr = gelf_getehdr(elf, &gehdr);
+ if (!ehdr)
+ return -1;
+
+ *dst = ehdr->e_phnum;
+
+ return 0;
+}
+#endif
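+
A hedged usage sketch (count_load_segments is hypothetical): callers can use elf_getphdrnum() unconditionally, resolving either to the libelf implementation when HAVE_ELF_GETPHDRNUM is defined or to the e_phnum fallback above.

#include <gelf.h>

static int count_load_segments(Elf *elf)
{
	GElf_Phdr phdr;
	size_t i, phdrnum;
	int nr_load = 0;

	if (elf_getphdrnum(elf, &phdrnum) != 0)
		return -1;
	for (i = 0; i < phdrnum; i++) {
		/* gelf_getphdr() copies out the i-th program header */
		if (gelf_getphdr(elf, (int)i, &phdr) != NULL &&
		    phdr.p_type == PT_LOAD)
			nr_load++;
	}
	return nr_load;
}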
+
#ifndef NT_GNU_BUILD_ID
#define NT_GNU_BUILD_ID 3
#endif
char *next = NULL;
char *addr_str;
char *mod;
- char *fmt;
+ char *fmt = NULL;
line = strtok_r(file, "\n", &next);
while (line) {
fflush(stdout);
done = 0;
- timer_create(which, NULL, &id);
+ err = timer_create(which, NULL, &id);
if (err < 0) {
perror("Can't create timer\n");
return -1;
typeof(*work), queue);
cancel_work_sync(&work->work);
list_del(&work->queue);
- if (!work->done) /* work was canceled */
+ if (!work->done) { /* work was canceled */
+ mmdrop(work->mm);
+ kvm_put_kvm(vcpu->kvm); /* == work->vcpu->kvm */
kmem_cache_free(async_pf_cache, work);
+ }
}
spin_lock(&vcpu->async_pf.lock);
EXPORT_SYMBOL_GPL(gfn_to_hva);
/*
- * The hva returned by this function is only allowed to be read.
- * It should pair with kvm_read_hva() or kvm_read_hva_atomic().
+ * If writable is set to false, the hva returned by this function is only
+ * allowed to be read.
*/
-static unsigned long gfn_to_hva_read(struct kvm *kvm, gfn_t gfn)
+unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable)
{
- return __gfn_to_hva_many(gfn_to_memslot(kvm, gfn), gfn, NULL, false);
+ struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
+ unsigned long hva = __gfn_to_hva_many(slot, gfn, NULL, false);
+
+ if (!kvm_is_error_hva(hva) && writable)
+ *writable = !memslot_is_readonly(slot);
+
+ return hva;
}
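
A hedged caller sketch (read_guest_byte is hypothetical): new callers can pass a bool pointer to learn whether the slot backing the gfn is writable, while the existing callers below keep passing NULL.

static int read_guest_byte(struct kvm *kvm, gfn_t gfn, u8 *val, bool *writable)
{
	unsigned long addr = gfn_to_hva_prot(kvm, gfn, writable);

	if (kvm_is_error_hva(addr))
		return -EFAULT;
	return __copy_from_user(val, (void __user *)addr, 1) ? -EFAULT : 0;
}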
static int kvm_read_hva(void *data, void __user *hva, int len)
int r;
unsigned long addr;
- addr = gfn_to_hva_read(kvm, gfn);
+ addr = gfn_to_hva_prot(kvm, gfn, NULL);
if (kvm_is_error_hva(addr))
return -EFAULT;
r = kvm_read_hva(data, (void __user *)addr + offset, len);
gfn_t gfn = gpa >> PAGE_SHIFT;
int offset = offset_in_page(gpa);
- addr = gfn_to_hva_read(kvm, gfn);
+ addr = gfn_to_hva_prot(kvm, gfn, NULL);
if (kvm_is_error_hva(addr))
return -EFAULT;
pagefault_disable();
static int kvm_init_debug(void)
{
- int r = -EFAULT;
+ int r = -EEXIST;
struct kvm_stats_debugfs_item *p;
kvm_debugfs_dir = debugfs_create_dir("kvm", NULL);