1.2.0 Handle creation of arrays that contain failed devices.
1.3.0 Added support for RAID 10
1.3.1 Allow device replacement/rebuild for RAID 10
+1.3.2 Fix/improve redundancy checking for RAID10
Required properties for pin configuration node:
- atmel,pins: an array of 4 integers representing a group of pins' mux and config
  settings. The format is atmel,pins = <PIN_BANK PIN_BANK_NUM PERIPH CONFIG>.
- The PERIPH 0 means gpio.
+ PERIPH 0 means gpio, PERIPH 1 is periph A, PERIPH 2 is periph B...
+ PIN_BANK 0 is pioA, PIN_BANK 1 is pioB...
Bits used for CONFIG:
PULL_UP (1 << 0): indicates this pin needs a pull-up.
pinctrl_dbgu: dbgu-0 {
atmel,pins =
<1 14 0x1 0x0 /* PB14 periph A */
- 1 15 0x1 0x1>; /* PB15 periph with pullup */
+ 1 15 0x1 0x1>; /* PB15 periph A with pullup */
};
};
};
align with the zone size <-|
|-> align with the segment size
_________________________________________________________________________
- | | | Node | Segment | Segment | |
- | Superblock | Checkpoint | Address | Info. | Summary | Main |
- | (SB) | (CP) | Table (NAT) | Table (SIT) | Area (SSA) | |
+ | | | Segment | Node | Segment | |
+ | Superblock | Checkpoint | Info. | Address | Summary | Main |
+ | (SB) | (CP) | Table (SIT) | Table (NAT) | Area (SSA) | |
|____________|_____2______|______N______|______N______|______N_____|__N___|
. .
. .
- Checkpoint (CP)
 : It contains file system information, bitmaps for valid NAT/SIT sets, orphan
inode lists, and summary entries of current active segments.
-- Node Address Table (NAT)
- : It is composed of a block address table for all the node blocks stored in
- Main area.
-
- Segment Information Table (SIT)
: It contains segment information such as valid block count and bitmap for the
validity of all the blocks.
+- Node Address Table (NAT)
+ : It is composed of a block address table for all the node blocks stored in
+ Main area.
+
- Segment Summary Area (SSA)
: It contains summary entries which contain the owner information of all the
data and node blocks stored in Main area.
valid, as shown below.
+--------+----------+---------+
- | CP | NAT | SIT |
+ | CP | SIT | NAT |
+--------+----------+---------+
. . . .
. . . .
. . . .
+-------+-------+--------+--------+--------+--------+
- | CP #0 | CP #1 | NAT #0 | NAT #1 | SIT #0 | SIT #1 |
+ | CP #0 | CP #1 | SIT #0 | SIT #1 | NAT #0 | NAT #1 |
+-------+-------+--------+--------+--------+--------+
| ^ ^
| | |
real-time workloads. It can also improve energy
efficiency for asymmetric multiprocessors.
- rcu_nocbs_poll [KNL,BOOT]
+ rcu_nocb_poll [KNL,BOOT]
Rather than requiring that offloaded CPUs
(specified by rcu_nocbs= above) explicitly
awaken the corresponding "rcuoN" kthreads,
Protocol 2.11: (Kernel 3.6) Added a field for offset of EFI handover
protocol entry point.
+Protocol 2.12: (Kernel 3.8) Added the xloadflags field and extension fields
+		to struct boot_params for loading bzImage and ramdisk
+ above 4G in 64bit.
+
**** MEMORY LAYOUT
The traditional memory map for the kernel loader, used for Image or
0230/4 2.05+ kernel_alignment Physical addr alignment required for kernel
0234/1 2.05+ relocatable_kernel Whether kernel is relocatable or not
0235/1 2.10+ min_alignment Minimum alignment, as a power of two
-0236/2 N/A pad3 Unused
+0236/2 2.12+ xloadflags Boot protocol option flags
0238/4 2.06+ cmdline_size Maximum size of the kernel command line
023C/4 2.07+ hardware_subarch Hardware subarchitecture
0240/8 2.07+ hardware_subarch_data Subarchitecture-specific data
misaligned kernel. Therefore, a loader should typically try each
power-of-two alignment from kernel_alignment down to this alignment.
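
As an illustrative sketch of that fallback loop (try_load_at() is a
hypothetical placement test standing in for whatever check a real loader
performs; kernel_alignment and min_alignment are the header fields
documented here):

	u32 align = hdr->kernel_alignment;		/* preferred alignment */
	u32 min_align = 1U << hdr->min_alignment;	/* absolute minimum */

	while (align >= min_align) {
		if (try_load_at(align))	/* hypothetical: can the kernel go here? */
			break;
		align >>= 1;		/* fall back to next smaller power of two */
	}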
+Field name: xloadflags
+Type: read
+Offset/size: 0x236/2
+Protocol: 2.12+
+
+ This field is a bitmask.
+
+ Bit 0 (read): XLF_KERNEL_64
+ - If 1, this kernel has the legacy 64-bit entry point at 0x200.
+
+ Bit 1 (read): XLF_CAN_BE_LOADED_ABOVE_4G
+ - If 1, kernel/boot_params/cmdline/ramdisk can be above 4G.
+
+ Bit 2 (read): XLF_EFI_HANDOVER_32
+ - If 1, the kernel supports the 32-bit EFI handoff entry point
+ given at handover_offset.
+
+ Bit 3 (read): XLF_EFI_HANDOVER_64
+ - If 1, the kernel supports the 64-bit EFI handoff entry point
+ given at handover_offset + 0x200.
+
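To illustrate how a boot loader might consume this bitmask (a minimal
sketch; the XLF_* values simply mirror the bit assignments above, and hdr
stands for an already-parsed setup header):

	#define XLF_KERNEL_64			(1<<0)
	#define XLF_CAN_BE_LOADED_ABOVE_4G	(1<<1)
	#define XLF_EFI_HANDOVER_32		(1<<2)
	#define XLF_EFI_HANDOVER_64		(1<<3)

	u64 load_limit = 0xffffffffULL;		/* default: keep everything below 4G */

	if (hdr->version >= 0x020c &&		/* xloadflags exists from 2.12 on */
	    (hdr->xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G))
		load_limit = ~0ULL;		/* kernel/cmdline/ramdisk may go above 4G */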
Field name: cmdline_size
Type: read
Offset/size: 0x238/4
090/010 ALL hd1_info hd1 disk parameter, OBSOLETE!!
0A0/010 ALL sys_desc_table System description table (struct sys_desc_table)
0B0/010 ALL olpc_ofw_header OLPC's OpenFirmware CIF and friends
+0C0/004 ALL ext_ramdisk_image ramdisk_image high 32bits
+0C4/004 ALL ext_ramdisk_size ramdisk_size high 32bits
+0C8/004 ALL ext_cmd_line_ptr cmd_line_ptr high 32bits
140/080 ALL edid_info Video mode setup (struct edid_info)
1C0/020 ALL efi_info EFI 32 information (struct efi_info)
1E0/004 ALL alk_mem_k Alternative mem check, in KB
1E9/001 ALL eddbuf_entries Number of entries in eddbuf (below)
1EA/001 ALL edd_mbr_sig_buf_entries Number of entries in edd_mbr_sig_buffer
(below)
+1EF/001 ALL sentinel Used to detect broken bootloaders
290/040 ALL edd_mbr_sig_buffer EDD MBR signatures
2D0/A00 ALL e820_map E820 memory map table
(array of struct e820entry)
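
The ext_* fields above carry the high 32 bits of their legacy header
counterparts; a 64-bit aware consumer recombines them roughly as follows
(sketch, using the field names from the table):

	u64 ramdisk_image = boot_params->hdr.ramdisk_image |
			    ((u64)boot_params->ext_ramdisk_image << 32);
	u64 ramdisk_size  = boot_params->hdr.ramdisk_size |
			    ((u64)boot_params->ext_ramdisk_size << 32);
	u64 cmd_line_ptr  = boot_params->hdr.cmd_line_ptr |
			    ((u64)boot_params->ext_cmd_line_ptr << 32);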
M: Haavard Skinnemoen <hskinnemoen@gmail.com>
M: Hans-Christian Egtvedt <egtvedt@samfundet.no>
W: http://www.atmel.com/products/AVR32/
-W: http://avr32linux.org/
+W: http://mirror.egtvedt.no/avr32linux.org/
W: http://avrfreaks.net/
S: Maintained
F: arch/avr32/
F: drivers/net/ethernet/i825xx/eexpress.*
ETHERNET BRIDGE
-M: Stephen Hemminger <shemminger@vyatta.com>
+M: Stephen Hemminger <stephen@networkplumber.org>
L: bridge@lists.linux-foundation.org
L: netdev@vger.kernel.org
W: http://www.linuxfoundation.org/en/Net:Bridge
MARVELL GIGABIT ETHERNET DRIVERS (skge/sky2)
M: Mirko Lindner <mlindner@marvell.com>
-M: Stephen Hemminger <shemminger@vyatta.com>
+M: Stephen Hemminger <stephen@networkplumber.org>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ethernet/marvell/sk*
F: drivers/infiniband/hw/nes/
NETEM NETWORK EMULATOR
-M: Stephen Hemminger <shemminger@vyatta.com>
+M: Stephen Hemminger <stephen@networkplumber.org>
L: netem@lists.linux-foundation.org
S: Maintained
F: net/sched/sch_netem.c
F: include/media/s3c_camif.h
SERIAL DRIVERS
-M: Alan Cox <alan@linux.intel.com>
+M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-serial@vger.kernel.org
S: Maintained
F: drivers/tty/serial
F: sound/
SOUND - SOC LAYER / DYNAMIC AUDIO POWER MANAGEMENT (ASoC)
-M: Liam Girdwood <lrg@ti.com>
+M: Liam Girdwood <lgirdwood@gmail.com>
M: Mark Brown <broonie@opensource.wolfsonmicro.com>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound.git
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
VERSION = 3
PATCHLEVEL = 8
SUBLEVEL = 0
-EXTRAVERSION = -rc4
-NAME = Terrified Chipmunk
+EXTRAVERSION = -rc7
+NAME = Unicycling Gorilla
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"
-e s/arm.*/arm/ -e s/sa110/arm/ \
-e s/s390x/s390/ -e s/parisc64/parisc/ \
-e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
- -e s/sh[234].*/sh/ )
+ -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
# Cross compiling and selecting different set of gcc/bin-utils
# ---------------------------------------------------------------------------
memory {
device_type = "memory";
- reg = <0x00000000 0x20000000>; /* 512 MB */
+ reg = <0x00000000 0x40000000>; /* 1 GB */
};
soc {
};
gpio0: gpio@d0018100 {
- compatible = "marvell,armadaxp-gpio";
- reg = <0xd0018100 0x40>,
- <0xd0018800 0x30>;
+ compatible = "marvell,orion-gpio";
+ reg = <0xd0018100 0x40>;
ngpios = <32>;
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupts-cells = <2>;
- interrupts = <16>, <17>, <18>, <19>;
+ interrupts = <82>, <83>, <84>, <85>;
};
gpio1: gpio@d0018140 {
- compatible = "marvell,armadaxp-gpio";
- reg = <0xd0018140 0x40>,
- <0xd0018840 0x30>;
+ compatible = "marvell,orion-gpio";
+ reg = <0xd0018140 0x40>;
ngpios = <17>;
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupts-cells = <2>;
- interrupts = <20>, <21>, <22>;
+ interrupts = <87>, <88>, <89>;
};
};
};
};
gpio0: gpio@d0018100 {
- compatible = "marvell,armadaxp-gpio";
- reg = <0xd0018100 0x40>,
- <0xd0018800 0x30>;
+ compatible = "marvell,orion-gpio";
+ reg = <0xd0018100 0x40>;
ngpios = <32>;
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupts-cells = <2>;
- interrupts = <16>, <17>, <18>, <19>;
+ interrupts = <82>, <83>, <84>, <85>;
};
gpio1: gpio@d0018140 {
- compatible = "marvell,armadaxp-gpio";
- reg = <0xd0018140 0x40>,
- <0xd0018840 0x30>;
+ compatible = "marvell,orion-gpio";
+ reg = <0xd0018140 0x40>;
ngpios = <32>;
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupts-cells = <2>;
- interrupts = <20>, <21>, <22>, <23>;
+ interrupts = <87>, <88>, <89>, <90>;
};
gpio2: gpio@d0018180 {
- compatible = "marvell,armadaxp-gpio";
- reg = <0xd0018180 0x40>,
- <0xd0018870 0x30>;
+ compatible = "marvell,orion-gpio";
+ reg = <0xd0018180 0x40>;
ngpios = <3>;
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupts-cells = <2>;
- interrupts = <24>;
+ interrupts = <91>;
};
ethernet@d0034000 {
};
gpio0: gpio@d0018100 {
- compatible = "marvell,armadaxp-gpio";
- reg = <0xd0018100 0x40>,
- <0xd0018800 0x30>;
+ compatible = "marvell,orion-gpio";
+ reg = <0xd0018100 0x40>;
ngpios = <32>;
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupts-cells = <2>;
- interrupts = <16>, <17>, <18>, <19>;
+ interrupts = <82>, <83>, <84>, <85>;
};
gpio1: gpio@d0018140 {
- compatible = "marvell,armadaxp-gpio";
- reg = <0xd0018140 0x40>,
- <0xd0018840 0x30>;
+ compatible = "marvell,orion-gpio";
+ reg = <0xd0018140 0x40>;
ngpios = <32>;
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupts-cells = <2>;
- interrupts = <20>, <21>, <22>, <23>;
+ interrupts = <87>, <88>, <89>, <90>;
};
gpio2: gpio@d0018180 {
- compatible = "marvell,armadaxp-gpio";
- reg = <0xd0018180 0x40>,
- <0xd0018870 0x30>;
+ compatible = "marvell,orion-gpio";
+ reg = <0xd0018180 0x40>;
ngpios = <3>;
gpio-controller;
#gpio-cells = <2>;
interrupt-controller;
#interrupts-cells = <2>;
- interrupts = <24>;
+ interrupts = <91>;
};
ethernet@d0034000 {
i2c@0 {
compatible = "i2c-gpio";
- gpios = <&pioA 23 0 /* sda */
- &pioA 24 0 /* scl */
+ gpios = <&pioA 25 0 /* sda */
+ &pioA 26 0 /* scl */
>;
i2c-gpio,sda-open-drain;
i2c-gpio,scl-open-drain;
atmel,pins =
<0 3 0x1 0x0>; /* PA3 periph A */
};
+
+ pinctrl_usart0_sck: usart0_sck-0 {
+ atmel,pins =
+ <0 4 0x1 0x0>; /* PA4 periph A */
+ };
};
usart1 {
pinctrl_usart1_rts: usart1_rts-0 {
atmel,pins =
- <3 27 0x3 0x0>; /* PC27 periph C */
+ <2 27 0x3 0x0>; /* PC27 periph C */
};
pinctrl_usart1_cts: usart1_cts-0 {
atmel,pins =
- <3 28 0x3 0x0>; /* PC28 periph C */
+ <2 28 0x3 0x0>; /* PC28 periph C */
+ };
+
+ pinctrl_usart1_sck: usart1_sck-0 {
+ atmel,pins =
+				<2 29 0x3 0x0>;	/* PC29 periph C */
};
};
pinctrl_uart2_rts: uart2_rts-0 {
atmel,pins =
- <0 0 0x2 0x0>; /* PB0 periph B */
+ <1 0 0x2 0x0>; /* PB0 periph B */
};
pinctrl_uart2_cts: uart2_cts-0 {
atmel,pins =
- <0 1 0x2 0x0>; /* PB1 periph B */
+ <1 1 0x2 0x0>; /* PB1 periph B */
+ };
+
+ pinctrl_usart2_sck: usart2_sck-0 {
+ atmel,pins =
+ <1 2 0x2 0x0>; /* PB2 periph B */
};
};
usart3 {
pinctrl_uart3: usart3-0 {
atmel,pins =
- <3 23 0x2 0x1 /* PC22 periph B with pullup */
- 3 23 0x2 0x0>; /* PC23 periph B */
+				<2 22 0x2 0x1	/* PC22 periph B with pullup */
+ 2 23 0x2 0x0>; /* PC23 periph B */
};
pinctrl_usart3_rts: usart3_rts-0 {
atmel,pins =
- <3 24 0x2 0x0>; /* PC24 periph B */
+ <2 24 0x2 0x0>; /* PC24 periph B */
};
pinctrl_usart3_cts: usart3_cts-0 {
atmel,pins =
- <3 25 0x2 0x0>; /* PC25 periph B */
+ <2 25 0x2 0x0>; /* PC25 periph B */
+ };
+
+ pinctrl_usart3_sck: usart3_sck-0 {
+ atmel,pins =
+ <2 26 0x2 0x0>; /* PC26 periph B */
};
};
uart0 {
pinctrl_uart0: uart0-0 {
atmel,pins =
- <3 8 0x3 0x0 /* PC8 periph C */
- 3 9 0x3 0x1>; /* PC9 periph C with pullup */
+ <2 8 0x3 0x0 /* PC8 periph C */
+ 2 9 0x3 0x1>; /* PC9 periph C with pullup */
};
};
uart1 {
pinctrl_uart1: uart1-0 {
atmel,pins =
- <3 16 0x3 0x0 /* PC16 periph C */
- 3 17 0x3 0x1>; /* PC17 periph C with pullup */
+ <2 16 0x3 0x0 /* PC16 periph C */
+ 2 17 0x3 0x1>; /* PC17 periph C with pullup */
};
};
pinctrl_macb0_rmii_mii: macb0_rmii_mii-0 {
atmel,pins =
- <1 8 0x1 0x0 /* PA8 periph A */
- 1 11 0x1 0x0 /* PA11 periph A */
- 1 12 0x1 0x0 /* PA12 periph A */
- 1 13 0x1 0x0 /* PA13 periph A */
- 1 14 0x1 0x0 /* PA14 periph A */
- 1 15 0x1 0x0 /* PA15 periph A */
- 1 16 0x1 0x0 /* PA16 periph A */
- 1 17 0x1 0x0>; /* PA17 periph A */
+ <1 8 0x1 0x0 /* PB8 periph A */
+ 1 11 0x1 0x0 /* PB11 periph A */
+ 1 12 0x1 0x0 /* PB12 periph A */
+ 1 13 0x1 0x0 /* PB13 periph A */
+ 1 14 0x1 0x0 /* PB14 periph A */
+ 1 15 0x1 0x0 /* PB15 periph A */
+ 1 16 0x1 0x0 /* PB16 periph A */
+ 1 17 0x1 0x0>; /* PB17 periph A */
};
};
fifo-depth = <0x80>;
card-detect-delay = <200>;
samsung,dw-mshc-ciu-div = <3>;
- samsung,dw-mshc-sdr-timing = <2 3 3>;
- samsung,dw-mshc-ddr-timing = <1 2 3>;
+ samsung,dw-mshc-sdr-timing = <2 3>;
+ samsung,dw-mshc-ddr-timing = <1 2>;
slot@0 {
reg = <0>;
fifo-depth = <0x80>;
card-detect-delay = <200>;
samsung,dw-mshc-ciu-div = <3>;
- samsung,dw-mshc-sdr-timing = <2 3 3>;
- samsung,dw-mshc-ddr-timing = <1 2 3>;
+ samsung,dw-mshc-sdr-timing = <2 3>;
+ samsung,dw-mshc-ddr-timing = <1 2>;
slot@0 {
reg = <0>;
fifo-depth = <0x80>;
card-detect-delay = <200>;
samsung,dw-mshc-ciu-div = <3>;
- samsung,dw-mshc-sdr-timing = <2 3 3>;
- samsung,dw-mshc-ddr-timing = <1 2 3>;
+ samsung,dw-mshc-sdr-timing = <2 3>;
+ samsung,dw-mshc-ddr-timing = <1 2>;
slot@0 {
reg = <0>;
};
&uart0 { status = "okay"; };
-&sdio0 { status = "okay"; };
&sata0 { status = "okay"; };
&i2c0 { status = "okay"; };
+&sdio0 {
+ status = "okay";
+ /* sdio0 card detect is connected to wrong pin on CuBox */
+ cd-gpios = <&gpio0 12 1>;
+};
+
&spi0 {
status = "okay";
};
&pinctrl {
- pinctrl-0 = <&pmx_gpio_18>;
+ pinctrl-0 = <&pmx_gpio_12 &pmx_gpio_18>;
pinctrl-names = "default";
+ pmx_gpio_12: pmx-gpio-12 {
+ marvell,pins = "mpp12";
+ marvell,function = "gpio";
+ };
+
pmx_gpio_18: pmx-gpio-18 {
marvell,pins = "mpp18";
marvell,function = "gpio";
fifo-depth = <0x80>;
card-detect-delay = <200>;
samsung,dw-mshc-ciu-div = <3>;
- samsung,dw-mshc-sdr-timing = <2 3 3>;
- samsung,dw-mshc-ddr-timing = <1 2 3>;
+ samsung,dw-mshc-sdr-timing = <2 3>;
+ samsung,dw-mshc-ddr-timing = <1 2>;
slot@0 {
reg = <0>;
fifo-depth = <0x80>;
card-detect-delay = <200>;
samsung,dw-mshc-ciu-div = <3>;
- samsung,dw-mshc-sdr-timing = <2 3 3>;
- samsung,dw-mshc-ddr-timing = <1 2 3>;
+ samsung,dw-mshc-sdr-timing = <2 3>;
+ samsung,dw-mshc-ddr-timing = <1 2>;
slot@0 {
reg = <0>;
/include/ "kirkwood.dtsi"
+/include/ "kirkwood-6281.dtsi"
/ {
chosen {
};
ocp@f1000000 {
+ pinctrl: pinctrl@10000 {
+ pinctrl-0 = < &pmx_spi &pmx_twsi0 &pmx_uart0
+ &pmx_ns2_sata0 &pmx_ns2_sata1>;
+ pinctrl-names = "default";
+
+ pmx_ns2_sata0: pmx-ns2-sata0 {
+ marvell,pins = "mpp21";
+ marvell,function = "sata0";
+ };
+ pmx_ns2_sata1: pmx-ns2-sata1 {
+ marvell,pins = "mpp20";
+ marvell,function = "sata1";
+ };
+ };
+
serial@12000 {
clock-frequency = <166666667>;
status = "okay";
reg = <0x10100 0x40>;
ngpios = <32>;
interrupt-controller;
+ #interrupt-cells = <2>;
interrupts = <35>, <36>, <37>, <38>;
};
reg = <0x10140 0x40>;
ngpios = <18>;
interrupt-controller;
+ #interrupt-cells = <2>;
interrupts = <39>, <40>, <41>;
};
macb0: ethernet@fffc4000 {
phy-mode = "mii";
+ pinctrl-0 = <&pinctrl_macb_rmii
+ &pinctrl_macb_rmii_mii_alt>;
status = "okay";
};
};
uart0: uart@01c28000 {
- compatible = "ns8250";
+ compatible = "snps,dw-apb-uart";
reg = <0x01c28000 0x400>;
interrupts = <1>;
reg-shift = <2>;
+ reg-io-width = <4>;
clock-frequency = <24000000>;
status = "disabled";
};
uart1: uart@01c28400 {
- compatible = "ns8250";
+ compatible = "snps,dw-apb-uart";
reg = <0x01c28400 0x400>;
interrupts = <2>;
reg-shift = <2>;
+ reg-io-width = <4>;
clock-frequency = <24000000>;
status = "disabled";
};
reg = <1>;
};
-/* A7s disabled till big.LITTLE patches are available...
cpu2: cpu@2 {
device_type = "cpu";
compatible = "arm,cortex-a7";
compatible = "arm,cortex-a7";
reg = <0x102>;
};
-*/
};
memory@80000000 {
irq_set_chained_handler(irq, gic_handle_cascade_irq);
}
+static u8 gic_get_cpumask(struct gic_chip_data *gic)
+{
+ void __iomem *base = gic_data_dist_base(gic);
+ u32 mask, i;
+
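+	/*
+	 * GIC_DIST_TARGET entries for the banked interrupts 0-31 read back
+	 * the mask of the current CPU interface, so fold the four target
+	 * bytes of each word together and stop at the first non-zero mask.
+	 */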
+ for (i = mask = 0; i < 32; i += 4) {
+ mask = readl_relaxed(base + GIC_DIST_TARGET + i);
+ mask |= mask >> 16;
+ mask |= mask >> 8;
+ if (mask)
+ break;
+ }
+
+ if (!mask)
+ pr_crit("GIC CPU mask not found - kernel will fail to boot.\n");
+
+ return mask;
+}
+
static void __init gic_dist_init(struct gic_chip_data *gic)
{
unsigned int i;
/*
* Set all global interrupts to this CPU only.
*/
- cpumask = readl_relaxed(base + GIC_DIST_TARGET + 0);
+ cpumask = gic_get_cpumask(gic);
+ cpumask |= cpumask << 8;
+ cpumask |= cpumask << 16;
for (i = 32; i < gic_irqs; i += 4)
writel_relaxed(cpumask, base + GIC_DIST_TARGET + i * 4 / 4);
* Get what the GIC says our CPU mask is.
*/
BUG_ON(cpu >= NR_GIC_CPU_IF);
- cpu_mask = readl_relaxed(dist_base + GIC_DIST_TARGET + 0);
+ cpu_mask = gic_get_cpumask(gic);
gic_cpu_map[cpu] = cpu_mask;
/*
CONFIG_SOC_AT91SAM9263=y
CONFIG_SOC_AT91SAM9G45=y
CONFIG_SOC_AT91SAM9X5=y
+CONFIG_SOC_AT91SAM9N12=y
CONFIG_MACH_AT91SAM_DT=y
CONFIG_AT91_PROGRAMMABLE_CLOCKS=y
CONFIG_AT91_TIMER_HZ=128
CONFIG_ZBOOT_ROM_BSS=0x0
CONFIG_ARM_APPENDED_DTB=y
CONFIG_ARM_ATAG_DTB_COMPAT=y
-CONFIG_CMDLINE="mem=128M console=ttyS0,115200 initrd=0x21100000,25165824 root=/dev/ram0 rw"
+CONFIG_CMDLINE="console=ttyS0,115200 initrd=0x21100000,25165824 root=/dev/ram0 rw"
CONFIG_KEXEC=y
CONFIG_AUTO_ZRELADDR=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
*/
#define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET)
#define TASK_SIZE (UL(CONFIG_PAGE_OFFSET) - UL(0x01000000))
-#define TASK_UNMAPPED_BASE (UL(CONFIG_PAGE_OFFSET) / 3)
+#define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_16M)
/*
* The maximum size of a 26-bit user space task.
b 1b
ENDPROC(printch)
+#ifdef CONFIG_MMU
ENTRY(debug_ll_addr)
addruart r2, r3, ip
str r2, [r0]
str r3, [r1]
mov pc, lr
ENDPROC(debug_ll_addr)
+#endif
#else
/*
* Then map boot params address in r2 if specified.
+ * We map 2 sections in case the ATAGs/DTB crosses a section boundary.
*/
mov r0, r2, lsr #SECTION_SHIFT
movs r0, r0, lsl #SECTION_SHIFT
addne r3, r3, #PAGE_OFFSET
addne r3, r4, r3, lsr #(SECTION_SHIFT - PMD_ORDER)
orrne r6, r7, r0
+ strne r6, [r3], #1 << PMD_ORDER
+ addne r6, r6, #1 << SECTION_SHIFT
strne r6, [r3]
#ifdef CONFIG_DEBUG_LL
* as it has already been validated by the primary processor.
*/
#ifdef CONFIG_ARM_VIRT_EXT
- bl __hyp_stub_install
+ bl __hyp_stub_install_secondary
#endif
safe_svcmode_maskall r9
* immediately.
*/
compare_cpu_mode_with_primary r4, r5, r6, r7
- bxne lr
+ movne pc, lr
/*
* Once we have given up on one CPU, we do not try to install the
*/
cmp r4, #HYP_MODE
- bxne lr @ give up if the CPU is not in HYP mode
+ movne pc, lr @ give up if the CPU is not in HYP mode
/*
* Configure HSCTLR to set correct exception endianness/instruction set
* Eventually, CPU-specific code might be needed -- assume not for now
*
* This code relies on the "eret" instruction to synchronize the
- * various coprocessor accesses.
+ * various coprocessor accesses. This is done when we switch to SVC
+ * (see safe_svcmode_maskall).
*/
@ Now install the hypervisor stub:
adr r7, __hyp_stub_vectors
1:
#endif
- bic r7, r4, #MODE_MASK
- orr r7, r7, #SVC_MODE
-THUMB( orr r7, r7, #PSR_T_BIT )
- msr spsr_cxsf, r7 @ This is SPSR_hyp.
-
- __MSR_ELR_HYP(14) @ msr elr_hyp, lr
- __ERET @ return, switching to SVC mode
- @ The boot CPU mode is left in r4.
+ bx lr @ The boot CPU mode is left in r4.
ENDPROC(__hyp_stub_install_secondary)
__hyp_stub_do_trap:
@ fall through
ENTRY(__hyp_set_vectors)
__HVC(0)
- bx lr
+ mov pc, lr
ENDPROC(__hyp_set_vectors)
#ifndef ZIMAGE
switch (socid) {
case ARCH_ID_AT91RM9200:
at91_soc_initdata.type = AT91_SOC_RM9200;
+ if (at91_soc_initdata.subtype == AT91_SOC_SUBTYPE_NONE)
+ at91_soc_initdata.subtype = AT91_SOC_RM9200_BGA;
at91_boot_soc = at91rm9200_soc;
break;
select CPU_EXYNOS4210
select HAVE_SAMSUNG_KEYPAD if INPUT_KEYBOARD
select PINCTRL
- select PINCTRL_EXYNOS4
+ select PINCTRL_EXYNOS
select USE_OF
help
Machine support for Samsung Exynos4 machine with device tree enabled.
select HAVE_CAN_FLEXCAN if CAN
select HAVE_IMX_GPC
select HAVE_IMX_MMDC
+ select HAVE_IMX_SRC
select HAVE_SMP
select MFD_SYSCON
select PINCTRL
clk_register_clkdev(clk[ipg], "ipg", "mxc-ehci.2");
clk_register_clkdev(clk[usbotg_ahb], "ahb", "mxc-ehci.2");
clk_register_clkdev(clk[usb_div], "per", "mxc-ehci.2");
- clk_register_clkdev(clk[ipg], "ipg", "fsl-usb2-udc");
- clk_register_clkdev(clk[usbotg_ahb], "ahb", "fsl-usb2-udc");
- clk_register_clkdev(clk[usb_div], "per", "fsl-usb2-udc");
+ clk_register_clkdev(clk[ipg], "ipg", "imx-udc-mx27");
+ clk_register_clkdev(clk[usbotg_ahb], "ahb", "imx-udc-mx27");
+ clk_register_clkdev(clk[usb_div], "per", "imx-udc-mx27");
clk_register_clkdev(clk[nfc_ipg_per], NULL, "imx25-nand.0");
/* i.mx25 has the i.mx35 type cspi */
clk_register_clkdev(clk[cspi1_ipg], NULL, "imx35-cspi.0");
clk_register_clkdev(clk[lcdc_ahb_gate], "ahb", "imx21-fb.0");
clk_register_clkdev(clk[csi_ahb_gate], "ahb", "imx27-camera.0");
clk_register_clkdev(clk[per4_gate], "per", "imx27-camera.0");
- clk_register_clkdev(clk[usb_div], "per", "fsl-usb2-udc");
- clk_register_clkdev(clk[usb_ipg_gate], "ipg", "fsl-usb2-udc");
- clk_register_clkdev(clk[usb_ahb_gate], "ahb", "fsl-usb2-udc");
+ clk_register_clkdev(clk[usb_div], "per", "imx-udc-mx27");
+ clk_register_clkdev(clk[usb_ipg_gate], "ipg", "imx-udc-mx27");
+ clk_register_clkdev(clk[usb_ahb_gate], "ahb", "imx-udc-mx27");
clk_register_clkdev(clk[usb_div], "per", "mxc-ehci.0");
clk_register_clkdev(clk[usb_ipg_gate], "ipg", "mxc-ehci.0");
clk_register_clkdev(clk[usb_ahb_gate], "ahb", "mxc-ehci.0");
clk_register_clkdev(clk[usb_div_post], "per", "mxc-ehci.2");
clk_register_clkdev(clk[usb_gate], "ahb", "mxc-ehci.2");
clk_register_clkdev(clk[ipg], "ipg", "mxc-ehci.2");
- clk_register_clkdev(clk[usb_div_post], "per", "fsl-usb2-udc");
- clk_register_clkdev(clk[usb_gate], "ahb", "fsl-usb2-udc");
- clk_register_clkdev(clk[ipg], "ipg", "fsl-usb2-udc");
+ clk_register_clkdev(clk[usb_div_post], "per", "imx-udc-mx27");
+ clk_register_clkdev(clk[usb_gate], "ahb", "imx-udc-mx27");
+ clk_register_clkdev(clk[ipg], "ipg", "imx-udc-mx27");
clk_register_clkdev(clk[csi_gate], NULL, "mx3-camera.0");
/* i.mx31 has the i.mx21 type uart */
clk_register_clkdev(clk[uart1_gate], "per", "imx21-uart.0");
clk_register_clkdev(clk[usb_div], "per", "mxc-ehci.2");
clk_register_clkdev(clk[ipg], "ipg", "mxc-ehci.2");
clk_register_clkdev(clk[usbotg_gate], "ahb", "mxc-ehci.2");
- clk_register_clkdev(clk[usb_div], "per", "fsl-usb2-udc");
- clk_register_clkdev(clk[ipg], "ipg", "fsl-usb2-udc");
- clk_register_clkdev(clk[usbotg_gate], "ahb", "fsl-usb2-udc");
+ clk_register_clkdev(clk[usb_div], "per", "imx-udc-mx27");
+ clk_register_clkdev(clk[ipg], "ipg", "imx-udc-mx27");
+ clk_register_clkdev(clk[usbotg_gate], "ahb", "imx-udc-mx27");
clk_register_clkdev(clk[wdog_gate], NULL, "imx2-wdt.0");
clk_register_clkdev(clk[nfc_div], NULL, "imx25-nand.0");
clk_register_clkdev(clk[csi_gate], NULL, "mx3-camera.0");
clk_register_clkdev(clk[usboh3_per_gate], "per", "mxc-ehci.2");
clk_register_clkdev(clk[usboh3_gate], "ipg", "mxc-ehci.2");
clk_register_clkdev(clk[usboh3_gate], "ahb", "mxc-ehci.2");
- clk_register_clkdev(clk[usboh3_per_gate], "per", "fsl-usb2-udc");
- clk_register_clkdev(clk[usboh3_gate], "ipg", "fsl-usb2-udc");
- clk_register_clkdev(clk[usboh3_gate], "ahb", "fsl-usb2-udc");
+ clk_register_clkdev(clk[usboh3_per_gate], "per", "imx-udc-mx51");
+ clk_register_clkdev(clk[usboh3_gate], "ipg", "imx-udc-mx51");
+ clk_register_clkdev(clk[usboh3_gate], "ahb", "imx-udc-mx51");
clk_register_clkdev(clk[nfc_gate], NULL, "imx51-nand");
clk_register_clkdev(clk[ssi1_ipg_gate], NULL, "imx-ssi.0");
clk_register_clkdev(clk[ssi2_ipg_gate], NULL, "imx-ssi.1");
for (i = 0; i < ARRAY_SIZE(clks_init_on); i++)
clk_prepare_enable(clk[clks_init_on[i]]);
+ /* Set initial power mode */
+ imx6q_set_lpm(WAIT_CLOCKED);
+
np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-gpt");
base = of_iomap(np, 0);
WARN_ON(!base);
extern void imx6q_clock_map_io(void);
extern void imx_cpu_die(unsigned int cpu);
+extern int imx_cpu_kill(unsigned int cpu);
#ifdef CONFIG_PM
extern void imx6q_pm_init(void);
#include <linux/fsl_devices.h>
struct imx_fsl_usb2_udc_data {
+ const char *devid;
resource_size_t iobase;
resource_size_t irq;
};
#include "../hardware.h"
#include "devices-common.h"
-#define imx_fsl_usb2_udc_data_entry_single(soc) \
+#define imx_fsl_usb2_udc_data_entry_single(soc, _devid) \
{ \
+ .devid = _devid, \
.iobase = soc ## _USB_OTG_BASE_ADDR, \
.irq = soc ## _INT_USB_OTG, \
}
#ifdef CONFIG_SOC_IMX25
const struct imx_fsl_usb2_udc_data imx25_fsl_usb2_udc_data __initconst =
- imx_fsl_usb2_udc_data_entry_single(MX25);
+ imx_fsl_usb2_udc_data_entry_single(MX25, "imx-udc-mx27");
#endif /* ifdef CONFIG_SOC_IMX25 */
#ifdef CONFIG_SOC_IMX27
const struct imx_fsl_usb2_udc_data imx27_fsl_usb2_udc_data __initconst =
- imx_fsl_usb2_udc_data_entry_single(MX27);
+ imx_fsl_usb2_udc_data_entry_single(MX27, "imx-udc-mx27");
#endif /* ifdef CONFIG_SOC_IMX27 */
#ifdef CONFIG_SOC_IMX31
const struct imx_fsl_usb2_udc_data imx31_fsl_usb2_udc_data __initconst =
- imx_fsl_usb2_udc_data_entry_single(MX31);
+ imx_fsl_usb2_udc_data_entry_single(MX31, "imx-udc-mx27");
#endif /* ifdef CONFIG_SOC_IMX31 */
#ifdef CONFIG_SOC_IMX35
const struct imx_fsl_usb2_udc_data imx35_fsl_usb2_udc_data __initconst =
- imx_fsl_usb2_udc_data_entry_single(MX35);
+ imx_fsl_usb2_udc_data_entry_single(MX35, "imx-udc-mx27");
#endif /* ifdef CONFIG_SOC_IMX35 */
#ifdef CONFIG_SOC_IMX51
const struct imx_fsl_usb2_udc_data imx51_fsl_usb2_udc_data __initconst =
- imx_fsl_usb2_udc_data_entry_single(MX51);
+ imx_fsl_usb2_udc_data_entry_single(MX51, "imx-udc-mx51");
#endif
struct platform_device *__init imx_add_fsl_usb2_udc(
.flags = IORESOURCE_IRQ,
},
};
- return imx_add_platform_device_dmamask("fsl-usb2-udc", -1,
+ return imx_add_platform_device_dmamask(data->devid, -1,
res, ARRAY_SIZE(res),
pdata, sizeof(*pdata), DMA_BIT_MASK(32));
}
.flags = IORESOURCE_IRQ,
},
};
- return imx_add_platform_device_dmamask("imx-fb", 0,
+ return imx_add_platform_device_dmamask(data->devid, 0,
res, ARRAY_SIZE(res),
pdata, sizeof(*pdata), DMA_BIT_MASK(32));
}
void imx_cpu_die(unsigned int cpu)
{
cpu_enter_lowpower();
- imx_enable_cpu(cpu, false);
+ cpu_do_idle();
+}
- /* spin here until hardware takes it down */
- while (1)
- ;
+int imx_cpu_kill(unsigned int cpu)
+{
+ imx_enable_cpu(cpu, false);
+ return 1;
}
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/genalloc.h>
-
-#include "iram.h"
+#include "linux/platform_data/imx-iram.h"
static unsigned long iram_phys_base;
static void __iomem *iram_virt_base;
.smp_boot_secondary = imx_boot_secondary,
#ifdef CONFIG_HOTPLUG_CPU
.cpu_die = imx_cpu_die,
+ .cpu_kill = imx_cpu_kill,
#endif
};
cpu_suspend(0, imx6q_suspend_finish);
imx_smp_prepare();
imx_gpc_post_resume();
+ imx6q_set_lpm(WAIT_CLOCKED);
break;
default:
return -EINVAL;
{
int ret = 0;
+ if (!ap_syscon_base)
+ return -EINVAL;
+
if (nr == 0) {
sys->mem_offset = PHYS_PCI_MEM_BASE;
ret = pci_v3_setup_resources(sys);
- /* Remap the Integrator system controller */
- ap_syscon_base = ioremap(INTEGRATOR_SC_BASE, 0x100);
- if (!ap_syscon_base)
- return -EINVAL;
}
return ret;
unsigned int temp;
int ret;
+ /* Remap the Integrator system controller */
+ ap_syscon_base = ioremap(INTEGRATOR_SC_BASE, 0x100);
+ if (!ap_syscon_base) {
+ pr_err("unable to remap the AP syscon for PCIv3\n");
+ return;
+ }
+
pcibios_min_mem = 0x00100000;
/*
#include <linux/gpio.h>
#include <linux/of.h>
#include "common.h"
-#include "mpp.h"
static struct mv643xx_eth_platform_data ns2_ge00_data = {
.phy_addr = MV643XX_ETH_PHY_ADDR(8),
};
-static unsigned int ns2_mpp_config[] __initdata = {
- MPP0_SPI_SCn,
- MPP1_SPI_MOSI,
- MPP2_SPI_SCK,
- MPP3_SPI_MISO,
- MPP4_NF_IO6,
- MPP5_NF_IO7,
- MPP6_SYSRST_OUTn,
- MPP7_GPO, /* Fan speed (bit 1) */
- MPP8_TW0_SDA,
- MPP9_TW0_SCK,
- MPP10_UART0_TXD,
- MPP11_UART0_RXD,
- MPP12_GPO, /* Red led */
- MPP14_GPIO, /* USB fuse */
- MPP16_GPIO, /* SATA 0 power */
- MPP17_GPIO, /* SATA 1 power */
- MPP18_NF_IO0,
- MPP19_NF_IO1,
- MPP20_SATA1_ACTn,
- MPP21_SATA0_ACTn,
- MPP22_GPIO, /* Fan speed (bit 0) */
- MPP23_GPIO, /* Fan power */
- MPP24_GPIO, /* USB mode select */
- MPP25_GPIO, /* Fan rotation fail */
- MPP26_GPIO, /* USB device vbus */
- MPP28_GPIO, /* USB enable host vbus */
- MPP29_GPIO, /* Blue led (slow register) */
- MPP30_GPIO, /* Blue led (command register) */
- MPP31_GPIO, /* Board power off */
- MPP32_GPIO, /* Power button (0 = Released, 1 = Pushed) */
- MPP33_GPO, /* Fan speed (bit 2) */
- 0
-};
-
#define NS2_GPIO_POWER_OFF 31
static void ns2_power_off(void)
/*
* Basic setup. Needs to be called early.
*/
- kirkwood_mpp_conf(ns2_mpp_config);
-
if (of_machine_is_compatible("lacie,netspace_lite_v2") ||
of_machine_is_compatible("lacie,netspace_mini_v2"))
ns2_ge00_data.phy_addr = MV643XX_ETH_PHY_ADDR(0);
ccflags-$(CONFIG_ARCH_MULTIPLATFORM) := -I$(srctree)/$(src)/include \
-I$(srctree)/arch/arm/plat-orion/include
+AFLAGS_coherency_ll.o := -Wa,-march=armv7-a
+
obj-y += system-controller.o
obj-$(CONFIG_MACH_ARMADA_370_XP) += armada-370-xp.o irq-armada-370-xp.o addr-map.o coherency.o coherency_ll.o pmsu.o
obj-$(CONFIG_SMP) += platsmp.o headsmp.o
OMAP_PULL_ENA),
OMAP4_MUX(ABE_MCBSP1_FSX, OMAP_MUX_MODE0 | OMAP_PIN_INPUT),
+ /* UART2 - BT/FM/GPS shared transport */
+ OMAP4_MUX(UART2_CTS, OMAP_PIN_INPUT | OMAP_MUX_MODE0),
+ OMAP4_MUX(UART2_RTS, OMAP_PIN_OUTPUT | OMAP_MUX_MODE0),
+ OMAP4_MUX(UART2_RX, OMAP_PIN_INPUT | OMAP_MUX_MODE0),
+ OMAP4_MUX(UART2_TX, OMAP_PIN_OUTPUT | OMAP_MUX_MODE0),
+
{ .reg_offset = OMAP_MUX_TERMINATOR },
};
omap2_init_clk_hw_omap_clocks(c->lk.clk);
}
+ omap2xxx_clkt_vps_late_init();
+
omap2_clk_disable_autoidle_all();
omap2_clk_enable_init_clocks(enable_init_clks,
omap2_init_clk_hw_omap_clocks(c->lk.clk);
}
+ omap2xxx_clkt_vps_late_init();
+
omap2_clk_disable_autoidle_all();
omap2_clk_enable_init_clocks(enable_init_clks,
* On OMAP4460 the ABE DPLL fails to turn on if in idle low-power
	 * state when turning on the ABE clock domain. Work around this by
* locking the ABE DPLL on boot.
+ * Lock the ABE DPLL in any case to avoid issues with audio.
*/
- if (cpu_is_omap446x()) {
- rc = clk_set_parent(&abe_dpll_refclk_mux_ck, &sys_32k_ck);
- if (!rc)
- rc = clk_set_rate(&dpll_abe_ck, OMAP4_DPLL_ABE_DEFFREQ);
- if (rc)
- pr_err("%s: failed to configure ABE DPLL!\n", __func__);
- }
+ rc = clk_set_parent(&abe_dpll_refclk_mux_ck, &sys_32k_ck);
+ if (!rc)
+ rc = clk_set_rate(&dpll_abe_ck, OMAP4_DPLL_ABE_DEFFREQ);
+ if (rc)
+ pr_err("%s: failed to configure ABE DPLL!\n", __func__);
return 0;
}
return cnt;
}
-static void omap_init_ocp2scp(void)
+static void __init omap_init_ocp2scp(void)
{
struct omap_hwmod *oh;
struct platform_device *pdev;
#include <linux/dma-mapping.h>
#include <linux/platform_data/omap_drm.h>
+#include "soc.h"
#include "omap_device.h"
#include "omap_hwmod.h"
oh->name);
}
- platform_data.omaprev = GET_OMAP_REVISION();
+ platform_data.omaprev = GET_OMAP_TYPE;
return platform_device_register(&omap_drm_device);
* currently reset very early during boot, before I2C is
* available, so it doesn't seem that we have any choice in
* the kernel other than to avoid resetting it.
+ *
+ * Also, McPDM needs to be configured to NO_IDLE mode when it
+	 * is in use, otherwise vital clocks will be gated, which
+	 * results in 'slow motion' audio playback.
*/
- .flags = HWMOD_EXT_OPT_MAIN_CLK,
+ .flags = HWMOD_EXT_OPT_MAIN_CLK | HWMOD_SWSUP_SIDLE,
.mpu_irqs = omap44xx_mcpdm_irqs,
.sdma_reqs = omap44xx_mcpdm_sdma_reqs,
.main_clk = "mcpdm_fck",
struct device_node *np;
for_each_matching_node(np, match) {
- if (!of_device_is_available(np)) {
- of_node_put(np);
+ if (!of_device_is_available(np))
continue;
- }
- if (property && !of_get_property(np, property, NULL)) {
- of_node_put(np);
+ if (property && !of_get_property(np, property, NULL))
continue;
- }
of_add_property(np, &device_disabled);
return np;
/*
* Only define NR_IRQS if less than NR_IRQS_EB
*/
-#define NR_IRQS_EB (IRQ_EB_GIC_START + 96)
+#define NR_IRQS_EB (IRQ_EB_GIC_START + 128)
#if defined(CONFIG_MACH_REALVIEW_EB) \
&& (!defined(NR_IRQS) || (NR_IRQS < NR_IRQS_EB))
.bus_num = 0,
.chip_select = 0,
.mode = SPI_MODE_0,
- .irq = S3C_EINT(5),
+ .irq = S3C_EINT(4),
.controller_data = &wm0010_spi_csinfo,
.platform_data = &wm0010_pdata,
},
for (i = 0; i < ARRAY_SIZE(s3c64xx_pm_domains); i++)
pm_genpd_init(&s3c64xx_pm_domains[i]->pd, NULL, false);
+#ifdef CONFIG_S3C_DEV_FB
if (dev_get_platdata(&s3c_device_fb.dev))
pm_genpd_add_device(&s3c64xx_pm_f.pd, &s3c_device_fb.dev);
+#endif
return 0;
}
if (is_coherent || nommu())
addr = __alloc_simple_buffer(dev, size, gfp, &page);
- else if (gfp & GFP_ATOMIC)
+ else if (!(gfp & __GFP_WAIT))
addr = __alloc_from_pool(size, &page);
else if (!IS_ENABLED(CONFIG_CMA))
addr = __alloc_remap_buffer(dev, size, gfp, prot, &page, caller);
size_t size, enum dma_data_direction dir,
void (*op)(const void *, size_t, int))
{
+ unsigned long pfn;
+ size_t left = size;
+
+ pfn = page_to_pfn(page) + offset / PAGE_SIZE;
+ offset %= PAGE_SIZE;
+
/*
* A single sg entry may refer to multiple physically contiguous
* pages. But we still need to process highmem pages individually.
* If highmem is not configured then the bulk of this loop gets
* optimized out.
*/
- size_t left = size;
do {
size_t len = left;
void *vaddr;
+ page = pfn_to_page(pfn);
+
if (PageHighMem(page)) {
- if (len + offset > PAGE_SIZE) {
- if (offset >= PAGE_SIZE) {
- page += offset / PAGE_SIZE;
- offset %= PAGE_SIZE;
- }
+ if (len + offset > PAGE_SIZE)
len = PAGE_SIZE - offset;
- }
vaddr = kmap_high_get(page);
if (vaddr) {
vaddr += offset;
op(vaddr, len, dir);
}
offset = 0;
- page++;
+ pfn++;
left -= len;
} while (left);
}
},
[MT_MEMORY_SO] = {
.prot_pte = L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
- L_PTE_MT_UNCACHED,
+ L_PTE_MT_UNCACHED | L_PTE_XN,
.prot_l1 = PMD_TYPE_TABLE,
.prot_sect = PMD_TYPE_SECT | PMD_SECT_AP_WRITE | PMD_SECT_S |
PMD_SECT_UNCACHED | PMD_SECT_XN,
*/
ENTRY(versatile_secondary_startup)
mrc p15, 0, r0, c0, c0, 5
- and r0, r0, #15
+ bic r0, #0xff000000
adr r4, 1f
ldmia r4, {r5, r6}
sub r4, r4, r5
@ IRQs disabled.
@
ENTRY(do_vfp)
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
ldr r4, [r10, #TI_PREEMPT] @ get preempt count
add r11, r4, #1 @ increment it
str r11, [r10, #TI_PREEMPT]
ENDPROC(do_vfp)
ENTRY(vfp_null_entry)
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
get_thread_info r10
ldr r4, [r10, #TI_PREEMPT] @ get preempt count
sub r11, r4, #1 @ decrement it
__INIT
ENTRY(vfp_testing_entry)
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
get_thread_info r10
ldr r4, [r10, #TI_PREEMPT] @ get preempt count
sub r11, r4, #1 @ decrement it
@ else it's one 32-bit instruction, so
@ always subtract 4 from the following
@ instruction address.
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
get_thread_info r10
ldr r4, [r10, #TI_PREEMPT] @ get preempt count
sub r11, r4, #1 @ decrement it
@ not recognised by VFP
DBGSTR "not VFP"
-#ifdef CONFIG_PREEMPT
+#ifdef CONFIG_PREEMPT_COUNT
get_thread_info r10
ldr r4, [r10, #TI_PREEMPT] @ get preempt count
sub r11, r4, #1 @ decrement it
typedef unsigned long elf_greg_t;
-#define ELF_NGREG (sizeof (struct pt_regs) / sizeof(elf_greg_t))
+#define ELF_NGREG (sizeof(struct user_pt_regs) / sizeof(elf_greg_t))
+#define ELF_CORE_COPY_REGS(dest, regs) \
+ *(struct user_pt_regs *)&(dest) = (regs)->user_regs;
+
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
typedef struct user_fpsimd_state elf_fpregset_t;
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
+/* drivers/base/dma-mapping.c */
+extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size);
+extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size);
+
+#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
+#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
+
#endif /* __ASM_AVR32_DMA_MAPPING_H */
_dma_sync((dma_addr_t)vaddr, size, dir);
}
+/* drivers/base/dma-mapping.c */
+extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size);
+extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size);
+
+#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
+#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
+
#endif /* _BLACKFIN_DMA_MAPPING_H */
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent((d), (s), (h), (f))
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent((d), (s), (v), (h))
+/* Not supported for now */
+static inline int dma_mmap_coherent(struct device *dev,
+ struct vm_area_struct *vma, void *cpu_addr,
+ dma_addr_t dma_addr, size_t size)
+{
+ return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size)
+{
+ return -EINVAL;
+}
+
#endif /* _ASM_C6X_DMA_MAPPING_H */
{
}
+/* drivers/base/dma-mapping.c */
+extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size);
+extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size);
+
+#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
+#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
+
#endif
flush_write_buffers();
}
+/* Not supported for now */
+static inline int dma_mmap_coherent(struct device *dev,
+ struct vm_area_struct *vma, void *cpu_addr,
+ dma_addr_t dma_addr, size_t size)
+{
+ return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size)
+{
+ return -EINVAL;
+}
+
#endif /* _ASM_DMA_MAPPING_H */
extern void dma_free_coherent(struct device *, size_t,
void *, dma_addr_t);
+static inline void *dma_alloc_attrs(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flag,
+ struct dma_attrs *attrs)
+{
+ /* attrs is not supported and ignored */
+ return dma_alloc_coherent(dev, size, dma_handle, flag);
+}
+
+static inline void dma_free_attrs(struct device *dev, size_t size,
+ void *cpu_addr, dma_addr_t dma_handle,
+ struct dma_attrs *attrs)
+{
+ /* attrs is not supported and ignored */
+ dma_free_coherent(dev, size, cpu_addr, dma_handle);
+}
+
static inline void *dma_alloc_noncoherent(struct device *dev, size_t size,
dma_addr_t *handle, gfp_t flag)
{
#include <asm-generic/dma-mapping-broken.h>
#endif
+/* drivers/base/dma-mapping.c */
+extern int dma_common_mmap(struct device *dev, struct vm_area_struct *vma,
+ void *cpu_addr, dma_addr_t dma_addr, size_t size);
+extern int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size);
+
+#define dma_mmap_coherent(d, v, c, h, s) dma_common_mmap(d, v, c, h, s)
+#define dma_get_sgtable(d, t, v, h, s) dma_common_get_sgtable(d, t, v, h, s)
+
#endif /* _M68K_DMA_MAPPING_H */
#include <uapi/asm/unistd.h>
-#define NR_syscalls 348
+#define NR_syscalls 349
#define __ARCH_WANT_OLD_READDIR
#define __ARCH_WANT_OLD_STAT
#define __NR_process_vm_readv 345
#define __NR_process_vm_writev 346
#define __NR_kcmp 347
+#define __NR_finit_module 348
#endif /* _UAPI_ASM_M68K_UNISTD_H_ */
.long sys_process_vm_readv /* 345 */
.long sys_process_vm_writev
.long sys_kcmp
+ .long sys_finit_module
select SSB_DRIVER_EXTIF
select SSB_EMBEDDED
select SSB_B43_PCI_BRIDGE if PCI
+ select SSB_DRIVER_PCICORE if PCI
select SSB_PCICORE_HOSTMODE if PCI
select SSB_DRIVER_GPIO
+ select GPIOLIB
default y
help
Add support for old Broadcom BCM47xx boards with Sonics Silicon Backplane support.
select BCMA_HOST_PCI if PCI
select BCMA_DRIVER_PCI_HOSTMODE if PCI
select BCMA_DRIVER_GPIO
+ select GPIOLIB
default y
help
Add support for new Broadcom BCM47xx boards with Broadcom specific Advanced Microcontroller Bus.
* measurement, and debugging facilities.
*/
+#include <linux/compiler.h>
#include <linux/irqflags.h>
#include <asm/octeon/cvmx.h>
#include <asm/octeon/cvmx-l2c.h>
*/
static void fault_in(uint64_t addr, int len)
{
- volatile char *ptr;
- volatile char dummy;
+ char *ptr;
+
/*
* Adjust addr and length so we get all cache lines even for
* small ranges spanning two cache lines.
*/
len += addr & CVMX_CACHE_LINE_MASK;
addr &= ~CVMX_CACHE_LINE_MASK;
- ptr = (volatile char *)cvmx_phys_to_ptr(addr);
+ ptr = cvmx_phys_to_ptr(addr);
/*
* Invalidate L1 cache to make sure all loads result in data
* being in L2.
*/
CVMX_DCACHE_INVALIDATE;
while (len > 0) {
- dummy += *ptr;
+ ACCESS_ONCE(*ptr);
len -= CVMX_CACHE_LINE_SIZE;
ptr += CVMX_CACHE_LINE_SIZE;
}
#include <asm/mipsregs.h>
#define DSP_DEFAULT 0x00000000
-#define DSP_MASK 0x3ff
+#define DSP_MASK 0x3f
#define __enable_dsp_hazard() \
do { \
struct u_format u_format;
struct c_format c_format;
struct r_format r_format;
+ struct p_format p_format;
struct f_format f_format;
struct ma_format ma_format;
struct b_format b_format;
#define R10000_LLSC_WAR 0
#define MIPS34K_MISSED_ITLB_WAR 0
-#endif /* __ASM_MIPS_MACH_PNX8550_WAR_H */
+#endif /* __ASM_MIPS_MACH_PNX833X_WAR_H */
#else
#define pte_pfn(x) ((unsigned long)((x).pte >> _PFN_SHIFT))
#define pfn_pte(pfn, prot) __pte(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot) __pmd(((pfn) << _PFN_SHIFT) | pgprot_val(prot))
#endif
#define __pgd_offset(address) pgd_index(address)
header-y += auxvec.h
header-y += bitsperlong.h
+header-y += break.h
header-y += byteorder.h
header-y += cachectl.h
header-y += errno.h
#define MCOUNT_OFFSET_INSNS 4
#endif
+/* Arch override because MIPS doesn't need to run this from stop_machine() */
+void arch_ftrace_update_code(int command)
+{
+ ftrace_modify_all_code(command);
+}
+
/*
* Check if the address is in kernel space
*
return 0;
}
+#ifndef CONFIG_64BIT
+static int ftrace_modify_code_2(unsigned long ip, unsigned int new_code1,
+ unsigned int new_code2)
+{
+ int faulted;
+
+ safe_store_code(new_code1, ip, faulted);
+ if (unlikely(faulted))
+ return -EFAULT;
+ ip += 4;
+ safe_store_code(new_code2, ip, faulted);
+ if (unlikely(faulted))
+ return -EFAULT;
+ flush_icache_range(ip, ip + 8); /* original ip + 12 */
+ return 0;
+}
+#endif
+
/*
* The details about the calling site of mcount on MIPS
*
* needed.
*/
new = in_kernel_space(ip) ? INSN_NOP : INSN_B_1F;
-
+#ifdef CONFIG_64BIT
return ftrace_modify_code(ip, new);
+#else
+ /*
+ * On 32 bit MIPS platforms, gcc adds a stack adjust
+ * instruction in the delay slot after the branch to
+ * mcount and expects mcount to restore the sp on return.
+ * This is based on a legacy API and does nothing but
+ * waste instructions so it's being removed at runtime.
+ */
+ return ftrace_modify_code_2(ip, new, INSN_NOP);
+#endif
}
int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
PTR_L a5, PT_R9(sp)
PTR_L a6, PT_R10(sp)
PTR_L a7, PT_R11(sp)
- PTR_ADDIU sp, PT_SIZE
#else
- PTR_ADDIU sp, (PT_SIZE + 8)
+ PTR_ADDIU sp, PT_SIZE
#endif
.endm
.globl _mcount
_mcount:
b ftrace_stub
- nop
+ addiu sp,sp,8
+
+ /* When tracing is activated, it calls ftrace_caller+8 (aka here) */
lw t1, function_trace_stop
bnez t1, ftrace_stub
nop
printk(KERN_WARNING
"VPE loader: TC %d is already in use.\n",
- t->index);
+ v->tc->index);
return -ENOEXEC;
}
} else {
#endif
/* tell oprofile which irq to use */
- cp0_perfcount_irq = LTQ_PERF_IRQ;
+ cp0_perfcount_irq = irq_create_mapping(ltq_domain, LTQ_PERF_IRQ);
/*
* if the timer irq is not one of the mips irqs we need to
" .set noreorder \n"
" .align 3 \n"
"1: bnez %0, 1b \n"
-#if __SIZEOF_LONG__ == 4
+#if BITS_PER_LONG == 32
" subu %0, 1 \n"
#else
" dsubu %0, 1 \n"
EXPORT_SYMBOL(__ioremap);
EXPORT_SYMBOL(__iounmap);
-
-int __virt_addr_valid(const volatile void *kaddr)
-{
- return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
-}
-EXPORT_SYMBOL_GPL(__virt_addr_valid);
return ret;
}
+
+int __virt_addr_valid(const volatile void *kaddr)
+{
+ return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
+}
+EXPORT_SYMBOL_GPL(__virt_addr_valid);
void __init prom_init(void)
{
- int i, *argv, *envp; /* passed as 32 bit ptrs */
+ int *argv, *envp; /* passed as 32 bit ptrs */
struct psb_info *prom_infop;
+#ifdef CONFIG_SMP
+ int i;
+#endif
/* truncate to 32 bit and sign extend all args */
argv = (int *)(long)(int)fw_arg1;
#include <asm/mach-ath79/pci.h>
#define AR71XX_PCI_MEM_BASE 0x10000000
-#define AR71XX_PCI_MEM_SIZE 0x08000000
+#define AR71XX_PCI_MEM_SIZE 0x07000000
#define AR71XX_PCI_WIN0_OFFS 0x10000000
#define AR71XX_PCI_WIN1_OFFS 0x11000000
#define AR724X_PCI_CTRL_SIZE 0x100
#define AR724X_PCI_MEM_BASE 0x10000000
-#define AR724X_PCI_MEM_SIZE 0x08000000
+#define AR724X_PCI_MEM_SIZE 0x04000000
#define AR724X_PCI_REG_RESET 0x18
#define AR724X_PCI_REG_INT_STATUS 0x4c
mn10300_dcache_flush_inv();
}
+/* Not supported for now */
+static inline int dma_mmap_coherent(struct device *dev,
+ struct vm_area_struct *vma, void *cpu_addr,
+ dma_addr_t dma_addr, size_t size)
+{
+ return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size)
+{
+ return -EINVAL;
+}
+
#endif
/* At the moment, we panic on error for IOMMU resource exhaustion */
#define dma_mapping_error(dev, x) 0
+/* This API cannot be supported on PA-RISC */
+static inline int dma_mmap_coherent(struct device *dev,
+ struct vm_area_struct *vma, void *cpu_addr,
+ dma_addr_t dma_addr, size_t size)
+{
+ return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size)
+{
+ return -EINVAL;
+}
+
#endif
/* Are we being ptraced? */
ldw TASK_FLAGS(%r1),%r19
- ldi (_TIF_SINGLESTEP|_TIF_BLOCKSTEP),%r2
+ ldi _TIF_SYSCALL_TRACE_MASK,%r2
and,COND(=) %r19,%r2,%r0
b,n syscall_restore_rfi
/* sr2 should be set to zero for userspace syscalls */
STREG %r0,TASK_PT_SR2(%r1)
-pt_regs_ok:
LDREG TASK_PT_GR31(%r1),%r2
- depi 3,31,2,%r2 /* ensure return to user mode. */
- STREG %r2,TASK_PT_IAOQ0(%r1)
+ depi 3,31,2,%r2 /* ensure return to user mode. */
+ STREG %r2,TASK_PT_IAOQ0(%r1)
ldo 4(%r2),%r2
STREG %r2,TASK_PT_IAOQ1(%r1)
+ b intr_restore
copy %r25,%r16
+
+pt_regs_ok:
+ LDREG TASK_PT_IAOQ0(%r1),%r2
+ depi 3,31,2,%r2 /* ensure return to user mode. */
+ STREG %r2,TASK_PT_IAOQ0(%r1)
+ LDREG TASK_PT_IAOQ1(%r1),%r2
+ depi 3,31,2,%r2
+ STREG %r2,TASK_PT_IAOQ1(%r1)
b intr_restore
- nop
+ copy %r25,%r16
.import schedule,code
syscall_do_resched:
{
local_irq_disable(); /* PARANOID - should already be disabled */
mtctl(~0UL, 23); /* EIRR : clear all pending external intr */
- claim_cpu_irqs();
#ifdef CONFIG_SMP
- if (!cpu_eiem)
+ if (!cpu_eiem) {
+ claim_cpu_irqs();
cpu_eiem = EIEM_MASK(IPI_IRQ) | EIEM_MASK(TIMER_IRQ);
+ }
#else
+ claim_cpu_irqs();
cpu_eiem = EIEM_MASK(TIMER_IRQ);
#endif
set_eiem(cpu_eiem); /* EIEM : enable all external intr */
#include <asm/asm-offsets.h>
/* PSW bits we allow the debugger to modify */
-#define USER_PSW_BITS (PSW_N | PSW_V | PSW_CB)
+#define USER_PSW_BITS (PSW_N | PSW_B | PSW_V | PSW_CB)
/*
* Called by kernel/ptrace.c when detaching..
DBG(1,"get_sigframe: ka = %#lx, sp = %#lx, frame_size = %#lx\n",
(unsigned long)ka, sp, frame_size);
+ /* Align alternate stack and reserve 64 bytes for the signal
+ handler's frame marker. */
if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && ! sas_ss_flags(sp))
- sp = current->sas_ss_sp; /* Stacks grow up! */
+ sp = (current->sas_ss_sp + 0x7f) & ~0x3f; /* Stacks grow up! */
DBG(1,"get_sigframe: Returning sp = %#lx\n", (unsigned long)sp);
return (void __user *) sp; /* Stacks grow up. Fun. */
Sgl_isinexact_to_fix(sgl_value,exponent)
#define Duint_from_sgl_mantissa(sgl_value,exponent,dresultA,dresultB) \
- {Sall(sgl_value) <<= SGL_EXP_LENGTH; /* left-justify */ \
+ {unsigned int val = Sall(sgl_value) << SGL_EXP_LENGTH; \
if (exponent <= 31) { \
- Dintp1(dresultA) = 0; \
- Dintp2(dresultB) = (unsigned)Sall(sgl_value) >> (31 - exponent); \
+ Dintp1(dresultA) = 0; \
+ Dintp2(dresultB) = val >> (31 - exponent); \
} \
else { \
- Dintp1(dresultA) = Sall(sgl_value) >> (63 - exponent); \
- Dintp2(dresultB) = Sall(sgl_value) << (exponent - 31); \
+ Dintp1(dresultA) = val >> (63 - exponent); \
+ Dintp2(dresultB) = exponent <= 62 ? val << (exponent - 31) : 0; \
} \
- Sall(sgl_value) >>= SGL_EXP_LENGTH; /* return to original */ \
}
#define Duint_setzero(dresultA,dresultB) \
ret_from_kernel_thread:
REST_NVGPRS(r1)
bl schedule_tail
+ li r3,0
+ stw r3,0(r1)
mtlr r14
mr r3,r15
PPC440EP_ERR42
ld r4,TI_FLAGS(r9)
andi. r0,r4,_TIF_NEED_RESCHED
bne 1b
+
+ /*
+ * arch_local_irq_restore() from preempt_schedule_irq above may
+	 * enable hard interrupts, but we really should disable interrupts
+	 * when we return from the interrupt so that we don't get
+ * interrupted after loading SRR0/1.
+ */
+#ifdef CONFIG_PPC_BOOK3E
+ wrteei 0
+#else
+ ld r10,PACAKMSR(r13) /* Get kernel MSR without EE */
+ mtmsrd r10,1 /* Update machine state */
+#endif /* CONFIG_PPC_BOOK3E */
#endif /* CONFIG_PREEMPT */
.globl fast_exc_return_irq
static int kgdb_singlestep(struct pt_regs *regs)
{
struct thread_info *thread_info, *exception_thread_info;
- struct thread_info *backup_current_thread_info = \
- (struct thread_info *)kmalloc(sizeof(struct thread_info), GFP_KERNEL);
+ struct thread_info *backup_current_thread_info;
if (user_mode(regs))
return 0;
+ backup_current_thread_info = (struct thread_info *)kmalloc(sizeof(struct thread_info), GFP_KERNEL);
/*
* On Book E and perhaps other processors, singlestep is handled on
* the critical exception stack. This causes current_thread_info()
/* Restore current_thread_info lastly. */
memcpy(exception_thread_info, backup_current_thread_info, sizeof *thread_info);
+ kfree(backup_current_thread_info);
return 1;
}
set_dec(DECREMENTER_MAX);
/* Some implementations of hotplug will get timer interrupts while
- * offline, just ignore these
+	 * offline, just ignore these. We also need to set
+	 * decrementers_next_tb to MAX to make sure __check_irq_replay
+	 * doesn't replay the timer interrupt on return, otherwise we'll
+	 * trap here infinitely :(
*/
- if (!cpu_online(smp_processor_id()))
+ if (!cpu_online(smp_processor_id())) {
+ *next_tb = ~(u64)0;
return;
+ }
/* Conditionally hard-enable interrupts now that the DEC has been
* bumped to its maximum value
#define OP_31_XOP_TRAP 4
#define OP_31_XOP_LWZX 23
#define OP_31_XOP_TRAP_64 68
+#define OP_31_XOP_DCBF 86
#define OP_31_XOP_LBZX 87
#define OP_31_XOP_STWX 151
#define OP_31_XOP_STBX 215
emulated = kvmppc_emulate_mtspr(vcpu, sprn, rs);
break;
+ case OP_31_XOP_DCBF:
case OP_31_XOP_DCBI:
/* Do nothing. The guest is performing dcbi because
* hardware DMA is not snooped by the dcache, but
sldi r29,r5,SID_SHIFT - VPN_SHIFT
rldicl r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT)
or r29,r28,r29
-
- /* Calculate hash value for primary slot and store it in r28 */
- rldicl r5,r5,0,25 /* vsid & 0x0000007fffffffff */
- rldicl r0,r3,64-12,48 /* (ea >> 12) & 0xffff */
- xor r28,r5,r0
+ /*
+ * Calculate hash value for primary slot and store it in r28
+ * r3 = va, r5 = vsid
+ * r0 = (va >> 12) & ((1ul << (28 - 12)) -1)
+ */
+ rldicl r0,r3,64-12,48
+ xor r28,r5,r0 /* hash */
b 4f
3: /* Calc vpn and put it in r29 */
/*
* calculate hash value for primary slot and
* store it in r28 for 1T segment
+ * r3 = va, r5 = vsid
*/
- rldic r28,r5,25,25 /* (vsid << 25) & 0x7fffffffff */
- clrldi r5,r5,40 /* vsid & 0xffffff */
- rldicl r0,r3,64-12,36 /* (ea >> 12) & 0xfffffff */
- xor r28,r28,r5
+ sldi r28,r5,25 /* vsid << 25 */
+ /* r0 = (va >> 12) & ((1ul << (40 - 12)) -1) */
+ rldicl r0,r3,64-12,36
+ xor r28,r28,r5 /* vsid ^ ( vsid << 25) */
xor r28,r28,r0 /* hash */
/* Convert linux PTE bits into HW equivalents */
*/
rldicl r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT)
or r29,r28,r29
-
- /* Calculate hash value for primary slot and store it in r28 */
- rldicl r5,r5,0,25 /* vsid & 0x0000007fffffffff */
- rldicl r0,r3,64-12,48 /* (ea >> 12) & 0xffff */
- xor r28,r5,r0
+ /*
+ * Calculate hash value for primary slot and store it in r28
+ * r3 = va, r5 = vsid
+ * r0 = (va >> 12) & ((1ul << (28 - 12)) -1)
+ */
+ rldicl r0,r3,64-12,48
+ xor r28,r5,r0 /* hash */
b 4f
3: /* Calc vpn and put it in r29 */
/*
* Calculate hash value for primary slot and
* store it in r28 for 1T segment
+ * r3 = va, r5 = vsid
*/
- rldic r28,r5,25,25 /* (vsid << 25) & 0x7fffffffff */
- clrldi r5,r5,40 /* vsid & 0xffffff */
- rldicl r0,r3,64-12,36 /* (ea >> 12) & 0xfffffff */
- xor r28,r28,r5
+ sldi r28,r5,25 /* vsid << 25 */
+ /* r0 = (va >> 12) & ((1ul << (40 - 12)) -1) */
+ rldicl r0,r3,64-12,36
+ xor r28,r28,r5 /* vsid ^ ( vsid << 25) */
xor r28,r28,r0 /* hash */
/* Convert linux PTE bits into HW equivalents */
rldicl r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT - VPN_SHIFT)
or r29,r28,r29
- /* Calculate hash value for primary slot and store it in r28 */
- rldicl r5,r5,0,25 /* vsid & 0x0000007fffffffff */
- rldicl r0,r3,64-16,52 /* (ea >> 16) & 0xfff */
- xor r28,r5,r0
+ /* Calculate hash value for primary slot and store it in r28
+ * r3 = va, r5 = vsid
+ * r0 = (va >> 16) & ((1ul << (28 - 16)) -1)
+ */
+ rldicl r0,r3,64-16,52
+ xor r28,r5,r0 /* hash */
b 4f
3: /* Calc vpn and put it in r29 */
sldi r29,r5,SID_SHIFT_1T - VPN_SHIFT
rldicl r28,r3,64 - VPN_SHIFT,64 - (SID_SHIFT_1T - VPN_SHIFT)
or r29,r28,r29
-
/*
* calculate hash value for primary slot and
* store it in r28 for 1T segment
+ * r3 = va, r5 = vsid
*/
- rldic r28,r5,25,25 /* (vsid << 25) & 0x7fffffffff */
- clrldi r5,r5,40 /* vsid & 0xffffff */
- rldicl r0,r3,64-16,40 /* (ea >> 16) & 0xffffff */
- xor r28,r28,r5
+ sldi r28,r5,25 /* vsid << 25 */
+ /* r0 = (va >> 16) & ((1ul << (40 - 16)) -1) */
+ rldicl r0,r3,64-16,40
+ xor r28,r28,r5 /* vsid ^ ( vsid << 25) */
xor r28,r28,r0 /* hash */
/* Convert linux PTE bits into HW equivalents */
for (pmc = 0; pmc < 4; pmc++) {
psel = mmcr1 & (OPROFILE_PM_PMCSEL_MSK
<< (OPROFILE_MAX_PMC_NUM - pmc)
- * OPROFILE_MAX_PMC_NUM);
+ * OPROFILE_PMSEL_FIELD_WIDTH);
psel = (psel >> ((OPROFILE_MAX_PMC_NUM - pmc)
* OPROFILE_PMSEL_FIELD_WIDTH)) & ~1ULL;
unit = mmcr1 & (OPROFILE_PM_UNIT_MSK
static int pas_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
+ /*
+ * We don't support CPU hotplug. Don't unmap after the system
+ * has already made it to a running state.
+ */
+ if (system_state != SYSTEM_BOOTING)
+ return 0;
+
if (sdcasr_mapbase)
iounmap(sdcasr_mapbase);
if (sdcpwr_mapbase)
__pmd_idte(address, pmdp);
}
+#define __HAVE_ARCH_PMDP_SET_WRPROTECT
+static inline void pmdp_set_wrprotect(struct mm_struct *mm,
+ unsigned long address, pmd_t *pmdp)
+{
+ pmd_t pmd = *pmdp;
+
+ if (pmd_write(pmd)) {
+ __pmd_idte(address, pmdp);
+ set_pmd_at(mm, address, pmdp, pmd_wrprotect(pmd));
+ }
+}
+
static inline pmd_t mk_pmd_phys(unsigned long physpage, pgprot_t pgprot)
{
pmd_t __pmd;
config OLPC_XO1_SCI
bool "OLPC XO-1 SCI extras"
depends on OLPC && OLPC_XO1_PM
+ depends on INPUT=y
select POWER_SUPPLY
select GPIO_CS5535
select MFD_CORE
$(obj)/bzImage: asflags-y := $(SVGA_MODE)
quiet_cmd_image = BUILD $@
-cmd_image = $(obj)/tools/build $(obj)/setup.bin $(obj)/vmlinux.bin > $@
+cmd_image = $(obj)/tools/build $(obj)/setup.bin $(obj)/vmlinux.bin $(obj)/zoffset.h > $@
$(obj)/bzImage: $(obj)/setup.bin $(obj)/vmlinux.bin $(obj)/tools/build FORCE
$(call if_changed,image)
$(obj)/voffset.h: vmlinux FORCE
$(call if_changed,voffset)
-sed-zoffset := -e 's/^\([0-9a-fA-F]*\) . \(startup_32\|input_data\|_end\|z_.*\)$$/\#define ZO_\2 0x\1/p'
+sed-zoffset := -e 's/^\([0-9a-fA-F]*\) . \(startup_32\|startup_64\|efi_pe_entry\|efi_stub_entry\|input_data\|_end\|z_.*\)$$/\#define ZO_\2 0x\1/p'
quiet_cmd_zoffset = ZOFFSET $@
cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@
int i;
struct setup_data *data;
- data = (struct setup_data *)params->hdr.setup_data;
+ data = (struct setup_data *)(unsigned long)params->hdr.setup_data;
while (data && data->next)
- data = (struct setup_data *)data->next;
+ data = (struct setup_data *)(unsigned long)data->next;
status = efi_call_phys5(sys_table->boottime->locate_handle,
EFI_LOCATE_BY_PROTOCOL, &pci_proto,
if (!pci)
continue;
+#ifdef CONFIG_X86_64
status = efi_call_phys4(pci->attributes, pci,
EfiPciIoAttributeOperationGet, 0,
&attributes);
-
+#else
+ status = efi_call_phys5(pci->attributes, pci,
+ EfiPciIoAttributeOperationGet, 0, 0,
+ &attributes);
+#endif
if (status != EFI_SUCCESS)
continue;
- if (!(attributes & EFI_PCI_IO_ATTRIBUTE_EMBEDDED_ROM))
- continue;
-
if (!pci->romimage || !pci->romsize)
continue;
memcpy(rom->romdata, pci->romimage, pci->romsize);
if (data)
- data->next = (uint64_t)rom;
+ data->next = (unsigned long)rom;
else
- params->hdr.setup_data = (uint64_t)rom;
+ params->hdr.setup_data = (unsigned long)rom;
data = (struct setup_data *)rom;
* Once we've found a GOP supporting ConOut,
* don't bother looking any further.
*/
+ first_gop = gop;
if (conout_found)
break;
-
- first_gop = gop;
}
}
#ifdef CONFIG_EFI_STUB
jmp preferred_addr
- .balign 0x10
/*
* We don't need the return address, so set up the stack so
- * efi_main() can find its arugments.
+ * efi_main() can find its arguments.
*/
+ENTRY(efi_pe_entry)
add $0x4, %esp
call make_boot_params
pushl %eax
pushl %esi
pushl %ecx
+ sub $0x4, %esp
- .org 0x30,0x90
+ENTRY(efi_stub_entry)
+ add $0x4, %esp
call efi_main
cmpl $0, %eax
movl %eax, %esi
*/
#ifdef CONFIG_EFI_STUB
/*
- * The entry point for the PE/COFF executable is 0x210, so only
- * legacy boot loaders will execute this jmp.
+ * The entry point for the PE/COFF executable is efi_pe_entry, so
+ * only legacy boot loaders will execute this jmp.
*/
jmp preferred_addr
- .org 0x210
+ENTRY(efi_pe_entry)
mov %rcx, %rdi
mov %rdx, %rsi
pushq %rdi
popq %rsi
popq %rdi
- .org 0x230,0x90
+ENTRY(efi_stub_entry)
call efi_main
movq %rax,%rsi
cmpq $0,%rax
#include <asm/e820.h>
#include <asm/page_types.h>
#include <asm/setup.h>
+#include <asm/bootparam.h>
#include "boot.h"
#include "voffset.h"
#include "zoffset.h"
# header, from the old boot sector.
.section ".header", "a"
+ .globl sentinel
+sentinel: .byte 0xff, 0xff /* Used to detect broken loaders */
+
.globl hdr
hdr:
setup_sects: .byte 0 /* Filled in by build.c */
# Part 2 of the header, from the old setup.S
.ascii "HdrS" # header signature
- .word 0x020b # header version number (>= 0x0105)
+ .word 0x020c # header version number (>= 0x0105)
# or else old loadlin-1.5 will fail)
.globl realmode_swtch
realmode_swtch: .word 0, 0 # default_switch, SETUPSEG
# flags, unused bits must be zero (RFU) bit within loadflags
loadflags:
-LOADED_HIGH = 1 # If set, the kernel is loaded high
-CAN_USE_HEAP = 0x80 # If set, the loader also has set
- # heap_end_ptr to tell how much
- # space behind setup.S can be used for
- # heap purposes.
- # Only the loader knows what is free
- .byte LOADED_HIGH
+ .byte LOADED_HIGH # The kernel is to be loaded high
setup_move_size: .word 0x8000 # size to move, when setup is not
# loaded at 0x90000. We will move setup
relocatable_kernel: .byte 0
#endif
min_alignment: .byte MIN_KERNEL_ALIGN_LG2 # minimum alignment
-pad3: .word 0
+
+xloadflags:
+#ifdef CONFIG_X86_64
+# define XLF0 XLF_KERNEL_64 /* 64-bit kernel */
+#else
+# define XLF0 0
+#endif
+#ifdef CONFIG_EFI_STUB
+# ifdef CONFIG_X86_64
+# define XLF23 XLF_EFI_HANDOVER_64 /* 64-bit EFI handover ok */
+# else
+# define XLF23 XLF_EFI_HANDOVER_32 /* 32-bit EFI handover ok */
+# endif
+#else
+# define XLF23 0
+#endif
+ .word XLF0 | XLF23
cmdline_size: .long COMMAND_LINE_SIZE-1 #length of the command line,
#added with boot protocol
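
From the loader side, xloadflags is a plain capability bitmap. A
hypothetical (non-kernel) check, using the field offsets from the boot
protocol: the header version word lives at 0x206 and xloadflags at
0x236.

#include <stdbool.h>
#include <stdint.h>

#define XLF_KERNEL_64			(1 << 0)
#define XLF_CAN_BE_LOADED_ABOVE_4G	(1 << 1)
#define XLF_EFI_HANDOVER_64		(1 << 3)

/* 'version' and 'xloadflags' as read from the bzImage header. */
static bool loader_may_place_above_4g(uint16_t version, uint16_t xloadflags)
{
	if (version < 0x020c)	/* field only exists from protocol 2.12 on */
		return false;
	return (xloadflags & XLF_CAN_BE_LOADED_ABOVE_4G) != 0;
}
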
#define INIT_SIZE VO_INIT_SIZE
#endif
init_size: .long INIT_SIZE # kernel initialization size
-handover_offset: .long 0x30 # offset to the handover
+handover_offset:
+#ifdef CONFIG_EFI_STUB
+ .long 0x30 # offset to the handover
# protocol entry point
+#else
+ .long 0
+#endif
# End of setup header #####################################################
.bstext : { *(.bstext) }
.bsdata : { *(.bsdata) }
- . = 497;
+ . = 495;
.header : { *(.header) }
.entrytext : { *(.entrytext) }
.inittext : { *(.inittext) }
#define PECOFF_RELOC_RESERVE 0x20
+unsigned long efi_stub_entry;
+unsigned long efi_pe_entry;
+unsigned long startup_64;
+
/*----------------------------------------------------------------------*/
static const u32 crctab32[] = {
static void usage(void)
{
- die("Usage: build setup system [> image]");
+ die("Usage: build setup system [zoffset.h] [> image]");
}
#ifdef CONFIG_EFI_STUB
*/
put_unaligned_le32(file_sz - 512, &buf[pe_header + 0x1c]);
-#ifdef CONFIG_X86_32
/*
- * Address of entry point.
- *
- * The EFI stub entry point is +16 bytes from the start of
- * the .text section.
+ * Address of entry point for PE/COFF executable
*/
- put_unaligned_le32(text_start + 16, &buf[pe_header + 0x28]);
-#else
- /*
- * Address of entry point. startup_32 is at the beginning and
- * the 64-bit entry point (startup_64) is always 512 bytes
- * after. The EFI stub entry point is 16 bytes after that, as
- * the first instruction allows legacy loaders to jump over
- * the EFI stub initialisation
- */
- put_unaligned_le32(text_start + 528, &buf[pe_header + 0x28]);
-#endif /* CONFIG_X86_32 */
+ put_unaligned_le32(text_start + efi_pe_entry, &buf[pe_header + 0x28]);
update_pecoff_section_header(".text", text_start, text_sz);
}
#endif /* CONFIG_EFI_STUB */
+
+/*
+ * Parse zoffset.h and find the entry points. We could just #include zoffset.h
+ * but that would mean tools/build would have to be rebuilt every time. It's
+ * not as if parsing it is hard...
+ */
+#define PARSE_ZOFS(p, sym) do { \
+ if (!strncmp(p, "#define ZO_" #sym " ", 11+sizeof(#sym))) \
+ sym = strtoul(p + 11 + sizeof(#sym), NULL, 16); \
+} while (0)
+
+static void parse_zoffset(char *fname)
+{
+ FILE *file;
+ char *p;
+ int c;
+
+ file = fopen(fname, "r");
+ if (!file)
+ die("Unable to open `%s': %m", fname);
+ c = fread(buf, 1, sizeof(buf) - 1, file);
+ if (ferror(file))
+ die("read-error on `zoffset.h'");
+ buf[c] = 0;
+
+ p = (char *)buf;
+
+ while (p && *p) {
+ PARSE_ZOFS(p, efi_stub_entry);
+ PARSE_ZOFS(p, efi_pe_entry);
+ PARSE_ZOFS(p, startup_64);
+
+ p = strchr(p, '\n');
+ while (p && (*p == '\r' || *p == '\n'))
+ p++;
+ }
+}
+
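
The zoffset.h consumed by parse_zoffset() is just the output of the
sed-zoffset rule shown earlier, i.e. lines of the following form
(addresses illustrative only). Note that PARSE_ZOFS depends on the
single space after the symbol name: sizeof(#sym) counts the NUL
terminator, which stands in for that space in the length check.

#define ZO_efi_pe_entry 0x0000000000000210
#define ZO_efi_stub_entry 0x0000000000000230
#define ZO_input_data 0x00000000000041a0
#define ZO_startup_64 0x0000000000000200
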
int main(int argc, char ** argv)
{
unsigned int i, sz, setup_sectors;
void *kernel;
u32 crc = 0xffffffffUL;
- if (argc != 3)
+ /* Defaults for old kernel */
+#ifdef CONFIG_X86_32
+ efi_pe_entry = 0x10;
+ efi_stub_entry = 0x30;
+#else
+ efi_pe_entry = 0x210;
+ efi_stub_entry = 0x230;
+ startup_64 = 0x200;
+#endif
+
+ if (argc == 4)
+ parse_zoffset(argv[3]);
+ else if (argc != 3)
usage();
/* Copy the setup code */
#ifdef CONFIG_EFI_STUB
update_pecoff_text(setup_sectors * 512, sz + i + ((sys_size * 16) - sz));
+
+#ifdef CONFIG_X86_64 /* Yes, this is really how we defined it :( */
+ efi_stub_entry -= 0x200;
+#endif
+ put_unaligned_le32(efi_stub_entry, &buf[0x264]);
#endif
crc = partial_crc32(buf, i, crc);
testl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT),TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
jnz ia32_ret_from_sys_call
TRACE_IRQS_ON
- sti
+ ENABLE_INTERRUPTS(CLBR_NONE)
movl %eax,%esi /* second arg, syscall return value */
cmpl $-MAX_ERRNO,%eax /* is it an error ? */
jbe 1f
call __audit_syscall_exit
movq RAX-ARGOFFSET(%rsp),%rax /* reload syscall return value */
movl $(_TIF_ALLWORK_MASK & ~_TIF_SYSCALL_AUDIT),%edi
- cli
+ DISABLE_INTERRUPTS(CLBR_NONE)
TRACE_IRQS_OFF
testl %edi,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
jz \exit
#endif /* CONFIG_X86_32 */
extern int add_efi_memmap;
+extern unsigned long x86_efi_facility;
extern void efi_set_executable(efi_memory_desc_t *md, bool executable);
extern int efi_memblock_x86_reserve_range(void);
extern void efi_call_phys_prelog(void);
extern const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
struct mm_struct *mm,
unsigned long start,
- unsigned end,
+ unsigned long end,
unsigned int cpu);
#else /* X86_UV */
#ifndef _ASM_X86_BOOTPARAM_H
#define _ASM_X86_BOOTPARAM_H
+/* setup_data types */
+#define SETUP_NONE 0
+#define SETUP_E820_EXT 1
+#define SETUP_DTB 2
+#define SETUP_PCI 3
+
+/* ram_size flags */
+#define RAMDISK_IMAGE_START_MASK 0x07FF
+#define RAMDISK_PROMPT_FLAG 0x8000
+#define RAMDISK_LOAD_FLAG 0x4000
+
+/* loadflags */
+#define LOADED_HIGH (1<<0)
+#define QUIET_FLAG (1<<5)
+#define KEEP_SEGMENTS (1<<6)
+#define CAN_USE_HEAP (1<<7)
+
+/* xloadflags */
+#define XLF_KERNEL_64 (1<<0)
+#define XLF_CAN_BE_LOADED_ABOVE_4G (1<<1)
+#define XLF_EFI_HANDOVER_32 (1<<2)
+#define XLF_EFI_HANDOVER_64 (1<<3)
+
+#ifndef __ASSEMBLY__
+
#include <linux/types.h>
#include <linux/screen_info.h>
#include <linux/apm_bios.h>
#include <asm/ist.h>
#include <video/edid.h>
-/* setup data types */
-#define SETUP_NONE 0
-#define SETUP_E820_EXT 1
-#define SETUP_DTB 2
-#define SETUP_PCI 3
-
/* extensible setup data list node */
struct setup_data {
__u64 next;
__u16 root_flags;
__u32 syssize;
__u16 ram_size;
-#define RAMDISK_IMAGE_START_MASK 0x07FF
-#define RAMDISK_PROMPT_FLAG 0x8000
-#define RAMDISK_LOAD_FLAG 0x4000
__u16 vid_mode;
__u16 root_dev;
__u16 boot_flag;
__u16 kernel_version;
__u8 type_of_loader;
__u8 loadflags;
-#define LOADED_HIGH (1<<0)
-#define QUIET_FLAG (1<<5)
-#define KEEP_SEGMENTS (1<<6)
-#define CAN_USE_HEAP (1<<7)
__u16 setup_move_size;
__u32 code32_start;
__u32 ramdisk_image;
__u32 initrd_addr_max;
__u32 kernel_alignment;
__u8 relocatable_kernel;
- __u8 _pad2[3];
+ __u8 min_alignment;
+ __u16 xloadflags;
__u32 cmdline_size;
__u32 hardware_subarch;
__u64 hardware_subarch_data;
__u8 hd1_info[16]; /* obsolete! */ /* 0x090 */
struct sys_desc_table sys_desc_table; /* 0x0a0 */
struct olpc_ofw_header olpc_ofw_header; /* 0x0b0 */
- __u8 _pad4[128]; /* 0x0c0 */
+ __u32 ext_ramdisk_image; /* 0x0c0 */
+ __u32 ext_ramdisk_size; /* 0x0c4 */
+ __u32 ext_cmd_line_ptr; /* 0x0c8 */
+ __u8 _pad4[116]; /* 0x0cc */
struct edid_info edid_info; /* 0x140 */
struct efi_info efi_info; /* 0x1c0 */
__u32 alt_mem_k; /* 0x1e0 */
__u8 eddbuf_entries; /* 0x1e9 */
__u8 edd_mbr_sig_buf_entries; /* 0x1ea */
__u8 kbd_status; /* 0x1eb */
- __u8 _pad6[5]; /* 0x1ec */
+ __u8 _pad5[3]; /* 0x1ec */
+ /*
+ * The sentinel is set to a nonzero value (0xff) in header.S.
+ *
+	 * A bootloader is supposed to copy only setup_header into a
+	 * clean boot_params buffer. If it is clumsy or too generous
+	 * with the buffer, it will most probably pick up the sentinel
+	 * variable too. A sentinel that is still 0xff then tells the
+	 * kernel that some fields in boot_params are invalid, and the
+	 * kernel should zero out certain portions of boot_params.
+ */
+ __u8 sentinel; /* 0x1ef */
+ __u8 _pad6[1]; /* 0x1f0 */
struct setup_header hdr; /* setup header */ /* 0x1f1 */
__u8 _pad7[0x290-0x1f1-sizeof(struct setup_header)];
__u32 edd_mbr_sig_buffer[EDD_MBR_SIG_MAX]; /* 0x290 */
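
A minimal sketch of the consumer side: on entry the kernel can test the
sentinel and distrust the loader-owned fields if it is still set. The
kernel grew a helper along these lines for this series; which fields
get cleared below is illustrative, not the real list.

/* Sketch: a surviving 0xff sentinel means the loader copied stale
 * bytes around setup_header, so zero the fields it should have
 * cleared.  The field choice here is illustrative only. */
static void sanitize_boot_params_sketch(struct boot_params *bp)
{
	if (bp->sentinel) {
		bp->ext_ramdisk_image = 0;
		bp->ext_ramdisk_size = 0;
		bp->ext_cmd_line_ptr = 0;
	}
}
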
X86_NR_SUBARCHS,
};
-
+#endif /* __ASSEMBLY__ */
#endif /* _ASM_X86_BOOTPARAM_H */
unsigned int);
};
-#ifdef CONFIG_AMD_NB
-
+#if defined(CONFIG_AMD_NB) && defined(CONFIG_SYSFS)
/*
* L3 cache descriptors
*/
static struct _cache_attr subcaches =
__ATTR(subcaches, 0644, show_subcaches, store_subcaches);
-#else /* CONFIG_AMD_NB */
+#else
#define amd_init_l3_cache(x, y)
-#endif /* CONFIG_AMD_NB */
+#endif /* CONFIG_AMD_NB && CONFIG_SYSFS */
static int
__cpuinit cpuid4_cache_lookup_regs(int index,
/* BTS is currently only allowed for user-mode. */
if (!attr->exclude_kernel)
return -EOPNOTSUPP;
-
- if (!attr->exclude_guest)
- return -EOPNOTSUPP;
}
hwc->config |= config;
if (event->attr.precise_ip) {
int precise = 0;
- if (!event->attr.exclude_guest)
- return -EOPNOTSUPP;
-
/* Support for constant skid */
if (x86_pmu.pebs_active && !x86_pmu.pebs_broken) {
precise++;
break;
case 28: /* Atom */
- case 54: /* Cedariew */
+ case 38: /* Lincroft */
+ case 39: /* Penwell */
+ case 53: /* Cloverview */
+ case 54: /* Cedarview */
memcpy(hw_cache_event_ids, atom_hw_cache_event_ids,
sizeof(hw_cache_event_ids));
pr_cont("SandyBridge events, ");
break;
case 58: /* IvyBridge */
+ case 62: /* IvyBridge EP */
memcpy(hw_cache_event_ids, snb_hw_cache_event_ids,
sizeof(hw_cache_event_ids));
memcpy(hw_cache_extra_regs, snb_hw_cache_extra_regs,
};
-static __initconst u64 p6_hw_cache_event_ids
+static u64 p6_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] =
* Leave room for the "copied" frame
*/
subq $(5*8), %rsp
+ CFI_ADJUST_CFA_OFFSET 5*8
/* Copy the stack frame to the Saved frame */
.rept 5
nmi_swapgs:
SWAPGS_UNSAFE_STACK
nmi_restore:
- RESTORE_ALL 8
-
- /* Pop the extra iret frame */
- addq $(5*8), %rsp
+ /* Pop the extra iret frame at once */
+ RESTORE_ALL 6*8
/* Clear the NMI executing stack variable */
movq $0, 5*8(%rsp)
leal -__PAGE_OFFSET(%ecx),%esp
default_entry:
+#define CR0_STATE (X86_CR0_PE | X86_CR0_MP | X86_CR0_ET | \
+ X86_CR0_NE | X86_CR0_WP | X86_CR0_AM | \
+ X86_CR0_PG)
+ movl $(CR0_STATE & ~X86_CR0_PG),%eax
+ movl %eax,%cr0
+
/*
* New page tables may be in 4Mbyte page mode and may
* be using the global pages.
*/
movl $pa(initial_page_table), %eax
movl %eax,%cr3 /* set the page table pointer.. */
- movl %cr0,%eax
- orl $X86_CR0_PG,%eax
+ movl $CR0_STATE,%eax
movl %eax,%cr0 /* ..and set paging (PG) bit */
ljmp $__BOOT_CS,$1f /* Clear prefetch and normalize %eip */
1:
unsigned int cpu;
struct cpuinfo_x86 *c;
+ if (!capable(CAP_SYS_RAWIO))
+ return -EPERM;
+
cpu = iminor(file->f_path.dentry->d_inode);
if (cpu >= nr_cpu_ids || !cpu_online(cpu))
return -ENXIO; /* No such CPU */
EXPORT_SYMBOL(x86_dma_fallback_dev);
/* Number of entries preallocated for DMA-API debugging */
-#define PREALLOC_DMA_DEBUG_ENTRIES 32768
+#define PREALLOC_DMA_DEBUG_ENTRIES 65536
int dma_set_mask(struct device *dev, u64 mask)
{
break;
case BOOT_EFI:
- if (efi_enabled)
+ if (efi_enabled(EFI_RUNTIME_SERVICES))
efi.reset_system(reboot_mode ?
EFI_RESET_WARM :
EFI_RESET_COLD,
#ifdef CONFIG_EFI
if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature,
"EL32", 4)) {
- efi_enabled = 1;
- efi_64bit = false;
+ set_bit(EFI_BOOT, &x86_efi_facility);
} else if (!strncmp((char *)&boot_params.efi_info.efi_loader_signature,
"EL64", 4)) {
- efi_enabled = 1;
- efi_64bit = true;
+ set_bit(EFI_BOOT, &x86_efi_facility);
+ set_bit(EFI_64BIT, &x86_efi_facility);
}
- if (efi_enabled && efi_memblock_x86_reserve_range())
- efi_enabled = 0;
+
+ if (efi_enabled(EFI_BOOT))
+ efi_memblock_x86_reserve_range();
#endif
x86_init.oem.arch_setup();
finish_e820_parsing();
- if (efi_enabled)
+ if (efi_enabled(EFI_BOOT))
efi_init();
dmi_scan_machine();
* The EFI specification says that boot service code won't be called
* after ExitBootServices(). This is, in fact, a lie.
*/
- if (efi_enabled)
+ if (efi_enabled(EFI_MEMMAP))
efi_reserve_boot_services();
/* preallocate 4k for mptable mpc */
#ifdef CONFIG_VT
#if defined(CONFIG_VGA_CONSOLE)
- if (!efi_enabled || (efi_mem_type(0xa0000) != EFI_CONVENTIONAL_MEMORY))
+ if (!efi_enabled(EFI_BOOT) || (efi_mem_type(0xa0000) != EFI_CONVENTIONAL_MEMORY))
conswitchp = &vga_con;
#elif defined(CONFIG_DUMMY_CONSOLE)
conswitchp = &dummy_con;
register_refined_jiffies(CLOCK_TICK_RATE);
#ifdef CONFIG_EFI
- /* Once setup is done above, disable efi_enabled on mismatched
- * firmware/kernel archtectures since there is no support for
- * runtime services.
+ /* Once setup is done above, unmap the EFI memory map on
+	 * mismatched firmware/kernel architectures since there is no
+ * support for runtime services.
*/
- if (efi_enabled && IS_ENABLED(CONFIG_X86_64) != efi_64bit) {
+ if (efi_enabled(EFI_BOOT) &&
+ IS_ENABLED(CONFIG_X86_64) != efi_enabled(EFI_64BIT)) {
pr_info("efi: Setup done, disabling due to 32/64-bit mismatch\n");
efi_unmap_memmap();
- efi_enabled = 0;
}
#endif
}
* Ensure irq/preemption can't change debugctl in between.
* Note also that both TIF_BLOCKSTEP and debugctl should
* be changed atomically wrt preemption.
- * FIXME: this means that set/clear TIF_BLOCKSTEP is simply
- * wrong if task != current, SIGKILL can wakeup the stopped
- * tracee and set/clear can play with the running task, this
- * can confuse the next __switch_to_xtra().
+ *
+ * NOTE: this means that set/clear TIF_BLOCKSTEP is only safe if
+ * task is current or it can't be running, otherwise we can race
+ * with __switch_to_xtra(). We rely on ptrace_freeze_traced() but
+ * PTRACE_KILL is not safe.
*/
local_irq_disable();
debugctl = get_debugctlmsr();
#define EFI_DEBUG 1
-int efi_enabled;
-EXPORT_SYMBOL(efi_enabled);
-
struct efi __read_mostly efi = {
.mps = EFI_INVALID_TABLE_ADDR,
.acpi = EFI_INVALID_TABLE_ADDR,
struct efi_memory_map memmap;
-bool efi_64bit;
-
static struct efi efi_phys __initdata;
static efi_system_table_t efi_systab __initdata;
static inline bool efi_is_native(void)
{
- return IS_ENABLED(CONFIG_X86_64) == efi_64bit;
+ return IS_ENABLED(CONFIG_X86_64) == efi_enabled(EFI_64BIT);
+}
+
+unsigned long x86_efi_facility;
+
+/*
+ * Returns 1 if 'facility' is enabled, 0 otherwise.
+ */
+int efi_enabled(int facility)
+{
+ return test_bit(facility, &x86_efi_facility) != 0;
}
+EXPORT_SYMBOL(efi_enabled);
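
Call sites can then test exactly the facility they depend on instead of
one global flag. A sketch of the resulting pattern (the helper name
enable_runtime_var_ops() is made up for illustration):

static int __init efivars_style_init(void)
{
	/* Losing runtime services (e.g. on a 32/64-bit mismatch) no
	 * longer hides the other facilities from unrelated code. */
	if (!efi_enabled(EFI_RUNTIME_SERVICES))
		return 0;
	return enable_runtime_var_ops();	/* hypothetical */
}
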
static int __init setup_noefi(char *arg)
{
- efi_enabled = 0;
+ clear_bit(EFI_BOOT, &x86_efi_facility);
return 0;
}
early_param("noefi", setup_noefi);
void __init efi_unmap_memmap(void)
{
+ clear_bit(EFI_MEMMAP, &x86_efi_facility);
if (memmap.map) {
early_iounmap(memmap.map, memmap.nr_map * memmap.desc_size);
memmap.map = NULL;
static int __init efi_systab_init(void *phys)
{
- if (efi_64bit) {
+ if (efi_enabled(EFI_64BIT)) {
efi_system_table_64_t *systab64;
u64 tmp = 0;
void *config_tables, *tablep;
int i, sz;
- if (efi_64bit)
+ if (efi_enabled(EFI_64BIT))
sz = sizeof(efi_config_table_64_t);
else
sz = sizeof(efi_config_table_32_t);
efi_guid_t guid;
unsigned long table;
- if (efi_64bit) {
+ if (efi_enabled(EFI_64BIT)) {
u64 table64;
guid = ((efi_config_table_64_t *)tablep)->guid;
table64 = ((efi_config_table_64_t *)tablep)->table;
if (boot_params.efi_info.efi_systab_hi ||
boot_params.efi_info.efi_memmap_hi) {
pr_info("Table located above 4GB, disabling EFI.\n");
- efi_enabled = 0;
return;
}
efi_phys.systab = (efi_system_table_t *)boot_params.efi_info.efi_systab;
((__u64)boot_params.efi_info.efi_systab_hi<<32));
#endif
- if (efi_systab_init(efi_phys.systab)) {
- efi_enabled = 0;
+ if (efi_systab_init(efi_phys.systab))
return;
- }
+
+ set_bit(EFI_SYSTEM_TABLES, &x86_efi_facility);
/*
* Show what we know for posterity
efi.systab->hdr.revision >> 16,
efi.systab->hdr.revision & 0xffff, vendor);
- if (efi_config_init(efi.systab->tables, efi.systab->nr_tables)) {
- efi_enabled = 0;
+ if (efi_config_init(efi.systab->tables, efi.systab->nr_tables))
return;
- }
+
+ set_bit(EFI_CONFIG_TABLES, &x86_efi_facility);
/*
* Note: We currently don't support runtime services on an EFI
if (!efi_is_native())
pr_info("No EFI runtime due to 32/64-bit mismatch with kernel\n");
- else if (efi_runtime_init()) {
- efi_enabled = 0;
- return;
+ else {
+ if (efi_runtime_init())
+ return;
+ set_bit(EFI_RUNTIME_SERVICES, &x86_efi_facility);
}
- if (efi_memmap_init()) {
- efi_enabled = 0;
+ if (efi_memmap_init())
return;
- }
+
+ set_bit(EFI_MEMMAP, &x86_efi_facility);
+
#ifdef CONFIG_X86_32
if (efi_is_native()) {
x86_platform.get_wallclock = efi_get_time;
*
* Call EFI services through wrapper functions.
*/
- efi.runtime_version = efi_systab.fw_revision;
+ efi.runtime_version = efi_systab.hdr.revision;
efi.get_time = virt_efi_get_time;
efi.set_time = virt_efi_set_time;
efi.get_wakeup_time = virt_efi_get_wakeup_time;
efi_memory_desc_t *md;
void *p;
+ if (!efi_enabled(EFI_MEMMAP))
+ return 0;
+
for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
md = p;
if ((md->phys_addr <= phys_addr) &&
#include <asm/cacheflush.h>
#include <asm/fixmap.h>
-static pgd_t save_pgd __initdata;
+static pgd_t *save_pgd __initdata;
static unsigned long efi_flags __initdata;
static void __init early_code_mapping_set_exec(int executable)
void __init efi_call_phys_prelog(void)
{
unsigned long vaddress;
+ int pgd;
+ int n_pgds;
early_code_mapping_set_exec(1);
local_irq_save(efi_flags);
- vaddress = (unsigned long)__va(0x0UL);
- save_pgd = *pgd_offset_k(0x0UL);
- set_pgd(pgd_offset_k(0x0UL), *pgd_offset_k(vaddress));
+
+ n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT), PGDIR_SIZE);
+ save_pgd = kmalloc(n_pgds * sizeof(pgd_t), GFP_KERNEL);
+
+ for (pgd = 0; pgd < n_pgds; pgd++) {
+ save_pgd[pgd] = *pgd_offset_k(pgd * PGDIR_SIZE);
+ vaddress = (unsigned long)__va(pgd * PGDIR_SIZE);
+ set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), *pgd_offset_k(vaddress));
+ }
__flush_tlb_all();
}
/*
* After the lock is released, the original page table is restored.
*/
- set_pgd(pgd_offset_k(0x0UL), save_pgd);
+ int pgd;
+ int n_pgds = DIV_ROUND_UP((max_pfn << PAGE_SHIFT) , PGDIR_SIZE);
+ for (pgd = 0; pgd < n_pgds; pgd++)
+ set_pgd(pgd_offset_k(pgd * PGDIR_SIZE), save_pgd[pgd]);
+ kfree(save_pgd);
__flush_tlb_all();
local_irq_restore(efi_flags);
early_code_mapping_set_exec(0);
* globally purge translation cache of a virtual address or all TLB's
* @cpumask: mask of all cpu's in which the address is to be removed
* @mm: mm_struct containing virtual address range
- * @va: virtual address to be removed (or TLB_FLUSH_ALL for all TLB's on cpu)
+ * @start: start virtual address to be removed from TLB
+ * @end: end virtual address to be removed from TLB
* @cpu: the current cpu
*
* This is the entry point for initiating any UV global TLB shootdown.
*/
const struct cpumask *uv_flush_tlb_others(const struct cpumask *cpumask,
struct mm_struct *mm, unsigned long start,
- unsigned end, unsigned int cpu)
+ unsigned long end, unsigned int cpu)
{
int locals = 0;
int remotes = 0;
record_send_statistics(stat, locals, hubs, remotes, bau_desc);
- bau_desc->payload.address = start;
+ if (!end || (end - start) <= PAGE_SIZE)
+ bau_desc->payload.address = start;
+ else
+ bau_desc->payload.address = TLB_FLUSH_ALL;
bau_desc->payload.sending_cpu = cpu;
/*
* uv_flush_send_and_wait returns 0 if all cpu's were messaged,
static void usage(const char *err)
{
if (err)
- fprintf(stderr, "Error: %s\n\n", err);
+ fprintf(stderr, "%s: Error: %s\n\n", prog, err);
fprintf(stderr, "Usage: %s [-y|-n|-v] [-s seed[,no]] [-m max] [-i input]\n", prog);
fprintf(stderr, "\t-y 64bit mode\n");
fprintf(stderr, "\t-n 32bit mode\n");
insns++;
}
- fprintf(stdout, "%s: decoded and checked %d %s instructions with %d errors (seed:0x%x)\n", (errors) ? "Failure" : "Success", insns, (input_file) ? "given" : "random", errors, seed);
+ fprintf(stdout, "%s: %s: decoded and checked %d %s instructions with %d errors (seed:0x%x)\n",
+ prog,
+ (errors) ? "Failure" : "Success",
+ insns,
+ (input_file) ? "given" : "random",
+ errors,
+ seed);
return errors ? 1 : 0;
}
read_relocs(fp);
if (show_absolute_syms) {
print_absolute_symbols();
- return 0;
+ goto out;
}
if (show_absolute_relocs) {
print_absolute_relocs();
- return 0;
+ goto out;
}
emit_relocs(as_text, use_real_mode);
+out:
+ fclose(fp);
return 0;
}
consistent_sync(vaddr, size, direction);
}
+/* Not supported for now */
+static inline int dma_mmap_coherent(struct device *dev,
+ struct vm_area_struct *vma, void *cpu_addr,
+ dma_addr_t dma_addr, size_t size)
+{
+ return -EINVAL;
+}
+
+static inline int dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ void *cpu_addr, dma_addr_t dma_addr,
+ size_t size)
+{
+ return -EINVAL;
+}
+
#endif /* _XTENSA_DMA_MAPPING_H */
static struct device_type disk_type;
+static void disk_check_events(struct disk_events *ev,
+ unsigned int *clearing_ptr);
static void disk_alloc_events(struct gendisk *disk);
static void disk_add_events(struct gendisk *disk);
static void disk_del_events(struct gendisk *disk);
const struct block_device_operations *bdops = disk->fops;
struct disk_events *ev = disk->ev;
unsigned int pending;
+ unsigned int clearing = mask;
if (!ev) {
/* for drivers still using the old ->media_changed method */
return 0;
}
- /* tell the workfn about the events being cleared */
+ disk_block_events(disk);
+
+ /*
+	 * store the union of mask and ev->clearing on the stack so that a
+	 * race with disk_flush_events does not cause ambiguity (ev->clearing
+	 * can still be modified even while events are blocked).
+ */
spin_lock_irq(&ev->lock);
- ev->clearing |= mask;
+ clearing |= ev->clearing;
+ ev->clearing = 0;
spin_unlock_irq(&ev->lock);
- /* uncondtionally schedule event check and wait for it to finish */
- disk_block_events(disk);
- queue_delayed_work(system_freezable_wq, &ev->dwork, 0);
- flush_delayed_work(&ev->dwork);
- __disk_unblock_events(disk, false);
+ disk_check_events(ev, &clearing);
+	/*
+	 * If ev->clearing is not 0, disk_flush_events() was called in the
+	 * middle of this function, so run the workfn without delay.
+	 */
+ __disk_unblock_events(disk, ev->clearing ? true : false);
/* then, fetch and clear pending events */
spin_lock_irq(&ev->lock);
- WARN_ON_ONCE(ev->clearing & mask); /* cleared by workfn */
pending = ev->pending & mask;
ev->pending &= ~mask;
spin_unlock_irq(&ev->lock);
+ WARN_ON_ONCE(clearing & mask);
return pending;
}
+/*
+ * Separate this part out so that disk_clear_events() can pass in its own
+ * pointer for clearing_ptr.
+ */
static void disk_events_workfn(struct work_struct *work)
{
struct delayed_work *dwork = to_delayed_work(work);
struct disk_events *ev = container_of(dwork, struct disk_events, dwork);
+
+ disk_check_events(ev, &ev->clearing);
+}
+
+static void disk_check_events(struct disk_events *ev,
+ unsigned int *clearing_ptr)
+{
struct gendisk *disk = ev->disk;
char *envp[ARRAY_SIZE(disk_uevents) + 1] = { };
- unsigned int clearing = ev->clearing;
+ unsigned int clearing = *clearing_ptr;
unsigned int events;
unsigned long intv;
int nr_events = 0, i;
events &= ~ev->pending;
ev->pending |= events;
- ev->clearing &= ~clearing;
+ *clearing_ptr &= ~clearing;
intv = disk_events_poll_jiffies(disk);
if (!ev->block && intv)
if (bit_width == 32 && bit_offset == 0 && (*paddr & 0x03) == 0 &&
*access_bit_width < 32)
*access_bit_width = 32;
+ else if (bit_width == 64 && bit_offset == 0 && (*paddr & 0x07) == 0 &&
+ *access_bit_width < 64)
+ *access_bit_width = 64;
if ((bit_width + bit_offset) > *access_bit_width) {
pr_warning(FW_BUG APEI_PFX
return acpi_rsdp;
#endif
- if (efi_enabled) {
+ if (efi_enabled(EFI_CONFIG_TABLES)) {
if (efi.acpi20 != EFI_INVALID_TABLE_ADDR)
return efi.acpi20;
else if (efi.acpi != EFI_INVALID_TABLE_ADDR)
return -EINVAL;
}
+ if (!dev)
+ return -EINVAL;
+
dev->cpu = pr->id;
if (max_cstate == 0)
}
/* Populate Updated C-state information */
+ acpi_processor_get_power_info(pr);
acpi_processor_setup_cpuidle_states(pr);
/* Enable all cpuidle devices */
if ((boot_cpu_data.x86 == 0x10 && boot_cpu_data.x86_model < 10)
|| boot_cpu_data.x86 == 0x11) {
rdmsr(MSR_AMD_PSTATE_DEF_BASE + index, lo, hi);
+ /*
+ * MSR C001_0064+:
+ * Bit 63: PstateEn. Read-write. If set, the P-state is valid.
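+	 * (Bit 63 of the MSR is bit 31 of the high word read into "hi".)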
+ */
+ if (!(hi & BIT(31)))
+ return;
+
fid = lo & 0x3f;
did = (lo >> 6) & 7;
if (boot_cpu_data.x86 == 0x10)
enum {
AHCI_PCI_BAR_STA2X11 = 0,
+ AHCI_PCI_BAR_ENMOTUS = 2,
AHCI_PCI_BAR_STANDARD = 5,
};
{ PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci }, /* ASM1061 */
{ PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1062 */
+ /* Enmotus */
+ { PCI_DEVICE(0x1c44, 0x8000), board_ahci },
+
/* Generic, PCI class code for AHCI */
{ PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
PCI_CLASS_STORAGE_SATA_AHCI, 0xffffff, board_ahci },
dev_info(&pdev->dev,
"PDC42819 can only drive SATA devices with this driver\n");
- /* The Connext uses non-standard BAR */
+ /* Both Connext and Enmotus devices use non-standard BARs */
if (pdev->vendor == PCI_VENDOR_ID_STMICRO && pdev->device == 0xCC06)
ahci_pci_bar = AHCI_PCI_BAR_STA2X11;
+ else if (pdev->vendor == 0x1c44 && pdev->device == 0x8000)
+ ahci_pci_bar = AHCI_PCI_BAR_ENMOTUS;
/* acquire resources */
rc = pcim_enable_device(pdev);
/* Use the nominal value 10 ms if the read MDAT is zero,
* the nominal value of DETO is 20 ms.
*/
- if (dev->sata_settings[ATA_LOG_DEVSLP_VALID] &
+ if (dev->devslp_timing[ATA_LOG_DEVSLP_VALID] &
ATA_LOG_DEVSLP_VALID_MASK) {
- mdat = dev->sata_settings[ATA_LOG_DEVSLP_MDAT] &
+ mdat = dev->devslp_timing[ATA_LOG_DEVSLP_MDAT] &
ATA_LOG_DEVSLP_MDAT_MASK;
if (!mdat)
mdat = 10;
- deto = dev->sata_settings[ATA_LOG_DEVSLP_DETO];
+ deto = dev->devslp_timing[ATA_LOG_DEVSLP_DETO];
if (!deto)
deto = 20;
} else {
}
}
- /* check and mark DevSlp capability */
- if (ata_id_has_devslp(dev->id))
- dev->flags |= ATA_DFLAG_DEVSLP;
-
- /* Obtain SATA Settings page from Identify Device Data Log,
- * which contains DevSlp timing variables etc.
- * Exclude old devices with ata_id_has_ncq()
+ /* Check and mark DevSlp capability. Get DevSlp timing variables
+ * from SATA Settings page of Identify Device Data Log.
*/
- if (ata_id_has_ncq(dev->id)) {
+ if (ata_id_has_devslp(dev->id)) {
+ u8 sata_setting[ATA_SECT_SIZE];
+ int i, j;
+
+ dev->flags |= ATA_DFLAG_DEVSLP;
err_mask = ata_read_log_page(dev,
ATA_LOG_SATA_ID_DEV_DATA,
ATA_LOG_SATA_SETTINGS,
- dev->sata_settings,
+ sata_setting,
1);
if (err_mask)
ata_dev_dbg(dev,
"failed to get Identify Device Data, Emask 0x%x\n",
err_mask);
+ else
+ for (i = 0; i < ATA_LOG_DEVSLP_SIZE; i++) {
+ j = ATA_LOG_DEVSLP_OFFSET + i;
+ dev->devslp_timing[i] = sata_setting[j];
+ }
}
dev->cdb_len = 16;
*/
static inline int ata_eh_worth_retry(struct ata_queued_cmd *qc)
{
- if (qc->flags & AC_ERR_MEDIA)
+ if (qc->err_mask & AC_ERR_MEDIA)
return 0; /* don't retry media errors */
if (qc->flags & ATA_QCFLAG_IO)
return 1; /* otherwise retry anything from fs stack */
#define SEG_BASE IPHASE5575_FRAG_CONTROL_REG_BASE
#define REASS_BASE IPHASE5575_REASS_CONTROL_REG_BASE
-typedef volatile u_int freg_t;
+typedef volatile u_int ffreg_t;
typedef u_int rreg_t;
typedef struct _ffredn_t {
- freg_t idlehead_high; /* Idle cell header (high) */
- freg_t idlehead_low; /* Idle cell header (low) */
- freg_t maxrate; /* Maximum rate */
- freg_t stparms; /* Traffic Management Parameters */
- freg_t abrubr_abr; /* ABRUBR Priority Byte 1, TCR Byte 0 */
- freg_t rm_type; /* */
- u_int filler5[0x17 - 0x06];
- freg_t cmd_reg; /* Command register */
- u_int filler18[0x20 - 0x18];
- freg_t cbr_base; /* CBR Pointer Base */
- freg_t vbr_base; /* VBR Pointer Base */
- freg_t abr_base; /* ABR Pointer Base */
- freg_t ubr_base; /* UBR Pointer Base */
- u_int filler24;
- freg_t vbrwq_base; /* VBR Wait Queue Base */
- freg_t abrwq_base; /* ABR Wait Queue Base */
- freg_t ubrwq_base; /* UBR Wait Queue Base */
- freg_t vct_base; /* Main VC Table Base */
- freg_t vcte_base; /* Extended Main VC Table Base */
- u_int filler2a[0x2C - 0x2A];
- freg_t cbr_tab_beg; /* CBR Table Begin */
- freg_t cbr_tab_end; /* CBR Table End */
- freg_t cbr_pointer; /* CBR Pointer */
- u_int filler2f[0x30 - 0x2F];
- freg_t prq_st_adr; /* Packet Ready Queue Start Address */
- freg_t prq_ed_adr; /* Packet Ready Queue End Address */
- freg_t prq_rd_ptr; /* Packet Ready Queue read pointer */
- freg_t prq_wr_ptr; /* Packet Ready Queue write pointer */
- freg_t tcq_st_adr; /* Transmit Complete Queue Start Address*/
- freg_t tcq_ed_adr; /* Transmit Complete Queue End Address */
- freg_t tcq_rd_ptr; /* Transmit Complete Queue read pointer */
- freg_t tcq_wr_ptr; /* Transmit Complete Queue write pointer*/
- u_int filler38[0x40 - 0x38];
- freg_t queue_base; /* Base address for PRQ and TCQ */
- freg_t desc_base; /* Base address of descriptor table */
- u_int filler42[0x45 - 0x42];
- freg_t mode_reg_0; /* Mode register 0 */
- freg_t mode_reg_1; /* Mode register 1 */
- freg_t intr_status_reg;/* Interrupt Status register */
- freg_t mask_reg; /* Mask Register */
- freg_t cell_ctr_high1; /* Total cell transfer count (high) */
- freg_t cell_ctr_lo1; /* Total cell transfer count (low) */
- freg_t state_reg; /* Status register */
- u_int filler4c[0x58 - 0x4c];
- freg_t curr_desc_num; /* Contains the current descriptor num */
- freg_t next_desc; /* Next descriptor */
- freg_t next_vc; /* Next VC */
- u_int filler5b[0x5d - 0x5b];
- freg_t present_slot_cnt;/* Present slot count */
- u_int filler5e[0x6a - 0x5e];
- freg_t new_desc_num; /* New descriptor number */
- freg_t new_vc; /* New VC */
- freg_t sched_tbl_ptr; /* Schedule table pointer */
- freg_t vbrwq_wptr; /* VBR wait queue write pointer */
- freg_t vbrwq_rptr; /* VBR wait queue read pointer */
- freg_t abrwq_wptr; /* ABR wait queue write pointer */
- freg_t abrwq_rptr; /* ABR wait queue read pointer */
- freg_t ubrwq_wptr; /* UBR wait queue write pointer */
- freg_t ubrwq_rptr; /* UBR wait queue read pointer */
- freg_t cbr_vc; /* CBR VC */
- freg_t vbr_sb_vc; /* VBR SB VC */
- freg_t abr_sb_vc; /* ABR SB VC */
- freg_t ubr_sb_vc; /* UBR SB VC */
- freg_t vbr_next_link; /* VBR next link */
- freg_t abr_next_link; /* ABR next link */
- freg_t ubr_next_link; /* UBR next link */
- u_int filler7a[0x7c-0x7a];
- freg_t out_rate_head; /* Out of rate head */
- u_int filler7d[0xca-0x7d]; /* pad out to full address space */
- freg_t cell_ctr_high1_nc;/* Total cell transfer count (high) */
- freg_t cell_ctr_lo1_nc;/* Total cell transfer count (low) */
- u_int fillercc[0x100-0xcc]; /* pad out to full address space */
+ ffreg_t idlehead_high; /* Idle cell header (high) */
+ ffreg_t idlehead_low; /* Idle cell header (low) */
+ ffreg_t maxrate; /* Maximum rate */
+ ffreg_t stparms; /* Traffic Management Parameters */
+ ffreg_t abrubr_abr; /* ABRUBR Priority Byte 1, TCR Byte 0 */
+ ffreg_t rm_type; /* */
+ u_int filler5[0x17 - 0x06];
+ ffreg_t cmd_reg; /* Command register */
+ u_int filler18[0x20 - 0x18];
+ ffreg_t cbr_base; /* CBR Pointer Base */
+ ffreg_t vbr_base; /* VBR Pointer Base */
+ ffreg_t abr_base; /* ABR Pointer Base */
+ ffreg_t ubr_base; /* UBR Pointer Base */
+ u_int filler24;
+ ffreg_t vbrwq_base; /* VBR Wait Queue Base */
+ ffreg_t abrwq_base; /* ABR Wait Queue Base */
+ ffreg_t ubrwq_base; /* UBR Wait Queue Base */
+ ffreg_t vct_base; /* Main VC Table Base */
+ ffreg_t vcte_base; /* Extended Main VC Table Base */
+ u_int filler2a[0x2C - 0x2A];
+ ffreg_t cbr_tab_beg; /* CBR Table Begin */
+ ffreg_t cbr_tab_end; /* CBR Table End */
+ ffreg_t cbr_pointer; /* CBR Pointer */
+ u_int filler2f[0x30 - 0x2F];
+ ffreg_t prq_st_adr; /* Packet Ready Queue Start Address */
+ ffreg_t prq_ed_adr; /* Packet Ready Queue End Address */
+ ffreg_t prq_rd_ptr; /* Packet Ready Queue read pointer */
+ ffreg_t prq_wr_ptr; /* Packet Ready Queue write pointer */
+ ffreg_t tcq_st_adr; /* Transmit Complete Queue Start Address*/
+ ffreg_t tcq_ed_adr; /* Transmit Complete Queue End Address */
+ ffreg_t tcq_rd_ptr; /* Transmit Complete Queue read pointer */
+ ffreg_t tcq_wr_ptr; /* Transmit Complete Queue write pointer*/
+ u_int filler38[0x40 - 0x38];
+ ffreg_t queue_base; /* Base address for PRQ and TCQ */
+ ffreg_t desc_base; /* Base address of descriptor table */
+ u_int filler42[0x45 - 0x42];
+ ffreg_t mode_reg_0; /* Mode register 0 */
+ ffreg_t mode_reg_1; /* Mode register 1 */
+ ffreg_t intr_status_reg;/* Interrupt Status register */
+ ffreg_t mask_reg; /* Mask Register */
+ ffreg_t cell_ctr_high1; /* Total cell transfer count (high) */
+ ffreg_t cell_ctr_lo1; /* Total cell transfer count (low) */
+ ffreg_t state_reg; /* Status register */
+ u_int filler4c[0x58 - 0x4c];
+ ffreg_t curr_desc_num; /* Contains the current descriptor num */
+ ffreg_t next_desc; /* Next descriptor */
+ ffreg_t next_vc; /* Next VC */
+ u_int filler5b[0x5d - 0x5b];
+ ffreg_t present_slot_cnt;/* Present slot count */
+ u_int filler5e[0x6a - 0x5e];
+ ffreg_t new_desc_num; /* New descriptor number */
+ ffreg_t new_vc; /* New VC */
+ ffreg_t sched_tbl_ptr; /* Schedule table pointer */
+ ffreg_t vbrwq_wptr; /* VBR wait queue write pointer */
+ ffreg_t vbrwq_rptr; /* VBR wait queue read pointer */
+ ffreg_t abrwq_wptr; /* ABR wait queue write pointer */
+ ffreg_t abrwq_rptr; /* ABR wait queue read pointer */
+ ffreg_t ubrwq_wptr; /* UBR wait queue write pointer */
+ ffreg_t ubrwq_rptr; /* UBR wait queue read pointer */
+ ffreg_t cbr_vc; /* CBR VC */
+ ffreg_t vbr_sb_vc; /* VBR SB VC */
+ ffreg_t abr_sb_vc; /* ABR SB VC */
+ ffreg_t ubr_sb_vc; /* UBR SB VC */
+ ffreg_t vbr_next_link; /* VBR next link */
+ ffreg_t abr_next_link; /* ABR next link */
+ ffreg_t ubr_next_link; /* UBR next link */
+ u_int filler7a[0x7c-0x7a];
+ ffreg_t out_rate_head; /* Out of rate head */
+ u_int filler7d[0xca-0x7d]; /* pad out to full address space */
+ ffreg_t cell_ctr_high1_nc;/* Total cell transfer count (high) */
+ ffreg_t cell_ctr_lo1_nc;/* Total cell transfer count (low) */
+ u_int fillercc[0x100-0xcc]; /* pad out to full address space */
} ffredn_t;
typedef struct _rfredn_t {
c->max = p - 1;
list_add_tail(&c->list,
&map->debugfs_off_cache);
- } else {
- return base;
}
/*
* @val_count: Number of registers to write
*
* This function is intended to be used for writing a large block of
- * data to be device either in single transfer or multiple transfer.
+ * data to the device either in a single transfer or in multiple transfers.
*
* A value of zero will be returned on success, a negative errno will
* be returned in error cases.
#ifdef CONFIG_BCMA_DRIVER_GPIO
/* driver_gpio.c */
int bcma_gpio_init(struct bcma_drv_cc *cc);
+int bcma_gpio_unregister(struct bcma_drv_cc *cc);
#else
static inline int bcma_gpio_init(struct bcma_drv_cc *cc)
{
return -ENOTSUPP;
}
+static inline int bcma_gpio_unregister(struct bcma_drv_cc *cc)
+{
+ return 0;
+}
#endif /* CONFIG_BCMA_DRIVER_GPIO */
#endif
struct bcma_bus *bus = cc->core->bus;
if (bus->chipinfo.id != BCMA_CHIP_ID_BCM4706 &&
- cc->core->id.rev != 0x38) {
+ cc->core->id.rev != 38) {
bcma_err(bus, "NAND flash on unsupported board!\n");
return -ENOTSUPP;
}
return gpiochip_add(chip);
}
+
+int bcma_gpio_unregister(struct bcma_drv_cc *cc)
+{
+ return gpiochip_remove(&cc->gpio);
+}
void bcma_bus_unregister(struct bcma_bus *bus)
{
struct bcma_device *cores[3];
+ int err;
+
+ err = bcma_gpio_unregister(&bus->drv_cc);
+ if (err == -EBUSY)
+ bcma_err(bus, "Some GPIOs are still in use.\n");
+ else if (err)
+ bcma_err(bus, "Can not unregister GPIO driver: %i\n", err);
cores[0] = bcma_find_core(bus, BCMA_CORE_MIPS_74K);
cores[1] = bcma_find_core(bus, BCMA_CORE_PCIE);
}
/* must hold resource->req_lock */
-static void start_new_tl_epoch(struct drbd_tconn *tconn)
+void start_new_tl_epoch(struct drbd_tconn *tconn)
{
/* no point closing an epoch, if it is empty, anyways. */
if (tconn->current_tle_writes == 0)
int error;
};
+extern void start_new_tl_epoch(struct drbd_tconn *tconn);
extern void drbd_req_destroy(struct kref *kref);
extern void _req_may_be_done(struct drbd_request *req,
struct bio_and_error *m);
enum drbd_state_rv rv = SS_SUCCESS;
enum sanitize_state_warnings ssw;
struct after_state_chg_work *ascw;
+ bool did_remote, should_do_remote;
os = drbd_read_state(mdev);
(os.disk != D_DISKLESS && ns.disk == D_DISKLESS))
atomic_inc(&mdev->local_cnt);
+ did_remote = drbd_should_do_remote(mdev->state);
mdev->state.i = ns.i;
+ should_do_remote = drbd_should_do_remote(mdev->state);
mdev->tconn->susp = ns.susp;
mdev->tconn->susp_nod = ns.susp_nod;
mdev->tconn->susp_fen = ns.susp_fen;
+	/* put replicated vs not-replicated requests in separate epochs */
+ if (did_remote != should_do_remote)
+ start_new_tl_epoch(mdev->tconn);
+
if (os.disk == D_ATTACHING && ns.disk >= D_NEGOTIATING)
drbd_print_uuids(mdev, "attached to UUIDs");
}
}
- if (cmdto_cnt && !test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags)) {
+ if (cmdto_cnt) {
print_tags(port->dd, "timed out", tagaccum, cmdto_cnt);
-
- mtip_restart_port(port);
+ if (!test_bit(MTIP_PF_IC_ACTIVE_BIT, &port->flags)) {
+ mtip_restart_port(port);
+ wake_up_interruptible(&port->svc_wait);
+ }
clear_bit(MTIP_PF_EH_ACTIVE_BIT, &port->flags);
- wake_up_interruptible(&port->svc_wait);
}
if (port->ic_pause_timer) {
* Delete our gendisk structure. This also removes the device
* from /dev
*/
- del_gendisk(dd->disk);
+ if (dd->disk) {
+ if (dd->disk->queue)
+ del_gendisk(dd->disk);
+ else
+ put_disk(dd->disk);
+ }
spin_lock(&rssd_index_lock);
ida_remove(&rssd_index_ida, dd->index);
"Shutting down %s ...\n", dd->disk->disk_name);
/* Delete our gendisk structure, and cleanup the blk queue. */
- del_gendisk(dd->disk);
+ if (dd->disk) {
+ if (dd->disk->queue)
+ del_gendisk(dd->disk);
+ else
+ put_disk(dd->disk);
+ }
+
spin_lock(&rssd_index_lock);
ida_remove(&rssd_index_ida, dd->index);
{
struct virtio_blk *vblk = vdev->priv;
int index = vblk->index;
+ int refc;
/* Prevent config work handler from accessing the device. */
mutex_lock(&vblk->config_lock);
flush_work(&vblk->config_work);
+ refc = atomic_read(&disk_to_dev(vblk->disk)->kobj.kref.refcount);
put_disk(vblk->disk);
mempool_destroy(vblk->pool);
vdev->config->del_vqs(vdev);
kfree(vblk);
- ida_simple_remove(&vd_index_ida, index);
+
+ /* Only free device id if we don't have any users */
+ if (refc == 1)
+ ida_simple_remove(&vd_index_ida, index);
}
#ifdef CONFIG_PM
static void make_response(struct xen_blkif *blkif, u64 id,
unsigned short op, int st);
-#define foreach_grant(pos, rbtree, node) \
- for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node); \
+#define foreach_grant_safe(pos, n, rbtree, node) \
+ for ((pos) = container_of(rb_first((rbtree)), typeof(*(pos)), node), \
+ (n) = rb_next(&(pos)->node); \
&(pos)->node != NULL; \
- (pos) = container_of(rb_next(&(pos)->node), typeof(*(pos)), node))
+ (pos) = container_of(n, typeof(*(pos)), node), \
+ (n) = (&(pos)->node != NULL) ? rb_next(&(pos)->node) : NULL)
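
The unsafe variant kept dereferencing a node after the loop body was
free to rb_erase() and kfree() it; the _safe form caches the successor
before the body runs, the same idea as list_for_each_entry_safe(). The
usage shape, as a sketch:

/* Sketch: 'n' already points at the successor, so the body may
 * erase and free 'pos' without breaking the iteration. */
struct rb_root root = RB_ROOT;
struct persistent_gnt *pos;
struct rb_node *n;

foreach_grant_safe(pos, n, &root, node) {
	rb_erase(&pos->node, &root);
	kfree(pos);
}
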
static void add_persistent_gnt(struct rb_root *root,
struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
struct persistent_gnt *persistent_gnt;
+ struct rb_node *n;
int ret = 0;
int segs_to_unmap = 0;
- foreach_grant(persistent_gnt, root, node) {
+ foreach_grant_safe(persistent_gnt, n, root, node) {
BUG_ON(persistent_gnt->handle ==
BLKBACK_INVALID_HANDLE);
gnttab_set_unmap_op(&unmap[segs_to_unmap],
persistent_gnt->handle);
pages[segs_to_unmap] = persistent_gnt->page;
- rb_erase(&persistent_gnt->node, root);
- kfree(persistent_gnt);
- num--;
if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
!rb_next(&persistent_gnt->node)) {
BUG_ON(ret);
segs_to_unmap = 0;
}
+
+ rb_erase(&persistent_gnt->node, root);
+ kfree(persistent_gnt);
+ num--;
}
BUG_ON(num != 0);
}
{
struct llist_node *all_gnts;
struct grant *persistent_gnt;
+ struct llist_node *n;
/* Prevent new requests being issued until we fix things up. */
spin_lock_irq(&info->io_lock);
/* Remove all persistent grants */
if (info->persistent_gnts_c) {
all_gnts = llist_del_all(&info->persistent_gnts);
- llist_for_each_entry(persistent_gnt, all_gnts, node) {
+ llist_for_each_entry_safe(persistent_gnt, n, all_gnts, node) {
gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
__free_page(pfn_to_page(persistent_gnt->pfn));
kfree(persistent_gnt);
static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
struct blkif_response *bret)
{
- int i;
+ int i = 0;
struct bio_vec *bvec;
struct req_iterator iter;
unsigned long flags;
*/
rq_for_each_segment(bvec, s->request, iter) {
BUG_ON((bvec->bv_offset + bvec->bv_len) > PAGE_SIZE);
- i = offset >> PAGE_SHIFT;
+ if (bvec->bv_offset < offset)
+ i++;
BUG_ON(i >= s->req.u.rw.nr_segments);
shared_data = kmap_atomic(
pfn_to_page(s->grants_used[i]->pfn));
bvec->bv_len);
bvec_kunmap_irq(bvec_data, &flags);
kunmap_atomic(shared_data);
- offset += bvec->bv_len;
+ offset = bvec->bv_offset + bvec->bv_len;
}
}
/* Add the persistent grant into the list of free grants */
{ USB_DEVICE(0x0CF3, 0x311D) },
{ USB_DEVICE(0x13d3, 0x3375) },
{ USB_DEVICE(0x04CA, 0x3005) },
+ { USB_DEVICE(0x04CA, 0x3006) },
+ { USB_DEVICE(0x04CA, 0x3008) },
{ USB_DEVICE(0x13d3, 0x3362) },
{ USB_DEVICE(0x0CF3, 0xE004) },
{ USB_DEVICE(0x0930, 0x0219) },
{ USB_DEVICE(0x0489, 0xe057) },
+ { USB_DEVICE(0x13d3, 0x3393) },
+ { USB_DEVICE(0x0489, 0xe04e) },
+ { USB_DEVICE(0x0489, 0xe056) },
/* Atheros AR5BBU12 with sflash firmware */
{ USB_DEVICE(0x0489, 0xE02C) },
{ USB_DEVICE(0x0cf3, 0x311D), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
/* Atheros AR5BBU22 with sflash firmware */
{ USB_DEVICE(0x0489, 0xE03C), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x0489, 0xe057), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0489, 0xe04e), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0489, 0xe056), .driver_info = BTUSB_ATH3012 },
/* Atheros AR5BBU12 with sflash firmware */
{ USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
/* Disable interrupts for vqs */
vdev->config->reset(vdev);
/* Finish up work that's lined up */
- cancel_work_sync(&portdev->control_work);
+ if (use_multiport(portdev))
+ cancel_work_sync(&portdev->control_work);
list_for_each_entry_safe(port, port2, &portdev->ports, list)
unplug_port(port);
clks = kzalloc(ncpus * sizeof(*clks), GFP_KERNEL);
if (WARN_ON(!clks))
- return;
+ goto clks_out;
for_each_node_by_type(dn, "cpu") {
struct clk_init_data init;
int cpu, err;
if (WARN_ON(!clk_name))
- return;
+ goto bail_out;
err = of_property_read_u32(dn, "reg", &cpu);
if (WARN_ON(err))
- return;
+ goto bail_out;
sprintf(clk_name, "cpu%d", cpu);
parent_clk = of_clk_get(node, 0);
return;
bail_out:
kfree(clks);
+	while (ncpus--)
+ kfree(cpuclk[ncpus].clk_name);
+clks_out:
kfree(cpuclk);
}
config X86_POWERNOW_K8
tristate "AMD Opteron/Athlon64 PowerNow!"
select CPU_FREQ_TABLE
- depends on ACPI && ACPI_PROCESSOR
+ depends on ACPI && ACPI_PROCESSOR && X86_ACPI_CPUFREQ
help
This adds the CPUFreq driver for K8/early Opteron/Athlon64 processors.
Support for K10 and newer processors is now in acpi-cpufreq.
late_initcall(acpi_cpufreq_init);
module_exit(acpi_cpufreq_exit);
+static const struct x86_cpu_id acpi_cpufreq_ids[] = {
+ X86_FEATURE_MATCH(X86_FEATURE_ACPI),
+ X86_FEATURE_MATCH(X86_FEATURE_HW_PSTATE),
+ {}
+};
+MODULE_DEVICE_TABLE(x86cpu, acpi_cpufreq_ids);
+
MODULE_ALIAS("acpi");
}
if (cpu_reg) {
+ rcu_read_lock();
opp = opp_find_freq_ceil(cpu_dev, &freq_Hz);
if (IS_ERR(opp)) {
+ rcu_read_unlock();
pr_err("failed to find OPP for %ld\n", freq_Hz);
return PTR_ERR(opp);
}
volt = opp_get_voltage(opp);
+ rcu_read_unlock();
tol = volt * voltage_tolerance / 100;
volt_old = regulator_get_voltage(cpu_reg);
}
*/
for (i = 0; freq_table[i].frequency != CPUFREQ_TABLE_END; i++)
;
+ rcu_read_lock();
opp = opp_find_freq_exact(cpu_dev,
freq_table[0].frequency * 1000, true);
min_uV = opp_get_voltage(opp);
opp = opp_find_freq_exact(cpu_dev,
freq_table[i-1].frequency * 1000, true);
max_uV = opp_get_voltage(opp);
+ rcu_read_unlock();
ret = regulator_set_voltage_time(cpu_reg, min_uV, max_uV);
if (ret > 0)
transition_latency += ret * 1000;
freq = ret;
if (mpu_reg) {
+ rcu_read_lock();
opp = opp_find_freq_ceil(mpu_dev, &freq);
if (IS_ERR(opp)) {
+ rcu_read_unlock();
dev_err(mpu_dev, "%s: unable to find MPU OPP for %d\n",
__func__, freqs.new);
return -EINVAL;
}
volt = opp_get_voltage(opp);
+ rcu_read_unlock();
tol = volt * OPP_TOLERANCE / 100;
volt_old = regulator_get_voltage(mpu_reg);
}
* @freq: The frequency given to target function
* @flags: Flags handed from devfreq framework.
*
+ * Locking: This function must be called under rcu_read_lock(). opp is an
+ * RCU-protected pointer, so the opp returned remains valid for use with
+ * opp_get_{voltage, freq} only inside the locked region. Use the pointer
+ * (and copy out any values you need) before calling rcu_read_unlock().
*/
struct opp *devfreq_recommended_opp(struct device *dev, unsigned long *freq,
u32 flags)
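
The caller shape that the diffs in this series converge on, for
reference (a sketch; 'dev', 'freq' and 'flags' come from the caller):

struct opp *opp;
unsigned long rate, volt;

rcu_read_lock();
opp = devfreq_recommended_opp(dev, &freq, flags);
if (IS_ERR(opp)) {
	rcu_read_unlock();
	return PTR_ERR(opp);
}
/* copy everything needed out of the OPP before unlocking */
rate = opp_get_freq(opp);
volt = opp_get_voltage(opp);
rcu_read_unlock();	/* opp must not be dereferenced after this */
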
#define EX4210_LV_NUM (LV_2 + 1)
#define EX4x12_LV_NUM (LV_4 + 1)
+/**
+ * struct busfreq_opp_info - opp information for bus
+ * @rate: Frequency in hertz
+ * @volt: Voltage in microvolts corresponding to this OPP
+ */
+struct busfreq_opp_info {
+ unsigned long rate;
+ unsigned long volt;
+};
+
struct busfreq_data {
enum exynos4_busf_type type;
struct device *dev;
bool disabled;
struct regulator *vdd_int;
struct regulator *vdd_mif; /* Exynos4412/4212 only */
- struct opp *curr_opp;
+ struct busfreq_opp_info curr_oppinfo;
struct exynos4_ppmu dmc[2];
struct notifier_block pm_notifier;
};
-static int exynos4210_set_busclk(struct busfreq_data *data, struct opp *opp)
+static int exynos4210_set_busclk(struct busfreq_data *data,
+ struct busfreq_opp_info *oppi)
{
unsigned int index;
unsigned int tmp;
for (index = LV_0; index < EX4210_LV_NUM; index++)
- if (opp_get_freq(opp) == exynos4210_busclk_table[index].clk)
+ if (oppi->rate == exynos4210_busclk_table[index].clk)
break;
if (index == EX4210_LV_NUM)
return 0;
}
-static int exynos4x12_set_busclk(struct busfreq_data *data, struct opp *opp)
+static int exynos4x12_set_busclk(struct busfreq_data *data,
+ struct busfreq_opp_info *oppi)
{
unsigned int index;
unsigned int tmp;
for (index = LV_0; index < EX4x12_LV_NUM; index++)
- if (opp_get_freq(opp) == exynos4x12_mifclk_table[index].clk)
+ if (oppi->rate == exynos4x12_mifclk_table[index].clk)
break;
if (index == EX4x12_LV_NUM)
return -EINVAL;
}
-static int exynos4_bus_setvolt(struct busfreq_data *data, struct opp *opp,
- struct opp *oldopp)
+static int exynos4_bus_setvolt(struct busfreq_data *data,
+ struct busfreq_opp_info *oppi,
+ struct busfreq_opp_info *oldoppi)
{
int err = 0, tmp;
- unsigned long volt = opp_get_voltage(opp);
+ unsigned long volt = oppi->volt;
switch (data->type) {
case TYPE_BUSF_EXYNOS4210:
if (err)
break;
- tmp = exynos4x12_get_intspec(opp_get_freq(opp));
+ tmp = exynos4x12_get_intspec(oppi->rate);
if (tmp < 0) {
err = tmp;
regulator_set_voltage(data->vdd_mif,
- opp_get_voltage(oldopp),
+ oldoppi->volt,
MAX_SAFEVOLT);
break;
}
/* Try to recover */
if (err)
regulator_set_voltage(data->vdd_mif,
- opp_get_voltage(oldopp),
+ oldoppi->volt,
MAX_SAFEVOLT);
break;
default:
struct platform_device *pdev = container_of(dev, struct platform_device,
dev);
struct busfreq_data *data = platform_get_drvdata(pdev);
- struct opp *opp = devfreq_recommended_opp(dev, _freq, flags);
- unsigned long freq = opp_get_freq(opp);
- unsigned long old_freq = opp_get_freq(data->curr_opp);
+ struct opp *opp;
+ unsigned long freq;
+ unsigned long old_freq = data->curr_oppinfo.rate;
+ struct busfreq_opp_info new_oppinfo;
- if (IS_ERR(opp))
+ rcu_read_lock();
+ opp = devfreq_recommended_opp(dev, _freq, flags);
+ if (IS_ERR(opp)) {
+ rcu_read_unlock();
return PTR_ERR(opp);
+ }
+ new_oppinfo.rate = opp_get_freq(opp);
+ new_oppinfo.volt = opp_get_voltage(opp);
+ rcu_read_unlock();
+ freq = new_oppinfo.rate;
if (old_freq == freq)
return 0;
- dev_dbg(dev, "targetting %lukHz %luuV\n", freq, opp_get_voltage(opp));
+ dev_dbg(dev, "targetting %lukHz %luuV\n", freq, new_oppinfo.volt);
mutex_lock(&data->lock);
goto out;
if (old_freq < freq)
- err = exynos4_bus_setvolt(data, opp, data->curr_opp);
+ err = exynos4_bus_setvolt(data, &new_oppinfo,
+ &data->curr_oppinfo);
if (err)
goto out;
if (old_freq != freq) {
switch (data->type) {
case TYPE_BUSF_EXYNOS4210:
- err = exynos4210_set_busclk(data, opp);
+ err = exynos4210_set_busclk(data, &new_oppinfo);
break;
case TYPE_BUSF_EXYNOS4x12:
- err = exynos4x12_set_busclk(data, opp);
+ err = exynos4x12_set_busclk(data, &new_oppinfo);
break;
default:
err = -EINVAL;
goto out;
if (old_freq > freq)
- err = exynos4_bus_setvolt(data, opp, data->curr_opp);
+ err = exynos4_bus_setvolt(data, &new_oppinfo,
+ &data->curr_oppinfo);
if (err)
goto out;
- data->curr_opp = opp;
+ data->curr_oppinfo = new_oppinfo;
out:
mutex_unlock(&data->lock);
return err;
exynos4_read_ppmu(data);
busier_dmc = exynos4_get_busier_dmc(data);
- stat->current_frequency = opp_get_freq(data->curr_opp);
+ stat->current_frequency = data->curr_oppinfo.rate;
if (busier_dmc)
addr = S5P_VA_DMC1;
struct busfreq_data *data = container_of(this, struct busfreq_data,
pm_notifier);
struct opp *opp;
+ struct busfreq_opp_info new_oppinfo;
unsigned long maxfreq = ULONG_MAX;
int err = 0;
data->disabled = true;
+ rcu_read_lock();
opp = opp_find_freq_floor(data->dev, &maxfreq);
+ if (IS_ERR(opp)) {
+ rcu_read_unlock();
+ dev_err(data->dev, "%s: unable to find a min freq\n",
+ __func__);
+ return PTR_ERR(opp);
+ }
+ new_oppinfo.rate = opp_get_freq(opp);
+ new_oppinfo.volt = opp_get_voltage(opp);
+ rcu_read_unlock();
- err = exynos4_bus_setvolt(data, opp, data->curr_opp);
+ err = exynos4_bus_setvolt(data, &new_oppinfo,
+ &data->curr_oppinfo);
if (err)
goto unlock;
switch (data->type) {
case TYPE_BUSF_EXYNOS4210:
- err = exynos4210_set_busclk(data, opp);
+ err = exynos4210_set_busclk(data, &new_oppinfo);
break;
case TYPE_BUSF_EXYNOS4x12:
- err = exynos4x12_set_busclk(data, opp);
+ err = exynos4x12_set_busclk(data, &new_oppinfo);
break;
default:
err = -EINVAL;
if (err)
goto unlock;
- data->curr_opp = opp;
+ data->curr_oppinfo = new_oppinfo;
unlock:
mutex_unlock(&data->lock);
if (err)
}
}
+ rcu_read_lock();
opp = opp_find_freq_floor(dev, &exynos4_devfreq_profile.initial_freq);
if (IS_ERR(opp)) {
+ rcu_read_unlock();
dev_err(dev, "Invalid initial frequency %lu kHz.\n",
exynos4_devfreq_profile.initial_freq);
return PTR_ERR(opp);
}
- data->curr_opp = opp;
+ data->curr_oppinfo.rate = opp_get_freq(opp);
+ data->curr_oppinfo.volt = opp_get_voltage(opp);
+ rcu_read_unlock();
platform_set_drvdata(pdev, data);
break;
}
- imxdmac->hw_chaining = 1;
- if (!imxdma_hw_chain(imxdmac))
- return -EINVAL;
+ imxdmac->hw_chaining = 0;
+
imxdmac->ccr_from_device = (mode | IMX_DMA_TYPE_FIFO) |
((IMX_DMA_MEMSIZE_32 | IMX_DMA_TYPE_LINEAR) << 2) |
CCR_REN;
goto free_resources;
}
}
- dma_sync_single_for_device(dev, dest_dma, PAGE_SIZE, DMA_TO_DEVICE);
+ dma_sync_single_for_device(dev, dest_dma, PAGE_SIZE, DMA_FROM_DEVICE);
/* skip validate if the capability is not present */
if (!dma_has_cap(DMA_XOR_VAL, dma_chan->device->cap_mask))
if (async_tx_test_ack(&dma_desc->txd)) {
list_del(&dma_desc->node);
spin_unlock_irqrestore(&tdc->lock, flags);
+ dma_desc->txd.flags = 0;
return dma_desc;
}
}
TEGRA_APBDMA_AHBSEQ_WRAP_SHIFT;
ahb_seq |= TEGRA_APBDMA_AHBSEQ_BUS_WIDTH_32;
- csr |= TEGRA_APBDMA_CSR_FLOW | TEGRA_APBDMA_CSR_IE_EOC;
+ csr |= TEGRA_APBDMA_CSR_FLOW;
+ if (flags & DMA_PREP_INTERRUPT)
+ csr |= TEGRA_APBDMA_CSR_IE_EOC;
csr |= tdc->dma_sconfig.slave_id << TEGRA_APBDMA_CSR_REQ_SEL_SHIFT;
apb_seq |= TEGRA_APBDMA_APBSEQ_WRAP_WORD_1;
mem += len;
}
sg_req->last_sg = true;
- dma_desc->txd.flags = 0;
+ if (flags & DMA_CTRL_ACK)
+ dma_desc->txd.flags = DMA_CTRL_ACK;
/*
* Make sure that mode should not be conflicting with currently
/*
* Alocate and fill the csrow/channels structs
*/
- mci->csrows = kcalloc(sizeof(*mci->csrows), tot_csrows, GFP_KERNEL);
+ mci->csrows = kcalloc(tot_csrows, sizeof(*mci->csrows), GFP_KERNEL);
if (!mci->csrows)
goto error;
for (row = 0; row < tot_csrows; row++) {
csr->csrow_idx = row;
csr->mci = mci;
csr->nr_channels = tot_channels;
- csr->channels = kcalloc(sizeof(*csr->channels), tot_channels,
+ csr->channels = kcalloc(tot_channels, sizeof(*csr->channels),
GFP_KERNEL);
if (!csr->channels)
goto error;
/*
* Allocate and fill the dimm structs
*/
- mci->dimms = kcalloc(sizeof(*mci->dimms), tot_dimms, GFP_KERNEL);
+ mci->dimms = kcalloc(tot_dimms, sizeof(*mci->dimms), GFP_KERNEL);
if (!mci->dimms)
goto error;
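
The swapped arguments above happened to be harmless numerically, since
the multiplication commutes, but they invert the documented signature:

void *kcalloc(size_t n, size_t size, gfp_t flags);

The element count comes first and the element size second; keeping that
order is what lets readers and static checkers match the allocation
against the array it backs.
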
struct edac_pci_dev_attribute *edac_pci_dev;
edac_pci_dev = (struct edac_pci_dev_attribute *)attr;
- if (edac_pci_dev->show)
+ if (edac_pci_dev->store)
return edac_pci_dev->store(edac_pci_dev->value, buffer, count);
return -EIO;
}
char __iomem *p, *q;
int rc;
- if (efi_enabled) {
+ if (efi_enabled(EFI_CONFIG_TABLES)) {
if (efi.smbios == EFI_INVALID_TABLE_ADDR)
goto error;
err = -EACCES;
break;
case EFI_NOT_FOUND:
- err = -ENOENT;
+ err = -EIO;
break;
default:
err = -EINVAL;
spin_unlock(&efivars->lock);
efivar_unregister(var);
drop_nlink(inode);
+ d_delete(file->f_dentry);
dput(file->f_dentry);
} else {
list_del(&var->list);
spin_unlock(&efivars->lock);
efivar_unregister(var);
- drop_nlink(dir);
+ drop_nlink(dentry->d_inode);
dput(dentry);
return 0;
}
printk(KERN_INFO "EFI Variables Facility v%s %s\n", EFIVARS_VERSION,
EFIVARS_DATE);
- if (!efi_enabled)
+ if (!efi_enabled(EFI_RUNTIME_SERVICES))
return 0;
/* For now we'll register the efi directory at /sys/firmware/efi */
static void __exit
efivars_exit(void)
{
- if (efi_enabled) {
+ if (efi_enabled(EFI_RUNTIME_SERVICES)) {
unregister_efivars(&__efivars);
kobject_put(efi_kobj);
}
/* iBFT 1.03 section 1.4.3.1 mandates that UEFI machines will
* only use ACPI for this */
- if (!efi_enabled)
+ if (!efi_enabled(EFI_BOOT))
find_ibft_in_mem();
if (ibft_addr) {
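All of these call sites move from the old global efi_enabled flag to the efi_enabled(feature) predicate, so each caller now names the specific facility it actually depends on: EFI_BOOT (booted via EFI at all), EFI_CONFIG_TABLES, or EFI_RUNTIME_SERVICES. One plausible shape for such a predicate is a bit test on a facilities mask; the variable name below is an assumption, not the exact in-tree implementation:

    static unsigned long efi_facilities;        /* assumed name for the bitmask */

    int efi_enabled(int feature)
    {
            return test_bit(feature, &efi_facilities) != 0;
    }

    /* Usage then mirrors the hunks above: */
    if (!efi_enabled(EFI_RUNTIME_SERVICES))
            return 0;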
mvchip->membase = devm_request_and_ioremap(&pdev->dev, res);
if (! mvchip->membase) {
dev_err(&pdev->dev, "Cannot ioremap\n");
- kfree(mvchip->chip.label);
return -ENOMEM;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
if (! res) {
dev_err(&pdev->dev, "Cannot get memory resource\n");
- kfree(mvchip->chip.label);
return -ENODEV;
}
mvchip->percpu_membase = devm_request_and_ioremap(&pdev->dev, res);
if (! mvchip->percpu_membase) {
dev_err(&pdev->dev, "Cannot ioremap\n");
- kfree(mvchip->chip.label);
return -ENOMEM;
}
}
mvchip->irqbase = irq_alloc_descs(-1, 0, ngpios, -1);
if (mvchip->irqbase < 0) {
dev_err(&pdev->dev, "no irqs\n");
- kfree(mvchip->chip.label);
return -ENOMEM;
}
mvchip->membase, handle_level_irq);
if (! gc) {
dev_err(&pdev->dev, "Cannot allocate generic irq_chip\n");
- kfree(mvchip->chip.label);
return -ENOMEM;
}
irq_remove_generic_chip(gc, IRQ_MSK(ngpios), IRQ_NOREQUEST,
IRQ_LEVEL | IRQ_NOPROBE);
kfree(gc);
- kfree(mvchip->chip.label);
return -ENODEV;
}
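Every kfree(mvchip->chip.label) deleted above was releasing memory owned by the devres framework; devm_* allocations are freed automatically when probe fails or the device is unbound, so the manual kfree() on each error path risked a double free. The managed-allocation pattern, sketched with an illustrative probe function:

    static int my_probe(struct platform_device *pdev)
    {
            char *label;

            label = devm_kzalloc(&pdev->dev, 32, GFP_KERNEL);
            if (!label)
                    return -ENOMEM;
            /* On any later error, just return: devres frees 'label'. */
            return 0;
    }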
#include <mach/hardware.h>
#include <mach/map.h>
-#include <mach/regs-clock.h>
#include <mach/regs-gpio.h>
#include <plat/cpu.h>
};
#endif
-#if defined(CONFIG_ARCH_EXYNOS4) || defined(CONFIG_ARCH_EXYNOS5)
+#if defined(CONFIG_ARCH_EXYNOS4) || defined(CONFIG_SOC_EXYNOS5250)
static struct samsung_gpio_cfg exynos_gpio_cfg = {
.set_pull = exynos_gpio_setpull,
.get_pull = exynos_gpio_getpull,
};
#endif
-#ifdef CONFIG_ARCH_EXYNOS5
+#ifdef CONFIG_SOC_EXYNOS5250
static struct samsung_gpio_chip exynos5_gpios_1[] = {
{
.chip = {
};
#endif
-#ifdef CONFIG_ARCH_EXYNOS5
+#ifdef CONFIG_SOC_EXYNOS5250
static struct samsung_gpio_chip exynos5_gpios_2[] = {
{
.chip = {
};
#endif
-#ifdef CONFIG_ARCH_EXYNOS5
+#ifdef CONFIG_SOC_EXYNOS5250
static struct samsung_gpio_chip exynos5_gpios_3[] = {
{
.chip = {
};
#endif
-#ifdef CONFIG_ARCH_EXYNOS5
+#ifdef CONFIG_SOC_EXYNOS5250
static struct samsung_gpio_chip exynos5_gpios_4[] = {
{
.chip = {
int i, nr_chips;
int group = 0;
-#ifdef CONFIG_PINCTRL_SAMSUNG
+#if defined(CONFIG_PINCTRL_EXYNOS) || defined(CONFIG_PINCTRL_EXYNOS5440)
/*
* This gpio driver includes device tree support, and there are
* platforms using it. In order to maintain compatibility with those
static const struct of_device_id exynos_pinctrl_ids[] = {
{ .compatible = "samsung,pinctrl-exynos4210", },
{ .compatible = "samsung,pinctrl-exynos4x12", },
+ { .compatible = "samsung,pinctrl-exynos5440", },
};
for_each_matching_node(pctrl_np, exynos_pinctrl_ids)
if (pctrl_np && of_device_is_available(pctrl_np))
config DRM_EXYNOS_FIMD
bool "Exynos DRM FIMD"
- depends on DRM_EXYNOS && !FB_S3C
+ depends on DRM_EXYNOS && !FB_S3C && !ARCH_MULTIPLATFORM
help
Choose this option if you want to use Exynos FIMD for DRM.
config DRM_EXYNOS_IPP
bool "Exynos DRM IPP"
- depends on DRM_EXYNOS
+ depends on DRM_EXYNOS && !ARCH_MULTIPLATFORM
help
Choose this option if you want to use IPP feature for DRM.
#include "exynos_drm_drv.h"
#include "exynos_drm_encoder.h"
-#define MAX_EDID 256
#define to_exynos_connector(x) container_of(x, struct exynos_drm_connector,\
drm_connector)
to_exynos_connector(connector);
struct exynos_drm_manager *manager = exynos_connector->manager;
struct exynos_drm_display_ops *display_ops = manager->display_ops;
- unsigned int count;
+ struct edid *edid = NULL;
+ unsigned int count = 0;
+ int ret;
DRM_DEBUG_KMS("%s\n", __FILE__);
* because lcd panel has only one mode.
*/
if (display_ops->get_edid) {
- int ret;
- void *edid;
-
- edid = kzalloc(MAX_EDID, GFP_KERNEL);
- if (!edid) {
- DRM_ERROR("failed to allocate edid\n");
- return 0;
+ edid = display_ops->get_edid(manager->dev, connector);
+ if (IS_ERR_OR_NULL(edid)) {
+ ret = PTR_ERR(edid);
+ edid = NULL;
+ DRM_ERROR("Panel operation get_edid failed %d\n", ret);
+ goto out;
}
- ret = display_ops->get_edid(manager->dev, connector,
- edid, MAX_EDID);
- if (ret < 0) {
- DRM_ERROR("failed to get edid data.\n");
- kfree(edid);
- edid = NULL;
- return 0;
+ count = drm_add_edid_modes(connector, edid);
+ if (count < 0) {
+ DRM_ERROR("Add edid modes failed %d\n", count);
+ goto out;
}
drm_mode_connector_update_edid_property(connector, edid);
- count = drm_add_edid_modes(connector, edid);
- kfree(edid);
} else {
struct exynos_drm_panel_info *panel;
struct drm_display_mode *mode = drm_mode_create(connector->dev);
count = 1;
}
+out:
+ kfree(edid);
return count;
}
struct exynos_drm_dmabuf_attachment {
struct sg_table sgt;
enum dma_data_direction dir;
+ bool is_mapped;
};
static int exynos_gem_attach_dma_buf(struct dma_buf *dmabuf,
DRM_DEBUG_PRIME("%s\n", __FILE__);
- if (WARN_ON(dir == DMA_NONE))
- return ERR_PTR(-EINVAL);
-
/* just return current sgt if already requested. */
- if (exynos_attach->dir == dir)
+ if (exynos_attach->dir == dir && exynos_attach->is_mapped)
return &exynos_attach->sgt;
- /* reattaching is not allowed. */
- if (WARN_ON(exynos_attach->dir != DMA_NONE))
- return ERR_PTR(-EBUSY);
-
buf = gem_obj->buffer;
if (!buf) {
DRM_ERROR("buffer is null.\n");
wr = sg_next(wr);
}
- nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
- if (!nents) {
- DRM_ERROR("failed to map sgl with iommu.\n");
- sgt = ERR_PTR(-EIO);
- goto err_unlock;
+ if (dir != DMA_NONE) {
+ nents = dma_map_sg(attach->dev, sgt->sgl, sgt->orig_nents, dir);
+ if (!nents) {
+ DRM_ERROR("failed to map sgl with iommu.\n");
+ sg_free_table(sgt);
+ sgt = ERR_PTR(-EIO);
+ goto err_unlock;
+ }
}
+ exynos_attach->is_mapped = true;
exynos_attach->dir = dir;
attach->priv = exynos_attach;
struct exynos_drm_display_ops {
enum exynos_drm_output_type type;
bool (*is_connected)(struct device *dev);
- int (*get_edid)(struct device *dev, struct drm_connector *connector,
- u8 *edid, int len);
+ struct edid *(*get_edid)(struct device *dev,
+ struct drm_connector *connector);
void *(*get_panel)(struct device *dev);
int (*check_timing)(struct device *dev, void *timing);
int (*power_on)(struct device *dev, int mode);
g2d_userptr = NULL;
}
-dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
+static dma_addr_t *g2d_userptr_get_dma_addr(struct drm_device *drm_dev,
unsigned long userptr,
unsigned long size,
struct drm_file *filp,
return false;
}
-static int drm_hdmi_get_edid(struct device *dev,
- struct drm_connector *connector, u8 *edid, int len)
+static struct edid *drm_hdmi_get_edid(struct device *dev,
+ struct drm_connector *connector)
{
struct drm_hdmi_context *ctx = to_context(dev);
DRM_DEBUG_KMS("%s\n", __FILE__);
if (hdmi_ops && hdmi_ops->get_edid)
- return hdmi_ops->get_edid(ctx->hdmi_ctx->ctx, connector, edid,
- len);
+ return hdmi_ops->get_edid(ctx->hdmi_ctx->ctx, connector);
- return 0;
+ return NULL;
}
static int drm_hdmi_check_timing(struct device *dev, void *timing)
struct exynos_hdmi_ops {
/* display */
bool (*is_connected)(void *ctx);
- int (*get_edid)(void *ctx, struct drm_connector *connector,
- u8 *edid, int len);
+ struct edid *(*get_edid)(void *ctx,
+ struct drm_connector *connector);
int (*check_timing)(void *ctx, void *timing);
int (*power_on)(void *ctx, int mode);
}
}
-void ipp_handle_cmd_work(struct device *dev,
+static void ipp_handle_cmd_work(struct device *dev,
struct exynos_drm_ippdrv *ippdrv,
struct drm_exynos_ipp_cmd_work *cmd_work,
struct drm_exynos_ipp_cmd_node *c_node)
return 0;
}
-struct rot_limit_table rot_limit_tbl = {
+static struct rot_limit_table rot_limit_tbl = {
.ycbcr420_2p = {
.min_w = 32,
.min_h = 32,
},
};
-struct platform_device_id rotator_driver_ids[] = {
+static struct platform_device_id rotator_driver_ids[] = {
{
.name = "exynos-rot",
.driver_data = (unsigned long)&rot_limit_tbl,
return ctx->connected ? true : false;
}
-static int vidi_get_edid(struct device *dev, struct drm_connector *connector,
- u8 *edid, int len)
+static struct edid *vidi_get_edid(struct device *dev,
+ struct drm_connector *connector)
{
struct vidi_context *ctx = get_vidi_context(dev);
+ struct edid *edid;
+ int edid_len;
DRM_DEBUG_KMS("%s\n", __FILE__);
*/
if (!ctx->raw_edid) {
DRM_DEBUG_KMS("raw_edid is null.\n");
- return -EFAULT;
+ return ERR_PTR(-EFAULT);
}
- memcpy(edid, ctx->raw_edid, min((1 + ctx->raw_edid->extensions)
- * EDID_LENGTH, len));
+ edid_len = (1 + ctx->raw_edid->extensions) * EDID_LENGTH;
+ edid = kzalloc(edid_len, GFP_KERNEL);
+ if (!edid) {
+ DRM_DEBUG_KMS("failed to allocate edid\n");
+ return ERR_PTR(-ENOMEM);
+ }
- return 0;
+ memcpy(edid, ctx->raw_edid, edid_len);
+ return edid;
}
static void *vidi_get_panel(struct device *dev)
struct exynos_drm_manager *manager;
struct exynos_drm_display_ops *display_ops;
struct drm_exynos_vidi_connection *vidi = data;
- struct edid *raw_edid;
int edid_len;
DRM_DEBUG_KMS("%s\n", __FILE__);
}
if (vidi->connection) {
- if (!vidi->edid) {
- DRM_DEBUG_KMS("edid data is null.\n");
+ struct edid *raw_edid = (struct edid *)(uint32_t)vidi->edid;
+ if (!drm_edid_is_valid(raw_edid)) {
+ DRM_DEBUG_KMS("edid data is invalid.\n");
return -EINVAL;
}
- raw_edid = (struct edid *)(uint32_t)vidi->edid;
edid_len = (1 + raw_edid->extensions) * EDID_LENGTH;
ctx->raw_edid = kzalloc(edid_len, GFP_KERNEL);
if (!ctx->raw_edid) {
#include <linux/regulator/consumer.h>
#include <linux/io.h>
#include <linux/of_gpio.h>
-#include <plat/gpio-cfg.h>
#include <drm/exynos_drm.h>
void __iomem *regs;
void *parent_ctx;
- int external_irq;
- int internal_irq;
+ int irq;
struct i2c_client *ddc_port;
struct i2c_client *hdmiphy_port;
return hdata->hpd;
}
-static int hdmi_get_edid(void *ctx, struct drm_connector *connector,
- u8 *edid, int len)
+static struct edid *hdmi_get_edid(void *ctx, struct drm_connector *connector)
{
struct edid *raw_edid;
struct hdmi_context *hdata = ctx;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
if (!hdata->ddc_port)
- return -ENODEV;
+ return ERR_PTR(-ENODEV);
raw_edid = drm_get_edid(connector, hdata->ddc_port->adapter);
- if (raw_edid) {
- hdata->dvi_mode = !drm_detect_hdmi_monitor(raw_edid);
- memcpy(edid, raw_edid, min((1 + raw_edid->extensions)
- * EDID_LENGTH, len));
- DRM_DEBUG_KMS("%s : width[%d] x height[%d]\n",
- (hdata->dvi_mode ? "dvi monitor" : "hdmi monitor"),
- raw_edid->width_cm, raw_edid->height_cm);
- kfree(raw_edid);
- } else {
- return -ENODEV;
- }
+ if (!raw_edid)
+ return ERR_PTR(-ENODEV);
- return 0;
+ hdata->dvi_mode = !drm_detect_hdmi_monitor(raw_edid);
+ DRM_DEBUG_KMS("%s : width[%d] x height[%d]\n",
+ (hdata->dvi_mode ? "dvi monitor" : "hdmi monitor"),
+ raw_edid->width_cm, raw_edid->height_cm);
+
+ return raw_edid;
}
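The get_edid() refactor running through these hunks changes the operation from filling a caller-supplied buffer to returning a heap-allocated struct edid *, with failures encoded in the pointer via the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() convention. A minimal sketch of both sides (the helper name is illustrative):

    static struct edid *my_get_edid(struct device *dev)
    {
            struct edid *edid = fetch_edid_somehow(dev);    /* placeholder */

            if (!edid)
                    return ERR_PTR(-ENODEV);    /* errno travels in the pointer */
            return edid;
    }

    /* Caller side: */
    edid = my_get_edid(dev);
    if (IS_ERR(edid))
            return PTR_ERR(edid);               /* decode to a negative errno */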
static int hdmi_v13_check_timing(struct fb_videomode *check_timing)
/* resetting HDMI core */
hdmi_reg_writemask(hdata, reg, 0, HDMI_CORE_SW_RSTOUT);
- mdelay(10);
+ usleep_range(10000, 12000);
hdmi_reg_writemask(hdata, reg, ~0, HDMI_CORE_SW_RSTOUT);
- mdelay(10);
+ usleep_range(10000, 12000);
}
static void hdmi_conf_init(struct hdmi_context *hdata)
{
struct hdmi_infoframe infoframe;
- /* disable HPD interrupts */
+ /* disable HPD interrupts from HDMI IP block, use GPIO instead */
hdmi_reg_writemask(hdata, HDMI_INTC_CON, 0, HDMI_INTC_EN_GLOBAL |
HDMI_INTC_EN_HPD_PLUG | HDMI_INTC_EN_HPD_UNPLUG);
u32 val = hdmi_reg_read(hdata, HDMI_V13_PHY_STATUS);
if (val & HDMI_PHY_STATUS_READY)
break;
- mdelay(1);
+ usleep_range(1000, 2000);
}
/* steady state not achieved */
if (tries == 0) {
u32 val = hdmi_reg_read(hdata, HDMI_PHY_STATUS_0);
if (val & HDMI_PHY_STATUS_READY)
break;
- mdelay(1);
+ usleep_range(1000, 2000);
}
/* steady state not achieved */
if (tries == 0) {
/* reset hdmiphy */
hdmi_reg_writemask(hdata, reg, ~0, HDMI_PHY_SW_RSTOUT);
- mdelay(10);
+ usleep_range(10000, 12000);
hdmi_reg_writemask(hdata, reg, 0, HDMI_PHY_SW_RSTOUT);
- mdelay(10);
+ usleep_range(10000, 12000);
}
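The mdelay() to usleep_range() conversions in this file replace busy-waiting with sleeping: these paths run in process context, so spinning for 10 ms burns CPU for nothing, while usleep_range(min, max) yields the processor and gives the timer code slack to coalesce wakeups. The usual rule of thumb, sketched:

    udelay(5);                  /* atomic context, a few microseconds       */
    usleep_range(1000, 2000);   /* process context, ~10 us up to ~20 ms     */
    msleep(20);                 /* process context, ~10 ms and longer       */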
static void hdmiphy_poweron(struct hdmi_context *hdata)
return;
}
- mdelay(10);
+ usleep_range(10000, 12000);
/* operation mode */
operation[0] = 0x1f;
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
+ mutex_lock(&hdata->hdmi_mutex);
+ if (!hdata->powered) {
+ mutex_unlock(&hdata->hdmi_mutex);
+ return;
+ }
+ mutex_unlock(&hdata->hdmi_mutex);
+
hdmi_conf_apply(hdata);
}
.dpms = hdmi_dpms,
};
-static irqreturn_t hdmi_external_irq_thread(int irq, void *arg)
+static irqreturn_t hdmi_irq_thread(int irq, void *arg)
{
struct exynos_drm_hdmi_context *ctx = arg;
struct hdmi_context *hdata = ctx->ctx;
return IRQ_HANDLED;
}
-static irqreturn_t hdmi_internal_irq_thread(int irq, void *arg)
-{
- struct exynos_drm_hdmi_context *ctx = arg;
- struct hdmi_context *hdata = ctx->ctx;
- u32 intc_flag;
-
- intc_flag = hdmi_reg_read(hdata, HDMI_INTC_FLAG);
- /* clearing flags for HPD plug/unplug */
- if (intc_flag & HDMI_INTC_FLAG_HPD_UNPLUG) {
- DRM_DEBUG_KMS("unplugged\n");
- hdmi_reg_writemask(hdata, HDMI_INTC_FLAG, ~0,
- HDMI_INTC_FLAG_HPD_UNPLUG);
- }
- if (intc_flag & HDMI_INTC_FLAG_HPD_PLUG) {
- DRM_DEBUG_KMS("plugged\n");
- hdmi_reg_writemask(hdata, HDMI_INTC_FLAG, ~0,
- HDMI_INTC_FLAG_HPD_PLUG);
- }
-
- if (ctx->drm_dev)
- drm_helper_hpd_irq_event(ctx->drm_dev);
-
- return IRQ_HANDLED;
-}
-
static int hdmi_resources_init(struct hdmi_context *hdata)
{
struct device *dev = hdata->dev;
hdata->hdmiphy_port = hdmi_hdmiphy;
- hdata->external_irq = gpio_to_irq(hdata->hpd_gpio);
- if (hdata->external_irq < 0) {
- DRM_ERROR("failed to get GPIO external irq\n");
- ret = hdata->external_irq;
- goto err_hdmiphy;
- }
-
- hdata->internal_irq = platform_get_irq(pdev, 0);
- if (hdata->internal_irq < 0) {
- DRM_ERROR("failed to get platform internal irq\n");
- ret = hdata->internal_irq;
+ hdata->irq = gpio_to_irq(hdata->hpd_gpio);
+ if (hdata->irq < 0) {
+ DRM_ERROR("failed to get GPIO irq\n");
+ ret = hdata->irq;
goto err_hdmiphy;
}
hdata->hpd = gpio_get_value(hdata->hpd_gpio);
- ret = request_threaded_irq(hdata->external_irq, NULL,
- hdmi_external_irq_thread, IRQF_TRIGGER_RISING |
+ ret = request_threaded_irq(hdata->irq, NULL,
+ hdmi_irq_thread, IRQF_TRIGGER_RISING |
IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
- "hdmi_external", drm_hdmi_ctx);
+ "hdmi", drm_hdmi_ctx);
if (ret) {
- DRM_ERROR("failed to register hdmi external interrupt\n");
+ DRM_ERROR("failed to register hdmi interrupt\n");
goto err_hdmiphy;
}
- ret = request_threaded_irq(hdata->internal_irq, NULL,
- hdmi_internal_irq_thread, IRQF_ONESHOT,
- "hdmi_internal", drm_hdmi_ctx);
- if (ret) {
- DRM_ERROR("failed to register hdmi internal interrupt\n");
- goto err_free_irq;
- }
-
/* Attach HDMI Driver to common hdmi. */
exynos_hdmi_drv_attach(drm_hdmi_ctx);
return 0;
-err_free_irq:
- free_irq(hdata->external_irq, drm_hdmi_ctx);
err_hdmiphy:
i2c_del_driver(&hdmiphy_driver);
err_ddc:
pm_runtime_disable(dev);
- free_irq(hdata->internal_irq, hdata);
- free_irq(hdata->external_irq, hdata);
+ free_irq(hdata->irq, hdata);
/* hdmiphy i2c driver */
DRM_DEBUG_KMS("[%d] %s\n", __LINE__, __func__);
- disable_irq(hdata->internal_irq);
- disable_irq(hdata->external_irq);
+ disable_irq(hdata->irq);
hdata->hpd = false;
if (ctx->drm_dev)
hdata->hpd = gpio_get_value(hdata->hpd_gpio);
- enable_irq(hdata->external_irq);
- enable_irq(hdata->internal_irq);
+ enable_irq(hdata->irq);
if (!pm_runtime_suspended(dev)) {
DRM_DEBUG_KMS("%s : Already resumed\n", __func__);
/* waiting until VP_SRESET_PROCESSING is 0 */
if (~vp_reg_read(res, VP_SRESET) & VP_SRESET_PROCESSING)
break;
- mdelay(10);
+ usleep_range(10000, 12000);
}
WARN(tries == 0, "failed to reset Video Processor\n");
}
DRM_DEBUG_KMS("[%d] %s, win: %d\n", __LINE__, __func__, win);
+ mutex_lock(&mixer_ctx->mixer_mutex);
+ if (!mixer_ctx->powered) {
+ mutex_unlock(&mixer_ctx->mixer_mutex);
+ return;
+ }
+ mutex_unlock(&mixer_ctx->mixer_mutex);
+
if (win > 1 && mixer_ctx->vp_enabled)
vp_video_buffer(mixer_ctx, win);
else
#include <linux/debugfs.h>
#include <linux/slab.h>
#include <linux/export.h>
+#include <generated/utsrelease.h>
#include <drm/drmP.h>
#include "intel_drv.h"
#include "intel_ringbuffer.h"
seq_printf(m, "%s command stream:\n", ring_str(ring));
seq_printf(m, " HEAD: 0x%08x\n", error->head[ring]);
seq_printf(m, " TAIL: 0x%08x\n", error->tail[ring]);
+ seq_printf(m, " CTL: 0x%08x\n", error->ctl[ring]);
seq_printf(m, " ACTHD: 0x%08x\n", error->acthd[ring]);
seq_printf(m, " IPEIR: 0x%08x\n", error->ipeir[ring]);
seq_printf(m, " IPEHR: 0x%08x\n", error->ipehr[ring]);
seq_printf(m, "Time: %ld s %ld us\n", error->time.tv_sec,
error->time.tv_usec);
+ seq_printf(m, "Kernel: " UTS_RELEASE);
seq_printf(m, "PCI ID: 0x%04x\n", dev->pci_device);
seq_printf(m, "EIR: 0x%08x\n", error->eir);
seq_printf(m, "IER: 0x%08x\n", error->ier);
seq_printf(m, "PGTBL_ER: 0x%08x\n", error->pgtbl_er);
+ seq_printf(m, "FORCEWAKE: 0x%08x\n", error->forcewake);
+ seq_printf(m, "DERRMR: 0x%08x\n", error->derrmr);
seq_printf(m, "CCID: 0x%08x\n", error->ccid);
for (i = 0; i < dev_priv->num_fence_regs; i++)
u32 pgtbl_er;
u32 ier;
u32 ccid;
+ u32 derrmr;
+ u32 forcewake;
bool waiting[I915_NUM_RINGS];
u32 pipestat[I915_MAX_PIPES];
u32 tail[I915_NUM_RINGS];
u32 head[I915_NUM_RINGS];
+ u32 ctl[I915_NUM_RINGS];
u32 ipeir[I915_NUM_RINGS];
u32 ipehr[I915_NUM_RINGS];
u32 instdone[I915_NUM_RINGS];
total = 0;
for (i = 0; i < count; i++) {
struct drm_i915_gem_relocation_entry __user *user_relocs;
+ u64 invalid_offset = (u64)-1;
+ int j;
user_relocs = (void __user *)(uintptr_t)exec[i].relocs_ptr;
goto err;
}
+ /* As we do not update the known relocation offsets after
+ * relocating (due to the complexities in lock handling),
+ * we need to mark them as invalid now so that we force the
+ * relocation processing next time. Just in case the target
+ * object is evicted and then rebound into its old
+ * presumed_offset before the next execbuffer - if that
+ * happened we would make the mistake of assuming that the
+ * relocations were valid.
+ */
+ for (j = 0; j < exec[i].relocation_count; j++) {
+ if (copy_to_user(&user_relocs[j].presumed_offset,
+ &invalid_offset,
+ sizeof(invalid_offset))) {
+ ret = -EFAULT;
+ mutex_lock(&dev->struct_mutex);
+ goto err;
+ }
+ }
+
reloc_offset[i] = total;
total += exec[i].relocation_count;
}
error->acthd[ring->id] = intel_ring_get_active_head(ring);
error->head[ring->id] = I915_READ_HEAD(ring);
error->tail[ring->id] = I915_READ_TAIL(ring);
+ error->ctl[ring->id] = I915_READ_CTL(ring);
error->cpu_ring_head[ring->id] = ring->head;
error->cpu_ring_tail[ring->id] = ring->tail;
else
error->ier = I915_READ(IER);
+ if (INTEL_INFO(dev)->gen >= 6)
+ error->derrmr = I915_READ(DERRMR);
+
+ if (IS_VALLEYVIEW(dev))
+ error->forcewake = I915_READ(FORCEWAKE_VLV);
+ else if (INTEL_INFO(dev)->gen >= 7)
+ error->forcewake = I915_READ(FORCEWAKE_MT);
+ else if (INTEL_INFO(dev)->gen == 6)
+ error->forcewake = I915_READ(FORCEWAKE);
+
for_each_pipe(pipe)
error->pipestat[pipe] = I915_READ(PIPESTAT(pipe));
#define GEN7_ERR_INT 0x44040
#define ERR_INT_MMIO_UNCLAIMED (1<<13)
+#define DERRMR 0x44050
+
/* GM45+ chicken bits -- debug workaround bits that may be required
* for various sorts of correct behavior. The top 16 bits of each are
* the enables for writing to the corresponding low bit.
#define MI_MODE 0x0209c
# define VS_TIMER_DISPATCH (1 << 6)
# define MI_FLUSH_ENABLE (1 << 12)
+# define ASYNC_FLIP_PERF_DISABLE (1 << 14)
#define GEN6_GT_MODE 0x20d0
#define GEN6_GT_MODE_HI (1 << 9)
static void
intel_dp_init_panel_power_sequencer(struct drm_device *dev,
- struct intel_dp *intel_dp)
+ struct intel_dp *intel_dp,
+ struct edp_power_seq *out)
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct edp_power_seq cur, vbt, spec, final;
intel_dp->panel_power_cycle_delay = get_delay(t11_t12);
#undef get_delay
+ DRM_DEBUG_KMS("panel power up delay %d, power down delay %d, power cycle delay %d\n",
+ intel_dp->panel_power_up_delay, intel_dp->panel_power_down_delay,
+ intel_dp->panel_power_cycle_delay);
+
+ DRM_DEBUG_KMS("backlight on delay %d, off delay %d\n",
+ intel_dp->backlight_on_delay, intel_dp->backlight_off_delay);
+
+ if (out)
+ *out = final;
+}
+
+static void
+intel_dp_init_panel_power_sequencer_registers(struct drm_device *dev,
+ struct intel_dp *intel_dp,
+ struct edp_power_seq *seq)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ u32 pp_on, pp_off, pp_div;
+
/* And finally store the new values in the power sequencer. */
- pp_on = (final.t1_t3 << PANEL_POWER_UP_DELAY_SHIFT) |
- (final.t8 << PANEL_LIGHT_ON_DELAY_SHIFT);
- pp_off = (final.t9 << PANEL_LIGHT_OFF_DELAY_SHIFT) |
- (final.t10 << PANEL_POWER_DOWN_DELAY_SHIFT);
+ pp_on = (seq->t1_t3 << PANEL_POWER_UP_DELAY_SHIFT) |
+ (seq->t8 << PANEL_LIGHT_ON_DELAY_SHIFT);
+ pp_off = (seq->t9 << PANEL_LIGHT_OFF_DELAY_SHIFT) |
+ (seq->t10 << PANEL_POWER_DOWN_DELAY_SHIFT);
/* Compute the divisor for the pp clock, simply match the Bspec
* formula. */
pp_div = ((100 * intel_pch_rawclk(dev))/2 - 1)
<< PP_REFERENCE_DIVIDER_SHIFT;
- pp_div |= (DIV_ROUND_UP(final.t11_t12, 1000)
+ pp_div |= (DIV_ROUND_UP(seq->t11_t12, 1000)
<< PANEL_POWER_CYCLE_DELAY_SHIFT);
/* Haswell doesn't have any port selection bits for the panel
I915_WRITE(PCH_PP_OFF_DELAYS, pp_off);
I915_WRITE(PCH_PP_DIVISOR, pp_div);
-
- DRM_DEBUG_KMS("panel power up delay %d, power down delay %d, power cycle delay %d\n",
- intel_dp->panel_power_up_delay, intel_dp->panel_power_down_delay,
- intel_dp->panel_power_cycle_delay);
-
- DRM_DEBUG_KMS("backlight on delay %d, off delay %d\n",
- intel_dp->backlight_on_delay, intel_dp->backlight_off_delay);
-
DRM_DEBUG_KMS("panel power sequencer register settings: PP_ON %#x, PP_OFF %#x, PP_DIV %#x\n",
I915_READ(PCH_PP_ON_DELAYS),
I915_READ(PCH_PP_OFF_DELAYS),
struct drm_device *dev = intel_encoder->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_display_mode *fixed_mode = NULL;
+ struct edp_power_seq power_seq = { 0 };
enum port port = intel_dig_port->port;
const char *name = NULL;
int type;
}
if (is_edp(intel_dp))
- intel_dp_init_panel_power_sequencer(dev, intel_dp);
+ intel_dp_init_panel_power_sequencer(dev, intel_dp, &power_seq);
intel_dp_i2c_init(intel_dp, intel_connector, name);
return;
}
+ /* We now know it's not a ghost, init power sequence regs. */
+ intel_dp_init_panel_power_sequencer_registers(dev, intel_dp,
+ &power_seq);
+
ironlake_edp_panel_vdd_on(intel_dp);
edid = drm_get_edid(connector, &intel_dp->adapter);
if (edid) {
static void __gen6_gt_force_wake_mt_reset(struct drm_i915_private *dev_priv)
{
I915_WRITE_NOTRACE(FORCEWAKE_MT, _MASKED_BIT_DISABLE(0xffff));
- POSTING_READ(ECOBUS); /* something from same cacheline, but !FORCEWAKE */
+ /* something from same cacheline, but !FORCEWAKE_MT */
+ POSTING_READ(ECOBUS);
}
static void __gen6_gt_force_wake_mt_get(struct drm_i915_private *dev_priv)
DRM_ERROR("Timed out waiting for forcewake old ack to clear.\n");
I915_WRITE_NOTRACE(FORCEWAKE_MT, _MASKED_BIT_ENABLE(FORCEWAKE_KERNEL));
- POSTING_READ(ECOBUS); /* something from same cacheline, but !FORCEWAKE */
+ /* something from same cacheline, but !FORCEWAKE_MT */
+ POSTING_READ(ECOBUS);
if (wait_for_atomic((I915_READ_NOTRACE(forcewake_ack) & 1),
FORCEWAKE_ACK_TIMEOUT_MS))
static void __gen6_gt_force_wake_put(struct drm_i915_private *dev_priv)
{
I915_WRITE_NOTRACE(FORCEWAKE, 0);
- /* gen6_gt_check_fifodbg doubles as the POSTING_READ */
+ /* something from same cacheline, but !FORCEWAKE */
+ POSTING_READ(ECOBUS);
gen6_gt_check_fifodbg(dev_priv);
}
static void __gen6_gt_force_wake_mt_put(struct drm_i915_private *dev_priv)
{
I915_WRITE_NOTRACE(FORCEWAKE_MT, _MASKED_BIT_DISABLE(FORCEWAKE_KERNEL));
- /* gen6_gt_check_fifodbg doubles as the POSTING_READ */
+ /* something from same cacheline, but !FORCEWAKE_MT */
+ POSTING_READ(ECOBUS);
gen6_gt_check_fifodbg(dev_priv);
}
static void vlv_force_wake_reset(struct drm_i915_private *dev_priv)
{
I915_WRITE_NOTRACE(FORCEWAKE_VLV, _MASKED_BIT_DISABLE(0xffff));
+ /* something from same cacheline, but !FORCEWAKE_VLV */
+ POSTING_READ(FORCEWAKE_ACK_VLV);
}
static void vlv_force_wake_get(struct drm_i915_private *dev_priv)
static void vlv_force_wake_put(struct drm_i915_private *dev_priv)
{
I915_WRITE_NOTRACE(FORCEWAKE_VLV, _MASKED_BIT_DISABLE(FORCEWAKE_KERNEL));
- /* The below doubles as a POSTING_READ */
+ /* something from same cacheline, but !FORCEWAKE_VLV */
+ POSTING_READ(FORCEWAKE_ACK_VLV);
gen6_gt_check_fifodbg(dev_priv);
}
struct drm_i915_private *dev_priv = dev->dev_private;
int ret = init_ring_common(ring);
- if (INTEL_INFO(dev)->gen > 3) {
+ if (INTEL_INFO(dev)->gen > 3)
I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(VS_TIMER_DISPATCH));
- if (IS_GEN7(dev))
- I915_WRITE(GFX_MODE_GEN7,
- _MASKED_BIT_DISABLE(GFX_TLB_INVALIDATE_ALWAYS) |
- _MASKED_BIT_ENABLE(GFX_REPLAY_MODE));
- }
+
+ /* We need to disable the AsyncFlip performance optimisations in order
+ * to use MI_WAIT_FOR_EVENT within the CS. It should already be
+ * programmed to '1' on all products.
+ */
+ if (INTEL_INFO(dev)->gen >= 6)
+ I915_WRITE(MI_MODE, _MASKED_BIT_ENABLE(ASYNC_FLIP_PERF_DISABLE));
+
+ /* Required for the hardware to program scanline values for waiting */
+ if (INTEL_INFO(dev)->gen == 6)
+ I915_WRITE(GFX_MODE,
+ _MASKED_BIT_ENABLE(GFX_TLB_INVALIDATE_ALWAYS));
+
+ if (IS_GEN7(dev))
+ I915_WRITE(GFX_MODE_GEN7,
+ _MASKED_BIT_DISABLE(GFX_TLB_INVALIDATE_ALWAYS) |
+ _MASKED_BIT_ENABLE(GFX_REPLAY_MODE));
if (INTEL_INFO(dev)->gen >= 5) {
ret = init_pipe_control(ring);
if (!(tmp & EVERGREEN_CRTC_BLANK_DATA_EN)) {
radeon_wait_for_vblank(rdev, i);
tmp |= EVERGREEN_CRTC_BLANK_DATA_EN;
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
WREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0);
}
} else {
tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
if (!(tmp & EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE)) {
radeon_wait_for_vblank(rdev, i);
tmp |= EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp);
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0);
}
}
/* wait for the next frame */
blackout &= ~BLACKOUT_MODE_MASK;
WREG32(MC_SHARED_BLACKOUT_CNTL, blackout | 1);
}
+ /* wait for the MC to settle */
+ udelay(100);
}
void evergreen_mc_resume(struct radeon_device *rdev, struct evergreen_mc_save *save)
if (ASIC_IS_DCE6(rdev)) {
tmp = RREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i]);
tmp |= EVERGREEN_CRTC_BLANK_DATA_EN;
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
WREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i], tmp);
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0);
} else {
tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]);
tmp &= ~EVERGREEN_CRTC_DISP_READ_REQUEST_DISABLE;
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1);
WREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i], tmp);
+ WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0);
}
/* wait for the next frame */
frame_count = radeon_get_vblank_counter(rdev, i);
WREG32(HDP_ADDR_CONFIG, gb_addr_config);
WREG32(DMA_TILING_CONFIG, gb_addr_config);
- tmp = gb_addr_config & NUM_PIPES_MASK;
- tmp = r6xx_remap_render_backend(rdev, tmp, rdev->config.evergreen.max_backends,
- EVERGREEN_MAX_BACKENDS, disabled_rb_mask);
+ if ((rdev->config.evergreen.max_backends == 1) &&
+ (rdev->flags & RADEON_IS_IGP)) {
+ if ((disabled_rb_mask & 3) == 1) {
+ /* RB0 disabled, RB1 enabled */
+ tmp = 0x11111111;
+ } else {
+ /* RB1 disabled, RB0 enabled */
+ tmp = 0x00000000;
+ }
+ } else {
+ tmp = gb_addr_config & NUM_PIPES_MASK;
+ tmp = r6xx_remap_render_backend(rdev, tmp, rdev->config.evergreen.max_backends,
+ EVERGREEN_MAX_BACKENDS, disabled_rb_mask);
+ }
WREG32(GB_BACKEND_MAP, tmp);
WREG32(CGTS_SYS_TCC_DISABLE, 0);
{
struct evergreen_mc_save save;
+ if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE))
+ reset_mask &= ~(RADEON_RESET_GFX | RADEON_RESET_COMPUTE);
+
+ if (RREG32(DMA_STATUS_REG) & DMA_IDLE)
+ reset_mask &= ~RADEON_RESET_DMA;
+
if (reset_mask == 0)
return 0;
int cayman_dma_resume(struct radeon_device *rdev)
{
struct radeon_ring *ring;
- u32 rb_cntl, dma_cntl;
+ u32 rb_cntl, dma_cntl, ib_cntl;
u32 rb_bufsz;
u32 reg_offset, wb_offset;
int i, r;
WREG32(DMA_RB_BASE + reg_offset, ring->gpu_addr >> 8);
/* enable DMA IBs */
- WREG32(DMA_IB_CNTL + reg_offset, DMA_IB_ENABLE | CMD_VMID_FORCE);
+ ib_cntl = DMA_IB_ENABLE | CMD_VMID_FORCE;
+#ifdef __BIG_ENDIAN
+ ib_cntl |= DMA_IB_SWAP_ENABLE;
+#endif
+ WREG32(DMA_IB_CNTL + reg_offset, ib_cntl);
dma_cntl = RREG32(DMA_CNTL + reg_offset);
dma_cntl &= ~CTXEMPTY_INT_ENABLE;
{
struct evergreen_mc_save save;
+ if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE))
+ reset_mask &= ~(RADEON_RESET_GFX | RADEON_RESET_COMPUTE);
+
+ if (RREG32(DMA_STATUS_REG) & DMA_IDLE)
+ reset_mask &= ~RADEON_RESET_DMA;
+
if (reset_mask == 0)
return 0;
{
struct rv515_mc_save save;
+ if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE))
+ reset_mask &= ~(RADEON_RESET_GFX | RADEON_RESET_COMPUTE);
+
+ if (RREG32(DMA_STATUS_REG) & DMA_IDLE)
+ reset_mask &= ~RADEON_RESET_DMA;
+
if (reset_mask == 0)
return 0;
u32 disabled_rb_mask)
{
u32 rendering_pipe_num, rb_num_width, req_rb_num;
- u32 pipe_rb_ratio, pipe_rb_remain;
+ u32 pipe_rb_ratio, pipe_rb_remain, tmp;
u32 data = 0, mask = 1 << (max_rb_num - 1);
unsigned i, j;
/* mask out the RBs that don't exist on that asic */
- disabled_rb_mask |= (0xff << max_rb_num) & 0xff;
+ tmp = disabled_rb_mask | ((0xff << max_rb_num) & 0xff);
+ /* make sure at least one RB is available */
+ if ((tmp & 0xff) != 0xff)
+ disabled_rb_mask = tmp;
rendering_pipe_num = 1 << tiling_pipe_num;
req_rb_num = total_max_rb_num - r600_count_pipe_bits(disabled_rb_mask);
int r600_dma_resume(struct radeon_device *rdev)
{
struct radeon_ring *ring = &rdev->ring[R600_RING_TYPE_DMA_INDEX];
- u32 rb_cntl, dma_cntl;
+ u32 rb_cntl, dma_cntl, ib_cntl;
u32 rb_bufsz;
int r;
WREG32(DMA_RB_BASE, ring->gpu_addr >> 8);
/* enable DMA IBs */
- WREG32(DMA_IB_CNTL, DMA_IB_ENABLE);
+ ib_cntl = DMA_IB_ENABLE;
+#ifdef __BIG_ENDIAN
+ ib_cntl |= DMA_IB_SWAP_ENABLE;
+#endif
+ WREG32(DMA_IB_CNTL, ib_cntl);
dma_cntl = RREG32(DMA_CNTL);
dma_cntl &= ~CTXEMPTY_INT_ENABLE;
struct list_head list;
/* Protected by tbo.reserved */
u32 placements[3];
- u32 busy_placements[3];
struct ttm_placement placement;
struct ttm_buffer_object tbo;
struct ttm_bo_kmap_obj kmap;
u32 ptr_reg_mask;
u32 nop;
u32 idx;
+ u64 last_semaphore_signal_addr;
+ u64 last_semaphore_wait_addr;
};
/*
.vm = {
.init = &cayman_vm_init,
.fini = &cayman_vm_fini,
- .pt_ring_index = R600_RING_TYPE_DMA_INDEX,
+ .pt_ring_index = RADEON_RING_TYPE_GFX_INDEX,
.set_page = &cayman_vm_set_page,
},
.ring = {
.vm = {
.init = &cayman_vm_init,
.fini = &cayman_vm_fini,
- .pt_ring_index = R600_RING_TYPE_DMA_INDEX,
+ .pt_ring_index = RADEON_RING_TYPE_GFX_INDEX,
.set_page = &cayman_vm_set_page,
},
.ring = {
.vm = {
.init = &si_vm_init,
.fini = &si_vm_fini,
- .pt_ring_index = R600_RING_TYPE_DMA_INDEX,
+ .pt_ring_index = RADEON_RING_TYPE_GFX_INDEX,
.set_page = &si_vm_set_page,
},
.ring = {
1),
ATOM_DEVICE_CRT1_SUPPORT);
}
+ /* RV100 board with external TMDS bit mis-set.
+ * Actually uses internal TMDS, clear the bit.
+ */
+ if (dev->pdev->device == 0x5159 &&
+ dev->pdev->subsystem_vendor == 0x1014 &&
+ dev->pdev->subsystem_device == 0x029A) {
+ tmp &= ~(1 << 4);
+ }
if ((tmp >> 4) & 0x1) {
devices |= ATOM_DEVICE_DFP2_SUPPORT;
radeon_add_legacy_encoder(dev,
p->chunks[p->chunk_ib_idx].kpage[1] == NULL) {
kfree(p->chunks[p->chunk_ib_idx].kpage[0]);
kfree(p->chunks[p->chunk_ib_idx].kpage[1]);
+ p->chunks[p->chunk_ib_idx].kpage[0] = NULL;
+ p->chunks[p->chunk_ib_idx].kpage[1] = NULL;
return -ENOMEM;
}
}
y = 0;
}
- if (ASIC_IS_AVIVO(rdev)) {
+ /* fixed on DCE6 and newer */
+ if (ASIC_IS_AVIVO(rdev) && !ASIC_IS_DCE6(rdev)) {
int i = 0;
struct drm_crtc *crtc_p;
{
uint32_t reg;
- if (efi_enabled && rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE)
+ if (efi_enabled(EFI_BOOT) &&
+ rdev->pdev->subsystem_vendor == PCI_VENDOR_ID_APPLE)
return false;
/* first check CRTCs */
}
radeon_fb = kzalloc(sizeof(*radeon_fb), GFP_KERNEL);
- if (radeon_fb == NULL)
+ if (radeon_fb == NULL) {
+ drm_gem_object_unreference_unlocked(obj);
return ERR_PTR(-ENOMEM);
+ }
ret = radeon_framebuffer_init(dev, radeon_fb, mode_cmd, obj);
if (ret) {
kfree(radeon_fb);
drm_gem_object_unreference_unlocked(obj);
- return NULL;
+ return ERR_PTR(ret);
}
return &radeon_fb->base;
* 2.26.0 - r600-eg: fix htile size computation
* 2.27.0 - r600-SI: Add CS ioctl support for async DMA
* 2.28.0 - r600-eg: Add MEM_WRITE packet support
+ * 2.29.0 - R500 FP16 color clear registers
*/
#define KMS_DRIVER_MAJOR 2
-#define KMS_DRIVER_MINOR 28
+#define KMS_DRIVER_MINOR 29
#define KMS_DRIVER_PATCHLEVEL 0
int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags);
int radeon_driver_unload_kms(struct drm_device *dev);
rbo->placement.fpfn = 0;
rbo->placement.lpfn = 0;
rbo->placement.placement = rbo->placements;
+ rbo->placement.busy_placement = rbo->placements;
if (domain & RADEON_GEM_DOMAIN_VRAM)
rbo->placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_UNCACHED |
TTM_PL_FLAG_VRAM;
if (!c)
rbo->placements[c++] = TTM_PL_MASK_CACHING | TTM_PL_FLAG_SYSTEM;
rbo->placement.num_placement = c;
-
- c = 0;
- rbo->placement.busy_placement = rbo->busy_placements;
- if (rbo->rdev->flags & RADEON_IS_AGP) {
- rbo->busy_placements[c++] = TTM_PL_FLAG_WC | TTM_PL_FLAG_TT;
- } else {
- rbo->busy_placements[c++] = TTM_PL_FLAG_CACHED | TTM_PL_FLAG_TT;
- }
rbo->placement.num_busy_placement = c;
}
{
struct radeon_bo_list *lobj;
struct radeon_bo *bo;
+ u32 domain;
int r;
r = ttm_eu_reserve_buffers(head);
list_for_each_entry(lobj, head, tv.head) {
bo = lobj->bo;
if (!bo->pin_count) {
+ domain = lobj->wdomain ? lobj->wdomain : lobj->rdomain;
+
+ retry:
+ radeon_ttm_placement_from_domain(bo, domain);
r = ttm_bo_validate(&bo->tbo, &bo->placement,
true, false);
if (unlikely(r)) {
+ if (r != -ERESTARTSYS && domain == RADEON_GEM_DOMAIN_VRAM) {
+ domain |= RADEON_GEM_DOMAIN_GTT;
+ goto retry;
+ }
return r;
}
}
{
int r;
+ /* make sure we aren't trying to allocate more space than there is on the ring */
+ if (ndw > (ring->ring_size / 4))
+ return -ENOMEM;
/* Align requested size with padding so unlock_commit can
* pad safely */
ndw = (ndw + ring->align_mask) & ~ring->align_mask;
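Besides refusing requests larger than the ring itself, the allocator rounds the requested dword count up to the ring's alignment so that unlock_commit can pad with NOPs safely. The round-up idiom requires align_mask to be one less than a power of two; a worked sketch:

    unsigned align = 16;            /* example alignment, in dwords */
    unsigned mask  = align - 1;     /* 0xf                          */
    unsigned ndw   = 21;

    ndw = (ndw + mask) & ~mask;     /* 21 -> 32 */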
}
seq_printf(m, "driver's copy of the wptr: 0x%08x [%5d]\n", ring->wptr, ring->wptr);
seq_printf(m, "driver's copy of the rptr: 0x%08x [%5d]\n", ring->rptr, ring->rptr);
+ seq_printf(m, "last semaphore signal addr : 0x%016llx\n", ring->last_semaphore_signal_addr);
+ seq_printf(m, "last semaphore wait addr : 0x%016llx\n", ring->last_semaphore_wait_addr);
seq_printf(m, "%u free dwords in ring\n", ring->ring_free_dw);
seq_printf(m, "%u dwords in ring\n", count);
/* print 8 dw before current rptr as often it's the last executed
/* we assume caller has already allocated space on waiters ring */
radeon_semaphore_emit_wait(rdev, waiter, semaphore);
+ /* for debugging lockup only, used by sysfs debug files */
+ rdev->ring[signaler].last_semaphore_signal_addr = semaphore->gpu_addr;
+ rdev->ring[waiter].last_semaphore_wait_addr = semaphore->gpu_addr;
+
return 0;
}
cayman 0x9400
0x0000802C GRBM_GFX_INDEX
+0x00008040 WAIT_UNTIL
0x000084FC CP_STRMOUT_CNTL
0x000085F0 CP_COHER_CNTL
0x000085F4 CP_COHER_SIZE
0x46AC US_OUT_FMT_2
0x46B0 US_OUT_FMT_3
0x46B4 US_W_FMT
+0x46C0 RB3D_COLOR_CLEAR_VALUE_AR
+0x46C4 RB3D_COLOR_CLEAR_VALUE_GB
0x4BC0 FG_FOG_BLEND
0x4BC4 FG_FOG_FACTOR
0x4BC8 FG_FOG_COLOR_R
WREG32(R600_CITF_CNTL, blackout);
}
}
+ /* wait for the MC to settle */
+ udelay(100);
}
void rv515_mc_resume(struct radeon_device *rdev, struct rv515_mc_save *save)
{
struct evergreen_mc_save save;
+ if (!(RREG32(GRBM_STATUS) & GUI_ACTIVE))
+ reset_mask &= ~(RADEON_RESET_GFX | RADEON_RESET_COMPUTE);
+
+ if (RREG32(DMA_STATUS_REG) & DMA_IDLE)
+ reset_mask &= ~RADEON_RESET_DMA;
+
if (reset_mask == 0)
return 0;
bo->mem = tmp_mem;
bdev->driver->move_notify(bo, mem);
bo->mem = *mem;
+ *mem = tmp_mem;
}
goto out_err;
if (ttm->state == tt_unpopulated) {
ret = ttm->bdev->driver->ttm_tt_populate(ttm);
- if (ret)
+ if (ret) {
+ /* if we fail here, don't nuke the mm node
+ * as the bo still owns it */
+ old_copy.mm_node = NULL;
goto out1;
+ }
}
add = 0;
prot);
} else
ret = ttm_copy_io_page(new_iomap, old_iomap, page);
- if (ret)
+ if (ret) {
+ /* failing here means we keep the old copy as-is */
+ old_copy.mm_node = NULL;
goto out1;
+ }
}
mb();
out2:
struct ttm_bo_device *bdev = bo->bdev;
struct ttm_bo_driver *driver = bdev->driver;
- fbo = kzalloc(sizeof(*fbo), GFP_KERNEL);
+ fbo = kmalloc(sizeof(*fbo), GFP_KERNEL);
if (!fbo)
return -ENOMEM;
fbo->vm_node = NULL;
atomic_set(&fbo->cpu_writers, 0);
- fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
+ spin_lock(&bdev->fence_lock);
+ if (bo->sync_obj)
+ fbo->sync_obj = driver->sync_obj_ref(bo->sync_obj);
+ else
+ fbo->sync_obj = NULL;
+ spin_unlock(&bdev->fence_lock);
kref_init(&fbo->list_kref);
kref_init(&fbo->kref);
fbo->destroy = &ttm_transfered_destroy;
*/
set_bit(TTM_BO_PRIV_FLAG_MOVING, &bo->priv_flags);
-
- /* ttm_buffer_object_transfer accesses bo->sync_obj */
- ret = ttm_buffer_object_transfer(bo, &ghost_obj);
spin_unlock(&bdev->fence_lock);
if (tmp_obj)
driver->sync_obj_unref(&tmp_obj);
+ ret = ttm_buffer_object_transfer(bo, &ghost_obj);
if (ret)
return ret;
#define USB_VENDOR_ID_EZKEY 0x0518
#define USB_DEVICE_ID_BTC_8193 0x0002
+#define USB_VENDOR_ID_FORMOSA 0x147a
+#define USB_DEVICE_ID_FORMOSA_IR_RECEIVER 0xe03e
+
#define USB_VENDOR_ID_FREESCALE 0x15A2
#define USB_DEVICE_ID_FREESCALE_MX28 0x004F
{
struct i2c_client *client = hid->driver_data;
int report_id = buf[0];
+ int ret;
if (report_type == HID_INPUT_REPORT)
return -EINVAL;
- return i2c_hid_set_report(client,
+ if (report_id) {
+ buf++;
+ count--;
+ }
+
+ ret = i2c_hid_set_report(client,
report_type == HID_FEATURE_REPORT ? 0x03 : 0x02,
report_id, buf, count);
+
+ if (report_id && ret >= 0)
+ ret++; /* add report_id to the number of transferred bytes */
+
+ return ret;
}
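This i2c-hid fix reflects HID output-report framing: when a device uses numbered reports, byte 0 of the buffer passed to the raw-output path is the report ID, and the bus-level set-report command carries that ID separately, so the driver strips it from the payload before sending and adds it back into the byte count it returns. A sketch of the two buffer layouts involved (values illustrative):

    u8 numbered[]   = { 0x02, 0xaa, 0xbb };  /* report ID 2 + 2 payload bytes  */
    u8 unnumbered[] = { 0x00, 0xaa, 0xbb };  /* ID 0: the whole buffer is data */

    /* 'numbered' sends bytes 1..2 with report_id = 2 and reports 3 bytes
     * transferred; 'unnumbered' sends all 3 bytes with report_id = 0. */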
static int i2c_hid_parse(struct hid_device *hid)
{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_AXIS_295, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS },
#include <linux/io.h>
#include <linux/pm_runtime.h>
#include <linux/delay.h>
+#include <linux/module.h>
#include "i2c-designware-core.h"
/*
return dw_readl(dev, DW_IC_COMP_PARAM_1);
}
EXPORT_SYMBOL_GPL(i2c_dw_read_comp_param);
+
+MODULE_DESCRIPTION("Synopsys DesignWare I2C bus adapter core");
+MODULE_LICENSE("GPL");
struct device *dev;
void __iomem *regs;
struct completion cmd_complete;
- u32 cmd_err;
+ int cmd_err;
struct i2c_adapter adapter;
const struct mxs_i2c_speed_config *speed;
if (msg->len == 0)
return -EINVAL;
- init_completion(&i2c->cmd_complete);
+ INIT_COMPLETION(i2c->cmd_complete);
i2c->cmd_err = 0;
ret = mxs_i2c_dma_setup_xfer(adap, msg, flags);
i2c->dev = dev;
i2c->speed = &mxs_i2c_95kHz_config;
+ init_completion(&i2c->cmd_complete);
+
if (dev->of_node) {
err = mxs_i2c_get_ofdata(i2c);
if (err)
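The mxs-i2c change splits completion handling into the canonical two-step pattern: init_completion() runs exactly once when the object is created (here, in probe), and each transfer merely re-arms the already-initialised object. Re-running the full init per transfer would also re-initialise the completion's internal wait-queue lock, which is unsafe if a waiter could still be queued. Sketch, with the transfer kick as a placeholder:

    init_completion(&i2c->cmd_complete);        /* probe: exactly once */

    /* per transfer: */
    INIT_COMPLETION(i2c->cmd_complete);         /* reset the done count to 0 */
    start_transfer(i2c);                        /* placeholder for the xfer kick */
    if (!wait_for_completion_timeout(&i2c->cmd_complete,
                                     msecs_to_jiffies(1000)))
            return -ETIMEDOUT;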
if (stat & OMAP_I2C_STAT_AL) {
dev_err(dev->dev, "Arbitration lost\n");
dev->cmd_err |= OMAP_I2C_STAT_AL;
- omap_i2c_ack_stat(dev, OMAP_I2C_STAT_NACK);
+ omap_i2c_ack_stat(dev, OMAP_I2C_STAT_AL);
}
return -EIO;
i2c_omap_errata_i207(dev, stat);
omap_i2c_ack_stat(dev, OMAP_I2C_STAT_RDR);
- break;
+ continue;
}
if (stat & OMAP_I2C_STAT_RRDY) {
break;
omap_i2c_ack_stat(dev, OMAP_I2C_STAT_XDR);
- break;
+ continue;
}
if (stat & OMAP_I2C_STAT_XRDY) {
#include <linux/slab.h>
#include <linux/platform_device.h>
#include <linux/i2c.h>
+#include <linux/of_i2c.h>
#include <linux/clk.h>
#include <linux/err.h>
#include <linux/io.h>
adap->algo = &i2c_sirfsoc_algo;
adap->algo_data = siic;
+ adap->dev.of_node = pdev->dev.of_node;
adap->dev.parent = &pdev->dev;
adap->nr = pdev->id;
clk_disable(clk);
+ of_i2c_register_devices(adap);
+
dev_info(&pdev->dev, " I2C adapter ready to operate\n");
return 0;
}
mux->busses = devm_kzalloc(&pdev->dev,
- sizeof(mux->busses) * mux->pdata->bus_count,
+ sizeof(*mux->busses) * mux->pdata->bus_count,
GFP_KERNEL);
if (!mux->busses) {
dev_err(&pdev->dev, "Cannot allocate busses\n");
else
on_each_cpu(__setup_broadcast_timer, (void *)true, 1);
- register_cpu_notifier(&cpu_hotplug_notifier);
-
pr_debug(PREFIX "v" INTEL_IDLE_VERSION
" model 0x%X\n", boot_cpu_data.x86_model);
return retval;
}
}
+ register_cpu_notifier(&cpu_hotplug_notifier);
return 0;
}
struct qib_qp __rcu **qpp;
qpp = &dev->qp_table[n];
- q = rcu_dereference_protected(*qpp,
- lockdep_is_held(&dev->qpt_lock));
- for (; q; qpp = &q->next) {
+ for (; (q = rcu_dereference_protected(*qpp,
+ lockdep_is_held(&dev->qpt_lock))) != NULL;
+ qpp = &q->next)
if (q == qp) {
atomic_dec(&qp->refcount);
*qpp = qp->next;
rcu_assign_pointer(qp->next, NULL);
- q = rcu_dereference_protected(*qpp,
- lockdep_is_held(&dev->qpt_lock));
break;
}
- q = rcu_dereference_protected(*qpp,
- lockdep_is_held(&dev->qpt_lock));
- }
}
spin_unlock_irqrestore(&dev->qpt_lock, flags);
tx_req->mapping = addr;
+ skb_orphan(skb);
+ skb_dst_drop(skb);
+
rc = post_send(priv, tx, tx->tx_head & (ipoib_sendq_size - 1),
addr, skb->len);
if (unlikely(rc)) {
dev->trans_start = jiffies;
++tx->tx_head;
- skb_orphan(skb);
- skb_dst_drop(skb);
-
if (++priv->tx_outstanding == ipoib_sendq_size) {
ipoib_dbg(priv, "TX ring 0x%x full, stopping kernel net queue\n",
tx->qp->qp_num);
netif_stop_queue(dev);
}
+ skb_orphan(skb);
+ skb_dst_drop(skb);
+
rc = post_send(priv, priv->tx_head & (ipoib_sendq_size - 1),
address->ah, qpn, tx_req, phead, hlen);
if (unlikely(rc)) {
address->last_send = priv->tx_head;
++priv->tx_head;
-
- skb_orphan(skb);
- skb_dst_drop(skb);
}
if (unlikely(priv->tx_outstanding > MAX_SEND_CQE))
}
/*
+ * Family15h Model 10h-1fh erratum 746 (IOMMU Logging May Stall Translations)
+ * Workaround:
+ * BIOS should disable L2B micellaneous clock gating by setting
+ * L2_L2B_CK_GATE_CONTROL[CKGateL2BMiscDisable](D0F2xF4_x90[2]) = 1b
+ */
+static void __init amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
+{
+ u32 value;
+
+ if ((boot_cpu_data.x86 != 0x15) ||
+ (boot_cpu_data.x86_model < 0x10) ||
+ (boot_cpu_data.x86_model > 0x1f))
+ return;
+
+ pci_write_config_dword(iommu->dev, 0xf0, 0x90);
+ pci_read_config_dword(iommu->dev, 0xf4, &value);
+
+ if (value & BIT(2))
+ return;
+
+ /* Select NB indirect register 0x90 and enable writing */
+ pci_write_config_dword(iommu->dev, 0xf0, 0x90 | (1 << 8));
+
+ pci_write_config_dword(iommu->dev, 0xf4, value | 0x4);
+ pr_info("AMD-Vi: Applying erratum 746 workaround for IOMMU at %s\n",
+ dev_name(&iommu->dev->dev));
+
+ /* Clear the enable writing bit */
+ pci_write_config_dword(iommu->dev, 0xf0, 0x90);
+}
+
+/*
* This function glues the initialization function for one IOMMU
* together and also allocates the command buffer and programs the
* hardware. It does NOT enable the IOMMU. This is done afterwards.
iommu->stored_l2[i] = iommu_read_l2(iommu, i);
}
+ amd_iommu_erratum_746_workaround(iommu);
+
return pci_enable_device(iommu->dev);
}
.pgsize_bitmap = INTEL_IOMMU_PGSIZES,
};
+static void quirk_iommu_g4x_gfx(struct pci_dev *dev)
+{
+ /* G4x/GM45 integrated gfx dmar support is totally busted. */
+ printk(KERN_INFO "DMAR: Disabling IOMMU for graphics on this chipset\n");
+ dmar_map_gfx = 0;
+}
+
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e00, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e10, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e20, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e30, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e40, quirk_iommu_g4x_gfx);
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2e90, quirk_iommu_g4x_gfx);
+
static void quirk_iommu_rwbf(struct pci_dev *dev)
{
/*
*/
printk(KERN_INFO "DMAR: Forcing write-buffer flush capability\n");
rwbf_quirk = 1;
-
- /* https://bugzilla.redhat.com/show_bug.cgi?id=538163 */
- if (dev->revision == 0x07) {
- printk(KERN_INFO "DMAR: Disabling IOMMU for graphics on this chipset\n");
- dmar_map_gfx = 0;
- }
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x2a40, quirk_iommu_rwbf);
CAPIMSG_APPID(data), CAPIMSG_MSGID(data), l,
CAPIMSG_CONTROL(data));
l -= 12;
+ if (l <= 0)
+ return;
dbgline = kmalloc(3 * l, GFP_ATOMIC);
if (!dbgline)
return;
}
/*
- * validate_rebuild_devices
+ * validate_raid_redundancy
* @rs
*
- * Determine if the devices specified for rebuild can result in a valid
- * usable array that is capable of rebuilding the given devices.
+ * Determine if there are enough devices in the array that haven't
+ * failed (or are being rebuilt) to form a usable array.
*
* Returns: 0 on success, -EINVAL on failure.
*/
-static int validate_rebuild_devices(struct raid_set *rs)
+static int validate_raid_redundancy(struct raid_set *rs)
{
unsigned i, rebuild_cnt = 0;
unsigned rebuilds_per_group, copies, d;
- if (!(rs->print_flags & DMPF_REBUILD))
- return 0;
-
for (i = 0; i < rs->md.raid_disks; i++)
- if (!test_bit(In_sync, &rs->dev[i].rdev.flags))
+ if (!test_bit(In_sync, &rs->dev[i].rdev.flags) ||
+ !rs->dev[i].rdev.sb_page)
rebuild_cnt++;
switch (rs->raid_type->level) {
* A A B B C
* C D D E E
*/
- rebuilds_per_group = 0;
for (i = 0; i < rs->md.raid_disks * copies; i++) {
+ if (!(i % copies))
+ rebuilds_per_group = 0;
d = i % rs->md.raid_disks;
- if (!test_bit(In_sync, &rs->dev[d].rdev.flags) &&
+ if ((!rs->dev[d].rdev.sb_page ||
+ !test_bit(In_sync, &rs->dev[d].rdev.flags)) &&
(++rebuilds_per_group >= copies))
goto too_many;
- if (!((i + 1) % copies))
- rebuilds_per_group = 0;
}
break;
default:
- DMERR("The rebuild parameter is not supported for %s",
- rs->raid_type->name);
- rs->ti->error = "Rebuild not supported for this RAID type";
- return -EINVAL;
+ if (rebuild_cnt)
+ return -EINVAL;
}
return 0;
too_many:
- rs->ti->error = "Too many rebuild devices specified";
return -EINVAL;
}
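For the RAID10 'near' layout checked above, consecutive iterations form copy groups: a new group starts whenever i % copies == 0, and iteration i lands on disk d = i % raid_disks, which is what produces the A A B B C / C D D E E wrap in the diagram. The array survives as long as no group has lost every member, which is why the fix resets rebuilds_per_group at the start of each group rather than after one. A worked sketch, with disk_failed() standing in for the In_sync/sb_page tests:

    unsigned copies = 2, raid_disks = 5;
    unsigned i, d, group_loss = 0;

    for (i = 0; i < raid_disks * copies; i++) {
            if (!(i % copies))
                    group_loss = 0;         /* a new copy group begins */
            d = i % raid_disks;
            if (disk_failed(d) && ++group_loss >= copies)
                    return -EINVAL;         /* every copy in a group lost */
    }
    /* e.g. disks 2 and 3 both failed -> group B has no live copy;
     * disks 1 and 3 failed -> every group still has one live copy. */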
}
rs->md.dev_sectors = sectors_per_dev;
- if (validate_rebuild_devices(rs))
- return -EINVAL;
-
/* Assume there are no metadata devices until the drives are parsed */
rs->md.persistent = 0;
rs->md.external = 1;
static int analyse_superblocks(struct dm_target *ti, struct raid_set *rs)
{
int ret;
- unsigned redundancy = 0;
struct raid_dev *dev;
struct md_rdev *rdev, *tmp, *freshest;
struct mddev *mddev = &rs->md;
- switch (rs->raid_type->level) {
- case 1:
- redundancy = rs->md.raid_disks - 1;
- break;
- case 4:
- case 5:
- case 6:
- redundancy = rs->raid_type->parity_devs;
- break;
- case 10:
- redundancy = raid10_md_layout_to_copies(mddev->layout) - 1;
- break;
- default:
- ti->error = "Unknown RAID type";
- return -EINVAL;
- }
-
freshest = NULL;
rdev_for_each_safe(rdev, tmp, mddev) {
/*
break;
default:
dev = container_of(rdev, struct raid_dev, rdev);
- if (redundancy--) {
- if (dev->meta_dev)
- dm_put_device(ti, dev->meta_dev);
-
- dev->meta_dev = NULL;
- rdev->meta_bdev = NULL;
+ if (dev->meta_dev)
+ dm_put_device(ti, dev->meta_dev);
- if (rdev->sb_page)
- put_page(rdev->sb_page);
+ dev->meta_dev = NULL;
+ rdev->meta_bdev = NULL;
- rdev->sb_page = NULL;
+ if (rdev->sb_page)
+ put_page(rdev->sb_page);
- rdev->sb_loaded = 0;
+ rdev->sb_page = NULL;
- /*
- * We might be able to salvage the data device
- * even though the meta device has failed. For
- * now, we behave as though '- -' had been
- * set for this device in the table.
- */
- if (dev->data_dev)
- dm_put_device(ti, dev->data_dev);
+ rdev->sb_loaded = 0;
- dev->data_dev = NULL;
- rdev->bdev = NULL;
+ /*
+ * We might be able to salvage the data device
+ * even though the meta device has failed. For
+ * now, we behave as though '- -' had been
+ * set for this device in the table.
+ */
+ if (dev->data_dev)
+ dm_put_device(ti, dev->data_dev);
- list_del(&rdev->same_set);
+ dev->data_dev = NULL;
+ rdev->bdev = NULL;
- continue;
- }
- ti->error = "Failed to load superblock";
- return ret;
+ list_del(&rdev->same_set);
}
}
if (!freshest)
return 0;
+ if (validate_raid_redundancy(rs)) {
+ rs->ti->error = "Insufficient redundancy to activate array";
+ return -EINVAL;
+ }
+
/*
* Validation of the freshest device provides the source of
* validation for the remaining devices.
static struct target_type raid_target = {
.name = "raid",
- .version = {1, 4, 0},
+ .version = {1, 4, 1},
.module = THIS_MODULE,
.ctr = raid_ctr,
.dtr = raid_dtr,
return 0;
}
-/*
- * A thin device always inherits its queue limits from its pool.
- */
-static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits)
-{
- struct thin_c *tc = ti->private;
-
- *limits = bdev_get_queue(tc->pool_dev->bdev)->limits;
-}
-
static struct target_type thin_target = {
.name = "thin",
- .version = {1, 6, 0},
+ .version = {1, 7, 0},
.module = THIS_MODULE,
.ctr = thin_ctr,
.dtr = thin_dtr,
.postsuspend = thin_postsuspend,
.status = thin_status,
.iterate_devices = thin_iterate_devices,
- .io_hints = thin_io_hints,
};
/*----------------------------------------------------------------*/
{
struct dm_target *ti;
sector_t len;
+ unsigned num_requests;
do {
ti = dm_table_find_target(ci->map, ci->sector);
* reconfiguration might also have changed that since the
* check was performed.
*/
- if (!get_num_requests || !get_num_requests(ti))
+ num_requests = get_num_requests ? get_num_requests(ti) : 0;
+ if (!num_requests)
return -EOPNOTSUPP;
if (is_split_required && !is_split_required(ti))
else
len = min(ci->sector_count, max_io_len(ci->sector, ti));
- __issue_target_requests(ci, ti, ti->num_discard_requests, len);
+ __issue_target_requests(ci, ti, num_requests, len);
ci->sector += len;
} while (ci->sector_count -= len);
mutex_lock(&info->lock);
format = __find_format(info, fh, fmt->which, info->res_type);
- if (!format)
+ if (format)
fmt->format = *format;
else
ret = -EINVAL;
#include <linux/slab.h>
#include <linux/videodev2.h>
#include <linux/of.h>
+#include <linux/platform_data/imx-iram.h>
-#include <mach/iram.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>
#include <linux/vmalloc.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-ioctl.h>
-#include <plat/iommu.h>
-#include <plat/iovmm.h>
-#include <plat/omap-pm.h>
#include "ispvideo.h"
#include "isp.h"
{
struct media_entity *source, *sink;
unsigned int flags = MEDIA_LNK_FL_ENABLED;
- int i, ret;
+ int i, ret = 0;
for (i = 0; i < FIMC_LITE_MAX_DEVS; i++) {
struct fimc_lite *fimc = fmd->fimc_lite[i];
}
/* Error handling for interrupt */
-static void s5p_mfc_handle_error(struct s5p_mfc_ctx *ctx,
- unsigned int reason, unsigned int err)
+static void s5p_mfc_handle_error(struct s5p_mfc_dev *dev,
+ struct s5p_mfc_ctx *ctx, unsigned int reason, unsigned int err)
{
- struct s5p_mfc_dev *dev;
unsigned long flags;
- /* If no context is available then all necessary
- * processing has been done. */
- if (ctx == NULL)
- return;
-
- dev = ctx->dev;
mfc_err("Interrupt Error: %08x\n", err);
- s5p_mfc_hw_call(dev->mfc_ops, clear_int_flags, dev);
- wake_up_dev(dev, reason, err);
- /* Error recovery is dependent on the state of context */
- switch (ctx->state) {
- case MFCINST_INIT:
- /* This error had to happen while acquireing instance */
- case MFCINST_GOT_INST:
- /* This error had to happen while parsing the header */
- case MFCINST_HEAD_PARSED:
- /* This error had to happen while setting dst buffers */
- case MFCINST_RETURN_INST:
- /* This error had to happen while releasing instance */
- clear_work_bit(ctx);
- wake_up_ctx(ctx, reason, err);
- if (test_and_clear_bit(0, &dev->hw_lock) == 0)
- BUG();
- s5p_mfc_clock_off();
- ctx->state = MFCINST_ERROR;
- break;
- case MFCINST_FINISHING:
- case MFCINST_FINISHED:
- case MFCINST_RUNNING:
- /* It is higly probable that an error occured
- * while decoding a frame */
- clear_work_bit(ctx);
- ctx->state = MFCINST_ERROR;
- /* Mark all dst buffers as having an error */
- spin_lock_irqsave(&dev->irqlock, flags);
- s5p_mfc_hw_call(dev->mfc_ops, cleanup_queue, &ctx->dst_queue,
- &ctx->vq_dst);
- /* Mark all src buffers as having an error */
- s5p_mfc_hw_call(dev->mfc_ops, cleanup_queue, &ctx->src_queue,
- &ctx->vq_src);
- spin_unlock_irqrestore(&dev->irqlock, flags);
- if (test_and_clear_bit(0, &dev->hw_lock) == 0)
- BUG();
- s5p_mfc_clock_off();
- break;
- default:
- mfc_err("Encountered an error interrupt which had not been handled\n");
- break;
+ if (ctx != NULL) {
+ /* Error recovery is dependent on the state of context */
+ switch (ctx->state) {
+ case MFCINST_RES_CHANGE_INIT:
+ case MFCINST_RES_CHANGE_FLUSH:
+ case MFCINST_RES_CHANGE_END:
+ case MFCINST_FINISHING:
+ case MFCINST_FINISHED:
+ case MFCINST_RUNNING:
+ /* It is highly probable that an error occurred
+ * while decoding a frame */
+ clear_work_bit(ctx);
+ ctx->state = MFCINST_ERROR;
+ /* Mark all dst buffers as having an error */
+ spin_lock_irqsave(&dev->irqlock, flags);
+ s5p_mfc_hw_call(dev->mfc_ops, cleanup_queue,
+ &ctx->dst_queue, &ctx->vq_dst);
+ /* Mark all src buffers as having an error */
+ s5p_mfc_hw_call(dev->mfc_ops, cleanup_queue,
+ &ctx->src_queue, &ctx->vq_src);
+ spin_unlock_irqrestore(&dev->irqlock, flags);
+ wake_up_ctx(ctx, reason, err);
+ break;
+ default:
+ clear_work_bit(ctx);
+ ctx->state = MFCINST_ERROR;
+ wake_up_ctx(ctx, reason, err);
+ break;
+ }
}
+ if (test_and_clear_bit(0, &dev->hw_lock) == 0)
+ BUG();
+ s5p_mfc_hw_call(dev->mfc_ops, clear_int_flags, dev);
+ s5p_mfc_clock_off();
+ wake_up_dev(dev, reason, err);
return;
}
dev->warn_start)
s5p_mfc_handle_frame(ctx, reason, err);
else
- s5p_mfc_handle_error(ctx, reason, err);
+ s5p_mfc_handle_error(dev, ctx, reason, err);
clear_bit(0, &dev->enter_suspend);
break;
radio->vdev.ioctl_ops = &usb_keene_ioctl_ops;
radio->vdev.lock = &radio->lock;
radio->vdev.release = video_device_release_empty;
+ radio->vdev.vfl_dir = VFL_DIR_TX;
radio->usbdev = interface_to_usbdev(intf);
radio->intf = intf;
.name = "radio-si4713",
.release = video_device_release,
.ioctl_ops = &radio_si4713_ioctl_ops,
+ .vfl_dir = VFL_DIR_TX,
};
/* Platform driver interface */
.ioctl_ops = &wl1273_ioctl_ops,
.name = WL1273_FM_DRIVER_NAME,
.release = wl1273_vdev_release,
+ .vfl_dir = VFL_DIR_TX,
};
static int wl1273_fm_radio_remove(struct platform_device *pdev)
.ioctl_ops = &fm_drv_ioctl_ops,
.name = FM_DRV_NAME,
.release = video_device_release,
+ /*
+ * To ensure both the tuner and modulator ioctls are accessible we
+ * set the vfl_dir to M2M to indicate this.
+ *
+ * It is not really a mem2mem device of course, but it can both receive
+ * and transmit using the same radio device. It's the only radio driver
+ * that does this and it should really be split in two radio devices,
+ * but that would affect applications using this driver.
+ */
+ .vfl_dir = VFL_DIR_M2M,
};
int fm_v4l2_init_video_device(struct fmdev *fmdev, int radio_nr)
/* -- module initialisation -- */
static const struct usb_device_id device_table[] = {
{USB_DEVICE(0x045e, 0x02ae)},
+ {USB_DEVICE(0x045e, 0x02bf)},
{}
};
}
}
-static void i2c_w(struct gspca_dev *gspca_dev, const __u8 *buffer)
+static void i2c_w(struct gspca_dev *gspca_dev, const u8 *buf)
{
int retry = 60;
return;
/* is i2c ready */
- reg_w(gspca_dev, 0x08, buffer, 8);
+ reg_w(gspca_dev, 0x08, buf, 8);
while (retry--) {
if (gspca_dev->usb_err < 0)
return;
- msleep(10);
+ msleep(1);
reg_r(gspca_dev, 0x08);
if (gspca_dev->usb_buf[0] & 0x04) {
if (gspca_dev->usb_buf[0] & 0x08) {
dev_err(gspca_dev->v4l2_dev.dev,
- "i2c write error\n");
+ "i2c error writing %02x %02x %02x %02x"
+ " %02x %02x %02x %02x\n",
+ buf[0], buf[1], buf[2], buf[3],
+ buf[4], buf[5], buf[6], buf[7]);
gspca_dev->usb_err = -EIO;
}
return;
for (;;) {
if (gspca_dev->usb_err < 0)
return;
- reg_w(gspca_dev, 0x08, *buffer, 8);
+ i2c_w(gspca_dev, *buffer);
len -= 8;
if (len <= 0)
break;
0,
gspca_dev->usb_buf, 8,
500);
+ msleep(2);
if (ret < 0) {
pr_err("i2c_w1 err %d\n", ret);
gspca_dev->usb_err = ret;
int ret;
ctrl = uvc_find_control(chain, xctrl->id, &mapping);
- if (ctrl == NULL || (ctrl->info.flags & UVC_CTRL_FLAG_SET_CUR) == 0)
+ if (ctrl == NULL)
return -EINVAL;
+ if (!(ctrl->info.flags & UVC_CTRL_FLAG_SET_CUR))
+ return -EACCES;
/* Clamp out of range values. */
switch (mapping->v4l2_type) {
ret = uvc_ctrl_get(chain, ctrl);
if (ret < 0) {
uvc_ctrl_rollback(handle);
- ctrls->error_idx = ret == -ENOENT
- ? ctrls->count : i;
+ ctrls->error_idx = i;
return ret;
}
}
ret = uvc_ctrl_set(chain, ctrl);
if (ret < 0) {
uvc_ctrl_rollback(handle);
- ctrls->error_idx = (ret == -ENOENT &&
- cmd == VIDIOC_S_EXT_CTRLS)
+ ctrls->error_idx = cmd == VIDIOC_S_EXT_CTRLS
? ctrls->count : i;
return ret;
}
* In videobuf we use our internal V4l2_planes struct for
* single-planar buffers as well, for simplicity.
*/
- if (V4L2_TYPE_IS_OUTPUT(b->type))
+ if (V4L2_TYPE_IS_OUTPUT(b->type)) {
v4l2_planes[0].bytesused = b->bytesused;
+ v4l2_planes[0].data_offset = 0;
+ }
if (b->memory == V4L2_MEMORY_USERPTR) {
v4l2_planes[0].m.userptr = b->m.userptr;
depends on I2C=y && GPIOLIB
select MFD_CORE
select REGMAP_I2C
+ select REGMAP_IRQ
select IRQ_DOMAIN
help
if you say yes here you get support for the TPS65910 series of
#include <linux/mfd/core.h>
#include <linux/mfd/abx500.h>
#include <linux/mfd/abx500/ab8500.h>
+#include <linux/mfd/abx500/ab8500-bm.h>
#include <linux/mfd/dbx500-prcmu.h>
#include <linux/regulator/ab8500.h>
#include <linux/of.h>
return ret;
}
- regcache_sync(arizona->regmap);
+ ret = regcache_sync(arizona->regmap);
+ if (ret != 0) {
+ dev_err(arizona->dev, "Failed to restore register cache\n");
+ regulator_disable(arizona->dcvdd);
+ return ret;
+ }
return 0;
}
aod = &wm5102_aod;
irq = &wm5102_irq;
- switch (arizona->rev) {
- case 0:
- case 1:
- ctrlif_error = false;
- break;
- default:
- break;
- }
+ ctrlif_error = false;
break;
#endif
#ifdef CONFIG_MFD_WM5110
aod = &wm5110_aod;
irq = &wm5110_irq;
- switch (arizona->rev) {
- case 0:
- case 1:
- ctrlif_error = false;
- break;
- default:
- break;
- }
+ ctrlif_error = false;
break;
#endif
default:
#include <linux/of_device.h>
#endif
+/* I2C safe register check */
+static inline bool i2c_safe_reg(unsigned char reg)
+{
+ switch (reg) {
+ case DA9052_STATUS_A_REG:
+ case DA9052_STATUS_B_REG:
+ case DA9052_STATUS_C_REG:
+ case DA9052_STATUS_D_REG:
+ case DA9052_ADC_RES_L_REG:
+ case DA9052_ADC_RES_H_REG:
+ case DA9052_VDD_RES_REG:
+ case DA9052_ICHG_AV_REG:
+ case DA9052_TBAT_RES_REG:
+ case DA9052_ADCIN4_RES_REG:
+ case DA9052_ADCIN5_RES_REG:
+ case DA9052_ADCIN6_RES_REG:
+ case DA9052_TJUNC_RES_REG:
+ case DA9052_TSI_X_MSB_REG:
+ case DA9052_TSI_Y_MSB_REG:
+ case DA9052_TSI_LSB_REG:
+ case DA9052_TSI_Z_MSB_REG:
+ return true;
+ default:
+ return false;
+ }
+}
+
+/*
+ * There is an issue with the DA9052 and DA9053_AA/BA/BB PMICs where the
+ * PMIC can lock up or fail to respond following a system reset.
+ * The fix is to follow any read or write to an unsafe register with a
+ * dummy read of a safe register.
+ */
+int da9052_i2c_fix(struct da9052 *da9052, unsigned char reg)
+{
+ int val;
+
+ switch (da9052->chip_id) {
+ case DA9052:
+ case DA9053_AA:
+ case DA9053_BA:
+ case DA9053_BB:
+ /* A dummy read to a safe register address. */
+ if (!i2c_safe_reg(reg))
+ return regmap_read(da9052->regmap,
+ DA9052_PARK_REGISTER,
+ &val);
+ break;
+ default:
+ /*
+ * For other chips parking of I2C register
+ * to a safe place is not required.
+ */
+ break;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(da9052_i2c_fix);
+
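The fix_io hook wired up below is meant to be chained after every raw register
access. A minimal sketch of how a core read path could invoke it; the wrapper
name is illustrative, not the actual da9052-core code:

	static int da9052_read_with_fix(struct da9052 *da9052, unsigned char reg,
					unsigned int *val)
	{
		int ret = regmap_read(da9052->regmap, reg, val);

		if (ret < 0)
			return ret;

		/* Park on a safe register so the PMIC cannot lock up. */
		if (da9052->fix_io)
			ret = da9052->fix_io(da9052, reg);

		return ret;
	}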
static int da9052_i2c_enable_multiwrite(struct da9052 *da9052)
{
int reg_val, ret;
da9052->dev = &client->dev;
da9052->chip_irq = client->irq;
+ da9052->fix_io = da9052_i2c_fix;
i2c_set_clientdata(client, da9052);
for (n = 0; n < NUM_PRCMU_WAKEUPS; n++) {
if (ev & prcmu_irq_bit[n])
- generic_handle_irq(IRQ_PRCMU_BASE + n);
+ generic_handle_irq(irq_find_mapping(db8500_irq_domain, n));
}
r = true;
break;
}
static struct irq_domain_ops db8500_irq_ops = {
- .map = db8500_irq_map,
- .xlate = irq_domain_xlate_twocell,
+ .map = db8500_irq_map,
+ .xlate = irq_domain_xlate_twocell,
};
static int db8500_irq_init(struct device_node *np)
{
- int irq_base = -1;
+ int irq_base = 0;
+ int i;
/* In the device tree case, just take some IRQs */
if (!np)
return -ENOSYS;
}
+ /* All wakeups will be used, so create mappings for all */
+ for (i = 0; i < NUM_PRCMU_WAKEUPS; i++)
+ irq_create_mapping(db8500_irq_domain, i);
+
return 0;
}
if (max77686 == NULL)
return -ENOMEM;
- max77686->regmap = regmap_init_i2c(i2c, &max77686_regmap_config);
- if (IS_ERR(max77686->regmap)) {
- ret = PTR_ERR(max77686->regmap);
- dev_err(max77686->dev, "Failed to allocate register map: %d\n",
- ret);
- kfree(max77686);
- return ret;
- }
-
i2c_set_clientdata(i2c, max77686);
max77686->dev = &i2c->dev;
max77686->i2c = i2c;
max77686->irq_gpio = pdata->irq_gpio;
max77686->irq = i2c->irq;
+ max77686->regmap = regmap_init_i2c(i2c, &max77686_regmap_config);
+ if (IS_ERR(max77686->regmap)) {
+ ret = PTR_ERR(max77686->regmap);
+ dev_err(max77686->dev, "Failed to allocate register map: %d\n",
+ ret);
+ kfree(max77686);
+ return ret;
+ }
+
if (regmap_read(max77686->regmap,
MAX77686_REG_DEVICE_ID, &data) < 0) {
dev_err(max77686->dev,
u8 reg_data;
int ret = 0;
+ if (!pdata) {
+ dev_err(&i2c->dev, "No platform data found.\n");
+ return -EINVAL;
+ }
+
max77693 = devm_kzalloc(&i2c->dev,
sizeof(struct max77693_dev), GFP_KERNEL);
if (max77693 == NULL)
return -ENOMEM;
- max77693->regmap = devm_regmap_init_i2c(i2c, &max77693_regmap_config);
- if (IS_ERR(max77693->regmap)) {
- ret = PTR_ERR(max77693->regmap);
- dev_err(max77693->dev,"failed to allocate register map: %d\n",
- ret);
- goto err_regmap;
- }
-
i2c_set_clientdata(i2c, max77693);
max77693->dev = &i2c->dev;
max77693->i2c = i2c;
max77693->irq = i2c->irq;
max77693->type = id->driver_data;
- if (!pdata)
- goto err_regmap;
+ max77693->regmap = devm_regmap_init_i2c(i2c, &max77693_regmap_config);
+ if (IS_ERR(max77693->regmap)) {
+ ret = PTR_ERR(max77693->regmap);
+ dev_err(max77693->dev, "failed to allocate register map: %d\n",
+ ret);
+ return ret;
+ }
max77693->wakeup = pdata->wakeup;
- if (max77693_read_reg(max77693->regmap,
- MAX77693_PMIC_REG_PMIC_ID2, ®_data) < 0) {
+ ret = max77693_read_reg(max77693->regmap, MAX77693_PMIC_REG_PMIC_ID2,
+ ®_data);
+ if (ret < 0) {
dev_err(max77693->dev, "device not found on this channel\n");
- ret = -ENODEV;
- goto err_regmap;
+ return ret;
} else
dev_info(max77693->dev, "device ID: 0x%x\n", reg_data);
ret = PTR_ERR(max77693->regmap_muic);
dev_err(max77693->dev,
"failed to allocate register map: %d\n", ret);
- goto err_regmap;
+ goto err_regmap_muic;
}
ret = max77693_irq_init(max77693);
err_mfd:
max77693_irq_exit(max77693);
err_irq:
+err_regmap_muic:
i2c_unregister_device(max77693->muic);
i2c_unregister_device(max77693->haptic);
-err_regmap:
return ret;
}
if (!pcf)
return -ENOMEM;
+ i2c_set_clientdata(client, pcf);
+ pcf->dev = &client->dev;
pcf->pdata = pdata;
mutex_init(&pcf->lock);
return ret;
}
- i2c_set_clientdata(client, pcf);
- pcf->dev = &client->dev;
-
version = pcf50633_reg_read(pcf, 0);
variant = pcf50633_reg_read(pcf, 1);
if (version < 0 || variant < 0) {
BPP_LDO_POWB, BPP_LDO_SUSPEND);
}
+static int rtl8411_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage)
+{
+ u8 mask, val;
+
+ mask = (BPP_REG_TUNED18 << BPP_TUNED18_SHIFT_8411) | BPP_PAD_MASK;
+ if (voltage == OUTPUT_3V3)
+ val = (BPP_ASIC_3V3 << BPP_TUNED18_SHIFT_8411) | BPP_PAD_3V3;
+ else if (voltage == OUTPUT_1V8)
+ val = (BPP_ASIC_1V8 << BPP_TUNED18_SHIFT_8411) | BPP_PAD_1V8;
+ else
+ return -EINVAL;
+
+ return rtsx_pci_write_register(pcr, LDO_CTL, mask, val);
+}
+
static unsigned int rtl8411_cd_deglitch(struct rtsx_pcr *pcr)
{
unsigned int card_exist;
return card_exist;
}
+static int rtl8411_conv_clk_and_div_n(int input, int dir)
+{
+ int output;
+
+ if (dir == CLK_TO_DIV_N)
+ output = input * 4 / 5 - 2;
+ else
+ output = (input + 2) * 5 / 4;
+
+ return output;
+}
+
static const struct pcr_ops rtl8411_pcr_ops = {
.extra_init_hw = rtl8411_extra_init_hw,
.optimize_phy = NULL,
.disable_auto_blink = rtl8411_disable_auto_blink,
.card_power_on = rtl8411_card_power_on,
.card_power_off = rtl8411_card_power_off,
+ .switch_output_voltage = rtl8411_switch_output_voltage,
.cd_deglitch = rtl8411_cd_deglitch,
+ .conv_clk_and_div_n = rtl8411_conv_clk_and_div_n,
};
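A quick sanity check that the two directions of rtl8411_conv_clk_and_div_n()
above are inverses, using an illustrative clock value (not taken from a
datasheet):

	clk = 100:  N   = 100 * 4 / 5 - 2  = 78   (CLK_TO_DIV_N)
	N   =  78:  clk = (78 + 2) * 5 / 4 = 100  (DIV_N_TO_CLK)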
/* SD Pull Control Enable:
return rtsx_pci_send_cmd(pcr, 100);
}
+static int rts5209_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage)
+{
+ int err;
+
+ if (voltage == OUTPUT_3V3) {
+ err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4FC0 | 0x24);
+ if (err < 0)
+ return err;
+ } else if (voltage == OUTPUT_1V8) {
+ err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4C40 | 0x24);
+ if (err < 0)
+ return err;
+ } else {
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static const struct pcr_ops rts5209_pcr_ops = {
.extra_init_hw = rts5209_extra_init_hw,
.optimize_phy = rts5209_optimize_phy,
.disable_auto_blink = rts5209_disable_auto_blink,
.card_power_on = rts5209_card_power_on,
.card_power_off = rts5209_card_power_off,
+ .switch_output_voltage = rts5209_switch_output_voltage,
.cd_deglitch = NULL,
+ .conv_clk_and_div_n = NULL,
};
/* SD Pull Control Enable:
return rtsx_pci_send_cmd(pcr, 100);
}
+static int rts5229_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage)
+{
+ int err;
+
+ if (voltage == OUTPUT_3V3) {
+ err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4FC0 | 0x24);
+ if (err < 0)
+ return err;
+ } else if (voltage == OUTPUT_1V8) {
+ err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4C40 | 0x24);
+ if (err < 0)
+ return err;
+ } else {
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static const struct pcr_ops rts5229_pcr_ops = {
.extra_init_hw = rts5229_extra_init_hw,
.optimize_phy = rts5229_optimize_phy,
.disable_auto_blink = rts5229_disable_auto_blink,
.card_power_on = rts5229_card_power_on,
.card_power_off = rts5229_card_power_off,
+ .switch_output_voltage = rts5229_switch_output_voltage,
.cd_deglitch = NULL,
+ .conv_clk_and_div_n = NULL,
};
/* SD Pull Control Enable:
if (clk == pcr->cur_clock)
return 0;
- N = (u8)(clk - 2);
+ if (pcr->ops->conv_clk_and_div_n)
+ N = (u8)pcr->ops->conv_clk_and_div_n(clk, CLK_TO_DIV_N);
+ else
+ N = (u8)(clk - 2);
if ((clk <= 2) || (N > max_N))
return -EINVAL;
/* Make sure that the SSC clock div_n is equal or greater than min_N */
div = CLK_DIV_1;
while ((N < min_N) && (div < max_div)) {
- N = (N + 2) * 2 - 2;
+ if (pcr->ops->conv_clk_and_div_n) {
+ int dbl_clk = pcr->ops->conv_clk_and_div_n(N,
+ DIV_N_TO_CLK) * 2;
+ N = (u8)pcr->ops->conv_clk_and_div_n(dbl_clk,
+ CLK_TO_DIV_N);
+ } else {
+ N = (N + 2) * 2 - 2;
+ }
div++;
}
dev_dbg(&(pcr->pci->dev), "N = %d, div = %d\n", N, div);
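One pass through the doubling loop above for both chip families, assuming an
illustrative clock of 50 and that min_N forces a single doubling:

	generic:  N = 50 - 2 = 48
	          doubled: N = (48 + 2) * 2 - 2 = 98           (effective clk 100)

	rtl8411:  N = 50 * 4 / 5 - 2 = 38
	          doubled: dbl_clk = (38 + 2) * 5 / 4 * 2 = 100
	                   N = 100 * 4 / 5 - 2 = 78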
}
EXPORT_SYMBOL_GPL(rtsx_pci_card_power_off);
+int rtsx_pci_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage)
+{
+ if (pcr->ops->switch_output_voltage)
+ return pcr->ops->switch_output_voltage(pcr, voltage);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(rtsx_pci_switch_output_voltage);
+
unsigned int rtsx_pci_card_exist(struct rtsx_pcr *pcr)
{
unsigned int val;
spin_unlock_irqrestore(&pcr->lock, flags);
- if (card_detect & SD_EXIST)
+ if ((card_detect & SD_EXIST) && pcr->slots[RTSX_SD_CARD].card_event)
pcr->slots[RTSX_SD_CARD].card_event(
pcr->slots[RTSX_SD_CARD].p_dev);
- if (card_detect & MS_EXIST)
+ if ((card_detect & MS_EXIST) && pcr->slots[RTSX_MS_CARD].card_event)
pcr->slots[RTSX_MS_CARD].card_event(
pcr->slots[RTSX_MS_CARD].p_dev);
}
}
static struct irq_domain_ops tc3589x_irq_ops = {
- .map = tc3589x_irq_map,
+ .map = tc3589x_irq_map,
.unmap = tc3589x_irq_unmap,
- .xlate = irq_domain_xlate_twocell,
+ .xlate = irq_domain_xlate_twocell,
};
static int tc3589x_irq_init(struct tc3589x *tc3589x, struct device_node *np)
{
int base = tc3589x->irq_base;
- if (base) {
- tc3589x->domain = irq_domain_add_legacy(
- NULL, TC3589x_NR_INTERNAL_IRQS, base,
- 0, &tc3589x_irq_ops, tc3589x);
- }
- else {
- tc3589x->domain = irq_domain_add_linear(
- np, TC3589x_NR_INTERNAL_IRQS,
- &tc3589x_irq_ops, tc3589x);
- }
+ tc3589x->domain = irq_domain_add_simple(
+ np, TC3589x_NR_INTERNAL_IRQS, base,
+ &tc3589x_irq_ops, tc3589x);
if (!tc3589x->domain) {
dev_err(tc3589x->dev, "Failed to create irqdomain\n");
static int twl4030_write_script(u8 address, struct twl4030_ins *script,
int len)
{
- int err;
+ int err = -EINVAL;
for (; len; len--, address++, script++) {
if (len == 1) {
return bridge;
}
+EXPORT_SYMBOL(vexpress_config_bridge_register);
void vexpress_config_bridge_unregister(struct vexpress_config_bridge *bridge)
{
while (!list_empty(&__bridge.transactions))
cpu_relax();
}
+EXPORT_SYMBOL(vexpress_config_bridge_unregister);
struct vexpress_config_func {
return func;
}
+EXPORT_SYMBOL(__vexpress_config_func_get);
void vexpress_config_func_put(struct vexpress_config_func *func)
{
of_node_put(func->bridge->node);
kfree(func);
}
-
+EXPORT_SYMBOL(vexpress_config_func_put);
struct vexpress_config_trans {
struct vexpress_config_func *func;
complete(&trans->completion);
}
+EXPORT_SYMBOL(vexpress_config_complete);
int vexpress_config_wait(struct vexpress_config_trans *trans)
{
return trans->status;
}
-
+EXPORT_SYMBOL(vexpress_config_wait);
int vexpress_config_read(struct vexpress_config_func *func, int offset,
u32 *data)
}
-void __init vexpress_sysreg_early_init(void __iomem *base)
+void __init vexpress_sysreg_setup(struct device_node *node)
{
- struct device_node *node = of_find_compatible_node(NULL, NULL,
- "arm,vexpress-sysreg");
-
- if (node)
- base = of_iomap(node, 0);
-
- if (WARN_ON(!base))
+ if (WARN_ON(!vexpress_sysreg_base))
return;
- vexpress_sysreg_base = base;
-
if (readl(vexpress_sysreg_base + SYS_MISC) & SYS_MISC_MASTERSITE)
vexpress_master_site = VEXPRESS_SITE_DB2;
else
WARN_ON(!vexpress_sysreg_config_bridge);
}
+void __init vexpress_sysreg_early_init(void __iomem *base)
+{
+ vexpress_sysreg_base = base;
+ vexpress_sysreg_setup(NULL);
+}
+
void __init vexpress_sysreg_of_early_init(void)
{
- vexpress_sysreg_early_init(NULL);
+ struct device_node *node = of_find_compatible_node(NULL, NULL,
+ "arm,vexpress-sysreg");
+
+ if (node) {
+ vexpress_sysreg_base = of_iomap(node, 0);
+ vexpress_sysreg_setup(node);
+ } else {
+ pr_info("vexpress-sysreg: No Device Tree node found.");
+ }
}
return -EBUSY;
}
- if (!vexpress_sysreg_base)
+ if (!vexpress_sysreg_base) {
vexpress_sysreg_base = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
+ vexpress_sysreg_setup(pdev->dev.of_node);
+ }
if (!vexpress_sysreg_base) {
dev_err(&pdev->dev, "Failed to obtain base address!\n");
}
}
-#define WM5102_MAX_REGISTER 0x1a8fff
+#define WM5102_MAX_REGISTER 0x1a9800
const struct regmap_config wm5102_spi_regmap = {
.reg_bits = 32,
if (pdata->chip_enable)
pdata->chip_enable(kim_gdata);
+ /* Configure BT nShutdown to HIGH state */
+ gpio_set_value(kim_gdata->nshutdown, GPIO_LOW);
+ mdelay(5); /* FIXME: a proper toggle */
+ gpio_set_value(kim_gdata->nshutdown, GPIO_HIGH);
+ mdelay(100);
/* re-initialize the completion */
INIT_COMPLETION(kim_gdata->ldisc_installed);
/* send notification to UIM */
* (b) upon failure to either install ldisc or download firmware.
* The function is responsible to (a) notify UIM about un-installation,
* (b) flush UART if the ldisc was installed.
- * (c) invoke platform's chip disabling routine.
+ * (c) reset BT_EN - pull down nshutdown at the end.
+ * (d) invoke platform's chip disabling routine.
*/
long st_kim_stop(void *kim_data)
{
err = -ETIMEDOUT;
}
+ /* By default configure BT nShutdown to LOW state */
+ gpio_set_value(kim_gdata->nshutdown, GPIO_LOW);
+ mdelay(1);
+ gpio_set_value(kim_gdata->nshutdown, GPIO_HIGH);
+ mdelay(1);
+ gpio_set_value(kim_gdata->nshutdown, GPIO_LOW);
+
/* platform specific disable */
if (pdata->chip_disable)
pdata->chip_disable(kim_gdata);
/* refer to itself */
kim_gdata->core_data->kim_data = kim_gdata;
+ /* Claim the chip enable nShutdown gpio from the system */
+ kim_gdata->nshutdown = pdata->nshutdown_gpio;
+ err = gpio_request(kim_gdata->nshutdown, "kim");
+ if (unlikely(err)) {
+ pr_err(" gpio %ld request failed ", kim_gdata->nshutdown);
+ return err;
+ }
+
+ /* Configure nShutdown GPIO as output=0 */
+ err = gpio_direction_output(kim_gdata->nshutdown, 0);
+ if (unlikely(err)) {
+ pr_err(" unable to configure gpio %ld", kim_gdata->nshutdown);
+ return err;
+ }
/* get reference of pdev for request_firmware
*/
kim_gdata->kim_pdev = pdev;
static int kim_remove(struct platform_device *pdev)
{
+ /* free the GPIOs requested */
+ struct ti_st_plat_data *pdata = pdev->dev.platform_data;
struct kim_data_s *kim_gdata;
kim_gdata = dev_get_drvdata(&pdev->dev);
+ /* Free the Bluetooth/FM/GPIO
+ * nShutdown gpio from the system
+ */
+ gpio_free(pdata->nshutdown_gpio);
+ pr_info("nshutdown GPIO Freed");
+
debugfs_remove_recursive(kim_debugfs_dir);
sysfs_remove_group(&pdev->dev.kobj, &uim_attr_grp);
pr_info("sysfs entries removed");
struct timer_list timer;
struct mmc_host *mmc;
struct device *dev;
- struct resource *res;
- int irq;
struct clk *clk;
int gpio_card_detect;
int gpio_write_protect;
if (!r || irq < 0 || !mvsd_data)
return -ENXIO;
- r = request_mem_region(r->start, SZ_1K, DRIVER_NAME);
- if (!r)
- return -EBUSY;
-
mmc = mmc_alloc_host(sizeof(struct mvsd_host), &pdev->dev);
if (!mmc) {
ret = -ENOMEM;
host = mmc_priv(mmc);
host->mmc = mmc;
host->dev = &pdev->dev;
- host->res = r;
host->base_clock = mvsd_data->clock / 2;
+ host->clk = ERR_PTR(-EINVAL);
mmc->ops = &mvsd_ops;
spin_lock_init(&host->lock);
- host->base = ioremap(r->start, SZ_4K);
+ host->base = devm_request_and_ioremap(&pdev->dev, r);
if (!host->base) {
ret = -ENOMEM;
goto out;
mvsd_power_down(host);
- ret = request_irq(irq, mvsd_irq, 0, DRIVER_NAME, host);
+ ret = devm_request_irq(&pdev->dev, irq, mvsd_irq, 0, DRIVER_NAME, host);
if (ret) {
pr_err("%s: cannot assign irq %d\n", DRIVER_NAME, irq);
goto out;
- } else
- host->irq = irq;
+ }
/* Not all platforms can gate the clock, so it is not
an error if the clock does not exist. */
- host->clk = clk_get(&pdev->dev, NULL);
- if (!IS_ERR(host->clk)) {
+ host->clk = devm_clk_get(&pdev->dev, NULL);
+ if (!IS_ERR(host->clk))
clk_prepare_enable(host->clk);
- }
if (mvsd_data->gpio_card_detect) {
- ret = gpio_request(mvsd_data->gpio_card_detect,
- DRIVER_NAME " cd");
+ ret = devm_gpio_request_one(&pdev->dev,
+ mvsd_data->gpio_card_detect,
+ GPIOF_IN, DRIVER_NAME " cd");
if (ret == 0) {
- gpio_direction_input(mvsd_data->gpio_card_detect);
irq = gpio_to_irq(mvsd_data->gpio_card_detect);
- ret = request_irq(irq, mvsd_card_detect_irq,
- IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING,
- DRIVER_NAME " cd", host);
+ ret = devm_request_irq(&pdev->dev, irq,
+ mvsd_card_detect_irq,
+ IRQ_TYPE_EDGE_RISING |
+ IRQ_TYPE_EDGE_FALLING,
+ DRIVER_NAME " cd", host);
if (ret == 0)
host->gpio_card_detect =
mvsd_data->gpio_card_detect;
else
- gpio_free(mvsd_data->gpio_card_detect);
+ devm_gpio_free(&pdev->dev,
+ mvsd_data->gpio_card_detect);
}
}
if (!host->gpio_card_detect)
mmc->caps |= MMC_CAP_NEEDS_POLL;
if (mvsd_data->gpio_write_protect) {
- ret = gpio_request(mvsd_data->gpio_write_protect,
- DRIVER_NAME " wp");
+ ret = devm_gpio_request_one(&pdev->dev,
+ mvsd_data->gpio_write_protect,
+ GPIOF_IN, DRIVER_NAME " wp");
if (ret == 0) {
- gpio_direction_input(mvsd_data->gpio_write_protect);
host->gpio_write_protect =
mvsd_data->gpio_write_protect;
}
return 0;
out:
- if (host) {
- if (host->irq)
- free_irq(host->irq, host);
- if (host->gpio_card_detect) {
- free_irq(gpio_to_irq(host->gpio_card_detect), host);
- gpio_free(host->gpio_card_detect);
- }
- if (host->gpio_write_protect)
- gpio_free(host->gpio_write_protect);
- if (host->base)
- iounmap(host->base);
- }
- if (r)
- release_resource(r);
- if (mmc)
- if (!IS_ERR_OR_NULL(host->clk)) {
+ if (mmc) {
+ if (!IS_ERR(host->clk))
clk_disable_unprepare(host->clk);
- clk_put(host->clk);
- }
mmc_free_host(mmc);
+ }
return ret;
}
{
struct mmc_host *mmc = platform_get_drvdata(pdev);
- if (mmc) {
- struct mvsd_host *host = mmc_priv(mmc);
+ struct mvsd_host *host = mmc_priv(mmc);
- if (host->gpio_card_detect) {
- free_irq(gpio_to_irq(host->gpio_card_detect), host);
- gpio_free(host->gpio_card_detect);
- }
- mmc_remove_host(mmc);
- free_irq(host->irq, host);
- if (host->gpio_write_protect)
- gpio_free(host->gpio_write_protect);
- del_timer_sync(&host->timer);
- mvsd_power_down(host);
- iounmap(host->base);
- release_resource(host->res);
+ mmc_remove_host(mmc);
+ del_timer_sync(&host->timer);
+ mvsd_power_down(host);
+
+ if (!IS_ERR(host->clk))
+ clk_disable_unprepare(host->clk);
+ mmc_free_host(mmc);
- if (!IS_ERR(host->clk)) {
- clk_disable_unprepare(host->clk);
- clk_put(host->clk);
- }
- mmc_free_host(mmc);
- }
platform_set_drvdata(pdev, NULL);
return 0;
}
return 0;
}
-static int sd_change_bank_voltage(struct realtek_pci_sdmmc *host, u8 voltage)
-{
- struct rtsx_pcr *pcr = host->pcr;
- int err;
-
- if (voltage == SD_IO_3V3) {
- err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4FC0 | 0x24);
- if (err < 0)
- return err;
- } else if (voltage == SD_IO_1V8) {
- err = rtsx_pci_write_phy_register(pcr, 0x08, 0x4C40 | 0x24);
- if (err < 0)
- return err;
- } else {
- return -EINVAL;
- }
-
- return 0;
-}
-
static int sdmmc_switch_voltage(struct mmc_host *mmc, struct mmc_ios *ios)
{
struct realtek_pci_sdmmc *host = mmc_priv(mmc);
rtsx_pci_start_run(pcr);
if (ios->signal_voltage == MMC_SIGNAL_VOLTAGE_330)
- voltage = SD_IO_3V3;
+ voltage = OUTPUT_3V3;
else
- voltage = SD_IO_1V8;
+ voltage = OUTPUT_1V8;
- if (voltage == SD_IO_1V8) {
+ if (voltage == OUTPUT_1V8) {
err = rtsx_pci_write_register(pcr,
SD30_DRIVE_SEL, 0x07, DRIVER_TYPE_B);
if (err < 0)
goto out;
}
- err = sd_change_bank_voltage(host, voltage);
+ err = rtsx_pci_switch_output_voltage(pcr, voltage);
if (err < 0)
goto out;
- if (voltage == SD_IO_1V8) {
+ if (voltage == OUTPUT_1V8) {
err = sd_wait_voltage_stable_2(host);
if (err < 0)
goto out;
tristate "M-Systems Disk-On-Chip G3"
select BCH
select BCH_CONST_PARAMS
+ select BITREVERSE
---help---
This provides an MTD device driver for the M-Systems DiskOnChip
G3 devices.
resource_size_t res_size;
struct mtd_part_parser_data ppdata;
bool map_indirect;
- const char *mtd_name;
+ const char *mtd_name = NULL;
match = of_match_device(of_flash_match, &dev->dev);
if (!match)
#include "bcm47xxnflash.h"
/* Broadcom uses 1'000'000 but it seems to be too many. Tests on WNDR4500 have
- * shown 164 retries as maxiumum. */
-#define NFLASH_READY_RETRIES 1000
+ * shown ~1000 retries as maximum. */
+#define NFLASH_READY_RETRIES 10000
#define NFLASH_SECTOR_SIZE 512
static const struct of_device_id davinci_nand_of_match[] = {
{.compatible = "ti,davinci-nand", },
{},
-}
+};
MODULE_DEVICE_TABLE(of, davinci_nand_of_match);
static struct davinci_nand_pdata
int i;
int val;
- /* ONFI need to be probed in 8 bits mode */
- WARN_ON(chip->options & NAND_BUSWIDTH_16);
+ /* ONFI needs to be probed in 8-bit mode; 16-bit mode should be
+ * selected with NAND_BUSWIDTH_AUTO */
+ if (chip->options & NAND_BUSWIDTH_16) {
+ pr_err("Trying ONFI probe in 16-bit mode, aborting!\n");
+ return 0;
+ }
/* Try ONFI for unknown chip or LP */
chip->cmdfunc(mtd, NAND_CMD_READID, 0x20, -1);
if (chip->read_byte(mtd) != 'O' || chip->read_byte(mtd) != 'N' ||
pr_info("%s: Setting primary slave to None.\n",
bond->dev->name);
bond->primary_slave = NULL;
+ memset(bond->params.primary, 0, sizeof(bond->params.primary));
bond_select_active_slave(bond);
goto out;
}
priv->write_reg(priv, C_CAN_IFACE(MASK1_REG, iface),
IFX_WRITE_LOW_16BIT(mask));
+
+ /* According to the C_CAN documentation, the reserved bit
+ * in the IFx_MASK2 register is fixed to 1.
+ */
priv->write_reg(priv, C_CAN_IFACE(MASK2_REG, iface),
- IFX_WRITE_HIGH_16BIT(mask));
+ IFX_WRITE_HIGH_16BIT(mask) | BIT(13));
priv->write_reg(priv, C_CAN_IFACE(ARB1_REG, iface),
IFX_WRITE_LOW_16BIT(id));
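Assuming IFX_WRITE_HIGH_16BIT() extracts bits 31:16 of the mask, the write
above lands like this for a full 29-bit extended-ID mask (illustrative value):

	mask = 0x1FFFFFFF
	IFX_WRITE_HIGH_16BIT(mask) = 0x1FFF
	0x1FFF | BIT(13)           = 0x3FFF   /* reserved bit 13 forced to 1 */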
break;
case LEC_ACK_ERROR:
netdev_dbg(dev, "ack error\n");
- cf->data[2] |= (CAN_ERR_PROT_LOC_ACK |
+ cf->data[3] |= (CAN_ERR_PROT_LOC_ACK |
CAN_ERR_PROT_LOC_ACK_DEL);
break;
case LEC_BIT1_ERROR:
break;
case LEC_CRC_ERROR:
netdev_dbg(dev, "CRC error\n");
- cf->data[2] |= (CAN_ERR_PROT_LOC_CRC_SEQ |
+ cf->data[3] |= (CAN_ERR_PROT_LOC_CRC_SEQ |
CAN_ERR_PROT_LOC_CRC_DEL);
break;
default:
stats->rx_errors++;
break;
case PCH_CRC_ERR:
- cf->data[2] |= CAN_ERR_PROT_LOC_CRC_SEQ |
+ cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ |
CAN_ERR_PROT_LOC_CRC_DEL;
priv->can.can_stats.bus_error++;
stats->rx_errors++;
}
if (err_status & HECC_CANES_CRCE) {
hecc_set_bit(priv, HECC_CANES, HECC_CANES_CRCE);
- cf->data[2] |= CAN_ERR_PROT_LOC_CRC_SEQ |
+ cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ |
CAN_ERR_PROT_LOC_CRC_DEL;
}
if (err_status & HECC_CANES_ACKE) {
hecc_set_bit(priv, HECC_CANES, HECC_CANES_ACKE);
- cf->data[2] |= CAN_ERR_PROT_LOC_ACK |
+ cf->data[3] |= CAN_ERR_PROT_LOC_ACK |
CAN_ERR_PROT_LOC_ACK_DEL;
}
}
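For context on the data[2] -> data[3] changes above: in <linux/can/error.h>
the CAN_ERR_PROT_LOC_* values describe the location of a protocol error and
are reported in data[3], while data[2] carries the CAN_ERR_PROT_* error-type
flags, so storing the location codes in data[2] corrupted the type field.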
netdev_info(dev, "%s at io %#3lx, irq %d, hw_addr %pM\n",
cardname, dev->base_addr, dev->irq, dev->dev_addr);
netdev_info(dev, " %dK FIFO split %s Rx:Tx, %sMII interface.\n",
- 8 << config & Ram_size,
+ 8 << (config & Ram_size),
ram_split[(config & Ram_split) >> Ram_split_shift],
config & Autoselect ? "autoselect " : "");
return tg3_writephy(tp, MII_TG3_AUX_CTRL, set | reg);
}
-#define TG3_PHY_AUXCTL_SMDSP_ENABLE(tp) \
- tg3_phy_auxctl_write((tp), MII_TG3_AUXCTL_SHDWSEL_AUXCTL, \
- MII_TG3_AUXCTL_ACTL_SMDSP_ENA | \
- MII_TG3_AUXCTL_ACTL_TX_6DB)
+static int tg3_phy_toggle_auxctl_smdsp(struct tg3 *tp, bool enable)
+{
+ u32 val;
+ int err;
-#define TG3_PHY_AUXCTL_SMDSP_DISABLE(tp) \
- tg3_phy_auxctl_write((tp), MII_TG3_AUXCTL_SHDWSEL_AUXCTL, \
- MII_TG3_AUXCTL_ACTL_TX_6DB);
+ err = tg3_phy_auxctl_read(tp, MII_TG3_AUXCTL_SHDWSEL_AUXCTL, &val);
+
+ if (err)
+ return err;
+
+ if (enable)
+ val |= MII_TG3_AUXCTL_ACTL_SMDSP_ENA;
+ else
+ val &= ~MII_TG3_AUXCTL_ACTL_SMDSP_ENA;
+
+ err = tg3_phy_auxctl_write((tp), MII_TG3_AUXCTL_SHDWSEL_AUXCTL,
+ val | MII_TG3_AUXCTL_ACTL_TX_6DB);
+
+ return err;
+}
static int tg3_bmcr_reset(struct tg3 *tp)
{
otp = tp->phy_otp;
- if (TG3_PHY_AUXCTL_SMDSP_ENABLE(tp))
+ if (tg3_phy_toggle_auxctl_smdsp(tp, true))
return;
phy = ((otp & TG3_OTP_AGCTGT_MASK) >> TG3_OTP_AGCTGT_SHIFT);
((otp & TG3_OTP_RCOFF_MASK) >> TG3_OTP_RCOFF_SHIFT);
tg3_phydsp_write(tp, MII_TG3_DSP_EXP97, phy);
- TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+ tg3_phy_toggle_auxctl_smdsp(tp, false);
}
static void tg3_phy_eee_adjust(struct tg3 *tp, u32 current_link_up)
if (!tp->setlpicnt) {
if (current_link_up == 1 &&
- !TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+ !tg3_phy_toggle_auxctl_smdsp(tp, true)) {
tg3_phydsp_write(tp, MII_TG3_DSP_TAP26, 0x0000);
- TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+ tg3_phy_toggle_auxctl_smdsp(tp, false);
}
val = tr32(TG3_CPMU_EEE_MODE);
(GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5717 ||
GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719 ||
tg3_flag(tp, 57765_CLASS)) &&
- !TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+ !tg3_phy_toggle_auxctl_smdsp(tp, true)) {
val = MII_TG3_DSP_TAP26_ALNOKO |
MII_TG3_DSP_TAP26_RMRXSTO;
tg3_phydsp_write(tp, MII_TG3_DSP_TAP26, val);
- TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+ tg3_phy_toggle_auxctl_smdsp(tp, false);
}
val = tr32(TG3_CPMU_EEE_MODE);
tg3_writephy(tp, MII_CTRL1000,
CTL1000_AS_MASTER | CTL1000_ENABLE_MASTER);
- err = TG3_PHY_AUXCTL_SMDSP_ENABLE(tp);
+ err = tg3_phy_toggle_auxctl_smdsp(tp, true);
if (err)
return err;
tg3_writephy(tp, MII_TG3_DSP_ADDRESS, 0x8200);
tg3_writephy(tp, MII_TG3_DSP_CONTROL, 0x0000);
- TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+ tg3_phy_toggle_auxctl_smdsp(tp, false);
tg3_writephy(tp, MII_CTRL1000, phy9_orig);
out:
if ((tp->phy_flags & TG3_PHYFLG_ADC_BUG) &&
- !TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+ !tg3_phy_toggle_auxctl_smdsp(tp, true)) {
tg3_phydsp_write(tp, 0x201f, 0x2aaa);
tg3_phydsp_write(tp, 0x000a, 0x0323);
- TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+ tg3_phy_toggle_auxctl_smdsp(tp, false);
}
if (tp->phy_flags & TG3_PHYFLG_5704_A0_BUG) {
}
if (tp->phy_flags & TG3_PHYFLG_BER_BUG) {
- if (!TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+ if (!tg3_phy_toggle_auxctl_smdsp(tp, true)) {
tg3_phydsp_write(tp, 0x000a, 0x310b);
tg3_phydsp_write(tp, 0x201f, 0x9506);
tg3_phydsp_write(tp, 0x401f, 0x14e2);
- TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+ tg3_phy_toggle_auxctl_smdsp(tp, false);
}
} else if (tp->phy_flags & TG3_PHYFLG_JITTER_BUG) {
- if (!TG3_PHY_AUXCTL_SMDSP_ENABLE(tp)) {
+ if (!tg3_phy_toggle_auxctl_smdsp(tp, true)) {
tg3_writephy(tp, MII_TG3_DSP_ADDRESS, 0x000a);
if (tp->phy_flags & TG3_PHYFLG_ADJUST_TRIM) {
tg3_writephy(tp, MII_TG3_DSP_RW_PORT, 0x110b);
} else
tg3_writephy(tp, MII_TG3_DSP_RW_PORT, 0x010b);
- TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+ tg3_phy_toggle_auxctl_smdsp(tp, false);
}
}
tw32(TG3_CPMU_EEE_MODE,
tr32(TG3_CPMU_EEE_MODE) & ~TG3_CPMU_EEEMD_LPI_ENABLE);
- err = TG3_PHY_AUXCTL_SMDSP_ENABLE(tp);
+ err = tg3_phy_toggle_auxctl_smdsp(tp, true);
if (!err) {
u32 err2;
MII_TG3_DSP_CH34TP2_HIBW01);
}
- err2 = TG3_PHY_AUXCTL_SMDSP_DISABLE(tp);
+ err2 = tg3_phy_toggle_auxctl_smdsp(tp, false);
if (!err)
err = err2;
}
int i;
struct tg3 *tp = netdev_priv(dev);
+ if (tg3_irq_sync(tp))
+ return;
+
for (i = 0; i < tp->irq_cnt; i++)
tg3_interrupt(tp->napi[i].irq_vec, &tp->napi[i]);
}
tp->pm_cap = pm_cap;
tp->rx_mode = TG3_DEF_RX_MODE;
tp->tx_mode = TG3_DEF_TX_MODE;
+ tp->irq_sync = 1;
if (tg3_debug > 0)
tp->msg_enable = tg3_debug;
return -1;
}
+ /* All frames should fit into a single buffer */
+ if (!(status & RXDESC_FIRST_SEG) || !(status & RXDESC_LAST_SEG))
+ return -1;
+
/* Check if packet has checksum already */
if ((status & RXDESC_FRAME_TYPE) && (status & RXDESC_EXT_STATUS) &&
!(ext_status & RXDESC_IP_PAYLOAD_MASK))
{
const struct port_info *pi = netdev_priv(dev);
struct adapter *adap = pi->adapter;
-
- return set_rxq_intr_params(adap, &adap->sge.ethrxq[pi->first_qset].rspq,
- c->rx_coalesce_usecs, c->rx_max_coalesced_frames);
+ struct sge_rspq *q;
+ int i;
+ int r = 0;
+
+ for (i = pi->first_qset; i < pi->first_qset + pi->nqsets; i++) {
+ q = &adap->sge.ethrxq[i].rspq;
+ r = set_rxq_intr_params(adap, q, c->rx_coalesce_usecs,
+ c->rx_max_coalesced_frames);
+ if (r) {
+ dev_err(&dev->dev, "failed to set coalesce %d\n", r);
+ break;
+ }
+ }
+ return r;
}
static int get_coalesce(struct net_device *dev, struct ethtool_coalesce *c)
#define DRV_VER "4.4.161.0u"
#define DRV_NAME "be2net"
-#define BE_NAME "ServerEngines BladeEngine2 10Gbps NIC"
-#define BE3_NAME "ServerEngines BladeEngine3 10Gbps NIC"
-#define OC_NAME "Emulex OneConnect 10Gbps NIC"
+#define BE_NAME "Emulex BladeEngine2"
+#define BE3_NAME "Emulex BladeEngine3"
+#define OC_NAME "Emulex OneConnect"
#define OC_NAME_BE OC_NAME "(be3)"
#define OC_NAME_LANCER OC_NAME "(Lancer)"
#define OC_NAME_SH OC_NAME "(Skyhawk)"
-#define DRV_DESC "ServerEngines BladeEngine 10Gbps NIC Driver"
+#define DRV_DESC "Emulex OneConnect 10Gbps NIC Driver"
#define BE_VENDOR_ID 0x19a2
#define EMULEX_VENDOR_ID 0x10df
MODULE_VERSION(DRV_VER);
MODULE_DEVICE_TABLE(pci, be_dev_ids);
MODULE_DESCRIPTION(DRV_DESC " " DRV_VER);
-MODULE_AUTHOR("ServerEngines Corporation");
+MODULE_AUTHOR("Emulex Corporation");
MODULE_LICENSE("GPL");
static unsigned int num_vfs;
#define E1000_CTRL_FRCDPX 0x00001000 /* Force Duplex */
#define E1000_CTRL_LANPHYPC_OVERRIDE 0x00010000 /* SW control of LANPHYPC */
#define E1000_CTRL_LANPHYPC_VALUE 0x00020000 /* SW value of LANPHYPC */
+#define E1000_CTRL_MEHE 0x00080000 /* Memory Error Handling Enable */
#define E1000_CTRL_SWDPIN0 0x00040000 /* SWDPIN 0 value */
#define E1000_CTRL_SWDPIN1 0x00080000 /* SWDPIN 1 value */
#define E1000_CTRL_SWDPIO0 0x00400000 /* SWDPIN 0 Input or output */
#define E1000_PBS_16K E1000_PBA_16K
+/* Uncorrectable/correctable ECC Error counts and enable bits */
+#define E1000_PBECCSTS_CORR_ERR_CNT_MASK 0x000000FF
+#define E1000_PBECCSTS_UNCORR_ERR_CNT_MASK 0x0000FF00
+#define E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT 8
+#define E1000_PBECCSTS_ECC_ENABLE 0x00010000
+
#define IFS_MAX 80
#define IFS_MIN 40
#define IFS_RATIO 4
#define E1000_ICR_RXSEQ 0x00000008 /* Rx sequence error */
#define E1000_ICR_RXDMT0 0x00000010 /* Rx desc min. threshold (0) */
#define E1000_ICR_RXT0 0x00000080 /* Rx timer intr (ring 0) */
+#define E1000_ICR_ECCER 0x00400000 /* Uncorrectable ECC Error */
#define E1000_ICR_INT_ASSERTED 0x80000000 /* If this bit asserted, the driver should claim the interrupt */
#define E1000_ICR_RXQ0 0x00100000 /* Rx Queue 0 Interrupt */
#define E1000_ICR_RXQ1 0x00200000 /* Rx Queue 1 Interrupt */
#define E1000_IMS_RXSEQ E1000_ICR_RXSEQ /* Rx sequence error */
#define E1000_IMS_RXDMT0 E1000_ICR_RXDMT0 /* Rx desc min. threshold */
#define E1000_IMS_RXT0 E1000_ICR_RXT0 /* Rx timer intr */
+#define E1000_IMS_ECCER E1000_ICR_ECCER /* Uncorrectable ECC Error */
#define E1000_IMS_RXQ0 E1000_ICR_RXQ0 /* Rx Queue 0 Interrupt */
#define E1000_IMS_RXQ1 E1000_ICR_RXQ1 /* Rx Queue 1 Interrupt */
#define E1000_IMS_TXQ0 E1000_ICR_TXQ0 /* Tx Queue 0 Interrupt */
struct napi_struct napi;
+ unsigned int uncorr_errors; /* uncorrectable ECC errors */
+ unsigned int corr_errors; /* correctable ECC errors */
unsigned int restart_queue;
u32 txd_cmd;
E1000_STAT("dropped_smbus", stats.mgpdc),
E1000_STAT("rx_dma_failed", rx_dma_failed),
E1000_STAT("tx_dma_failed", tx_dma_failed),
+ E1000_STAT("uncorr_ecc_errors", uncorr_errors),
+ E1000_STAT("corr_ecc_errors", corr_errors),
};
#define E1000_GLOBAL_STATS_LEN ARRAY_SIZE(e1000_gstrings_stats)
#define E1000_POEMB E1000_PHY_CTRL /* PHY OEM Bits */
E1000_PBA = 0x01000, /* Packet Buffer Allocation - RW */
E1000_PBS = 0x01008, /* Packet Buffer Size */
+ E1000_PBECCSTS = 0x0100C, /* Packet Buffer ECC Status - RW */
E1000_EEMNGCTL = 0x01010, /* MNG EEprom Control */
E1000_EEWR = 0x0102C, /* EEPROM Write Register - RW */
E1000_FLOP = 0x0103C, /* FLASH Opcode Register */
if (hw->mac.type == e1000_ich8lan)
reg |= (E1000_RFCTL_IPV6_EX_DIS | E1000_RFCTL_NEW_IPV6_EXT_DIS);
ew32(RFCTL, reg);
+
+ /* Enable ECC on Lynxpoint */
+ if (hw->mac.type == e1000_pch_lpt) {
+ reg = er32(PBECCSTS);
+ reg |= E1000_PBECCSTS_ECC_ENABLE;
+ ew32(PBECCSTS, reg);
+
+ reg = er32(CTRL);
+ reg |= E1000_CTRL_MEHE;
+ ew32(CTRL, reg);
+ }
}
/**
mod_timer(&adapter->watchdog_timer, jiffies + 1);
}
+ /* Reset on uncorrectable ECC error */
+ if ((icr & E1000_ICR_ECCER) && (hw->mac.type == e1000_pch_lpt)) {
+ u32 pbeccsts = er32(PBECCSTS);
+
+ adapter->corr_errors +=
+ pbeccsts & E1000_PBECCSTS_CORR_ERR_CNT_MASK;
+ adapter->uncorr_errors +=
+ (pbeccsts & E1000_PBECCSTS_UNCORR_ERR_CNT_MASK) >>
+ E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT;
+
+ /* Do the reset outside of interrupt context */
+ schedule_work(&adapter->reset_task);
+
+ /* return immediately since reset is imminent */
+ return IRQ_HANDLED;
+ }
+
if (napi_schedule_prep(&adapter->napi)) {
adapter->total_tx_bytes = 0;
adapter->total_tx_packets = 0;
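A worked pass through the counter extraction above, for an illustrative (not
logged) register value:

	pbeccsts = 0x00010302
	corr_errors   += 0x00010302 & 0x000000FF        = 2   /* correctable   */
	uncorr_errors += (0x00010302 & 0x0000FF00) >> 8 = 3   /* uncorrectable */
	bit 16 (0x00010000) is E1000_PBECCSTS_ECC_ENABLE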
mod_timer(&adapter->watchdog_timer, jiffies + 1);
}
+ /* Reset on uncorrectable ECC error */
+ if ((icr & E1000_ICR_ECCER) && (hw->mac.type == e1000_pch_lpt)) {
+ u32 pbeccsts = er32(PBECCSTS);
+
+ adapter->corr_errors +=
+ pbeccsts & E1000_PBECCSTS_CORR_ERR_CNT_MASK;
+ adapter->uncorr_errors +=
+ (pbeccsts & E1000_PBECCSTS_UNCORR_ERR_CNT_MASK) >>
+ E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT;
+
+ /* Do the reset outside of interrupt context */
+ schedule_work(&adapter->reset_task);
+
+ /* return immediately since reset is imminent */
+ return IRQ_HANDLED;
+ }
+
if (napi_schedule_prep(&adapter->napi)) {
adapter->total_tx_bytes = 0;
adapter->total_tx_packets = 0;
if (adapter->msix_entries) {
ew32(EIAC_82574, adapter->eiac_mask & E1000_EIAC_MASK_82574);
ew32(IMS, adapter->eiac_mask | E1000_IMS_OTHER | E1000_IMS_LSC);
+ } else if (hw->mac.type == e1000_pch_lpt) {
+ ew32(IMS, IMS_ENABLE_MASK | E1000_IMS_ECCER);
} else {
ew32(IMS, IMS_ENABLE_MASK);
}
adapter->stats.mgptc += er32(MGTPTC);
adapter->stats.mgprc += er32(MGTPRC);
adapter->stats.mgpdc += er32(MGTPDC);
+
+ /* Correctable ECC Errors */
+ if (hw->mac.type == e1000_pch_lpt) {
+ u32 pbeccsts = er32(PBECCSTS);
+ adapter->corr_errors +=
+ pbeccsts & E1000_PBECCSTS_CORR_ERR_CNT_MASK;
+ adapter->uncorr_errors +=
+ (pbeccsts & E1000_PBECCSTS_UNCORR_ERR_CNT_MASK) >>
+ E1000_PBECCSTS_UNCORR_ERR_CNT_SHIFT;
+ }
}
/**
obj-$(CONFIG_IXGBE) += ixgbe.o
-ixgbe-objs := ixgbe_main.o ixgbe_common.o ixgbe_ethtool.o ixgbe_debugfs.o\
+ixgbe-objs := ixgbe_main.o ixgbe_common.o ixgbe_ethtool.o \
ixgbe_82599.o ixgbe_82598.o ixgbe_phy.o ixgbe_sriov.o \
ixgbe_mbx.o ixgbe_x540.o ixgbe_lib.o ixgbe_ptp.o
ixgbe_dcb_82599.o ixgbe_dcb_nl.o
ixgbe-$(CONFIG_IXGBE_HWMON) += ixgbe_sysfs.o
+ixgbe-$(CONFIG_DEBUG_FS) += ixgbe_debugfs.o
ixgbe-$(CONFIG_FCOE:m=y) += ixgbe_fcoe.o
Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
*******************************************************************************/
-
-#ifdef CONFIG_DEBUG_FS
-
#include <linux/debugfs.h>
#include <linux/module.h>
{
debugfs_remove_recursive(ixgbe_dbg_root);
}
-
-#endif /* CONFIG_DEBUG_FS */
break;
case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_L4_V1;
- tsync_rx_mtrl = IXGBE_RXMTRL_V1_SYNC_MSG;
+ tsync_rx_mtrl |= IXGBE_RXMTRL_V1_SYNC_MSG;
break;
case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
tsync_rx_ctl |= IXGBE_TSYNCRXCTL_TYPE_L4_V1;
- tsync_rx_mtrl = IXGBE_RXMTRL_V1_DELAY_REQ_MSG;
+ tsync_rx_mtrl |= IXGBE_RXMTRL_V1_DELAY_REQ_MSG;
break;
case HWTSTAMP_FILTER_PTP_V2_EVENT:
case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
ring->tx_csum++;
}
- /* Copy dst mac address to wqe */
- ethh = (struct ethhdr *)skb->data;
- tx_desc->ctrl.srcrb_flags16[0] = get_unaligned((__be16 *)ethh->h_dest);
- tx_desc->ctrl.imm = get_unaligned((__be32 *)(ethh->h_dest + 2));
+ if (mlx4_is_mfunc(mdev->dev) || priv->validate_loopback) {
+ /* Copy dst mac address to wqe. This allows loopback in eSwitch,
+ * so that VFs and PF can communicate with each other
+ */
+ ethh = (struct ethhdr *)skb->data;
+ tx_desc->ctrl.srcrb_flags16[0] = get_unaligned((__be16 *)ethh->h_dest);
+ tx_desc->ctrl.imm = get_unaligned((__be32 *)(ethh->h_dest + 2));
+ }
+
/* Handle LSO (TSO) packets */
if (lso_header_size) {
/* Mark opcode as LSO */
}
}
- if ((dev_cap->flags &
+ if ((dev->caps.flags &
(MLX4_DEV_CAP_FLAG_64B_CQE | MLX4_DEV_CAP_FLAG_64B_EQE)) &&
mlx4_is_master(dev))
dev->caps.function_caps |= MLX4_FUNC_CAP_64B_EQE_CQE;
int i;
if (msi_x) {
- /* In multifunction mode each function gets 2 msi-X vectors
- * one for data path completions anf the other for asynch events
- * or command completions */
- if (mlx4_is_mfunc(dev)) {
- nreq = 2;
- } else {
- nreq = min_t(int, dev->caps.num_eqs -
- dev->caps.reserved_eqs, nreq);
- }
+ nreq = min_t(int, dev->caps.num_eqs - dev->caps.reserved_eqs,
+ nreq);
entries = kcalloc(nreq, sizeof *entries, GFP_KERNEL);
if (!entries)
buffrag->length, PCI_DMA_TODEVICE);
buffrag->dma = 0ULL;
}
- for (j = 0; j < cmd_buf->frag_count; j++) {
+ for (j = 1; j < cmd_buf->frag_count; j++) {
buffrag++;
if (buffrag->dma) {
pci_unmap_page(adapter->pdev, buffrag->dma,
while (--i >= 0) {
nf = &pbuf->frag_array[i+1];
pci_unmap_page(pdev, nf->dma, nf->length, PCI_DMA_TODEVICE);
+ nf->dma = 0ULL;
}
nf = &pbuf->frag_array[0];
pci_unmap_single(pdev, nf->dma, skb_headlen(skb), PCI_DMA_TODEVICE);
+ nf->dma = 0ULL;
out_err:
return -ENOMEM;
if (opts2 & RxVlanTag)
__vlan_hwaccel_put_tag(skb, swab16(opts2 & 0xffff));
-
- desc->opts2 = 0;
}
static int rtl8169_gset_tbi(struct net_device *dev, struct ethtool_cmd *cmd)
!(status & (RxRWT | RxFOVF)) &&
(dev->features & NETIF_F_RXALL))
goto process_pkt;
-
- rtl8169_mark_to_asic(desc, rx_buf_sz);
} else {
struct sk_buff *skb;
dma_addr_t addr;
if (unlikely(rtl8169_fragmented_frame(status))) {
dev->stats.rx_dropped++;
dev->stats.rx_length_errors++;
- rtl8169_mark_to_asic(desc, rx_buf_sz);
- continue;
+ goto release_descriptor;
}
skb = rtl8169_try_rx_copy(tp->Rx_databuff[entry],
tp, pkt_size, addr);
- rtl8169_mark_to_asic(desc, rx_buf_sz);
if (!skb) {
dev->stats.rx_dropped++;
- continue;
+ goto release_descriptor;
}
rtl8169_rx_csum(skb, status);
tp->rx_stats.bytes += pkt_size;
u64_stats_update_end(&tp->rx_stats.syncp);
}
-
- /* Work around for AMD plateform. */
- if ((desc->opts2 & cpu_to_le32(0xfffe000)) &&
- (tp->mac_version == RTL_GIGA_MAC_VER_05)) {
- desc->opts2 = 0;
- cur_rx++;
- }
+release_descriptor:
+ desc->opts2 = 0;
+ wmb();
+ rtl8169_mark_to_asic(desc, rx_buf_sz);
}
count = cur_rx - tp->cur_rx;
rp->tx_skbuff[entry]->len,
PCI_DMA_TODEVICE);
}
- dev_kfree_skb_irq(rp->tx_skbuff[entry]);
+ dev_kfree_skb(rp->tx_skbuff[entry]);
rp->tx_skbuff[entry] = NULL;
entry = (++rp->dirty_tx) % TX_RING_SIZE;
}
if (intr_status & IntrPCIErr)
netif_warn(rp, hw, dev, "PCI error\n");
- napi_disable(&rp->napi);
- rhine_irq_disable(rp);
- /* Slow and safe. Consider __napi_schedule as a replacement ? */
- napi_enable(&rp->napi);
- napi_schedule(&rp->napi);
+ iowrite16(RHINE_EVENT & 0xffff, rp->base + IntrEnable);
out_unlock:
mutex_unlock(&rp->task_lock);
};
struct netvsc_device_info {
- unsigned char mac_adr[6];
+ unsigned char mac_adr[ETH_ALEN];
bool link_state; /* 0 - link up, 1 - link down */
int ring_size;
};
struct net_device_context *ndevctx = netdev_priv(ndev);
struct hv_device *hdev = ndevctx->device_ctx;
struct sockaddr *addr = p;
- char save_adr[14];
+ char save_adr[ETH_ALEN];
unsigned char save_aatype;
int err;
skb_orphan(skb);
+ /* Before queueing this packet to netif_rx(),
+ * make sure dst is refcounted.
+ */
+ skb_dst_force(skb);
+
skb->protocol = eth_type_trans(skb, dev);
/* it's OK to use per_cpu_ptr() because BHs are off */
static size_t macvlan_get_size(const struct net_device *dev)
{
- return nla_total_size(4);
+ return (0
+ + nla_total_size(4) /* IFLA_MACVLAN_MODE */
+ + nla_total_size(2) /* IFLA_MACVLAN_FLAGS */
+ );
}
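Why two terms: nla_total_size(n) is NLA_ALIGN(NLA_HDRLEN + n), with
NLA_HDRLEN = 4 and 4-byte alignment, so the size reported above works out to:

	nla_total_size(4) = NLA_ALIGN(4 + 4) = 8    /* IFLA_MACVLAN_MODE  */
	nla_total_size(2) = NLA_ALIGN(4 + 2) = 8    /* IFLA_MACVLAN_FLAGS */
	total                                = 16 bytes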
static int macvlan_fill_info(struct sk_buff *skb,
/* IP101A/G - IP1001 */
#define IP10XX_SPEC_CTRL_STATUS 16 /* Spec. Control Register */
+#define IP1001_RXPHASE_SEL (1<<0) /* Add delay on RX_CLK */
+#define IP1001_TXPHASE_SEL (1<<1) /* Add delay on TX_CLK */
#define IP1001_SPEC_CTRL_STATUS_2 20 /* IP1001 Spec. Control Reg 2 */
-#define IP1001_PHASE_SEL_MASK 3 /* IP1001 RX/TXPHASE_SEL */
#define IP1001_APS_ON 11 /* IP1001 APS Mode bit */
#define IP101A_G_APS_ON 2 /* IP101A/G APS Mode bit */
#define IP101A_G_IRQ_CONF_STATUS 0x11 /* Conf Info IRQ & Status Reg */
if (c < 0)
return c;
- /* INTR pin used: speed/link/duplex will cause an interrupt */
- c = phy_write(phydev, IP101A_G_IRQ_CONF_STATUS, IP101A_G_IRQ_DEFAULT);
- if (c < 0)
- return c;
+ if ((phydev->interface == PHY_INTERFACE_MODE_RGMII) ||
+ (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID) ||
+ (phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) ||
+ (phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)) {
- if (phydev->interface == PHY_INTERFACE_MODE_RGMII) {
- /* Additional delay (2ns) used to adjust RX clock phase
- * at RGMII interface */
c = phy_read(phydev, IP10XX_SPEC_CTRL_STATUS);
if (c < 0)
return c;
- c |= IP1001_PHASE_SEL_MASK;
+ c &= ~(IP1001_RXPHASE_SEL | IP1001_TXPHASE_SEL);
+
+ if (phydev->interface == PHY_INTERFACE_MODE_RGMII_ID)
+ c |= (IP1001_RXPHASE_SEL | IP1001_TXPHASE_SEL);
+ else if (phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
+ c |= IP1001_RXPHASE_SEL;
+ else if (phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
+ c |= IP1001_TXPHASE_SEL;
+
c = phy_write(phydev, IP10XX_SPEC_CTRL_STATUS, c);
if (c < 0)
return c;
if (c < 0)
return c;
+ /* INTR pin used: speed/link/duplex will cause an interrupt */
+ c = phy_write(phydev, IP101A_G_IRQ_CONF_STATUS, IP101A_G_IRQ_DEFAULT);
+ if (c < 0)
+ return c;
+
/* Enable Auto Power Saving mode */
c = phy_read(phydev, IP10XX_SPEC_CTRL_STATUS);
c |= IP101A_G_APS_ON;
int err;
int temp;
- /* Enable Fiber/Copper auto selection */
- temp = phy_read(phydev, MII_M1111_PHY_EXT_SR);
- temp &= ~MII_M1111_HWCFG_FIBER_COPPER_AUTO;
- phy_write(phydev, MII_M1111_PHY_EXT_SR, temp);
-
- temp = phy_read(phydev, MII_BMCR);
- temp |= BMCR_RESET;
- phy_write(phydev, MII_BMCR, temp);
-
if ((phydev->interface == PHY_INTERFACE_MODE_RGMII) ||
(phydev->interface == PHY_INTERFACE_MODE_RGMII_ID) ||
(phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID) ||
unsigned char addr[FLT_EXACT_COUNT][ETH_ALEN];
};
-/* 1024 is probably a high enough limit: modern hypervisors seem to support on
- * the order of 100-200 CPUs so this leaves us some breathing space if we want
- * to match a queue per guest CPU.
- */
-#define MAX_TAP_QUEUES 1024
+/* DEFAULT_MAX_NUM_RSS_QUEUES was chosen to make the rx/tx queues allocated
+ * for the netdevice fit in one page, so that the memory allocation is
+ * guaranteed to succeed. TODO: increase the limit. */
+#define MAX_TAP_QUEUES DEFAULT_MAX_NUM_RSS_QUEUES
+#define MAX_TAP_FLOWS 4096
#define TUN_FLOW_EXPIRE (3 * HZ)
unsigned long ageing_time;
unsigned int numdisabled;
struct list_head disabled;
+ void *security;
+ u32 flow_count;
};
static inline u32 tun_hashfn(u32 rxhash)
e->queue_index = queue_index;
e->tun = tun;
hlist_add_head_rcu(&e->hash_link, head);
+ ++tun->flow_count;
}
return e;
}
e->rxhash, e->queue_index);
hlist_del_rcu(&e->hash_link);
kfree_rcu(e, rcu);
+ --tun->flow_count;
}
static void tun_flow_flush(struct tun_struct *tun)
}
static void tun_flow_update(struct tun_struct *tun, u32 rxhash,
- u16 queue_index)
+ struct tun_file *tfile)
{
struct hlist_head *head;
struct tun_flow_entry *e;
unsigned long delay = tun->ageing_time;
+ u16 queue_index = tfile->queue_index;
if (!rxhash)
return;
rcu_read_lock();
- if (tun->numqueues == 1)
+ /* There is a very small possibility of out-of-order delivery during a
+ * queue switch; it is not worth optimizing for. */
+ if (tun->numqueues == 1 || tfile->detached)
goto unlock;
e = tun_flow_find(head, rxhash);
e->updated = jiffies;
} else {
spin_lock_bh(&tun->lock);
- if (!tun_flow_find(head, rxhash))
+ if (!tun_flow_find(head, rxhash) &&
+ tun->flow_count < MAX_TAP_FLOWS)
tun_flow_create(tun, head, rxhash, queue_index);
if (!timer_pending(&tun->flow_gc_timer))
tun = rtnl_dereference(tfile->tun);
- if (tun) {
+ if (tun && !tfile->detached) {
u16 index = tfile->queue_index;
BUG_ON(index >= tun->numqueues);
dev = tun->dev;
rcu_assign_pointer(tun->tfiles[index],
tun->tfiles[tun->numqueues - 1]);
- rcu_assign_pointer(tfile->tun, NULL);
ntfile = rtnl_dereference(tun->tfiles[index]);
ntfile->queue_index = index;
--tun->numqueues;
- if (clean)
+ if (clean) {
+ rcu_assign_pointer(tfile->tun, NULL);
sock_put(&tfile->sk);
- else
+ } else
tun_disable_queue(tun, tfile);
synchronize_net();
}
if (clean) {
- if (tun && tun->numqueues == 0 && tun->numdisabled == 0 &&
- !(tun->flags & TUN_PERSIST))
- if (tun->dev->reg_state == NETREG_REGISTERED)
+ if (tun && tun->numqueues == 0 && tun->numdisabled == 0) {
+ netif_carrier_off(tun->dev);
+
+ if (!(tun->flags & TUN_PERSIST) &&
+ tun->dev->reg_state == NETREG_REGISTERED)
unregister_netdevice(tun->dev);
+ }
BUG_ON(!test_bit(SOCK_EXTERNALLY_ALLOCATED,
&tfile->socket.flags));
rcu_assign_pointer(tfile->tun, NULL);
--tun->numqueues;
}
+ list_for_each_entry(tfile, &tun->disabled, next) {
+ wake_up_all(&tfile->wq.wait);
+ rcu_assign_pointer(tfile->tun, NULL);
+ }
BUG_ON(tun->numqueues != 0);
synchronize_net();
struct tun_file *tfile = file->private_data;
int err;
+ err = security_tun_dev_attach(tfile->socket.sk, tun->security);
+ if (err < 0)
+ goto out;
+
err = -EINVAL;
- if (rtnl_dereference(tfile->tun))
+ if (rtnl_dereference(tfile->tun) && !tfile->detached)
goto out;
err = -EBUSY;
tun->dev->stats.rx_packets++;
tun->dev->stats.rx_bytes += len;
- tun_flow_update(tun, rxhash, tfile->queue_index);
+ tun_flow_update(tun, rxhash, tfile);
return total_len;
}
BUG_ON(!(list_empty(&tun->disabled)));
tun_flow_uninit(tun);
+ security_tun_dev_free_security(tun->security);
free_netdev(dev);
}
if (tun_not_capable(tun))
return -EPERM;
- err = security_tun_dev_attach(tfile->socket.sk);
+ err = security_tun_dev_open(tun->security);
if (err < 0)
return err;
else {
char *name;
unsigned long flags = 0;
+ int queues = ifr->ifr_flags & IFF_MULTI_QUEUE ?
+ MAX_TAP_QUEUES : 1;
if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
return -EPERM;
name = ifr->ifr_name;
dev = alloc_netdev_mqs(sizeof(struct tun_struct), name,
- tun_setup,
- MAX_TAP_QUEUES, MAX_TAP_QUEUES);
+ tun_setup, queues, queues);
+
if (!dev)
return -ENOMEM;
spin_lock_init(&tun->lock);
- security_tun_dev_post_create(&tfile->sk);
+ err = security_tun_dev_alloc_security(&tun->security);
+ if (err < 0)
+ goto err_free_dev;
tun_net_init(dev);
device_create_file(&tun->dev->dev, &dev_attr_owner) ||
device_create_file(&tun->dev->dev, &dev_attr_group))
pr_err("Failed to create tun sysfs files\n");
-
- netif_carrier_on(tun->dev);
}
+ netif_carrier_on(tun->dev);
+
tun_debug(KERN_INFO, tun, "tun_set_iff\n");
if (ifr->ifr_flags & IFF_NO_PI)
if (ifr->ifr_flags & IFF_ATTACH_QUEUE) {
tun = tfile->detached;
- if (!tun)
+ if (!tun) {
ret = -EINVAL;
- else
- ret = tun_attach(tun, file);
+ goto unlock;
+ }
+ ret = security_tun_dev_attach_queue(tun->security);
+ if (ret < 0)
+ goto unlock;
+ ret = tun_attach(tun, file);
} else if (ifr->ifr_flags & IFF_DETACH_QUEUE) {
tun = rtnl_dereference(tfile->tun);
- if (!tun || !(tun->flags & TUN_TAP_MQ))
+ if (!tun || !(tun->flags & TUN_TAP_MQ) || tfile->detached)
ret = -EINVAL;
else
__tun_detach(tfile, false);
} else
ret = -EINVAL;
+unlock:
rtnl_unlock();
return ret;
}
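Taken together, the security hook changes in this series give the TUN driver
the following lifecycle (call sites as visible in the hunks above):

	tun_set_iff(), new device      -> security_tun_dev_alloc_security()
	tun_set_iff(), existing device -> security_tun_dev_open()
	tun_attach()                   -> security_tun_dev_attach()
	TUNSETQUEUE + IFF_ATTACH_QUEUE -> security_tun_dev_attach_queue()
	tun_free_netdev()              -> security_tun_dev_free_security()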
.tx_fixup = cdc_mbim_tx_fixup,
};
+/* MBIM and NCM devices should not need a ZLP after NTBs with
+ * dwNtbOutMaxSize length. This driver_info is for the exceptional
+ * devices requiring it anyway, allowing them to be supported without
+ * forcing the performance penalty on all the sane devices.
+ */
+static const struct driver_info cdc_mbim_info_zlp = {
+ .description = "CDC MBIM",
+ .flags = FLAG_NO_SETINT | FLAG_MULTI_PACKET | FLAG_WWAN | FLAG_SEND_ZLP,
+ .bind = cdc_mbim_bind,
+ .unbind = cdc_mbim_unbind,
+ .manage_power = cdc_mbim_manage_power,
+ .rx_fixup = cdc_mbim_rx_fixup,
+ .tx_fixup = cdc_mbim_tx_fixup,
+};
+
static const struct usb_device_id mbim_devs[] = {
/* This duplicate NCM entry is intentional. MBIM devices can
* be disguised as NCM by default, and this is necessary to
{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&cdc_mbim_info,
},
+ /* Sierra Wireless MC7710 need ZLPs */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x1199, 0x68a2, USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&cdc_mbim_info_zlp,
+ },
{ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MBIM, USB_CDC_PROTO_NONE),
.driver_info = (unsigned long)&cdc_mbim_info,
},
len -= temp;
}
+ /* some buggy devices have an IAD but no CDC Union */
+ if (!ctx->union_desc && intf->intf_assoc && intf->intf_assoc->bInterfaceCount == 2) {
+ ctx->control = intf;
+ ctx->data = usb_ifnum_to_if(dev->udev, intf->cur_altsetting->desc.bInterfaceNumber + 1);
+ dev_dbg(&intf->dev, "CDC Union missing - got slave from IAD\n");
+ }
+
/* check if we got everything */
if ((ctx->control == NULL) || (ctx->data == NULL) ||
((!ctx->mbim_desc) && ((ctx->ether_desc == NULL) || (ctx->control != intf))))
error2:
usb_set_intfdata(ctx->control, NULL);
usb_set_intfdata(ctx->data, NULL);
- usb_driver_release_interface(driver, ctx->data);
+ if (ctx->data != ctx->control)
+ usb_driver_release_interface(driver, ctx->data);
error:
cdc_ncm_free((struct cdc_ncm_ctx *)dev->data[0]);
dev->data[0] = 0;
.tx_fixup = cdc_ncm_tx_fixup,
};
+/* Same as wwan_info, but with FLAG_NOARP */
+static const struct driver_info wwan_noarp_info = {
+ .description = "Mobile Broadband Network Device (NO ARP)",
+ .flags = FLAG_POINTTOPOINT | FLAG_NO_SETINT | FLAG_MULTI_PACKET
+ | FLAG_WWAN | FLAG_NOARP,
+ .bind = cdc_ncm_bind,
+ .unbind = cdc_ncm_unbind,
+ .check_connect = cdc_ncm_check_connect,
+ .manage_power = usbnet_manage_power,
+ .status = cdc_ncm_status,
+ .rx_fixup = cdc_ncm_rx_fixup,
+ .tx_fixup = cdc_ncm_tx_fixup,
+};
+
static const struct usb_device_id cdc_devs[] = {
/* Ericsson MBM devices like F5521gw */
{ .match_flags = USB_DEVICE_ID_MATCH_INT_INFO
{ USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x46),
.driver_info = (unsigned long)&wwan_info,
},
+ { USB_VENDOR_AND_INTERFACE_INFO(0x12d1, 0xff, 0x02, 0x76),
+ .driver_info = (unsigned long)&wwan_info,
+ },
+
+ /* Infineon(now Intel) HSPA Modem platform */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x1519, 0x0443,
+ USB_CLASS_COMM,
+ USB_CDC_SUBCLASS_NCM, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&wwan_noarp_info,
+ },
/* Generic CDC-NCM devices */
{ USB_INTERFACE_INFO(USB_CLASS_COMM,
#define DM_MCAST_ADDR 0x16 /* 8 bytes */
#define DM_GPR_CTRL 0x1e
#define DM_GPR_DATA 0x1f
+#define DM_CHIP_ID 0x2c
+#define DM_MODE_CTRL 0x91 /* only on dm9620 */
+
+/* chip id values */
+#define ID_DM9601 0
+#define ID_DM9620 1
#define DM_MAX_MCAST 64
#define DM_MCAST_SIZE 8
#define DM_RX_OVERHEAD 7 /* 3 byte header + 4 byte crc tail */
#define DM_TIMEOUT 1000
-
static int dm_read(struct usbnet *dev, u8 reg, u16 length, void *data)
{
int err;
static int dm_write_reg(struct usbnet *dev, u8 reg, u8 value)
{
- return usbnet_write_cmd(dev, DM_WRITE_REGS,
+ return usbnet_write_cmd(dev, DM_WRITE_REG,
USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
value, reg, NULL, 0);
}
-static void dm_write_async_helper(struct usbnet *dev, u8 reg, u8 value,
- u16 length, void *data)
+static void dm_write_async(struct usbnet *dev, u8 reg, u16 length, void *data)
{
usbnet_write_cmd_async(dev, DM_WRITE_REGS,
USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
- value, reg, data, length);
-}
-
-static void dm_write_async(struct usbnet *dev, u8 reg, u16 length, void *data)
-{
- netdev_dbg(dev->net, "dm_write_async() reg=0x%02x length=%d\n", reg, length);
-
- dm_write_async_helper(dev, reg, 0, length, data);
+ 0, reg, data, length);
}
static void dm_write_reg_async(struct usbnet *dev, u8 reg, u8 value)
{
- netdev_dbg(dev->net, "dm_write_reg_async() reg=0x%02x value=0x%02x\n",
- reg, value);
-
- dm_write_async_helper(dev, reg, value, 0, NULL);
+ usbnet_write_cmd_async(dev, DM_WRITE_REG,
+ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ value, reg, NULL, 0);
}
static int dm_read_shared_word(struct usbnet *dev, int phy, u8 reg, __le16 *value)
static int dm9601_bind(struct usbnet *dev, struct usb_interface *intf)
{
int ret;
- u8 mac[ETH_ALEN];
+ u8 mac[ETH_ALEN], id;
ret = usbnet_get_endpoints(dev, intf);
if (ret)
__dm9601_set_mac_address(dev);
}
+ if (dm_read_reg(dev, DM_CHIP_ID, &id) < 0) {
+ netdev_err(dev->net, "Error reading chip ID\n");
+ ret = -ENODEV;
+ goto out;
+ }
+
+ /* put dm9620 devices in dm9601 mode */
+ if (id == ID_DM9620) {
+ u8 mode;
+
+ if (dm_read_reg(dev, DM_MODE_CTRL, &mode) < 0) {
+ netdev_err(dev->net, "Error reading MODE_CTRL\n");
+ ret = -ENODEV;
+ goto out;
+ }
+ dm_write_reg(dev, DM_MODE_CTRL, mode & 0x7f);
+ }
+
/* power up phy */
dm_write_reg(dev, DM_GPR_CTRL, 1);
dm_write_reg(dev, DM_GPR_DATA, 0);
USB_DEVICE(0x0a46, 0x9000), /* DM9000E */
.driver_info = (unsigned long)&dm9601_info,
},
+ {
+ USB_DEVICE(0x0a46, 0x9620), /* DM9620 USB to Fast Ethernet Adapter */
+ .driver_info = (unsigned long)&dm9601_info,
+ },
{}, // END
};
USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 1, 57),
.driver_info = (unsigned long)&qmi_wwan_info,
},
+ { /* HUAWEI_INTERFACE_NDIS_CONTROL_QUALCOMM */
+ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 0x01, 0x69),
+ .driver_info = (unsigned long)&qmi_wwan_info,
+ },
/* 2. Combined interface devices matching on class+protocol */
{ /* Huawei E367 and possibly others in "Windows mode" */
USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 1, 17),
.driver_info = (unsigned long)&qmi_wwan_info,
},
+ { /* HUAWEI_NDIS_SINGLE_INTERFACE_VDF */
+ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 0x01, 0x37),
+ .driver_info = (unsigned long)&qmi_wwan_info,
+ },
+ { /* HUAWEI_INTERFACE_NDIS_HW_QUALCOMM */
+ USB_VENDOR_AND_INTERFACE_INFO(HUAWEI_VENDOR_ID, USB_CLASS_VENDOR_SPEC, 0x01, 0x67),
+ .driver_info = (unsigned long)&qmi_wwan_info,
+ },
{ /* Pantech UML290, P4200 and more */
USB_VENDOR_AND_INTERFACE_INFO(0x106c, USB_CLASS_VENDOR_SPEC, 0xf0, 0xff),
.driver_info = (unsigned long)&qmi_wwan_info,
{QMI_FIXED_INTF(0x19d2, 0x0199, 1)}, /* ZTE MF820S */
{QMI_FIXED_INTF(0x19d2, 0x0200, 1)},
{QMI_FIXED_INTF(0x19d2, 0x0257, 3)}, /* ZTE MF821 */
+ {QMI_FIXED_INTF(0x19d2, 0x0265, 4)}, /* ONDA MT8205 4G LTE */
{QMI_FIXED_INTF(0x19d2, 0x0284, 4)}, /* ZTE MF880 */
{QMI_FIXED_INTF(0x19d2, 0x0326, 4)}, /* ZTE MF821D */
{QMI_FIXED_INTF(0x19d2, 0x1008, 4)}, /* ZTE (Vodafone) K3570-Z */
{QMI_FIXED_INTF(0x1199, 0x68a2, 19)}, /* Sierra Wireless MC7710 in QMI mode */
{QMI_FIXED_INTF(0x1199, 0x901c, 8)}, /* Sierra Wireless EM7700 */
{QMI_FIXED_INTF(0x1bbb, 0x011e, 4)}, /* Telekom Speedstick LTE II (Alcatel One Touch L100V LTE) */
+ {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
/* 4. Gobi 1000 devices */
{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
unsigned long lockflags;
size_t size = dev->rx_urb_size;
+ /* prevent rx skb allocation when error ratio is high */
+ if (test_bit(EVENT_RX_KILL, &dev->flags)) {
+ usb_free_urb(urb);
+ return -ENOLINK;
+ }
+
skb = __netdev_alloc_skb_ip_align(dev->net, size, flags);
if (!skb) {
netif_dbg(dev, rx_err, dev->net, "no rx skb\n");
break;
}
+ /* stop rx if packet error rate is high */
+ if (++dev->pkt_cnt > 30) {
+ dev->pkt_cnt = 0;
+ dev->pkt_err = 0;
+ } else {
+ if (state == rx_cleanup)
+ dev->pkt_err++;
+ if (dev->pkt_err > 20)
+ set_bit(EVENT_RX_KILL, &dev->flags);
+ }
+
state = defer_bh(dev, skb, &dev->rxq, state);
if (urb) {
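A minimal standalone model of the error-rate window implemented above; the
thresholds 30 and 20 come from the hunk, the struct is illustrative:

	#include <stdbool.h>

	struct rx_window { int pkt_cnt, pkt_err; bool rx_kill; };

	static void rx_account(struct rx_window *w, bool rx_cleanup_err)
	{
		if (++w->pkt_cnt > 30) {        /* 30-packet window rolls over */
			w->pkt_cnt = 0;
			w->pkt_err = 0;
		} else {
			if (rx_cleanup_err)
				w->pkt_err++;
			if (w->pkt_err > 20)    /* too many errors in window */
				w->rx_kill = true;   /* EVENT_RX_KILL analogue */
		}
	}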
(dev->driver_info->flags & FLAG_FRAMING_AX) ? "ASIX" :
"simple");
+ /* reset rx error state */
+ dev->pkt_cnt = 0;
+ dev->pkt_err = 0;
+ clear_bit(EVENT_RX_KILL, &dev->flags);
+
// delay posting reads until we're fully open
tasklet_schedule (&dev->bh);
if (info->manage_power) {
if (info->tx_fixup) {
skb = info->tx_fixup (dev, skb, GFP_ATOMIC);
if (!skb) {
- if (netif_msg_tx_err(dev)) {
- netif_dbg(dev, tx_err, dev->net, "can't tx_fixup skb\n");
- goto drop;
- } else {
- /* cdc_ncm collected packet; waits for more */
+ /* packet collected; minidriver waiting for more */
+ if (info->flags & FLAG_MULTI_PACKET)
goto not_drop;
- }
+ netif_dbg(dev, tx_err, dev->net, "can't tx_fixup skb\n");
+ goto drop;
}
}
length = skb->len;
}
}
+ /* restart RX again after disabling due to high error rate */
+ clear_bit(EVENT_RX_KILL, &dev->flags);
+
// waiting for all pending urbs to complete?
if (dev->wait) {
if ((dev->txq.qlen + dev->rxq.qlen + dev->done.qlen) == 0) {
if ((dev->driver_info->flags & FLAG_WWAN) != 0)
strcpy(net->name, "wwan%d");
+ /* devices that cannot do ARP */
+ if ((dev->driver_info->flags & FLAG_NOARP) != 0)
+ net->flags |= IFF_NOARP;
+
/* maybe the remote can't receive an Ethernet MTU */
if (net->mtu > (dev->hard_mtu - net->hard_header_len))
net->mtu = dev->hard_mtu - net->hard_header_len;
#include <linux/scatterlist.h>
#include <linux/if_vlan.h>
#include <linux/slab.h>
+#include <linux/cpu.h>
static int napi_weight = 128;
module_param(napi_weight, int, 0444);
/* Is the affinity hint set for the virtqueues? */
bool affinity_hint_set;
+
+ /* Per-cpu variable to show the mapping from CPU to virtqueue */
+ int __percpu *vq_index;
+
+ /* CPU hot plug notifier */
+ struct notifier_block nb;
};
struct skb_vnet_hdr {
return 0;
}
-static void virtnet_set_affinity(struct virtnet_info *vi, bool set)
+static void virtnet_clean_affinity(struct virtnet_info *vi, long hcpu)
{
int i;
+ int cpu;
+
+ if (vi->affinity_hint_set) {
+ for (i = 0; i < vi->max_queue_pairs; i++) {
+ virtqueue_set_affinity(vi->rq[i].vq, -1);
+ virtqueue_set_affinity(vi->sq[i].vq, -1);
+ }
+
+ vi->affinity_hint_set = false;
+ }
+
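+ /* Remap the remaining online CPUs round-robin over the current
+  * queue pairs; the departing CPU (hcpu) is marked -1 so the
+  * select-queue path falls back to txq 0.
+  */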
+ i = 0;
+ for_each_online_cpu(cpu) {
+ if (cpu == hcpu) {
+ *per_cpu_ptr(vi->vq_index, cpu) = -1;
+ } else {
+ *per_cpu_ptr(vi->vq_index, cpu) =
+ ++i % vi->curr_queue_pairs;
+ }
+ }
+}
+
+static void virtnet_set_affinity(struct virtnet_info *vi)
+{
+ int i;
+ int cpu;
/* In multiqueue mode, when the number of CPUs equals the number of
 * queue pairs, we let each queue pair be private to one CPU by
 * setting the affinity hint, eliminating contention.
 */
- if ((vi->curr_queue_pairs == 1 ||
- vi->max_queue_pairs != num_online_cpus()) && set) {
- if (vi->affinity_hint_set)
- set = false;
- else
- return;
+ if (vi->curr_queue_pairs == 1 ||
+ vi->max_queue_pairs != num_online_cpus()) {
+ virtnet_clean_affinity(vi, -1);
+ return;
}
- for (i = 0; i < vi->max_queue_pairs; i++) {
- int cpu = set ? i : -1;
+ i = 0;
+ for_each_online_cpu(cpu) {
virtqueue_set_affinity(vi->rq[i].vq, cpu);
virtqueue_set_affinity(vi->sq[i].vq, cpu);
+ *per_cpu_ptr(vi->vq_index, cpu) = i;
+ i++;
}
- if (set)
- vi->affinity_hint_set = true;
- else
- vi->affinity_hint_set = false;
+ vi->affinity_hint_set = true;
+}
+
+static int virtnet_cpu_callback(struct notifier_block *nfb,
+ unsigned long action, void *hcpu)
+{
+ struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb);
+
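+ /* Rebalance queue affinity as CPUs come and go; on
+  * CPU_DOWN_PREPARE, retire the outgoing CPU's mapping first.
+  */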
+ switch (action & ~CPU_TASKS_FROZEN) {
+ case CPU_ONLINE:
+ case CPU_DOWN_FAILED:
+ case CPU_DEAD:
+ virtnet_set_affinity(vi);
+ break;
+ case CPU_DOWN_PREPARE:
+ virtnet_clean_affinity(vi, (long)hcpu);
+ break;
+ default:
+ break;
+ }
+ return NOTIFY_OK;
}
static void virtnet_get_ringparam(struct net_device *dev,
if (queue_pairs > vi->max_queue_pairs)
return -EINVAL;
+ get_online_cpus();
err = virtnet_set_queues(vi, queue_pairs);
if (!err) {
netif_set_real_num_tx_queues(dev, queue_pairs);
netif_set_real_num_rx_queues(dev, queue_pairs);
- virtnet_set_affinity(vi, true);
+ virtnet_set_affinity(vi);
}
+ put_online_cpus();
return err;
}
/* To avoid contending a lock held by a vcpu that would exit to host, select
 * the txq based on the processor id.
- * TODO: handle cpu hotplug.
*/
static u16 virtnet_select_queue(struct net_device *dev, struct sk_buff *skb)
{
- int txq = skb_rx_queue_recorded(skb) ? skb_get_rx_queue(skb) :
- smp_processor_id();
+ int txq;
+ struct virtnet_info *vi = netdev_priv(dev);
+
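+ /* Prefer the rx queue recorded on the skb; otherwise use this
+  * CPU's cached queue index, falling back to 0 if it is unset.
+  */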
+ if (skb_rx_queue_recorded(skb)) {
+ txq = skb_get_rx_queue(skb);
+ } else {
+ txq = *__this_cpu_ptr(vi->vq_index);
+ if (txq == -1)
+ txq = 0;
+ }
while (unlikely(txq >= dev->real_num_tx_queues))
txq -= dev->real_num_tx_queues;
{
struct virtio_device *vdev = vi->vdev;
- virtnet_set_affinity(vi, false);
+ virtnet_clean_affinity(vi, -1);
vdev->config->del_vqs(vdev);
if (ret)
goto err_free;
- virtnet_set_affinity(vi, true);
+ get_online_cpus();
+ virtnet_set_affinity(vi);
+ put_online_cpus();
+
return 0;
err_free:
if (vi->stats == NULL)
goto free;
+ vi->vq_index = alloc_percpu(int);
+ if (vi->vq_index == NULL)
+ goto free_stats;
+
mutex_init(&vi->config_lock);
vi->config_enable = true;
INIT_WORK(&vi->config_work, virtnet_config_changed_work);
/* Allocate/initialize the rx/tx queues, and invoke find_vqs */
err = init_vqs(vi);
if (err)
- goto free_stats;
+ goto free_index;
netif_set_real_num_tx_queues(dev, 1);
netif_set_real_num_rx_queues(dev, 1);
}
}
+ vi->nb.notifier_call = &virtnet_cpu_callback;
+ err = register_hotcpu_notifier(&vi->nb);
+ if (err) {
+ pr_debug("virtio_net: registering cpu notifier failed\n");
+ goto free_recv_bufs;
+ }
+
/* Assume link up if device can't report link status,
otherwise get link status from config. */
if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) {
free_vqs:
cancel_delayed_work_sync(&vi->refill);
virtnet_del_vqs(vi);
+free_index:
+ free_percpu(vi->vq_index);
free_stats:
free_percpu(vi->stats);
free:
{
struct virtnet_info *vi = vdev->priv;
+ unregister_hotcpu_notifier(&vi->nb);
+
/* Prevent config work handler from accessing the device. */
mutex_lock(&vi->config_lock);
vi->config_enable = false;
flush_work(&vi->config_work);
+ free_percpu(vi->vq_index);
free_percpu(vi->stats);
free_netdev(vi->dev);
}
if (ret & 1) { /* Link is up. */
printk(KERN_INFO "%s: NIC Link is Up %d Mbps\n",
adapter->netdev->name, adapter->link_speed);
- if (!netif_carrier_ok(adapter->netdev))
- netif_carrier_on(adapter->netdev);
+ netif_carrier_on(adapter->netdev);
if (affectTxQueue) {
for (i = 0; i < adapter->num_tx_queues; i++)
} else {
printk(KERN_INFO "%s: NIC Link is Down\n",
adapter->netdev->name);
- if (netif_carrier_ok(adapter->netdev))
- netif_carrier_off(adapter->netdev);
+ netif_carrier_off(adapter->netdev);
if (affectTxQueue) {
for (i = 0; i < adapter->num_tx_queues; i++)
netif_set_real_num_tx_queues(adapter->netdev, adapter->num_tx_queues);
netif_set_real_num_rx_queues(adapter->netdev, adapter->num_rx_queues);
+ netif_carrier_off(netdev);
err = register_netdev(netdev);
if (err) {
AR_PHY_CL_TAB_1,
AR_PHY_CL_TAB_2 };
+ ar9003_hw_set_chain_masks(ah, ah->caps.rx_chainmask, ah->caps.tx_chainmask);
+
if (rtt) {
if (!ar9003_hw_rtt_restore(ah, chan))
run_rtt_cal = true;
ath9k_hw_synth_delay(ah, chan, synthDelay);
}
-static void ar9003_hw_set_chain_masks(struct ath_hw *ah, u8 rx, u8 tx)
+void ar9003_hw_set_chain_masks(struct ath_hw *ah, u8 rx, u8 tx)
{
- switch (rx) {
- case 0x5:
+ if (ah->caps.tx_chainmask == 5 || ah->caps.rx_chainmask == 5)
REG_SET_BIT(ah, AR_PHY_ANALOG_SWAP,
AR_PHY_SWAP_ALT_CHAIN);
- case 0x3:
- case 0x1:
- case 0x2:
- case 0x7:
- REG_WRITE(ah, AR_PHY_RX_CHAINMASK, rx);
- REG_WRITE(ah, AR_PHY_CAL_CHAINMASK, rx);
- break;
- default:
- break;
- }
+
+ REG_WRITE(ah, AR_PHY_RX_CHAINMASK, rx);
+ REG_WRITE(ah, AR_PHY_CAL_CHAINMASK, rx);
if ((ah->caps.hw_caps & ATH9K_HW_CAP_APM) && (tx == 0x7))
- REG_WRITE(ah, AR_SELFGEN_MASK, 0x3);
- else
- REG_WRITE(ah, AR_SELFGEN_MASK, tx);
+ tx = 3;
- if (tx == 0x5) {
- REG_SET_BIT(ah, AR_PHY_ANALOG_SWAP,
- AR_PHY_SWAP_ALT_CHAIN);
- }
+ REG_WRITE(ah, AR_SELFGEN_MASK, tx);
}
/*
u32 *rxlink;
u32 num_pkts;
unsigned int rxfilter;
- spinlock_t rxbuflock;
struct list_head rxbuf;
struct ath_descdma rxdma;
struct ath_buf *rx_bufptr;
int ath_startrecv(struct ath_softc *sc);
bool ath_stoprecv(struct ath_softc *sc);
-void ath_flushrecv(struct ath_softc *sc);
u32 ath_calcrxfilter(struct ath_softc *sc);
int ath_rx_init(struct ath_softc *sc, int nbufs);
void ath_rx_cleanup(struct ath_softc *sc);
enum sc_op_flags {
SC_OP_INVALID,
SC_OP_BEACONS,
- SC_OP_RXFLUSH,
SC_OP_ANI_RUN,
SC_OP_PRIM_STA_VIF,
SC_OP_HW_RESET,
skb->len, DMA_TO_DEVICE);
dev_kfree_skb_any(skb);
bf->bf_buf_addr = 0;
+ bf->bf_mpdu = NULL;
}
skb = ieee80211_beacon_get(hw, vif);
return;
bf = ath9k_beacon_generate(sc->hw, vif);
- WARN_ON(!bf);
if (sc->beacon.bmisscnt != 0) {
ath_dbg(common, BSTUCK, "resume beacon xmit after %u misses\n",
RXS_ERR("RX-LENGTH-ERR", rx_len_err);
RXS_ERR("RX-OOM-ERR", rx_oom_err);
RXS_ERR("RX-RATE-ERR", rx_rate_err);
- RXS_ERR("RX-DROP-RXFLUSH", rx_drop_rxflush);
RXS_ERR("RX-TOO-MANY-FRAGS", rx_too_many_frags_err);
PHY_ERR("UNDERRUN ERR", ATH9K_PHYERR_UNDERRUN);
* @rx_oom_err: No. of frames dropped due to OOM issues.
* @rx_rate_err: No. of frames dropped due to rate errors.
* @rx_too_many_frags_err: Frames dropped due to too-many-frags received.
- * @rx_drop_rxflush: No. of frames dropped due to RX-FLUSH.
* @rx_beacons: No. of beacons received.
* @rx_frags: No. of rx-fragements received.
*/
u32 rx_oom_err;
u32 rx_rate_err;
u32 rx_too_many_frags_err;
- u32 rx_drop_rxflush;
u32 rx_beacons;
u32 rx_frags;
};
endpoint->ep_callbacks.tx(endpoint->ep_callbacks.priv,
skb, htc_hdr->endpoint_id,
txok);
+ } else {
+ kfree_skb(skb);
}
}
int ar9003_paprd_init_table(struct ath_hw *ah);
bool ar9003_paprd_is_done(struct ath_hw *ah);
bool ar9003_is_paprd_enabled(struct ath_hw *ah);
+void ar9003_hw_set_chain_masks(struct ath_hw *ah, u8 rx, u8 tx);
/* Hardware family op attach helpers */
void ar5008_hw_attach_phy_ops(struct ath_hw *ah);
ath_start_ani(sc);
}
-static bool ath_prepare_reset(struct ath_softc *sc, bool retry_tx, bool flush)
+static bool ath_prepare_reset(struct ath_softc *sc, bool retry_tx)
{
struct ath_hw *ah = sc->sc_ah;
bool ret = true;
if (!ath_drain_all_txq(sc, retry_tx))
ret = false;
- if (!flush) {
- if (ah->caps.hw_caps & ATH9K_HW_CAP_EDMA)
- ath_rx_tasklet(sc, 1, true);
- ath_rx_tasklet(sc, 1, false);
- } else {
- ath_flushrecv(sc);
- }
-
return ret;
}
struct ath_common *common = ath9k_hw_common(ah);
struct ath9k_hw_cal_data *caldata = NULL;
bool fastcc = true;
- bool flush = false;
int r;
__ath_cancel_work(sc);
+ tasklet_disable(&sc->intr_tq);
spin_lock_bh(&sc->sc_pcu_lock);
if (!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)) {
if (!hchan) {
fastcc = false;
- flush = true;
hchan = ah->curchan;
}
- if (!ath_prepare_reset(sc, retry_tx, flush))
+ if (!ath_prepare_reset(sc, retry_tx))
fastcc = false;
ath_dbg(common, CONFIG, "Reset to %u MHz, HT40: %d fastcc: %d\n",
out:
spin_unlock_bh(&sc->sc_pcu_lock);
+ tasklet_enable(&sc->intr_tq);
+
return r;
}
ath9k_hw_cfg_gpio_input(ah, ah->led_pin);
}
- ath_prepare_reset(sc, false, true);
+ ath_prepare_reset(sc, false);
if (sc->rx.frag) {
dev_kfree_skb_any(sc->rx.frag);
static bool validate_antenna_mask(struct ath_hw *ah, u32 val)
{
+ if (AR_SREV_9300_20_OR_LATER(ah))
+ return true;
+
switch (val & 0x7) {
case 0x1:
case 0x3:
static void ath_edma_start_recv(struct ath_softc *sc)
{
- spin_lock_bh(&sc->rx.rxbuflock);
-
ath9k_hw_rxena(sc->sc_ah);
ath_rx_addbuffer_edma(sc, ATH9K_RX_QUEUE_HP,
ath_opmode_init(sc);
ath9k_hw_startpcureceive(sc->sc_ah, !!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL));
-
- spin_unlock_bh(&sc->rx.rxbuflock);
}
static void ath_edma_stop_recv(struct ath_softc *sc)
int error = 0;
spin_lock_init(&sc->sc_pcu_lock);
- spin_lock_init(&sc->rx.rxbuflock);
- clear_bit(SC_OP_RXFLUSH, &sc->sc_flags);
common->rx_bufsize = IEEE80211_MAX_MPDU_LEN / 2 +
sc->sc_ah->caps.rx_status_len;
return 0;
}
- spin_lock_bh(&sc->rx.rxbuflock);
if (list_empty(&sc->rx.rxbuf))
goto start_recv;
ath_opmode_init(sc);
ath9k_hw_startpcureceive(ah, !!(sc->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL));
- spin_unlock_bh(&sc->rx.rxbuflock);
-
return 0;
}
+static void ath_flushrecv(struct ath_softc *sc)
+{
+ if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA)
+ ath_rx_tasklet(sc, 1, true);
+ ath_rx_tasklet(sc, 1, false);
+}
+
bool ath_stoprecv(struct ath_softc *sc)
{
struct ath_hw *ah = sc->sc_ah;
bool stopped, reset = false;
- spin_lock_bh(&sc->rx.rxbuflock);
ath9k_hw_abortpcurecv(ah);
ath9k_hw_setrxfilter(ah, 0);
stopped = ath9k_hw_stopdmarecv(ah, &reset);
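+ /* DMA is stopped; drain any frames that already completed. */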
+ ath_flushrecv(sc);
+
if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA)
ath_edma_stop_recv(sc);
else
sc->rx.rxlink = NULL;
- spin_unlock_bh(&sc->rx.rxbuflock);
if (!(ah->ah_flags & AH_UNPLUGGED) &&
unlikely(!stopped)) {
return stopped && !reset;
}
-void ath_flushrecv(struct ath_softc *sc)
-{
- set_bit(SC_OP_RXFLUSH, &sc->sc_flags);
- if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_EDMA)
- ath_rx_tasklet(sc, 1, true);
- ath_rx_tasklet(sc, 1, false);
- clear_bit(SC_OP_RXFLUSH, &sc->sc_flags);
-}
-
static bool ath_beacon_dtim_pending_cab(struct sk_buff *skb)
{
/* Check whether the Beacon frame has DTIM indicating buffered bc/mc */
return NULL;
}
+ list_del(&bf->list);
if (!bf->bf_mpdu)
return bf;
dma_type = DMA_FROM_DEVICE;
qtype = hp ? ATH9K_RX_QUEUE_HP : ATH9K_RX_QUEUE_LP;
- spin_lock_bh(&sc->rx.rxbuflock);
tsf = ath9k_hw_gettsf64(ah);
tsf_lower = tsf & 0xffffffff;
do {
bool decrypt_error = false;
- /* If handling rx interrupt and flush is in progress => exit */
- if (test_bit(SC_OP_RXFLUSH, &sc->sc_flags) && (flush == 0))
- break;
memset(&rs, 0, sizeof(rs));
if (edma)
ath_debug_stat_rx(sc, &rs);
- /*
- * If we're asked to flush receive queue, directly
- * chain it back at the queue without processing it.
- */
- if (test_bit(SC_OP_RXFLUSH, &sc->sc_flags)) {
- RX_STAT_INC(rx_drop_rxflush);
- goto requeue_drop_frag;
- }
-
memset(rxs, 0, sizeof(struct ieee80211_rx_status));
rxs->mactime = (tsf & ~0xffffffffULL) | rs.rs_tstamp;
sc->rx.frag = NULL;
}
requeue:
+ list_add_tail(&bf->list, &sc->rx.rxbuf);
+ if (flush)
+ continue;
+
if (edma) {
- list_add_tail(&bf->list, &sc->rx.rxbuf);
ath_rx_edma_buf_link(sc, qtype);
} else {
- list_move_tail(&bf->list, &sc->rx.rxbuf);
ath_rx_buf_link(sc, bf);
- if (!flush)
- ath9k_hw_rxena(ah);
+ ath9k_hw_rxena(ah);
}
} while (1);
- spin_unlock_bh(&sc->rx.rxbuflock);
-
if (!(ah->imask & ATH9K_INT_RXEOL)) {
ah->imask |= (ATH9K_INT_RXEOL | ATH9K_INT_RXORN);
ath9k_hw_set_interrupts(ah);
#include "debug.h"
#define N_TX_QUEUES 4 /* #tx queues on mac80211<->driver interface */
+#define BRCMS_FLUSH_TIMEOUT 500 /* msec */
/* Flags we support */
#define MAC_FILTERS (FIF_PROMISC_IN_BSS | \
wiphy_rfkill_set_hw_state(wl->pub->ieee_hw->wiphy, blocked);
}
+static bool brcms_tx_flush_completed(struct brcms_info *wl)
+{
+ bool result;
+
+ spin_lock_bh(&wl->lock);
+ result = brcms_c_tx_flush_completed(wl->wlc);
+ spin_unlock_bh(&wl->lock);
+ return result;
+}
+
static void brcms_ops_flush(struct ieee80211_hw *hw, bool drop)
{
struct brcms_info *wl = hw->priv;
+ int ret;
no_printk("%s: drop = %s\n", __func__, drop ? "true" : "false");
- /* wait for packet queue and dma fifos to run empty */
- spin_lock_bh(&wl->lock);
- brcms_c_wait_for_tx_completion(wl->wlc, drop);
- spin_unlock_bh(&wl->lock);
+ ret = wait_event_timeout(wl->tx_flush_wq,
+ brcms_tx_flush_completed(wl),
+ msecs_to_jiffies(BRCMS_FLUSH_TIMEOUT));
+
+ brcms_dbg_mac80211(wl->wlc->hw->d11core,
+ "ret=%d\n", jiffies_to_msecs(ret));
}
static const struct ieee80211_ops brcms_ops = {
done:
spin_unlock_bh(&wl->lock);
+ wake_up(&wl->tx_flush_wq);
}
/*
atomic_set(&wl->callbacks, 0);
+ init_waitqueue_head(&wl->tx_flush_wq);
+
/* setup the bottom half handler */
tasklet_init(&wl->tasklet, brcms_dpc, (unsigned long) wl);
#endif
t->ms = ms;
t->periodic = (bool) periodic;
- t->set = true;
-
- atomic_inc(&t->wl->callbacks);
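+ /* Do not count the same timer twice if it is merely re-armed. */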
+ if (!t->set) {
+ t->set = true;
+ atomic_inc(&t->wl->callbacks);
+ }
ieee80211_queue_delayed_work(hw, &t->dly_wrk, msecs_to_jiffies(ms));
}
spin_lock_bh(&wl->lock);
return blocked;
}
-
-/*
- * precondition: perimeter lock has been acquired
- */
-void brcms_msleep(struct brcms_info *wl, uint ms)
-{
- spin_unlock_bh(&wl->lock);
- msleep(ms);
- spin_lock_bh(&wl->lock);
-}
spinlock_t lock; /* per-device perimeter lock */
spinlock_t isr_lock; /* per-device ISR synchronization lock */
+ /* tx flush */
+ wait_queue_head_t tx_flush_wq;
/* timer related fields */
atomic_t callbacks; /* # outstanding callback functions */
extern void brcms_free_timer(struct brcms_timer *timer);
extern void brcms_add_timer(struct brcms_timer *timer, uint ms, int periodic);
extern bool brcms_del_timer(struct brcms_timer *timer);
-extern void brcms_msleep(struct brcms_info *wl, uint ms);
extern void brcms_dpc(unsigned long data);
extern void brcms_timer(struct brcms_timer *t);
extern void brcms_fatal_error(struct brcms_info *wl);
static bool
brcms_b_txstatus(struct brcms_hardware *wlc_hw, bool bound, bool *fatal)
{
- bool morepending = false;
struct bcma_device *core;
struct tx_status txstatus, *txs;
u32 s1, s2;
txs = &txstatus;
core = wlc_hw->d11core;
*fatal = false;
- s1 = bcma_read32(core, D11REGOFFS(frmtxstatus));
- while (!(*fatal)
- && (s1 & TXS_V)) {
- /* !give others some time to run! */
- if (n >= max_tx_num) {
- morepending = true;
- break;
- }
+ while (n < max_tx_num) {
+ s1 = bcma_read32(core, D11REGOFFS(frmtxstatus));
if (s1 == 0xffffffff) {
brcms_err(core, "wl%d: %s: dead chip\n", wlc_hw->unit,
__func__);
*fatal = true;
return false;
}
- s2 = bcma_read32(core, D11REGOFFS(frmtxstatus2));
+ /* only process when valid */
+ if (!(s1 & TXS_V))
+ break;
+ s2 = bcma_read32(core, D11REGOFFS(frmtxstatus2));
txs->status = s1 & TXS_STATUS_MASK;
txs->frameid = (s1 & TXS_FID_MASK) >> TXS_FID_SHIFT;
txs->sequence = s2 & TXS_SEQ_MASK;
txs->lasttxtime = 0;
*fatal = brcms_c_dotxstatus(wlc_hw->wlc, txs);
-
- s1 = bcma_read32(core, D11REGOFFS(frmtxstatus));
+ if (*fatal)
+ return false;
n++;
}
- if (*fatal)
- return false;
-
- return morepending;
+ return n >= max_tx_num;
}
static void brcms_c_tbtt(struct brcms_c_info *wlc)
return wlc->band->bandunit;
}
-void brcms_c_wait_for_tx_completion(struct brcms_c_info *wlc, bool drop)
+bool brcms_c_tx_flush_completed(struct brcms_c_info *wlc)
{
- int timeout = 20;
int i;
/* Kick DMA to send any pending AMPDU */
for (i = 0; i < ARRAY_SIZE(wlc->hw->di); i++)
if (wlc->hw->di[i])
- dma_txflush(wlc->hw->di[i]);
-
- /* wait for queue and DMA fifos to run dry */
- while (brcms_txpktpendtot(wlc) > 0) {
- brcms_msleep(wlc->wl, 1);
-
- if (--timeout == 0)
- break;
- }
+ dma_kick_tx(wlc->hw->di[i]);
- WARN_ON_ONCE(timeout == 0);
+ return !brcms_txpktpendtot(wlc);
}
void brcms_c_set_beacon_listen_interval(struct brcms_c_info *wlc, u8 interval)
extern void brcms_c_scan_start(struct brcms_c_info *wlc);
extern void brcms_c_scan_stop(struct brcms_c_info *wlc);
extern int brcms_c_get_curband(struct brcms_c_info *wlc);
-extern void brcms_c_wait_for_tx_completion(struct brcms_c_info *wlc,
- bool drop);
extern int brcms_c_set_channel(struct brcms_c_info *wlc, u16 channel);
extern int brcms_c_set_rate_limit(struct brcms_c_info *wlc, u16 srl, u16 lrl);
extern void brcms_c_get_current_rateset(struct brcms_c_info *wlc,
extern int brcms_c_get_tx_power(struct brcms_c_info *wlc);
extern bool brcms_c_check_radio_disabled(struct brcms_c_info *wlc);
extern void brcms_c_mute(struct brcms_c_info *wlc, bool on);
+extern bool brcms_c_tx_flush_completed(struct brcms_c_info *wlc);
#endif /* _BRCM_PUB_H_ */
memset(&il->staging, 0, sizeof(il->staging));
- if (!il->vif) {
+ switch (il->iw_mode) {
+ case NL80211_IFTYPE_UNSPECIFIED:
il->staging.dev_type = RXON_DEV_TYPE_ESS;
- } else if (il->vif->type == NL80211_IFTYPE_STATION) {
+ break;
+ case NL80211_IFTYPE_STATION:
il->staging.dev_type = RXON_DEV_TYPE_ESS;
il->staging.filter_flags = RXON_FILTER_ACCEPT_GRP_MSK;
- } else if (il->vif->type == NL80211_IFTYPE_ADHOC) {
+ break;
+ case NL80211_IFTYPE_ADHOC:
il->staging.dev_type = RXON_DEV_TYPE_IBSS;
il->staging.flags = RXON_FLG_SHORT_PREAMBLE_MSK;
il->staging.filter_flags =
RXON_FILTER_BCON_AWARE_MSK | RXON_FILTER_ACCEPT_GRP_MSK;
- } else {
+ break;
+ default:
IL_ERR("Unsupported interface type %d\n", il->vif->type);
return;
}
EXPORT_SYMBOL(il_mac_add_interface);
static void
-il_teardown_interface(struct il_priv *il, struct ieee80211_vif *vif,
- bool mode_change)
+il_teardown_interface(struct il_priv *il, struct ieee80211_vif *vif)
{
lockdep_assert_held(&il->mutex);
il_force_scan_end(il);
}
- if (!mode_change)
- il_set_mode(il);
-
+ il_set_mode(il);
}
void
WARN_ON(il->vif != vif);
il->vif = NULL;
-
- il_teardown_interface(il, vif, false);
+ il->iw_mode = NL80211_IFTYPE_UNSPECIFIED;
+ il_teardown_interface(il, vif);
memset(il->bssid, 0, ETH_ALEN);
D_MAC80211("leave\n");
}
/* success */
- il_teardown_interface(il, vif, true);
vif->type = newtype;
vif->p2p = false;
- err = il_set_mode(il);
- WARN_ON(err);
- /*
- * We've switched internally, but submitting to the
- * device may have failed for some reason. Mask this
- * error, because otherwise mac80211 will not switch
- * (and set the interface type back) and we'll be
- * out of sync with it.
- */
+ il->iw_mode = newtype;
+ il_teardown_interface(il, vif);
err = 0;
out:
{
u16 status = le16_to_cpu(tx_resp->status.status);
+ info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+
info->status.rates[0].count = tx_resp->failure_frame + 1;
info->flags |= iwl_tx_status_to_mac80211(status);
iwlagn_hwrate_to_tx_control(priv, le32_to_cpu(tx_resp->rate_n_flags),
next_reclaimed = ssn;
}
+ if (tid != IWL_TID_NON_QOS) {
+ priv->tid_data[sta_id][tid].next_reclaimed =
+ next_reclaimed;
+ IWL_DEBUG_TX_REPLY(priv, "Next reclaimed packet:%d\n",
+ next_reclaimed);
+ }
+
iwl_trans_reclaim(priv->trans, txq_id, ssn, &skbs);
iwlagn_check_ratid_empty(priv, sta_id, tid);
if (!is_agg)
iwlagn_non_agg_tx_status(priv, ctx, hdr->addr1);
- /*
- * W/A for FW bug - the seq_ctl isn't updated when the
- * queues are flushed. Fetch it from the packet itself
- */
- if (!is_agg && status == TX_STATUS_FAIL_FIFO_FLUSHED) {
- next_reclaimed = le16_to_cpu(hdr->seq_ctrl);
- next_reclaimed =
- SEQ_TO_SN(next_reclaimed + 0x10);
- }
-
is_offchannel_skb =
(info->flags & IEEE80211_TX_CTL_TX_OFFCHAN);
freed++;
}
- if (tid != IWL_TID_NON_QOS) {
- priv->tid_data[sta_id][tid].next_reclaimed =
- next_reclaimed;
- IWL_DEBUG_TX_REPLY(priv, "Next reclaimed packet:%d\n",
- next_reclaimed);
- }
-
WARN_ON(!is_agg && freed != 1);
/*
struct cfg80211_ssid req_ssid;
int ret, auth_type = 0;
struct cfg80211_bss *bss = NULL;
- u8 is_scanning_required = 0, config_bands = 0;
+ u8 is_scanning_required = 0;
memset(&req_ssid, 0, sizeof(struct cfg80211_ssid));
/* disconnect before try to associate */
mwifiex_deauthenticate(priv, NULL);
- if (channel) {
- if (mode == NL80211_IFTYPE_STATION) {
- if (channel->band == IEEE80211_BAND_2GHZ)
- config_bands = BAND_B | BAND_G | BAND_GN;
- else
- config_bands = BAND_A | BAND_AN;
-
- if (!((config_bands | priv->adapter->fw_bands) &
- ~priv->adapter->fw_bands))
- priv->adapter->config_bands = config_bands;
- }
- }
-
/* As this is new association, clear locally stored
* keys and security related flags */
priv->sec_info.wpa_enabled = false;
if (cfg80211_get_chandef_type(&params->chandef) !=
NL80211_CHAN_NO_HT)
- config_bands |= BAND_GN;
+ config_bands |= BAND_G | BAND_GN;
} else {
if (cfg80211_get_chandef_type(&params->chandef) ==
NL80211_CHAN_NO_HT)
if (pdev) {
card = (struct pcie_service_card *) pci_get_drvdata(pdev);
- if (!card || card->adapter) {
+ if (!card || !card->adapter) {
pr_err("Card or adapter structure is not valid\n");
return 0;
}
dev_err(adapter->dev, "SCAN_RESP: too many AP returned (%d)\n",
scan_rsp->number_of_sets);
ret = -1;
- goto done;
+ goto check_next_scan;
}
bytes_left = le16_to_cpu(scan_rsp->bss_descript_size);
if (!beacon_size || beacon_size > bytes_left) {
bss_info += bytes_left;
bytes_left = 0;
- return -1;
+ ret = -1;
+ goto check_next_scan;
}
/* Initialize the current working beacon pointer for this BSS
dev_err(priv->adapter->dev,
"%s: bytes left < IE length\n",
__func__);
- goto done;
+ goto check_next_scan;
}
if (element_id == WLAN_EID_DS_PARAMS) {
channel = *(current_ptr + sizeof(struct ieee_types_header));
}
}
+check_next_scan:
spin_lock_irqsave(&adapter->scan_pending_q_lock, flags);
if (list_empty(&adapter->scan_pending_q)) {
spin_unlock_irqrestore(&adapter->scan_pending_q_lock, flags);
}
}
-done:
return ret;
}
if (ret)
goto done;
+ if (bss_desc) {
+ u8 config_bands = 0;
+
+ if (mwifiex_band_to_radio_type((u8) bss_desc->bss_band)
+ == HostCmd_SCAN_RADIO_TYPE_BG)
+ config_bands = BAND_B | BAND_G | BAND_GN;
+ else
+ config_bands = BAND_A | BAND_AN;
+
+ if (!((config_bands | adapter->fw_bands) &
+ ~adapter->fw_bands))
+ adapter->config_bands = config_bands;
+ }
+
ret = mwifiex_check_network_compatibility(priv, bss_desc);
if (ret)
goto done;
config RTLWIFI
tristate
- depends on RTL8192CE || RTL8192CU || RTL8192SE || RTL8192DE
+ depends on RTL8192CE || RTL8192CU || RTL8192SE || RTL8192DE || RTL8723AE
default m
config RTLWIFI_DEBUG
bool "Additional debugging output"
- depends on RTL8192CE || RTL8192CU || RTL8192SE || RTL8192DE
+ depends on RTL8192CE || RTL8192CU || RTL8192SE || RTL8192DE || RTL8723AE
default y
config RTL8192C_COMMON
is_tx ? "Tx" : "Rx");
if (is_tx) {
- rtl_lps_leave(hw);
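+ /* leaving LPS may sleep; defer it to process context */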
+ schedule_work(&rtlpriv->
+ works.lps_leave_work);
ppsc->last_delaylps_stamp_jiffies =
jiffies;
}
}
} else if (ETH_P_ARP == ether_type) {
if (is_tx) {
- rtl_lps_leave(hw);
+ schedule_work(&rtlpriv->works.lps_leave_work);
ppsc->last_delaylps_stamp_jiffies = jiffies;
}
"802.1X %s EAPOL pkt!!\n", is_tx ? "Tx" : "Rx");
if (is_tx) {
- rtl_lps_leave(hw);
+ schedule_work(&rtlpriv->works.lps_leave_work);
ppsc->last_delaylps_stamp_jiffies = jiffies;
}
WARN_ON(skb_queue_empty(&rx_queue));
while (!skb_queue_empty(&rx_queue)) {
_skb = skb_dequeue(&rx_queue);
- _rtl_usb_rx_process_agg(hw, skb);
- ieee80211_rx_irqsafe(hw, skb);
+ _rtl_usb_rx_process_agg(hw, _skb);
+ ieee80211_rx_irqsafe(hw, _skb);
}
}
/* Notify xenvif that ring now has space to send an skb to the frontend */
void xenvif_notify_tx_completion(struct xenvif *vif);
+/* Prevent the device from generating any further traffic. */
+void xenvif_carrier_off(struct xenvif *vif);
+
/* Returns number of ring slots required to send an skb to the frontend */
unsigned int xen_netbk_count_skb_slots(struct xenvif *vif, struct sk_buff *skb);
return err;
}
-void xenvif_disconnect(struct xenvif *vif)
+void xenvif_carrier_off(struct xenvif *vif)
{
struct net_device *dev = vif->dev;
- if (netif_carrier_ok(dev)) {
- rtnl_lock();
- netif_carrier_off(dev); /* discard queued packets */
- if (netif_running(dev))
- xenvif_down(vif);
- rtnl_unlock();
- xenvif_put(vif);
- }
+
+ rtnl_lock();
+ netif_carrier_off(dev); /* discard queued packets */
+ if (netif_running(dev))
+ xenvif_down(vif);
+ rtnl_unlock();
+ xenvif_put(vif);
+}
+
+void xenvif_disconnect(struct xenvif *vif)
+{
+ if (netif_carrier_ok(vif->dev))
+ xenvif_carrier_off(vif);
atomic_dec(&vif->refcnt);
wait_event(vif->waiting_to_free, atomic_read(&vif->refcnt) == 0);
atomic_dec(&netbk->netfront_count);
}
-static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx);
+static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
+ u8 status);
static void make_tx_response(struct xenvif *vif,
struct xen_netif_tx_request *txp,
s8 st);
do {
make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
- if (cons >= end)
+ if (cons == end)
break;
txp = RING_GET_REQUEST(&vif->tx, cons++);
} while (1);
xenvif_put(vif);
}
+static void netbk_fatal_tx_err(struct xenvif *vif)
+{
+ netdev_err(vif->dev, "fatal error; disabling device\n");
+ xenvif_carrier_off(vif);
+ xenvif_put(vif);
+}
+
static int netbk_count_requests(struct xenvif *vif,
struct xen_netif_tx_request *first,
struct xen_netif_tx_request *txp,
do {
if (frags >= work_to_do) {
- netdev_dbg(vif->dev, "Need more frags\n");
+ netdev_err(vif->dev, "Need more frags\n");
+ netbk_fatal_tx_err(vif);
return -frags;
}
if (unlikely(frags >= MAX_SKB_FRAGS)) {
- netdev_dbg(vif->dev, "Too many frags\n");
+ netdev_err(vif->dev, "Too many frags\n");
+ netbk_fatal_tx_err(vif);
return -frags;
}
memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + frags),
sizeof(*txp));
if (txp->size > first->size) {
- netdev_dbg(vif->dev, "Frags galore\n");
+ netdev_err(vif->dev, "Frag is bigger than frame.\n");
+ netbk_fatal_tx_err(vif);
return -frags;
}
frags++;
if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
- netdev_dbg(vif->dev, "txp->offset: %x, size: %u\n",
+ netdev_err(vif->dev, "txp->offset: %x, size: %u\n",
txp->offset, txp->size);
+ netbk_fatal_tx_err(vif);
return -frags;
}
} while ((txp++)->flags & XEN_NETTXF_more_data);
pending_idx = netbk->pending_ring[index];
page = xen_netbk_alloc_page(netbk, skb, pending_idx);
if (!page)
- return NULL;
+ goto err;
gop->source.u.ref = txp->gref;
gop->source.domid = vif->domid;
}
return gop;
+err:
+ /* Unwind, freeing all pages and sending error responses. */
+ while (i-- > start) {
+ xen_netbk_idx_release(netbk, frag_get_pending_idx(&frags[i]),
+ XEN_NETIF_RSP_ERROR);
+ }
+ /* The head too, if necessary. */
+ if (start)
+ xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR);
+
+ return NULL;
}
static int xen_netbk_tx_check_gop(struct xen_netbk *netbk,
{
struct gnttab_copy *gop = *gopp;
u16 pending_idx = *((u16 *)skb->data);
- struct pending_tx_info *pending_tx_info = netbk->pending_tx_info;
- struct xenvif *vif = pending_tx_info[pending_idx].vif;
- struct xen_netif_tx_request *txp;
struct skb_shared_info *shinfo = skb_shinfo(skb);
int nr_frags = shinfo->nr_frags;
int i, err, start;
/* Check status of header. */
err = gop->status;
- if (unlikely(err)) {
- pending_ring_idx_t index;
- index = pending_index(netbk->pending_prod++);
- txp = &pending_tx_info[pending_idx].req;
- make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
- netbk->pending_ring[index] = pending_idx;
- xenvif_put(vif);
- }
+ if (unlikely(err))
+ xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR);
/* Skip first skb fragment if it is on same page as header fragment. */
start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
for (i = start; i < nr_frags; i++) {
int j, newerr;
- pending_ring_idx_t index;
pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
if (likely(!newerr)) {
/* Had a previous error? Invalidate this fragment. */
if (unlikely(err))
- xen_netbk_idx_release(netbk, pending_idx);
+ xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
continue;
}
/* Error on this fragment: respond to client with an error. */
- txp = &netbk->pending_tx_info[pending_idx].req;
- make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
- index = pending_index(netbk->pending_prod++);
- netbk->pending_ring[index] = pending_idx;
- xenvif_put(vif);
+ xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_ERROR);
/* Not the first error? Preceding frags already invalidated. */
if (err)
/* First error: invalidate header and preceding fragments. */
pending_idx = *((u16 *)skb->data);
- xen_netbk_idx_release(netbk, pending_idx);
+ xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
for (j = start; j < i; j++) {
pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
- xen_netbk_idx_release(netbk, pending_idx);
+ xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
}
/* Remember the error: invalidate all subsequent fragments. */
/* Take an extra reference to offset xen_netbk_idx_release */
get_page(netbk->mmap_pages[pending_idx]);
- xen_netbk_idx_release(netbk, pending_idx);
+ xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
}
}
do {
if (unlikely(work_to_do-- <= 0)) {
- netdev_dbg(vif->dev, "Missing extra info\n");
+ netdev_err(vif->dev, "Missing extra info\n");
+ netbk_fatal_tx_err(vif);
return -EBADR;
}
if (unlikely(!extra.type ||
extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
vif->tx.req_cons = ++cons;
- netdev_dbg(vif->dev,
+ netdev_err(vif->dev,
"Invalid extra type: %d\n", extra.type);
+ netbk_fatal_tx_err(vif);
return -EINVAL;
}
struct xen_netif_extra_info *gso)
{
if (!gso->u.gso.size) {
- netdev_dbg(vif->dev, "GSO size must not be zero.\n");
+ netdev_err(vif->dev, "GSO size must not be zero.\n");
+ netbk_fatal_tx_err(vif);
return -EINVAL;
}
/* Currently only TCPv4 S.O. is supported. */
if (gso->u.gso.type != XEN_NETIF_GSO_TYPE_TCPV4) {
- netdev_dbg(vif->dev, "Bad GSO type %d.\n", gso->u.gso.type);
+ netdev_err(vif->dev, "Bad GSO type %d.\n", gso->u.gso.type);
+ netbk_fatal_tx_err(vif);
return -EINVAL;
}
/* Get a netif from the list with work to do. */
vif = poll_net_schedule_list(netbk);
+ /* This can sometimes happen because the test of
+ * list_empty(net_schedule_list) at the top of the
+ * loop is unlocked. Just go back and have another
+ * look.
+ */
if (!vif)
continue;
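+ /* A frontend whose req_prod runs more than a full ring ahead
+  * of req_cons is buggy or malicious; treat it as fatal.
+  */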
+ if (vif->tx.sring->req_prod - vif->tx.req_cons >
+ XEN_NETIF_TX_RING_SIZE) {
+ netdev_err(vif->dev,
+ "Impossible number of requests. "
+ "req_prod %d, req_cons %d, size %ld\n",
+ vif->tx.sring->req_prod, vif->tx.req_cons,
+ XEN_NETIF_TX_RING_SIZE);
+ netbk_fatal_tx_err(vif);
+ continue;
+ }
+
RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, work_to_do);
if (!work_to_do) {
xenvif_put(vif);
work_to_do = xen_netbk_get_extras(vif, extras,
work_to_do);
idx = vif->tx.req_cons;
- if (unlikely(work_to_do < 0)) {
- netbk_tx_err(vif, &txreq, idx);
+ if (unlikely(work_to_do < 0))
continue;
- }
}
ret = netbk_count_requests(vif, &txreq, txfrags, work_to_do);
- if (unlikely(ret < 0)) {
- netbk_tx_err(vif, &txreq, idx - ret);
+ if (unlikely(ret < 0))
continue;
- }
+
idx += ret;
if (unlikely(txreq.size < ETH_HLEN)) {
/* No crossing a page as the payload mustn't fragment. */
if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
- netdev_dbg(vif->dev,
+ netdev_err(vif->dev,
"txreq.offset: %x, size: %u, end: %lu\n",
txreq.offset, txreq.size,
(txreq.offset&~PAGE_MASK) + txreq.size);
- netbk_tx_err(vif, &txreq, idx);
+ netbk_fatal_tx_err(vif);
continue;
}
gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
if (netbk_set_skb_gso(vif, skb, gso)) {
+ /* Failure in netbk_set_skb_gso is fatal. */
kfree_skb(skb);
- netbk_tx_err(vif, &txreq, idx);
continue;
}
}
txp->size -= data_len;
} else {
/* Schedule a response immediately. */
- xen_netbk_idx_release(netbk, pending_idx);
+ xen_netbk_idx_release(netbk, pending_idx, XEN_NETIF_RSP_OKAY);
}
if (txp->flags & XEN_NETTXF_csum_blank)
xen_netbk_tx_submit(netbk);
}
-static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx)
+static void xen_netbk_idx_release(struct xen_netbk *netbk, u16 pending_idx,
+ u8 status)
{
struct xenvif *vif;
struct pending_tx_info *pending_tx_info;
vif = pending_tx_info->vif;
- make_tx_response(vif, &pending_tx_info->req, XEN_NETIF_RSP_OKAY);
+ make_tx_response(vif, &pending_tx_info->req, status);
index = pending_index(netbk->pending_prod++);
netbk->pending_ring[index] = pending_idx;
extern int pciehp_poll_time;
extern bool pciehp_debug;
extern bool pciehp_force;
-extern struct workqueue_struct *pciehp_wq;
#define dbg(format, arg...) \
do { \
struct hotplug_slot *hotplug_slot;
struct delayed_work work; /* work for button event */
struct mutex lock;
+ struct workqueue_struct *wq;
};
struct event_info {
bool pciehp_poll_mode;
int pciehp_poll_time;
bool pciehp_force;
-struct workqueue_struct *pciehp_wq;
#define DRIVER_VERSION "0.4"
#define DRIVER_AUTHOR "Dan Zink <dan.zink@compaq.com>, Greg Kroah-Hartman <greg@kroah.com>, Dely Sy <dely.l.sy@intel.com>"
{
int retval = 0;
- pciehp_wq = alloc_workqueue("pciehp", 0, 0);
- if (!pciehp_wq)
- return -ENOMEM;
-
pciehp_firmware_init();
retval = pcie_port_service_register(&hpdriver_portdrv);
dbg("pcie_port_service_register = %d\n", retval);
info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
- if (retval) {
- destroy_workqueue(pciehp_wq);
+ if (retval)
dbg("Failure to register service\n");
- }
+
return retval;
}
{
dbg("unload_pciehpd()\n");
pcie_port_service_unregister(&hpdriver_portdrv);
- destroy_workqueue(pciehp_wq);
info(DRIVER_DESC " version: " DRIVER_VERSION " unloaded\n");
}
info->p_slot = p_slot;
INIT_WORK(&info->work, interrupt_event_handler);
- queue_work(pciehp_wq, &info->work);
+ queue_work(p_slot->wq, &info->work);
return 0;
}
kfree(info);
goto out;
}
- queue_work(pciehp_wq, &info->work);
+ queue_work(p_slot->wq, &info->work);
out:
mutex_unlock(&p_slot->lock);
}
if (ATTN_LED(ctrl))
pciehp_set_attention_status(p_slot, 0);
- queue_delayed_work(pciehp_wq, &p_slot->work, 5*HZ);
+ queue_delayed_work(p_slot->wq, &p_slot->work, 5*HZ);
break;
case BLINKINGOFF_STATE:
case BLINKINGON_STATE:
else
p_slot->state = POWERON_STATE;
- queue_work(pciehp_wq, &info->work);
+ queue_work(p_slot->wq, &info->work);
}
static void interrupt_event_handler(struct work_struct *work)
static int pcie_init_slot(struct controller *ctrl)
{
struct slot *slot;
+ char name[32];
slot = kzalloc(sizeof(*slot), GFP_KERNEL);
if (!slot)
return -ENOMEM;
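+ /* dedicated per-slot workqueue for this slot's hotplug events */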
+ snprintf(name, sizeof(name), "pciehp-%u", PSN(ctrl));
+ slot->wq = alloc_workqueue(name, 0, 0);
+ if (!slot->wq)
+ goto abort;
+
slot->ctrl = ctrl;
mutex_init(&slot->lock);
INIT_DELAYED_WORK(&slot->work, pciehp_queue_pushbutton_work);
ctrl->slot = slot;
return 0;
+abort:
+ kfree(slot);
+ return -ENOMEM;
}
static void pcie_cleanup_slot(struct controller *ctrl)
{
struct slot *slot = ctrl->slot;
cancel_delayed_work(&slot->work);
- flush_workqueue(pciehp_wq);
+ destroy_workqueue(slot->wq);
kfree(slot);
}
extern bool shpchp_poll_mode;
extern int shpchp_poll_time;
extern bool shpchp_debug;
-extern struct workqueue_struct *shpchp_wq;
-extern struct workqueue_struct *shpchp_ordered_wq;
#define dbg(format, arg...) \
do { \
struct list_head slot_list;
struct delayed_work work; /* work for button event */
struct mutex lock;
+ struct workqueue_struct *wq;
u8 hp_slot;
};
bool shpchp_debug;
bool shpchp_poll_mode;
int shpchp_poll_time;
-struct workqueue_struct *shpchp_wq;
-struct workqueue_struct *shpchp_ordered_wq;
#define DRIVER_VERSION "0.4"
#define DRIVER_AUTHOR "Dan Zink <dan.zink@compaq.com>, Greg Kroah-Hartman <greg@kroah.com>, Dely Sy <dely.l.sy@intel.com>"
slot->device = ctrl->slot_device_offset + i;
slot->hpc_ops = ctrl->hpc_ops;
slot->number = ctrl->first_slot + (ctrl->slot_num_inc * i);
+
+ snprintf(name, sizeof(name), "shpchp-%d", slot->number);
+ slot->wq = alloc_workqueue(name, 0, 0);
+ if (!slot->wq) {
+ retval = -ENOMEM;
+ goto error_info;
+ }
+
mutex_init(&slot->lock);
INIT_DELAYED_WORK(&slot->work, shpchp_queue_pushbutton_work);
if (retval) {
ctrl_err(ctrl, "pci_hp_register failed with error %d\n",
retval);
- goto error_info;
+ goto error_slotwq;
}
get_power_status(hotplug_slot, &info->power_status);
}
return 0;
+error_slotwq:
+ destroy_workqueue(slot->wq);
error_info:
kfree(info);
error_hpslot:
slot = list_entry(tmp, struct slot, slot_list);
list_del(&slot->slot_list);
cancel_delayed_work(&slot->work);
- flush_workqueue(shpchp_wq);
- flush_workqueue(shpchp_ordered_wq);
+ destroy_workqueue(slot->wq);
pci_hp_deregister(slot->hotplug_slot);
}
}
static int __init shpcd_init(void)
{
- int retval = 0;
-
- shpchp_wq = alloc_ordered_workqueue("shpchp", 0);
- if (!shpchp_wq)
- return -ENOMEM;
-
- shpchp_ordered_wq = alloc_ordered_workqueue("shpchp_ordered", 0);
- if (!shpchp_ordered_wq) {
- destroy_workqueue(shpchp_wq);
- return -ENOMEM;
- }
+ int retval;
retval = pci_register_driver(&shpc_driver);
dbg("%s: pci_register_driver = %d\n", __func__, retval);
info(DRIVER_DESC " version: " DRIVER_VERSION "\n");
- if (retval) {
- destroy_workqueue(shpchp_ordered_wq);
- destroy_workqueue(shpchp_wq);
- }
+
return retval;
}
{
dbg("unload_shpchpd()\n");
pci_unregister_driver(&shpc_driver);
- destroy_workqueue(shpchp_ordered_wq);
- destroy_workqueue(shpchp_wq);
info(DRIVER_DESC " version: " DRIVER_VERSION " unloaded\n");
}
info->p_slot = p_slot;
INIT_WORK(&info->work, interrupt_event_handler);
- queue_work(shpchp_wq, &info->work);
+ queue_work(p_slot->wq, &info->work);
return 0;
}
kfree(info);
goto out;
}
- queue_work(shpchp_ordered_wq, &info->work);
+ queue_work(p_slot->wq, &info->work);
out:
mutex_unlock(&p_slot->lock);
}
p_slot->hpc_ops->green_led_blink(p_slot);
p_slot->hpc_ops->set_attention_status(p_slot, 0);
- queue_delayed_work(shpchp_wq, &p_slot->work, 5*HZ);
+ queue_delayed_work(p_slot->wq, &p_slot->work, 5*HZ);
break;
case BLINKINGOFF_STATE:
case BLINKINGON_STATE:
config PCIE_PME
def_bool y
- depends on PCIEPORTBUS && PM_RUNTIME && EXPERIMENTAL && ACPI
+ depends on PCIEPORTBUS && PM_RUNTIME && ACPI
continue;
}
do_recovery(pdev, entry.severity);
+ pci_dev_put(pdev);
}
}
#endif
{
struct pci_dev *child;
+ if (aspm_force)
+ return;
+
/*
* Clear any ASPM setup that the firmware has carried out on this bus
*/
config PINCTRL_SAMSUNG
bool
- depends on OF && GPIOLIB
select PINMUX
select PINCONF
-config PINCTRL_EXYNOS4
- bool "Pinctrl driver data for Exynos4 SoC"
+config PINCTRL_EXYNOS
+ bool "Pinctrl driver data for Samsung EXYNOS SoCs"
depends on OF && GPIOLIB
select PINCTRL_SAMSUNG
obj-$(CONFIG_PINCTRL_U300) += pinctrl-u300.o
obj-$(CONFIG_PINCTRL_COH901) += pinctrl-coh901.o
obj-$(CONFIG_PINCTRL_SAMSUNG) += pinctrl-samsung.o
-obj-$(CONFIG_PINCTRL_EXYNOS4) += pinctrl-exynos.o
+obj-$(CONFIG_PINCTRL_EXYNOS) += pinctrl-exynos.o
obj-$(CONFIG_PINCTRL_EXYNOS5440) += pinctrl-exynos5440.o
obj-$(CONFIG_PINCTRL_XWAY) += pinctrl-xway.o
obj-$(CONFIG_PINCTRL_LANTIQ) += pinctrl-lantiq.o
{
const struct of_device_id *match =
of_match_device(dove_pinctrl_of_match, &pdev->dev);
- pdev->dev.platform_data = match->data;
+ pdev->dev.platform_data = (void *)match->data;
/*
* General MPP Configuration Register is part of pdma registers.
MPP_VAR_FUNCTION(0x5, "sata0", "act", V(0, 1, 1, 1, 1, 0)),
MPP_VAR_FUNCTION(0xb, "lcd", "vsync", V(0, 0, 0, 0, 1, 0))),
MPP_MODE(6,
- MPP_VAR_FUNCTION(0x0, "sysrst", "out", V(1, 1, 1, 1, 1, 1)),
- MPP_VAR_FUNCTION(0x1, "spi", "mosi", V(1, 1, 1, 1, 1, 1)),
- MPP_VAR_FUNCTION(0x2, "ptp", "trig", V(1, 1, 1, 1, 0, 0))),
+ MPP_VAR_FUNCTION(0x1, "sysrst", "out", V(1, 1, 1, 1, 1, 1)),
+ MPP_VAR_FUNCTION(0x2, "spi", "mosi", V(1, 1, 1, 1, 1, 1)),
+ MPP_VAR_FUNCTION(0x3, "ptp", "trig", V(1, 1, 1, 1, 0, 0))),
MPP_MODE(7,
MPP_VAR_FUNCTION(0x0, "gpo", NULL, V(1, 1, 1, 1, 1, 1)),
MPP_VAR_FUNCTION(0x1, "pex", "rsto", V(1, 1, 1, 1, 0, 1)),
{
const struct of_device_id *match =
of_match_device(kirkwood_pinctrl_of_match, &pdev->dev);
- pdev->dev.platform_data = match->data;
+ pdev->dev.platform_data = (void *)match->data;
return mvebu_pinctrl_probe(pdev);
}
}
/* parse the pin numbers listed in the 'samsung,exynos5440-pins' property */
-static int __init exynos5440_pinctrl_parse_dt_pins(struct platform_device *pdev,
+static int exynos5440_pinctrl_parse_dt_pins(struct platform_device *pdev,
struct device_node *cfg_np, unsigned int **pin_list,
unsigned int *npins)
{
* Parse the information about all the available pin groups and pin functions
* from device node of the pin-controller.
*/
-static int __init exynos5440_pinctrl_parse_dt(struct platform_device *pdev,
+static int exynos5440_pinctrl_parse_dt(struct platform_device *pdev,
struct exynos5440_pinctrl_priv_data *priv)
{
struct device *dev = &pdev->dev;
}
/* register the pinctrl interface with the pinctrl subsystem */
-static int __init exynos5440_pinctrl_register(struct platform_device *pdev,
+static int exynos5440_pinctrl_register(struct platform_device *pdev,
struct exynos5440_pinctrl_priv_data *priv)
{
struct device *dev = &pdev->dev;
}
/* register the gpiolib interface with the gpiolib subsystem */
-static int __init exynos5440_gpiolib_register(struct platform_device *pdev,
+static int exynos5440_gpiolib_register(struct platform_device *pdev,
struct exynos5440_pinctrl_priv_data *priv)
{
struct gpio_chip *gc;
}
/* unregister the gpiolib interface with the gpiolib subsystem */
-static int __init exynos5440_gpiolib_unregister(struct platform_device *pdev,
+static int exynos5440_gpiolib_unregister(struct platform_device *pdev,
struct exynos5440_pinctrl_priv_data *priv)
{
int ret = gpiochip_remove(priv->gc);
static void mxs_dt_free_map(struct pinctrl_dev *pctldev,
struct pinctrl_map *map, unsigned num_maps)
{
- int i;
+ u32 i;
for (i = 0; i < num_maps; i++) {
if (map[i].type == PIN_MAP_TYPE_MUX_GROUP)
void __iomem *reg;
u8 bank, shift;
u16 pin;
- int i;
+ u32 i;
for (i = 0; i < g->npins; i++) {
bank = PINID_TO_BANK(g->pins[i]);
void __iomem *reg;
u8 ma, vol, pull, bank, shift;
u16 pin;
- int i;
+ u32 i;
ma = CONFIG_TO_MA(config);
vol = CONFIG_TO_VOL(config);
const char *propname = "fsl,pinmux-ids";
char *group;
int length = strlen(np->name) + SUFFIX_LEN;
- int i;
- u32 val;
+ u32 val, i;
group = devm_kzalloc(&pdev->dev, length, GFP_KERNEL);
if (!group)
}
EXPORT_SYMBOL(nmk_gpio_set_mode);
-static int nmk_prcm_gpiocr_get_mode(struct pinctrl_dev *pctldev, int gpio)
+static int __maybe_unused nmk_prcm_gpiocr_get_mode(struct pinctrl_dev *pctldev, int gpio)
{
int i;
u16 reg;
#define PCS_MUX_BITS_NAME "pinctrl-single,bits"
#define PCS_REG_NAME_LEN ((sizeof(unsigned long) * 2) + 1)
#define PCS_OFF_DISABLED ~0U
-#define PCS_MAX_GPIO_VALUES 2
/**
* struct pcs_pingroup - pingroups for a function
};
/**
- * struct pcs_gpio_range - pinctrl gpio range
- * @range: subrange of the GPIO number space
- * @gpio_func: gpio function value in the pinmux register
- */
-struct pcs_gpio_range {
- struct pinctrl_gpio_range range;
- int gpio_func;
-};
-
-/**
* struct pcs_data - wrapper for data needed by pinctrl framework
* @pa: pindesc array
* @cur: index to current element
}
static int pcs_request_gpio(struct pinctrl_dev *pctldev,
- struct pinctrl_gpio_range *range, unsigned pin)
+ struct pinctrl_gpio_range *range, unsigned offset)
{
- struct pcs_device *pcs = pinctrl_dev_get_drvdata(pctldev);
- struct pcs_gpio_range *gpio = NULL;
- int end, mux_bytes;
- unsigned data;
-
- gpio = container_of(range, struct pcs_gpio_range, range);
- end = range->pin_base + range->npins - 1;
- if (pin < range->pin_base || pin > end) {
- dev_err(pctldev->dev,
- "pin %d isn't in the range of %d to %d\n",
- pin, range->pin_base, end);
- return -EINVAL;
- }
- mux_bytes = pcs->width / BITS_PER_BYTE;
- data = pcs->read(pcs->base + pin * mux_bytes) & ~pcs->fmask;
- data |= gpio->gpio_func;
- pcs->write(data, pcs->base + pin * mux_bytes);
- return 0;
+ return -ENOTSUPP;
}
static struct pinmux_ops pcs_pinmux_ops = {
static struct of_device_id pcs_of_match[];
-static int pcs_add_gpio_range(struct device_node *node, struct pcs_device *pcs)
-{
- struct pcs_gpio_range *gpio;
- struct device_node *child;
- struct resource r;
- const char name[] = "pinctrl-single";
- u32 gpiores[PCS_MAX_GPIO_VALUES];
- int ret, i = 0, mux_bytes = 0;
-
- for_each_child_of_node(node, child) {
- ret = of_address_to_resource(child, 0, &r);
- if (ret < 0)
- continue;
- memset(gpiores, 0, sizeof(u32) * PCS_MAX_GPIO_VALUES);
- ret = of_property_read_u32_array(child, "pinctrl-single,gpio",
- gpiores, PCS_MAX_GPIO_VALUES);
- if (ret < 0)
- continue;
- gpio = devm_kzalloc(pcs->dev, sizeof(*gpio), GFP_KERNEL);
- if (!gpio) {
- dev_err(pcs->dev, "failed to allocate pcs gpio\n");
- return -ENOMEM;
- }
- gpio->range.name = devm_kzalloc(pcs->dev, sizeof(name),
- GFP_KERNEL);
- if (!gpio->range.name) {
- dev_err(pcs->dev, "failed to allocate range name\n");
- return -ENOMEM;
- }
- memcpy((char *)gpio->range.name, name, sizeof(name));
-
- gpio->range.id = i++;
- gpio->range.base = gpiores[0];
- gpio->gpio_func = gpiores[1];
- mux_bytes = pcs->width / BITS_PER_BYTE;
- gpio->range.pin_base = (r.start - pcs->res->start) / mux_bytes;
- gpio->range.npins = (r.end - r.start) / mux_bytes + 1;
-
- pinctrl_add_gpio_range(pcs->pctl, &gpio->range);
- }
- return 0;
-}
-
static int pcs_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
goto free;
}
- ret = pcs_add_gpio_range(np, pcs);
- if (ret < 0)
- goto free;
-
dev_info(pcs->dev, "%i pins at pa %p size %u\n",
pcs->desc.npins, pcs->base, pcs->size);
return of_iomap(np, 0);
}
+static int sirfsoc_gpio_of_xlate(struct gpio_chip *gc,
+ const struct of_phandle_args *gpiospec,
+ u32 *flags)
+{
+ if (gpiospec->args[0] > SIRFSOC_GPIO_NO_OF_BANKS * SIRFSOC_GPIO_BANK_SIZE)
+ return -EINVAL;
+
+ if (gc != &sgpio_bank[gpiospec->args[0] / SIRFSOC_GPIO_BANK_SIZE].chip.gc)
+ return -EINVAL;
+
+ if (flags)
+ *flags = gpiospec->args[1];
+
+ return gpiospec->args[0] % SIRFSOC_GPIO_BANK_SIZE;
+}
+
static int sirfsoc_pinmux_probe(struct platform_device *pdev)
{
int ret;
bank->chip.gc.ngpio = SIRFSOC_GPIO_BANK_SIZE;
bank->chip.gc.label = kstrdup(np->full_name, GFP_KERNEL);
bank->chip.gc.of_node = np;
+ bank->chip.gc.of_xlate = sirfsoc_gpio_of_xlate;
+ bank->chip.gc.of_gpio_n_cells = 2;
bank->chip.regs = regs;
bank->id = i;
bank->is_marco = is_marco;
if (force)
pr_warn("module loaded by force\n");
/* first ensure that we are running on IBM HW */
- else if (efi_enabled || !dmi_check_system(ibm_rtl_dmi_table))
+ else if (efi_enabled(EFI_BOOT) || !dmi_check_system(ibm_rtl_dmi_table))
return -ENODEV;
/* Get the address for the Extended BIOS Data Area */
#include <linux/seq_file.h>
#include <linux/debugfs.h>
#include <linux/ctype.h>
+#include <linux/efi.h>
#include <acpi/video.h>
/*
struct samsung_laptop *samsung;
int ret;
+ if (efi_enabled(EFI_BOOT))
+ return -ENODEV;
+
quirks = &samsung_unknown;
if (!force && !dmi_check_system(samsung_dmi_table))
return -ENODEV;
#include <linux/debugfs.h>
#include <linux/seq_file.h>
#include <linux/slab.h>
+#include <linux/module.h>
#include "dbx500-prcmu.h"
};
#ifdef CONFIG_OF
-static int max77686_pmic_dt_parse_pdata(struct max77686_dev *iodev,
+static int max77686_pmic_dt_parse_pdata(struct platform_device *pdev,
struct max77686_platform_data *pdata)
{
+ struct max77686_dev *iodev = dev_get_drvdata(pdev->dev.parent);
struct device_node *pmic_np, *regulators_np;
struct max77686_regulator_data *rdata;
struct of_regulator_match rmatch;
pmic_np = iodev->dev->of_node;
regulators_np = of_find_node_by_name(pmic_np, "voltage-regulators");
if (!regulators_np) {
- dev_err(iodev->dev, "could not find regulators sub-node\n");
+ dev_err(&pdev->dev, "could not find regulators sub-node\n");
return -EINVAL;
}
pdata->num_regulators = ARRAY_SIZE(regulators);
- rdata = devm_kzalloc(iodev->dev, sizeof(*rdata) *
+ rdata = devm_kzalloc(&pdev->dev, sizeof(*rdata) *
pdata->num_regulators, GFP_KERNEL);
if (!rdata) {
- dev_err(iodev->dev,
+ dev_err(&pdev->dev,
"could not allocate memory for regulator data\n");
return -ENOMEM;
}
rmatch.name = regulators[i].name;
rmatch.init_data = NULL;
rmatch.of_node = NULL;
- of_regulator_match(iodev->dev, regulators_np, &rmatch, 1);
+ of_regulator_match(&pdev->dev, regulators_np, &rmatch, 1);
rdata[i].initdata = rmatch.init_data;
rdata[i].of_node = rmatch.of_node;
}
return 0;
}
#else
-static int max77686_pmic_dt_parse_pdata(struct max77686_dev *iodev,
+static int max77686_pmic_dt_parse_pdata(struct platform_device *pdev,
struct max77686_platform_data *pdata)
{
return 0;
}
if (iodev->dev->of_node) {
- ret = max77686_pmic_dt_parse_pdata(iodev, pdata);
+ ret = max77686_pmic_dt_parse_pdata(pdev, pdata);
if (ret)
return ret;
}
return -EINVAL;
}
- ret = of_regulator_match(pdev->dev.parent, regulators,
- max8907_matches,
+ ret = of_regulator_match(&pdev->dev, regulators, max8907_matches,
ARRAY_SIZE(max8907_matches));
if (ret < 0) {
dev_err(&pdev->dev, "Error parsing regulator init data: %d\n",
};
#ifdef CONFIG_OF
-static int max8997_pmic_dt_parse_dvs_gpio(struct max8997_dev *iodev,
+static int max8997_pmic_dt_parse_dvs_gpio(struct platform_device *pdev,
struct max8997_platform_data *pdata,
struct device_node *pmic_np)
{
gpio = of_get_named_gpio(pmic_np,
"max8997,pmic-buck125-dvs-gpios", i);
if (!gpio_is_valid(gpio)) {
- dev_err(iodev->dev, "invalid gpio[%d]: %d\n", i, gpio);
+ dev_err(&pdev->dev, "invalid gpio[%d]: %d\n", i, gpio);
return -EINVAL;
}
pdata->buck125_gpios[i] = gpio;
return 0;
}
-static int max8997_pmic_dt_parse_pdata(struct max8997_dev *iodev,
+static int max8997_pmic_dt_parse_pdata(struct platform_device *pdev,
struct max8997_platform_data *pdata)
{
+ struct max8997_dev *iodev = dev_get_drvdata(pdev->dev.parent);
struct device_node *pmic_np, *regulators_np, *reg_np;
struct max8997_regulator_data *rdata;
unsigned int i, dvs_voltage_nr = 1, ret;
pmic_np = iodev->dev->of_node;
if (!pmic_np) {
- dev_err(iodev->dev, "could not find pmic sub-node\n");
+ dev_err(&pdev->dev, "could not find pmic sub-node\n");
return -ENODEV;
}
regulators_np = of_find_node_by_name(pmic_np, "regulators");
if (!regulators_np) {
- dev_err(iodev->dev, "could not find regulators sub-node\n");
+ dev_err(&pdev->dev, "could not find regulators sub-node\n");
return -EINVAL;
}
for_each_child_of_node(regulators_np, reg_np)
pdata->num_regulators++;
- rdata = devm_kzalloc(iodev->dev, sizeof(*rdata) *
+ rdata = devm_kzalloc(&pdev->dev, sizeof(*rdata) *
pdata->num_regulators, GFP_KERNEL);
if (!rdata) {
- dev_err(iodev->dev, "could not allocate memory for "
- "regulator data\n");
+ dev_err(&pdev->dev, "could not allocate memory for regulator data\n");
return -ENOMEM;
}
break;
if (i == ARRAY_SIZE(regulators)) {
- dev_warn(iodev->dev, "don't know how to configure "
- "regulator %s\n", reg_np->name);
+ dev_warn(&pdev->dev, "don't know how to configure regulator %s\n",
+ reg_np->name);
continue;
}
rdata->id = i;
- rdata->initdata = of_get_regulator_init_data(
- iodev->dev, reg_np);
+ rdata->initdata = of_get_regulator_init_data(&pdev->dev,
+ reg_np);
rdata->reg_node = reg_np;
rdata++;
}
if (pdata->buck1_gpiodvs || pdata->buck2_gpiodvs ||
pdata->buck5_gpiodvs) {
- ret = max8997_pmic_dt_parse_dvs_gpio(iodev, pdata, pmic_np);
+ ret = max8997_pmic_dt_parse_dvs_gpio(pdev, pdata, pmic_np);
if (ret)
return -EINVAL;
} else {
if (pdata->buck125_default_idx >= 8) {
pdata->buck125_default_idx = 0;
- dev_info(iodev->dev, "invalid value for "
- "default dvs index, using 0 instead\n");
+ dev_info(&pdev->dev, "invalid value for default dvs index, using 0 instead\n");
}
}
if (of_property_read_u32_array(pmic_np,
"max8997,pmic-buck1-dvs-voltage",
pdata->buck1_voltage, dvs_voltage_nr)) {
- dev_err(iodev->dev, "buck1 voltages not specified\n");
+ dev_err(&pdev->dev, "buck1 voltages not specified\n");
return -EINVAL;
}
if (of_property_read_u32_array(pmic_np,
"max8997,pmic-buck2-dvs-voltage",
pdata->buck2_voltage, dvs_voltage_nr)) {
- dev_err(iodev->dev, "buck2 voltages not specified\n");
+ dev_err(&pdev->dev, "buck2 voltages not specified\n");
return -EINVAL;
}
if (of_property_read_u32_array(pmic_np,
"max8997,pmic-buck5-dvs-voltage",
pdata->buck5_voltage, dvs_voltage_nr)) {
- dev_err(iodev->dev, "buck5 voltages not specified\n");
+ dev_err(&pdev->dev, "buck5 voltages not specified\n");
return -EINVAL;
}
return 0;
}
#else
-static int max8997_pmic_dt_parse_pdata(struct max8997_dev *iodev,
+static int max8997_pmic_dt_parse_pdata(struct platform_device *pdev,
struct max8997_platform_data *pdata)
{
return 0;
}
if (iodev->dev->of_node) {
- ret = max8997_pmic_dt_parse_pdata(iodev, pdata);
+ ret = max8997_pmic_dt_parse_pdata(pdev, pdata);
if (ret)
return ret;
}
.min = 2800000, .step = 100000, .max = 3100000,
};
static const struct voltage_map_desc ldo10_voltage_map_desc = {
- .min = 95000, .step = 50000, .max = 1300000,
+ .min = 950000, .step = 50000, .max = 1300000,
};
static const struct voltage_map_desc ldo1213_voltage_map_desc = {
.min = 800000, .step = 100000, .max = 3300000,
if (!dev || !node)
return -EINVAL;
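+ /* Clear any stale results left over from a previous call. */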
+ for (i = 0; i < num_matches; i++) {
+ struct of_regulator_match *match = &matches[i];
+ match->init_data = NULL;
+ match->of_node = NULL;
+ }
+
for_each_child_of_node(node, child) {
name = of_get_property(child,
"regulator-compatible", NULL);
.min_uV = S2MPS11_BUCK_MIN2, \
.uV_step = S2MPS11_BUCK_STEP2, \
.n_voltages = S2MPS11_BUCK_N_VOLTAGES, \
- .vsel_reg = S2MPS11_REG_B9CTRL2, \
+ .vsel_reg = S2MPS11_REG_B10CTRL2, \
.vsel_mask = S2MPS11_BUCK_VSEL_MASK, \
- .enable_reg = S2MPS11_REG_B9CTRL1, \
+ .enable_reg = S2MPS11_REG_B10CTRL1, \
.enable_mask = S2MPS11_ENABLE_MASK \
}
if (!regs)
return NULL;
- count = of_regulator_match(pdev->dev.parent, regs,
- reg_matches, TPS65217_NUM_REGULATOR);
+ count = of_regulator_match(&pdev->dev, regs, reg_matches,
+ TPS65217_NUM_REGULATOR);
of_node_put(regs);
if ((count < 0) || (count > TPS65217_NUM_REGULATOR))
return NULL;
return NULL;
}
- ret = of_regulator_match(pdev->dev.parent, regulators, matches, count);
+ ret = of_regulator_match(&pdev->dev, regulators, matches, count);
if (ret < 0) {
dev_err(&pdev->dev, "Error parsing regulator init data: %d\n",
ret);
}
}
rdev = regulator_register(&ri->rinfo->desc, &config);
- if (IS_ERR_OR_NULL(rdev)) {
+ if (IS_ERR(rdev)) {
dev_err(&pdev->dev,
"register regulator failed %s\n",
ri->rinfo->desc.name);
{
unsigned long timeout = jiffies + msecs_to_jiffies(1000);
struct i2c_client *client = data;
+ struct rtc_device *rtc = i2c_get_clientdata(client);
int handled = 0, sr, err;
/*
if (sr & ISL1208_REG_SR_ALM) {
dev_dbg(&client->dev, "alarm!\n");
+ rtc_update_irq(rtc, 1, RTC_IRQF | RTC_AF);
+
/* Clear the alarm */
sr &= ~ISL1208_REG_SR_ALM;
sr = i2c_smbus_write_byte_data(client, ISL1208_REG_SR, sr);
#define RTC_YMR 0x34 /* Year match register */
#define RTC_YLR 0x38 /* Year data load register */
+#define RTC_CR_EN (1 << 0) /* counter enable bit */
#define RTC_CR_CWEN (1 << 26) /* Clockwatch enable bit */
#define RTC_TCR_EN (1 << 1) /* Periodic timer enable bit */
struct pl031_local *ldata;
struct pl031_vendor_data *vendor = id->data;
struct rtc_class_ops *ops = &vendor->ops;
- unsigned long time;
+ unsigned long time, data;
ret = amba_request_regions(adev, NULL);
if (ret)
dev_dbg(&adev->dev, "designer ID = 0x%02x\n", amba_manf(adev));
dev_dbg(&adev->dev, "revision = 0x%01x\n", amba_rev(adev));
+ data = readl(ldata->base + RTC_CR);
/* Enable the clockwatch on ST Variants */
if (vendor->clockwatch)
- writel(readl(ldata->base + RTC_CR) | RTC_CR_CWEN,
- ldata->base + RTC_CR);
+ data |= RTC_CR_CWEN;
+ writel(data | RTC_CR_EN, ldata->base + RTC_CR);
/*
* On ST PL031 variants, the RTC reset value does not provide correct
return -EINVAL;
}
- writel((bin2bcd(tm->tm_year - 100) << DATE_YEAR_S)
+ writel((bin2bcd(tm->tm_year % 100) << DATE_YEAR_S)
| (bin2bcd(tm->tm_mon + 1) << DATE_MONTH_S)
| (bin2bcd(tm->tm_mday))
| ((tm->tm_year >= 200) << DATE_CENTURY_S),
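
The fix writes the year as two BCD digits of (tm_year % 100) plus a separate century flag, instead of (tm_year - 100), which only encodes 2000-2099 correctly (tm_year counts from 1900). A standalone sketch of the arithmetic (bin2bcd assumed to match the kernel's bcd.h semantics):

    #include <stdio.h>

    static unsigned char bin2bcd(unsigned val)
    {
            return ((val / 10) << 4) | (val % 10);
    }

    int main(void)
    {
            int tm_year = 213;                     /* years since 1900 => 2113 */
            unsigned yy = bin2bcd(tm_year % 100);  /* 0x13 */
            unsigned century = tm_year >= 200;     /* 1 => 21xx */

            printf("BCD year=0x%02x century=%u\n", yy, century);
            return 0;
    }
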
return -ENOMEM;
pci_set_drvdata(pdev, pci_info);
- if (efi_enabled)
+ if (efi_enabled(EFI_RUNTIME_SERVICES))
orom = isci_get_efi_var(pdev);
if (!orom)
return -1;
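
The efi_enabled change converts a plain boolean into a per-feature test, so callers ask for the specific capability they need (here runtime services, required to read EFI variables) rather than "booted via EFI at all". A minimal sketch of the bitmask feature-test pattern (standalone; not the kernel's efi.h):

    #include <stdio.h>

    enum { EFI_BOOT = 0, EFI_RUNTIME_SERVICES = 1 };

    static unsigned long efi_flags;   /* populated during early boot */

    static int efi_enabled(int feature)
    {
            return (efi_flags >> feature) & 1;
    }

    int main(void)
    {
            efi_flags = 1UL << EFI_BOOT;  /* EFI boot, no runtime services */

            if (efi_enabled(EFI_RUNTIME_SERVICES))
                    printf("read EFI variables\n");
            else
                    printf("skip EFI variable access\n");
            return 0;
    }
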
}
+
+int ssb_gpio_unregister(struct ssb_bus *bus)
+{
+ if (ssb_chipco_available(&bus->chipco) ||
+ ssb_extif_available(&bus->extif)) {
+ return gpiochip_remove(&bus->gpio);
+ } else {
+ SSB_WARN_ON(1);
+ }
+
+ return -1;
+}
void ssb_bus_unregister(struct ssb_bus *bus)
{
+ int err;
+
+ err = ssb_gpio_unregister(bus);
+ if (err == -EBUSY)
+ ssb_dprintk(KERN_ERR PFX "Some GPIOs are still in use.\n");
+ else if (err)
+ ssb_dprintk(KERN_ERR PFX
+ "Can not unregister GPIO driver: %i\n", err);
+
ssb_buses_lock();
ssb_devices_unregister(bus);
list_del(&bus->list);
#ifdef CONFIG_SSB_DRIVER_GPIO
extern int ssb_gpio_init(struct ssb_bus *bus);
+extern int ssb_gpio_unregister(struct ssb_bus *bus);
#else /* CONFIG_SSB_DRIVER_GPIO */
static inline int ssb_gpio_init(struct ssb_bus *bus)
{
return -ENOTSUPP;
}
+static inline int ssb_gpio_unregister(struct ssb_bus *bus)
+{
+ return 0;
+}
#endif /* CONFIG_SSB_DRIVER_GPIO */
#endif /* LINUX_SSB_PRIVATE_H_ */
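
The header pairs the real ssb_gpio_unregister() prototype with a static inline stub for kernels built without CONFIG_SSB_DRIVER_GPIO, so callers such as ssb_bus_unregister() need no #ifdefs of their own; the stub returns 0 because there is nothing to unregister. A compact sketch of this compile-time dispatch idiom (HAVE_GPIO is a hypothetical build flag):

    #include <stdio.h>

    /* Define HAVE_GPIO at build time to get the real implementation. */
    #ifdef HAVE_GPIO
    int gpio_unregister(void);        /* real version, defined elsewhere */
    #else
    static inline int gpio_unregister(void)
    {
            return 0;                 /* nothing was registered: success */
    }
    #endif

    int main(void)
    {
            printf("unregister: %d\n", gpio_unregister());
            return 0;
    }
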
int se_dev_set_fabric_max_sectors(struct se_device *dev, u32 fabric_max_sectors)
{
+ int block_size = dev->dev_attrib.block_size;
+
if (dev->export_count) {
pr_err("dev[%p]: Unable to change SE Device"
" fabric_max_sectors while export_count is %d\n",
/*
* Align max_sectors down to PAGE_SIZE to follow transport_allocate_data_tasks()
*/
+ if (!block_size) {
+ block_size = 512;
+ pr_warn("Defaulting to 512 for zero block_size\n");
+ }
fabric_max_sectors = se_dev_align_max_sectors(fabric_max_sectors,
- dev->dev_attrib.block_size);
+ block_size);
dev->dev_attrib.fabric_max_sectors = fabric_max_sectors;
pr_debug("dev[%p]: SE Device max_sectors changed to %u\n",
return -EFAULT;
}
+ if (!(dev->dev_flags & DF_CONFIGURED)) {
+ pr_err("se_device not configured yet, cannot port link\n");
+ return -ENODEV;
+ }
+
tpg_ci = &lun_ci->ci_parent->ci_group->cg_item;
se_tpg = container_of(to_config_group(tpg_ci),
struct se_portal_group, tpg_group);
buf[7] = dev->dev_attrib.block_size & 0xff;
rbuf = transport_kmap_data_sg(cmd);
- if (!rbuf)
- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
-
- memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
- transport_kunmap_data_sg(cmd);
+ if (rbuf) {
+ memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
+ transport_kunmap_data_sg(cmd);
+ }
target_complete_cmd(cmd, GOOD);
return 0;
buf[14] = 0x80;
rbuf = transport_kmap_data_sg(cmd);
- if (!rbuf)
- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
-
- memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
- transport_kunmap_data_sg(cmd);
+ if (rbuf) {
+ memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
+ transport_kunmap_data_sg(cmd);
+ }
target_complete_cmd(cmd, GOOD);
return 0;
out:
rbuf = transport_kmap_data_sg(cmd);
- if (!rbuf)
- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
-
- memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
- transport_kunmap_data_sg(cmd);
+ if (rbuf) {
+ memcpy(rbuf, buf, min_t(u32, sizeof(buf), cmd->data_length));
+ transport_kunmap_data_sg(cmd);
+ }
if (!ret)
target_complete_cmd(cmd, GOOD);
{
struct se_device *dev = cmd->se_dev;
char *cdb = cmd->t_task_cdb;
- unsigned char *buf, *map_buf;
+ unsigned char buf[SE_MODE_PAGE_BUF], *rbuf;
int type = dev->transport->get_device_type(dev);
int ten = (cmd->t_task_cdb[0] == MODE_SENSE_10);
bool dbd = !!(cdb[1] & 0x08);
int ret;
int i;
- map_buf = transport_kmap_data_sg(cmd);
- if (!map_buf)
- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
- /*
- * If SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC is not set, then we
- * know we actually allocated a full page. Otherwise, if the
- * data buffer is too small, allocate a temporary buffer so we
- * don't have to worry about overruns in all our INQUIRY
- * emulation handling.
- */
- if (cmd->data_length < SE_MODE_PAGE_BUF &&
- (cmd->se_cmd_flags & SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC)) {
- buf = kzalloc(SE_MODE_PAGE_BUF, GFP_KERNEL);
- if (!buf) {
- transport_kunmap_data_sg(cmd);
- return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
- }
- } else {
- buf = map_buf;
- }
+ memset(buf, 0, SE_MODE_PAGE_BUF);
+
/*
* Skip over MODE DATA LENGTH + MEDIUM TYPE fields to byte 3 for
* MODE_SENSE_10 and byte 2 for MODE_SENSE (6).
if (page == 0x3f) {
if (subpage != 0x00 && subpage != 0xff) {
pr_warn("MODE_SENSE: Invalid subpage code: 0x%02x\n", subpage);
- kfree(buf);
- transport_kunmap_data_sg(cmd);
return TCM_INVALID_CDB_FIELD;
}
pr_err("MODE SENSE: unimplemented page/subpage: 0x%02x/0x%02x\n",
page, subpage);
- transport_kunmap_data_sg(cmd);
return TCM_UNKNOWN_MODE_PAGE;
set_length:
else
buf[0] = length - 1;
- if (buf != map_buf) {
- memcpy(map_buf, buf, cmd->data_length);
- kfree(buf);
+ rbuf = transport_kmap_data_sg(cmd);
+ if (rbuf) {
+ memcpy(rbuf, buf, min_t(u32, SE_MODE_PAGE_BUF, cmd->data_length));
+ transport_kunmap_data_sg(cmd);
}
- transport_kunmap_data_sg(cmd);
target_complete_cmd(cmd, GOOD);
return 0;
}
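
These emulation hunks all converge on one pattern: build the response in an on-stack buffer, then map the scatter-gather data, copy min(buffer size, data_length) bytes, and unmap. A short or missing data buffer now degrades to a truncated copy instead of an error return or an overrun. A standalone sketch of the bounded-copy shape (names are illustrative):

    #include <stdio.h>
    #include <string.h>

    #define RESP_BUF 64

    /* Copy at most data_length bytes of the prepared response. */
    static size_t copy_response(unsigned char *dst, size_t data_length,
                                const unsigned char *resp, size_t resp_len)
    {
            size_t n = data_length < resp_len ? data_length : resp_len;

            if (dst && n)
                    memcpy(dst, resp, n);
            return n;
    }

    int main(void)
    {
            unsigned char resp[RESP_BUF] = { 0x72, 0x0b };  /* fake payload */
            unsigned char out[8];

            printf("copied %zu bytes\n",
                   copy_response(out, sizeof(out), resp, sizeof(resp)));
            return 0;
    }
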
#include <asm/unaligned.h>
#include <linux/platform_device.h>
#include <linux/workqueue.h>
+#include <linux/pm_runtime.h>
#include <linux/usb.h>
#include <linux/usb/hcd.h>
return retval;
}
+/*
+ * usb_hcd_start_port_resume - a root-hub port is sending a resume signal
+ * @bus: the bus which the root hub belongs to
+ * @portnum: the port which is being resumed
+ *
+ * HCDs should call this function when they know that a resume signal is
+ * being sent to a root-hub port. The root hub will be prevented from
+ * going into autosuspend until usb_hcd_end_port_resume() is called.
+ *
+ * The bus's private lock must be held by the caller.
+ */
+void usb_hcd_start_port_resume(struct usb_bus *bus, int portnum)
+{
+ unsigned bit = 1 << portnum;
+
+ if (!(bus->resuming_ports & bit)) {
+ bus->resuming_ports |= bit;
+ pm_runtime_get_noresume(&bus->root_hub->dev);
+ }
+}
+EXPORT_SYMBOL_GPL(usb_hcd_start_port_resume);
+
+/*
+ * usb_hcd_end_port_resume - a root-hub port has stopped sending a resume signal
+ * @bus: the bus which the root hub belongs to
+ * @portnum: the port which is being resumed
+ *
+ * HCDs should call this function when they know that a resume signal has
+ * stopped being sent to a root-hub port. The root hub will be allowed to
+ * autosuspend again.
+ *
+ * The bus's private lock must be held by the caller.
+ */
+void usb_hcd_end_port_resume(struct usb_bus *bus, int portnum)
+{
+ unsigned bit = 1 << portnum;
+
+ if (bus->resuming_ports & bit) {
+ bus->resuming_ports &= ~bit;
+ pm_runtime_put_noidle(&bus->root_hub->dev);
+ }
+}
+EXPORT_SYMBOL_GPL(usb_hcd_end_port_resume);
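
The pair above is deliberately idempotent: a per-port bit in resuming_ports ensures repeated start calls take only one pm_runtime reference, and the matching end call drops exactly one, keeping the root hub's usage count balanced even if an HCD signals the same port twice. A standalone sketch of the bitmask-guarded get/put pattern (usage_count stands in for the pm_runtime counter):

    #include <stdio.h>

    static unsigned resuming_ports;   /* one bit per port */
    static int usage_count;           /* stand-in for pm_runtime usage */

    static void start_port_resume(int port)
    {
            unsigned bit = 1u << port;

            if (!(resuming_ports & bit)) {   /* first start only */
                    resuming_ports |= bit;
                    usage_count++;
            }
    }

    static void end_port_resume(int port)
    {
            unsigned bit = 1u << port;

            if (resuming_ports & bit) {      /* matching end only */
                    resuming_ports &= ~bit;
                    usage_count--;
            }
    }

    int main(void)
    {
            start_port_resume(2);
            start_port_resume(2);            /* duplicate: no extra ref */
            end_port_resume(2);
            printf("usage_count=%d\n", usage_count);   /* 0 */
            return 0;
    }
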
/*-------------------------------------------------------------------------*/
EXPORT_SYMBOL_GPL(usb_enable_ltm);
#ifdef CONFIG_USB_SUSPEND
+/*
+ * usb_disable_function_remotewakeup - disable usb3.0
+ * device's function remote wakeup
+ * @udev: target device
+ *
+ * Assume there's only one function on the USB 3.0
+ * device and disable remote wake for the first
+ * interface. FIXME if the interface association
+ * descriptor shows there's more than one function.
+ */
+static int usb_disable_function_remotewakeup(struct usb_device *udev)
+{
+ return usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
+ USB_REQ_CLEAR_FEATURE, USB_RECIP_INTERFACE,
+ USB_INTRF_FUNC_SUSPEND, 0, NULL, 0,
+ USB_CTRL_SET_TIMEOUT);
+}
/*
* usb_port_suspend - suspend a usb device's upstream port
dev_dbg(hub->intfdev, "can't suspend port %d, status %d\n",
port1, status);
/* paranoia: "should not happen" */
- if (udev->do_remote_wakeup)
- (void) usb_control_msg(udev, usb_sndctrlpipe(udev, 0),
- USB_REQ_CLEAR_FEATURE, USB_RECIP_DEVICE,
- USB_DEVICE_REMOTE_WAKEUP, 0,
- NULL, 0,
- USB_CTRL_SET_TIMEOUT);
+ if (udev->do_remote_wakeup) {
+ if (!hub_is_superspeed(hub->hdev)) {
+ (void) usb_control_msg(udev,
+ usb_sndctrlpipe(udev, 0),
+ USB_REQ_CLEAR_FEATURE,
+ USB_RECIP_DEVICE,
+ USB_DEVICE_REMOTE_WAKEUP, 0,
+ NULL, 0,
+ USB_CTRL_SET_TIMEOUT);
+ } else
+ (void) usb_disable_function_remotewakeup(udev);
+
+ }
/* Try to enable USB2 hardware LPM again */
if (udev->usb2_hw_lpm_capable == 1)
* udev->reset_resume
*/
} else if (udev->actconfig && !udev->reset_resume) {
- le16_to_cpus(&devstatus);
- if (devstatus & (1 << USB_DEVICE_REMOTE_WAKEUP)) {
- status = usb_control_msg(udev,
- usb_sndctrlpipe(udev, 0),
- USB_REQ_CLEAR_FEATURE,
+ if (!hub_is_superspeed(udev->parent)) {
+ le16_to_cpus(&devstatus);
+ if (devstatus & (1 << USB_DEVICE_REMOTE_WAKEUP))
+ status = usb_control_msg(udev,
+ usb_sndctrlpipe(udev, 0),
+ USB_REQ_CLEAR_FEATURE,
USB_RECIP_DEVICE,
- USB_DEVICE_REMOTE_WAKEUP, 0,
- NULL, 0,
- USB_CTRL_SET_TIMEOUT);
- if (status)
- dev_dbg(&udev->dev,
- "disable remote wakeup, status %d\n",
- status);
+ USB_DEVICE_REMOTE_WAKEUP, 0,
+ NULL, 0,
+ USB_CTRL_SET_TIMEOUT);
+ } else {
+ status = usb_get_status(udev, USB_RECIP_INTERFACE, 0,
+ &devstatus);
+ le16_to_cpus(&devstatus);
+ if (!status && devstatus & (USB_INTRF_STAT_FUNC_RW_CAP
+ | USB_INTRF_STAT_FUNC_RW))
+ status =
+ usb_disable_function_remotewakeup(udev);
}
+
+ if (status)
+ dev_dbg(&udev->dev,
+ "disable remote wakeup, status %d\n",
+ status);
status = 0;
}
return status;
if (epnum == 0 || epnum == 1) {
dep->endpoint.maxpacket = 512;
+ dep->endpoint.maxburst = 1;
dep->endpoint.ops = &dwc3_gadget_ep0_ops;
if (!epnum)
dwc->gadget.ep0 = &dep->endpoint;
pr_err("%s: unmapped value: %lu\n", opts, value);
return -EINVAL;
}
- }
- else if (!memcmp(opts, "gid", 3))
+ } else if (!memcmp(opts, "gid", 3)) {
data->perms.gid = make_kgid(current_user_ns(), value);
if (!gid_valid(data->perms.gid)) {
pr_err("%s: unmapped value: %lu\n", opts, value);
return -EINVAL;
}
- else
+ } else {
goto invalid;
+ }
break;
default:
#include <linux/platform_device.h>
#include <linux/io.h>
-#include <mach/hardware.h>
-
static struct clk *mxc_ahb_clk;
static struct clk *mxc_per_clk;
static struct clk *mxc_ipg_clk;
/* workaround ENGcm09152 for i.MX35 */
-#define USBPHYCTRL_OTGBASE_OFFSET 0x608
+#define MX35_USBPHYCTRL_OFFSET 0x600
+#define USBPHYCTRL_OTGBASE_OFFSET 0x8
#define USBPHYCTRL_EVDO (1 << 23)
int fsl_udc_clk_init(struct platform_device *pdev)
clk_prepare_enable(mxc_per_clk);
/* make sure USB_CLK is running at 60 MHz +/- 1000 Hz */
- if (!cpu_is_mx51()) {
+ if (!strcmp(pdev->id_entry->name, "imx-udc-mx27")) {
freq = clk_get_rate(mxc_per_clk);
if (pdata->phy_mode != FSL_USB2_PHY_ULPI &&
(freq < 59999000 || freq > 60001000)) {
return ret;
}
-void fsl_udc_clk_finalize(struct platform_device *pdev)
+int fsl_udc_clk_finalize(struct platform_device *pdev)
{
struct fsl_usb2_platform_data *pdata = pdev->dev.platform_data;
- if (cpu_is_mx35()) {
- unsigned int v;
+ int ret = 0;
- /* workaround ENGcm09152 for i.MX35 */
- if (pdata->workaround & FLS_USB2_WORKAROUND_ENGCM09152) {
- v = readl(MX35_IO_ADDRESS(MX35_USB_BASE_ADDR +
- USBPHYCTRL_OTGBASE_OFFSET));
- writel(v | USBPHYCTRL_EVDO,
- MX35_IO_ADDRESS(MX35_USB_BASE_ADDR +
- USBPHYCTRL_OTGBASE_OFFSET));
+ /* workaround ENGcm09152 for i.MX35 */
+ if (pdata->workaround & FLS_USB2_WORKAROUND_ENGCM09152) {
+ unsigned int v;
+ struct resource *res = platform_get_resource
+ (pdev, IORESOURCE_MEM, 0);
+ void __iomem *phy_regs = ioremap(res->start +
+ MX35_USBPHYCTRL_OFFSET, 512);
+ if (!phy_regs) {
+ dev_err(&pdev->dev, "ioremap for phy address fails\n");
+ ret = -EINVAL;
+ goto ioremap_err;
}
+
+ v = readl(phy_regs + USBPHYCTRL_OTGBASE_OFFSET);
+ writel(v | USBPHYCTRL_EVDO,
+ phy_regs + USBPHYCTRL_OTGBASE_OFFSET);
+
+ iounmap(phy_regs);
}
+
+ioremap_err:
/* ULPI transceivers don't need usbpll */
if (pdata->phy_mode == FSL_USB2_PHY_ULPI) {
clk_disable_unprepare(mxc_per_clk);
mxc_per_clk = NULL;
}
+
+ return ret;
}
void fsl_udc_clk_release(void)
#include <linux/fsl_devices.h>
#include <linux/dmapool.h>
#include <linux/delay.h>
+#include <linux/of_device.h>
#include <asm/byteorder.h>
#include <asm/io.h>
unsigned int i;
u32 dccparams;
- if (strcmp(pdev->name, driver_name)) {
- VDBG("Wrong device");
- return -ENODEV;
- }
-
udc_controller = kzalloc(sizeof(struct fsl_udc), GFP_KERNEL);
if (udc_controller == NULL) {
ERR("malloc udc failed\n");
dr_controller_setup(udc_controller);
}
- fsl_udc_clk_finalize(pdev);
+ ret = fsl_udc_clk_finalize(pdev);
+ if (ret)
+ goto err_free_irq;
/* Setup gadget structure */
udc_controller->gadget.ops = &fsl_gadget_ops;
return fsl_udc_resume(NULL);
}
-
/*-------------------------------------------------------------------------
Register entry point for the peripheral controller driver
--------------------------------------------------------------------------*/
-
+static const struct platform_device_id fsl_udc_devtype[] = {
+ {
+ .name = "imx-udc-mx27",
+ }, {
+ .name = "imx-udc-mx51",
+ }, {
+ /* sentinel */
+ }
+};
+MODULE_DEVICE_TABLE(platform, fsl_udc_devtype);
static struct platform_driver udc_driver = {
- .remove = __exit_p(fsl_udc_remove),
+ .remove = __exit_p(fsl_udc_remove),
+ /* Just for FSL i.mx SoC currently */
+ .id_table = fsl_udc_devtype,
/* these suspend and resume are not usb suspend and resume */
- .suspend = fsl_udc_suspend,
- .resume = fsl_udc_resume,
- .driver = {
- .name = (char *)driver_name,
- .owner = THIS_MODULE,
- /* udc suspend/resume called from OTG driver */
- .suspend = fsl_udc_otg_suspend,
- .resume = fsl_udc_otg_resume,
+ .suspend = fsl_udc_suspend,
+ .resume = fsl_udc_resume,
+ .driver = {
+ .name = (char *)driver_name,
+ .owner = THIS_MODULE,
+ /* udc suspend/resume called from OTG driver */
+ .suspend = fsl_udc_otg_suspend,
+ .resume = fsl_udc_otg_resume,
},
};
struct platform_device;
#ifdef CONFIG_ARCH_MXC
int fsl_udc_clk_init(struct platform_device *pdev);
-void fsl_udc_clk_finalize(struct platform_device *pdev);
+int fsl_udc_clk_finalize(struct platform_device *pdev);
void fsl_udc_clk_release(void);
#else
static inline int fsl_udc_clk_init(struct platform_device *pdev)
{
return 0;
}
-static inline void fsl_udc_clk_finalize(struct platform_device *pdev)
+static inline int fsl_udc_clk_finalize(struct platform_device *pdev)
{
+ return 0;
}
static inline void fsl_udc_clk_release(void)
{
Variation of ARC USB block used in some Freescale chips.
config USB_EHCI_MXC
- bool "Support for Freescale i.MX on-chip EHCI USB controller"
+ tristate "Support for Freescale i.MX on-chip EHCI USB controller"
depends on USB_EHCI_HCD && ARCH_MXC
select USB_EHCI_ROOT_HUB_TT
---help---
obj-$(CONFIG_USB_EHCI_HCD) += ehci-hcd.o
obj-$(CONFIG_USB_EHCI_PCI) += ehci-pci.o
obj-$(CONFIG_USB_EHCI_HCD_PLATFORM) += ehci-platform.o
+obj-$(CONFIG_USB_EHCI_MXC) += ehci-mxc.o
obj-$(CONFIG_USB_OXU210HP_HCD) += oxu210hp-hcd.o
obj-$(CONFIG_USB_ISP116X_HCD) += isp116x-hcd.o
#undef VERBOSE_DEBUG
#undef EHCI_URB_TRACE
-#ifdef DEBUG
-#define EHCI_STATS
-#endif
-
/* magic numbers that can affect system performance */
#define EHCI_TUNE_CERR 3 /* 0-3 qtd retries; 0 == don't stop */
#define EHCI_TUNE_RL_HS 4 /* nak throttle; see 4.9 */
ehci->reset_done[i] = jiffies + msecs_to_jiffies(25);
set_bit(i, &ehci->resuming_ports);
ehci_dbg (ehci, "port %d remote wakeup\n", i + 1);
+ usb_hcd_start_port_resume(&hcd->self, i);
mod_timer(&hcd->rh_timer, ehci->reset_done[i]);
}
}
#define PLATFORM_DRIVER ehci_fsl_driver
#endif
-#ifdef CONFIG_USB_EHCI_MXC
-#include "ehci-mxc.c"
-#define PLATFORM_DRIVER ehci_mxc_driver
-#endif
-
#ifdef CONFIG_USB_EHCI_SH
#include "ehci-sh.c"
#define PLATFORM_DRIVER ehci_hcd_sh_driver
#if !IS_ENABLED(CONFIG_USB_EHCI_PCI) && \
!IS_ENABLED(CONFIG_USB_EHCI_HCD_PLATFORM) && \
- !defined(CONFIG_USB_CHIPIDEA_HOST) && \
+ !IS_ENABLED(CONFIG_USB_CHIPIDEA_HOST) && \
+ !IS_ENABLED(CONFIG_USB_EHCI_MXC) && \
!defined(PLATFORM_DRIVER) && \
!defined(PS3_SYSTEM_BUS_DRIVER) && \
!defined(OF_PLATFORM_DRIVER) && \
status = STS_PCD;
}
}
- /* FIXME autosuspend idle root hubs */
+
+ /* If a resume is in progress, make sure it can finish */
+ if (ehci->resuming_ports)
+ mod_timer(&hcd->rh_timer, jiffies + msecs_to_jiffies(25));
+
spin_unlock_irqrestore (&ehci->lock, flags);
return status ? retval : 0;
}
/* resume signaling for 20 msec */
ehci->reset_done[wIndex] = jiffies
+ msecs_to_jiffies(20);
+ usb_hcd_start_port_resume(&hcd->self, wIndex);
/* check the port again */
mod_timer(&ehci_to_hcd(ehci)->rh_timer,
ehci->reset_done[wIndex]);
clear_bit(wIndex, &ehci->suspended_ports);
set_bit(wIndex, &ehci->port_c_suspend);
ehci->reset_done[wIndex] = 0;
+ usb_hcd_end_port_resume(&hcd->self, wIndex);
/* stop resume signaling */
temp = ehci_readl(ehci, status_reg);
ehci->reset_done[wIndex] = 0;
if (temp & PORT_PE)
set_bit(wIndex, &ehci->port_c_suspend);
+ usb_hcd_end_port_resume(&hcd->self, wIndex);
}
if (temp & PORT_OC)
* Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/usb/otg.h>
#include <linux/usb/ulpi.h>
#include <linux/slab.h>
+#include <linux/usb.h>
+#include <linux/usb/hcd.h>
#include <linux/platform_data/usb-ehci-mxc.h>
#include <asm/mach-types.h>
+#include "ehci.h"
+
+#define DRIVER_DESC "Freescale On-Chip EHCI Host driver"
+
+static const char hcd_name[] = "ehci-mxc";
+
#define ULPI_VIEWPORT_OFFSET 0x170
struct ehci_mxc_priv {
struct clk *usbclk, *ahbclk, *phyclk;
- struct usb_hcd *hcd;
};
-/* called during probe() after chip reset completes */
-static int ehci_mxc_setup(struct usb_hcd *hcd)
-{
- hcd->has_tt = 1;
-
- return ehci_setup(hcd);
-}
+static struct hc_driver __read_mostly ehci_mxc_hc_driver;
-static const struct hc_driver ehci_mxc_hc_driver = {
- .description = hcd_name,
- .product_desc = "Freescale On-Chip EHCI Host Controller",
- .hcd_priv_size = sizeof(struct ehci_hcd),
-
- /*
- * generic hardware linkage
- */
- .irq = ehci_irq,
- .flags = HCD_USB2 | HCD_MEMORY,
-
- /*
- * basic lifecycle operations
- */
- .reset = ehci_mxc_setup,
- .start = ehci_run,
- .stop = ehci_stop,
- .shutdown = ehci_shutdown,
-
- /*
- * managing i/o requests and associated device resources
- */
- .urb_enqueue = ehci_urb_enqueue,
- .urb_dequeue = ehci_urb_dequeue,
- .endpoint_disable = ehci_endpoint_disable,
- .endpoint_reset = ehci_endpoint_reset,
-
- /*
- * scheduling support
- */
- .get_frame_number = ehci_get_frame,
-
- /*
- * root hub support
- */
- .hub_status_data = ehci_hub_status_data,
- .hub_control = ehci_hub_control,
- .bus_suspend = ehci_bus_suspend,
- .bus_resume = ehci_bus_resume,
- .relinquish_port = ehci_relinquish_port,
- .port_handed_over = ehci_port_handed_over,
-
- .clear_tt_buffer_complete = ehci_clear_tt_buffer_complete,
+static const struct ehci_driver_overrides ehci_mxc_overrides __initdata = {
+ .extra_priv_size = sizeof(struct ehci_mxc_priv),
};
static int ehci_mxc_drv_probe(struct platform_device *pdev)
if (!hcd)
return -ENOMEM;
- priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
- if (!priv) {
- ret = -ENOMEM;
- goto err_alloc;
- }
-
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(dev, "Found HC with no register addr. Check setup!\n");
goto err_alloc;
}
+ hcd->has_tt = 1;
+ ehci = hcd_to_ehci(hcd);
+ priv = (struct ehci_mxc_priv *) ehci->priv;
+
/* enable clocks */
priv->usbclk = devm_clk_get(&pdev->dev, "ipg");
if (IS_ERR(priv->usbclk)) {
mdelay(10);
}
- ehci = hcd_to_ehci(hcd);
-
/* EHCI registers start at offset 0x100 */
ehci->caps = hcd->regs + 0x100;
ehci->regs = hcd->regs + 0x100 +
}
}
- priv->hcd = hcd;
- platform_set_drvdata(pdev, priv);
+ platform_set_drvdata(pdev, hcd);
ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
if (ret)
static int __exit ehci_mxc_drv_remove(struct platform_device *pdev)
{
struct mxc_usbh_platform_data *pdata = pdev->dev.platform_data;
- struct ehci_mxc_priv *priv = platform_get_drvdata(pdev);
- struct usb_hcd *hcd = priv->hcd;
+ struct usb_hcd *hcd = platform_get_drvdata(pdev);
+ struct ehci_hcd *ehci = hcd_to_ehci(hcd);
+ struct ehci_mxc_priv *priv = (struct ehci_mxc_priv *) ehci->priv;
+
+ usb_remove_hcd(hcd);
if (pdata && pdata->exit)
pdata->exit(pdev);
if (pdata->otg)
usb_phy_shutdown(pdata->otg);
- usb_remove_hcd(hcd);
- usb_put_hcd(hcd);
- platform_set_drvdata(pdev, NULL);
-
clk_disable_unprepare(priv->usbclk);
clk_disable_unprepare(priv->ahbclk);
if (priv->phyclk)
clk_disable_unprepare(priv->phyclk);
+ usb_put_hcd(hcd);
+ platform_set_drvdata(pdev, NULL);
return 0;
}
static void ehci_mxc_drv_shutdown(struct platform_device *pdev)
{
- struct ehci_mxc_priv *priv = platform_get_drvdata(pdev);
- struct usb_hcd *hcd = priv->hcd;
+ struct usb_hcd *hcd = platform_get_drvdata(pdev);
if (hcd->driver->shutdown)
hcd->driver->shutdown(hcd);
static struct platform_driver ehci_mxc_driver = {
.probe = ehci_mxc_drv_probe,
- .remove = __exit_p(ehci_mxc_drv_remove),
+ .remove = ehci_mxc_drv_remove,
.shutdown = ehci_mxc_drv_shutdown,
.driver = {
.name = "mxc-ehci",
},
};
+
+static int __init ehci_mxc_init(void)
+{
+ if (usb_disabled())
+ return -ENODEV;
+
+ pr_info("%s: " DRIVER_DESC "\n", hcd_name);
+
+ ehci_init_driver(&ehci_mxc_hc_driver, &ehci_mxc_overrides);
+ return platform_driver_register(&ehci_mxc_driver);
+}
+module_init(ehci_mxc_init);
+
+static void __exit ehci_mxc_cleanup(void)
+{
+ platform_driver_unregister(&ehci_mxc_driver);
+}
+module_exit(ehci_mxc_cleanup);
+
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_AUTHOR("Sascha Hauer");
+MODULE_LICENSE("GPL");
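
The conversion drops the hand-rolled hc_driver table in favour of ehci_init_driver(), which copies the generic EHCI driver and applies a small overrides structure; the only per-platform parts left are extra_priv_size and the probe glue. A hedged, standalone sketch of the copy-then-override pattern (simplified types, not the real ehci.h):

    #include <stdio.h>

    struct hc_driver {
            const char *product_desc;
            size_t priv_size;
    };

    struct driver_overrides {
            const char *product_desc;   /* optional replacement */
            size_t extra_priv_size;     /* platform data appended */
    };

    static const struct hc_driver generic_ehci = {
            .product_desc = "Generic EHCI",
            .priv_size = 128,
    };

    static void init_driver(struct hc_driver *dst,
                            const struct driver_overrides *over)
    {
            *dst = generic_ehci;                     /* start from generic */
            dst->priv_size += over->extra_priv_size; /* room for priv[] */
            if (over->product_desc)
                    dst->product_desc = over->product_desc;
    }

    int main(void)
    {
            struct hc_driver mxc;
            struct driver_overrides over = {
                    .product_desc = "Freescale On-Chip EHCI Host Controller",
                    .extra_priv_size = 3 * sizeof(void *),
            };

            init_driver(&mxc, &over);
            printf("%s, priv=%zu\n", mxc.product_desc, mxc.priv_size);
            return 0;
    }
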
if (ehci->async_iaa || ehci->async_unlinking)
return;
- /* Do all the waiting QHs at once */
- ehci->async_iaa = ehci->async_unlink;
- ehci->async_unlink = NULL;
-
/* If the controller isn't running, we don't have to wait for it */
if (unlikely(ehci->rh_state < EHCI_RH_RUNNING)) {
+
+ /* Do all the waiting QHs */
+ ehci->async_iaa = ehci->async_unlink;
+ ehci->async_unlink = NULL;
+
if (!nested) /* Avoid recursion */
end_unlink_async(ehci);
/* Otherwise start a new IAA cycle */
} else if (likely(ehci->rh_state == EHCI_RH_RUNNING)) {
+ struct ehci_qh *qh;
+
+ /* Do only the first waiting QH (nVidia bug?) */
+ qh = ehci->async_unlink;
+ ehci->async_iaa = qh;
+ ehci->async_unlink = qh->unlink_next;
+ qh->unlink_next = NULL;
+
/* Make sure the unlinks are all visible to the hardware */
wmb();
}
}
+static void start_unlink_async(struct ehci_hcd *ehci, struct ehci_qh *qh);
+
static void unlink_empty_async(struct ehci_hcd *ehci)
{
- struct ehci_qh *qh, *next;
- bool stopped = (ehci->rh_state < EHCI_RH_RUNNING);
+ struct ehci_qh *qh;
+ struct ehci_qh *qh_to_unlink = NULL;
bool check_unlinks_later = false;
+ int count = 0;
- /* Unlink all the async QHs that have been empty for a timer cycle */
- next = ehci->async->qh_next.qh;
- while (next) {
- qh = next;
- next = qh->qh_next.qh;
-
+ /* Find the last async QH which has been empty for a timer cycle */
+ for (qh = ehci->async->qh_next.qh; qh; qh = qh->qh_next.qh) {
if (list_empty(&qh->qtd_list) &&
qh->qh_state == QH_STATE_LINKED) {
- if (!stopped && qh->unlink_cycle ==
- ehci->async_unlink_cycle)
+ ++count;
+ if (qh->unlink_cycle == ehci->async_unlink_cycle)
check_unlinks_later = true;
else
- single_unlink_async(ehci, qh);
+ qh_to_unlink = qh;
}
}
- /* Start a new IAA cycle if any QHs are waiting for it */
- if (ehci->async_unlink)
- start_iaa_cycle(ehci, false);
+ /* If nothing else is being unlinked, unlink the last empty QH */
+ if (!ehci->async_iaa && !ehci->async_unlink && qh_to_unlink) {
+ start_unlink_async(ehci, qh_to_unlink);
+ --count;
+ }
- /* QHs that haven't been empty for long enough will be handled later */
- if (check_unlinks_later) {
+ /* Other QHs will be handled later */
+ if (count > 0) {
ehci_enable_event(ehci, EHCI_HRTIMER_ASYNC_UNLINKS, true);
++ehci->async_unlink_cycle;
}
}
static const unsigned char
-max_tt_usecs[] = { 125, 125, 125, 125, 125, 125, 30, 0 };
+max_tt_usecs[] = { 125, 125, 125, 125, 125, 125, 125, 25 };
/* carryover low/fullspeed bandwidth that crosses uframe boundries */
static inline void carryover_tt_bandwidth(unsigned short tt_usecs[8])
}
ehci->now_frame = now_frame;
+ frame = ehci->last_iso_frame;
for (;;) {
union ehci_shadow q, *q_p;
__hc32 type, *hw_p;
- frame = ehci->last_iso_frame;
restart:
/* scan each element in frame's queue for completions */
q_p = &ehci->pshadow [frame];
/* Stop when we have reached the current frame */
if (frame == now_frame)
break;
- ehci->last_iso_frame = (frame + 1) & fmask;
+
+ /* The last frame may still have active siTDs */
+ ehci->last_iso_frame = frame;
+ frame = (frame + 1) & fmask;
}
}
if (want != actual) {
- /* Poll again later, but give up after about 20 ms */
- if (ehci->ASS_poll_count++ < 20) {
- ehci_enable_event(ehci, EHCI_HRTIMER_POLL_ASS, true);
- return;
- }
- ehci_dbg(ehci, "Waited too long for the async schedule status (%x/%x), giving up\n",
- want, actual);
+ /* Poll again later */
+ ehci_enable_event(ehci, EHCI_HRTIMER_POLL_ASS, true);
+ ++ehci->ASS_poll_count;
+ return;
}
+
+ if (ehci->ASS_poll_count > 20)
+ ehci_dbg(ehci, "ASS poll count reached %d\n",
+ ehci->ASS_poll_count);
ehci->ASS_poll_count = 0;
/* The status is up-to-date; restart or stop the schedule as needed */
if (want != actual) {
- /* Poll again later, but give up after about 20 ms */
- if (ehci->PSS_poll_count++ < 20) {
- ehci_enable_event(ehci, EHCI_HRTIMER_POLL_PSS, true);
- return;
- }
- ehci_dbg(ehci, "Waited too long for the periodic schedule status (%x/%x), giving up\n",
- want, actual);
+ /* Poll again later */
+ ehci_enable_event(ehci, EHCI_HRTIMER_POLL_PSS, true);
+ return;
}
+
+ if (ehci->PSS_poll_count > 20)
+ ehci_dbg(ehci, "PSS poll count reached %d\n",
+ ehci->PSS_poll_count);
ehci->PSS_poll_count = 0;
/* The status is up-to-date; restart or stop the schedule as needed */
#endif
/* statistics can be kept for tuning/monitoring */
+#ifdef DEBUG
+#define EHCI_STATS
+#endif
+
struct ehci_stats {
/* irq usage */
unsigned long normal;
#ifdef DEBUG
struct dentry *debug_dir;
#endif
+
+ /* platform-specific data -- must come last */
+ unsigned long priv[0] __aligned(sizeof(s64));
};
/* convert between an HCD pointer and the corresponding EHCI_HCD */
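
The zero-length priv[] array at the end of the structure is the allocation hook that extra_priv_size feeds: the core allocates sizeof(struct ehci_hcd) plus the extra bytes in one block, and platform code casts ehci->priv back to its own type. A standalone sketch of the trailing-storage idiom (using a C99 flexible array member in place of the kernel's [0] style; names are stand-ins):

    #include <stdio.h>
    #include <stdlib.h>

    struct core {                  /* stands in for struct ehci_hcd */
            int state;
            long priv[];           /* platform data -- must come last */
    };

    struct platform_priv {         /* stands in for ehci_mxc_priv */
            int usbclk, ahbclk;
    };

    int main(void)
    {
            struct core *c = malloc(sizeof(*c) + sizeof(struct platform_priv));
            struct platform_priv *p;

            if (!c)
                    return 1;
            p = (struct platform_priv *)c->priv;  /* one allocation, two views */
            p->usbclk = 1;
            printf("usbclk=%d\n", p->usbclk);
            free(c);
            return 0;
    }
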
"defaulting to EHCI.\n");
dev_warn(&xhci_pdev->dev,
"USB 3.0 devices will work at USB 2.0 speeds.\n");
+ usb_disable_xhci_ports(xhci_pdev);
return;
}
return IRQ_NONE;
uhci_writew(uhci, status, USBSTS); /* Clear it */
+ spin_lock(&uhci->lock);
+ if (unlikely(!uhci->is_initialized)) /* not yet configured */
+ goto done;
+
if (status & ~(USBSTS_USBINT | USBSTS_ERROR | USBSTS_RD)) {
if (status & USBSTS_HSE)
dev_err(uhci_dev(uhci), "host system error, "
dev_err(uhci_dev(uhci), "host controller process "
"error, something bad happened!\n");
if (status & USBSTS_HCH) {
- spin_lock(&uhci->lock);
if (uhci->rh_state >= UHCI_RH_RUNNING) {
dev_err(uhci_dev(uhci),
"host controller halted, "
* pending unlinks */
mod_timer(&hcd->rh_timer, jiffies);
}
- spin_unlock(&uhci->lock);
}
}
- if (status & USBSTS_RD)
+ if (status & USBSTS_RD) {
+ spin_unlock(&uhci->lock);
usb_hcd_poll_rh_status(hcd);
- else {
- spin_lock(&uhci->lock);
+ } else {
uhci_scan_schedule(uhci);
+ done:
spin_unlock(&uhci->lock);
}
*/
mb();
+ spin_lock_irq(&uhci->lock);
configure_hc(uhci);
uhci->is_initialized = 1;
- spin_lock_irq(&uhci->lock);
start_rh(uhci);
spin_unlock_irq(&uhci->lock);
return 0;
}
}
clear_bit(port, &uhci->resuming_ports);
+ usb_hcd_end_port_resume(&uhci_to_hcd(uhci)->self, port);
}
/* Wait for the UHCI controller in HP's iLO2 server management chip.
set_bit(port, &uhci->resuming_ports);
uhci->ports_timeout = jiffies +
msecs_to_jiffies(25);
+ usb_hcd_start_port_resume(
+ &uhci_to_hcd(uhci)->self, port);
/* Make sure we see the port again
* after the resuming period is over. */
faked_port_index + 1);
if (slot_id && xhci->devs[slot_id])
xhci_ring_device(xhci, slot_id);
- if (bus_state->port_remote_wakeup && (1 << faked_port_index)) {
+ if (bus_state->port_remote_wakeup & (1 << faked_port_index)) {
bus_state->port_remote_wakeup &=
~(1 << faked_port_index);
xhci_test_and_clear_bit(xhci, port_array,
(trb_comp_code != COMP_STALL &&
trb_comp_code != COMP_BABBLE))
xhci_urb_free_priv(xhci, urb_priv);
+ else
+ kfree(urb_priv);
usb_hcd_unlink_urb_from_ep(bus_to_hcd(urb->dev->bus), urb);
if ((urb->actual_length != urb->transfer_buffer_length &&
* running_total.
*/
packets_transferred = (running_total + trb_buff_len) /
- usb_endpoint_maxp(&urb->ep->desc);
+ GET_MAX_PACKET(usb_endpoint_maxp(&urb->ep->desc));
if ((total_packet_count - packets_transferred) > 31)
return 31 << 17;
td_len = urb->iso_frame_desc[i].length;
td_remain_len = td_len;
total_packet_count = DIV_ROUND_UP(td_len,
- usb_endpoint_maxp(&urb->ep->desc));
+ GET_MAX_PACKET(
+ usb_endpoint_maxp(&urb->ep->desc)));
/* A zero-length transfer still involves at least one packet. */
if (total_packet_count == 0)
total_packet_count++;
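
usb_endpoint_maxp() returns the raw wMaxPacketSize field; on high-bandwidth endpoints bits 12:11 encode extra transactions per microframe, so dividing by the raw value undercounts packets. The fix masks down to the low 11 bits first (GET_MAX_PACKET in xhci.h is the 0x7ff mask). A standalone sketch of the arithmetic:

    #include <stdio.h>

    #define GET_MAX_PACKET(p)  ((p) & 0x7ff)   /* low 11 bits: bytes/packet */

    int main(void)
    {
            unsigned wMaxPacketSize = (2 << 11) | 1024;  /* 3 x 1024/uframe */
            unsigned td_len = 3072;

            unsigned wrong = td_len / wMaxPacketSize;              /* 0 */
            unsigned right = (td_len + GET_MAX_PACKET(wMaxPacketSize) - 1)
                             / GET_MAX_PACKET(wMaxPacketSize);     /* 3 */

            printf("wrong=%u right=%u\n", wrong, right);
            return 0;
    }
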
td = urb_priv->td[i];
for (j = 0; j < trbs_per_td; j++) {
u32 remainder = 0;
- field = TRB_TBC(burst_count) | TRB_TLBPC(residue);
+ field = 0;
if (first_trb) {
+ field = TRB_TBC(burst_count) |
+ TRB_TLBPC(residue);
/* Queue the isoc TRB */
field |= TRB_TYPE(TRB_ISOC);
/* Assume URB_ISO_ASAP is set */
musb_writel(&tx->tx_complete, 0, ptr);
}
-static void __init cppi_pool_init(struct cppi *cppi, struct cppi_channel *c)
+static void cppi_pool_init(struct cppi *cppi, struct cppi_channel *c)
{
int j;
c->last_processed = NULL;
}
-static int __init cppi_controller_start(struct dma_controller *c)
+static int cppi_controller_start(struct dma_controller *c)
{
struct cppi *controller;
void __iomem *tibase;
{ USB_DEVICE(0x0FCF, 0x1003) }, /* Dynastream ANT development board */
{ USB_DEVICE(0x0FCF, 0x1004) }, /* Dynastream ANT2USB */
{ USB_DEVICE(0x0FCF, 0x1006) }, /* Dynastream ANT development board */
+ { USB_DEVICE(0x0FDE, 0xCA05) }, /* OWL Wireless Electricity Monitor CM-160 */
{ USB_DEVICE(0x10A6, 0xAA26) }, /* Knock-off DCU-11 cable */
{ USB_DEVICE(0x10AB, 0x10C5) }, /* Siemens MC60 Cable */
{ USB_DEVICE(0x10B5, 0xAC70) }, /* Nokia CA-42 USB */
/*
* ELV devices:
*/
+ { USB_DEVICE(FTDI_ELV_VID, FTDI_ELV_WS300_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_USR_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_MSM1_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ELV_KL100_PID) },
{ USB_DEVICE(FTDI_VID, XSENS_CONVERTER_5_PID) },
{ USB_DEVICE(FTDI_VID, XSENS_CONVERTER_6_PID) },
{ USB_DEVICE(FTDI_VID, XSENS_CONVERTER_7_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_OMNI1509) },
{ USB_DEVICE(MOBILITY_VID, MOBILITY_USB_SERIAL_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_ACTIVE_ROBOTS_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_MHAM_KW_PID) },
#define XSENS_CONVERTER_6_PID 0xD38E
#define XSENS_CONVERTER_7_PID 0xD38F
+/*
+ * Zolix (www.zolix.com.cn) product ids
+ */
+#define FTDI_OMNI1509 0xD491 /* Omni1509 embedded USB-serial */
+
/*
* NDI (www.ndigital.com) product ids
*/
/*
* ELV USB devices submitted by Christian Abt of ELV (www.elv.de).
- * All of these devices use FTDI's vendor ID (0x0403).
+ * Almost all of these devices use FTDI's vendor ID (0x0403).
* Further IDs taken from ELV Windows .inf file.
*
* The previously included PID for the UO 100 module was incorrect.
*
* Armin Laeuger originally sent the PID for the UM 100 module.
*/
+#define FTDI_ELV_VID 0x1B1F /* ELV AG */
+#define FTDI_ELV_WS300_PID 0xC006 /* eQ3 WS 300 PC II */
#define FTDI_ELV_USR_PID 0xE000 /* ELV Universal-Sound-Recorder */
#define FTDI_ELV_MSM1_PID 0xE001 /* ELV Mini-Sound-Modul */
#define FTDI_ELV_KL100_PID 0xE002 /* ELV Kfz-Leistungsmesser KL 100 */
#define TELIT_PRODUCT_CC864_DUAL 0x1005
#define TELIT_PRODUCT_CC864_SINGLE 0x1006
#define TELIT_PRODUCT_DE910_DUAL 0x1010
+#define TELIT_PRODUCT_LE920 0x1200
/* ZTE PRODUCTS */
#define ZTE_VENDOR_ID 0x19d2
#define TPLINK_VENDOR_ID 0x2357
#define TPLINK_PRODUCT_MA180 0x0201
+/* Changhong products */
+#define CHANGHONG_VENDOR_ID 0x2077
+#define CHANGHONG_PRODUCT_CH690 0x7001
+
/* some devices interfaces need special handling due to a number of reasons */
enum option_blacklist_reason {
OPTION_BLACKLIST_NONE = 0,
.reserved = BIT(3) | BIT(4),
};
+static const struct option_blacklist_info telit_le920_blacklist = {
+ .sendsetup = BIT(0),
+ .reserved = BIT(1) | BIT(5),
+};
+
static const struct usb_device_id option_ids[] = {
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_COLT) },
{ USB_DEVICE(OPTION_VENDOR_ID, OPTION_PRODUCT_RICOLA) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_DUAL) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) },
{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920),
+ .driver_info = (kernel_ulong_t)&telit_le920_blacklist },
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */
{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x0002, 0xff, 0xff, 0xff),
.driver_info = (kernel_ulong_t)&net_intf1_blacklist },
{ USB_DEVICE(PETATEL_VENDOR_ID, PETATEL_PRODUCT_NP10T) },
{ USB_DEVICE(TPLINK_VENDOR_ID, TPLINK_PRODUCT_MA180),
.driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ { USB_DEVICE(CHANGHONG_VENDOR_ID, CHANGHONG_PRODUCT_CH690) },
{ } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, option_ids);
{DEVICE_G1K(0x05c6, 0x9221)}, /* Generic Gobi QDL device */
{DEVICE_G1K(0x05c6, 0x9231)}, /* Generic Gobi QDL device */
{DEVICE_G1K(0x1f45, 0x0001)}, /* Unknown Gobi QDL device */
+ {DEVICE_G1K(0x1bc7, 0x900e)}, /* Telit Gobi QDL device */
/* Gobi 2000 devices */
{USB_DEVICE(0x1410, 0xa010)}, /* Novatel Gobi 2000 QDL device */
return 0;
}
-/* This places the HUAWEI E220 devices in multi-port mode */
-int usb_stor_huawei_e220_init(struct us_data *us)
+/* This places the HUAWEI USB dongles in multi-port mode */
+static int usb_stor_huawei_feature_init(struct us_data *us)
{
int result;
US_DEBUGP("Huawei mode set result is %d\n", result);
return 0;
}
+
+/*
+ * It sends a SCSI switch command called 'rewind' to the Huawei dongle.
+ * When the dongle receives this command for the first time, it reboots
+ * immediately. After rebooting, it ignores this command. So it is
+ * unnecessary to read its response.
+ */
+static int usb_stor_huawei_scsi_init(struct us_data *us)
+{
+ int result = 0;
+ int act_len = 0;
+ struct bulk_cb_wrap *bcbw = (struct bulk_cb_wrap *) us->iobuf;
+ char rewind_cmd[] = {0x11, 0x06, 0x20, 0x00, 0x00, 0x01, 0x01, 0x00,
+ 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
+
+ bcbw->Signature = cpu_to_le32(US_BULK_CB_SIGN);
+ bcbw->Tag = 0;
+ bcbw->DataTransferLength = 0;
+ bcbw->Flags = bcbw->Lun = 0;
+ bcbw->Length = sizeof(rewind_cmd);
+ memset(bcbw->CDB, 0, sizeof(bcbw->CDB));
+ memcpy(bcbw->CDB, rewind_cmd, sizeof(rewind_cmd));
+
+ result = usb_stor_bulk_transfer_buf(us, us->send_bulk_pipe, bcbw,
+ US_BULK_CB_WRAP_LEN, &act_len);
+ US_DEBUGP("transfer actual length=%d, result=%d\n", act_len, result);
+ return result;
+}
+
+/*
+ * It checks whether the device is a supported Huawei USB dongle.
+ * Huawei assigns product IDs from the ranges below to all of its
+ * mobile broadband dongles, including future ones. So if the
+ * product ID is not in this list, the device is not a Huawei
+ * mobile broadband dongle.
+ */
+static int usb_stor_huawei_dongles_pid(struct us_data *us)
+{
+ struct usb_interface_descriptor *idesc;
+ int idProduct;
+
+ idesc = &us->pusb_intf->cur_altsetting->desc;
+ idProduct = us->pusb_dev->descriptor.idProduct;
+ /* If the first interface is the CDROM, the dongle is in
+ * single-port mode and a switch command must be sent. */
+ if (idesc && idesc->bInterfaceNumber == 0) {
+ if ((idProduct == 0x1001)
+ || (idProduct == 0x1003)
+ || (idProduct == 0x1004)
+ || (idProduct >= 0x1401 && idProduct <= 0x1500)
+ || (idProduct >= 0x1505 && idProduct <= 0x1600)
+ || (idProduct >= 0x1c02 && idProduct <= 0x2202)) {
+ return 1;
+ }
+ }
+ return 0;
+}
+
+int usb_stor_huawei_init(struct us_data *us)
+{
+ int result = 0;
+
+ if (usb_stor_huawei_dongles_pid(us)) {
+ if (us->pusb_dev->descriptor.idProduct >= 0x1446)
+ result = usb_stor_huawei_scsi_init(us);
+ else
+ result = usb_stor_huawei_feature_init(us);
+ }
+ return result;
+}
* flash reader */
int usb_stor_ucr61s2b_init(struct us_data *us);
-/* This places the HUAWEI E220 devices in multi-port mode */
-int usb_stor_huawei_e220_init(struct us_data *us);
+/* This places the HUAWEI USB dongles in multi-port mode */
+int usb_stor_huawei_init(struct us_data *us);
/* Reported by fangxiaozhi <huananhu@huawei.com>
* This brings the HUAWEI data card devices into multi-port mode
*/
-UNUSUAL_DEV( 0x12d1, 0x1001, 0x0000, 0x0000,
+UNUSUAL_VENDOR_INTF(0x12d1, 0x08, 0x06, 0x50,
"HUAWEI MOBILE",
"Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1003, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1004, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1401, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1402, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1403, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1404, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1405, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1406, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1407, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1408, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1409, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x140A, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x140B, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x140C, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x140D, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x140E, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x140F, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1410, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1411, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1412, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1413, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1414, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1415, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1416, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1417, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1418, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1419, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x141A, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x141B, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x141C, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x141D, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x141E, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x141F, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1420, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1421, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1422, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1423, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1424, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1425, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1426, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1427, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1428, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1429, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x142A, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x142B, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x142C, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x142D, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x142E, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x142F, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1430, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1431, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1432, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1433, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1434, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1435, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1436, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1437, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1438, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x1439, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x143A, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x143B, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x143C, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x143D, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x143E, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
- 0),
-UNUSUAL_DEV( 0x12d1, 0x143F, 0x0000, 0x0000,
- "HUAWEI MOBILE",
- "Mass Storage",
- USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_e220_init,
+ USB_SC_DEVICE, USB_PR_DEVICE, usb_stor_huawei_init,
0),
/* Reported by Vilius Bilinkevicius <vilisas AT xxx DOT lt> */
.useTransport = use_transport, \
}
+#define UNUSUAL_VENDOR_INTF(idVendor, cl, sc, pr, \
+ vendor_name, product_name, use_protocol, use_transport, \
+ init_function, Flags) \
+{ \
+ .vendorName = vendor_name, \
+ .productName = product_name, \
+ .useProtocol = use_protocol, \
+ .useTransport = use_transport, \
+ .initFunction = init_function, \
+}
+
static struct us_unusual_dev us_unusual_dev_list[] = {
# include "unusual_devs.h"
{ } /* Terminating entry */
#undef UNUSUAL_DEV
#undef COMPLIANT_DEV
#undef USUAL_DEV
+#undef UNUSUAL_VENDOR_INTF
#ifdef CONFIG_LOCKDEP
#define USUAL_DEV(useProto, useTrans) \
{ USB_INTERFACE_INFO(USB_CLASS_MASS_STORAGE, useProto, useTrans) }
+/* Match a device by its vendor ID and interface descriptors */
+#define UNUSUAL_VENDOR_INTF(id_vendor, cl, sc, pr, \
+ vendorName, productName, useProtocol, useTransport, \
+ initFunction, flags) \
+{ \
+ .match_flags = USB_DEVICE_ID_MATCH_INT_INFO \
+ | USB_DEVICE_ID_MATCH_VENDOR, \
+ .idVendor = (id_vendor), \
+ .bInterfaceClass = (cl), \
+ .bInterfaceSubClass = (sc), \
+ .bInterfaceProtocol = (pr), \
+ .driver_info = (flags) \
+}
+
struct usb_device_id usb_storage_usb_ids[] = {
# include "unusual_devs.h"
{ } /* Terminating entry */
#undef UNUSUAL_DEV
#undef COMPLIANT_DEV
#undef USUAL_DEV
+#undef UNUSUAL_VENDOR_INTF
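
unusual_devs.h is included twice with different expansions of the same entry macros: once to build the driver's us_unusual_dev_list and once to build the usb_device_id match table, so each device is described in exactly one place and UNUSUAL_VENDOR_INTF must be defined (and #undef'd) in both passes. A minimal standalone sketch of this X-macro pattern (a macro stands in for the included file):

    #include <stdio.h>

    /* In the kernel this list lives in a header included twice. */
    #define DEVICE_LIST \
            ENTRY(0x12d1, "HUAWEI MOBILE") \
            ENTRY(0x0fcf, "Dynastream ANT")

    /* Expansion 1: human-readable names. */
    #define ENTRY(vid, name) name,
    static const char *dev_names[] = { DEVICE_LIST };
    #undef ENTRY

    /* Expansion 2: match table of vendor IDs. */
    #define ENTRY(vid, name) vid,
    static const unsigned dev_vids[] = { DEVICE_LIST };
    #undef ENTRY

    int main(void)
    {
            for (unsigned i = 0; i < sizeof(dev_vids) / sizeof(dev_vids[0]); i++)
                    printf("%04x %s\n", dev_vids[i], dev_names[i]);
            return 0;
    }
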
/*
* The table of devices to ignore
filled = 1;
} else {
/* Drop writes, fill reads with FF */
+ filled = min((size_t)(x_end - pos), count);
if (!iswrite) {
char val = 0xFF;
size_t i;
- for (i = 0; i < x_end - pos; i++) {
+ for (i = 0; i < filled; i++) {
if (put_user(val, buf + i))
goto out;
}
}
- filled = x_end - pos;
}
count -= filled;
}
/* Caller must have TX VQ lock */
-static void tx_poll_start(struct vhost_net *net, struct socket *sock)
+static int tx_poll_start(struct vhost_net *net, struct socket *sock)
{
+ int ret;
+
if (unlikely(net->tx_poll_state != VHOST_NET_POLL_STOPPED))
- return;
- vhost_poll_start(net->poll + VHOST_NET_VQ_TX, sock->file);
- net->tx_poll_state = VHOST_NET_POLL_STARTED;
+ return 0;
+ ret = vhost_poll_start(net->poll + VHOST_NET_VQ_TX, sock->file);
+ if (!ret)
+ net->tx_poll_state = VHOST_NET_POLL_STARTED;
+ return ret;
}
/* In case of DMA done not in order in lower device driver for some reason.
vhost_poll_stop(n->poll + VHOST_NET_VQ_RX);
}
-static void vhost_net_enable_vq(struct vhost_net *n,
+static int vhost_net_enable_vq(struct vhost_net *n,
struct vhost_virtqueue *vq)
{
struct socket *sock;
+ int ret;
sock = rcu_dereference_protected(vq->private_data,
lockdep_is_held(&vq->mutex));
if (!sock)
- return;
+ return 0;
if (vq == n->vqs + VHOST_NET_VQ_TX) {
n->tx_poll_state = VHOST_NET_POLL_STOPPED;
- tx_poll_start(n, sock);
+ ret = tx_poll_start(n, sock);
} else
- vhost_poll_start(n->poll + VHOST_NET_VQ_RX, sock->file);
+ ret = vhost_poll_start(n->poll + VHOST_NET_VQ_RX, sock->file);
+
+ return ret;
}
static struct socket *vhost_net_stop_vq(struct vhost_net *n,
r = PTR_ERR(ubufs);
goto err_ubufs;
}
- oldubufs = vq->ubufs;
- vq->ubufs = ubufs;
+
vhost_net_disable_vq(n, vq);
rcu_assign_pointer(vq->private_data, sock);
- vhost_net_enable_vq(n, vq);
-
r = vhost_init_used(vq);
if (r)
- goto err_vq;
+ goto err_used;
+ r = vhost_net_enable_vq(n, vq);
+ if (r)
+ goto err_used;
+
+ oldubufs = vq->ubufs;
+ vq->ubufs = ubufs;
n->tx_packets = 0;
n->tx_zcopy_err = 0;
mutex_unlock(&n->dev.mutex);
return 0;
+err_used:
+ rcu_assign_pointer(vq->private_data, oldsock);
+ vhost_net_enable_vq(n, vq);
+ if (ubufs)
+ vhost_ubuf_put_and_wait(ubufs);
err_ubufs:
fput(sock->file);
err_vq:
/* Must use ioctl VHOST_SCSI_SET_ENDPOINT */
tv_tpg = vs->vs_tpg;
- if (unlikely(!tv_tpg)) {
- pr_err("%s endpoint not set\n", __func__);
+ if (unlikely(!tv_tpg))
return;
- }
mutex_lock(&vq->mutex);
vhost_disable_notify(&vs->dev, vq);
init_poll_funcptr(&poll->table, vhost_poll_func);
poll->mask = mask;
poll->dev = dev;
+ poll->wqh = NULL;
vhost_work_init(&poll->work, fn);
}
/* Start polling a file. We add ourselves to file's wait queue. The caller must
* keep a reference to a file until after vhost_poll_stop is called. */
-void vhost_poll_start(struct vhost_poll *poll, struct file *file)
+int vhost_poll_start(struct vhost_poll *poll, struct file *file)
{
unsigned long mask;
+ int ret = 0;
mask = file->f_op->poll(file, &poll->table);
if (mask)
vhost_poll_wakeup(&poll->wait, 0, 0, (void *)mask);
+ if (mask & POLLERR) {
+ if (poll->wqh)
+ remove_wait_queue(poll->wqh, &poll->wait);
+ ret = -EINVAL;
+ }
+
+ return ret;
}
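
vhost_poll_start() now reports failure instead of silently arming a poll on a dead file: if the initial poll returns POLLERR, the wait-queue entry is removed again (if it was ever added) and -EINVAL propagates to the caller, which can roll back the backend swap. A standalone sketch of the check-then-unwind registration shape (fake_poll and poll_reg are stand-ins):

    #include <stdio.h>

    #define POLLERR 0x008

    struct poll_reg {
            int registered;   /* stands in for poll->wqh being set */
    };

    /* fake "file" whose initial poll mask we control */
    static int fake_poll(int mask, struct poll_reg *reg)
    {
            reg->registered = 1;   /* poll callback queued us */
            return mask;
    }

    static int poll_start(struct poll_reg *reg, int initial_mask)
    {
            int mask = fake_poll(initial_mask, reg);

            if (mask & POLLERR) {          /* dead backend: undo and fail */
                    if (reg->registered)
                            reg->registered = 0;
                    return -1;             /* caller rolls back */
            }
            return 0;
    }

    int main(void)
    {
            struct poll_reg reg = { 0 };

            printf("start=%d registered=%d\n",
                   poll_start(&reg, POLLERR), reg.registered);
            return 0;
    }
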
/* Stop polling a file. After this function returns, it becomes safe to drop the
* file reference. You must also flush afterwards. */
void vhost_poll_stop(struct vhost_poll *poll)
{
- remove_wait_queue(poll->wqh, &poll->wait);
+ if (poll->wqh) {
+ remove_wait_queue(poll->wqh, &poll->wait);
+ poll->wqh = NULL;
+ }
}
static bool vhost_work_seq_done(struct vhost_dev *dev, struct vhost_work *work,
fput(filep);
if (pollstart && vq->handle_kick)
- vhost_poll_start(&vq->poll, vq->kick);
+ r = vhost_poll_start(&vq->poll, vq->kick);
mutex_unlock(&vq->mutex);
void vhost_poll_init(struct vhost_poll *poll, vhost_work_fn_t fn,
unsigned long mask, struct vhost_dev *dev);
-void vhost_poll_start(struct vhost_poll *poll, struct file *file);
+int vhost_poll_start(struct vhost_poll *poll, struct file *file);
void vhost_poll_stop(struct vhost_poll *poll);
void vhost_poll_flush(struct vhost_poll *poll);
void vhost_poll_queue(struct vhost_poll *poll);
struct clk *clk_ahb;
struct clk *clk_per;
enum imxfb_type devtype;
+ bool enabled;
/*
* These are the addresses we mapped
static void imxfb_enable_controller(struct imxfb_info *fbi)
{
+
+ if (fbi->enabled)
+ return;
+
pr_debug("Enabling LCD controller\n");
writel(fbi->screen_dma, fbi->regs + LCDC_SSA);
clk_prepare_enable(fbi->clk_ipg);
clk_prepare_enable(fbi->clk_ahb);
clk_prepare_enable(fbi->clk_per);
+ fbi->enabled = true;
if (fbi->backlight_power)
fbi->backlight_power(1);
static void imxfb_disable_controller(struct imxfb_info *fbi)
{
+ if (!fbi->enabled)
+ return;
+
pr_debug("Disabling LCD controller\n");
if (fbi->backlight_power)
clk_disable_unprepare(fbi->clk_per);
clk_disable_unprepare(fbi->clk_ipg);
clk_disable_unprepare(fbi->clk_ahb);
+ fbi->enabled = false;
writel(0, fbi->regs + LCDC_RMCR);
}
memset(fbi, 0, sizeof(struct imxfb_info));
+ fbi->devtype = pdev->id_entry->driver_data;
+
strlcpy(info->fix.id, IMX_NAME, sizeof(info->fix.id));
info->fix.type = FB_TYPE_PACKED_PIXELS;
return -ENOMEM;
fbi = info->par;
- fbi->devtype = pdev->id_entry->driver_data;
if (!fb_mode)
fb_mode = pdata->mode[0].mode.name;
if (irq == -1) {
irq = xen_allocate_irq_dynamic();
- if (irq == -1)
+ if (irq < 0)
goto out;
irq_set_chip_and_handler_name(irq, &xen_dynamic_chip,
if (irq == -1) {
irq = xen_allocate_irq_dynamic();
- if (irq == -1)
+ if (irq < 0)
goto out;
irq_set_chip_and_handler_name(irq, &xen_percpu_chip,
struct pci_dev *dev, struct xen_pci_op *op)
{
struct xen_pcibk_dev_data *dev_data;
- int otherend = pdev->xdev->otherend_id;
int status;
if (unlikely(verbose_request))
status = pci_enable_msi(dev);
if (status) {
- printk(KERN_ERR "error enable msi for guest %x status %x\n",
- otherend, status);
+ pr_warn_ratelimited(DRV_NAME ": %s: error enabling MSI for guest %u: err %d\n",
+ pci_name(dev), pdev->xdev->otherend_id,
+ status);
op->value = 0;
return XEN_PCI_ERR_op_failed;
}
pci_name(dev), i,
op->msix_entries[i].vector);
}
- } else {
- printk(KERN_WARNING DRV_NAME ": %s: failed to enable MSI-X: err %d!\n",
- pci_name(dev), result);
- }
+ } else
+ pr_warn_ratelimited(DRV_NAME ": %s: error enabling MSI-X for guest %u: err %d!\n",
+ pci_name(dev), pdev->xdev->otherend_id,
+ result);
kfree(entries);
op->value = result;
source "fs/autofs4/Kconfig"
source "fs/fuse/Kconfig"
-config CUSE
- tristate "Character device in Userspace support"
- depends on FUSE_FS
- help
- This FUSE extension allows character devices to be
- implemented in userspace.
-
- If you want to develop or use userspace character device
- based on CUSE, answer Y or M.
-
config GENERIC_ACL
bool
select FS_POSIX_ACL
* We make the other tasks wait for the flush only when we can flush
* all things.
*/
- if (ret && flush == BTRFS_RESERVE_FLUSH_ALL) {
+ if (ret && flush != BTRFS_RESERVE_NO_FLUSH) {
flushing = true;
space_info->flush = 1;
}
unsigned nr_extents = 0;
int extra_reserve = 0;
enum btrfs_reserve_flush_enum flush = BTRFS_RESERVE_FLUSH_ALL;
- int ret;
+ int ret = 0;
bool delalloc_lock = true;
/* If we are a free space inode we need to not flush since we will be in
csum_bytes = BTRFS_I(inode)->csum_bytes;
spin_unlock(&BTRFS_I(inode)->lock);
- if (root->fs_info->quota_enabled) {
+ if (root->fs_info->quota_enabled)
ret = btrfs_qgroup_reserve(root, num_bytes +
nr_extents * root->leafsize);
- if (ret) {
- spin_lock(&BTRFS_I(inode)->lock);
- calc_csum_metadata_size(inode, num_bytes, 0);
- spin_unlock(&BTRFS_I(inode)->lock);
- if (delalloc_lock)
- mutex_unlock(&BTRFS_I(inode)->delalloc_mutex);
- return ret;
- }
- }
- ret = reserve_metadata_bytes(root, block_rsv, to_reserve, flush);
+ /*
+ * ret != 0 here means the qgroup reservation failed, so we go straight
+ * to the shared error handling below.
+ */
+ if (ret == 0)
+ ret = reserve_metadata_bytes(root, block_rsv,
+ to_reserve, flush);
+
if (ret) {
u64 to_free = 0;
unsigned dropped;
int empty_cluster = 2 * 1024 * 1024;
struct btrfs_space_info *space_info;
int loop = 0;
- int index = 0;
+ int index = __get_raid_index(data);
int alloc_type = (data & BTRFS_BLOCK_GROUP_DATA) ?
RESERVE_ALLOC_NO_ACCOUNT : RESERVE_ALLOC;
bool found_uncached_bg = false;
&wc->flags[level]);
if (ret < 0) {
btrfs_tree_unlock_rw(eb, path->locks[level]);
+ path->locks[level] = 0;
return ret;
}
BUG_ON(wc->refs[level] == 0);
if (wc->refs[level] == 1) {
btrfs_tree_unlock_rw(eb, path->locks[level]);
+ path->locks[level] = 0;
return 1;
}
}
if (test_bit(EXTENT_FLAG_COMPRESSED, &prev->flags))
return 0;
+ if (test_bit(EXTENT_FLAG_LOGGING, &prev->flags) ||
+ test_bit(EXTENT_FLAG_LOGGING, &next->flags))
+ return 0;
+
if (extent_map_end(prev) == next->start &&
prev->flags == next->flags &&
prev->bdev == next->bdev &&
if (!em)
goto out;
- list_move(&em->list, &tree->modified_extents);
+ if (!test_bit(EXTENT_FLAG_LOGGING, &em->flags))
+ list_move(&em->list, &tree->modified_extents);
em->generation = gen;
clear_bit(EXTENT_FLAG_PINNED, &em->flags);
em->mod_start = em->start;
}
+void clear_em_logging(struct extent_map_tree *tree, struct extent_map *em)
+{
+ clear_bit(EXTENT_FLAG_LOGGING, &em->flags);
+ if (em->in_tree)
+ try_merge_map(tree, em);
+}
+
/**
* add_extent_mapping - add new extent map to the extent tree
* @tree: tree to insert new map in
int __init extent_map_init(void);
void extent_map_exit(void);
int unpin_extent_cache(struct extent_map_tree *tree, u64 start, u64 len, u64 gen);
+void clear_em_logging(struct extent_map_tree *tree, struct extent_map *em);
struct extent_map *search_extent_mapping(struct extent_map_tree *tree,
u64 start, u64 len);
#endif
if (!contig)
offset = page_offset(bvec->bv_page) + bvec->bv_offset;
- if (!contig && (offset >= ordered->file_offset + ordered->len ||
- offset < ordered->file_offset)) {
+ if (offset >= ordered->file_offset + ordered->len ||
+ offset < ordered->file_offset) {
unsigned long bytes_left;
sums->len = this_sum_bytes;
this_sum_bytes = 0;
struct btrfs_key key;
struct btrfs_ioctl_defrag_range_args range;
int num_defrag;
+ int index;
+ int ret;
/* get the inode */
key.objectid = defrag->root;
btrfs_set_key_type(&key, BTRFS_ROOT_ITEM_KEY);
key.offset = (u64)-1;
+
+ index = srcu_read_lock(&fs_info->subvol_srcu);
+
inode_root = btrfs_read_fs_root_no_name(fs_info, &key);
if (IS_ERR(inode_root)) {
- kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
- return PTR_ERR(inode_root);
+ ret = PTR_ERR(inode_root);
+ goto cleanup;
+ }
+ if (btrfs_root_refs(&inode_root->root_item) == 0) {
+ ret = -ENOENT;
+ goto cleanup;
}
key.objectid = defrag->ino;
key.offset = 0;
inode = btrfs_iget(fs_info->sb, &key, inode_root, NULL);
if (IS_ERR(inode)) {
- kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
- return PTR_ERR(inode);
+ ret = PTR_ERR(inode);
+ goto cleanup;
}
+ srcu_read_unlock(&fs_info->subvol_srcu, index);
/* do a chunk of defrag */
clear_bit(BTRFS_INODE_IN_DEFRAG, &BTRFS_I(inode)->runtime_flags);
iput(inode);
return 0;
+cleanup:
+ srcu_read_unlock(&fs_info->subvol_srcu, index);
+ kmem_cache_free(btrfs_inode_defrag_cachep, defrag);
+ return ret;
}
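The defrag path now brackets the root and inode lookups with srcu_read_lock()/srcu_read_unlock() on fs_info->subvol_srcu, so a concurrent subvolume deletion cannot free the root mid-lookup; note that the shared cleanup label unlocks on every error path. A minimal sketch of the read-side discipline, with the btrfs-specific lookups replaced by a hypothetical do_lookup():

    #include <linux/srcu.h>

    /* `struct my_ctx` and do_lookup() are hypothetical stand-ins;
     * only the lock/unlock discipline is the point. */
    struct my_ctx {
        struct srcu_struct srcu;
    };

    static int lookup_under_srcu(struct my_ctx *ctx)
    {
        int index, ret;

        index = srcu_read_lock(&ctx->srcu);

        ret = do_lookup(ctx);   /* hypothetical; may fail */
        /* further lookups would go here; every failure path must
         * still reach the unlock below with the saved index */

        srcu_read_unlock(&ctx->srcu, index);
        return ret;
    }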
/*
if (err < 0 && num_written > 0)
num_written = err;
}
-out:
+
if (sync)
atomic_dec(&BTRFS_I(inode)->sync_writers);
+out:
sb_end_write(inode->i_sb);
current->backing_dev_info = NULL;
return num_written ? num_written : err;
if (lockend <= lockstart)
lockend = lockstart + root->sectorsize;
+ lockend--;
len = lockend - lockstart + 1;
len = max_t(u64, len, root->sectorsize);
}
}
- *offset = start;
- free_extent_map(em);
- break;
+ if (!test_bit(EXTENT_FLAG_PREALLOC,
+ &em->flags)) {
+ *offset = start;
+ free_extent_map(em);
+ break;
+ }
}
}
{
struct btrfs_free_space_ctl *ctl = block_group->free_space_ctl;
struct btrfs_free_space *info;
- int ret = 0;
+ int ret;
+ bool re_search = false;
spin_lock(&ctl->tree_lock);
again:
+ ret = 0;
if (!bytes)
goto out_lock;
info = tree_search_offset(ctl, offset_to_bitmap(ctl, offset),
1, 0);
if (!info) {
- /* the tree logging code might be calling us before we
- * have fully loaded the free space rbtree for this
- * block group. So it is possible the entry won't
- * be in the rbtree yet at all. The caching code
- * will make sure not to put it in the rbtree if
- * the logging code has pinned it.
+ /*
+			 * If we found part of our free space in a
+			 * bitmap but then couldn't find the rest, this
+			 * may be a problem, so WARN about it.
*/
+ WARN_ON(re_search);
goto out_lock;
}
}
+ re_search = false;
if (!info->bitmap) {
unlink_free_space(ctl, info);
if (offset == info->offset) {
}
ret = remove_from_bitmap(ctl, info, &offset, &bytes);
- if (ret == -EAGAIN)
+ if (ret == -EAGAIN) {
+ re_search = true;
goto again;
+ }
BUG_ON(ret); /* logic error */
out_lock:
spin_unlock(&ctl->tree_lock);
[S_IFLNK >> S_SHIFT] = BTRFS_FT_SYMLINK,
};
-static int btrfs_setsize(struct inode *inode, loff_t newsize);
+static int btrfs_setsize(struct inode *inode, struct iattr *attr);
static int btrfs_truncate(struct inode *inode);
static int btrfs_finish_ordered_io(struct btrfs_ordered_extent *ordered_extent);
static noinline int cow_file_range(struct inode *inode,
continue;
}
nr_truncate++;
+
+ /* 1 for the orphan item deletion. */
+ trans = btrfs_start_transaction(root, 1);
+ if (IS_ERR(trans)) {
+ ret = PTR_ERR(trans);
+ goto out;
+ }
+ ret = btrfs_orphan_add(trans, inode);
+ btrfs_end_transaction(trans, root);
+ if (ret)
+ goto out;
+
ret = btrfs_truncate(inode);
} else {
nr_unlink++;
block_end - cur_offset, 0);
if (IS_ERR(em)) {
err = PTR_ERR(em);
+ em = NULL;
break;
}
last_byte = min(extent_map_end(em), block_end);
return err;
}
-static int btrfs_setsize(struct inode *inode, loff_t newsize)
+static int btrfs_setsize(struct inode *inode, struct iattr *attr)
{
struct btrfs_root *root = BTRFS_I(inode)->root;
struct btrfs_trans_handle *trans;
loff_t oldsize = i_size_read(inode);
+ loff_t newsize = attr->ia_size;
+ int mask = attr->ia_valid;
int ret;
if (newsize == oldsize)
return 0;
+ /*
+ * The regular truncate() case without ATTR_CTIME and ATTR_MTIME is a
+ * special case where we need to update the times despite not having
+	 * these flags set. For all other operations the VFS sets these flags
+	 * explicitly if it wants a timestamp update.
+ */
+ if (newsize != oldsize && (!(mask & (ATTR_CTIME | ATTR_MTIME))))
+ inode->i_ctime = inode->i_mtime = current_fs_time(inode->i_sb);
+
if (newsize > oldsize) {
truncate_pagecache(inode, oldsize, newsize);
ret = btrfs_cont_expand(inode, oldsize, newsize);
set_bit(BTRFS_INODE_ORDERED_DATA_CLOSE,
&BTRFS_I(inode)->runtime_flags);
+ /*
+ * 1 for the orphan item we're going to add
+ * 1 for the orphan item deletion.
+ */
+ trans = btrfs_start_transaction(root, 2);
+ if (IS_ERR(trans))
+ return PTR_ERR(trans);
+
+ /*
+ * We need to do this in case we fail at _any_ point during the
+ * actual truncate. Once we do the truncate_setsize we could
+ * invalidate pages which forces any outstanding ordered io to
+ * be instantly completed which will give us extents that need
+ * to be truncated. If we fail to get an orphan inode down we
+ * could have left over extents that were never meant to live,
+	 * so we need to guarantee from this point on that everything
+ * will be consistent.
+ */
+ ret = btrfs_orphan_add(trans, inode);
+ btrfs_end_transaction(trans, root);
+ if (ret)
+ return ret;
+
/* we don't support swapfiles, so vmtruncate shouldn't fail */
truncate_setsize(inode, newsize);
ret = btrfs_truncate(inode);
+ if (ret && inode->i_nlink)
+ btrfs_orphan_del(NULL, inode);
}
return ret;
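To make the ordering in the comment above concrete, this is the sequence btrfs_setsize() now follows for a shrinking truncate, assembled from the hunks above with error handling trimmed to the essentials (a sketch, not a complete function):

    /* 1 for the orphan item we are going to add, 1 for its later deletion */
    trans = btrfs_start_transaction(root, 2);
    if (IS_ERR(trans))
        return PTR_ERR(trans);

    ret = btrfs_orphan_add(trans, inode);   /* crash-safety net first */
    btrfs_end_transaction(trans, root);
    if (ret)
        return ret;

    truncate_setsize(inode, newsize);       /* only now shrink i_size */
    ret = btrfs_truncate(inode);
    if (ret && inode->i_nlink)
        btrfs_orphan_del(NULL, inode);      /* failed but still linked: undo */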
return err;
if (S_ISREG(inode->i_mode) && (attr->ia_valid & ATTR_SIZE)) {
- err = btrfs_setsize(inode, attr->ia_size);
+ err = btrfs_setsize(inode, attr);
if (err)
return err;
}
return em;
if (em) {
/*
- * if our em maps to a hole, there might
- * actually be delalloc bytes behind it
+ * if our em maps to
+ * - a hole or
+ * - a pre-alloc extent,
+ * there might actually be delalloc bytes behind it.
*/
- if (em->block_start != EXTENT_MAP_HOLE)
+ if (em->block_start != EXTENT_MAP_HOLE &&
+ !test_bit(EXTENT_FLAG_PREALLOC, &em->flags))
return em;
else
hole_em = em;
*/
em->block_start = hole_em->block_start;
em->block_len = hole_len;
+ if (test_bit(EXTENT_FLAG_PREALLOC, &hole_em->flags))
+ set_bit(EXTENT_FLAG_PREALLOC, &em->flags);
} else {
em->start = range_start;
em->len = found;
/*
* 1 for the truncate slack space
- * 1 for the orphan item we're going to add
- * 1 for the orphan item deletion
* 1 for updating the inode.
*/
- trans = btrfs_start_transaction(root, 4);
+ trans = btrfs_start_transaction(root, 2);
if (IS_ERR(trans)) {
err = PTR_ERR(trans);
goto out;
min_size);
BUG_ON(ret);
- ret = btrfs_orphan_add(trans, inode);
- if (ret) {
- btrfs_end_transaction(trans, root);
- goto out;
- }
-
/*
* setattr is responsible for setting the ordered_data_close flag,
* but that is only tested during the last file release. That
ret = btrfs_orphan_del(trans, inode);
if (ret)
err = ret;
- } else if (ret && inode->i_nlink > 0) {
- /*
- * Failed to do the truncate, remove us from the in memory
- * orphan list.
- */
- ret = btrfs_orphan_del(NULL, inode);
}
if (trans) {
*/
int btrfs_start_delalloc_inodes(struct btrfs_root *root, int delay_iput)
{
- struct list_head *head = &root->fs_info->delalloc_inodes;
struct btrfs_inode *binode;
struct inode *inode;
struct btrfs_delalloc_work *work, *next;
struct list_head works;
+ struct list_head splice;
int ret = 0;
if (root->fs_info->sb->s_flags & MS_RDONLY)
return -EROFS;
INIT_LIST_HEAD(&works);
-
+ INIT_LIST_HEAD(&splice);
+again:
spin_lock(&root->fs_info->delalloc_lock);
- while (!list_empty(head)) {
- binode = list_entry(head->next, struct btrfs_inode,
+ list_splice_init(&root->fs_info->delalloc_inodes, &splice);
+ while (!list_empty(&splice)) {
+ binode = list_entry(splice.next, struct btrfs_inode,
delalloc_inodes);
+
+ list_del_init(&binode->delalloc_inodes);
+
inode = igrab(&binode->vfs_inode);
if (!inode)
- list_del_init(&binode->delalloc_inodes);
+ continue;
+
+ list_add_tail(&binode->delalloc_inodes,
+ &root->fs_info->delalloc_inodes);
spin_unlock(&root->fs_info->delalloc_lock);
- if (inode) {
- work = btrfs_alloc_delalloc_work(inode, 0, delay_iput);
- if (!work) {
- ret = -ENOMEM;
- goto out;
- }
- list_add_tail(&work->list, &works);
- btrfs_queue_worker(&root->fs_info->flush_workers,
- &work->work);
+
+ work = btrfs_alloc_delalloc_work(inode, 0, delay_iput);
+ if (unlikely(!work)) {
+ ret = -ENOMEM;
+ goto out;
}
+ list_add_tail(&work->list, &works);
+ btrfs_queue_worker(&root->fs_info->flush_workers,
+ &work->work);
+
cond_resched();
spin_lock(&root->fs_info->delalloc_lock);
}
spin_unlock(&root->fs_info->delalloc_lock);
+ list_for_each_entry_safe(work, next, &works, list) {
+ list_del_init(&work->list);
+ btrfs_wait_and_free_delalloc_work(work);
+ }
+
+ spin_lock(&root->fs_info->delalloc_lock);
+ if (!list_empty(&root->fs_info->delalloc_inodes)) {
+ spin_unlock(&root->fs_info->delalloc_lock);
+ goto again;
+ }
+ spin_unlock(&root->fs_info->delalloc_lock);
+
/* the filemap_flush will queue IO into the worker threads, but
* we have to make sure the IO is actually started and that
* ordered extents get created before we return
atomic_read(&root->fs_info->async_delalloc_pages) == 0));
}
atomic_dec(&root->fs_info->async_submit_draining);
+ return 0;
out:
list_for_each_entry_safe(work, next, &works, list) {
list_del_init(&work->list);
btrfs_wait_and_free_delalloc_work(work);
}
+
+ if (!list_empty_careful(&splice)) {
+ spin_lock(&root->fs_info->delalloc_lock);
+ list_splice_tail(&splice, &root->fs_info->delalloc_inodes);
+ spin_unlock(&root->fs_info->delalloc_lock);
+ }
return ret;
}
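btrfs_start_delalloc_inodes() now splices the shared delalloc list onto a private head under the lock, works from the private copy, and re-splices leftovers back on error, retrying while new inodes keep arriving. The splice-and-drain idiom in isolation, assuming a hypothetical item type:

    #include <linux/list.h>
    #include <linux/spinlock.h>

    /* `struct item` and process() are hypothetical; only the idiom matters. */
    struct item {
        struct list_head list;
    };

    static void drain_items(struct list_head *shared, spinlock_t *lock)
    {
        LIST_HEAD(splice);
        struct item *it, *tmp;

        spin_lock(lock);
        list_splice_init(shared, &splice);   /* shared head is now empty */
        spin_unlock(lock);

        /* walk the private copy without holding the lock */
        list_for_each_entry_safe(it, tmp, &splice, list) {
            list_del_init(&it->list);
            process(it);                     /* hypothetical per-item work */
        }
    }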
BUG_ON(ret);
- d_instantiate(dentry, btrfs_lookup_dentry(dir, dentry));
fail:
if (async_transid) {
*async_transid = trans->transid;
}
if (err && !ret)
ret = err;
+
+ if (!ret)
+ d_instantiate(dentry, btrfs_lookup_dentry(dir, dentry));
+
return ret;
}
if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running,
1)) {
pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");
- return -EINPROGRESS;
+ mnt_drop_write_file(file);
+ return -EINVAL;
}
mutex_lock(&root->fs_info->volume_mutex);
printk(KERN_INFO "btrfs: resizing devid %llu\n",
(unsigned long long)devid);
}
+
device = btrfs_find_device(root->fs_info, devid, NULL, NULL);
if (!device) {
printk(KERN_INFO "btrfs: resizer unable to find device %llu\n",
ret = -EINVAL;
goto out_free;
}
- if (device->fs_devices && device->fs_devices->seeding) {
+
+ if (!device->writeable) {
printk(KERN_INFO "btrfs: resizer unable to apply on "
- "seeding device %llu\n",
+ "readonly device %llu\n",
(unsigned long long)devid);
ret = -EINVAL;
goto out_free;
kfree(vol_args);
out:
mutex_unlock(&root->fs_info->volume_mutex);
- mnt_drop_write_file(file);
atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0);
+ mnt_drop_write_file(file);
return ret;
}
err = inode_permission(inode, MAY_WRITE | MAY_EXEC);
if (err)
goto out_dput;
-
- /* check if subvolume may be deleted by a non-root user */
- err = btrfs_may_delete(dir, dentry, 1);
- if (err)
- goto out_dput;
}
+ /* check if subvolume may be deleted by a user */
+ err = btrfs_may_delete(dir, dentry, 1);
+ if (err)
+ goto out_dput;
+
if (btrfs_ino(inode) != BTRFS_FIRST_FREE_OBJECTID) {
err = -EINVAL;
goto out_dput;
struct btrfs_ioctl_defrag_range_args *range;
int ret;
- if (btrfs_root_readonly(root))
- return -EROFS;
+ ret = mnt_want_write_file(file);
+ if (ret)
+ return ret;
if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running,
1)) {
pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");
- return -EINPROGRESS;
+ mnt_drop_write_file(file);
+ return -EINVAL;
}
- ret = mnt_want_write_file(file);
- if (ret) {
- atomic_set(&root->fs_info->mutually_exclusive_operation_running,
- 0);
- return ret;
+
+ if (btrfs_root_readonly(root)) {
+ ret = -EROFS;
+ goto out;
}
switch (inode->i_mode & S_IFMT) {
ret = -EINVAL;
}
out:
- mnt_drop_write_file(file);
atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0);
+ mnt_drop_write_file(file);
return ret;
}
if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running,
1)) {
pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");
- return -EINPROGRESS;
+ return -EINVAL;
}
mutex_lock(&root->fs_info->volume_mutex);
1)) {
pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");
mnt_drop_write_file(file);
- return -EINPROGRESS;
+ return -EINVAL;
}
mutex_lock(&root->fs_info->volume_mutex);
kfree(vol_args);
out:
mutex_unlock(&root->fs_info->volume_mutex);
- mnt_drop_write_file(file);
atomic_set(&root->fs_info->mutually_exclusive_operation_running, 0);
+ mnt_drop_write_file(file);
return ret;
}
struct btrfs_fs_info *fs_info = root->fs_info;
struct btrfs_ioctl_balance_args *bargs;
struct btrfs_balance_control *bctl;
+ bool need_unlock; /* for mut. excl. ops lock */
int ret;
- int need_to_clear_lock = 0;
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
if (ret)
return ret;
- mutex_lock(&fs_info->volume_mutex);
+again:
+ if (!atomic_xchg(&fs_info->mutually_exclusive_operation_running, 1)) {
+ mutex_lock(&fs_info->volume_mutex);
+ mutex_lock(&fs_info->balance_mutex);
+ need_unlock = true;
+ goto locked;
+ }
+
+ /*
+	 * mut. excl. ops lock is locked. Three possibilities:
+ * (1) some other op is running
+ * (2) balance is running
+ * (3) balance is paused -- special case (think resume)
+ */
mutex_lock(&fs_info->balance_mutex);
+ if (fs_info->balance_ctl) {
+ /* this is either (2) or (3) */
+ if (!atomic_read(&fs_info->balance_running)) {
+ mutex_unlock(&fs_info->balance_mutex);
+ if (!mutex_trylock(&fs_info->volume_mutex))
+ goto again;
+ mutex_lock(&fs_info->balance_mutex);
+
+ if (fs_info->balance_ctl &&
+ !atomic_read(&fs_info->balance_running)) {
+ /* this is (3) */
+ need_unlock = false;
+ goto locked;
+ }
+
+ mutex_unlock(&fs_info->balance_mutex);
+ mutex_unlock(&fs_info->volume_mutex);
+ goto again;
+ } else {
+ /* this is (2) */
+ mutex_unlock(&fs_info->balance_mutex);
+ ret = -EINPROGRESS;
+ goto out;
+ }
+ } else {
+ /* this is (1) */
+ mutex_unlock(&fs_info->balance_mutex);
+ pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");
+ ret = -EINVAL;
+ goto out;
+ }
+
+locked:
+ BUG_ON(!atomic_read(&fs_info->mutually_exclusive_operation_running));
if (arg) {
bargs = memdup_user(arg, sizeof(*bargs));
if (IS_ERR(bargs)) {
ret = PTR_ERR(bargs);
- goto out;
+ goto out_unlock;
}
if (bargs->flags & BTRFS_BALANCE_RESUME) {
bargs = NULL;
}
- if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running,
- 1)) {
- pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n");
+ if (fs_info->balance_ctl) {
ret = -EINPROGRESS;
goto out_bargs;
}
- need_to_clear_lock = 1;
bctl = kzalloc(sizeof(*bctl), GFP_NOFS);
if (!bctl) {
}
do_balance:
- ret = btrfs_balance(bctl, bargs);
/*
- * bctl is freed in __cancel_balance or in free_fs_info if
- * restriper was paused all the way until unmount
+ * Ownership of bctl and mutually_exclusive_operation_running
+	 * goes to btrfs_balance. bctl is freed in __cancel_balance,
+ * or, if restriper was paused all the way until unmount, in
+ * free_fs_info. mutually_exclusive_operation_running is
+ * cleared in __cancel_balance.
*/
+ need_unlock = false;
+
+ ret = btrfs_balance(bctl, bargs);
+
if (arg) {
if (copy_to_user(arg, bargs, sizeof(*bargs)))
ret = -EFAULT;
out_bargs:
kfree(bargs);
-out:
- if (need_to_clear_lock)
- atomic_set(&root->fs_info->mutually_exclusive_operation_running,
- 0);
+out_unlock:
mutex_unlock(&fs_info->balance_mutex);
mutex_unlock(&fs_info->volume_mutex);
+ if (need_unlock)
+ atomic_set(&fs_info->mutually_exclusive_operation_running, 0);
+out:
mnt_drop_write_file(file);
return ret;
}
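The rewritten balance ioctl treats mutually_exclusive_operation_running as a try-lock: atomic_xchg() either claims it or reports that another operation already holds it, and the paused-balance case is untangled with mutex_trylock() retries to avoid inverting the volume/balance mutex order. The try-lock itself reduces to:

    #include <linux/atomic.h>

    /* Sketch: atomic_xchg() returns the previous value, so 0 means we
     * flipped the flag from idle to running and now own the exclusion;
     * 1 means another operation already holds it. The owner releases
     * it with atomic_set(running, 0). */
    static bool try_start_exclusive_op(atomic_t *running)
    {
        return atomic_xchg(running, 1) == 0;
    }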
goto drop_write;
}
+ if (!sa->qgroupid) {
+ ret = -EINVAL;
+ goto out;
+ }
+
trans = btrfs_join_transaction(root);
if (IS_ERR(trans)) {
ret = PTR_ERR(trans);
* if the disk i_size is already at the inode->i_size, or
* this ordered extent is inside the disk i_size, we're done
*/
- if (disk_i_size == i_size || offset <= disk_i_size) {
+ if (disk_i_size == i_size)
+ goto out;
+
+ /*
+ * We still need to update disk_i_size if outstanding_isize is greater
+ * than disk_i_size.
+ */
+ if (offset <= disk_i_size &&
+ (!ordered || ordered->outstanding_isize <= disk_i_size))
goto out;
- }
/*
* walk backward from this ordered extent to disk_i_size.
break;
if (test->file_offset >= i_size)
break;
- if (test->file_offset >= disk_i_size) {
+ if (entry_end(test) > disk_i_size) {
/*
* we don't update disk_i_size now, so record this
* undealt i_size. Or we will not know the real
ret = add_relation_rb(fs_info, found_key.objectid,
found_key.offset);
+ if (ret == -ENOENT) {
+ printk(KERN_WARNING
+ "btrfs: orphan qgroup relation 0x%llx->0x%llx\n",
+ (unsigned long long)found_key.objectid,
+ (unsigned long long)found_key.offset);
+ ret = 0; /* ignore the error */
+ }
if (ret)
goto out;
next2:
struct btrfs_fs_info *fs_info, u64 qgroupid)
{
struct btrfs_root *quota_root;
+ struct btrfs_qgroup *qgroup;
int ret = 0;
quota_root = fs_info->quota_root;
if (!quota_root)
return -EINVAL;
+ /* check if there are no relations to this qgroup */
+ spin_lock(&fs_info->qgroup_lock);
+ qgroup = find_qgroup_rb(fs_info, qgroupid);
+ if (qgroup) {
+ if (!list_empty(&qgroup->groups) || !list_empty(&qgroup->members)) {
+ spin_unlock(&fs_info->qgroup_lock);
+ return -EBUSY;
+ }
+ }
+ spin_unlock(&fs_info->qgroup_lock);
+
ret = del_qgroup_item(trans, quota_root, qgroupid);
spin_lock(&fs_info->qgroup_lock);
del_qgroup_rb(quota_root->fs_info, qgroupid);
-
spin_unlock(&fs_info->qgroup_lock);
return ret;
int corrected = 0;
struct btrfs_key key;
struct inode *inode = NULL;
+ struct btrfs_fs_info *fs_info;
u64 end = offset + PAGE_SIZE - 1;
struct btrfs_root *local_root;
+ int srcu_index;
key.objectid = root;
key.type = BTRFS_ROOT_ITEM_KEY;
key.offset = (u64)-1;
- local_root = btrfs_read_fs_root_no_name(fixup->root->fs_info, &key);
- if (IS_ERR(local_root))
+
+ fs_info = fixup->root->fs_info;
+ srcu_index = srcu_read_lock(&fs_info->subvol_srcu);
+
+ local_root = btrfs_read_fs_root_no_name(fs_info, &key);
+ if (IS_ERR(local_root)) {
+ srcu_read_unlock(&fs_info->subvol_srcu, srcu_index);
return PTR_ERR(local_root);
+ }
key.type = BTRFS_INODE_ITEM_KEY;
key.objectid = inum;
key.offset = 0;
- inode = btrfs_iget(fixup->root->fs_info->sb, &key, local_root, NULL);
+ inode = btrfs_iget(fs_info->sb, &key, local_root, NULL);
+ srcu_read_unlock(&fs_info->subvol_srcu, srcu_index);
if (IS_ERR(inode))
return PTR_ERR(inode);
}
if (PageUptodate(page)) {
- struct btrfs_fs_info *fs_info;
if (PageDirty(page)) {
/*
* we need to write the data to the defect sector. the
u64 physical_for_dev_replace;
u64 len;
struct btrfs_fs_info *fs_info = nocow_ctx->sctx->dev_root->fs_info;
+ int srcu_index;
key.objectid = root;
key.type = BTRFS_ROOT_ITEM_KEY;
key.offset = (u64)-1;
+
+ srcu_index = srcu_read_lock(&fs_info->subvol_srcu);
+
local_root = btrfs_read_fs_root_no_name(fs_info, &key);
- if (IS_ERR(local_root))
+ if (IS_ERR(local_root)) {
+ srcu_read_unlock(&fs_info->subvol_srcu, srcu_index);
return PTR_ERR(local_root);
+ }
key.type = BTRFS_INODE_ITEM_KEY;
key.objectid = inum;
key.offset = 0;
inode = btrfs_iget(fs_info->sb, &key, local_root, NULL);
+ srcu_read_unlock(&fs_info->subvol_srcu, srcu_index);
if (IS_ERR(inode))
return PTR_ERR(inode);
(unsigned long)nce->ino);
if (!nce_head) {
nce_head = kmalloc(sizeof(*nce_head), GFP_NOFS);
- if (!nce_head)
+ if (!nce_head) {
+ kfree(nce);
return -ENOMEM;
+ }
INIT_LIST_HEAD(nce_head);
ret = radix_tree_insert(&sctx->name_cache, nce->ino, nce_head);
function, line, errstr);
return;
}
- trans->transaction->aborted = errno;
+ ACCESS_ONCE(trans->transaction->aborted) = errno;
__btrfs_std_error(root->fs_info, function, line, errno, NULL);
}
/*
&root->fs_info->trans_block_rsv,
num_bytes, flush);
if (ret)
- return ERR_PTR(ret);
+ goto reserve_fail;
}
again:
h = kmem_cache_alloc(btrfs_trans_handle_cachep, GFP_NOFS);
- if (!h)
- return ERR_PTR(-ENOMEM);
+ if (!h) {
+ ret = -ENOMEM;
+ goto alloc_fail;
+ }
/*
* If we are JOIN_NOLOCK we're already committing a transaction and
if (ret < 0) {
/* We must get the transaction if we are JOIN_NOLOCK. */
BUG_ON(type == TRANS_JOIN_NOLOCK);
-
- if (type < TRANS_JOIN_NOLOCK)
- sb_end_intwrite(root->fs_info->sb);
- kmem_cache_free(btrfs_trans_handle_cachep, h);
- return ERR_PTR(ret);
+ goto join_fail;
}
cur_trans = root->fs_info->running_transaction;
if (!current->journal_info && type != TRANS_USERSPACE)
current->journal_info = h;
return h;
+
+join_fail:
+ if (type < TRANS_JOIN_NOLOCK)
+ sb_end_intwrite(root->fs_info->sb);
+ kmem_cache_free(btrfs_trans_handle_cachep, h);
+alloc_fail:
+ if (num_bytes)
+ btrfs_block_rsv_release(root, &root->fs_info->trans_block_rsv,
+ num_bytes);
+reserve_fail:
+ if (qgroup_reserved)
+ btrfs_qgroup_free(root, qgroup_reserved);
+ return ERR_PTR(ret);
}
struct btrfs_trans_handle *btrfs_start_transaction(struct btrfs_root *root,
goto cleanup_transaction;
}
- if (cur_trans->aborted) {
+ /* Stop the commit early if ->aborted is set */
+ if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
ret = cur_trans->aborted;
goto cleanup_transaction;
}
wait_event(cur_trans->writer_wait,
atomic_read(&cur_trans->num_writers) == 1);
+ /* ->aborted might be set after the previous check, so check it */
+ if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
+ ret = cur_trans->aborted;
+ goto cleanup_transaction;
+ }
/*
* the reloc mutex makes sure that we stop
* the balancing code from coming in and moving
goto cleanup_transaction;
}
+ /*
+ * The tasks which save the space cache and inode cache may also
+	 * update ->aborted, so check it again.
+ */
+ if (unlikely(ACCESS_ONCE(cur_trans->aborted))) {
+ ret = cur_trans->aborted;
+ mutex_unlock(&root->fs_info->tree_log_mutex);
+ mutex_unlock(&root->fs_info->reloc_mutex);
+ goto cleanup_transaction;
+ }
+
btrfs_prepare_extent_commit(trans, root);
cur_trans = root->fs_info->running_transaction;
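The commit path above re-reads ->aborted through ACCESS_ONCE() after each point where another task may have aborted the transaction (after waiting out the other writers, and after the space/inode cache writeout). The macro forces a fresh load from memory; the classic illustration of why that matters, not taken from this patch, is the polling loop:

    /* `done` is a hypothetical flag set by another CPU. Without the
     * forced re-load the compiler may hoist the read out of the loop
     * and spin on a stale register value; ACCESS_ONCE() (spelled
     * READ_ONCE() on later kernels) makes every iteration fetch it
     * from memory. */
    while (!ACCESS_ONCE(done))
        cpu_relax();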
if (skip_csum)
return 0;
+ if (em->compress_type) {
+ csum_offset = 0;
+ csum_len = block_len;
+ }
+
/* block start is already adjusted for the file extent offset. */
ret = btrfs_lookup_csums_range(log->fs_info->csum_root,
em->block_start + csum_offset,
em = list_entry(extents.next, struct extent_map, list);
list_del_init(&em->list);
- clear_bit(EXTENT_FLAG_LOGGING, &em->flags);
/*
* If we had an error we just need to delete everybody from our
* private list.
*/
if (ret) {
+ clear_em_logging(tree, em);
free_extent_map(em);
continue;
}
write_unlock(&tree->lock);
ret = log_one_extent(trans, inode, root, em, path);
- free_extent_map(em);
write_lock(&tree->lock);
+ clear_em_logging(tree, em);
+ free_extent_map(em);
}
WARN_ON(!list_empty(&extents));
write_unlock(&tree->lock);
}
} else {
ret = btrfs_get_bdev_and_sb(device_path,
- FMODE_READ | FMODE_EXCL,
+ FMODE_WRITE | FMODE_EXCL,
root->fs_info->bdev_holder, 0,
&bdev, &bh);
if (ret)
ret = 0;
/* Notify udev that device has changed */
- btrfs_kobject_uevent(bdev, KOBJ_CHANGE);
+ if (bdev)
+ btrfs_kobject_uevent(bdev, KOBJ_CHANGE);
error_brelse:
brelse(bh);
cache = btrfs_lookup_block_group(fs_info, chunk_offset);
chunk_used = btrfs_block_group_used(&cache->item);
- user_thresh = div_factor_fine(cache->key.offset, bargs->usage);
+ if (bargs->usage == 0)
+ user_thresh = 0;
+ else if (bargs->usage > 100)
+ user_thresh = cache->key.offset;
+ else
+ user_thresh = div_factor_fine(cache->key.offset,
+ bargs->usage);
+
if (chunk_used < user_thresh)
ret = 0;
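Before this fix the usage filter fed bargs->usage straight into div_factor_fine(), so usage=0 or usage>100 produced meaningless thresholds. The clamped computation as standalone C, with the percentage that div_factor_fine() computes open-coded:

    #include <stdint.h>
    #include <stdio.h>

    /* Clamped usage threshold: usage is a percentage in [0, 100].
     * 0 means "relocate nothing by usage"; anything above 100 is
     * treated as 100%. */
    static uint64_t user_thresh(uint64_t chunk_size, uint32_t usage)
    {
        if (usage == 0)
            return 0;
        if (usage > 100)
            return chunk_size;
        return chunk_size * usage / 100;   /* div_factor_fine() */
    }

    int main(void)
    {
        /* A 1 GiB chunk with usage=30 yields a ~322 MiB threshold. */
        printf("%llu\n", (unsigned long long)user_thresh(1ULL << 30, 30));
        return 0;
    }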
unset_balance_control(fs_info);
ret = del_balance_item(fs_info->tree_root);
BUG_ON(ret);
+
+ atomic_set(&fs_info->mutually_exclusive_operation_running, 0);
}
void update_ioctl_balance_args(struct btrfs_fs_info *fs_info, int lock,
out:
if (bctl->flags & BTRFS_BALANCE_RESUME)
__cancel_balance(fs_info);
- else
+ else {
kfree(bctl);
+ atomic_set(&fs_info->mutually_exclusive_operation_running, 0);
+ }
return ret;
}
ret = btrfs_balance(fs_info->balance_ctl, NULL);
}
- atomic_set(&fs_info->mutually_exclusive_operation_running, 0);
mutex_unlock(&fs_info->balance_mutex);
mutex_unlock(&fs_info->volume_mutex);
return 0;
}
- WARN_ON(atomic_xchg(&fs_info->mutually_exclusive_operation_running, 1));
tsk = kthread_run(balance_kthread, fs_info, "btrfs-balance");
if (IS_ERR(tsk))
return PTR_ERR(tsk);
btrfs_balance_sys(leaf, item, &disk_bargs);
btrfs_disk_balance_args_to_cpu(&bctl->sys, &disk_bargs);
+ WARN_ON(atomic_xchg(&fs_info->mutually_exclusive_operation_running, 1));
+
mutex_lock(&fs_info->volume_mutex);
mutex_lock(&fs_info->balance_mutex);
{ 1, 1, 2, 2, 2, 2 /* raid1 */ },
{ 1, 2, 1, 1, 1, 2 /* dup */ },
{ 1, 1, 0, 2, 1, 1 /* raid0 */ },
- { 1, 1, 0, 1, 1, 1 /* single */ },
+ { 1, 1, 1, 1, 1, 1 /* single */ },
};
static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
compose_mount_options_err:
kfree(mountdata);
mountdata = ERR_PTR(rc);
+ kfree(*devname);
+ *devname = NULL;
goto compose_mount_options_out;
}
}
case AF_INET6: {
struct sockaddr_in6 *saddr6 = (struct sockaddr_in6 *)srcaddr;
- struct sockaddr_in6 *vaddr6 = (struct sockaddr_in6 *)&rhs;
+ struct sockaddr_in6 *vaddr6 = (struct sockaddr_in6 *)rhs;
return ipv6_addr_equal(&saddr6->sin6_addr, &vaddr6->sin6_addr);
}
default:
#endif
return -EINVAL;
-#ifdef CONFIG_COMPAT
- if (count > sizeof(struct dlm_write_request32) + DLM_RESNAME_MAXLEN)
-#else
+ /*
+ * can't compare against COMPAT/dlm_write_request32 because
+ * we don't yet know if is64bit is zero
+ */
if (count > sizeof(struct dlm_write_request) + DLM_RESNAME_MAXLEN)
-#endif
return -EINVAL;
kbuf = kzalloc(count + 1, GFP_NOFS);
retval = f2fs_getxattr(inode, name_index, "", value, retval);
}
- if (retval < 0) {
- if (retval == -ENODATA)
- acl = NULL;
- else
- acl = ERR_PTR(retval);
- } else {
+ if (retval > 0)
acl = f2fs_acl_from_disk(value, retval);
- }
+ else if (retval == -ENODATA)
+ acl = NULL;
+ else
+ acl = ERR_PTR(retval);
kfree(value);
+
if (!IS_ERR(acl))
set_cached_acl(inode, type, acl);
{
struct inode *inode = page->mapping->host;
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
- int err;
- wait_on_page_writeback(page);
-
- err = write_meta_page(sbi, page, wbc);
- if (err) {
+	/* Do not write any meta pages if an IO error has occurred */
+ if (wbc->for_reclaim ||
+ is_set_ckpt_flags(F2FS_CKPT(sbi), CP_ERROR_FLAG)) {
+ dec_page_count(sbi, F2FS_DIRTY_META);
wbc->pages_skipped++;
set_page_dirty(page);
+ return AOP_WRITEPAGE_ACTIVATE;
}
- dec_page_count(sbi, F2FS_DIRTY_META);
+ wait_on_page_writeback(page);
- /* In this case, we should not unlock this page */
- if (err != AOP_WRITEPAGE_ACTIVATE)
- unlock_page(page);
- return err;
+ write_meta_page(sbi, page);
+ dec_page_count(sbi, F2FS_DIRTY_META);
+ unlock_page(page);
+ return 0;
}
static int f2fs_write_meta_pages(struct address_space *mapping,
BUG_ON(page->mapping != mapping);
BUG_ON(!PageDirty(page));
clear_page_dirty_for_io(page);
- f2fs_write_meta_page(page, &wbc);
+ if (f2fs_write_meta_page(page, &wbc)) {
+ unlock_page(page);
+ break;
+ }
if (nwritten++ >= nr_to_write)
break;
}
if (!PageDirty(page)) {
__set_page_dirty_nobuffers(page);
inc_page_count(sbi, F2FS_DIRTY_META);
- F2FS_SET_SB_DIRT(sbi);
return 1;
}
return 0;
goto retry;
}
new->ino = ino;
- INIT_LIST_HEAD(&new->list);
/* add new_oentry into list which is sorted by inode number */
- if (orphan) {
- struct orphan_inode_entry *prev;
-
- /* get previous entry */
- prev = list_entry(orphan->list.prev, typeof(*prev), list);
- if (&prev->list != head)
- /* insert new orphan inode entry */
- list_add(&new->list, &prev->list);
- else
- list_add(&new->list, head);
- } else {
+ if (orphan)
+ list_add(&new->list, this->prev);
+ else
list_add_tail(&new->list, head);
- }
+
sbi->n_orphans++;
out:
mutex_unlock(&sbi->orphan_inode_mutex);
/*
* Freeze all the FS-operations for checkpoint.
*/
-void block_operations(struct f2fs_sb_info *sbi)
+static void block_operations(struct f2fs_sb_info *sbi)
{
int t;
struct writeback_control wbc = {
sbi->alloc_valid_block_count = 0;
/* Here, we only have one bio having CP pack */
- if (is_set_ckpt_flags(ckpt, CP_ERROR_FLAG))
- sbi->sb->s_flags |= MS_RDONLY;
- else
- sync_meta_pages(sbi, META_FLUSH, LONG_MAX);
+ sync_meta_pages(sbi, META_FLUSH, LONG_MAX);
- clear_prefree_segments(sbi);
- F2FS_RESET_SB_DIRT(sbi);
+ if (!is_set_ckpt_flags(ckpt, CP_ERROR_FLAG)) {
+ clear_prefree_segments(sbi);
+ F2FS_RESET_SB_DIRT(sbi);
+ }
}
/*
* We guarantee that this checkpoint procedure should not fail.
*/
-void write_checkpoint(struct f2fs_sb_info *sbi, bool blocked, bool is_umount)
+void write_checkpoint(struct f2fs_sb_info *sbi, bool is_umount)
{
struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
unsigned long long ckpt_ver;
- if (!blocked) {
- mutex_lock(&sbi->cp_mutex);
- block_operations(sbi);
- }
+ mutex_lock(&sbi->cp_mutex);
+ block_operations(sbi);
f2fs_submit_bio(sbi, DATA, true);
f2fs_submit_bio(sbi, NODE, true);
sbi->n_orphans = 0;
}
-int create_checkpoint_caches(void)
+int __init create_checkpoint_caches(void)
{
orphan_entry_slab = f2fs_kmem_cache_create("f2fs_orphan_entry",
sizeof(struct orphan_inode_entry), NULL);
#define MAX_DESIRED_PAGES_WP 4096
+static int __f2fs_writepage(struct page *page, struct writeback_control *wbc,
+ void *data)
+{
+ struct address_space *mapping = data;
+ int ret = mapping->a_ops->writepage(page, wbc);
+ mapping_set_error(mapping, ret);
+ return ret;
+}
+
static int f2fs_write_data_pages(struct address_space *mapping,
struct writeback_control *wbc)
{
if (!S_ISDIR(inode->i_mode))
mutex_lock(&sbi->writepages);
- ret = generic_writepages(mapping, wbc);
+ ret = write_cache_pages(mapping, wbc, __f2fs_writepage, mapping);
if (!S_ISDIR(inode->i_mode))
mutex_unlock(&sbi->writepages);
f2fs_submit_bio(sbi, DATA, (wbc->sync_mode == WB_SYNC_ALL));
return 0;
}
+static sector_t f2fs_bmap(struct address_space *mapping, sector_t block)
+{
+ return generic_block_bmap(mapping, block, get_data_block_ro);
+}
+
const struct address_space_operations f2fs_dblock_aops = {
.readpage = f2fs_read_data_page,
.readpages = f2fs_read_data_pages,
.invalidatepage = f2fs_invalidate_data_page,
.releasepage = f2fs_release_data_page,
.direct_IO = f2fs_direct_IO,
+ .bmap = f2fs_bmap,
};
static LIST_HEAD(f2fs_stat_list);
static struct dentry *debugfs_root;
+static DEFINE_MUTEX(f2fs_stat_mutex);
static void update_general_status(struct f2fs_sb_info *sbi)
{
int i = 0;
int j;
+ mutex_lock(&f2fs_stat_mutex);
list_for_each_entry_safe(si, next, &f2fs_stat_list, stat_list) {
+ char devname[BDEVNAME_SIZE];
- mutex_lock(&si->stat_lock);
- if (!si->sbi) {
- mutex_unlock(&si->stat_lock);
- continue;
- }
update_general_status(si->sbi);
- seq_printf(s, "\n=====[ partition info. #%d ]=====\n", i++);
- seq_printf(s, "[SB: 1] [CP: 2] [NAT: %d] [SIT: %d] ",
- si->nat_area_segs, si->sit_area_segs);
+ seq_printf(s, "\n=====[ partition info(%s). #%d ]=====\n",
+ bdevname(si->sbi->sb->s_bdev, devname), i++);
+ seq_printf(s, "[SB: 1] [CP: 2] [SIT: %d] [NAT: %d] ",
+ si->sit_area_segs, si->nat_area_segs);
seq_printf(s, "[SSA: %d] [MAIN: %d",
si->ssa_area_segs, si->main_area_segs);
seq_printf(s, "(OverProv:%d Resv:%d)]\n\n",
seq_printf(s, "\nMemory: %u KB = static: %u + cached: %u\n",
(si->base_mem + si->cache_mem) >> 10,
si->base_mem >> 10, si->cache_mem >> 10);
- mutex_unlock(&si->stat_lock);
}
+ mutex_unlock(&f2fs_stat_mutex);
return 0;
}
.release = single_release,
};
-static int init_stats(struct f2fs_sb_info *sbi)
+int f2fs_build_stats(struct f2fs_sb_info *sbi)
{
struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
struct f2fs_stat_info *si;
return -ENOMEM;
si = sbi->stat_info;
- mutex_init(&si->stat_lock);
- list_add_tail(&si->stat_list, &f2fs_stat_list);
-
si->all_area_segs = le32_to_cpu(raw_super->segment_count);
si->sit_area_segs = le32_to_cpu(raw_super->segment_count_sit);
si->nat_area_segs = le32_to_cpu(raw_super->segment_count_nat);
si->main_area_zones = si->main_area_sections /
le32_to_cpu(raw_super->secs_per_zone);
si->sbi = sbi;
- return 0;
-}
-int f2fs_build_stats(struct f2fs_sb_info *sbi)
-{
- int retval;
-
- retval = init_stats(sbi);
- if (retval)
- return retval;
-
- if (!debugfs_root)
- debugfs_root = debugfs_create_dir("f2fs", NULL);
+ mutex_lock(&f2fs_stat_mutex);
+ list_add_tail(&si->stat_list, &f2fs_stat_list);
+ mutex_unlock(&f2fs_stat_mutex);
- debugfs_create_file("status", S_IRUGO, debugfs_root, NULL, &stat_fops);
return 0;
}
{
struct f2fs_stat_info *si = sbi->stat_info;
+ mutex_lock(&f2fs_stat_mutex);
list_del(&si->stat_list);
- mutex_lock(&si->stat_lock);
- si->sbi = NULL;
- mutex_unlock(&si->stat_lock);
+ mutex_unlock(&f2fs_stat_mutex);
+
kfree(sbi->stat_info);
}
-void destroy_root_stats(void)
+void __init f2fs_create_root_stats(void)
+{
+ debugfs_root = debugfs_create_dir("f2fs", NULL);
+ if (debugfs_root)
+ debugfs_create_file("status", S_IRUGO, debugfs_root,
+ NULL, &stat_fops);
+}
+
+void f2fs_destroy_root_stats(void)
{
debugfs_remove_recursive(debugfs_root);
debugfs_root = NULL;
}
if (inode) {
- inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME;
+ inode->i_ctime = CURRENT_TIME;
drop_nlink(inode);
if (S_ISDIR(inode->i_mode)) {
drop_nlink(inode);
}
/*
+ * ioctl commands
+ */
+#define F2FS_IOC_GETFLAGS FS_IOC_GETFLAGS
+#define F2FS_IOC_SETFLAGS FS_IOC_SETFLAGS
+
+#if defined(__KERNEL__) && defined(CONFIG_COMPAT)
+/*
+ * ioctl commands in 32 bit emulation
+ */
+#define F2FS_IOC32_GETFLAGS FS_IOC32_GETFLAGS
+#define F2FS_IOC32_SETFLAGS FS_IOC32_SETFLAGS
+#endif
+
+/*
* For INODE and NODE manager
*/
#define XATTR_NODE_OFFSET (-1) /*
/* Use below internally in f2fs*/
unsigned long flags; /* use to pass per-file flags */
- unsigned long long data_version;/* lastes version of data for fsync */
+ unsigned long long data_version;/* latest version of data for fsync */
atomic_t dirty_dents; /* # of dirty dentry pages */
f2fs_hash_t chash; /* hash value of given file name */
unsigned int clevel; /* maximum level of given file name */
static inline void set_new_dnode(struct dnode_of_data *dn, struct inode *inode,
struct page *ipage, struct page *npage, nid_t nid)
{
+ memset(dn, 0, sizeof(*dn));
dn->inode = inode;
dn->inode_page = ipage;
dn->node_page = npage;
dn->nid = nid;
- dn->inode_page_locked = 0;
}
/*
return atomic_read(&sbi->nr_pages[count_type]);
}
+static inline int get_blocktype_secs(struct f2fs_sb_info *sbi, int block_type)
+{
+ unsigned int pages_per_sec = sbi->segs_per_sec *
+ (1 << sbi->log_blocks_per_seg);
+ return ((get_pages(sbi, block_type) + pages_per_sec - 1)
+ >> sbi->log_blocks_per_seg) / sbi->segs_per_sec;
+}
+
static inline block_t valid_user_blocks(struct f2fs_sb_info *sbi)
{
block_t ret;
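get_blocktype_secs() converts a dirty-page count into whole sections, rounding up: adding pages_per_sec - 1 before the shift turns the division into a ceiling, so even one dirty page reserves a full section. The same arithmetic in standalone form:

    #include <stdio.h>

    /* Ceiling division: how many fixed-size sections does `pages` span?
     * Adding (divisor - 1) before dividing rounds any partial section up. */
    static unsigned int pages_to_secs(unsigned int pages,
                                      unsigned int log_blocks_per_seg,
                                      unsigned int segs_per_sec)
    {
        unsigned int pages_per_sec = segs_per_sec << log_blocks_per_seg;

        return ((pages + pages_per_sec - 1) >> log_blocks_per_seg)
                / segs_per_sec;
    }

    int main(void)
    {
        /* 512 blocks per segment (log2 = 9), 1 segment per section:
         * 1 dirty page already costs a whole section, 513 cost two. */
        printf("%u %u\n", pages_to_secs(1, 9, 1), pages_to_secs(513, 9, 1));
        return 0;
    }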
int f2fs_setattr(struct dentry *, struct iattr *);
int truncate_hole(struct inode *, pgoff_t, pgoff_t);
long f2fs_ioctl(struct file *, unsigned int, unsigned long);
+long f2fs_compat_ioctl(struct file *, unsigned int, unsigned long);
/*
* inode.c
*/
void f2fs_set_inode_flags(struct inode *);
-struct inode *f2fs_iget_nowait(struct super_block *, unsigned long);
struct inode *f2fs_iget(struct super_block *, unsigned long);
void update_inode(struct inode *, struct page *);
int f2fs_write_inode(struct inode *, struct writeback_control *);
* super.c
*/
int f2fs_sync_fs(struct super_block *, int);
+extern __printf(3, 4)
+void f2fs_msg(struct super_block *, const char *, const char *, ...);
/*
* hash.c
void flush_nat_entries(struct f2fs_sb_info *);
int build_node_manager(struct f2fs_sb_info *);
void destroy_node_manager(struct f2fs_sb_info *);
-int create_node_manager_caches(void);
+int __init create_node_manager_caches(void);
void destroy_node_manager_caches(void);
/*
struct page *get_sum_page(struct f2fs_sb_info *, unsigned int);
struct bio *f2fs_bio_alloc(struct block_device *, int);
void f2fs_submit_bio(struct f2fs_sb_info *, enum page_type, bool sync);
-int write_meta_page(struct f2fs_sb_info *, struct page *,
- struct writeback_control *);
+void write_meta_page(struct f2fs_sb_info *, struct page *);
void write_node_page(struct f2fs_sb_info *, struct page *, unsigned int,
block_t, block_t *);
void write_data_page(struct inode *, struct page *, struct dnode_of_data*,
void set_dirty_dir_page(struct inode *, struct page *);
void remove_dirty_dir_inode(struct inode *);
void sync_dirty_dir_inodes(struct f2fs_sb_info *);
-void block_operations(struct f2fs_sb_info *);
-void write_checkpoint(struct f2fs_sb_info *, bool, bool);
+void write_checkpoint(struct f2fs_sb_info *, bool);
void init_orphan_info(struct f2fs_sb_info *);
-int create_checkpoint_caches(void);
+int __init create_checkpoint_caches(void);
void destroy_checkpoint_caches(void);
/*
int start_gc_thread(struct f2fs_sb_info *);
void stop_gc_thread(struct f2fs_sb_info *);
block_t start_bidx_of_node(unsigned int);
-int f2fs_gc(struct f2fs_sb_info *, int);
+int f2fs_gc(struct f2fs_sb_info *);
void build_gc_manager(struct f2fs_sb_info *);
-int create_gc_caches(void);
+int __init create_gc_caches(void);
void destroy_gc_caches(void);
/*
int f2fs_build_stats(struct f2fs_sb_info *);
void f2fs_destroy_stats(struct f2fs_sb_info *);
-void destroy_root_stats(void);
+void __init f2fs_create_root_stats(void);
+void f2fs_destroy_root_stats(void);
#else
#define stat_inc_call_count(si)
#define stat_inc_seg_count(si, type)
static inline int f2fs_build_stats(struct f2fs_sb_info *sbi) { return 0; }
static inline void f2fs_destroy_stats(struct f2fs_sb_info *sbi) { }
-static inline void destroy_root_stats(void) { }
+static inline void __init f2fs_create_root_stats(void) { }
+static inline void f2fs_destroy_root_stats(void) { }
#endif
extern const struct file_operations f2fs_dir_operations;
#include <linux/writeback.h>
#include <linux/falloc.h>
#include <linux/types.h>
+#include <linux/compat.h>
#include <linux/uaccess.h>
#include <linux/mount.h>
}
static const struct vm_operations_struct f2fs_file_vm_ops = {
- .fault = filemap_fault,
- .page_mkwrite = f2fs_vm_page_mkwrite,
+ .fault = filemap_fault,
+ .page_mkwrite = f2fs_vm_page_mkwrite,
+ .remap_pages = generic_file_remap_pages,
};
static int need_to_sync_dir(struct f2fs_sb_info *sbi, struct inode *inode)
if (ret)
return ret;
+ /* guarantee free sections for fsync */
+ f2fs_balance_fs(sbi);
+
mutex_lock(&inode->i_mutex);
if (datasync && !(inode->i_state & I_DIRTY_DATASYNC))
if (!S_ISREG(inode->i_mode) || inode->i_nlink != 1)
need_cp = true;
- if (is_inode_flag_set(F2FS_I(inode), FI_NEED_CP))
+ else if (is_inode_flag_set(F2FS_I(inode), FI_NEED_CP))
need_cp = true;
- if (!space_for_roll_forward(sbi))
+ else if (!space_for_roll_forward(sbi))
need_cp = true;
- if (need_to_sync_dir(sbi, inode))
+ else if (need_to_sync_dir(sbi, inode))
need_cp = true;
if (need_cp) {
inode->i_mtime = inode->i_ctime = CURRENT_TIME;
mark_inode_dirty(inode);
}
-
- f2fs_balance_fs(F2FS_SB(inode->i_sb));
}
static int f2fs_getattr(struct vfsmount *mnt,
attr->ia_size != i_size_read(inode)) {
truncate_setsize(inode, attr->ia_size);
f2fs_truncate(inode);
+ f2fs_balance_fs(F2FS_SB(inode->i_sb));
}
__setattr_copy(inode, attr);
static void fill_zero(struct inode *inode, pgoff_t index,
loff_t start, loff_t len)
{
+ struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
struct page *page;
if (!len)
return;
+ f2fs_balance_fs(sbi);
+
+ mutex_lock_op(sbi, DATA_NEW);
page = get_new_data_page(inode, index, false);
+ mutex_unlock_op(sbi, DATA_NEW);
if (!IS_ERR(page)) {
wait_on_page_writeback(page);
struct dnode_of_data dn;
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
+ f2fs_balance_fs(sbi);
+
mutex_lock_op(sbi, DATA_TRUNC);
set_new_dnode(&dn, inode, NULL, NULL, 0);
err = get_dnode_of_data(&dn, index, RDONLY_NODE);
loff_t offset, loff_t len)
{
struct inode *inode = file->f_path.dentry->d_inode;
- struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
long ret;
if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
else
ret = expand_inode_data(inode, offset, len, mode);
- f2fs_balance_fs(sbi);
+ if (!ret) {
+ inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ mark_inode_dirty(inode);
+ }
return ret;
}
}
}
+#ifdef CONFIG_COMPAT
+long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case F2FS_IOC32_GETFLAGS:
+ cmd = F2FS_IOC_GETFLAGS;
+ break;
+ case F2FS_IOC32_SETFLAGS:
+ cmd = F2FS_IOC_SETFLAGS;
+ break;
+ default:
+ return -ENOIOCTLCMD;
+ }
+ return f2fs_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
+}
+#endif
+
const struct file_operations f2fs_file_operations = {
.llseek = generic_file_llseek,
.read = do_sync_read,
.fsync = f2fs_sync_file,
.fallocate = f2fs_fallocate,
.unlocked_ioctl = f2fs_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = f2fs_compat_ioctl,
+#endif
.splice_read = generic_file_splice_read,
.splice_write = generic_file_splice_write,
};
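f2fs_compat_ioctl() above is the usual 32-bit shim: map the compat command to its native number, convert the argument with compat_ptr(), and call the regular handler, returning -ENOIOCTLCMD for anything else. Since the flags ioctls alias the generic FS_IOC pair, no argument repacking is needed. The translation step as a standalone sketch (command values follow the _IOR('f', 1, int) and _IOR('f', 1, long) encodings, but treat them as illustrative; the kernel-internal -ENOIOCTLCMD is stood in for by -ENOTTY here):

    #include <errno.h>
    #include <stdio.h>

    /* Illustrative command numbers: the 32-bit and native encodings
     * differ only in the size field. */
    #define DEMO_IOC32_GETFLAGS 0x80046601u
    #define DEMO_IOC_GETFLAGS   0x80086601u

    /* Map a 32-bit ioctl command to its native counterpart; unknown
     * commands are rejected the way a compat handler rejects them. */
    static int translate_cmd(unsigned int cmd, unsigned int *native)
    {
        switch (cmd) {
        case DEMO_IOC32_GETFLAGS:
            *native = DEMO_IOC_GETFLAGS;
            return 0;
        default:
            return -ENOTTY;
        }
    }

    int main(void)
    {
        unsigned int native;

        if (translate_cmd(DEMO_IOC32_GETFLAGS, &native) == 0)
            printf("0x%08x -> 0x%08x\n", DEMO_IOC32_GETFLAGS, native);
        return 0;
    }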
if (kthread_should_stop())
break;
- f2fs_balance_fs(sbi);
-
- if (!test_opt(sbi, BG_GC))
+ if (sbi->sb->s_writers.frozen >= SB_FREEZE_WRITE) {
+ wait_ms = GC_THREAD_MAX_SLEEP_TIME;
continue;
+ }
/*
* [GC triggering condition]
sbi->bg_gc++;
- if (f2fs_gc(sbi, 1) == GC_NONE)
+ /* if return value is not zero, no victim was selected */
+ if (f2fs_gc(sbi))
wait_ms = GC_THREAD_NOGC_SLEEP_TIME;
else if (wait_ms == GC_THREAD_NOGC_SLEEP_TIME)
wait_ms = GC_THREAD_MAX_SLEEP_TIME;
int start_gc_thread(struct f2fs_sb_info *sbi)
{
struct f2fs_gc_kthread *gc_th;
+ dev_t dev = sbi->sb->s_bdev->bd_dev;
+ if (!test_opt(sbi, BG_GC))
+ return 0;
gc_th = kmalloc(sizeof(struct f2fs_gc_kthread), GFP_KERNEL);
if (!gc_th)
return -ENOMEM;
sbi->gc_thread = gc_th;
init_waitqueue_head(&sbi->gc_thread->gc_wait_queue_head);
sbi->gc_thread->f2fs_gc_task = kthread_run(gc_thread_func, sbi,
- GC_THREAD_NAME);
+ "f2fs_gc-%u:%u", MAJOR(dev), MINOR(dev));
if (IS_ERR(gc_th->f2fs_gc_task)) {
kfree(gc_th);
+ sbi->gc_thread = NULL;
return -ENOMEM;
}
return 0;
static unsigned int get_max_cost(struct f2fs_sb_info *sbi,
struct victim_sel_policy *p)
{
+ /* SSR allocates in a segment unit */
+ if (p->alloc_mode == SSR)
+ return 1 << sbi->log_blocks_per_seg;
if (p->gc_mode == GC_GREEDY)
return (1 << sbi->log_blocks_per_seg) * p->ofs_unit;
else if (p->gc_mode == GC_CB)
sentry = get_seg_entry(sbi, segno);
ret = f2fs_test_bit(offset, sentry->cur_valid_map);
mutex_unlock(&sit_i->sentry_lock);
- return ret ? GC_OK : GC_NEXT;
+ return ret;
}
/*
* On validity, copy that node with cold status, otherwise (invalid node)
* ignore that.
*/
-static int gc_node_segment(struct f2fs_sb_info *sbi,
+static void gc_node_segment(struct f2fs_sb_info *sbi,
struct f2fs_summary *sum, unsigned int segno, int gc_type)
{
bool initial = true;
for (off = 0; off < sbi->blocks_per_seg; off++, entry++) {
nid_t nid = le32_to_cpu(entry->nid);
struct page *node_page;
- int err;
- /*
- * It makes sure that free segments are able to write
- * all the dirty node pages before CP after this CP.
- * So let's check the space of dirty node pages.
- */
- if (should_do_checkpoint(sbi)) {
- mutex_lock(&sbi->cp_mutex);
- block_operations(sbi);
- return GC_BLOCKED;
- }
+		/* stop BG_GC if there are not enough free sections. */
+ if (gc_type == BG_GC && has_not_enough_free_secs(sbi, 0))
+ return;
- err = check_valid_map(sbi, segno, off);
- if (err == GC_NEXT)
+ if (check_valid_map(sbi, segno, off) == 0)
continue;
if (initial) {
};
sync_node_pages(sbi, 0, &wbc);
}
- return GC_DONE;
}
/*
- * Calculate start block index that this node page contains
+ * Calculate the start block index for the given node offset.
+ * Be careful: the caller must pass a node offset that refers only to a direct
+ * node block. Passing an offset that points to any other node block type,
+ * such as an indirect or double indirect node block, is a caller's bug.
*/
block_t start_bidx_of_node(unsigned int node_ofs)
{
node_page = get_node_page(sbi, nid);
if (IS_ERR(node_page))
- return GC_NEXT;
+ return 0;
get_node_info(sbi, nid, dni);
if (sum->version != dni->version) {
f2fs_put_page(node_page, 1);
- return GC_NEXT;
+ return 0;
}
*nofs = ofs_of_node(node_page);
f2fs_put_page(node_page, 1);
if (source_blkaddr != blkaddr)
- return GC_NEXT;
- return GC_OK;
+ return 0;
+ return 1;
}
static void move_data_page(struct inode *inode, struct page *page, int gc_type)
* If the parent node is not valid or the data block address is different,
* the victim data block is ignored.
*/
-static int gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+static void gc_data_segment(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
struct list_head *ilist, unsigned int segno, int gc_type)
{
struct super_block *sb = sbi->sb;
struct f2fs_summary *entry;
block_t start_addr;
- int err, off;
+ int off;
int phase = 0;
start_addr = START_BLOCK(sbi, segno);
unsigned int ofs_in_node, nofs;
block_t start_bidx;
- /*
- * It makes sure that free segments are able to write
- * all the dirty node pages before CP after this CP.
- * So let's check the space of dirty node pages.
- */
- if (should_do_checkpoint(sbi)) {
- mutex_lock(&sbi->cp_mutex);
- block_operations(sbi);
- err = GC_BLOCKED;
- goto stop;
- }
+		/* stop BG_GC if there are not enough free sections. */
+ if (gc_type == BG_GC && has_not_enough_free_secs(sbi, 0))
+ return;
- err = check_valid_map(sbi, segno, off);
- if (err == GC_NEXT)
+ if (check_valid_map(sbi, segno, off) == 0)
continue;
if (phase == 0) {
}
/* Get an inode by ino with checking validity */
- err = check_dnode(sbi, entry, &dni, start_addr + off, &nofs);
- if (err == GC_NEXT)
+ if (check_dnode(sbi, entry, &dni, start_addr + off, &nofs) == 0)
continue;
if (phase == 1) {
ofs_in_node = le16_to_cpu(entry->ofs_in_node);
if (phase == 2) {
- inode = f2fs_iget_nowait(sb, dni.ino);
+ inode = f2fs_iget(sb, dni.ino);
if (IS_ERR(inode))
continue;
}
if (++phase < 4)
goto next_step;
- err = GC_DONE;
-stop:
+
if (gc_type == FG_GC)
f2fs_submit_bio(sbi, DATA, true);
- return err;
}
static int __get_victim(struct f2fs_sb_info *sbi, unsigned int *victim,
return ret;
}
-static int do_garbage_collect(struct f2fs_sb_info *sbi, unsigned int segno,
+static void do_garbage_collect(struct f2fs_sb_info *sbi, unsigned int segno,
struct list_head *ilist, int gc_type)
{
struct page *sum_page;
struct f2fs_summary_block *sum;
- int ret = GC_DONE;
/* read segment summary of victim */
sum_page = get_sum_page(sbi, segno);
if (IS_ERR(sum_page))
- return GC_ERROR;
+ return;
/*
* CP needs to lock sum_page. In this time, we don't need
switch (GET_SUM_TYPE((&sum->footer))) {
case SUM_TYPE_NODE:
- ret = gc_node_segment(sbi, sum->entries, segno, gc_type);
+ gc_node_segment(sbi, sum->entries, segno, gc_type);
break;
case SUM_TYPE_DATA:
- ret = gc_data_segment(sbi, sum->entries, ilist, segno, gc_type);
+ gc_data_segment(sbi, sum->entries, ilist, segno, gc_type);
break;
}
stat_inc_seg_count(sbi, GET_SUM_TYPE((&sum->footer)));
stat_inc_call_count(sbi->stat_info);
f2fs_put_page(sum_page, 0);
- return ret;
}
-int f2fs_gc(struct f2fs_sb_info *sbi, int nGC)
+int f2fs_gc(struct f2fs_sb_info *sbi)
{
- unsigned int segno;
- int old_free_secs, cur_free_secs;
- int gc_status, nfree;
struct list_head ilist;
+ unsigned int segno, i;
int gc_type = BG_GC;
+ int nfree = 0;
+ int ret = -1;
INIT_LIST_HEAD(&ilist);
gc_more:
- nfree = 0;
- gc_status = GC_NONE;
+ if (!(sbi->sb->s_flags & MS_ACTIVE))
+ goto stop;
- if (has_not_enough_free_secs(sbi))
- old_free_secs = reserved_sections(sbi);
- else
- old_free_secs = free_sections(sbi);
+ if (gc_type == BG_GC && has_not_enough_free_secs(sbi, nfree))
+ gc_type = FG_GC;
- while (sbi->sb->s_flags & MS_ACTIVE) {
- int i;
- if (has_not_enough_free_secs(sbi))
- gc_type = FG_GC;
+ if (!__get_victim(sbi, &segno, gc_type, NO_CHECK_TYPE))
+ goto stop;
+ ret = 0;
- cur_free_secs = free_sections(sbi) + nfree;
+ for (i = 0; i < sbi->segs_per_sec; i++)
+ do_garbage_collect(sbi, segno + i, &ilist, gc_type);
- /* We got free space successfully. */
- if (nGC < cur_free_secs - old_free_secs)
- break;
+ if (gc_type == FG_GC &&
+ get_valid_blocks(sbi, segno, sbi->segs_per_sec) == 0)
+ nfree++;
- if (!__get_victim(sbi, &segno, gc_type, NO_CHECK_TYPE))
- break;
+ if (has_not_enough_free_secs(sbi, nfree))
+ goto gc_more;
- for (i = 0; i < sbi->segs_per_sec; i++) {
- /*
- * do_garbage_collect will give us three gc_status:
- * GC_ERROR, GC_DONE, and GC_BLOCKED.
- * If GC is finished uncleanly, we have to return
- * the victim to dirty segment list.
- */
- gc_status = do_garbage_collect(sbi, segno + i,
- &ilist, gc_type);
- if (gc_status != GC_DONE)
- goto stop;
- nfree++;
- }
- }
+ if (gc_type == FG_GC)
+ write_checkpoint(sbi, false);
stop:
- if (has_not_enough_free_secs(sbi) || gc_status == GC_BLOCKED) {
- write_checkpoint(sbi, (gc_status == GC_BLOCKED), false);
- if (nfree)
- goto gc_more;
- }
mutex_unlock(&sbi->gc_mutex);
put_gc_inode(&ilist);
- BUG_ON(!list_empty(&ilist));
- return gc_status;
+ return ret;
}
void build_gc_manager(struct f2fs_sb_info *sbi)
DIRTY_I(sbi)->v_ops = &default_v_ops;
}
-int create_gc_caches(void)
+int __init create_gc_caches(void)
{
winode_slab = f2fs_kmem_cache_create("f2fs_gc_inodes",
sizeof(struct inode_entry), NULL);
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
-#define GC_THREAD_NAME "f2fs_gc_task"
#define GC_THREAD_MIN_WB_PAGES 1 /*
* a threshold to determine
* whether IO subsystem is idle
/* Search max. number of dirty segments to select a victim segment */
#define MAX_VICTIM_SEARCH 20
-enum {
- GC_NONE = 0,
- GC_ERROR,
- GC_OK,
- GC_NEXT,
- GC_BLOCKED,
- GC_DONE,
-};
-
struct f2fs_gc_kthread {
struct task_struct *f2fs_gc_task;
wait_queue_head_t gc_wait_queue_head;
struct request_list *rl = &q->root_rl;
return !(rl->count[BLK_RW_SYNC]) && !(rl->count[BLK_RW_ASYNC]);
}
-
-static inline bool should_do_checkpoint(struct f2fs_sb_info *sbi)
-{
- unsigned int pages_per_sec = sbi->segs_per_sec *
- (1 << sbi->log_blocks_per_seg);
- int node_secs = ((get_pages(sbi, F2FS_DIRTY_NODES) + pages_per_sec - 1)
- >> sbi->log_blocks_per_seg) / sbi->segs_per_sec;
- int dent_secs = ((get_pages(sbi, F2FS_DIRTY_DENTS) + pages_per_sec - 1)
- >> sbi->log_blocks_per_seg) / sbi->segs_per_sec;
- return free_sections(sbi) <= (node_secs + 2 * dent_secs + 2);
-}
#include "f2fs.h"
#include "node.h"
-struct f2fs_iget_args {
- u64 ino;
- int on_free;
-};
-
void f2fs_set_inode_flags(struct inode *inode)
{
unsigned int flags = F2FS_I(inode)->i_flags;
inode->i_flags |= S_DIRSYNC;
}
-static int f2fs_iget_test(struct inode *inode, void *data)
-{
- struct f2fs_iget_args *args = data;
-
- if (inode->i_ino != args->ino)
- return 0;
- if (inode->i_state & (I_FREEING | I_WILL_FREE)) {
- args->on_free = 1;
- return 0;
- }
- return 1;
-}
-
-struct inode *f2fs_iget_nowait(struct super_block *sb, unsigned long ino)
-{
- struct f2fs_iget_args args = {
- .ino = ino,
- .on_free = 0
- };
- struct inode *inode = ilookup5(sb, ino, f2fs_iget_test, &args);
-
- if (inode)
- return inode;
- if (!args.on_free)
- return f2fs_iget(sb, ino);
- return ERR_PTR(-ENOENT);
-}
-
static int do_read_inode(struct inode *inode)
{
struct f2fs_sb_info *sbi = F2FS_SB(inode->i_sb);
inode->i_ctime.tv_nsec = le32_to_cpu(ri->i_ctime_nsec);
inode->i_mtime.tv_nsec = le32_to_cpu(ri->i_mtime_nsec);
inode->i_generation = le32_to_cpu(ri->i_generation);
+ if (ri->i_addr[0])
+ inode->i_rdev = old_decode_dev(le32_to_cpu(ri->i_addr[0]));
+ else
+ inode->i_rdev = new_decode_dev(le32_to_cpu(ri->i_addr[1]));
fi->i_current_depth = le32_to_cpu(ri->i_current_depth);
fi->i_xattr_nid = le32_to_cpu(ri->i_xattr_nid);
ri->i_flags = cpu_to_le32(F2FS_I(inode)->i_flags);
ri->i_pino = cpu_to_le32(F2FS_I(inode)->i_pino);
ri->i_generation = cpu_to_le32(inode->i_generation);
+
+ if (S_ISCHR(inode->i_mode) || S_ISBLK(inode->i_mode)) {
+ if (old_valid_dev(inode->i_rdev)) {
+ ri->i_addr[0] =
+ cpu_to_le32(old_encode_dev(inode->i_rdev));
+ ri->i_addr[1] = 0;
+ } else {
+ ri->i_addr[0] = 0;
+ ri->i_addr[1] =
+ cpu_to_le32(new_encode_dev(inode->i_rdev));
+ ri->i_addr[2] = 0;
+ }
+ }
+
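The rdev hunks store device numbers for char/block special inodes the way ext2-family filesystems do: the legacy 16-bit encoding goes in i_addr[0] when major and minor both fit in 8 bits, otherwise i_addr[0] is zeroed and the 32-bit encoding goes in i_addr[1], which is how do_read_inode() tells the two layouts apart. A standalone sketch of the two encodings, mirroring the kernel's old_encode_dev()/new_encode_dev() helpers as I read them:

    #include <stdint.h>
    #include <stdio.h>

    /* Legacy format: 8-bit major and minor packed into 16 bits.
     * Valid only when both values fit, cf. old_valid_dev(). */
    static uint16_t old_encode(unsigned major, unsigned minor)
    {
        return (major << 8) | minor;
    }

    /* New format: low 8 minor bits stay in place, a 12-bit major sits
     * above them, and the high minor bits move to the top. */
    static uint32_t new_encode(unsigned major, unsigned minor)
    {
        return (minor & 0xff) | (major << 8) | ((minor & ~0xffu) << 12);
    }

    int main(void)
    {
        /* 8,1 (sda1) fits the old format; 259,0 (an NVMe partition)
         * needs the new one. */
        printf("old(8,1)   = 0x%04x\n", old_encode(8, 1));
        printf("new(259,0) = 0x%08x\n", new_encode(259, 0));
        return 0;
    }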
set_cold_node(inode, node_page);
set_page_dirty(node_page);
}
inode->i_ino == F2FS_META_INO(sbi))
return 0;
+ if (wbc)
+ f2fs_balance_fs(sbi);
+
node_page = get_node_page(sbi, inode->i_ino);
if (IS_ERR(node_page))
return PTR_ERR(node_page);
if (inode->i_nlink || is_bad_inode(inode))
goto no_delete;
+ sb_start_intwrite(inode->i_sb);
set_inode_flag(F2FS_I(inode), FI_NO_ALLOC);
i_size_write(inode, 0);
f2fs_truncate(inode);
remove_inode_page(inode);
+ sb_end_intwrite(inode->i_sb);
no_delete:
clear_inode(inode);
}
f2fs_put_page(page, 1);
continue;
}
- page_cache_release(page);
+ f2fs_put_page(page, 0);
}
}
return;
if (read_node_page(apage, READA))
- goto unlock_out;
+ unlock_page(apage);
- page_cache_release(apage);
- return;
-
-unlock_out:
- unlock_page(apage);
release_out:
- page_cache_release(apage);
+ f2fs_put_page(apage, 0);
+ return;
}
struct page *get_node_page(struct f2fs_sb_info *sbi, pgoff_t nid)
return 0;
}
+/*
+ * It is very important to gather dirty pages and write them at once, so that
+ * we can submit a big bio without interfering with other data writes.
+ * By default, 512 pages (2MB), one segment's worth, is quite reasonable.
+ */
+#define COLLECT_DIRTY_NODES 512
static int f2fs_write_node_pages(struct address_space *mapping,
struct writeback_control *wbc)
{
struct block_device *bdev = sbi->sb->s_bdev;
long nr_to_write = wbc->nr_to_write;
- if (wbc->for_kupdate)
- return 0;
-
- if (get_pages(sbi, F2FS_DIRTY_NODES) == 0)
- return 0;
-
+ /* First check balancing cached NAT entries */
if (try_to_free_nats(sbi, NAT_ENTRY_PER_BLOCK)) {
- write_checkpoint(sbi, false, false);
+ write_checkpoint(sbi, false);
return 0;
}
+ /* collect a number of dirty node pages and write together */
+ if (get_pages(sbi, F2FS_DIRTY_NODES) < COLLECT_DIRTY_NODES)
+ return 0;
+
/* if mounting is failed, skip writing node pages */
wbc->nr_to_write = bio_get_nr_vecs(bdev);
sync_node_pages(sbi, 0, wbc);
kfree(nm_i);
}
-int create_node_manager_caches(void)
+int __init create_node_manager_caches(void)
{
nat_entry_slab = f2fs_kmem_cache_create("nat_entry",
sizeof(struct nat_entry), NULL);
kunmap(page);
f2fs_put_page(page, 0);
} else {
- __f2fs_add_link(dir, &name, inode);
+ err = __f2fs_add_link(dir, &name, inode);
}
iput(dir);
out:
goto out;
}
- INIT_LIST_HEAD(&entry->list);
list_add_tail(&entry->list, head);
entry->blkaddr = blkaddr;
}
static void destroy_fsync_dnodes(struct f2fs_sb_info *sbi,
struct list_head *head)
{
- struct list_head *this;
- struct fsync_inode_entry *entry;
- list_for_each(this, head) {
- entry = list_entry(this, struct fsync_inode_entry, list);
+ struct fsync_inode_entry *entry, *tmp;
+
+ list_for_each_entry_safe(entry, tmp, head, list) {
iput(entry->inode);
list_del(&entry->list);
kmem_cache_free(fsync_entry_slab, entry);
f2fs_put_page(node_page, 1);
/* Deallocate previous index in the node page */
- inode = f2fs_iget_nowait(sbi->sb, ino);
+ inode = f2fs_iget(sbi->sb, ino);
if (IS_ERR(inode))
return;
out:
destroy_fsync_dnodes(sbi, &inode_list);
kmem_cache_destroy(fsync_entry_slab);
- write_checkpoint(sbi, false, false);
+ write_checkpoint(sbi, false);
}
* We should do GC or end up with checkpoint, if there are so many dirty
* dir/node pages without enough free segments.
*/
- if (has_not_enough_free_secs(sbi)) {
+ if (has_not_enough_free_secs(sbi, 0)) {
mutex_lock(&sbi->gc_mutex);
- f2fs_gc(sbi, 1);
+ f2fs_gc(sbi);
}
}
* If there is not enough reserved sections,
* we should not reuse prefree segments.
*/
- if (has_not_enough_free_secs(sbi))
+ if (has_not_enough_free_secs(sbi, 0))
return NULL_SEGNO;
/*
}
}
+static int get_ssr_segment(struct f2fs_sb_info *sbi, int type)
+{
+ struct curseg_info *curseg = CURSEG_I(sbi, type);
+ const struct victim_selection *v_ops = DIRTY_I(sbi)->v_ops;
+
+ if (IS_NODESEG(type) || !has_not_enough_free_secs(sbi, 0))
+ return v_ops->get_victim(sbi,
+ &(curseg)->next_segno, BG_GC, type, SSR);
+
+ /* For data segments, let's do SSR more intensively */
+ for (; type >= CURSEG_HOT_DATA; type--)
+ if (v_ops->get_victim(sbi, &(curseg)->next_segno,
+ BG_GC, type, SSR))
+ return 1;
+ return 0;
+}
+
/*
* flush out current segment and replace it with new segment
 * This function should always succeed; otherwise BUG
if (page->mapping)
set_bit(AS_EIO, &page->mapping->flags);
set_ckpt_flags(p->sbi->ckpt, CP_ERROR_FLAG);
+ p->sbi->sb->s_flags |= MS_RDONLY;
}
end_page_writeback(page);
dec_page_count(p->sbi, F2FS_WRITEBACK);
mutex_unlock(&curseg->curseg_mutex);
}
-int write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
- struct writeback_control *wbc)
+void write_meta_page(struct f2fs_sb_info *sbi, struct page *page)
{
- if (wbc->for_reclaim)
- return AOP_WRITEPAGE_ACTIVATE;
-
set_page_writeback(page);
submit_write_page(sbi, page, page->index, META);
- return 0;
}
void write_node_page(struct f2fs_sb_info *sbi, struct page *page,
return (free_sections(sbi) < overprovision_sections(sbi));
}
-static inline int get_ssr_segment(struct f2fs_sb_info *sbi, int type)
+static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi, int freed)
{
- struct curseg_info *curseg = CURSEG_I(sbi, type);
- return DIRTY_I(sbi)->v_ops->get_victim(sbi,
- &(curseg)->next_segno, BG_GC, type, SSR);
-}
-
-static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi)
-{
- unsigned int pages_per_sec = (1 << sbi->log_blocks_per_seg) *
- sbi->segs_per_sec;
- int node_secs = ((get_pages(sbi, F2FS_DIRTY_NODES) + pages_per_sec - 1)
- >> sbi->log_blocks_per_seg) / sbi->segs_per_sec;
- int dent_secs = ((get_pages(sbi, F2FS_DIRTY_DENTS) + pages_per_sec - 1)
- >> sbi->log_blocks_per_seg) / sbi->segs_per_sec;
+ int node_secs = get_blocktype_secs(sbi, F2FS_DIRTY_NODES);
+ int dent_secs = get_blocktype_secs(sbi, F2FS_DIRTY_DENTS);
if (sbi->por_doing)
return false;
- if (free_sections(sbi) <= (node_secs + 2 * dent_secs +
- reserved_sections(sbi)))
- return true;
- return false;
+ return ((free_sections(sbi) + freed) <= (node_secs + 2 * dent_secs +
+ reserved_sections(sbi)));
}
static inline int utilization(struct f2fs_sb_info *sbi)
{Opt_err, NULL},
};
+void f2fs_msg(struct super_block *sb, const char *level, const char *fmt, ...)
+{
+ struct va_format vaf;
+ va_list args;
+
+ va_start(args, fmt);
+ vaf.fmt = fmt;
+ vaf.va = &args;
+ printk("%sF2FS-fs (%s): %pV\n", level, sb->s_id, &vaf);
+ va_end(args);
+}
+
static void init_once(void *foo)
{
struct f2fs_inode_info *fi = (struct f2fs_inode_info *) foo;
f2fs_destroy_stats(sbi);
stop_gc_thread(sbi);
- write_checkpoint(sbi, false, true);
+ write_checkpoint(sbi, true);
iput(sbi->node_inode);
iput(sbi->meta_inode);
return 0;
if (sync)
- write_checkpoint(sbi, false, false);
+ write_checkpoint(sbi, false);
+ else
+ f2fs_balance_fs(sbi);
return 0;
}
+static int f2fs_freeze(struct super_block *sb)
+{
+ int err;
+
+ if (sb->s_flags & MS_RDONLY)
+ return 0;
+
+ err = f2fs_sync_fs(sb, 1);
+ return err;
+}
+
+static int f2fs_unfreeze(struct super_block *sb)
+{
+ return 0;
+}
+
static int f2fs_statfs(struct dentry *dentry, struct kstatfs *buf)
{
struct super_block *sb = dentry->d_sb;
seq_puts(seq, ",noacl");
#endif
if (test_opt(sbi, DISABLE_EXT_IDENTIFY))
- seq_puts(seq, ",disable_ext_indentify");
+ seq_puts(seq, ",disable_ext_identify");
seq_printf(seq, ",active_logs=%u", sbi->active_logs);
.evict_inode = f2fs_evict_inode,
.put_super = f2fs_put_super,
.sync_fs = f2fs_sync_fs,
+ .freeze_fs = f2fs_freeze,
+ .unfreeze_fs = f2fs_unfreeze,
.statfs = f2fs_statfs,
};
.get_parent = f2fs_get_parent,
};
-static int parse_options(struct f2fs_sb_info *sbi, char *options)
+static int parse_options(struct super_block *sb, struct f2fs_sb_info *sbi,
+ char *options)
{
substring_t args[MAX_OPT_ARGS];
char *p;
break;
#else
case Opt_nouser_xattr:
- pr_info("nouser_xattr options not supported\n");
+ f2fs_msg(sb, KERN_INFO,
+ "nouser_xattr options not supported");
break;
#endif
#ifdef CONFIG_F2FS_FS_POSIX_ACL
break;
#else
case Opt_noacl:
- pr_info("noacl options not supported\n");
+ f2fs_msg(sb, KERN_INFO, "noacl options not supported");
break;
#endif
case Opt_active_logs:
set_opt(sbi, DISABLE_EXT_IDENTIFY);
break;
default:
- pr_err("Unrecognized mount option \"%s\" or missing value\n",
- p);
+ f2fs_msg(sb, KERN_ERR,
+ "Unrecognized mount option \"%s\" or missing value",
+ p);
return -EINVAL;
}
}
return result;
}
-static int sanity_check_raw_super(struct f2fs_super_block *raw_super)
+static int sanity_check_raw_super(struct super_block *sb,
+ struct f2fs_super_block *raw_super)
{
unsigned int blocksize;
- if (F2FS_SUPER_MAGIC != le32_to_cpu(raw_super->magic))
+ if (F2FS_SUPER_MAGIC != le32_to_cpu(raw_super->magic)) {
+ f2fs_msg(sb, KERN_INFO,
+ "Magic Mismatch, valid(0x%x) - read(0x%x)",
+ F2FS_SUPER_MAGIC, le32_to_cpu(raw_super->magic));
return 1;
+ }
+
+ /* Currently, support only 4KB page cache size */
+ if (F2FS_BLKSIZE != PAGE_CACHE_SIZE) {
+ f2fs_msg(sb, KERN_INFO,
+ "Invalid page_cache_size (%lu), supports only 4KB\n",
+ PAGE_CACHE_SIZE);
+ return 1;
+ }
/* Currently, support only 4KB block size */
blocksize = 1 << le32_to_cpu(raw_super->log_blocksize);
- if (blocksize != PAGE_CACHE_SIZE)
+ if (blocksize != F2FS_BLKSIZE) {
+ f2fs_msg(sb, KERN_INFO,
+ "Invalid blocksize (%u), supports only 4KB\n",
+ blocksize);
return 1;
+ }
+
if (le32_to_cpu(raw_super->log_sectorsize) !=
- F2FS_LOG_SECTOR_SIZE)
+ F2FS_LOG_SECTOR_SIZE) {
+ f2fs_msg(sb, KERN_INFO, "Invalid log sectorsize");
return 1;
+ }
if (le32_to_cpu(raw_super->log_sectors_per_block) !=
- F2FS_LOG_SECTORS_PER_BLOCK)
+ F2FS_LOG_SECTORS_PER_BLOCK) {
+ f2fs_msg(sb, KERN_INFO, "Invalid log sectors per block");
return 1;
+ }
return 0;
}
-static int sanity_check_ckpt(struct f2fs_super_block *raw_super,
- struct f2fs_checkpoint *ckpt)
+static int sanity_check_ckpt(struct f2fs_sb_info *sbi)
{
unsigned int total, fsmeta;
+ struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
+ struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
total = le32_to_cpu(raw_super->segment_count);
fsmeta = le32_to_cpu(raw_super->segment_count_ckpt);
if (fsmeta >= total)
return 1;
+
+ if (is_set_ckpt_flags(ckpt, CP_ERROR_FLAG)) {
+ f2fs_msg(sbi->sb, KERN_ERR, "A bug case: need to run fsck");
+ return 1;
+ }
return 0;
}
atomic_set(&sbi->nr_pages[i], 0);
}
+static int validate_superblock(struct super_block *sb,
+ struct f2fs_super_block **raw_super,
+ struct buffer_head **raw_super_buf, sector_t block)
+{
+ const char *super = (block == 0 ? "first" : "second");
+
+ /* read f2fs raw super block */
+ *raw_super_buf = sb_bread(sb, block);
+ if (!*raw_super_buf) {
+ f2fs_msg(sb, KERN_ERR, "unable to read %s superblock",
+ super);
+ return 1;
+ }
+
+ *raw_super = (struct f2fs_super_block *)
+ ((char *)(*raw_super_buf)->b_data + F2FS_SUPER_OFFSET);
+
+ /* sanity checking of raw super */
+ if (!sanity_check_raw_super(sb, *raw_super))
+ return 0;
+
+ f2fs_msg(sb, KERN_ERR, "Can't find a valid F2FS filesystem "
+ "in %s superblock", super);
+ return 1;
+}
+
static int f2fs_fill_super(struct super_block *sb, void *data, int silent)
{
struct f2fs_sb_info *sbi;
if (!sbi)
return -ENOMEM;
- /* set a temporary block size */
- if (!sb_set_blocksize(sb, F2FS_BLKSIZE))
- goto free_sbi;
-
- /* read f2fs raw super block */
- raw_super_buf = sb_bread(sb, 0);
- if (!raw_super_buf) {
- err = -EIO;
+ /* set a block size */
+ if (!sb_set_blocksize(sb, F2FS_BLKSIZE)) {
+ f2fs_msg(sb, KERN_ERR, "unable to set blocksize");
goto free_sbi;
}
- raw_super = (struct f2fs_super_block *)
- ((char *)raw_super_buf->b_data + F2FS_SUPER_OFFSET);
+ if (validate_superblock(sb, &raw_super, &raw_super_buf, 0)) {
+ brelse(raw_super_buf);
+ if (validate_superblock(sb, &raw_super, &raw_super_buf, 1))
+ goto free_sb_buf;
+ }
/* init some FS parameters */
sbi->active_logs = NR_CURSEG_TYPE;
set_opt(sbi, POSIX_ACL);
#endif
/* parse mount options */
- if (parse_options(sbi, (char *)data))
- goto free_sb_buf;
-
- /* sanity checking of raw super */
- if (sanity_check_raw_super(raw_super))
+ if (parse_options(sb, sbi, (char *)data))
goto free_sb_buf;
sb->s_maxbytes = max_file_size(le32_to_cpu(raw_super->log_blocksize));
/* get an inode for meta space */
sbi->meta_inode = f2fs_iget(sb, F2FS_META_INO(sbi));
if (IS_ERR(sbi->meta_inode)) {
+ f2fs_msg(sb, KERN_ERR, "Failed to read F2FS meta data inode");
err = PTR_ERR(sbi->meta_inode);
goto free_sb_buf;
}
err = get_valid_checkpoint(sbi);
- if (err)
+ if (err) {
+ f2fs_msg(sb, KERN_ERR, "Failed to get valid F2FS checkpoint");
goto free_meta_inode;
+ }
/* sanity checking of checkpoint */
err = -EINVAL;
- if (sanity_check_ckpt(raw_super, sbi->ckpt))
+ if (sanity_check_ckpt(sbi)) {
+ f2fs_msg(sb, KERN_ERR, "Invalid F2FS checkpoint");
goto free_cp;
+ }
sbi->total_valid_node_count =
le32_to_cpu(sbi->ckpt->valid_node_count);
INIT_LIST_HEAD(&sbi->dir_inode_list);
spin_lock_init(&sbi->dir_inode_lock);
- /* init super block */
- if (!sb_set_blocksize(sb, sbi->blocksize))
- goto free_cp;
-
init_orphan_info(sbi);
/* setup f2fs internal modules */
err = build_segment_manager(sbi);
- if (err)
+ if (err) {
+ f2fs_msg(sb, KERN_ERR,
+ "Failed to initialize F2FS segment manager");
goto free_sm;
+ }
err = build_node_manager(sbi);
- if (err)
+ if (err) {
+ f2fs_msg(sb, KERN_ERR,
+ "Failed to initialize F2FS node manager");
goto free_nm;
+ }
build_gc_manager(sbi);
/* get an inode for node space */
sbi->node_inode = f2fs_iget(sb, F2FS_NODE_INO(sbi));
if (IS_ERR(sbi->node_inode)) {
+ f2fs_msg(sb, KERN_ERR, "Failed to read node inode");
err = PTR_ERR(sbi->node_inode);
goto free_nm;
}
/* read root inode and dentry */
root = f2fs_iget(sb, F2FS_ROOT_INO(sbi));
if (IS_ERR(root)) {
+ f2fs_msg(sb, KERN_ERR, "Failed to read root inode");
err = PTR_ERR(root);
goto free_node_inode;
}
.fs_flags = FS_REQUIRES_DEV,
};
-static int init_inodecache(void)
+static int __init init_inodecache(void)
{
f2fs_inode_cachep = f2fs_kmem_cache_create("f2fs_inode_cache",
sizeof(struct f2fs_inode_info), NULL);
err = create_checkpoint_caches();
if (err)
goto fail;
- return register_filesystem(&f2fs_fs_type);
+ err = register_filesystem(&f2fs_fs_type);
+ if (err)
+ goto fail;
+ f2fs_create_root_stats();
fail:
return err;
}
static void __exit exit_f2fs_fs(void)
{
- destroy_root_stats();
+ f2fs_destroy_root_stats();
unregister_filesystem(&f2fs_fs_type);
destroy_checkpoint_caches();
destroy_gc_caches();
if (name_len > 255 || value_len > MAX_VALUE_LEN)
return -ERANGE;
+ f2fs_balance_fs(sbi);
+
mutex_lock_op(sbi, NODE_NEW);
if (!fi->i_xattr_nid) {
/* Allocate new attribute block */
With FUSE it is possible to implement a fully functional filesystem
in a userspace program.
- There's also companion library: libfuse. This library along with
- utilities is available from the FUSE homepage:
+ There's also a companion library: libfuse2. This library is available
+ from the FUSE homepage:
<http://fuse.sourceforge.net/>
+ although chances are your distribution already has that library
+ installed if you've installed the "fuse" package itself.
See <file:Documentation/filesystems/fuse.txt> for more information.
See <file:Documentation/Changes> for needed library/utility version.
If you want to develop a userspace FS, or if you want to use
a filesystem based on FUSE, answer Y or M.
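
For the userspace side, a minimal sketch of such a filesystem against the
libfuse v2 high-level API, modelled on the classic "hello" example that
ships with libfuse (the hello_* names are illustrative, not a kernel
interface):

	/* hello.c: a read-only FS exposing one file, /hello.
	 * Build, assuming libfuse v2 is installed:
	 *   gcc hello.c `pkg-config fuse --cflags --libs` -o hello
	 */
	#define FUSE_USE_VERSION 26
	#include <fuse.h>
	#include <string.h>
	#include <errno.h>
	#include <sys/stat.h>

	static const char *hello_str = "Hello, FUSE!\n";
	static const char *hello_path = "/hello";

	/* Report a root directory and a single regular file. */
	static int hello_getattr(const char *path, struct stat *stbuf)
	{
		memset(stbuf, 0, sizeof(struct stat));
		if (strcmp(path, "/") == 0) {
			stbuf->st_mode = S_IFDIR | 0755;
			stbuf->st_nlink = 2;
		} else if (strcmp(path, hello_path) == 0) {
			stbuf->st_mode = S_IFREG | 0444;
			stbuf->st_nlink = 1;
			stbuf->st_size = strlen(hello_str);
		} else {
			return -ENOENT;
		}
		return 0;
	}

	/* The root directory lists ".", ".." and "hello". */
	static int hello_readdir(const char *path, void *buf,
				 fuse_fill_dir_t filler, off_t offset,
				 struct fuse_file_info *fi)
	{
		if (strcmp(path, "/") != 0)
			return -ENOENT;
		filler(buf, ".", NULL, 0);
		filler(buf, "..", NULL, 0);
		filler(buf, hello_path + 1, NULL, 0);
		return 0;
	}

	/* Serve reads out of the static string. */
	static int hello_read(const char *path, char *buf, size_t size,
			      off_t offset, struct fuse_file_info *fi)
	{
		size_t len = strlen(hello_str);

		if (strcmp(path, hello_path) != 0)
			return -ENOENT;
		if ((size_t)offset >= len)
			return 0;
		if (offset + size > len)
			size = len - offset;
		memcpy(buf, hello_str + offset, size);
		return size;
	}

	static struct fuse_operations hello_oper = {
		.getattr = hello_getattr,
		.readdir = hello_readdir,
		.read    = hello_read,
	};

	int main(int argc, char *argv[])
	{
		/* fuse_main() parses the mountpoint from argv and runs the
		 * event loop, dispatching kernel requests to the callbacks
		 * above. */
		return fuse_main(argc, argv, &hello_oper, NULL);
	}

Mount it with "./hello /mnt/point" and unmount with
"fusermount -u /mnt/point".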
+
+config CUSE
+ tristate "Character device in Userspace support"
+ depends on FUSE_FS
+ help
+ This FUSE extension allows character devices to be
+ implemented in userspace.
+
+ If you want to develop or use a userspace character device
+ based on CUSE, answer Y or M.
#include <linux/miscdevice.h>
#include <linux/mutex.h>
#include <linux/slab.h>
-#include <linux/spinlock.h>
#include <linux/stat.h>
#include <linux/module.h>
bool unrestricted_ioctl;
};
-static DEFINE_SPINLOCK(cuse_lock); /* protects cuse_conntbl */
+static DEFINE_MUTEX(cuse_lock); /* protects registration */
static struct list_head cuse_conntbl[CUSE_CONNTBL_LEN];
static struct class *cuse_class;
int rc;
/* look up and get the connection */
- spin_lock(&cuse_lock);
+ mutex_lock(&cuse_lock);
list_for_each_entry(pos, cuse_conntbl_head(devt), list)
if (pos->dev->devt == devt) {
fuse_conn_get(&pos->fc);
cc = pos;
break;
}
- spin_unlock(&cuse_lock);
+ mutex_unlock(&cuse_lock);
/* dead? */
if (!cc)
static int cuse_parse_devinfo(char *p, size_t len, struct cuse_devinfo *devinfo)
{
char *end = p + len;
- char *key, *val;
+ char *uninitialized_var(key), *uninitialized_var(val);
int rc;
while (true) {
*/
static void cuse_process_init_reply(struct fuse_conn *fc, struct fuse_req *req)
{
- struct cuse_conn *cc = fc_to_cc(fc);
+ struct cuse_conn *cc = fc_to_cc(fc), *pos;
struct cuse_init_out *arg = req->out.args[0].value;
struct page *page = req->pages[0];
struct cuse_devinfo devinfo = { };
struct device *dev;
struct cdev *cdev;
dev_t devt;
- int rc;
+ int rc, i;
if (req->out.h.error ||
arg->major != FUSE_KERNEL_VERSION || arg->minor < 11) {
dev_set_drvdata(dev, cc);
dev_set_name(dev, "%s", devinfo.name);
+ mutex_lock(&cuse_lock);
+
+ /* make sure the device-name is unique */
+ for (i = 0; i < CUSE_CONNTBL_LEN; ++i) {
+ list_for_each_entry(pos, &cuse_conntbl[i], list)
+ if (!strcmp(dev_name(pos->dev), dev_name(dev)))
+ goto err_unlock;
+ }
+
rc = device_add(dev);
if (rc)
- goto err_device;
+ goto err_unlock;
/* register cdev */
rc = -ENOMEM;
cdev = cdev_alloc();
if (!cdev)
- goto err_device;
+ goto err_unlock;
cdev->owner = THIS_MODULE;
cdev->ops = &cuse_frontend_fops;
cc->cdev = cdev;
/* make the device available */
- spin_lock(&cuse_lock);
list_add(&cc->list, cuse_conntbl_head(devt));
- spin_unlock(&cuse_lock);
+ mutex_unlock(&cuse_lock);
/* announce device availability */
dev_set_uevent_suppress(dev, 0);
err_cdev:
cdev_del(cdev);
-err_device:
+err_unlock:
+ mutex_unlock(&cuse_lock);
put_device(dev);
err_region:
unregister_chrdev_region(devt, 1);
int rc;
/* remove from the conntbl, no more access from this point on */
- spin_lock(&cuse_lock);
+ mutex_lock(&cuse_lock);
list_del_init(&cc->list);
- spin_unlock(&cuse_lock);
+ mutex_unlock(&cuse_lock);
/* remove device */
if (cc->dev)
struct page *oldpage = *pagep;
struct page *newpage;
struct pipe_buffer *buf = cs->pipebufs;
- struct address_space *mapping;
- pgoff_t index;
unlock_request(cs->fc, cs->req);
fuse_copy_finish(cs);
if (fuse_check_page(newpage) != 0)
goto out_fallback_unlock;
- mapping = oldpage->mapping;
- index = oldpage->index;
-
/*
* This is a new and locked page, it shouldn't be mapped or
* have any special flags on it
return ret;
}
-long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
- loff_t length)
+static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
+ loff_t length)
{
struct fuse_file *ff = file->private_data;
struct fuse_conn *fc = ff->fc;
return err;
}
-EXPORT_SYMBOL_GPL(fuse_file_fallocate);
static const struct file_operations fuse_file_operations = {
.llseek = fuse_file_llseek,
{
struct gfs2_sbd *sdp = gl->gl_sbd;
struct lm_lockstruct *ls = &sdp->sd_lockstruct;
+ int lvb_needs_unlock = 0;
int error;
if (gl->gl_lksb.sb_lkid == 0) {
gfs2_update_request_times(gl);
/* don't want to skip dlm_unlock writing the lvb when lock is ex */
+
+ if (gl->gl_lksb.sb_lvbptr && (gl->gl_state == LM_ST_EXCLUSIVE))
+ lvb_needs_unlock = 1;
+
if (test_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags) &&
- gl->gl_lksb.sb_lvbptr && (gl->gl_state != LM_ST_EXCLUSIVE)) {
+ !lvb_needs_unlock) {
gfs2_glock_free(gl);
return;
}
return mnt;
}
+static int
+nfs_namespace_getattr(struct vfsmount *mnt, struct dentry *dentry, struct kstat *stat)
+{
+ if (NFS_FH(dentry->d_inode)->size != 0)
+ return nfs_getattr(mnt, dentry, stat);
+ generic_fillattr(dentry->d_inode, stat);
+ return 0;
+}
+
+static int
+nfs_namespace_setattr(struct dentry *dentry, struct iattr *attr)
+{
+ if (NFS_FH(dentry->d_inode)->size != 0)
+ return nfs_setattr(dentry, attr);
+ return -EACCES;
+}
+
const struct inode_operations nfs_mountpoint_inode_operations = {
.getattr = nfs_getattr,
+ .setattr = nfs_setattr,
};
const struct inode_operations nfs_referral_inode_operations = {
+ .getattr = nfs_namespace_getattr,
+ .setattr = nfs_namespace_setattr,
};
static void nfs_expire_automounts(struct work_struct *work)
error = nfs4_discover_server_trunking(clp, &old);
if (error < 0)
goto error;
+ nfs_put_client(clp);
if (clp != old) {
clp->cl_preserve_clid = true;
- nfs_put_client(clp);
clp = old;
- atomic_inc(&clp->cl_count);
}
return clp;
.clientid = new->cl_clientid,
.confirm = new->cl_confirm,
};
- int status;
+ int status = -NFS4ERR_STALE_CLIENTID;
spin_lock(&nn->nfs_client_lock);
list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) {
if (prev)
nfs_put_client(prev);
+ prev = pos;
status = nfs4_proc_setclientid_confirm(pos, &clid, cred);
- if (status == 0) {
+ switch (status) {
+ case -NFS4ERR_STALE_CLIENTID:
+ break;
+ case 0:
nfs4_swap_callback_idents(pos, new);
- nfs_put_client(pos);
+ prev = NULL;
*result = pos;
dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n",
__func__, pos, atomic_read(&pos->cl_count));
- return 0;
- }
- if (status != -NFS4ERR_STALE_CLIENTID) {
- nfs_put_client(pos);
- dprintk("NFS: <-- %s status = %d, no result\n",
- __func__, status);
- return status;
+ default:
+ goto out;
}
spin_lock(&nn->nfs_client_lock);
- prev = pos;
}
+ spin_unlock(&nn->nfs_client_lock);
- /*
- * No matching nfs_client found. This should be impossible,
- * because the new nfs_client has already been added to
- * nfs_client_list by nfs_get_client().
- *
- * Don't BUG(), since the caller is holding a mutex.
- */
+ /* No match found. The server lost our clientid */
+out:
if (prev)
nfs_put_client(prev);
- spin_unlock(&nn->nfs_client_lock);
- pr_err("NFS: %s Error: no matching nfs_client found\n", __func__);
- return -NFS4ERR_STALE_CLIENTID;
+ dprintk("NFS: <-- %s status = %d\n", __func__, status);
+ return status;
}
#ifdef CONFIG_NFS_V4_1
{
struct nfs_net *nn = net_generic(new->cl_net, nfs_net_id);
struct nfs_client *pos, *n, *prev = NULL;
- int error;
+ int status = -NFS4ERR_STALE_CLIENTID;
spin_lock(&nn->nfs_client_lock);
list_for_each_entry_safe(pos, n, &nn->nfs_client_list, cl_share_link) {
nfs_put_client(prev);
prev = pos;
- error = nfs_wait_client_init_complete(pos);
- if (error < 0) {
+ nfs4_schedule_lease_recovery(pos);
+ status = nfs_wait_client_init_complete(pos);
+ if (status < 0) {
nfs_put_client(pos);
spin_lock(&nn->nfs_client_lock);
continue;
}
-
+ status = pos->cl_cons_state;
spin_lock(&nn->nfs_client_lock);
+ if (status < 0)
+ continue;
}
if (pos->rpc_ops != new->rpc_ops)
if (!nfs4_match_serverowners(pos, new))
continue;
+ atomic_inc(&pos->cl_count);
spin_unlock(&nn->nfs_client_lock);
dprintk("NFS: <-- %s using nfs_client = %p ({%d})\n",
__func__, pos, atomic_read(&pos->cl_count));
return 0;
}
- /*
- * No matching nfs_client found. This should be impossible,
- * because the new nfs_client has already been added to
- * nfs_client_list by nfs_get_client().
- *
- * Don't BUG(), since the caller is holding a mutex.
- */
+ /* No matching nfs_client found. */
spin_unlock(&nn->nfs_client_lock);
- pr_err("NFS: %s Error: no matching nfs_client found\n", __func__);
- return -NFS4ERR_STALE_CLIENTID;
+ dprintk("NFS: <-- %s status = %d\n", __func__, status);
+ return status;
}
#endif /* CONFIG_NFS_V4_1 */
clp->cl_confirm = clid.confirm;
status = nfs40_walk_client_list(clp, result, cred);
- switch (status) {
- case -NFS4ERR_STALE_CLIENTID:
- set_bit(NFS4CLNT_LEASE_CONFIRM, &clp->cl_state);
- case 0:
+ if (status == 0) {
/* Sustain the lease, even if it's empty. If the clientid4
* goes stale it's of no use for trunking discovery. */
nfs4_schedule_state_renewal(*result);
- break;
}
-
out:
return status;
}
case -ETIMEDOUT:
case -EAGAIN:
ssleep(1);
+ case -NFS4ERR_STALE_CLIENTID:
dprintk("NFS: %s after status %d, retrying\n",
__func__, status);
goto again;
nfs4_begin_drain_session(clp);
cred = nfs4_get_exchange_id_cred(clp);
status = nfs4_proc_destroy_session(clp->cl_session, cred);
- if (status && status != -NFS4ERR_BADSESSION &&
- status != -NFS4ERR_DEADSESSION) {
+ switch (status) {
+ case 0:
+ case -NFS4ERR_BADSESSION:
+ case -NFS4ERR_DEADSESSION:
+ break;
+ case -NFS4ERR_BACK_CHAN_BUSY:
+ case -NFS4ERR_DELAY:
+ set_bit(NFS4CLNT_SESSION_RESET, &clp->cl_state);
+ status = 0;
+ ssleep(1);
+ goto out;
+ default:
status = nfs4_recovery_handle_error(clp, status);
goto out;
}
struct nfs_server *server;
struct dentry *mntroot = ERR_PTR(-ENOMEM);
struct nfs_subversion *nfs_mod = NFS_SB(data->sb)->nfs_client->cl_nfs_mod;
- int error;
- dprintk("--> nfs_xdev_mount_common()\n");
+ dprintk("--> nfs_xdev_mount()\n");
mount_info.mntfh = mount_info.cloned->fh;
/* create a new volume representation */
server = nfs_mod->rpc_ops->clone_server(NFS_SB(data->sb), data->fh, data->fattr, data->authflavor);
- if (IS_ERR(server)) {
- error = PTR_ERR(server);
- goto out_err;
- }
- mntroot = nfs_fs_mount_common(server, flags, dev_name, &mount_info, nfs_mod);
- dprintk("<-- nfs_xdev_mount_common() = 0\n");
-out:
- return mntroot;
+ if (IS_ERR(server))
+ mntroot = ERR_CAST(server);
+ else
+ mntroot = nfs_fs_mount_common(server, flags,
+ dev_name, &mount_info, nfs_mod);
-out_err:
- dprintk("<-- nfs_xdev_mount_common() = %d [error]\n", error);
- goto out;
+ dprintk("<-- nfs_xdev_mount() = %ld\n",
+ IS_ERR(mntroot) ? PTR_ERR(mntroot) : 0L);
+ return mntroot;
}
#if IS_ENABLED(CONFIG_NFS_V4)
if (ret < 0)
printk(KERN_ERR "NILFS: GC failed during preparation: "
"cannot read source blocks: err=%d\n", ret);
- else
+ else {
+ if (nilfs_sb_need_update(nilfs))
+ set_nilfs_discontinued(nilfs);
ret = nilfs_clean_segments(inode->i_sb, argv, kbufs);
+ }
nilfs_remove_all_gcinodes(nilfs);
clear_nilfs_gc_running(nilfs);
}
if (ioend->io_iocb) {
+ inode_dio_done(ioend->io_inode);
if (ioend->io_isasync) {
aio_complete(ioend->io_iocb, ioend->io_error ?
ioend->io_error : ioend->io_result, 0);
}
- inode_dio_done(ioend->io_inode);
}
mempool_free(ioend, xfs_ioend_pool);
return error;
}
- if (bma->flags & XFS_BMAPI_STACK_SWITCH)
- bma->stack_switch = 1;
-
error = xfs_bmap_alloc(bma);
if (error)
return error;
bma.flist = flist;
bma.firstblock = firstblock;
+ if (flags & XFS_BMAPI_STACK_SWITCH)
+ bma.stack_switch = 1;
+
while (bno < end && n < *nmap) {
inhole = eof || bma.got.br_startoff > bno;
wasdelay = !inhole && isnullstartblock(bma.got.br_startblock);
struct rb_node *parent;
xfs_buf_t *bp;
xfs_daddr_t blkno = map[0].bm_bn;
+ xfs_daddr_t eofs;
int numblks = 0;
int i;
ASSERT(!(numbytes < (1 << btp->bt_sshift)));
ASSERT(!(BBTOB(blkno) & (xfs_off_t)btp->bt_smask));
+ /*
+ * Corrupted block numbers can get through to here, unfortunately, so we
+ * have to check that the buffer falls within the filesystem bounds.
+ */
+ eofs = XFS_FSB_TO_BB(btp->bt_mount, btp->bt_mount->m_sb.sb_dblocks);
+ if (blkno >= eofs) {
+ /*
+ * XXX (dgc): we should really be returning EFSCORRUPTED here,
+ * but none of the higher level infrastructure supports
+ * returning a specific error on buffer lookup failures.
+ */
+ xfs_alert(btp->bt_mount,
+ "%s: Block out of range: block 0x%llx, EOFS 0x%llx ",
+ __func__, blkno, eofs);
+ return NULL;
+ }
+
/* get tree root */
pag = xfs_perag_get(btp->bt_mount,
xfs_daddr_to_agno(btp->bt_mount, blkno));
while (!list_empty(&btp->bt_lru)) {
bp = list_first_entry(&btp->bt_lru, struct xfs_buf, b_lru);
if (atomic_read(&bp->b_hold) > 1) {
+ trace_xfs_buf_wait_buftarg(bp, _RET_IP_);
+ list_move_tail(&bp->b_lru, &btp->bt_lru);
spin_unlock(&btp->bt_lru_lock);
delay(100);
goto restart;
/*
* If the buf item isn't tracking any data, free it, otherwise drop the
- * reference we hold to it.
+ * reference we hold to it. If we are aborting the transaction, this may
+ * be the only reference to the buf item, so we free it anyway
+ * regardless of whether it is dirty or not. A dirty abort implies a
+ * shutdown, anyway.
*/
clean = 1;
for (i = 0; i < bip->bli_format_count; i++) {
}
if (clean)
xfs_buf_item_relse(bp);
- else
+ else if (aborted) {
+ if (atomic_dec_and_test(&bip->bli_refcount)) {
+ ASSERT(XFS_FORCED_SHUTDOWN(lip->li_mountp));
+ xfs_buf_item_relse(bp);
+ }
+ } else
atomic_dec(&bip->bli_refcount);
if (!hold)
goto out_unlock;
}
- error = -filemap_write_and_wait(VFS_I(ip)->i_mapping);
+ error = -filemap_write_and_wait(VFS_I(tip)->i_mapping);
if (error)
goto out_unlock;
- truncate_pagecache_range(VFS_I(ip), 0, -1);
+ truncate_pagecache_range(VFS_I(tip), 0, -1);
/* Verify O_DIRECT for ftmp */
if (VN_CACHED(VFS_I(tip)) != 0) {
}
if (shift)
alloc_blocks >>= shift;
+
+ /*
+ * If we are still trying to allocate more space than is
+ * available, squash the prealloc hard. This can happen if we
+ * have a large file on a small filesystem and the above
+ * lowspace thresholds are smaller than MAXEXTLEN.
+ */
+ while (alloc_blocks >= freesp)
+ alloc_blocks >>= 4;
}
if (alloc_blocks < mp->m_writeio_blocks)
return;
}
/* quietly fail */
- xfs_buf_ioerror(bp, EFSCORRUPTED);
+ xfs_buf_ioerror(bp, EWRONGFS);
}
static void
DEFINE_BUF_EVENT(xfs_buf_item_iodone);
DEFINE_BUF_EVENT(xfs_buf_item_iodone_async);
DEFINE_BUF_EVENT(xfs_buf_error_relse);
+DEFINE_BUF_EVENT(xfs_buf_wait_buftarg);
DEFINE_BUF_EVENT(xfs_trans_read_buf_io);
DEFINE_BUF_EVENT(xfs_trans_read_buf_shut);
dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
dma_addr_t dma_handle);
+static inline void *dma_alloc_attrs(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flag,
+ struct dma_attrs *attrs)
+{
+ /* attrs is not supported and ignored */
+ return dma_alloc_coherent(dev, size, dma_handle, flag);
+}
+
+static inline void dma_free_attrs(struct device *dev, size_t size,
+ void *cpu_addr, dma_addr_t dma_handle,
+ struct dma_attrs *attrs)
+{
+ /* attrs is not supported and ignored */
+ dma_free_coherent(dev, size, cpu_addr, dma_handle);
+}
+
#define dma_alloc_noncoherent(d, s, h, f) dma_alloc_coherent(d, s, h, f)
#define dma_free_noncoherent(d, s, v, h) dma_free_coherent(d, s, v, h)
ATA_LOG_SATA_NCQ = 0x10,
ATA_LOG_SATA_ID_DEV_DATA = 0x30,
ATA_LOG_SATA_SETTINGS = 0x08,
- ATA_LOG_DEVSLP_MDAT = 0x30,
+ ATA_LOG_DEVSLP_OFFSET = 0x30,
+ ATA_LOG_DEVSLP_SIZE = 0x08,
+ ATA_LOG_DEVSLP_MDAT = 0x00,
ATA_LOG_DEVSLP_MDAT_MASK = 0x1F,
- ATA_LOG_DEVSLP_DETO = 0x31,
- ATA_LOG_DEVSLP_VALID = 0x37,
+ ATA_LOG_DEVSLP_DETO = 0x01,
+ ATA_LOG_DEVSLP_VALID = 0x07,
ATA_LOG_DEVSLP_VALID_MASK = 0x80,
/* READ/WRITE LONG (obsolete) */
#endif
/*
- * We play games with efi_enabled so that the compiler will, if possible, remove
- * EFI-related code altogether.
+ * We play games with efi_enabled so that the compiler will, if
+ * possible, remove EFI-related code altogether.
*/
+#define EFI_BOOT 0 /* Were we booted from EFI? */
+#define EFI_SYSTEM_TABLES 1 /* Can we use EFI system tables? */
+#define EFI_CONFIG_TABLES 2 /* Can we use EFI config tables? */
+#define EFI_RUNTIME_SERVICES 3 /* Can we use runtime services? */
+#define EFI_MEMMAP 4 /* Can we use EFI memory map? */
+#define EFI_64BIT 5 /* Is the firmware 64-bit? */
+
#ifdef CONFIG_EFI
# ifdef CONFIG_X86
- extern int efi_enabled;
- extern bool efi_64bit;
+extern int efi_enabled(int facility);
# else
-# define efi_enabled 1
+static inline int efi_enabled(int facility)
+{
+ return 1;
+}
# endif
#else
-# define efi_enabled 0
+static inline int efi_enabled(int facility)
+{
+ return 0;
+}
#endif
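
To make the dead-code elimination concrete: with CONFIG_EFI unset,
efi_enabled() is a constant-returning inline, so a guarded call site
folds away at compile time. A minimal sketch, where
parse_efi_memory_map() is a hypothetical helper, not a real kernel
function:

	if (efi_enabled(EFI_MEMMAP))
		parse_efi_memory_map();	/* hypothetical; eliminated when CONFIG_EFI is off */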
/*
u32 gscr[SATA_PMP_GSCR_DWORDS]; /* PMP GSCR block */
};
- /* Identify Device Data Log (30h), SATA Settings (page 08h) */
- u8 sata_settings[ATA_SECT_SIZE];
+ /* DEVSLP Timing Variables from Identify Device Data Log */
+ u8 devslp_timing[ATA_LOG_DEVSLP_SIZE];
/* error history */
int spdn_cnt;
(pos) = llist_entry((pos)->member.next, typeof(*(pos)), member))
/**
+ * llist_for_each_entry_safe - iterate over some entries of a lock-less list
+ * of given type, safe against removal of the current entry.
+ * @pos: the type * to use as a loop cursor.
+ * @n: a &struct llist_node pointer to use as temporary storage.
+ * @node: the first entry of deleted list entries.
+ * @member: the name of the llist_node within the struct.
+ *
+ * In general, some entries of the lock-less list can be traversed
+ * safely only after being removed from the list, so start with an entry
+ * instead of the list head. This variant allows removal of entries
+ * as we iterate.
+ *
+ * When used directly on entries deleted from a lock-less list, the
+ * traversal order is from the newest to the oldest added entry. If
+ * you want to traverse from the oldest to the newest, you must
+ * reverse the order yourself before traversing.
+ */
+#define llist_for_each_entry_safe(pos, n, node, member) \
+ for ((pos) = llist_entry((node), typeof(*(pos)), member), \
+ (n) = (pos)->member.next; \
+ &(pos)->member != NULL; \
+ (pos) = llist_entry(n, typeof(*(pos)), member), \
+ (n) = (&(pos)->member != NULL) ? (pos)->member.next : NULL)
+
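As a usage sketch (not part of this patch): draining a list and freeing
every entry, newest first. struct my_work and drain_all() are
hypothetical; note that with this variant @n must be a plain
struct llist_node *:

	/* Assumes <linux/llist.h> and <linux/slab.h>. */
	struct my_work {
		int payload;
		struct llist_node node;
	};

	static void drain_all(struct llist_head *head)
	{
		struct my_work *pos;
		struct llist_node *n;	/* temporary storage for the macro */
		struct llist_node *first = llist_del_all(head);

		/* Only detached entries may be traversed safely, and the
		 * macro expects a non-NULL start entry. */
		if (!first)
			return;

		llist_for_each_entry_safe(pos, n, first, node) {
			/* 'n' already caches the next node, so freeing
			 * 'pos' inside the loop body is safe. */
			kfree(pos);
		}
	}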
+/**
* llist_empty - tests whether a lock-less list is empty
* @head: the list to test
*
* the slab_mutex must be held when looping through those caches
*/
#define for_each_memcg_cache_index(_idx) \
- for ((_idx) = 0; i < memcg_limited_groups_array_size; (_idx)++)
+ for ((_idx) = 0; (_idx) < memcg_limited_groups_array_size; (_idx)++)
static inline bool memcg_kmem_enabled(void)
{
const struct abx500_fg_parameters *fg_params;
};
-extern struct abx500_bm_data ab8500_bm_data;
-
enum {
NTC_EXTERNAL = 0,
NTC_INTERNAL,
struct ab8500_btemp;
struct ab8500_gpadc;
struct ab8500_fg;
+
#ifdef CONFIG_AB8500_BM
+extern struct abx500_bm_data ab8500_bm_data;
+
void ab8500_fg_reinit(void);
void ab8500_charger_usb_state_changed(u8 bm_usb_state, u16 mA);
struct ab8500_btemp *ab8500_btemp_get(void);
int ab8500_fg_inst_curr_done(struct ab8500_fg *di);
#else
-int ab8500_fg_inst_curr_done(struct ab8500_fg *di)
-{
-}
-static void ab8500_fg_reinit(void)
-{
-}
-static void ab8500_charger_usb_state_changed(u8 bm_usb_state, u16 mA)
-{
-}
-static struct ab8500_btemp *ab8500_btemp_get(void)
-{
- return NULL;
-}
-static int ab8500_btemp_get_batctrl_temp(struct ab8500_btemp *btemp)
-{
- return 0;
-}
-struct ab8500_fg *ab8500_fg_get(void)
-{
- return NULL;
-}
-static int ab8500_fg_inst_curr_blocking(struct ab8500_fg *dev)
-{
- return -ENODEV;
-}
+static struct abx500_bm_data ab8500_bm_data;
static inline int ab8500_fg_inst_curr_start(struct ab8500_fg *di)
{
u8 chip_id;
int chip_irq;
+
+ /* SOC I/O transfer related fixes for DA9052/53 */
+ int (*fix_io) (struct da9052 *da9052, unsigned char reg);
};
/* ADC API */
ret = regmap_read(da9052->regmap, reg, &val);
if (ret < 0)
return ret;
+
+ if (da9052->fix_io) {
+ ret = da9052->fix_io(da9052, reg);
+ if (ret < 0)
+ return ret;
+ }
+
return val;
}
static inline int da9052_reg_write(struct da9052 *da9052, unsigned char reg,
unsigned char val)
{
- return regmap_write(da9052->regmap, reg, val);
+ int ret;
+
+ ret = regmap_write(da9052->regmap, reg, val);
+ if (ret < 0)
+ return ret;
+
+ if (da9052->fix_io) {
+ ret = da9052->fix_io(da9052, reg);
+ if (ret < 0)
+ return ret;
+ }
+
+ return ret;
}
static inline int da9052_group_read(struct da9052 *da9052, unsigned char reg,
unsigned reg_cnt, unsigned char *val)
{
- return regmap_bulk_read(da9052->regmap, reg, val, reg_cnt);
+ int ret;
+
+ ret = regmap_bulk_read(da9052->regmap, reg, val, reg_cnt);
+ if (ret < 0)
+ return ret;
+
+ if (da9052->fix_io) {
+ ret = da9052->fix_io(da9052, reg);
+ if (ret < 0)
+ return ret;
+ }
+
+ return ret;
}
static inline int da9052_group_write(struct da9052 *da9052, unsigned char reg,
unsigned reg_cnt, unsigned char *val)
{
- return regmap_raw_write(da9052->regmap, reg, val, reg_cnt);
+ int ret;
+
+ ret = regmap_raw_write(da9052->regmap, reg, val, reg_cnt);
+ if (ret < 0)
+ return ret;
+
+ if (da9052->fix_io) {
+ ret = da9052->fix_io(da9052, reg);
+ if (ret < 0)
+ return ret;
+ }
+
+ return ret;
}
static inline int da9052_reg_update(struct da9052 *da9052, unsigned char reg,
unsigned char bit_mask,
unsigned char reg_val)
{
- return regmap_update_bits(da9052->regmap, reg, bit_mask, reg_val);
+ int ret;
+
+ ret = regmap_update_bits(da9052->regmap, reg, bit_mask, reg_val);
+ if (ret < 0)
+ return ret;
+
+ if (da9052->fix_io) {
+ ret = da9052->fix_io(da9052, reg);
+ if (ret < 0)
+ return ret;
+ }
+
+ return ret;
}
int da9052_device_init(struct da9052 *da9052, u8 chip_id);
#define DA9052_STATUS_C_REG 3
#define DA9052_STATUS_D_REG 4
+/* PARK REGISTER */
+#define DA9052_PARK_REGISTER DA9052_STATUS_D_REG
+
/* EVENT REGISTERS */
#define DA9052_EVENT_A_REG 5
#define DA9052_EVENT_B_REG 6
#define RTSX_SD_CARD 0
#define RTSX_MS_CARD 1
+#define CLK_TO_DIV_N 0
+#define DIV_N_TO_CLK 1
+
struct platform_device;
struct rtsx_slot {
#define SG_TRANS_DATA (0x02 << 4)
#define SG_LINK_DESC (0x03 << 4)
-/* SD bank voltage */
-#define SD_IO_3V3 0
-#define SD_IO_1V8 1
-
+/* Output voltage */
+#define OUTPUT_3V3 0
+#define OUTPUT_1V8 1
/* Card Clock Enable Register */
#define SD_CLK_EN 0x04
#define CHANGE_CLK 0x01
/* LDO_CTL */
+#define BPP_ASIC_1V7 0x00
+#define BPP_ASIC_1V8 0x01
+#define BPP_ASIC_1V9 0x02
+#define BPP_ASIC_2V0 0x03
+#define BPP_ASIC_2V7 0x04
+#define BPP_ASIC_2V8 0x05
+#define BPP_ASIC_3V2 0x06
+#define BPP_ASIC_3V3 0x07
+#define BPP_REG_TUNED18 0x07
+#define BPP_TUNED18_SHIFT_8402 5
+#define BPP_TUNED18_SHIFT_8411 4
+#define BPP_PAD_MASK 0x04
+#define BPP_PAD_3V3 0x04
+#define BPP_PAD_1V8 0x00
#define BPP_LDO_POWB 0x03
#define BPP_LDO_ON 0x00
#define BPP_LDO_SUSPEND 0x02
int (*disable_auto_blink)(struct rtsx_pcr *pcr);
int (*card_power_on)(struct rtsx_pcr *pcr, int card);
int (*card_power_off)(struct rtsx_pcr *pcr, int card);
+ int (*switch_output_voltage)(struct rtsx_pcr *pcr,
+ u8 voltage);
unsigned int (*cd_deglitch)(struct rtsx_pcr *pcr);
+ int (*conv_clk_and_div_n)(int clk, int dir);
};
enum PDEV_STAT {PDEV_STAT_IDLE, PDEV_STAT_RUN};
u8 ssc_depth, bool initial_mode, bool double_clk, bool vpclk);
int rtsx_pci_card_power_on(struct rtsx_pcr *pcr, int card);
int rtsx_pci_card_power_off(struct rtsx_pcr *pcr, int card);
+int rtsx_pci_switch_output_voltage(struct rtsx_pcr *pcr, u8 voltage);
unsigned int rtsx_pci_card_exist(struct rtsx_pcr *pcr);
void rtsx_pci_complete_unfinished_transfer(struct rtsx_pcr *pcr);
* Therefore notifier chains can only be traversed when either
*
* 1. mmap_sem is held.
- * 2. One of the reverse map locks is held (i_mmap_mutex or anon_vma->mutex).
+ * 2. One of the reverse map locks is held (i_mmap_mutex or anon_vma->rwsem).
* 3. No other concurrent thread can access the list (release)
*/
struct mmu_notifier {
struct module *source, *target;
};
-enum module_state
-{
- MODULE_STATE_LIVE,
- MODULE_STATE_COMING,
- MODULE_STATE_GOING,
+enum module_state {
+ MODULE_STATE_LIVE, /* Normal state. */
+ MODULE_STATE_COMING, /* Fully formed, running module_init. */
+ MODULE_STATE_GOING, /* Going away. */
+ MODULE_STATE_UNFORMED, /* Still setting it up. */
};
/**
extern void recalc_sigpending_and_wake(struct task_struct *t);
extern void recalc_sigpending(void);
-extern void signal_wake_up(struct task_struct *t, int resume_stopped);
+extern void signal_wake_up_state(struct task_struct *t, unsigned int state);
+
+static inline void signal_wake_up(struct task_struct *t, bool resume)
+{
+ signal_wake_up_state(t, resume ? TASK_WAKEKILL : 0);
+}
+static inline void ptrace_signal_wake_up(struct task_struct *t, bool resume)
+{
+ signal_wake_up_state(t, resume ? __TASK_TRACED : 0);
+}
/*
* Wrappers for p->thread_info->cpu access. No-op on UP.
* tells the LSM to decrement the number of secmark labeling rules loaded
* @req_classify_flow:
* Sets the flow's sid to the openreq sid.
+ * @tun_dev_alloc_security:
+ * This hook allows a module to allocate a security structure for a TUN
+ * device.
+ * @security pointer to a security structure pointer.
+ * Returns zero on success, negative values on failure.
+ * @tun_dev_free_security:
+ * This hook allows a module to free the security structure for a TUN
+ * device.
+ * @security pointer to the TUN device's security structure
* @tun_dev_create:
* Check permissions prior to creating a new TUN device.
- * @tun_dev_post_create:
- * This hook allows a module to update or allocate a per-socket security
- * structure.
- * @sk contains the newly created sock structure.
+ * @tun_dev_attach_queue:
+ * Check permissions prior to attaching to a TUN device queue.
+ * @security pointer to the TUN device's security structure.
* @tun_dev_attach:
- * Check permissions prior to attaching to a persistent TUN device. This
- * hook can also be used by the module to update any security state
+ * This hook can be used by the module to update any security state
* associated with the TUN device's sock structure.
* @sk contains the existing sock structure.
+ * @security pointer to the TUN device's security structure.
+ * @tun_dev_open:
+ * This hook can be used by the module to update any security state
+ * associated with the TUN device's security structure.
+ * @security pointer to the TUN device's security structure.
*
* Security hooks for XFRM operations.
*
void (*secmark_refcount_inc) (void);
void (*secmark_refcount_dec) (void);
void (*req_classify_flow) (const struct request_sock *req, struct flowi *fl);
- int (*tun_dev_create)(void);
- void (*tun_dev_post_create)(struct sock *sk);
- int (*tun_dev_attach)(struct sock *sk);
+ int (*tun_dev_alloc_security) (void **security);
+ void (*tun_dev_free_security) (void *security);
+ int (*tun_dev_create) (void);
+ int (*tun_dev_attach_queue) (void *security);
+ int (*tun_dev_attach) (struct sock *sk, void *security);
+ int (*tun_dev_open) (void *security);
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
int security_secmark_relabel_packet(u32 secid);
void security_secmark_refcount_inc(void);
void security_secmark_refcount_dec(void);
+int security_tun_dev_alloc_security(void **security);
+void security_tun_dev_free_security(void *security);
int security_tun_dev_create(void);
-void security_tun_dev_post_create(struct sock *sk);
-int security_tun_dev_attach(struct sock *sk);
+int security_tun_dev_attach_queue(void *security);
+int security_tun_dev_attach(struct sock *sk, void *security);
+int security_tun_dev_open(void *security);
#else /* CONFIG_SECURITY_NETWORK */
static inline int security_unix_stream_connect(struct sock *sock,
{
}
+static inline int security_tun_dev_alloc_security(void **security)
+{
+ return 0;
+}
+
+static inline void security_tun_dev_free_security(void *security)
+{
+}
+
static inline int security_tun_dev_create(void)
{
return 0;
}
-static inline void security_tun_dev_post_create(struct sock *sk)
+static inline int security_tun_dev_attach_queue(void *security)
+{
+ return 0;
+}
+
+static inline int security_tun_dev_attach(struct sock *sk, void *security)
{
+ return 0;
}
-static inline int security_tun_dev_attach(struct sock *sk)
+static inline int security_tun_dev_open(void *security)
{
return 0;
}
int bandwidth_int_reqs; /* number of Interrupt requests */
int bandwidth_isoc_reqs; /* number of Isoc. requests */
+ unsigned resuming_ports; /* bit array: resuming root-hub ports */
+
#if defined(CONFIG_USB_MON) || defined(CONFIG_USB_MON_MODULE)
struct mon_bus *mon_bus; /* non-null when associated */
int monitored; /* non-zero when monitored */
extern void usb_wakeup_notification(struct usb_device *hdev,
unsigned int portnum);
+extern void usb_hcd_start_port_resume(struct usb_bus *bus, int portnum);
+extern void usb_hcd_end_port_resume(struct usb_bus *bus, int portnum);
+
/* The D0/D1 toggle bits ... USE WITH CAUTION (they're almost hcd-internal) */
#define usb_gettoggle(dev, ep, out) (((dev)->toggle[out] >> (ep)) & 1)
#define usb_dotoggle(dev, ep, out) ((dev)->toggle[out] ^= (1 << (ep)))
wait_queue_head_t *wait;
struct mutex phy_mutex;
unsigned char suspend_count;
+ unsigned char pkt_cnt, pkt_err;
/* i/o info: pipes etc */
unsigned in, out;
# define EVENT_DEV_OPEN 7
# define EVENT_DEVICE_REPORT_IDLE 8
# define EVENT_NO_RUNTIME_PM 9
+# define EVENT_RX_KILL 10
};
static inline struct usb_driver *driver_of(struct usb_interface *intf)
*/
#define FLAG_MULTI_PACKET 0x2000
#define FLAG_RX_ASSEMBLE 0x4000 /* rx packets may span >1 frames */
+#define FLAG_NOARP 0x8000 /* device can't do ARP */
/* init device ... can sleep, or cause probe() failure */
int (*bind)(struct usbnet *, struct usb_interface *);
extern int ip4_datagram_connect(struct sock *sk,
struct sockaddr *uaddr, int addr_len);
+extern void ip4_datagram_release_cb(struct sock *sk);
+
struct ip_reply_arg {
struct kvec iov[1];
int flags;
extern int nf_conntrack_proto_init(struct net *net);
extern void nf_conntrack_proto_fini(struct net *net);
+extern void nf_conntrack_cleanup_end(void);
+
extern bool
nf_ct_get_tuple(const struct sk_buff *skb,
unsigned int nhoff,
struct sockaddr *uaddr,
int addr_len);
-extern int datagram_recv_ctl(struct sock *sk,
- struct msghdr *msg,
- struct sk_buff *skb);
-
-extern int datagram_send_ctl(struct net *net,
- struct sock *sk,
- struct msghdr *msg,
- struct flowi6 *fl6,
- struct ipv6_txoptions *opt,
- int *hlimit, int *tclass,
- int *dontfrag);
+extern int ip6_datagram_recv_ctl(struct sock *sk,
+ struct msghdr *msg,
+ struct sk_buff *skb);
+
+extern int ip6_datagram_send_ctl(struct net *net,
+ struct sock *sk,
+ struct msghdr *msg,
+ struct flowi6 *fl6,
+ struct ipv6_txoptions *opt,
+ int *hlimit, int *tclass,
+ int *dontfrag);
#define LOOPBACK4_IPV6 cpu_to_be32(0x7f000006)
#define USB_INTRF_FUNC_SUSPEND_LP (1 << (8 + 0))
#define USB_INTRF_FUNC_SUSPEND_RW (1 << (8 + 1))
+/*
+ * Interface status, Figure 9-5 USB 3.0 spec
+ */
+#define USB_INTRF_STAT_FUNC_RW_CAP 1
+#define USB_INTRF_STAT_FUNC_RW 2
+
#define USB_ENDPOINT_HALT 0 /* IN/OUT will STALL */
/* Bit array elements as returned by the USB_REQ_GET_STATUS request. */
pidmap_init();
anon_vma_init();
#ifdef CONFIG_X86
- if (efi_enabled)
+ if (efi_enabled(EFI_RUNTIME_SERVICES))
efi_enter_virtual_mode();
#endif
thread_info_cache_init();
acpi_early_init(); /* before LAPIC and SMP init */
sfi_init_late();
- if (efi_enabled) {
+ if (efi_enabled(EFI_RUNTIME_SERVICES)) {
efi_late_init();
efi_free_boot_services();
}
*/
static async_cookie_t __lowest_in_progress(struct async_domain *running)
{
+ async_cookie_t first_running = next_cookie; /* infinity value */
+ async_cookie_t first_pending = next_cookie; /* ditto */
struct async_entry *entry;
+ /*
+ * Both running and pending lists are sorted but not disjoint.
+ * Take the first cookies from both and return the min.
+ */
if (!list_empty(&running->domain)) {
entry = list_first_entry(&running->domain, typeof(*entry), list);
- return entry->cookie;
+ first_running = entry->cookie;
}
- list_for_each_entry(entry, &async_pending, list)
- if (entry->running == running)
- return entry->cookie;
+ list_for_each_entry(entry, &async_pending, list) {
+ if (entry->running == running) {
+ first_pending = entry->cookie;
+ break;
+ }
+ }
- return next_cookie; /* "infinity" value */
+ return min(first_running, first_pending);
}
static async_cookie_t lowest_in_progress(struct async_domain *running)
{
struct async_entry *entry =
container_of(work, struct async_entry, work);
+ struct async_entry *pos;
unsigned long flags;
ktime_t uninitialized_var(calltime), delta, rettime;
struct async_domain *running = entry->running;
- /* 1) move self to the running queue */
+ /* 1) move self to the running queue, make sure it stays sorted */
spin_lock_irqsave(&async_lock, flags);
- list_move_tail(&entry->list, &running->domain);
+ list_for_each_entry_reverse(pos, &running->domain, list)
+ if (entry->cookie < pos->cookie)
+ break;
+ list_move_tail(&entry->list, &pos->list);
spin_unlock_irqrestore(&async_lock, flags);
/* 2) run (and print duration) */
kdb_printf("Module Size modstruct Used by\n");
list_for_each_entry(mod, kdb_modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
kdb_printf("%-20s%8u 0x%p ", mod->name,
mod->core_size, (void *)mod);
}
/*
+ * Initialize event state based on the perf_event_attr::disabled.
+ */
+static inline void perf_event__state_init(struct perf_event *event)
+{
+ event->state = event->attr.disabled ? PERF_EVENT_STATE_OFF :
+ PERF_EVENT_STATE_INACTIVE;
+}
+
+/*
* Called at perf_event creation and when events are attached/detached from a
* group.
*/
event->overflow_handler = overflow_handler;
event->overflow_handler_context = context;
- if (attr->disabled)
- event->state = PERF_EVENT_STATE_OFF;
+ perf_event__state_init(event);
pmu = NULL;
mutex_lock(&gctx->mutex);
perf_remove_from_context(group_leader);
+
+ /*
+ * Removing from the context ends up with disabled
+ * event. What we want here is event in the initial
+ * startup state, ready to be add into new context.
+ */
+ perf_event__state_init(group_leader);
list_for_each_entry(sibling, &group_leader->sibling_list,
group_entry) {
perf_remove_from_context(sibling);
+ perf_event__state_init(sibling);
put_ctx(gctx);
}
mutex_unlock(&gctx->mutex);
ongoing or failed initialization etc. */
static inline int strong_try_module_get(struct module *mod)
{
+ BUG_ON(mod && mod->state == MODULE_STATE_UNFORMED);
if (mod && mod->state == MODULE_STATE_COMING)
return -EBUSY;
if (try_module_get(mod))
#endif
};
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
+
if (each_symbol_in_section(arr, ARRAY_SIZE(arr), mod, fn, data))
return true;
}
EXPORT_SYMBOL_GPL(find_symbol);
/* Search for module by name: must hold module_mutex. */
-struct module *find_module(const char *name)
+static struct module *find_module_all(const char *name,
+ bool even_unformed)
{
struct module *mod;
list_for_each_entry(mod, &modules, list) {
+ if (!even_unformed && mod->state == MODULE_STATE_UNFORMED)
+ continue;
if (strcmp(mod->name, name) == 0)
return mod;
}
return NULL;
}
+
+struct module *find_module(const char *name)
+{
+ return find_module_all(name, false);
+}
EXPORT_SYMBOL_GPL(find_module);
#ifdef CONFIG_SMP
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if (!mod->percpu_size)
continue;
for_each_possible_cpu(cpu) {
case MODULE_STATE_GOING:
state = "going";
break;
+ default:
+ BUG();
}
return sprintf(buffer, "%s\n", state);
}
mutex_lock(&module_mutex);
list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if ((mod->module_core) && (mod->core_text_size)) {
set_page_attributes(mod->module_core,
mod->module_core + mod->core_text_size,
mutex_lock(&module_mutex);
list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if ((mod->module_core) && (mod->core_text_size)) {
set_page_attributes(mod->module_core,
mod->module_core + mod->core_text_size,
err = -EFBIG;
goto out;
}
+
+ /* Don't hand 0 to vmalloc, it whines. */
+ if (stat.size == 0) {
+ err = -EINVAL;
+ goto out;
+ }
+
info->hdr = vmalloc(stat.size);
if (!info->hdr) {
err = -ENOMEM;
bool ret;
mutex_lock(&module_mutex);
- mod = find_module(name);
- ret = !mod || mod->state != MODULE_STATE_COMING;
+ mod = find_module_all(name, true);
+ ret = !mod || mod->state == MODULE_STATE_LIVE
+ || mod->state == MODULE_STATE_GOING;
mutex_unlock(&module_mutex);
return ret;
goto free_copy;
}
+ /*
+ * We try to place it in the list now to make sure it's unique
+ * before we dedicate too many resources. In particular, this
+ * guards against temporary percpu memory exhaustion.
+ */
+ mod->state = MODULE_STATE_UNFORMED;
+again:
+ mutex_lock(&module_mutex);
+ if ((old = find_module_all(mod->name, true)) != NULL) {
+ if (old->state == MODULE_STATE_COMING
+ || old->state == MODULE_STATE_UNFORMED) {
+ /* Wait in case it fails to load. */
+ mutex_unlock(&module_mutex);
+ err = wait_event_interruptible(module_wq,
+ finished_loading(mod->name));
+ if (err)
+ goto free_module;
+ goto again;
+ }
+ err = -EEXIST;
+ mutex_unlock(&module_mutex);
+ goto free_module;
+ }
+ list_add_rcu(&mod->list, &modules);
+ mutex_unlock(&module_mutex);
+
#ifdef CONFIG_MODULE_SIG
mod->sig_ok = info->sig_ok;
if (!mod->sig_ok)
/* Now module is in final location, initialize linked lists, etc. */
err = module_unload_init(mod);
if (err)
- goto free_module;
+ goto unlink_mod;
/* Now we've got everything in the final locations, we can
* find optional sections. */
goto free_arch_cleanup;
}
- /* Mark state as coming so strong_try_module_get() ignores us. */
- mod->state = MODULE_STATE_COMING;
-
- /* Now sew it into the lists so we can get lockdep and oops
- * info during argument parsing. No one should access us, since
- * strong_try_module_get() will fail.
- * lockdep/oops can run asynchronous, so use the RCU list insertion
- * function to insert in a way safe to concurrent readers.
- * The mutex protects against concurrent writers.
- */
-again:
- mutex_lock(&module_mutex);
- if ((old = find_module(mod->name)) != NULL) {
- if (old->state == MODULE_STATE_COMING) {
- /* Wait in case it fails to load. */
- mutex_unlock(&module_mutex);
- err = wait_event_interruptible(module_wq,
- finished_loading(mod->name));
- if (err)
- goto free_arch_cleanup;
- goto again;
- }
- err = -EEXIST;
- goto unlock;
- }
-
- /* This has to be done once we're sure module name is unique. */
dynamic_debug_setup(info->debug, info->num_debug);
- /* Find duplicate symbols */
+ mutex_lock(&module_mutex);
+ /* Find duplicate symbols (must be called under lock). */
err = verify_export_symbols(mod);
if (err < 0)
- goto ddebug;
+ goto ddebug_cleanup;
+ /* This relies on module_mutex for list integrity. */
module_bug_finalize(info->hdr, info->sechdrs, mod);
- list_add_rcu(&mod->list, &modules);
+
+ /* Mark state as coming so strong_try_module_get() ignores us,
+ * but kallsyms etc. can see us. */
+ mod->state = MODULE_STATE_COMING;
+
mutex_unlock(&module_mutex);
/* Module is ready to execute: parsing args may do that. */
err = parse_args(mod->name, mod->args, mod->kp, mod->num_kp,
-32768, 32767, &ddebug_dyndbg_module_param_cb);
if (err < 0)
- goto unlink;
+ goto bug_cleanup;
	/* Link in to sysfs. */
err = mod_sysfs_setup(mod, info, mod->kp, mod->num_kp);
if (err < 0)
- goto unlink;
+ goto bug_cleanup;
/* Get rid of temporary copy. */
free_copy(info);
return do_init_module(mod);
- unlink:
+ bug_cleanup:
+ /* module_bug_cleanup needs module_mutex protection */
mutex_lock(&module_mutex);
- /* Unlink carefully: kallsyms could be walking list. */
- list_del_rcu(&mod->list);
module_bug_cleanup(mod);
- wake_up_all(&module_wq);
- ddebug:
- dynamic_debug_remove(info->debug);
- unlock:
+ ddebug_cleanup:
mutex_unlock(&module_mutex);
+ dynamic_debug_remove(info->debug);
synchronize_sched();
kfree(mod->args);
free_arch_cleanup:
free_modinfo(mod);
free_unload:
module_unload_free(mod);
+ unlink_mod:
+ mutex_lock(&module_mutex);
+ /* Unlink carefully: kallsyms could be walking list. */
+ list_del_rcu(&mod->list);
+ wake_up_all(&module_wq);
+ mutex_unlock(&module_mutex);
free_module:
module_deallocate(mod, info);
free_copy:
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if (within_module_init(addr, mod) ||
within_module_core(addr, mod)) {
if (modname)
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if (within_module_init(addr, mod) ||
within_module_core(addr, mod)) {
const char *sym;
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if (within_module_init(addr, mod) ||
within_module_core(addr, mod)) {
const char *sym;
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if (symnum < mod->num_symtab) {
*value = mod->symtab[symnum].st_value;
*type = mod->symtab[symnum].st_info;
ret = mod_find_symname(mod, colon+1);
*colon = ':';
} else {
- list_for_each_entry_rcu(mod, &modules, list)
+ list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if ((ret = mod_find_symname(mod, name)) != 0)
break;
+ }
}
preempt_enable();
return ret;
int ret;
list_for_each_entry(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
for (i = 0; i < mod->num_symtab; i++) {
ret = fn(data, mod->strtab + mod->symtab[i].st_name,
mod, mod->symtab[i].st_value);
{
int bx = 0;
+ BUG_ON(mod->state == MODULE_STATE_UNFORMED);
if (mod->taints ||
mod->state == MODULE_STATE_GOING ||
mod->state == MODULE_STATE_COMING) {
struct module *mod = list_entry(p, struct module, list);
char buf[8];
+ /* We always ignore unformed modules. */
+ if (mod->state == MODULE_STATE_UNFORMED)
+ return 0;
+
seq_printf(m, "%s %u",
mod->name, mod->init_size + mod->core_size);
print_unload_info(m, mod);
preempt_disable();
list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if (mod->num_exentries == 0)
continue;
if (addr < module_addr_min || addr > module_addr_max)
return NULL;
- list_for_each_entry_rcu(mod, &modules, list)
+ list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
if (within_module_core(addr, mod)
|| within_module_init(addr, mod))
return mod;
+ }
return NULL;
}
EXPORT_SYMBOL_GPL(__module_address);
printk(KERN_DEFAULT "Modules linked in:");
/* Most callers should already have preempt disabled, but make sure */
preempt_disable();
- list_for_each_entry_rcu(mod, &modules, list)
+ list_for_each_entry_rcu(mod, &modules, list) {
+ if (mod->state == MODULE_STATE_UNFORMED)
+ continue;
printk(" %s%s", mod->name, module_flags(mod, buf));
+ }
preempt_enable();
if (last_unloaded_module[0])
printk(" [last unloaded: %s]", last_unloaded_module);
struct console *console_drivers;
EXPORT_SYMBOL_GPL(console_drivers);
-#ifdef CONFIG_LOCKDEP
-static struct lockdep_map console_lock_dep_map = {
- .name = "console_lock"
-};
-#endif
-
/*
* This is used for debugging the mess that is the VT code by
* keeping track if we have the console semaphore held. It's
return;
console_locked = 1;
console_may_schedule = 1;
- mutex_acquire(&console_lock_dep_map, 0, 0, _RET_IP_);
}
EXPORT_SYMBOL(console_lock);
}
console_locked = 1;
console_may_schedule = 0;
- mutex_acquire(&console_lock_dep_map, 0, 1, _RET_IP_);
return 1;
}
EXPORT_SYMBOL(console_trylock);
local_irq_restore(flags);
}
console_locked = 0;
- mutex_release(&console_lock_dep_map, 1, _RET_IP_);
/* Release the exclusive_console once it is used */
if (unlikely(exclusive_console))
* TASK_KILLABLE sleeps.
*/
if (child->jobctl & JOBCTL_STOP_PENDING || task_is_traced(child))
- signal_wake_up(child, task_is_traced(child));
+ ptrace_signal_wake_up(child, true);
spin_unlock(&child->sighand->siglock);
}
+/* Ensure that nothing can wake it up, even SIGKILL */
+static bool ptrace_freeze_traced(struct task_struct *task)
+{
+ bool ret = false;
+
+ /* Lockless, nobody but us can set this flag */
+ if (task->jobctl & JOBCTL_LISTENING)
+ return ret;
+
+ spin_lock_irq(&task->sighand->siglock);
+ if (task_is_traced(task) && !__fatal_signal_pending(task)) {
+ task->state = __TASK_TRACED;
+ ret = true;
+ }
+ spin_unlock_irq(&task->sighand->siglock);
+
+ return ret;
+}
+
+static void ptrace_unfreeze_traced(struct task_struct *task)
+{
+ if (task->state != __TASK_TRACED)
+ return;
+
+ WARN_ON(!task->ptrace || task->parent != current);
+
+ spin_lock_irq(&task->sighand->siglock);
+ if (__fatal_signal_pending(task))
+ wake_up_state(task, __TASK_TRACED);
+ else
+ task->state = TASK_TRACED;
+ spin_unlock_irq(&task->sighand->siglock);
+}
+
/**
* ptrace_check_attach - check whether ptracee is ready for ptrace operation
* @child: ptracee to check for
* be changed by us so it's not changing right after this.
*/
read_lock(&tasklist_lock);
- if ((child->ptrace & PT_PTRACED) && child->parent == current) {
+ if (child->ptrace && child->parent == current) {
+ WARN_ON(child->state == __TASK_TRACED);
/*
* child->sighand can't be NULL, release_task()
* does ptrace_unlink() before __exit_signal().
*/
- spin_lock_irq(&child->sighand->siglock);
- WARN_ON_ONCE(task_is_stopped(child));
- if (ignore_state || (task_is_traced(child) &&
- !(child->jobctl & JOBCTL_LISTENING)))
+ if (ignore_state || ptrace_freeze_traced(child))
ret = 0;
- spin_unlock_irq(&child->sighand->siglock);
}
read_unlock(&tasklist_lock);
- if (!ret && !ignore_state)
- ret = wait_task_inactive(child, TASK_TRACED) ? 0 : -ESRCH;
+ if (!ret && !ignore_state) {
+ if (!wait_task_inactive(child, __TASK_TRACED)) {
+ /*
+ * This can only happen if may_ptrace_stop() fails and
+ * ptrace_stop() changes ->state back to TASK_RUNNING,
+ * so we should not worry about leaking __TASK_TRACED.
+ */
+ WARN_ON(child->state == __TASK_TRACED);
+ ret = -ESRCH;
+ }
+ }
- /* All systems go.. */
return ret;
}
*/
if (task_is_stopped(task) &&
task_set_jobctl_pending(task, JOBCTL_TRAP_STOP | JOBCTL_TRAPPING))
- signal_wake_up(task, 1);
+ signal_wake_up_state(task, __TASK_STOPPED);
spin_unlock(&task->sighand->siglock);
* tracee into STOP.
*/
if (likely(task_set_jobctl_pending(child, JOBCTL_TRAP_STOP)))
- signal_wake_up(child, child->jobctl & JOBCTL_LISTENING);
+ ptrace_signal_wake_up(child, child->jobctl & JOBCTL_LISTENING);
unlock_task_sighand(child, &flags);
ret = 0;
* start of this trap and now. Trigger re-trap.
*/
if (child->jobctl & JOBCTL_TRAP_NOTIFY)
- signal_wake_up(child, true);
+ ptrace_signal_wake_up(child, true);
ret = 0;
}
unlock_task_sighand(child, &flags);
goto out_put_task_struct;
ret = arch_ptrace(child, request, addr, data);
+ if (ret || request != PTRACE_DETACH)
+ ptrace_unfreeze_traced(child);
out_put_task_struct:
put_task_struct(child);
ret = ptrace_check_attach(child, request == PTRACE_KILL ||
request == PTRACE_INTERRUPT);
- if (!ret)
+ if (!ret) {
ret = compat_arch_ptrace(child, request, addr, data);
+ if (ret || request != PTRACE_DETACH)
+ ptrace_unfreeze_traced(child);
+ }
out_put_task_struct:
put_task_struct(child);
#ifdef CONFIG_RCU_NOCB_CPU
static cpumask_var_t rcu_nocb_mask; /* CPUs to have callbacks offloaded. */
static bool have_rcu_nocb_mask; /* Was rcu_nocb_mask allocated? */
-static bool rcu_nocb_poll;	    /* Offload kthreads are to poll. */
-module_param(rcu_nocb_poll, bool, 0444);
+static bool __read_mostly rcu_nocb_poll;    /* Offload kthreads are to poll. */
static char __initdata nocb_buf[NR_CPUS * 5];
#endif /* #ifdef CONFIG_RCU_NOCB_CPU */
}
__setup("rcu_nocbs=", rcu_nocb_setup);
+static int __init parse_rcu_nocb_poll(char *arg)
+{
+ rcu_nocb_poll = 1;
+ return 0;
+}
+early_param("rcu_nocb_poll", parse_rcu_nocb_poll);
+
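With the early_param() above, polling is now a pure boot-time switch rather
than a module parameter. As an illustrative (not prescriptive) example, a
command line that offloads CPUs 1-3 and makes their kthreads poll would be:

	rcu_nocbs=1-3 rcu_nocb_poll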
/* Is the specified CPU a no-CBs CPU? */
static bool is_nocb_cpu(int cpu)
{
for (;;) {
/* If not polling, wait for next batch of callbacks. */
if (!rcu_nocb_poll)
- wait_event(rdp->nocb_wq, rdp->nocb_head);
+ wait_event_interruptible(rdp->nocb_wq, rdp->nocb_head);
list = ACCESS_ONCE(rdp->nocb_head);
if (!list) {
schedule_timeout_interruptible(1);
+ flush_signals(current);
continue;
}
*/
int wake_up_process(struct task_struct *p)
{
- return try_to_wake_up(p, TASK_ALL, 0);
+ WARN_ON(task_is_stopped_or_traced(p));
+ return try_to_wake_up(p, TASK_NORMAL, 0);
}
EXPORT_SYMBOL(wake_up_process);
cfs_rq->runnable_load_avg);
SEQ_printf(m, " .%-30s: %lld\n", "blocked_load_avg",
cfs_rq->blocked_load_avg);
- SEQ_printf(m, " .%-30s: %ld\n", "tg_load_avg",
- atomic64_read(&cfs_rq->tg->load_avg));
+ SEQ_printf(m, " .%-30s: %lld\n", "tg_load_avg",
+ (unsigned long long)atomic64_read(&cfs_rq->tg->load_avg));
SEQ_printf(m, " .%-30s: %lld\n", "tg_load_contrib",
cfs_rq->tg_load_contrib);
SEQ_printf(m, " .%-30s: %d\n", "tg_runnable_contrib",
hrtimer_cancel(&cfs_b->slack_timer);
}
-static void unthrottle_offline_cfs_rqs(struct rq *rq)
+static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
{
struct cfs_rq *cfs_rq;
static int do_balance_runtime(struct rt_rq *rt_rq)
{
struct rt_bandwidth *rt_b = sched_rt_bandwidth(rt_rq);
- struct root_domain *rd = cpu_rq(smp_processor_id())->rd;
+ struct root_domain *rd = rq_of_rt_rq(rt_rq)->rd;
int i, weight, more = 0;
u64 rt_period;
* No need to set need_resched since signal event passing
* goes through ->blocked
*/
-void signal_wake_up(struct task_struct *t, int resume)
+void signal_wake_up_state(struct task_struct *t, unsigned int state)
{
- unsigned int mask;
-
set_tsk_thread_flag(t, TIF_SIGPENDING);
-
/*
- * For SIGKILL, we want to wake it up in the stopped/traced/killable
+ * TASK_WAKEKILL also means wake it up in the stopped/traced/killable
* case. We don't check t->state here because there is a race with it
* executing another processor and just now entering stopped state.
* By using wake_up_state, we ensure the process will wake up and
* handle its death signal.
*/
- mask = TASK_INTERRUPTIBLE;
- if (resume)
- mask |= TASK_WAKEKILL;
- if (!wake_up_state(t, mask))
+ if (!wake_up_state(t, state | TASK_INTERRUPTIBLE))
kick_process(t);
}
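Callers in the rest of this section use signal_wake_up() and
ptrace_signal_wake_up() rather than calling the state-mask helper directly.
Those wrappers are outside this excerpt; in the companion sched.h change they
are thin inlines along these lines:

	static inline void signal_wake_up(struct task_struct *t, bool resume)
	{
		signal_wake_up_state(t, resume ? TASK_WAKEKILL : 0);
	}

	static inline void ptrace_signal_wake_up(struct task_struct *t, bool resume)
	{
		signal_wake_up_state(t, resume ? __TASK_TRACED : 0);
	}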
assert_spin_locked(&t->sighand->siglock);
task_set_jobctl_pending(t, JOBCTL_TRAP_NOTIFY);
- signal_wake_up(t, t->jobctl & JOBCTL_LISTENING);
+ ptrace_signal_wake_up(t, t->jobctl & JOBCTL_LISTENING);
}
/*
* If SIGKILL was already sent before the caller unlocked
* ->siglock we must see ->core_state != NULL. Otherwise it
* is safe to enter schedule().
+ *
+	 * This is almost outdated: a task with a pending SIGKILL can't
+	 * block in TASK_TRACED, but PTRACE_EVENT_EXIT can be reported
+	 * after SIGKILL has already been dequeued.
*/
if (unlikely(current->mm->core_state) &&
unlikely(current->mm == current->parent->mm))
if (gstop_done)
do_notify_parent_cldstop(current, false, why);
+ /* tasklist protects us from ptrace_freeze_traced() */
__set_current_state(TASK_RUNNING);
if (clear_code)
current->exit_code = 0;
struct call_single_data csd;
atomic_t refs;
cpumask_var_t cpumask;
+ cpumask_var_t cpumask_ipi;
};
static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data, cfd_data);
if (!zalloc_cpumask_var_node(&cfd->cpumask, GFP_KERNEL,
cpu_to_node(cpu)))
return notifier_from_errno(-ENOMEM);
+ if (!zalloc_cpumask_var_node(&cfd->cpumask_ipi, GFP_KERNEL,
+ cpu_to_node(cpu)))
+ return notifier_from_errno(-ENOMEM);
break;
#ifdef CONFIG_HOTPLUG_CPU
case CPU_DEAD:
case CPU_DEAD_FROZEN:
free_cpumask_var(cfd->cpumask);
+ free_cpumask_var(cfd->cpumask_ipi);
break;
#endif
};
return;
}
+ /*
+ * After we put an entry into the list, data->cpumask
+ * may be cleared again when another CPU sends another IPI for
+	 * an SMP function call, so data->cpumask will be zero.
+ */
+ cpumask_copy(data->cpumask_ipi, data->cpumask);
raw_spin_lock_irqsave(&call_function.lock, flags);
/*
* Place entry at the _HEAD_ of the list, so that any cpu still
smp_mb();
/* Send a message to all CPUs in the map */
- arch_send_call_function_ipi_mask(data->cpumask);
+ arch_send_call_function_ipi_mask(data->cpumask_ipi);
/* Optionally wait for the CPUs to complete */
if (wait)
struct notifier_block ftrace_module_nb = {
.notifier_call = ftrace_module_notify,
- .priority = 0,
+ .priority = INT_MAX, /* Run before anything that can use kprobes */
};
extern unsigned long __start_mcount_loc[];
}
#ifdef CONFIG_MODULES
+/* Updates are protected by module mutex */
static LIST_HEAD(module_bug_list);
static const struct bug_entry *module_find_bug(unsigned long bugaddr)
memset(out1, 0, head);
memcpy(out1 + head, p, l);
+ kfree(p);
+
err = pkcs_1_v1_5_decode_emsa(out1, len, mblen, out2, &len);
if (err)
goto err;
if (flags & FOLL_WRITE && !pmd_write(*pmd))
goto out;
+ /* Avoid dumping huge zero page */
+ if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
+ return ERR_PTR(-EFAULT);
+
page = pmd_page(*pmd);
VM_BUG_ON(!PageHead(page));
if (flags & FOLL_TOUCH) {
if (!huge_pte_none(huge_ptep_get(ptep))) {
pte = huge_ptep_get_and_clear(mm, address, ptep);
pte = pte_mkhuge(pte_modify(pte, newprot));
+ pte = arch_make_huge_pte(pte, vma, NULL, 0);
set_huge_pte_at(mm, address, ptep, pte);
pages++;
}
if (is_write_migration_entry(entry))
pte = pte_mkwrite(pte);
#ifdef CONFIG_HUGETLB_PAGE
- if (PageHuge(new))
+ if (PageHuge(new)) {
pte = pte_mkhuge(pte);
+ pte = arch_make_huge_pte(pte, vma, new, 0);
+ }
#endif
flush_cache_page(vma, addr, pte_pfn(pte));
set_pte_at(mm, addr, ptep, pte);
* vma in this mm is backed by the same anon_vma or address_space.
*
* We can take all the locks in random order because the VM code
- * taking i_mmap_mutex or anon_vma->mutex outside the mmap_sem never
+ * taking i_mmap_mutex or anon_vma->rwsem outside the mmap_sem never
* takes more than one of them in a row. Secondly we're protected
* against a concurrent mm_take_all_locks() by the mm_all_locks_mutex.
*
struct arphdr *arphdr;
struct ethhdr *ethhdr;
__be32 ip_src, ip_dst;
+ uint8_t *hw_src, *hw_dst;
uint16_t type = 0;
/* pull the ethernet header */
ip_src = batadv_arp_ip_src(skb, hdr_size);
ip_dst = batadv_arp_ip_dst(skb, hdr_size);
if (ipv4_is_loopback(ip_src) || ipv4_is_multicast(ip_src) ||
- ipv4_is_loopback(ip_dst) || ipv4_is_multicast(ip_dst))
+ ipv4_is_loopback(ip_dst) || ipv4_is_multicast(ip_dst) ||
+ ipv4_is_zeronet(ip_src) || ipv4_is_lbcast(ip_src) ||
+ ipv4_is_zeronet(ip_dst) || ipv4_is_lbcast(ip_dst))
goto out;
+ hw_src = batadv_arp_hw_src(skb, hdr_size);
+ if (is_zero_ether_addr(hw_src) || is_multicast_ether_addr(hw_src))
+ goto out;
+
+ /* we don't care about the destination MAC address in ARP requests */
+ if (arphdr->ar_op != htons(ARPOP_REQUEST)) {
+ hw_dst = batadv_arp_hw_dst(skb, hdr_size);
+ if (is_zero_ether_addr(hw_dst) ||
+ is_multicast_ether_addr(hw_dst))
+ goto out;
+ }
+
type = ntohs(arphdr->ar_op);
out:
return type;
*/
ret = !batadv_is_my_client(bat_priv, hw_dst);
out:
+ if (ret)
+ kfree_skb(skb);
/* if ret == false -> packet has to be delivered to the interface */
return ret;
}
__u8 reason = hci_proto_disconn_ind(conn);
switch (conn->type) {
- case ACL_LINK:
- hci_acl_disconn(conn, reason);
- break;
case AMP_LINK:
hci_amp_disconn(conn, reason);
break;
+ default:
+ hci_acl_disconn(conn, reason);
+ break;
}
}
if (conn) {
hci_conn_enter_active_mode(conn, BT_POWER_FORCE_ACTIVE_OFF);
- hci_dev_lock(hdev);
- if (test_bit(HCI_MGMT, &hdev->dev_flags) &&
- !test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &conn->flags))
- mgmt_device_connected(hdev, &conn->dst, conn->type,
- conn->dst_type, 0, NULL, 0,
- conn->dev_class);
- hci_dev_unlock(hdev);
-
/* Send to upper protocol */
l2cap_recv_acldata(conn, skb, flags);
return;
if (ev->opcode != HCI_OP_NOP)
del_timer(&hdev->cmd_timer);
- if (ev->ncmd) {
+ if (ev->ncmd && !test_bit(HCI_RESET, &hdev->flags)) {
atomic_set(&hdev->cmd_cnt, 1);
if (!skb_queue_empty(&hdev->cmd_q))
queue_work(hdev->workqueue, &hdev->cmd_work);
hid->version = req->version;
hid->country = req->country;
- strncpy(hid->name, req->name, 128);
+ strncpy(hid->name, req->name, sizeof(req->name) - 1);
snprintf(hid->phys, sizeof(hid->phys), "%pMR",
&bt_sk(session->ctrl_sock->sk)->src);
static int l2cap_connect_req(struct l2cap_conn *conn,
struct l2cap_cmd_hdr *cmd, u8 *data)
{
+ struct hci_dev *hdev = conn->hcon->hdev;
+ struct hci_conn *hcon = conn->hcon;
+
+ hci_dev_lock(hdev);
+ if (test_bit(HCI_MGMT, &hdev->dev_flags) &&
+ !test_and_set_bit(HCI_CONN_MGMT_CONNECTED, &hcon->flags))
+ mgmt_device_connected(hdev, &hcon->dst, hcon->type,
+ hcon->dst_type, 0, NULL, 0,
+ hcon->dev_class);
+ hci_dev_unlock(hdev);
+
l2cap_connect(conn, cmd, data, L2CAP_CONN_RSP, 0);
return 0;
}
case BT_CONNECTED:
case BT_CONFIG:
- if (sco_pi(sk)->conn) {
+ if (sco_pi(sk)->conn->hcon) {
sk->sk_state = BT_DISCONN;
sco_sock_set_timer(sk, SCO_DISCONN_TIMEOUT);
hci_conn_put(sco_pi(sk)->conn->hcon);
skb_pull(skb, sizeof(code));
+ /*
+ * The SMP context must be initialized for all other PDUs except
+ * pairing and security requests. If we get any other PDU when
+ * not initialized simply disconnect (done if this function
+ * returns an error).
+ */
+ if (code != SMP_CMD_PAIRING_REQ && code != SMP_CMD_SECURITY_REQ &&
+ !conn->smp_chan) {
+ BT_ERR("Unexpected SMP command 0x%02x. Disconnecting.", code);
+ kfree_skb(skb);
+ return -ENOTSUPP;
+ }
+
switch (code) {
case SMP_CMD_PAIRING_REQ:
reason = smp_cmd_pairing_req(conn, skb);
return -EFAULT;
i += len;
mutex_lock(&pktgen_thread_lock);
- pktgen_add_device(t, f);
+ ret = pktgen_add_device(t, f);
mutex_unlock(&pktgen_thread_lock);
- ret = count;
- sprintf(pg_result, "OK: add_device=%s", f);
+ if (!ret) {
+ ret = count;
+ sprintf(pg_result, "OK: add_device=%s", f);
+ } else
+ sprintf(pg_result, "ERROR: can not add device %s", f);
goto out;
}
struct fastopen_queue *fastopenq =
inet_csk(lsk)->icsk_accept_queue.fastopenq;
- BUG_ON(!spin_is_locked(&sk->sk_lock.slock) && !sock_owned_by_user(sk));
-
tcp_sk(sk)->fastopen_rsk = NULL;
spin_lock_bh(&fastopenq->lock);
fastopenq->qlen--;
#include <net/sock.h>
#include <net/compat.h>
#include <net/scm.h>
+#include <net/cls_cgroup.h>
/*
}
/* Bump the usage count and install the file. */
sock = sock_from_file(fp[i], &err);
- if (sock)
+ if (sock) {
sock_update_netprioidx(sock->sk, current);
+ sock_update_classid(sock->sk, current);
+ }
fd_install(new_fd, get_file(fp[i]));
}
new->network_header = old->network_header;
new->mac_header = old->mac_header;
new->inner_transport_header = old->inner_transport_header;
- new->inner_network_header = old->inner_transport_header;
+ new->inner_network_header = old->inner_network_header;
skb_dst_copy(new, old);
new->rxhash = old->rxhash;
new->ooo_okay = old->ooo_okay;
static struct page *linear_to_page(struct page *page, unsigned int *len,
unsigned int *offset,
- struct sk_buff *skb, struct sock *sk)
+ struct sock *sk)
{
struct page_frag *pfrag = sk_page_frag(sk);
static bool spd_fill_page(struct splice_pipe_desc *spd,
struct pipe_inode_info *pipe, struct page *page,
unsigned int *len, unsigned int offset,
- struct sk_buff *skb, bool linear,
+ bool linear,
struct sock *sk)
{
if (unlikely(spd->nr_pages == MAX_SKB_FRAGS))
return true;
if (linear) {
- page = linear_to_page(page, len, &offset, skb, sk);
+ page = linear_to_page(page, len, &offset, sk);
if (!page)
return true;
}
return false;
}
-static inline void __segment_seek(struct page **page, unsigned int *poff,
- unsigned int *plen, unsigned int off)
-{
- unsigned long n;
-
- *poff += off;
- n = *poff / PAGE_SIZE;
- if (n)
- *page = nth_page(*page, n);
-
- *poff = *poff % PAGE_SIZE;
- *plen -= off;
-}
-
static bool __splice_segment(struct page *page, unsigned int poff,
unsigned int plen, unsigned int *off,
- unsigned int *len, struct sk_buff *skb,
+ unsigned int *len,
struct splice_pipe_desc *spd, bool linear,
struct sock *sk,
struct pipe_inode_info *pipe)
}
/* ignore any bits we already processed */
- if (*off) {
- __segment_seek(&page, &poff, &plen, *off);
- *off = 0;
- }
+ poff += *off;
+ plen -= *off;
+ *off = 0;
do {
unsigned int flen = min(*len, plen);
- /* the linear region may spread across several pages */
- flen = min_t(unsigned int, flen, PAGE_SIZE - poff);
-
- if (spd_fill_page(spd, pipe, page, &flen, poff, skb, linear, sk))
+ if (spd_fill_page(spd, pipe, page, &flen, poff,
+ linear, sk))
return true;
-
- __segment_seek(&page, &poff, &plen, flen);
+ poff += flen;
+ plen -= flen;
*len -= flen;
-
} while (*len && plen);
return false;
if (__splice_segment(virt_to_page(skb->data),
(unsigned long) skb->data & (PAGE_SIZE - 1),
skb_headlen(skb),
- offset, len, skb, spd,
+ offset, len, spd,
skb_head_is_locked(skb),
sk, pipe))
return true;
if (__splice_segment(skb_frag_page(f),
f->page_offset, skb_frag_size(f),
- offset, len, skb, spd, false, sk, pipe))
+ offset, len, spd, false, sk, pipe))
return true;
}
skb->network_header += ah_hlen;
memcpy(skb_network_header(skb), work_iph, ihl);
__skb_pull(skb, ah_hlen + ihl);
- skb_set_transport_header(skb, -ihl);
+
+ if (x->props.mode == XFRM_MODE_TUNNEL)
+ skb_reset_transport_header(skb);
+ else
+ skb_set_transport_header(skb, -ihl);
out:
kfree(AH_SKB_CB(skb)->tmp);
xfrm_input_resume(skb, err);
skb->network_header += ah_hlen;
memcpy(skb_network_header(skb), work_iph, ihl);
__skb_pull(skb, ah_hlen + ihl);
- skb_set_transport_header(skb, -ihl);
+ if (x->props.mode == XFRM_MODE_TUNNEL)
+ skb_reset_transport_header(skb);
+ else
+ skb_set_transport_header(skb, -ihl);
err = nexthdr;
if (!x)
return;
- if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH)
+ if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) {
+ atomic_inc(&flow_cache_genid);
+ rt_genid_bump(net);
+
ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_AH, 0);
- else
+ } else
ipv4_redirect(skb, net, 0, 0, IPPROTO_AH, 0);
xfrm_state_put(x);
}
return err;
}
EXPORT_SYMBOL(ip4_datagram_connect);
+
+void ip4_datagram_release_cb(struct sock *sk)
+{
+ const struct inet_sock *inet = inet_sk(sk);
+ const struct ip_options_rcu *inet_opt;
+ __be32 daddr = inet->inet_daddr;
+ struct flowi4 fl4;
+ struct rtable *rt;
+
+	if (!__sk_dst_get(sk) || __sk_dst_check(sk, 0))
+ return;
+
+ rcu_read_lock();
+ inet_opt = rcu_dereference(inet->inet_opt);
+ if (inet_opt && inet_opt->opt.srr)
+ daddr = inet_opt->opt.faddr;
+ rt = ip_route_output_ports(sock_net(sk), &fl4, sk, daddr,
+ inet->inet_saddr, inet->inet_dport,
+ inet->inet_sport, sk->sk_protocol,
+ RT_CONN_FLAGS(sk), sk->sk_bound_dev_if);
+ if (!IS_ERR(rt))
+ __sk_dst_set(sk, &rt->dst);
+ rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(ip4_datagram_release_cb);
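Datagram protocols opt in by pointing ->release_cb in their struct proto at
this helper, as the ping, raw and udp hunks later in this section do. The
registration pattern (example_prot is a placeholder name):

	static struct proto example_prot = {
		/* ...other ops unchanged... */
		.release_cb	= ip4_datagram_release_cb,
	};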
pskb_trim(skb, skb->len - alen - padlen - 2);
__skb_pull(skb, hlen);
- skb_set_transport_header(skb, -ihl);
+ if (x->props.mode == XFRM_MODE_TUNNEL)
+ skb_reset_transport_header(skb);
+ else
+ skb_set_transport_header(skb, -ihl);
err = nexthdr[1];
if (!x)
return;
- if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH)
+ if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) {
+ atomic_inc(&flow_cache_genid);
+ rt_genid_bump(net);
+
ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_ESP, 0);
- else
+ } else
ipv4_redirect(skb, net, 0, 0, IPPROTO_ESP, 0);
xfrm_state_put(x);
}
ptr--;
}
if (tunnel->parms.o_flags&GRE_CSUM) {
+ int offset = skb_transport_offset(skb);
+
*ptr = 0;
- *(__sum16 *)ptr = ip_compute_csum((void *)(iph+1), skb->len - sizeof(struct iphdr));
+ *(__sum16 *)ptr = csum_fold(skb_checksum(skb, offset,
+ skb->len - offset,
+ 0));
}
}
if (!x)
return;
- if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH)
+ if (icmp_hdr(skb)->type == ICMP_DEST_UNREACH) {
+ atomic_inc(&flow_cache_genid);
+ rt_genid_bump(net);
+
ipv4_update_pmtu(skb, net, info, 0, 0, IPPROTO_COMP, 0);
- else
+ } else
ipv4_redirect(skb, net, 0, 0, IPPROTO_COMP, 0);
xfrm_state_put(x);
}
.recvmsg = ping_recvmsg,
.bind = ping_bind,
.backlog_rcv = ping_queue_rcv_skb,
+ .release_cb = ip4_datagram_release_cb,
.hash = ping_v4_hash,
.unhash = ping_v4_unhash,
.get_port = ping_v4_get_port,
.recvmsg = raw_recvmsg,
.bind = raw_bind,
.backlog_rcv = raw_rcv_skb,
+ .release_cb = ip4_datagram_release_cb,
.hash = raw_hash_sk,
.unhash = raw_unhash_sk,
.obj_size = sizeof(struct raw_sock),
struct dst_entry *dst = &rt->dst;
struct fib_result res;
+ if (dst_metric_locked(dst, RTAX_MTU))
+ return;
+
if (dst->dev->mtu < mtu)
return;
}
EXPORT_SYMBOL_GPL(ipv4_update_pmtu);
-void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
+static void __ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
{
const struct iphdr *iph = (const struct iphdr *) skb->data;
struct flowi4 fl4;
ip_rt_put(rt);
}
}
+
+void ipv4_sk_update_pmtu(struct sk_buff *skb, struct sock *sk, u32 mtu)
+{
+ const struct iphdr *iph = (const struct iphdr *) skb->data;
+ struct flowi4 fl4;
+ struct rtable *rt;
+ struct dst_entry *dst;
+ bool new = false;
+
+ bh_lock_sock(sk);
+ rt = (struct rtable *) __sk_dst_get(sk);
+
+ if (sock_owned_by_user(sk) || !rt) {
+ __ipv4_sk_update_pmtu(skb, sk, mtu);
+ goto out;
+ }
+
+ __build_flow_key(&fl4, sk, iph, 0, 0, 0, 0, 0);
+
+ if (!__sk_dst_check(sk, 0)) {
+ rt = ip_route_output_flow(sock_net(sk), &fl4, sk);
+ if (IS_ERR(rt))
+ goto out;
+
+ new = true;
+ }
+
+ __ip_rt_update_pmtu((struct rtable *) rt->dst.path, &fl4, mtu);
+
+ dst = dst_check(&rt->dst, 0);
+ if (!dst) {
+ if (new)
+ dst_release(&rt->dst);
+
+ rt = ip_route_output_flow(sock_net(sk), &fl4, sk);
+ if (IS_ERR(rt))
+ goto out;
+
+ new = true;
+ }
+
+ if (new)
+ __sk_dst_set(sk, &rt->dst);
+
+out:
+ bh_unlock_sock(sk);
+}
EXPORT_SYMBOL_GPL(ipv4_sk_update_pmtu);
void ipv4_redirect(struct sk_buff *skb, struct net *net,
if (!mtu || time_after_eq(jiffies, rt->dst.expires))
mtu = dst_metric_raw(dst, RTAX_MTU);
- if (mtu && rt_is_output_route(rt))
+ if (mtu)
return mtu;
mtu = dst->dev->mtu;
{
int cnt; /* increase in packets */
unsigned int delta = 0;
+ u32 snd_cwnd = tp->snd_cwnd;
+
+ if (unlikely(!snd_cwnd)) {
+		pr_err_once("snd_cwnd is zero, please report this bug.\n");
+ snd_cwnd = 1U;
+ }
/* RFC3465: ABC Slow start
* Increase only after a full MSS of bytes is acked
if (sysctl_tcp_max_ssthresh > 0 && tp->snd_cwnd > sysctl_tcp_max_ssthresh)
cnt = sysctl_tcp_max_ssthresh >> 1; /* limited slow start */
else
- cnt = tp->snd_cwnd; /* exponential increase */
+ cnt = snd_cwnd; /* exponential increase */
/* RFC3465: ABC
* We MAY increase by 2 if discovered delayed ack
tp->bytes_acked = 0;
tp->snd_cwnd_cnt += cnt;
- while (tp->snd_cwnd_cnt >= tp->snd_cwnd) {
- tp->snd_cwnd_cnt -= tp->snd_cwnd;
+ while (tp->snd_cwnd_cnt >= snd_cwnd) {
+ tp->snd_cwnd_cnt -= snd_cwnd;
delta++;
}
- tp->snd_cwnd = min(tp->snd_cwnd + delta, tp->snd_cwnd_clamp);
+ tp->snd_cwnd = min(snd_cwnd + delta, tp->snd_cwnd_clamp);
}
EXPORT_SYMBOL_GPL(tcp_slow_start);
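The snd_cwnd == 0 guard matters because cnt feeds the loop above it: with a
zero window, snd_cwnd_cnt can never drop below snd_cwnd and the subtraction
spins forever. A self-contained userspace model of the patched step (the names
are illustrative, not kernel API) shows the arithmetic:

	#include <stdio.h>

	static unsigned int slow_start_step(unsigned int cwnd,
					    unsigned int *cwnd_cnt,
					    unsigned int clamp)
	{
		unsigned int delta = 0;

		if (cwnd == 0)		/* mirrors the pr_err_once() guard */
			cwnd = 1;

		*cwnd_cnt += cwnd;	/* cnt == cwnd: exponential increase */
		while (*cwnd_cnt >= cwnd) {
			*cwnd_cnt -= cwnd;
			delta++;
		}
		return cwnd + delta < clamp ? cwnd + delta : clamp;
	}

	int main(void)
	{
		unsigned int cnt = 0;

		printf("%u\n", slow_start_step(10, &cnt, 0xffff));	/* 11 */
		printf("%u\n", slow_start_step(0, &cnt, 0xffff));	/* 2, no hang */
		return 0;
	}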
}
} else {
if (!(flag & FLAG_DATA_ACKED) && (tp->frto_counter == 1)) {
+ if (!tcp_packets_in_flight(tp)) {
+ tcp_enter_frto_loss(sk, 2, flag);
+ return true;
+ }
+
/* Prevent sending of new data. */
tp->snd_cwnd = min(tp->snd_cwnd,
tcp_packets_in_flight(tp));
* the remote receives only the retransmitted (regular) SYNs: either
* the original SYN-data or the corresponding SYN-ACK is lost.
*/
- syn_drop = (cookie->len <= 0 && data &&
- inet_csk(sk)->icsk_retransmits);
+ syn_drop = (cookie->len <= 0 && data && tp->total_retrans);
tcp_fastopen_cache_set(sk, mss, cookie, syn_drop);
* We do take care of PMTU discovery (RFC1191) special case :
* we can receive locally generated ICMP messages while socket is held.
*/
- if (sock_owned_by_user(sk) &&
- type != ICMP_DEST_UNREACH &&
- code != ICMP_FRAG_NEEDED)
- NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
-
+ if (sock_owned_by_user(sk)) {
+ if (!(type == ICMP_DEST_UNREACH && code == ICMP_FRAG_NEEDED))
+ NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
+ }
if (sk->sk_state == TCP_CLOSE)
goto out;
* errors returned from accept().
*/
inet_csk_reqsk_queue_drop(sk, req, prev);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
goto out;
case TCP_SYN_SENT:
* clogging syn queue with openreqs with exponentially increasing
* timeout.
*/
- if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)
+ if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
goto drop;
+ }
req = inet_reqsk_alloc(&tcp_request_sock_ops);
if (!req)
drop_and_free:
reqsk_free(req);
drop:
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
return 0;
}
EXPORT_SYMBOL(tcp_v4_conn_request);
.recvmsg = udp_recvmsg,
.sendpage = udp_sendpage,
.backlog_rcv = __udp_queue_rcv_skb,
+ .release_cb = ip4_datagram_release_cb,
.hash = udp_lib_hash,
.unhash = udp_lib_unhash,
.rehash = udp_v4_rehash,
if (dev->addr_len != IEEE802154_ADDR_LEN)
return -1;
memcpy(eui, dev->dev_addr, 8);
+ eui[0] ^= 2;
return 0;
}
skb->network_header += ah_hlen;
memcpy(skb_network_header(skb), work_iph, hdr_len);
__skb_pull(skb, ah_hlen + hdr_len);
- skb_set_transport_header(skb, -hdr_len);
+ if (x->props.mode == XFRM_MODE_TUNNEL)
+ skb_reset_transport_header(skb);
+ else
+ skb_set_transport_header(skb, -hdr_len);
out:
kfree(AH_SKB_CB(skb)->tmp);
xfrm_input_resume(skb, err);
skb->network_header += ah_hlen;
memcpy(skb_network_header(skb), work_iph, hdr_len);
- skb->transport_header = skb->network_header;
__skb_pull(skb, ah_hlen + hdr_len);
+ if (x->props.mode == XFRM_MODE_TUNNEL)
+ skb_reset_transport_header(skb);
+ else
+ skb_set_transport_header(skb, -hdr_len);
+
err = nexthdr;
out_free:
if (skb->protocol == htons(ETH_P_IPV6)) {
sin->sin6_addr = ipv6_hdr(skb)->saddr;
if (np->rxopt.all)
- datagram_recv_ctl(sk, msg, skb);
+ ip6_datagram_recv_ctl(sk, msg, skb);
if (ipv6_addr_type(&sin->sin6_addr) & IPV6_ADDR_LINKLOCAL)
sin->sin6_scope_id = IP6CB(skb)->iif;
} else {
}
-int datagram_recv_ctl(struct sock *sk, struct msghdr *msg, struct sk_buff *skb)
+int ip6_datagram_recv_ctl(struct sock *sk, struct msghdr *msg,
+ struct sk_buff *skb)
{
struct ipv6_pinfo *np = inet6_sk(sk);
struct inet6_skb_parm *opt = IP6CB(skb);
}
return 0;
}
+EXPORT_SYMBOL_GPL(ip6_datagram_recv_ctl);
-int datagram_send_ctl(struct net *net, struct sock *sk,
- struct msghdr *msg, struct flowi6 *fl6,
- struct ipv6_txoptions *opt,
- int *hlimit, int *tclass, int *dontfrag)
+int ip6_datagram_send_ctl(struct net *net, struct sock *sk,
+ struct msghdr *msg, struct flowi6 *fl6,
+ struct ipv6_txoptions *opt,
+ int *hlimit, int *tclass, int *dontfrag)
{
struct in6_pktinfo *src_info;
struct cmsghdr *cmsg;
exit_f:
return err;
}
-EXPORT_SYMBOL_GPL(datagram_send_ctl);
+EXPORT_SYMBOL_GPL(ip6_datagram_send_ctl);
pskb_trim(skb, skb->len - alen - padlen - 2);
__skb_pull(skb, hlen);
- skb_set_transport_header(skb, -hdr_len);
+ if (x->props.mode == XFRM_MODE_TUNNEL)
+ skb_reset_transport_header(skb);
+ else
+ skb_set_transport_header(skb, -hdr_len);
err = nexthdr[1];
return net->ipv6.icmp_sk[smp_processor_id()];
}
+static void icmpv6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ u8 type, u8 code, int offset, __be32 info)
+{
+ struct net *net = dev_net(skb->dev);
+
+ if (type == ICMPV6_PKT_TOOBIG)
+ ip6_update_pmtu(skb, net, info, 0, 0);
+ else if (type == NDISC_REDIRECT)
+ ip6_redirect(skb, net, 0, 0);
+}
+
static int icmpv6_rcv(struct sk_buff *skb);
static const struct inet6_protocol icmpv6_protocol = {
.handler = icmpv6_rcv,
+ .err_handler = icmpv6_err,
.flags = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
};
msg.msg_control = (void*)(fl->opt+1);
memset(&flowi6, 0, sizeof(flowi6));
- err = datagram_send_ctl(net, sk, &msg, &flowi6, fl->opt, &junk,
- &junk, &junk);
+ err = ip6_datagram_send_ctl(net, sk, &msg, &flowi6, fl->opt,
+ &junk, &junk, &junk);
if (err)
goto done;
err = -EINVAL;
int ret;
if (!ip6_tnl_xmit_ctl(t))
- return -1;
+ goto tx_err;
switch (skb->protocol) {
case htons(ETH_P_IP):
if (dst_allfrag(rt->dst.path))
cork->flags |= IPCORK_ALLFRAG;
cork->length = 0;
- exthdrlen = (opt ? opt->opt_flen : 0) - rt->rt6i_nfheader_len;
+ exthdrlen = (opt ? opt->opt_flen : 0);
length += exthdrlen;
transhdrlen += exthdrlen;
- dst_exthdrlen = rt->dst.header_len;
+ dst_exthdrlen = rt->dst.header_len - rt->rt6i_nfheader_len;
} else {
rt = (struct rt6_info *)cork->dst;
fl6 = &inet->cork.fl.u.ip6;
return -EINVAL;
if (get_user(v, (u32 __user *)optval))
return -EFAULT;
+ /* "pim6reg%u" should not exceed 16 bytes (IFNAMSIZ) */
+ if (v != RT_TABLE_DEFAULT && v >= 100000000)
+ return -EINVAL;
if (sk == mrt->mroute6_sk)
return -EBUSY;
msg.msg_controllen = optlen;
msg.msg_control = (void*)(opt+1);
- retv = datagram_send_ctl(net, sk, &msg, &fl6, opt, &junk, &junk,
- &junk);
+ retv = ip6_datagram_send_ctl(net, sk, &msg, &fl6, opt, &junk,
+ &junk, &junk);
if (retv)
goto done;
update:
release_sock(sk);
if (skb) {
- int err = datagram_recv_ctl(sk, &msg, skb);
+ int err = ip6_datagram_recv_ctl(sk, &msg, skb);
kfree_skb(skb);
if (err)
return err;
sock_recv_ts_and_drops(msg, sk, skb);
if (np->rxopt.all)
- datagram_recv_ctl(sk, msg, skb);
+ ip6_datagram_recv_ctl(sk, msg, skb);
err = copied;
if (flags & MSG_TRUNC)
memset(opt, 0, sizeof(struct ipv6_txoptions));
opt->tot_len = sizeof(struct ipv6_txoptions);
- err = datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
- &hlimit, &tclass, &dontfrag);
+ err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
+ &hlimit, &tclass, &dontfrag);
if (err < 0) {
fl6_sock_release(flowlabel);
return err;
dst_hold(&rt->dst);
read_unlock_bh(&table->tb6_lock);
- if (!rt->n && !(rt->rt6i_flags & RTF_NONEXTHOP))
+ if (!rt->n && !(rt->rt6i_flags & (RTF_NONEXTHOP | RTF_LOCAL)))
nrt = rt6_alloc_cow(rt, &fl6->daddr, &fl6->saddr);
else if (!(rt->dst.flags & DST_HOST))
nrt = rt6_alloc_clone(rt, &fl6->daddr);
}
inet_csk_reqsk_queue_drop(sk, req, prev);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
goto out;
case TCP_SYN_SENT:
goto drop;
}
- if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1)
+ if (sk_acceptq_is_full(sk) && inet_csk_reqsk_queue_young(sk) > 1) {
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
goto drop;
+ }
req = inet6_reqsk_alloc(&tcp6_request_sock_ops);
if (req == NULL)
drop_and_free:
reqsk_free(req);
drop:
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
return 0; /* don't send reset */
}
ip_cmsg_recv(msg, skb);
} else {
if (np->rxopt.all)
- datagram_recv_ctl(sk, msg, skb);
+ ip6_datagram_recv_ctl(sk, msg, skb);
}
err = copied;
memset(opt, 0, sizeof(struct ipv6_txoptions));
opt->tot_len = sizeof(*opt);
- err = datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
- &hlimit, &tclass, &dontfrag);
+ err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
+ &hlimit, &tclass, &dontfrag);
if (err < 0) {
fl6_sock_release(flowlabel);
return err;
}
+/* Lookup the tunnel socket, possibly involving the fs code if the socket is
+ * owned by userspace. A struct sock returned from this function must be
+ * released using l2tp_tunnel_sock_put once you're done with it.
+ */
+struct sock *l2tp_tunnel_sock_lookup(struct l2tp_tunnel *tunnel)
+{
+ int err = 0;
+ struct socket *sock = NULL;
+ struct sock *sk = NULL;
+
+ if (!tunnel)
+ goto out;
+
+ if (tunnel->fd >= 0) {
+ /* Socket is owned by userspace, who might be in the process
+ * of closing it. Look the socket up using the fd to ensure
+ * consistency.
+ */
+ sock = sockfd_lookup(tunnel->fd, &err);
+ if (sock)
+ sk = sock->sk;
+ } else {
+ /* Socket is owned by kernelspace */
+ sk = tunnel->sock;
+ }
+
+out:
+ return sk;
+}
+EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_lookup);
+
+/* Drop a reference to a tunnel socket obtained via l2tp_tunnel_sock_lookup */
+void l2tp_tunnel_sock_put(struct sock *sk)
+{
+ struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
+ if (tunnel) {
+ if (tunnel->fd >= 0) {
+ /* Socket is owned by userspace */
+ sockfd_put(sk->sk_socket);
+ }
+ sock_put(sk);
+ }
+}
+EXPORT_SYMBOL_GPL(l2tp_tunnel_sock_put);
+
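l2tp_tunnel_delete(), reworked later in this section, is the canonical caller.
Condensed from that hunk, the lookup/put discipline is simply:

	sk = l2tp_tunnel_sock_lookup(tunnel);
	if (sk) {
		/* ... operate on sk / sk->sk_socket ... */
		l2tp_tunnel_sock_put(sk);
	}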
/* Lookup a session by id in the global session list
*/
static struct l2tp_session *l2tp_session_find_2(struct net *net, u32 session_id)
struct udphdr *uh;
struct inet_sock *inet;
__wsum csum;
- int old_headroom;
- int new_headroom;
int headroom;
int uhlen = (tunnel->encap == L2TP_ENCAPTYPE_UDP) ? sizeof(struct udphdr) : 0;
int udp_len;
*/
headroom = NET_SKB_PAD + sizeof(struct iphdr) +
uhlen + hdr_len;
- old_headroom = skb_headroom(skb);
if (skb_cow_head(skb, headroom)) {
kfree_skb(skb);
return NET_XMIT_DROP;
}
- new_headroom = skb_headroom(skb);
skb_orphan(skb);
- skb->truesize += new_headroom - old_headroom;
-
/* Setup L2TP header */
session->build_header(session, __skb_push(skb, hdr_len));
tunnel->old_sk_destruct = sk->sk_destruct;
sk->sk_destruct = &l2tp_tunnel_destruct;
tunnel->sock = sk;
+ tunnel->fd = fd;
lockdep_set_class_and_name(&sk->sk_lock.slock, &l2tp_socket_class, "l2tp_sock");
sk->sk_allocation = GFP_ATOMIC;
*/
int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
{
- int err = 0;
- struct socket *sock = tunnel->sock ? tunnel->sock->sk_socket : NULL;
+ int err = -EBADF;
+ struct socket *sock = NULL;
+ struct sock *sk = NULL;
+
+ sk = l2tp_tunnel_sock_lookup(tunnel);
+ if (!sk)
+ goto out;
+
+ sock = sk->sk_socket;
+ BUG_ON(!sock);
/* Force the tunnel socket to close. This will eventually
* cause the tunnel to be deleted via the normal socket close
* mechanisms when userspace closes the tunnel socket.
*/
- if (sock != NULL) {
- err = inet_shutdown(sock, 2);
+ err = inet_shutdown(sock, 2);
- /* If the tunnel's socket was created by the kernel,
- * close the socket here since the socket was not
- * created by userspace.
- */
- if (sock->file == NULL)
- err = inet_release(sock);
- }
+ /* If the tunnel's socket was created by the kernel,
+ * close the socket here since the socket was not
+ * created by userspace.
+ */
+ if (sock->file == NULL)
+ err = inet_release(sock);
+ l2tp_tunnel_sock_put(sk);
+out:
return err;
}
EXPORT_SYMBOL_GPL(l2tp_tunnel_delete);
int (*recv_payload_hook)(struct sk_buff *skb);
void (*old_sk_destruct)(struct sock *);
struct sock *sock; /* Parent socket */
- int fd;
+ int fd; /* Parent fd, if tunnel socket
+ * was created by userspace */
uint8_t priv[0]; /* private data */
};
return tunnel;
}
+extern struct sock *l2tp_tunnel_sock_lookup(struct l2tp_tunnel *tunnel);
+extern void l2tp_tunnel_sock_put(struct sock *sk);
extern struct l2tp_session *l2tp_session_find(struct net *net, struct l2tp_tunnel *tunnel, u32 session_id);
extern struct l2tp_session *l2tp_session_find_nth(struct l2tp_tunnel *tunnel, int nth);
extern struct l2tp_session *l2tp_session_find_by_ifname(struct net *net, char *ifname);
memset(opt, 0, sizeof(struct ipv6_txoptions));
opt->tot_len = sizeof(struct ipv6_txoptions);
- err = datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
- &hlimit, &tclass, &dontfrag);
+ err = ip6_datagram_send_ctl(sock_net(sk), sk, msg, &fl6, opt,
+ &hlimit, &tclass, &dontfrag);
if (err < 0) {
fl6_sock_release(flowlabel);
return err;
struct msghdr *msg, size_t len, int noblock,
int flags, int *addr_len)
{
- struct inet_sock *inet = inet_sk(sk);
+ struct ipv6_pinfo *np = inet6_sk(sk);
struct sockaddr_l2tpip6 *lsa = (struct sockaddr_l2tpip6 *)msg->msg_name;
size_t copied = 0;
int err = -EOPNOTSUPP;
lsa->l2tp_scope_id = IP6CB(skb)->iif;
}
- if (inet->cmsg_flags)
- ip_cmsg_recv(msg, skb);
+ if (np->rxopt.all)
+ ip6_datagram_recv_ctl(sk, msg, skb);
if (flags & MSG_TRUNC)
copied = skb->len;
struct l2tp_session *session;
struct l2tp_tunnel *tunnel;
struct pppol2tp_session *ps;
- int old_headroom;
- int new_headroom;
int uhlen, headroom;
if (sock_flag(sk, SOCK_DEAD) || !(sk->sk_state & PPPOX_CONNECTED))
if (tunnel == NULL)
goto abort_put_sess;
- old_headroom = skb_headroom(skb);
uhlen = (tunnel->encap == L2TP_ENCAPTYPE_UDP) ? sizeof(struct udphdr) : 0;
headroom = NET_SKB_PAD +
sizeof(struct iphdr) + /* IP header */
if (skb_cow_head(skb, headroom))
goto abort_put_sess_tun;
- new_headroom = skb_headroom(skb);
- skb->truesize += new_headroom - old_headroom;
-
/* Setup PPP header */
__skb_push(skb, sizeof(ppph));
skb->data[0] = ppph[0];
sta = sta_info_get(sdata, mac_addr);
else
sta = sta_info_get_bss(sdata, mac_addr);
- if (!sta) {
+ /*
+ * The ASSOC test makes sure the driver is ready to
+ * receive the key. When wpa_supplicant has roamed
+ * using FT, it attempts to set the key before
+	 * association has completed; this rejects that attempt
+	 * so it will set the key again after association.
+ *
+ * TODO: accept the key if we have a station entry and
+ * add it to the device after the station.
+ */
+ if (!sta || !test_sta_flag(sta, WLAN_STA_ASSOC)) {
ieee80211_key_free(sdata->local, key);
err = -ENOENT;
goto out_unlock;
void ieee80211_sched_scan_stopped_work(struct work_struct *work);
/* off-channel helpers */
-void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local,
- bool offchannel_ps_enable);
-void ieee80211_offchannel_return(struct ieee80211_local *local,
- bool offchannel_ps_disable);
+void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local);
+void ieee80211_offchannel_return(struct ieee80211_local *local);
void ieee80211_roc_setup(struct ieee80211_local *local);
void ieee80211_start_next_roc(struct ieee80211_local *local);
void ieee80211_roc_purge(struct ieee80211_sub_if_data *sdata);
skb->priority = 7;
info->control.vif = &sdata->vif;
+ info->flags |= IEEE80211_TX_INTFL_NEED_TXPROCESSING;
ieee80211_set_qos_hdr(sdata, skb);
}
return -EAGAIN;
skb = dev_alloc_skb(local->tx_headroom +
+ IEEE80211_ENCRYPT_HEADROOM +
+ IEEE80211_ENCRYPT_TAILROOM +
hdr_len +
2 + 15 /* PERR IE */);
if (!skb)
return -1;
- skb_reserve(skb, local->tx_headroom);
+ skb_reserve(skb, local->tx_headroom + IEEE80211_ENCRYPT_HEADROOM);
mgmt = (struct ieee80211_mgmt *) skb_put(skb, hdr_len);
memset(mgmt, 0, hdr_len);
mgmt->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT |
ieee80211_sta_reset_conn_monitor(sdata);
}
-void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local,
- bool offchannel_ps_enable)
+void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local)
{
struct ieee80211_sub_if_data *sdata;
if (sdata->vif.type != NL80211_IFTYPE_MONITOR) {
netif_tx_stop_all_queues(sdata->dev);
- if (offchannel_ps_enable &&
- (sdata->vif.type == NL80211_IFTYPE_STATION) &&
+ if (sdata->vif.type == NL80211_IFTYPE_STATION &&
sdata->u.mgd.associated)
ieee80211_offchannel_ps_enable(sdata);
}
mutex_unlock(&local->iflist_mtx);
}
-void ieee80211_offchannel_return(struct ieee80211_local *local,
- bool offchannel_ps_disable)
+void ieee80211_offchannel_return(struct ieee80211_local *local)
{
struct ieee80211_sub_if_data *sdata;
continue;
/* Tell AP we're back */
- if (offchannel_ps_disable &&
- sdata->vif.type == NL80211_IFTYPE_STATION) {
- if (sdata->u.mgd.associated)
- ieee80211_offchannel_ps_disable(sdata);
- }
+ if (sdata->vif.type == NL80211_IFTYPE_STATION &&
+ sdata->u.mgd.associated)
+ ieee80211_offchannel_ps_disable(sdata);
if (sdata->vif.type != NL80211_IFTYPE_MONITOR) {
/*
local->tmp_channel = NULL;
ieee80211_hw_config(local, 0);
- ieee80211_offchannel_return(local, true);
+ ieee80211_offchannel_return(local);
}
ieee80211_recalc_idle(local);
if (!was_hw_scan) {
ieee80211_configure_filter(local);
drv_sw_scan_complete(local);
- ieee80211_offchannel_return(local, true);
+ ieee80211_offchannel_return(local);
}
ieee80211_recalc_idle(local);
local->next_scan_state = SCAN_DECISION;
local->scan_channel_idx = 0;
- ieee80211_offchannel_stop_vifs(local, true);
+ ieee80211_offchannel_stop_vifs(local);
ieee80211_configure_filter(local);
local->scan_channel = NULL;
ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL);
- /*
- * Re-enable vifs and beaconing. Leave PS
- * in off-channel state..will put that back
- * on-channel at the end of scanning.
- */
- ieee80211_offchannel_return(local, false);
+ /* disable PS */
+ ieee80211_offchannel_return(local);
*next_delay = HZ / 5;
/* afterwards, resume scan & go to next channel */
static void ieee80211_scan_state_resume(struct ieee80211_local *local,
unsigned long *next_delay)
{
- /* PS already is in off-channel mode */
- ieee80211_offchannel_stop_vifs(local, false);
+ ieee80211_offchannel_stop_vifs(local);
if (local->ops->flush) {
drv_flush(local, false);
chanctx_conf =
rcu_dereference(tmp_sdata->vif.chanctx_conf);
}
- if (!chanctx_conf)
- goto fail_rcu;
- chan = chanctx_conf->def.chan;
+ if (chanctx_conf)
+ chan = chanctx_conf->def.chan;
+ else if (!local->use_chanctx)
+ chan = local->_oper_channel;
+ else
+ goto fail_rcu;
/*
* Frame injection is not allowed if beaconing is not allowed
synchronize_net();
nf_conntrack_proto_fini(net);
nf_conntrack_cleanup_net(net);
+}
- if (net_eq(net, &init_net)) {
- RCU_INIT_POINTER(nf_ct_destroy, NULL);
- nf_conntrack_cleanup_init_net();
- }
+void nf_conntrack_cleanup_end(void)
+{
+ RCU_INIT_POINTER(nf_ct_destroy, NULL);
+ nf_conntrack_cleanup_init_net();
}
void *nf_ct_alloc_hashtable(unsigned int *sizep, int nulls)
static void __exit nf_conntrack_standalone_fini(void)
{
unregister_pernet_subsys(&nf_conntrack_net_ops);
+ nf_conntrack_cleanup_end();
}
module_init(nf_conntrack_standalone_init);
}
EXPORT_SYMBOL_GPL(xt_find_revision);
-static char *textify_hooks(char *buf, size_t size, unsigned int mask)
+static char *
+textify_hooks(char *buf, size_t size, unsigned int mask, uint8_t nfproto)
{
- static const char *const names[] = {
+ static const char *const inetbr_names[] = {
"PREROUTING", "INPUT", "FORWARD",
"OUTPUT", "POSTROUTING", "BROUTING",
};
- unsigned int i;
+ static const char *const arp_names[] = {
+ "INPUT", "FORWARD", "OUTPUT",
+ };
+ const char *const *names;
+ unsigned int i, max;
char *p = buf;
bool np = false;
int res;
+ names = (nfproto == NFPROTO_ARP) ? arp_names : inetbr_names;
+ max = (nfproto == NFPROTO_ARP) ? ARRAY_SIZE(arp_names) :
+ ARRAY_SIZE(inetbr_names);
*p = '\0';
- for (i = 0; i < ARRAY_SIZE(names); ++i) {
+ for (i = 0; i < max; ++i) {
if (!(mask & (1 << i)))
continue;
res = snprintf(p, size, "%s%s", np ? "/" : "", names[i]);
pr_err("%s_tables: %s match: used from hooks %s, but only "
"valid from %s\n",
xt_prefix[par->family], par->match->name,
- textify_hooks(used, sizeof(used), par->hook_mask),
- textify_hooks(allow, sizeof(allow), par->match->hooks));
+ textify_hooks(used, sizeof(used), par->hook_mask,
+ par->family),
+ textify_hooks(allow, sizeof(allow), par->match->hooks,
+ par->family));
return -EINVAL;
}
if (par->match->proto && (par->match->proto != proto || inv_proto)) {
pr_err("%s_tables: %s target: used from hooks %s, but only "
"usable from %s\n",
xt_prefix[par->family], par->target->name,
- textify_hooks(used, sizeof(used), par->hook_mask),
- textify_hooks(allow, sizeof(allow), par->target->hooks));
+ textify_hooks(used, sizeof(used), par->hook_mask,
+ par->family),
+ textify_hooks(allow, sizeof(allow), par->target->hooks,
+ par->family));
return -EINVAL;
}
if (par->target->proto && (par->target->proto != proto || inv_proto)) {
struct xt_ct_target_info *info = par->targinfo;
struct nf_conntrack_tuple t;
struct nf_conn *ct;
- int ret;
+ int ret = -EOPNOTSUPP;
if (info->flags & ~XT_CT_NOTRACK)
return -EINVAL;
struct xt_ct_target_info_v1 *info = par->targinfo;
struct nf_conntrack_tuple t;
struct nf_conn *ct;
- int ret;
+ int ret = -EOPNOTSUPP;
if (info->flags & ~XT_CT_NOTRACK)
return -EINVAL;
/* Must be called with rcu_read_lock. */
static void netdev_port_receive(struct vport *vport, struct sk_buff *skb)
{
- if (unlikely(!vport)) {
- kfree_skb(skb);
- return;
- }
+ if (unlikely(!vport))
+ goto error;
+
+ if (unlikely(skb_warn_if_lro(skb)))
+ goto error;
/* Make our own copy of the packet. Otherwise we will mangle the
* packet for anyone who came before us (e.g. tcpdump via AF_PACKET).
skb_push(skb, ETH_HLEN);
ovs_vport_receive(vport, skb);
+ return;
+
+error:
+ kfree_skb(skb);
}
/* Called with rcu_read_lock and bottom-halves disabled. */
goto error;
}
- if (unlikely(skb_warn_if_lro(skb)))
- goto error;
-
skb->dev = netdev_vport->dev;
len = skb->len;
dev_queue_xmit(skb);
packet_flush_mclist(sk);
- memset(&req_u, 0, sizeof(req_u));
-
- if (po->rx_ring.pg_vec)
+ if (po->rx_ring.pg_vec) {
+ memset(&req_u, 0, sizeof(req_u));
packet_set_ring(sk, &req_u, 1, 0);
+ }
- if (po->tx_ring.pg_vec)
+ if (po->tx_ring.pg_vec) {
+ memset(&req_u, 0, sizeof(req_u));
packet_set_ring(sk, &req_u, 1, 1);
+ }
fanout_release(sk);
if (q->rate) {
struct sk_buff_head *list = &sch->q;
- delay += packet_len_2_sched_time(skb->len, q);
-
if (!skb_queue_empty(list)) {
/*
- * Last packet in queue is reference point (now).
- * First packet in queue is already in flight,
- * calculate this time bonus and substract
+ * Last packet in queue is reference point (now),
+ * calculate this time bonus and subtract
* from delay.
*/
- delay -= now - netem_skb_cb(skb_peek(list))->time_to_send;
+ delay -= netem_skb_cb(skb_peek_tail(list))->time_to_send - now;
+ delay = max_t(psched_tdiff_t, 0, delay);
now = netem_skb_cb(skb_peek_tail(list))->time_to_send;
}
+
+ delay += packet_len_2_sched_time(skb->len, q);
}
cb->time_to_send = now + delay;
return;
if (atomic_dec_and_test(&key->refcnt)) {
- kfree(key);
+ kzfree(key);
SCTP_DBG_OBJCNT_DEC(keys);
}
}
/* Final destructor for endpoint. */
static void sctp_endpoint_destroy(struct sctp_endpoint *ep)
{
+ int i;
+
SCTP_ASSERT(ep->base.dead, "Endpoint is not dead", return);
/* Free up the HMAC transform. */
sctp_inq_free(&ep->base.inqueue);
sctp_bind_addr_free(&ep->base.bind_addr);
+ for (i = 0; i < SCTP_HOW_MANY_SECRETS; ++i)
+ memset(&ep->secret_key[i], 0, SCTP_SECRET_SIZE);
+
/* Remove and free the port */
if (sctp_sk(ep->base.sk)->bind_hash)
sctp_put_port(ep->base.sk);
/* Free the outqueue structure and any related pending chunks.
*/
-void sctp_outq_teardown(struct sctp_outq *q)
+static void __sctp_outq_teardown(struct sctp_outq *q)
{
struct sctp_transport *transport;
struct list_head *lchunk, *temp;
sctp_chunk_free(chunk);
}
- q->error = 0;
-
/* Throw away any leftover control chunks. */
list_for_each_entry_safe(chunk, tmp, &q->control_chunk_list, list) {
list_del_init(&chunk->list);
}
}
+void sctp_outq_teardown(struct sctp_outq *q)
+{
+ __sctp_outq_teardown(q);
+ sctp_outq_init(q->asoc, q);
+}
+
/* Free the outqueue structure and any related pending chunks. */
void sctp_outq_free(struct sctp_outq *q)
{
/* Throw away leftover chunks. */
- sctp_outq_teardown(q);
+ __sctp_outq_teardown(q);
/* If we were kmalloc()'d, free the memory. */
if (q->malloced)
/* Update the content of current association. */
sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
- sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));
sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev));
+ sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
+ SCTP_STATE(SCTP_STATE_ESTABLISHED));
+ sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));
return SCTP_DISPOSITION_CONSUME;
nomem_ev:
ret = sctp_auth_set_key(sctp_sk(sk)->ep, asoc, authkey);
out:
- kfree(authkey);
+ kzfree(authkey);
return ret;
}
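This hunk and the sctp_auth_key_put() change above both switch kfree() to
kzfree() so key material is scrubbed before the allocation is returned to the
allocator. A minimal userspace analogue of the idea, assuming only libc (real
code should prefer explicit_bzero() where available, since a plain memset()
immediately before free() can be optimized away):

	#include <stdlib.h>
	#include <string.h>

	static void zero_free(void *p, size_t n)
	{
		if (!p)
			return;
		memset(p, 0, n);	/* scrub secrets first */
		free(p);
	}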
void sctp_sysctl_net_unregister(struct net *net)
{
+ struct ctl_table *table;
+
+ table = net->sctp.sysctl_header->ctl_table_arg;
unregister_net_sysctl_table(net->sctp.sysctl_header);
+ kfree(table);
}
static struct ctl_table_header * sctp_sysctl_header;
list_add(&task->u.tk_wait.timer_list, &queue->timer_list.list);
}
+static void rpc_rotate_queue_owner(struct rpc_wait_queue *queue)
+{
+ struct list_head *q = &queue->tasks[queue->priority];
+ struct rpc_task *task;
+
+ if (!list_empty(q)) {
+ task = list_first_entry(q, struct rpc_task, u.tk_wait.list);
+ if (task->tk_owner == queue->owner)
+ list_move_tail(&task->u.tk_wait.list, q);
+ }
+}
+
static void rpc_set_waitqueue_priority(struct rpc_wait_queue *queue, int priority)
{
- queue->priority = priority;
+ if (queue->priority != priority) {
+ /* Fairness: rotate the list when changing priority */
+ rpc_rotate_queue_owner(queue);
+ queue->priority = priority;
+ }
}
static void rpc_set_waitqueue_owner(struct rpc_wait_queue *queue, pid_t pid)
}
/*
- * See net/ipv6/datagram.c : datagram_recv_ctl
+ * See net/ipv6/datagram.c : ip6_datagram_recv_ctl
*/
static int svc_udp_get_dest_address6(struct svc_rqst *rqstp,
struct cmsghdr *cmh)
&iwe, IW_EV_UINT_LEN);
}
- buf = kmalloc(30, GFP_ATOMIC);
+ buf = kmalloc(31, GFP_ATOMIC);
if (buf) {
memset(&iwe, 0, sizeof(iwe));
iwe.cmd = IWEVCUSTOM;
WARN_ON(!hlist_empty(&net->xfrm.policy_inexact[dir]));
htab = &net->xfrm.policy_bydst[dir];
- sz = (htab->hmask + 1);
+ sz = (htab->hmask + 1) * sizeof(struct hlist_head);
WARN_ON(!hlist_empty(htab->table));
xfrm_hash_free(htab->table, sz);
}
u32 diff;
struct xfrm_replay_state_esn *replay_esn = x->replay_esn;
u32 seq = ntohl(net_seq);
- u32 pos = (replay_esn->seq - 1) % replay_esn->replay_window;
+ u32 pos;
if (!replay_esn->replay_window)
return;
+ pos = (replay_esn->seq - 1) % replay_esn->replay_window;
+
if (seq > replay_esn->seq) {
diff = seq - replay_esn->seq;
# Try to match the kernel target.
ifndef CONFIG_64BIT
+ifndef CROSS_COMPILE
# s390 has -m31 flag to build 31 bit binaries
ifndef CONFIG_S390
HOSTLOADLIBES_bpf-fancy += $(MFLAG)
HOSTLOADLIBES_dropper += $(MFLAG)
endif
+endif
# Tell kbuild to always build the programs
always := $(hostprogs-y)
our $Member = qr{->$Ident|\.$Ident|\[[^]]*\]};
our $Lval = qr{$Ident(?:$Member)*};
-our $Float_hex = qr{(?i:0x[0-9a-f]+p-?[0-9]+[fl]?)};
-our $Float_dec = qr{(?i:((?:[0-9]+\.[0-9]*|[0-9]*\.[0-9]+)(?:e-?[0-9]+)?[fl]?))};
-our $Float_int = qr{(?i:[0-9]+e-?[0-9]+[fl]?)};
+our $Float_hex = qr{(?i)0x[0-9a-f]+p-?[0-9]+[fl]?};
+our $Float_dec = qr{(?i)(?:[0-9]+\.[0-9]*|[0-9]*\.[0-9]+)(?:e-?[0-9]+)?[fl]?};
+our $Float_int = qr{(?i)[0-9]+e-?[0-9]+[fl]?};
our $Float = qr{$Float_hex|$Float_dec|$Float_int};
-our $Constant = qr{(?:$Float|(?i:(?:0x[0-9a-f]+|[0-9]+)[ul]*))};
-our $Assignment = qr{(?:\*\=|/=|%=|\+=|-=|<<=|>>=|&=|\^=|\|=|=)};
+our $Constant = qr{$Float|(?i)(?:0x[0-9a-f]+|[0-9]+)[ul]*};
+our $Assignment = qr{\*\=|/=|%=|\+=|-=|<<=|>>=|&=|\^=|\|=|=};
our $Compare = qr{<=|>=|==|!=|<|>};
our $Operators = qr{
<=|>=|==|!=|
{
}
+static int cap_tun_dev_alloc_security(void **security)
+{
+ return 0;
+}
+
+static void cap_tun_dev_free_security(void *security)
+{
+}
+
static int cap_tun_dev_create(void)
{
return 0;
}
-static void cap_tun_dev_post_create(struct sock *sk)
+static int cap_tun_dev_attach_queue(void *security)
+{
+ return 0;
+}
+
+static int cap_tun_dev_attach(struct sock *sk, void *security)
{
+ return 0;
}
-static int cap_tun_dev_attach(struct sock *sk)
+static int cap_tun_dev_open(void *security)
{
return 0;
}
set_to_cap_if_null(ops, secmark_refcount_inc);
set_to_cap_if_null(ops, secmark_refcount_dec);
set_to_cap_if_null(ops, req_classify_flow);
+ set_to_cap_if_null(ops, tun_dev_alloc_security);
+ set_to_cap_if_null(ops, tun_dev_free_security);
set_to_cap_if_null(ops, tun_dev_create);
- set_to_cap_if_null(ops, tun_dev_post_create);
+ set_to_cap_if_null(ops, tun_dev_open);
+ set_to_cap_if_null(ops, tun_dev_attach_queue);
set_to_cap_if_null(ops, tun_dev_attach);
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
struct dev_cgroup *dev_cgroup;
dev_cgroup = cgroup_to_devcgroup(cgroup);
+ mutex_lock(&devcgroup_mutex);
dev_exception_clean(dev_cgroup);
+ mutex_unlock(&devcgroup_mutex);
kfree(dev_cgroup);
}
rc = __vfs_setxattr_noperm(dentry, XATTR_NAME_EVM,
&xattr_data,
sizeof(xattr_data), 0);
- }
- else if (rc == -ENODATA)
+ } else if (rc == -ENODATA && inode->i_op->removexattr) {
rc = inode->i_op->removexattr(dentry, XATTR_NAME_EVM);
+ }
return rc;
}
}
EXPORT_SYMBOL(security_secmark_refcount_dec);
+int security_tun_dev_alloc_security(void **security)
+{
+ return security_ops->tun_dev_alloc_security(security);
+}
+EXPORT_SYMBOL(security_tun_dev_alloc_security);
+
+void security_tun_dev_free_security(void *security)
+{
+ security_ops->tun_dev_free_security(security);
+}
+EXPORT_SYMBOL(security_tun_dev_free_security);
+
int security_tun_dev_create(void)
{
return security_ops->tun_dev_create();
}
EXPORT_SYMBOL(security_tun_dev_create);
-void security_tun_dev_post_create(struct sock *sk)
+int security_tun_dev_attach_queue(void *security)
{
- return security_ops->tun_dev_post_create(sk);
+ return security_ops->tun_dev_attach_queue(security);
}
-EXPORT_SYMBOL(security_tun_dev_post_create);
+EXPORT_SYMBOL(security_tun_dev_attach_queue);
-int security_tun_dev_attach(struct sock *sk)
+int security_tun_dev_attach(struct sock *sk, void *security)
{
- return security_ops->tun_dev_attach(sk);
+ return security_ops->tun_dev_attach(sk, security);
}
EXPORT_SYMBOL(security_tun_dev_attach);
+int security_tun_dev_open(void *security)
+{
+ return security_ops->tun_dev_open(security);
+}
+EXPORT_SYMBOL(security_tun_dev_open);
+
#endif /* CONFIG_SECURITY_NETWORK */
#ifdef CONFIG_SECURITY_NETWORK_XFRM
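Taken together, the five hooks replace the old create/post_create pair with a
per-device security lifecycle. A hedged sketch of the ordering a driver such
as tun would follow (the driver-side hunks are not part of this excerpt, and
error handling is elided):

	void *sec;

	if (security_tun_dev_alloc_security(&sec))	/* once per device */
		return;
	if (!security_tun_dev_open(sec) &&		/* on each open/relabel */
	    !security_tun_dev_attach_queue(sec))	/* on each queue attach */
		security_tun_dev_attach(sk, sec);	/* label the queue's sock */
	/* ... */
	security_tun_dev_free_security(sec);		/* on device destroy */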
fl->flowi_secid = req->secid;
}
+static int selinux_tun_dev_alloc_security(void **security)
+{
+ struct tun_security_struct *tunsec;
+
+ tunsec = kzalloc(sizeof(*tunsec), GFP_KERNEL);
+ if (!tunsec)
+ return -ENOMEM;
+ tunsec->sid = current_sid();
+
+ *security = tunsec;
+ return 0;
+}
+
+static void selinux_tun_dev_free_security(void *security)
+{
+ kfree(security);
+}
+
static int selinux_tun_dev_create(void)
{
u32 sid = current_sid();
NULL);
}
-static void selinux_tun_dev_post_create(struct sock *sk)
+static int selinux_tun_dev_attach_queue(void *security)
{
+ struct tun_security_struct *tunsec = security;
+
+ return avc_has_perm(current_sid(), tunsec->sid, SECCLASS_TUN_SOCKET,
+ TUN_SOCKET__ATTACH_QUEUE, NULL);
+}
+
+static int selinux_tun_dev_attach(struct sock *sk, void *security)
+{
+ struct tun_security_struct *tunsec = security;
struct sk_security_struct *sksec = sk->sk_security;
	/* we don't currently perform any NetLabel based labeling here and it
	 * could cause confusion for a TUN user that had no idea network labeling
* protocols were being used */
- /* see the comments in selinux_tun_dev_create() about why we don't use
- * the sockcreate SID here */
-
- sksec->sid = current_sid();
+ sksec->sid = tunsec->sid;
sksec->sclass = SECCLASS_TUN_SOCKET;
+
+ return 0;
}
-static int selinux_tun_dev_attach(struct sock *sk)
+static int selinux_tun_dev_open(void *security)
{
- struct sk_security_struct *sksec = sk->sk_security;
+ struct tun_security_struct *tunsec = security;
u32 sid = current_sid();
int err;
- err = avc_has_perm(sid, sksec->sid, SECCLASS_TUN_SOCKET,
+ err = avc_has_perm(sid, tunsec->sid, SECCLASS_TUN_SOCKET,
TUN_SOCKET__RELABELFROM, NULL);
if (err)
return err;
TUN_SOCKET__RELABELTO, NULL);
if (err)
return err;
-
- sksec->sid = sid;
+ tunsec->sid = sid;
return 0;
}
.secmark_refcount_inc = selinux_secmark_refcount_inc,
.secmark_refcount_dec = selinux_secmark_refcount_dec,
.req_classify_flow = selinux_req_classify_flow,
+ .tun_dev_alloc_security = selinux_tun_dev_alloc_security,
+ .tun_dev_free_security = selinux_tun_dev_free_security,
.tun_dev_create = selinux_tun_dev_create,
- .tun_dev_post_create = selinux_tun_dev_post_create,
+ .tun_dev_attach_queue = selinux_tun_dev_attach_queue,
.tun_dev_attach = selinux_tun_dev_attach,
+ .tun_dev_open = selinux_tun_dev_open,
#ifdef CONFIG_SECURITY_NETWORK_XFRM
.xfrm_policy_alloc_security = selinux_xfrm_policy_alloc,
NULL } },
{ "kernel_service", { "use_as_override", "create_files_as", NULL } },
{ "tun_socket",
- { COMMON_SOCK_PERMS, NULL } },
+ { COMMON_SOCK_PERMS, "attach_queue", NULL } },
{ NULL }
};
u16 sclass; /* sock security class */
};
+struct tun_security_struct {
+ u32 sid; /* SID for the tun device sockets */
+};
+
struct key_security_struct {
u32 sid; /* SID of key */
};
hda_set_power_state(codec, AC_PWRST_D0);
restore_shutup_pins(codec);
hda_exec_init_verbs(codec);
+ snd_hda_jack_set_dirty_all(codec);
if (codec->patch_ops.resume)
codec->patch_ops.resume(codec);
else {
if (codec->jackpoll_interval)
hda_jackpoll_work(&codec->jackpoll_work.work);
- else {
- snd_hda_jack_set_dirty_all(codec);
+ else
snd_hda_jack_report_sync(codec);
- }
codec->in_pm = 0;
snd_hda_power_down(codec); /* flag down before returning */
#define get_azx_dev(substream) (substream->runtime->private_data)
#ifdef CONFIG_X86
-static void __mark_pages_wc(struct azx *chip, void *addr, size_t size, bool on)
+static void __mark_pages_wc(struct azx *chip, struct snd_dma_buffer *dmab, bool on)
{
+ int pages;
+
if (azx_snoop(chip))
return;
- if (addr && size) {
- int pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ if (!dmab || !dmab->area || !dmab->bytes)
+ return;
+
+#ifdef CONFIG_SND_DMA_SGBUF
+ if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_SG) {
+ struct snd_sg_buf *sgbuf = dmab->private_data;
if (on)
- set_memory_wc((unsigned long)addr, pages);
+ set_pages_array_wc(sgbuf->page_table, sgbuf->pages);
else
- set_memory_wb((unsigned long)addr, pages);
+ set_pages_array_wb(sgbuf->page_table, sgbuf->pages);
+ return;
}
+#endif
+
+ pages = (dmab->bytes + PAGE_SIZE - 1) >> PAGE_SHIFT;
+ if (on)
+ set_memory_wc((unsigned long)dmab->area, pages);
+ else
+ set_memory_wb((unsigned long)dmab->area, pages);
}
static inline void mark_pages_wc(struct azx *chip, struct snd_dma_buffer *buf,
bool on)
{
- __mark_pages_wc(chip, buf->area, buf->bytes, on);
+ __mark_pages_wc(chip, buf, on);
}
static inline void mark_runtime_wc(struct azx *chip, struct azx_dev *azx_dev,
- struct snd_pcm_runtime *runtime, bool on)
+ struct snd_pcm_substream *substream, bool on)
{
if (azx_dev->wc_marked != on) {
- __mark_pages_wc(chip, runtime->dma_area, runtime->dma_bytes, on);
+ __mark_pages_wc(chip, snd_pcm_get_dma_buf(substream), on);
azx_dev->wc_marked = on;
}
}
{
}
static inline void mark_runtime_wc(struct azx *chip, struct azx_dev *azx_dev,
- struct snd_pcm_runtime *runtime, bool on)
+ struct snd_pcm_substream *substream, bool on)
{
}
#endif
{
struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
struct azx *chip = apcm->chip;
- struct snd_pcm_runtime *runtime = substream->runtime;
struct azx_dev *azx_dev = get_azx_dev(substream);
int ret;
- mark_runtime_wc(chip, azx_dev, runtime, false);
+ mark_runtime_wc(chip, azx_dev, substream, false);
azx_dev->bufsize = 0;
azx_dev->period_bytes = 0;
azx_dev->format_val = 0;
params_buffer_bytes(hw_params));
if (ret < 0)
return ret;
- mark_runtime_wc(chip, azx_dev, runtime, true);
+ mark_runtime_wc(chip, azx_dev, substream, true);
return ret;
}
struct azx_pcm *apcm = snd_pcm_substream_chip(substream);
struct azx_dev *azx_dev = get_azx_dev(substream);
struct azx *chip = apcm->chip;
- struct snd_pcm_runtime *runtime = substream->runtime;
struct hda_pcm_stream *hinfo = apcm->hinfo[substream->stream];
/* reset BDL address */
snd_hda_codec_cleanup(apcm->codec, hinfo, substream);
- mark_runtime_wc(chip, azx_dev, runtime, false);
+ mark_runtime_wc(chip, azx_dev, substream, false);
return snd_pcm_lib_free_pages(substream);
}
/* 5 Series/3400 */
{ PCI_DEVICE(0x8086, 0x3b56),
.driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH },
- /* SCH */
+ /* Poulsbo */
{ PCI_DEVICE(0x8086, 0x811b),
- .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_SCH_SNOOP |
- AZX_DCAPS_BUFSIZE | AZX_DCAPS_POSFIX_LPIB }, /* Poulsbo */
+ .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM },
+ /* Oaktrail */
{ PCI_DEVICE(0x8086, 0x080a),
- .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_SCH_SNOOP |
- AZX_DCAPS_BUFSIZE | AZX_DCAPS_POSFIX_LPIB }, /* Oaktrail */
+ .driver_data = AZX_DRIVER_SCH | AZX_DCAPS_INTEL_PCH_NOPM },
/* ICH */
{ PCI_DEVICE(0x8086, 0x2668),
.driver_data = AZX_DRIVER_ICH | AZX_DCAPS_OLD_SSYNC |
.patch = patch_conexant_auto },
{ .id = 0x14f15111, .name = "CX20753/4",
.patch = patch_conexant_auto },
+ { .id = 0x14f15113, .name = "CX20755",
+ .patch = patch_conexant_auto },
+ { .id = 0x14f15114, .name = "CX20756",
+ .patch = patch_conexant_auto },
+ { .id = 0x14f15115, .name = "CX20757",
+ .patch = patch_conexant_auto },
{} /* terminator */
};
MODULE_ALIAS("snd-hda-codec-id:14f1510f");
MODULE_ALIAS("snd-hda-codec-id:14f15110");
MODULE_ALIAS("snd-hda-codec-id:14f15111");
+MODULE_ALIAS("snd-hda-codec-id:14f15113");
+MODULE_ALIAS("snd-hda-codec-id:14f15114");
+MODULE_ALIAS("snd-hda-codec-id:14f15115");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Conexant HD-audio codec");
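The MODULE_ALIAS() lines mirror the new entries in the preset table: when the HDA core meets an unbound codec it requests a module by an ID-derived alias, so every table entry needs a matching alias for autoloading to work. A sketch of how such an alias string is formed, assuming only that the prefix and 8-hex-digit format match the lines above (the request path itself is kernel-internal):

    #include <stdio.h>

    int main(void)
    {
        const unsigned int codec_ids[] = { 0x14f15113, 0x14f15114, 0x14f15115 };
        char alias[64];
        unsigned int i;

        for (i = 0; i < 3; i++) {
            /* vendor/device ID printed as 8 hex digits after the prefix */
            snprintf(alias, sizeof(alias),
                     "snd-hda-codec-id:%08x", codec_ids[i]);
            puts(alias); /* matches the MODULE_ALIAS() strings above */
        }
        return 0;
    }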
SND_PCI_QUIRK(0x1584, 0x9077, "Uniwill P53", ALC880_FIXUP_VOL_KNOB),
SND_PCI_QUIRK(0x161f, 0x203d, "W810", ALC880_FIXUP_W810),
SND_PCI_QUIRK(0x161f, 0x205d, "Medion Rim 2150", ALC880_FIXUP_MEDION_RIM),
+ SND_PCI_QUIRK(0x1631, 0xe011, "PB 13201056", ALC880_FIXUP_6ST),
SND_PCI_QUIRK(0x1734, 0x107c, "FSC F1734", ALC880_FIXUP_F1734),
SND_PCI_QUIRK(0x1734, 0x1094, "FSC Amilo M1451G", ALC880_FIXUP_FUJITSU),
SND_PCI_QUIRK(0x1734, 0x10ac, "FSC AMILO Xi 1526", ALC880_FIXUP_F1734),
};
static const struct snd_pci_quirk alc268_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x015b, "Acer AOA 150 (ZG5)", ALC268_FIXUP_INV_DMIC),
/* below is codec SSID since multiple Toshiba laptops have the
* same PCI SSID 1179:ff00
*/
SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC),
SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_MIC2_MUTE_LED),
SND_PCI_QUIRK(0x103c, 0x1972, "HP Pavilion 17", ALC269_FIXUP_MIC1_MUTE_LED),
+ SND_PCI_QUIRK(0x103c, 0x1977, "HP Pavilion 14", ALC269_FIXUP_MIC1_MUTE_LED),
SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_DMIC),
SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_DMIC),
SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
SND_PCI_QUIRK_VENDOR(0x104d, "Sony VAIO", ALC269_FIXUP_SONY_VAIO),
SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
SND_PCI_QUIRK_VENDOR(0x1025, "Acer Aspire", ALC271_FIXUP_DMIC),
SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
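These tables match on the PCI subsystem vendor/device ID, and the first hit wins; SND_PCI_QUIRK_VENDOR() acts as a vendor-wide catch-all, which is why the model-specific Acer entries sit above the generic 0x1025 line. A runnable sketch of that lookup order, with illustrative struct and field names rather than the ALSA internals:

    #include <stdio.h>
    #include <stdint.h>

    struct quirk {
        uint16_t subvendor;
        uint16_t subdevice;   /* 0 acts as a vendor-wide wildcard here */
        const char *name;
    };

    static const struct quirk table[] = {
        { 0x1025, 0x0740, "Acer AO725" },
        { 0x1025, 0x0000, "Acer Aspire (any model)" },
        { 0, 0, NULL }
    };

    static const struct quirk *lookup(uint16_t sv, uint16_t sd)
    {
        const struct quirk *q;

        /* first match wins, so specific entries must precede wildcards */
        for (q = table; q->name; q++)
            if (q->subvendor == sv &&
                (q->subdevice == 0 || q->subdevice == sd))
                return q;
        return NULL;
    }

    int main(void)
    {
        printf("%s\n", lookup(0x1025, 0x0740)->name); /* Acer AO725 */
        printf("%s\n", lookup(0x1025, 0x0999)->name); /* the wildcard */
        return 0;
    }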
}
sr_val = i;
- lrclk = snd_soc_params_to_bclk(params) / params_rate(params);
+ lrclk = rates[bclk] / params_rate(params);
arizona_aif_dbg(dai, "BCLK %dHz LRCLK %dHz\n",
rates[bclk], rates[bclk] / lrclk);
id, ret);
}
+ regmap_update_bits(arizona->regmap, fll->base + 1,
+ ARIZONA_FLL1_FREERUN, 0);
+
return 0;
}
EXPORT_SYMBOL_GPL(arizona_init_fll);
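The added call clears only the free-run bit and leaves the rest of the register alone, because regmap_update_bits() performs a masked read-modify-write. In miniature, with an illustrative register value and mask:

    #include <stdio.h>
    #include <stdint.h>

    #define FLL_FREERUN 0x0001 /* illustrative mask, not the real bit */

    static uint16_t update_bits(uint16_t old, uint16_t mask, uint16_t val)
    {
        return (old & ~mask) | (val & mask);
    }

    int main(void)
    {
        uint16_t reg = 0x8003; /* FREERUN set plus unrelated bits */

        reg = update_bits(reg, FLL_FREERUN, 0);
        printf("0x%04x\n", reg); /* 0x8002: other bits preserved */
        return 0;
    }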
"EQR",
"LHPF1",
"LHPF2",
- "LHPF3",
- "LHPF4",
"DSP1.1",
"DSP1.2",
"DSP1.3",
0x25,
0x50, /* EQ */
0x51,
- 0x52,
0x60, /* LHPF1 */
0x61, /* LHPF2 */
0x68, /* DSP1 */
static const struct soc_enum wm5102_aec_loopback =
SOC_VALUE_ENUM_SINGLE(ARIZONA_DAC_AEC_CONTROL_1,
- ARIZONA_AEC_LOOPBACK_SRC_SHIFT,
- ARIZONA_AEC_LOOPBACK_SRC_MASK,
+ ARIZONA_AEC_LOOPBACK_SRC_SHIFT, 0xf,
ARRAY_SIZE(wm5102_aec_loopback_texts),
wm5102_aec_loopback_texts,
wm5102_aec_loopback_values);
static const struct soc_enum wm5110_aec_loopback =
SOC_VALUE_ENUM_SINGLE(ARIZONA_DAC_AEC_CONTROL_1,
- ARIZONA_AEC_LOOPBACK_SRC_SHIFT,
- ARIZONA_AEC_LOOPBACK_SRC_MASK,
+ ARIZONA_AEC_LOOPBACK_SRC_SHIFT, 0xf,
ARRAY_SIZE(wm5110_aec_loopback_texts),
wm5110_aec_loopback_texts,
wm5110_aec_loopback_values);
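The mask argument of SOC_VALUE_ENUM_SINGLE() is applied after the register value has been shifted down, so it must be the field-width mask 0xf rather than the pre-shifted ARIZONA_AEC_LOOPBACK_SRC_MASK. A small demonstration of why a pre-shifted mask truncates the extracted value (shift and register contents are illustrative):

    #include <stdio.h>
    #include <stdint.h>

    #define SRC_SHIFT        4
    #define SRC_MASK_SHIFTED 0xf0 /* register-level (pre-shifted) mask */

    int main(void)
    {
        uint16_t reg = 0x0050; /* loopback source field holds 5 */

        /* masking after the shift needs the unshifted field mask: */
        printf("0x%x\n", (reg >> SRC_SHIFT) & 0xf);              /* 0x5 */
        /* a pre-shifted mask zeroes the value it was meant to keep: */
        printf("0x%x\n", (reg >> SRC_SHIFT) & SRC_MASK_SHIFTED); /* 0x0 */
        return 0;
    }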
if (reg) {
buf = kmemdup(region->data, le32_to_cpu(region->len),
- GFP_KERNEL);
+ GFP_KERNEL | GFP_DMA);
if (!buf) {
adsp_err(dsp, "Out of memory\n");
return -ENOMEM;
hdr = (void*)&firmware->data[0];
if (memcmp(hdr->magic, "WMDR", 4) != 0) {
adsp_err(dsp, "%s: invalid magic\n", file);
- return -EINVAL;
+ goto out_fw;
}
adsp_dbg(dsp, "%s: v%d.%d.%d\n", file,
if (reg) {
buf = kmemdup(blk->data, le32_to_cpu(blk->len),
- GFP_KERNEL);
+ GFP_KERNEL | GFP_DMA);
if (!buf) {
adsp_err(dsp, "Out of memory\n");
return -ENOMEM;
.pcm_free = imx_pcm_free,
};
-static int imx_soc_platform_probe(struct platform_device *pdev)
+int imx_pcm_dma_init(struct platform_device *pdev)
{
return snd_soc_register_platform(&pdev->dev, &imx_soc_platform_mx2);
}
-
-static int imx_soc_platform_remove(struct platform_device *pdev)
-{
- snd_soc_unregister_platform(&pdev->dev);
- return 0;
-}
-
-static struct platform_driver imx_pcm_driver = {
- .driver = {
- .name = "imx-pcm-audio",
- .owner = THIS_MODULE,
- },
- .probe = imx_soc_platform_probe,
- .remove = imx_soc_platform_remove,
-};
-
-module_platform_driver(imx_pcm_driver);
-MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:imx-pcm-audio");
.pcm_free = imx_pcm_fiq_free,
};
-static int imx_soc_platform_probe(struct platform_device *pdev)
+int imx_pcm_fiq_init(struct platform_device *pdev)
{
struct imx_ssi *ssi = platform_get_drvdata(pdev);
int ret;
return ret;
}
-
-static int imx_soc_platform_remove(struct platform_device *pdev)
-{
- snd_soc_unregister_platform(&pdev->dev);
- return 0;
-}
-
-static struct platform_driver imx_pcm_driver = {
- .driver = {
- .name = "imx-fiq-pcm-audio",
- .owner = THIS_MODULE,
- },
-
- .probe = imx_soc_platform_probe,
- .remove = imx_soc_platform_remove,
-};
-
-module_platform_driver(imx_pcm_driver);
-
-MODULE_LICENSE("GPL");
}
EXPORT_SYMBOL_GPL(imx_pcm_free);
+static int imx_pcm_probe(struct platform_device *pdev)
+{
+ if (strcmp(pdev->id_entry->name, "imx-fiq-pcm-audio") == 0)
+ return imx_pcm_fiq_init(pdev);
+
+ return imx_pcm_dma_init(pdev);
+}
+
+static int imx_pcm_remove(struct platform_device *pdev)
+{
+ snd_soc_unregister_platform(&pdev->dev);
+ return 0;
+}
+
+static struct platform_device_id imx_pcm_devtype[] = {
+ { .name = "imx-pcm-audio", },
+ { .name = "imx-fiq-pcm-audio", },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(platform, imx_pcm_devtype);
+
+static struct platform_driver imx_pcm_driver = {
+ .driver = {
+ .name = "imx-pcm",
+ .owner = THIS_MODULE,
+ },
+ .id_table = imx_pcm_devtype,
+ .probe = imx_pcm_probe,
+ .remove = imx_pcm_remove,
+};
+module_platform_driver(imx_pcm_driver);
+
MODULE_DESCRIPTION("Freescale i.MX PCM driver");
MODULE_AUTHOR("Sascha Hauer <s.hauer@pengutronix.de>");
MODULE_LICENSE("GPL");
int imx_pcm_new(struct snd_soc_pcm_runtime *rtd);
void imx_pcm_free(struct snd_pcm *pcm);
+#ifdef CONFIG_SND_SOC_IMX_PCM_DMA
+int imx_pcm_dma_init(struct platform_device *pdev);
+#else
+static inline int imx_pcm_dma_init(struct platform_device *pdev)
+{
+ return -ENODEV;
+}
+#endif
+
+#ifdef CONFIG_SND_SOC_IMX_PCM_FIQ
+int imx_pcm_fiq_init(struct platform_device *pdev);
+#else
+static inline int imx_pcm_fiq_init(struct platform_device *pdev)
+{
+ return -ENODEV;
+}
+#endif
+
#endif /* _IMX_PCM_H */
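The two former platform drivers collapse into one: probe dispatches on the name matched from the id table, and the header supplies inline -ENODEV stubs so either back end can be configured out without breaking the link. The shape of that pattern as a runnable userspace sketch, with illustrative names:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    #define CONFIG_DMA_BACKEND 1 /* pretend only the DMA back end is built */

    #if CONFIG_DMA_BACKEND
    static int dma_init(void) { puts("DMA back end"); return 0; }
    #else
    static int dma_init(void) { return -ENODEV; }
    #endif

    /* FIQ back end not configured in this build: stub, as in the header */
    static int fiq_init(void) { return -ENODEV; }

    static int probe(const char *id_name)
    {
        if (strcmp(id_name, "imx-fiq-pcm-audio") == 0)
            return fiq_init();
        return dma_init();
    }

    int main(void)
    {
        printf("%d\n", probe("imx-pcm-audio"));     /* 0 */
        printf("%d\n", probe("imx-fiq-pcm-audio")); /* -19 (-ENODEV) */
        return 0;
    }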
if (SND_SOC_DAPM_EVENT_ON(event)) {
if (w->invert & SND_SOC_DAPM_REGULATOR_BYPASS) {
- ret = regulator_allow_bypass(w->regulator, true);
+ ret = regulator_allow_bypass(w->regulator, false);
if (ret != 0)
dev_warn(w->dapm->dev,
"ASoC: Failed to bypass %s: %d\n",
return regulator_enable(w->regulator);
} else {
if (w->invert & SND_SOC_DAPM_REGULATOR_BYPASS) {
- ret = regulator_allow_bypass(w->regulator, false);
+ ret = regulator_allow_bypass(w->regulator, true);
if (ret != 0)
dev_warn(w->dapm->dev,
"ASoC: Failed to unbypass %s: %d\n",
w->name, ret);
return NULL;
}
+
+ if (w->invert & SND_SOC_DAPM_REGULATOR_BYPASS) {
+ ret = regulator_allow_bypass(w->regulator, true);
+ if (ret != 0)
+ dev_warn(w->dapm->dev,
+ "ASoC: Failed to unbypass %s: %d\n",
+ w->name, ret);
+ }
break;
case snd_soc_dapm_clock_supply:
#ifdef CONFIG_CLKDEV_LOOKUP
}
channels = (hdr->bLength - 7) / csize - 1;
bmaControls = hdr->bmaControls;
+ if (hdr->bLength < 7 + csize) {
+ snd_printk(KERN_ERR "usbaudio: unit %u: "
+ "invalid UAC_FEATURE_UNIT descriptor\n",
+ unitid);
+ return -EINVAL;
+ }
} else {
struct uac2_feature_unit_descriptor *ftr = _ftr;
csize = 4;
channels = (hdr->bLength - 6) / 4 - 1;
bmaControls = ftr->bmaControls;
- }
-
- if (hdr->bLength < 7 || !csize || hdr->bLength < 7 + csize) {
- snd_printk(KERN_ERR "usbaudio: unit %u: invalid UAC_FEATURE_UNIT descriptor\n", unitid);
- return -EINVAL;
+ if (hdr->bLength < 6 + csize) {
+ snd_printk(KERN_ERR "usbaudio: unit %u: "
+ "invalid UAC_FEATURE_UNIT descriptor\n",
+ unitid);
+ return -EINVAL;
+ }
}
/* parse the source unit */
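The relocated checks validate bLength against the layout actually being parsed: a UAC1 feature unit is 7 fixed bytes plus csize bytes of control bitmap per channel, master included (6 fixed bytes in the UAC2 layout), so anything shorter than 7 + csize (respectively 6 + csize) cannot hold even the master-channel bitmap, and the channels value computed just above would go negative. A runnable sketch of the UAC1 case:

    #include <stdio.h>

    /* 7 fixed bytes, then (channels + 1) bitmap entries of csize bytes;
     * a descriptor must at least cover the master-channel bitmap. */
    static int uac1_feature_unit_ok(unsigned int bLength, unsigned int csize)
    {
        return csize > 0 && bLength >= 7 + csize;
    }

    int main(void)
    {
        printf("%d\n", uac1_feature_unit_ok(7, 1)); /* 0: no bmaControls[0] */
        printf("%d\n", uac1_feature_unit_ok(8, 1)); /* 1: master bitmap fits */
        return 0;
    }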
include/linux/swab.h
arch/*/include/asm/unistd*.h
arch/*/include/asm/perf_regs.h
+arch/*/include/uapi/asm/unistd*.h
+arch/*/include/uapi/asm/perf_regs.h
arch/*/lib/memcpy*.S
arch/*/lib/memset*.S
include/linux/poison.h
include/linux/magic.h
include/linux/hw_breakpoint.h
+include/linux/rbtree_augmented.h
+include/uapi/linux/perf_event.h
+include/uapi/linux/const.h
+include/uapi/linux/swab.h
+include/uapi/linux/hw_breakpoint.h
arch/x86/include/asm/svm.h
arch/x86/include/asm/vmx.h
arch/x86/include/asm/kvm_host.h
+arch/x86/include/uapi/asm/svm.h
+arch/x86/include/uapi/asm/vmx.h
+arch/x86/include/uapi/asm/kvm.h
-e s/arm.*/arm/ -e s/sa110/arm/ \
-e s/s390x/s390/ -e s/parisc64/parisc/ \
-e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
- -e s/sh[234].*/sh/ )
+ -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ )
NO_PERF_REGS := 1
CC = $(CROSS_COMPILE)gcc
--- /dev/null
+slabinfo
+page-types