* 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
kernel/softirq.c: fix sparse warning
rcu: Make hierarchical RCU less IPI-happy
iii. Plugging the queue to batch requests in anticipation of opportunities for
merge/sort optimizations
-This is just the same as in 2.4 so far, though per-device unplugging
-support is anticipated for 2.5. Also with a priority-based i/o scheduler,
-such decisions could be based on request priorities.
-
Plugging is an approach that the current i/o scheduling algorithm resorts to so
that it collects up enough requests in the queue to be able to take
advantage of the sorting/merging logic in the elevator. If the
queue is empty when a request comes in, then it plugs the request queue
-(sort of like plugging the bottom of a vessel to get fluid to build up)
+(sort of like plugging the drain of a bath tub to let fluid build up)
till it fills up with a few more requests, before starting to service
the requests. This provides an opportunity to merge/sort the requests before
passing them down to the device. There are various conditions when the queue is
unplugged (to open up the flow again), either through a scheduled task or
on demand. For example wait_on_buffer sets the unplugging going
-(by running tq_disk) so the read gets satisfied soon. So in the read case,
-the queue gets explicitly unplugged as part of waiting for completion,
-in fact all queues get unplugged as a side-effect.
+through sync_buffer() running blk_run_address_space(mapping). Or the caller
+can do it explicitly through blk_unplug(bdev). So in the read case,
+the queue gets explicitly unplugged as part of waiting for completion on that
+buffer. For page driven IO, the address space ->sync_page() takes care of
+doing the blk_run_address_space().
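
As an illustration, here is a minimal sketch of the buffer_head read path
described above, mirroring the pattern used in fs/buffer.c of 2.6-era
kernels (the helper name read_buffer_sync is hypothetical, everything else
is the standard buffer_head API):

	#include <linux/fs.h>
	#include <linux/buffer_head.h>

	/* Assumes bh is mapped and not up to date; always issues the read. */
	static struct buffer_head *read_buffer_sync(struct buffer_head *bh)
	{
		lock_buffer(bh);
		bh->b_end_io = end_buffer_read_sync;
		get_bh(bh);
		submit_bh(READ, bh);	/* request may sit in a plugged queue */
		wait_on_buffer(bh);	/* sync_buffer() unplugs the queue via
					   blk_run_address_space() */
		return buffer_uptodate(bh) ? bh : NULL;
	}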
Aside:
This is kind of controversial territory, as it's not clear if plugging is
always the right thing to do. Also, now that we have multi-page bios being
queued in one shot, we may not need to wait to merge a big request from the
broken up pieces coming by.
- Per-queue granularity unplugging (still a Todo) may help reduce some of the
- concerns with just a single tq_disk flush approach. Something like
- blk_kick_queue() to unplug a specific queue (right away ?)
- or optionally, all queues, is in the plan.
-
4.4 I/O contexts
I/O contexts provide a dynamically allocated per process data area. They may
be used in I/O schedulers, and in the block layer (could be used for IO stats,
To add ARP targets:
# echo +192.168.0.100 > /sys/class/net/bond0/bonding/arp_ip_target
# echo +192.168.0.101 > /sys/class/net/bond0/bonding/arp_ip_target
- NOTE: up to 10 target addresses may be specified.
+ NOTE: up to 16 target addresses may be specified.
To remove an ARP target:
# echo -192.168.0.100 > /sys/class/net/bond0/bonding/arp_ip_target
Recommended properties :
- - compatible : Should be "fsl-i2c" for parts compatible with
- Freescale I2C specifications.
+ - compatible : compatibility list with 2 entries, the first should
+ be "fsl,CHIP-i2c" where CHIP is the name of a compatible processor,
+ e.g. mpc8313, mpc8543, mpc8544, mpc5200 or mpc5200b. The second one
+ should be "fsl-i2c".
- interrupts : <a b> where a is the interrupt number and b is a
field that represents an encoding of the sense and level
information for the interrupt. This should be encoded based on
the type of interrupt controller you have.
- interrupt-parent : the phandle for the interrupt controller that
services interrupts for this device.
- - dfsrr : boolean; if defined, indicates that this I2C device has
- a digital filter sampling rate register
- - fsl5200-clocking : boolean; if defined, indicated that this device
- uses the FSL 5200 clocking mechanism.
-
-Example :
- i2c@3000 {
- interrupt-parent = <40000>;
- interrupts = <1b 3>;
- reg = <3000 18>;
- device_type = "i2c";
- compatible = "fsl-i2c";
- dfsrr;
+ - fsl,preserve-clocking : boolean; if defined, the clock settings
+ from the bootloader are preserved (not touched).
+ - clock-frequency : desired I2C bus clock frequency in Hz.
+
+Examples :
+
+ i2c@3d00 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "fsl,mpc5200b-i2c","fsl,mpc5200-i2c","fsl-i2c";
+ cell-index = <0>;
+ reg = <0x3d00 0x40>;
+ interrupts = <2 15 0>;
+ interrupt-parent = <&mpc5200_pic>;
+ fsl,preserve-clocking;
};
+
+ i2c@3100 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ cell-index = <1>;
+ compatible = "fsl,mpc8544-i2c", "fsl-i2c";
+ reg = <0x3100 0x100>;
+ interrupts = <43 2>;
+ interrupt-parent = <&mpic>;
+ clock-frequency = <400000>;
+ };
+
What `model` option values are available depends on the codec chip.
Check your codec chip from the codec proc file (see "Codec Proc-File"
section below). It will show the vendor/product name of your codec
-chip. Then, see Documentation/sound/alsa/HD-Audio-Modelstxt file,
+chip. Then, see Documentation/sound/alsa/HD-Audio-Models.txt file,
the section of HD-audio driver. You can find a list of codecs
and `model` options belonging to each codec. For example, for Realtek
ALC262 codec chip, pass `model=ultra` for devices that are compatible
Thus, the first thing you can do for any brand-new, unsupported and
non-working HD-audio hardware is to check HD-audio codec and several
-different `model` option values. If you have a luck, some of them
+different `model` option values. If you have any luck, some of them
might suit with your device well.
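
For instance, assuming the snd-hda-intel driver is in use, a chosen value can
be made persistent with a modprobe configuration line (file location varies by
distribution), e.g.:

	options snd-hda-intel model=ultra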
Some codecs such as ALC880 have a special model option `model=test`.
W: http://www.monstr.eu/fdt/
T: git git://git.monstr.eu/linux-2.6-microblaze.git
S: Supported
+F: arch/microblaze/
MICROTEK X6 SCANNER
P: Oliver Neukum
F: drivers/block/nbd.c
F: include/linux/nbd.h
-NETWORK DEVICE DRIVERS
-P: Jeff Garzik
-M: jgarzik@pobox.com
-L: netdev@vger.kernel.org
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/netdev-2.6.git
-S: Maintained
-F: drivers/net/
-
NETWORKING [GENERAL]
-P: Networking Team
-M: netdev@vger.kernel.org
+P: David S. Miller
+M: davem@davemloft.net
L: netdev@vger.kernel.org
W: http://linux-net.osdl.org/
+T: git kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6.git
S: Maintained
F: net/
F: include/net/
W: http://www.linux-sh.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6.git
S: Supported
+F: Documentation/sh/
F: arch/sh/
+F: drivers/sh/
SUSPEND TO RAM
P: Len Brown
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 30
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
NAME = Temporary Tasmanian Devil
# *DOCUMENTATION*
-e s/arm.*/arm/ -e s/sa110/arm/ \
-e s/s390x/s390/ -e s/parisc64/parisc/ \
-e s/ppc.*/powerpc/ -e s/mips.*/mips/ \
- -e s/sh.*/sh/ )
+ -e s/sh[234].*/sh/ )
# Cross compiling and selecting different set of gcc/bin-utils
# ---------------------------------------------------------------------------
SRCARCH := sparc
endif
+# Additional ARCH settings for sh
+ifeq ($(ARCH),sh64)
+ SRCARCH := sh
+endif
+
# Where to locate arch specific headers
hdr-arch := $(SRCARCH)
#include <asm/byteorder.h>
#include <asm/page.h>
#include <linux/types.h>
-#include <asm/page.h>
#define IO_SPACE_LIMIT (0xFFFFFFFF)
#define SO_MARK 36
+#define SO_TIMESTAMPING 37
+#define SCM_TIMESTAMPING SO_TIMESTAMPING
+
#endif /* _ASM_MICROBLAZE_SOCKET_H */
{
static atomic_t bus_no_reg_magic;
struct device_node *node = dev->node;
- char *name = dev->dev.bus_id;
const u32 *reg;
u64 addr;
int magic;
if (reg) {
addr = of_translate_address(node, reg);
if (addr != OF_BAD_ADDR) {
- snprintf(name, BUS_ID_SIZE,
- "%llx.%s", (unsigned long long)addr,
- node->name);
+ dev_set_name(&dev->dev, "%llx.%s",
+ (unsigned long long)addr, node->name);
return;
}
}
* counter (and pray...)
*/
magic = atomic_add_return(1, &bus_no_reg_magic);
- snprintf(name, BUS_ID_SIZE, "%s.%d", node->name, magic - 1);
+ dev_set_name(&dev->dev, "%s.%d", node->name, magic - 1);
}
EXPORT_SYMBOL(of_device_make_bus_id);
dev->dev.archdata.of_node = np;
if (bus_id)
- strlcpy(dev->dev.bus_id, bus_id, BUS_ID_SIZE);
+ dev_set_name(&dev->dev, bus_id);
else
of_device_make_bus_id(dev);
{
}
-/* FIXME - here will be a proposed change -> remove nr parameter */
-int copy_thread(int nr, unsigned long clone_flags, unsigned long usp,
+int copy_thread(unsigned long clone_flags, unsigned long usp,
unsigned long unused,
struct task_struct *p, struct pt_regs *regs)
{
#include <asm/system.h>
#include <asm/mmu.h>
#include <asm/pgtable.h>
-#include <linux/pci.h>
#include <asm/sections.h>
#include <asm/pci-bridge.h>
#include <linux/signal.h>
#include <linux/errno.h>
-#include <linux/ptrace.h>
#include <asm/processor.h>
#include <linux/uaccess.h>
#include <asm/asm-offsets.h>
#include <linux/uaccess.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
-#include <linux/signal.h>
#include <linux/syscalls.h>
#include <asm/cacheflush.h>
#include <asm/syscalls.h>
#include <linux/module.h>
#include <linux/err.h>
#include <linux/fs.h>
-#include <linux/ipc.h>
#include <linux/semaphore.h>
-#include <linux/syscalls.h>
#include <linux/uaccess.h>
#include <linux/unistd.h>
#ifndef _ASM_BUG_H
#define _ASM_BUG_H
+#ifdef CONFIG_BUG
+
/*
* Tell the user there is some problem.
*/
-#define _debug_bug_trap() \
+#define BUG() \
do { \
asm volatile( \
" syscall 15 \n" \
: \
: "i"(__FILE__), "i"(__LINE__) \
); \
-} while (0)
-
-#define BUG() _debug_bug_trap()
+} while (1)
#define HAVE_ARCH_BUG
+#endif /* CONFIG_BUG */
+
#include <asm-generic/bug.h>
#endif /* _ASM_BUG_H */
#define __NR_dup3 331
#define __NR_pipe2 332
#define __NR_inotify_init1 333
+#define __NR_preadv 334
+#define __NR_pwritev 335
#ifdef __KERNEL__
.long sys_dup3
.long sys_pipe2
.long sys_inotify_init1
+ .long sys_preadv
+ .long sys_pwritev /* 335 */
nr_syscalls=(.-sys_call_table)/4
data_resource.start = virt_to_bus(&_etext);
data_resource.end = virt_to_bus(&_edata)-1;
-#define PFN_UP(x) (((x) + PAGE_SIZE-1) >> PAGE_SHIFT)
-#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
-#define PFN_PHYS(x) ((x) << PAGE_SHIFT)
-
start_pfn = (CONFIG_KERNEL_RAM_BASE_ADDRESS >> PAGE_SHIFT);
kstart_pfn = PFN_UP(__pa(&_text));
free_pfn = PFN_UP(__pa(&_end));
config PPC_256K_PAGES
bool "256k page size" if 44x
- depends on !STDBINUTILS && (!SHMEM || BROKEN)
+ depends on !STDBINUTILS
help
Make the page size 256k.
interrupt-parent = <&mpic>;
dfsrr;
- dtt@50 {
+ dtt@48 {
compatible = "national,lm75";
- reg = <0x50>;
+ reg = <0x48>;
};
rtc@68 {
interrupt-parent = <&mpic>;
dfsrr;
- dtt@50 {
+ dtt@48 {
compatible = "national,lm75";
- reg = <0x50>;
+ reg = <0x48>;
};
rtc@68 {
interrupt-parent = <&mpic>;
dfsrr;
- dtt@50 {
+ dtt@48 {
compatible = "national,lm75";
- reg = <0x50>;
+ reg = <0x48>;
};
rtc@68 {
interrupts = <31 2 32 2 33 2>;
interrupt-parent = <&mpic>;
tbi-handle = <&tbi2>;
- phy-handle = <&phy3>;
+ phy-handle = <&phy4>;
mdio@520 {
#address-cells = <1>;
interrupts = <37 2 38 2 39 2>;
interrupt-parent = <&mpic>;
tbi-handle = <&tbi3>;
- phy-handle = <&phy4>;
+ phy-handle = <&phy5>;
mdio@520 {
#address-cells = <1>;
interrupt-parent = <&mpic>;
dfsrr;
- dtt@50 {
+ dtt@48 {
compatible = "national,lm75";
- reg = <0x50>;
+ reg = <0x48>;
};
rtc@68 {
interrupts = <31 2 32 2 33 2>;
interrupt-parent = <&mpic>;
tbi-handle = <&tbi2>;
- phy-handle = <&phy3>;
+ phy-handle = <&phy4>;
mdio@520 {
#address-cells = <1>;
interrupts = <37 2 38 2 39 2>;
interrupt-parent = <&mpic>;
tbi-handle = <&tbi3>;
- phy-handle = <&phy4>;
+ phy-handle = <&phy5>;
mdio@520 {
#address-cells = <1>;
interrupt-parent = <&mpic>;
dfsrr;
- dtt@50 {
+ dtt@48 {
compatible = "national,lm75";
- reg = <0x50>;
+ reg = <0x48>;
};
rtc@68 {
interrupt-parent = <&mpic>;
dfsrr;
- dtt@50 {
+ dtt@48 {
compatible = "national,lm75";
- reg = <0x50>;
+ reg = <0x48>;
};
rtc@68 {
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.29-rc2
-# Mon Jan 26 15:36:20 2009
+# Linux kernel version: 2.6.29-rc7
+# Mon Mar 16 09:03:28 2009
#
# CONFIG_PPC64 is not set
# CONFIG_PHYS_64BIT is not set
CONFIG_SPE=y
CONFIG_PPC_MMU_NOHASH=y
+CONFIG_PPC_BOOK3E_MMU=y
# CONFIG_PPC_MM_SLICES is not set
# CONFIG_SMP is not set
CONFIG_PPC32=y
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_TASKSTATS is not set
# CONFIG_AUDIT is not set
+
+#
+# RCU Subsystem
+#
+CONFIG_CLASSIC_RCU=y
+# CONFIG_TREE_RCU is not set
+# CONFIG_PREEMPT_RCU is not set
+# CONFIG_TREE_RCU_TRACE is not set
+# CONFIG_PREEMPT_RCU_TRACE is not set
# CONFIG_IKCONFIG is not set
CONFIG_LOG_BUF_SHIFT=14
CONFIG_GROUP_SCHED=y
# CONFIG_DEFAULT_CFQ is not set
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="anticipatory"
-CONFIG_CLASSIC_RCU=y
-# CONFIG_TREE_RCU is not set
-# CONFIG_PREEMPT_RCU is not set
-# CONFIG_TREE_RCU_TRACE is not set
-# CONFIG_PREEMPT_RCU_TRACE is not set
# CONFIG_FREEZER is not set
#
#
# Kernel options
#
-# CONFIG_HIGHMEM is not set
+CONFIG_HIGHMEM=y
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_PPC_4K_PAGES=y
# CONFIG_PPC_16K_PAGES is not set
# CONFIG_PPC_64K_PAGES is not set
+# CONFIG_PPC_256K_PAGES is not set
CONFIG_FORCE_MAX_ZONEORDER=11
CONFIG_PROC_DEVICETREE=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_PPC_INDIRECT_PCI=y
CONFIG_FSL_SOC=y
CONFIG_FSL_PCI=y
+CONFIG_FSL_LBC=y
CONFIG_PPC_PCI_CHOICE=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
# Default settings for advanced configuration options are used
#
CONFIG_LOWMEM_SIZE=0x30000000
+CONFIG_LOWMEM_CAM_NUM=3
CONFIG_PAGE_OFFSET=0xc0000000
CONFIG_KERNEL_START=0xc0000000
CONFIG_PHYSICAL_START=0x00000000
-CONFIG_PHYSICAL_ALIGN=0x10000000
+CONFIG_PHYSICAL_ALIGN=0x04000000
CONFIG_TASK_SIZE=0xc0000000
CONFIG_NET=y
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
# CONFIG_PHONET is not set
-CONFIG_WIRELESS=y
-# CONFIG_CFG80211 is not set
-CONFIG_WIRELESS_OLD_REGULATORY=y
-# CONFIG_WIRELESS_EXT is not set
-# CONFIG_LIB80211 is not set
-# CONFIG_MAC80211 is not set
+# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
# CONFIG_RFKILL is not set
# CONFIG_NET_9P is not set
# CONFIG_MTD_NAND_NANDSIM is not set
# CONFIG_MTD_NAND_PLATFORM is not set
# CONFIG_MTD_NAND_FSL_ELBC is not set
-# CONFIG_MTD_NAND_FSL_UPM is not set
+CONFIG_MTD_NAND_FSL_UPM=y
# CONFIG_MTD_ONENAND is not set
#
# LPDDR flash memory drivers
#
# CONFIG_MTD_LPDDR is not set
-# CONFIG_MTD_QINFO_PROBE is not set
#
# UBI - Unsorted block images
#
-CONFIG_MTD_UBI=m
-CONFIG_MTD_UBI_WL_THRESHOLD=4096
-CONFIG_MTD_UBI_BEB_RESERVE=1
-# CONFIG_MTD_UBI_GLUEBI is not set
-
-#
-# UBI debugging options
-#
-# CONFIG_MTD_UBI_DEBUG is not set
+# CONFIG_MTD_UBI is not set
CONFIG_OF_DEVICE=y
CONFIG_OF_I2C=y
# CONFIG_PARPORT is not set
# CONFIG_BLK_DEV_HD is not set
CONFIG_MISC_DEVICES=y
# CONFIG_PHANTOM is not set
-# CONFIG_EEPROM_93CX6 is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_C2PORT is not set
+
+#
+# EEPROM support
+#
+# CONFIG_EEPROM_AT24 is not set
+# CONFIG_EEPROM_LEGACY is not set
+# CONFIG_EEPROM_93CX6 is not set
CONFIG_HAVE_IDE=y
-CONFIG_IDE=y
-
-#
-# Please see Documentation/ide/ide.txt for help/info on IDE drives
-#
-CONFIG_IDE_TIMINGS=y
-# CONFIG_BLK_DEV_IDE_SATA is not set
-CONFIG_IDE_GD=y
-CONFIG_IDE_GD_ATA=y
-# CONFIG_IDE_GD_ATAPI is not set
-# CONFIG_BLK_DEV_IDECD is not set
-# CONFIG_BLK_DEV_IDETAPE is not set
-# CONFIG_IDE_TASK_IOCTL is not set
-CONFIG_IDE_PROC_FS=y
-
-#
-# IDE chipset support/bugfixes
-#
-# CONFIG_BLK_DEV_PLATFORM is not set
-CONFIG_BLK_DEV_IDEDMA_SFF=y
-
-#
-# PCI IDE chipsets support
-#
-CONFIG_BLK_DEV_IDEPCI=y
-CONFIG_IDEPCI_PCIBUS_ORDER=y
-# CONFIG_BLK_DEV_OFFBOARD is not set
-CONFIG_BLK_DEV_GENERIC=y
-# CONFIG_BLK_DEV_OPTI621 is not set
-CONFIG_BLK_DEV_IDEDMA_PCI=y
-# CONFIG_BLK_DEV_AEC62XX is not set
-# CONFIG_BLK_DEV_ALI15X3 is not set
-# CONFIG_BLK_DEV_AMD74XX is not set
-# CONFIG_BLK_DEV_CMD64X is not set
-# CONFIG_BLK_DEV_TRIFLEX is not set
-# CONFIG_BLK_DEV_CS5520 is not set
-# CONFIG_BLK_DEV_CS5530 is not set
-# CONFIG_BLK_DEV_HPT366 is not set
-# CONFIG_BLK_DEV_JMICRON is not set
-# CONFIG_BLK_DEV_SC1200 is not set
-# CONFIG_BLK_DEV_PIIX is not set
-# CONFIG_BLK_DEV_IT8172 is not set
-# CONFIG_BLK_DEV_IT8213 is not set
-# CONFIG_BLK_DEV_IT821X is not set
-# CONFIG_BLK_DEV_NS87415 is not set
-# CONFIG_BLK_DEV_PDC202XX_OLD is not set
-# CONFIG_BLK_DEV_PDC202XX_NEW is not set
-# CONFIG_BLK_DEV_SVWKS is not set
-# CONFIG_BLK_DEV_SIIMAGE is not set
-# CONFIG_BLK_DEV_SL82C105 is not set
-# CONFIG_BLK_DEV_SLC90E66 is not set
-# CONFIG_BLK_DEV_TRM290 is not set
-CONFIG_BLK_DEV_VIA82CXXX=y
-# CONFIG_BLK_DEV_TC86C001 is not set
-CONFIG_BLK_DEV_IDEDMA=y
+# CONFIG_IDE is not set
#
# SCSI device support
CONFIG_NETDEV_1000=y
# CONFIG_ACENIC is not set
# CONFIG_DL2K is not set
-CONFIG_E1000=y
+# CONFIG_E1000 is not set
# CONFIG_E1000E is not set
# CONFIG_IP1000 is not set
# CONFIG_IGB is not set
# CONFIG_QLA3XXX is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
+# CONFIG_ATL1C is not set
# CONFIG_JME is not set
CONFIG_NETDEV_10000=y
# CONFIG_CHELSIO_T1 is not set
# Miscellaneous I2C Chip support
#
# CONFIG_DS1682 is not set
-# CONFIG_EEPROM_AT24 is not set
-# CONFIG_EEPROM_LEGACY is not set
# CONFIG_SENSORS_PCF8574 is not set
# CONFIG_PCF8575 is not set
# CONFIG_SENSORS_PCA9539 is not set
# Special HID drivers
#
CONFIG_HID_COMPAT=y
-CONFIG_USB_SUPPORT=y
-CONFIG_USB_ARCH_HAS_HCD=y
-CONFIG_USB_ARCH_HAS_OHCI=y
-CONFIG_USB_ARCH_HAS_EHCI=y
-# CONFIG_USB is not set
-# CONFIG_USB_OTG_WHITELIST is not set
-# CONFIG_USB_OTG_BLACKLIST_HUB is not set
-
-#
-# Enable Host or Gadget support to see Inventra options
-#
-
-#
-# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may also be needed;
-#
-# CONFIG_USB_GADGET is not set
-
-#
-# OTG and related infrastructure
-#
+# CONFIG_USB_SUPPORT is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
#
# File systems
#
-CONFIG_EXT2_FS=y
-# CONFIG_EXT2_FS_XATTR is not set
-# CONFIG_EXT2_FS_XIP is not set
-CONFIG_EXT3_FS=y
-CONFIG_EXT3_FS_XATTR=y
-# CONFIG_EXT3_FS_POSIX_ACL is not set
-# CONFIG_EXT3_FS_SECURITY is not set
+# CONFIG_EXT2_FS is not set
+# CONFIG_EXT3_FS is not set
# CONFIG_EXT4_FS is not set
-CONFIG_JBD=y
-CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
# CONFIG_FS_POSIX_ACL is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
-# CONFIG_JFFS2_FS is not set
-# CONFIG_UBIFS_FS is not set
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_FS_DEBUG=0
+CONFIG_JFFS2_FS_WRITEBUFFER=y
+# CONFIG_JFFS2_FS_WBUF_VERIFY is not set
+# CONFIG_JFFS2_SUMMARY is not set
+# CONFIG_JFFS2_FS_XATTR is not set
+# CONFIG_JFFS2_COMPRESSION_OPTIONS is not set
+CONFIG_JFFS2_ZLIB=y
+# CONFIG_JFFS2_LZO is not set
+CONFIG_JFFS2_RTIME=y
+# CONFIG_JFFS2_RUBIN is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
CONFIG_CRC32=y
# CONFIG_CRC7 is not set
# CONFIG_LIBCRC32C is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
CONFIG_PLIST=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_DEBUG_KOBJECT is not set
+# CONFIG_DEBUG_HIGHMEM is not set
# CONFIG_DEBUG_BUGVERBOSE is not set
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_VM is not set
# CONFIG_LATENCYTOP is not set
CONFIG_SYSCTL_SYSCALL_CHECK=y
CONFIG_HAVE_FUNCTION_TRACER=y
+CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
PPC_LONG "1b,4b,2b,4b\n" \
".previous" \
: "=&r" (oldval), "=&r" (ret) \
- : "b" (uaddr), "i" (-EFAULT), "1" (oparg) \
+ : "b" (uaddr), "i" (-EFAULT), "r" (oparg) \
: "cr0", "memory")
static inline int futex_atomic_op_inuser (int encoded_op, int __user *uaddr)
switch (op) {
case FUTEX_OP_SET:
- __futex_atomic_op("", ret, oldval, uaddr, oparg);
+ __futex_atomic_op("mr %1,%4\n", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_ADD:
- __futex_atomic_op("add %1,%0,%1\n", ret, oldval, uaddr, oparg);
+ __futex_atomic_op("add %1,%0,%4\n", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_OR:
- __futex_atomic_op("or %1,%0,%1\n", ret, oldval, uaddr, oparg);
+ __futex_atomic_op("or %1,%0,%4\n", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_ANDN:
- __futex_atomic_op("andc %1,%0,%1\n", ret, oldval, uaddr, oparg);
+ __futex_atomic_op("andc %1,%0,%4\n", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_XOR:
- __futex_atomic_op("xor %1,%0,%1\n", ret, oldval, uaddr, oparg);
+ __futex_atomic_op("xor %1,%0,%4\n", ret, oldval, uaddr, oparg);
break;
default:
ret = -ENOSYS;
*/
#define MMU_FTR_NEED_DTLB_SW_LRU ASM_CONST(0x00200000)
+/* This indicates that the processor uses the wrong opcode for tlbilx
+ * instructions. During the ISA 2.06 development the opcode for tlbilx
+ * changed and some early implementations used the old opcode
+ */
+#define MMU_FTR_TLBILX_EARLY_OPCODE ASM_CONST(0x00400000)
+
#ifndef __ASSEMBLY__
#include <asm/cputable.h>
prop = of_get_property(np, "interrupts", NULL);
if (!prop)
continue;
- if (parport_pc_probe_port(io1, io2, prop[0], autodma, NULL) != NULL)
+ if (parport_pc_probe_port(io1, io2, prop[0], autodma, NULL, 0) != NULL)
count++;
}
return count;
#define PPC_INST_STSWI 0x7c0005aa
#define PPC_INST_STSWX 0x7c00052a
-#define PPC_INST_TLBILX 0x7c000626
+#define PPC_INST_TLBILX 0x7c000024
+#define PPC_INST_TLBILX_EARLY 0x7c000626
#define PPC_INST_WAIT 0x7c00007c
/* macros to insert fields into opcodes */
#define PPC_RFDI stringify_in_c(.long PPC_INST_RFDI)
#define PPC_RFMCI stringify_in_c(.long PPC_INST_RFMCI)
#define PPC_TLBILX(t, a, b) stringify_in_c(.long PPC_INST_TLBILX | \
- __PPC_T_TLB(t) | __PPC_RA(a) | __PPC_RB(b))
+ __PPC_T_TLB(t) | \
+ __PPC_RA(a) | __PPC_RB(b))
#define PPC_TLBILX_ALL(a, b) PPC_TLBILX(0, a, b)
#define PPC_TLBILX_PID(a, b) PPC_TLBILX(1, a, b)
#define PPC_TLBILX_VA(a, b) PPC_TLBILX(3, a, b)
+
+#define PPC_TLBILX_EARLY(t, a, b) stringify_in_c(.long PPC_INST_TLBILX_EARLY | \
+ __PPC_T_TLB(t) | \
+ __PPC_RA(a) | __PPC_RB(b))
+#define PPC_TLBILX_ALL_EARLY(a, b) PPC_TLBILX_EARLY(0, a, b)
+#define PPC_TLBILX_PID_EARLY(a, b) PPC_TLBILX_EARLY(1, a, b)
+#define PPC_TLBILX_VA_EARLY(a, b) PPC_TLBILX_EARLY(3, a, b)
#define PPC_WAIT(w) stringify_in_c(.long PPC_INST_WAIT | \
__PPC_WC(w))
.cpu_features = CPU_FTRS_E500MC,
.cpu_user_features = COMMON_USER_BOOKE | PPC_FEATURE_HAS_FPU,
.mmu_features = MMU_FTR_TYPE_FSL_E | MMU_FTR_BIG_PHYS |
- MMU_FTR_USE_TLBILX,
+ MMU_FTR_USE_TLBILX | MMU_FTR_TLBILX_EARLY_OPCODE,
.icache_bsize = 64,
.dcache_bsize = 64,
.num_pmcs = 4,
void flush_tlb_mm(struct mm_struct *mm)
{
- cpumask_t cpu_mask;
unsigned int pid;
preempt_disable();
andi. r3,r3,MMUCSR0_TLBFI@l
bne 1b
MMU_FTR_SECTION_ELSE
- PPC_TLBILX_ALL(0,0)
+ BEGIN_MMU_FTR_SECTION_NESTED(96)
+ PPC_TLBILX_ALL(0,r3)
+ MMU_FTR_SECTION_ELSE_NESTED(96)
+ PPC_TLBILX_ALL_EARLY(0,r3)
+ ALT_MMU_FTR_SECTION_END_NESTED_IFCLR(MMU_FTR_TLBILX_EARLY_OPCODE, 96)
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_USE_TLBILX)
msync
isync
wrteei 0
mfspr r4,SPRN_MAS6 /* save MAS6 */
mtspr SPRN_MAS6,r3
+ BEGIN_MMU_FTR_SECTION_NESTED(96)
PPC_TLBILX_PID(0,0)
+ MMU_FTR_SECTION_ELSE_NESTED(96)
+ PPC_TLBILX_PID_EARLY(0,0)
+ ALT_MMU_FTR_SECTION_END_NESTED_IFCLR(MMU_FTR_TLBILX_EARLY_OPCODE, 96)
mtspr SPRN_MAS6,r4 /* restore MAS6 */
wrtee r10
MMU_FTR_SECTION_ELSE
mtspr SPRN_MAS1,r4
tlbwe
MMU_FTR_SECTION_ELSE
+ BEGIN_MMU_FTR_SECTION_NESTED(96)
PPC_TLBILX_VA(0,r3)
+ MMU_FTR_SECTION_ELSE_NESTED(96)
+ PPC_TLBILX_VA_EARLY(0,r3)
+ ALT_MMU_FTR_SECTION_END_NESTED_IFCLR(MMU_FTR_TLBILX_EARLY_OPCODE, 96)
ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_USE_TLBILX)
msync
isync
#include <asm/smp.h>
#include <asm/system.h>
#include <asm/uaccess.h>
+#include <asm/firmware.h>
#include "plpar_wrappers.h"
if (!driver)
return;
+ dev->error_state = pci_channel_io_normal;
+
eeh_enable_irq(dev);
if (!driver->err_handler ||
struct vtimer_list *timer;
u64 expires;
} *args = p;
- mod_virt_timer(args->timer, args->expires);
+ mod_virt_timer_periodic(args->timer, args->expires);
}
#define APPLDATA_ADD_TIMER 0
--- /dev/null
+/*
+ * Copyright IBM Corp. 2000,2009
+ * Author(s): Hartmut Penner <hp@de.ibm.com>,
+ * Martin Schwidefsky <schwidefsky@de.ibm.com>
+ * Christian Ehrhardt <ehrhardt@de.ibm.com>
+ */
+
+#ifndef _ASM_S390_CPUID_H_
+#define _ASM_S390_CPUID_H_
+
+/*
+ * CPU type and hardware bug flags. Kept separately for each CPU.
+ * Members of this structure are referenced in head.S, so think twice
+ * before touching them. [mj]
+ */
+
+typedef struct
+{
+ unsigned int version : 8;
+ unsigned int ident : 24;
+ unsigned int machine : 16;
+ unsigned int unused : 16;
+} __attribute__ ((packed)) cpuid_t;
+
+#endif /* _ASM_S390_CPUID_H_ */
#define ASM_KVM_HOST_H
#include <linux/kvm_host.h>
#include <asm/debug.h>
+#include <asm/cpuid.h>
#define KVM_MAX_VCPUS 64
#define KVM_MEMORY_SLOTS 32
#define __LC_USER_EXEC_ASCE 0x02ac
#define __LC_CPUID 0x02b0
#define __LC_INT_CLOCK 0x02c8
+#define __LC_MACHINE_FLAGS 0x02d8
#define __LC_IRB 0x0300
#define __LC_PFAULT_INTPARM 0x0080
#define __LC_CPU_TIMER_SAVE_AREA 0x00d8
#define __LC_CPUID 0x0320
#define __LC_INT_CLOCK 0x0340
#define __LC_VDSO_PER_CPU 0x0350
+#define __LC_MACHINE_FLAGS 0x0358
#define __LC_IRB 0x0380
#define __LC_PASTE 0x03c0
#define __LC_PFAULT_INTPARM 0x11b8
#ifndef __ASSEMBLY__
-#include <asm/processor.h>
+#include <asm/cpuid.h>
+#include <asm/ptrace.h>
#include <linux/types.h>
-#include <asm/sigp.h>
void restart_int_handler(void);
void ext_int_handler(void);
__u32 ext_call_fast; /* 0x02c4 */
__u64 int_clock; /* 0x02c8 */
__u64 clock_comparator; /* 0x02d0 */
- __u8 pad_0x02d8[0x0300-0x02d8]; /* 0x02d8 */
+ __u32 machine_flags; /* 0x02d8 */
+ __u8 pad_0x02dc[0x0300-0x02dc]; /* 0x02dc */
/* Interrupt response block */
__u8 irb[64]; /* 0x0300 */
__u64 int_clock; /* 0x0340 */
__u64 clock_comparator; /* 0x0348 */
__u64 vdso_per_cpu_data; /* 0x0350 */
- __u8 pad_0x0358[0x0380-0x0358]; /* 0x0358 */
+ __u64 machine_flags; /* 0x0358 */
+ __u8 pad_0x0360[0x0380-0x0360]; /* 0x0360 */
/* Interrupt response block. */
__u8 irb[64]; /* 0x0380 */
#define __ASM_S390_PROCESSOR_H
#include <linux/linkage.h>
+#include <asm/cpuid.h>
+#include <asm/page.h>
#include <asm/ptrace.h>
+#include <asm/setup.h>
#ifdef __KERNEL__
/*
*/
#define current_text_addr() ({ void *pc; asm("basr %0,0" : "=a" (pc)); pc; })
-/*
- * CPU type and hardware bug flags. Kept separately for each CPU.
- * Members of this structure are referenced in head.S, so think twice
- * before touching them. [mj]
- */
-
-typedef struct
-{
- unsigned int version : 8;
- unsigned int ident : 24;
- unsigned int machine : 16;
- unsigned int unused : 16;
-} __attribute__ ((packed)) cpuid_t;
-
static inline void get_cpu_id(cpuid_t *ptr)
{
asm volatile("stidp 0(%1)" : "=m" (*ptr) : "a" (ptr));
#ifdef __KERNEL__
-#include <asm/setup.h>
-#include <asm/page.h>
/*
* The pt_regs struct defines the way the registers are stored on
#ifdef __KERNEL__
+#include <asm/lowcore.h>
#include <asm/types.h>
#define PARMAREA 0x10400
/*
* Machine features detected in head.S
*/
-extern unsigned long machine_flags;
#define MACHINE_FLAG_VM (1UL << 0)
#define MACHINE_FLAG_IEEE (1UL << 1)
#define MACHINE_FLAG_HPAGE (1UL << 10)
#define MACHINE_FLAG_PFMF (1UL << 11)
-#define MACHINE_IS_VM (machine_flags & MACHINE_FLAG_VM)
-#define MACHINE_IS_KVM (machine_flags & MACHINE_FLAG_KVM)
-#define MACHINE_HAS_DIAG9C (machine_flags & MACHINE_FLAG_DIAG9C)
+#define MACHINE_IS_VM (S390_lowcore.machine_flags & MACHINE_FLAG_VM)
+#define MACHINE_IS_KVM (S390_lowcore.machine_flags & MACHINE_FLAG_KVM)
+#define MACHINE_HAS_DIAG9C (S390_lowcore.machine_flags & MACHINE_FLAG_DIAG9C)
#ifndef __s390x__
-#define MACHINE_HAS_IEEE (machine_flags & MACHINE_FLAG_IEEE)
-#define MACHINE_HAS_CSP (machine_flags & MACHINE_FLAG_CSP)
+#define MACHINE_HAS_IEEE (S390_lowcore.machine_flags & MACHINE_FLAG_IEEE)
+#define MACHINE_HAS_CSP (S390_lowcore.machine_flags & MACHINE_FLAG_CSP)
#define MACHINE_HAS_IDTE (0)
#define MACHINE_HAS_DIAG44 (1)
-#define MACHINE_HAS_MVPG (machine_flags & MACHINE_FLAG_MVPG)
+#define MACHINE_HAS_MVPG (S390_lowcore.machine_flags & MACHINE_FLAG_MVPG)
#define MACHINE_HAS_MVCOS (0)
#define MACHINE_HAS_HPAGE (0)
#define MACHINE_HAS_PFMF (0)
#else /* __s390x__ */
#define MACHINE_HAS_IEEE (1)
#define MACHINE_HAS_CSP (1)
-#define MACHINE_HAS_IDTE (machine_flags & MACHINE_FLAG_IDTE)
-#define MACHINE_HAS_DIAG44 (machine_flags & MACHINE_FLAG_DIAG44)
+#define MACHINE_HAS_IDTE (S390_lowcore.machine_flags & MACHINE_FLAG_IDTE)
+#define MACHINE_HAS_DIAG44 (S390_lowcore.machine_flags & MACHINE_FLAG_DIAG44)
#define MACHINE_HAS_MVPG (1)
-#define MACHINE_HAS_MVCOS (machine_flags & MACHINE_FLAG_MVCOS)
-#define MACHINE_HAS_HPAGE (machine_flags & MACHINE_FLAG_HPAGE)
-#define MACHINE_HAS_PFMF (machine_flags & MACHINE_FLAG_PFMF)
+#define MACHINE_HAS_MVCOS (S390_lowcore.machine_flags & MACHINE_FLAG_MVCOS)
+#define MACHINE_HAS_HPAGE (S390_lowcore.machine_flags & MACHINE_FLAG_HPAGE)
+#define MACHINE_HAS_PFMF (S390_lowcore.machine_flags & MACHINE_FLAG_PFMF)
#endif /* __s390x__ */
#define ZFCPDUMP_HSA_SIZE (32UL<<20)
#define ASYNC_SIZE (PAGE_SIZE << ASYNC_ORDER)
#ifndef __ASSEMBLY__
-#include <asm/processor.h>
#include <asm/lowcore.h>
+#include <asm/page.h>
+#include <asm/processor.h>
/*
* low level task data that entry.S needs immediate access to
extern void add_virt_timer(void *new);
extern void add_virt_timer_periodic(void *new);
extern int mod_virt_timer(struct vtimer_list *timer, __u64 expires);
+extern int mod_virt_timer_periodic(struct vtimer_list *timer, __u64 expires);
extern int del_virt_timer(struct vtimer_list *timer);
extern void init_cpu_vtimer(void);
#ifndef _ASM_S390_TIMEX_H
#define _ASM_S390_TIMEX_H
+/* The value of the TOD clock for 1.1.1970. */
+#define TOD_UNIX_EPOCH 0x7d91048bca000000ULL
+
/* Inline functions for clock register access. */
static inline int set_clock(__u64 time)
{
void init_cpu_timer(void);
unsigned long long monotonic_clock(void);
+extern u64 sched_clock_base_cc;
+
#endif
#define __NR_pipe2 325
#define __NR_dup3 326
#define __NR_epoll_create1 327
-#define NR_syscalls 328
+#define __NR_preadv 328
+#define __NR_pwritev 329
+#define NR_syscalls 330
/*
* There are some system calls that are not present on 64 bit, some
DEFINE(__TI_flags, offsetof(struct thread_info, flags));
DEFINE(__TI_cpu, offsetof(struct thread_info, cpu));
DEFINE(__TI_precount, offsetof(struct thread_info, preempt_count));
+ DEFINE(__TI_user_timer, offsetof(struct thread_info, user_timer));
+ DEFINE(__TI_system_timer, offsetof(struct thread_info, system_timer));
BLANK();
DEFINE(__PT_ARGS, offsetof(struct pt_regs, args));
DEFINE(__PT_PSW, offsetof(struct pt_regs, psw));
llgfr %r5,%r5 # u32
llgfr %r6,%r6 # u32
jg compat_sys_keyctl # branch to system call
+
+ .globl compat_sys_preadv_wrapper
+compat_sys_preadv_wrapper:
+ llgfr %r2,%r2 # unsigned long
+ llgtr %r3,%r3 # compat_iovec *
+ llgfr %r4,%r4 # unsigned long
+ llgfr %r5,%r5 # u32
+ llgfr %r6,%r6 # u32
+ jg compat_sys_preadv # branch to system call
+
+ .globl compat_sys_pwritev_wrapper
+compat_sys_pwritev_wrapper:
+ llgfr %r2,%r2 # unsigned long
+ llgtr %r3,%r3 # compat_iovec *
+ llgfr %r4,%r4 # unsigned long
+ llgfr %r5,%r5 # u32
+ llgfr %r6,%r6 # u32
+ jg compat_sys_pwritev # branch to system call
char kernel_nss_name[NSS_NAME_SIZE + 1];
+static unsigned long machine_flags;
+
static void __init setup_boot_command_line(void);
+/*
+ * Get the TOD clock running.
+ */
+static void __init reset_tod_clock(void)
+{
+ u64 time;
+
+ if (store_clock(&time) == 0)
+ return;
+ /* TOD clock not running. Set the clock to Unix Epoch. */
+ if (set_clock(TOD_UNIX_EPOCH) != 0 || store_clock(&time) != 0)
+ disabled_wait(0);
+
+ sched_clock_base_cc = TOD_UNIX_EPOCH;
+}
#ifdef CONFIG_SHARED_KERNEL
int __init savesys_ipl_nss(char *cmd, const int cmdlen);
*/
void __init startup_init(void)
{
+ reset_tod_clock();
ipl_save_parameters();
rescue_initrd();
clear_bss_section();
setup_hpage();
sclp_facilities_detect();
detect_memory_layout(memory_chunk);
+ S390_lowcore.machine_flags = machine_flags;
lockdep_on();
}
__CPUINIT
.globl restart_int_handler
restart_int_handler:
+ basr %r1,0
+restart_base:
+ spt restart_vtime-restart_base(%r1)
+ stck __LC_LAST_UPDATE_CLOCK
+ mvc __LC_LAST_UPDATE_TIMER(8),restart_vtime-restart_base(%r1)
+ mvc __LC_EXIT_TIMER(8),restart_vtime-restart_base(%r1)
l %r15,__LC_SAVE_AREA+60 # load ksp
lctl %c0,%c15,__LC_CREGS_SAVE_AREA # get new ctl regs
lam %a0,%a15,__LC_AREGS_SAVE_AREA
lm %r6,%r15,__SF_GPRS(%r15) # load registers from clone
+ l %r1,__LC_THREAD_INFO
+ mvc __LC_USER_TIMER(8),__TI_user_timer(%r1)
+ mvc __LC_SYSTEM_TIMER(8),__TI_system_timer(%r1)
+ xc __LC_STEAL_TIMER(8),__LC_STEAL_TIMER
stosm __SF_EMPTY(%r15),0x04 # now we can turn dat on
basr %r14,0
l %r14,restart_addr-.(%r14)
br %r14 # branch to start_secondary
restart_addr:
.long start_secondary
+ .align 8
+restart_vtime:
+ .long 0x7fffffff,0xffffffff
.previous
#else
/*
__CPUINIT
.globl restart_int_handler
restart_int_handler:
+ basr %r1,0
+restart_base:
+ spt restart_vtime-restart_base(%r1)
+ stck __LC_LAST_UPDATE_CLOCK
+ mvc __LC_LAST_UPDATE_TIMER(8),restart_vtime-restart_base(%r1)
+ mvc __LC_EXIT_TIMER(8),restart_vtime-restart_base(%r1)
lg %r15,__LC_SAVE_AREA+120 # load ksp
lghi %r10,__LC_CREGS_SAVE_AREA
lctlg %c0,%c15,0(%r10) # get new ctl regs
lghi %r10,__LC_AREGS_SAVE_AREA
lam %a0,%a15,0(%r10)
lmg %r6,%r15,__SF_GPRS(%r15) # load registers from clone
+ lg %r1,__LC_THREAD_INFO
+ mvc __LC_USER_TIMER(8),__TI_user_timer(%r1)
+ mvc __LC_SYSTEM_TIMER(8),__TI_system_timer(%r1)
+ xc __LC_STEAL_TIMER(8),__LC_STEAL_TIMER
stosm __SF_EMPTY(%r15),0x04 # now we can turn dat on
jg start_secondary
+ .align 8
+restart_vtime:
+ .long 0x7fffffff,0xffffffff
.previous
#else
/*
.LPG0:
xc 0x200(256),0x200 # partially clear lowcore
xc 0x300(256),0x300
-
+ l %r1,5f-.LPG0(%r13)
+ stck 0(%r1)
+ spt 6f-.LPG0(%r13)
+ mvc __LC_LAST_UPDATE_CLOCK(8),0(%r1)
+ mvc __LC_LAST_UPDATE_TIMER(8),6f-.LPG0(%r13)
+ mvc __LC_EXIT_TIMER(8),5f-.LPG0(%r13)
#ifndef CONFIG_MARCH_G5
# check processor version against MARCH_{G5,Z900,Z990,Z9_109,Z10}
stidp __LC_CPUID # store cpuid
brct %r0,0b
#endif
- l %r13,0f-.LPG0(%r13)
+ l %r13,4f-.LPG0(%r13)
b 0(%r13)
-0: .long startup_continue
+ .align 4
+4: .long startup_continue
+5: .long sched_clock_base_cc
+ .align 8
+6: .long 0x7fffffff,0xffffffff
#
# params at 10400 (setup.h)
#include <linux/init.h>
#include <linux/errno.h>
+#include <linux/hardirq.h>
#include <linux/time.h>
#include <linux/module.h>
#include <asm/lowcore.h>
struct mci *mci;
int umode;
- lockdep_off();
+ nmi_enter();
s390_idle_check();
mci = (struct mci *) &S390_lowcore.mcck_interruption_code;
mcck->warning = 1;
set_thread_flag(TIF_MCCK_PENDING);
}
- lockdep_on();
+ nmi_exit();
}
static int __init machine_check_init(void)
unsigned int console_irq = -1;
EXPORT_SYMBOL(console_irq);
-unsigned long machine_flags;
-EXPORT_SYMBOL(machine_flags);
-
unsigned long elf_hwcap = 0;
char elf_platform[ELF_PLATFORM_SIZE];
__alloc_bootmem(PAGE_SIZE, PAGE_SIZE, 0) + PAGE_SIZE;
lc->current_task = (unsigned long) init_thread_union.thread_info.task;
lc->thread_info = (unsigned long) &init_thread_union;
+ lc->machine_flags = S390_lowcore.machine_flags;
#ifndef CONFIG_64BIT
if (MACHINE_HAS_IEEE) {
lc->extended_save_area_addr = (__u32)
#else
lc->vdso_per_cpu_data = (unsigned long) &lc->paste[0];
#endif
+ lc->sync_enter_timer = S390_lowcore.sync_enter_timer;
+ lc->async_enter_timer = S390_lowcore.async_enter_timer;
+ lc->exit_timer = S390_lowcore.exit_timer;
+ lc->user_timer = S390_lowcore.user_timer;
+ lc->system_timer = S390_lowcore.system_timer;
+ lc->steal_timer = S390_lowcore.steal_timer;
+ lc->last_update_timer = S390_lowcore.last_update_timer;
+ lc->last_update_clock = S390_lowcore.last_update_clock;
set_prefix((u32)(unsigned long) lc);
lowcore_ptr[0] = lc;
}
cpu_lowcore->current_task = (unsigned long) idle;
cpu_lowcore->cpu_nr = cpu;
cpu_lowcore->kernel_asce = S390_lowcore.kernel_asce;
+ cpu_lowcore->machine_flags = S390_lowcore.machine_flags;
eieio();
while (signal_processor(cpu, sigp_restart) == sigp_busy)
int pcpus, cpu;
pcpus = simple_strtoul(s, NULL, 0);
- for (cpu = 0; cpu < pcpus && cpu < nr_cpu_ids; cpu++)
+ init_cpu_possible(cpumask_of(0));
+ for (cpu = 1; cpu < pcpus && cpu < nr_cpu_ids; cpu++)
set_cpu_possible(cpu, true);
return 0;
}
SYSCALL(sys_pipe2,sys_pipe2,sys_pipe2_wrapper) /* 325 */
SYSCALL(sys_dup3,sys_dup3,sys_dup3_wrapper)
SYSCALL(sys_epoll_create1,sys_epoll_create1,sys_epoll_create1_wrapper)
+SYSCALL(sys_preadv,sys_preadv,compat_sys_preadv_wrapper)
+SYSCALL(sys_pwritev,sys_pwritev,compat_sys_pwritev_wrapper)
#define USECS_PER_JIFFY ((unsigned long) 1000000/HZ)
#define CLK_TICKS_PER_JIFFY ((unsigned long) USECS_PER_JIFFY << 12)
-/* The value of the TOD clock for 1.1.1970. */
-#define TOD_UNIX_EPOCH 0x7d91048bca000000ULL
-
/*
* Create a small time difference between the timer interrupts
* on the different cpus to avoid lock contention.
#define TICK_SIZE tick
+u64 sched_clock_base_cc = -1; /* Force to data section. */
+
static ext_int_info_t ext_int_info_cc;
static ext_int_info_t ext_int_etr_cc;
-static u64 sched_clock_base_cc;
static DEFINE_PER_CPU(struct clock_event_device, comparators);
static void etr_reset(void);
static void stp_reset(void);
-/*
- * Get the TOD clock running.
- */
-static u64 __init reset_tod_clock(void)
+unsigned long read_persistent_clock(void)
{
- u64 time;
-
- etr_reset();
- stp_reset();
- if (store_clock(&time) == 0)
- return time;
- /* TOD clock not running. Set the clock to Unix Epoch. */
- if (set_clock(TOD_UNIX_EPOCH) != 0 || store_clock(&time) != 0)
- panic("TOD clock not operational.");
+ struct timespec ts;
- return TOD_UNIX_EPOCH;
+ tod_to_timeval(get_clock() - TOD_UNIX_EPOCH, &ts);
+ return ts.tv_sec;
}
static cycle_t read_tod_clock(void)
*/
void __init time_init(void)
{
- sched_clock_base_cc = reset_tod_clock();
+ struct timespec ts;
+ unsigned long flags;
+ cycle_t now;
- /* set xtime */
- tod_to_timeval(sched_clock_base_cc - TOD_UNIX_EPOCH, &xtime);
- set_normalized_timespec(&wall_to_monotonic,
- -xtime.tv_sec, -xtime.tv_nsec);
+ /* Reset time synchronization interfaces. */
+ etr_reset();
+ stp_reset();
/* request the clock comparator external interrupt */
if (register_early_external_interrupt(0x1004,
&ext_int_info_cc) != 0)
panic("Couldn't request external interrupt 0x1004");
- if (clocksource_register(&clocksource_tod) != 0)
- panic("Could not register TOD clock source");
-
/* request the timing alert external interrupt */
if (register_early_external_interrupt(0x1406,
timing_alert_interrupt,
&ext_int_etr_cc) != 0)
panic("Couldn't request external interrupt 0x1406");
+ if (clocksource_register(&clocksource_tod) != 0)
+ panic("Could not register TOD clock source");
+
+ /*
+ * The TOD clock is an accurate clock. The xtime should be
+ * initialized in a way that the difference between TOD and
+ * xtime is reasonably small. Too bad that timekeeping_init
+ * sets xtime.tv_nsec to zero. In addition the clock source
+ * change from the jiffies clock source to the TOD clock
+ * source add another error of up to 1/HZ second. The same
+ * function sets wall_to_monotonic to a value that is too
+ * small for /proc/uptime to be accurate.
+ * Reset xtime and wall_to_monotonic to sane values.
+ */
+ write_seqlock_irqsave(&xtime_lock, flags);
+ now = get_clock();
+ tod_to_timeval(now - TOD_UNIX_EPOCH, &xtime);
+ clocksource_tod.cycle_last = now;
+ clocksource_tod.raw_time = xtime;
+ tod_to_timeval(sched_clock_base_cc - TOD_UNIX_EPOCH, &ts);
+ set_normalized_timespec(&wall_to_monotonic, -ts.tv_sec, -ts.tv_nsec);
+ write_sequnlock_irqrestore(&xtime_lock, flags);
+
/* Enable TOD clock interrupts on the boot cpu. */
init_cpu_timer();
+
/* Enable cpu timer interrupts on the boot cpu. */
vtime_init();
}
static void stp_work_fn(struct work_struct *work);
static DEFINE_MUTEX(stp_work_mutex);
static DECLARE_WORK(stp_work, stp_work_fn);
+static struct timer_list stp_timer;
static int __init early_parse_stp(char *p)
{
}
}
+static void stp_timeout(unsigned long dummy)
+{
+ queue_work(time_sync_wq, &stp_work);
+}
+
static int __init stp_init(void)
{
if (!test_bit(CLOCK_SYNC_HAS_STP, &clock_sync_flags))
return 0;
+ setup_timer(&stp_timer, stp_timeout, 0UL);
time_init_wq();
if (!stp_online)
return 0;
if (!stp_online) {
chsc_sstpc(stp_page, STP_OP_CTRL, 0x0000);
+ del_timer_sync(&stp_timer);
goto out_unlock;
}
stop_machine(stp_sync_clock, &stp_sync, &cpu_online_map);
put_online_cpus();
+ if (!check_sync_clock())
+ /*
+ * There is a usable clock but the synchronization failed.
+ * Retry after a second.
+ */
+ mod_timer(&stp_timer, jiffies + HZ);
+
out_unlock:
mutex_unlock(&stp_work_mutex);
}
/* Account time spent with enabled wait psw loaded as idle time. */
idle_time = S390_lowcore.int_clock - idle->idle_enter;
account_idle_time(idle_time);
+ S390_lowcore.steal_timer +=
+ idle->idle_enter - S390_lowcore.last_update_clock;
S390_lowcore.last_update_clock = S390_lowcore.int_clock;
/* Account system time spent going idle. */
}
EXPORT_SYMBOL(add_virt_timer_periodic);
-/*
- * If we change a pending timer the function must be called on the CPU
- * where the timer is running on, e.g. by smp_call_function_single()
- *
- * The original mod_timer adds the timer if it is not pending. For
- * compatibility we do the same. The timer will be added on the current
- * CPU as a oneshot timer.
- *
- * returns whether it has modified a pending timer (1) or not (0)
- */
-int mod_virt_timer(struct vtimer_list *timer, __u64 expires)
+int __mod_vtimer(struct vtimer_list *timer, __u64 expires, int periodic)
{
struct vtimer_queue *vq;
unsigned long flags;
BUG_ON(!timer->function);
BUG_ON(!expires || expires > VTIMER_MAX_SLICE);
- /*
- * This is a common optimization triggered by the
- * networking code - if the timer is re-modified
- * to be the same thing then just return:
- */
if (timer->expires == expires && vtimer_pending(timer))
return 1;
cpu = get_cpu();
vq = &per_cpu(virt_cpu_timer, cpu);
- /* check if we run on the right CPU */
- BUG_ON(timer->cpu != cpu);
-
/* disable interrupts before test if timer is pending */
spin_lock_irqsave(&vq->lock, flags);
/* if timer isn't pending add it on the current CPU */
if (!vtimer_pending(timer)) {
spin_unlock_irqrestore(&vq->lock, flags);
- /* we do not activate an interval timer with mod_virt_timer */
- timer->interval = 0;
+
+ if (periodic)
+ timer->interval = expires;
+ else
+ timer->interval = 0;
timer->expires = expires;
timer->cpu = cpu;
internal_add_vtimer(timer);
return 0;
}
+ /* check if we run on the right CPU */
+ BUG_ON(timer->cpu != cpu);
+
list_del_init(&timer->entry);
timer->expires = expires;
-
- /* also change the interval if we have an interval timer */
- if (timer->interval)
+ if (periodic)
timer->interval = expires;
/* the timer can't expire anymore so we can release the lock */
internal_add_vtimer(timer);
return 1;
}
+
+/*
+ * If we change a pending timer the function must be called on the CPU
+ * where the timer is running on.
+ *
+ * returns whether it has modified a pending timer (1) or not (0)
+ */
+int mod_virt_timer(struct vtimer_list *timer, __u64 expires)
+{
+ return __mod_vtimer(timer, expires, 0);
+}
EXPORT_SYMBOL(mod_virt_timer);
/*
+ * If we change a pending timer the function must be called on the CPU
+ * where the timer is running on.
+ *
+ * returns whether it has modified a pending timer (1) or not (0)
+ */
+int mod_virt_timer_periodic(struct vtimer_list *timer, __u64 expires)
+{
+ return __mod_vtimer(timer, expires, 1);
+}
+EXPORT_SYMBOL(mod_virt_timer_periodic);
+
+/*
* delete a virtual timer
*
* returns whether the deleted timer was pending (1) or not (0)
*/
void init_cpu_vtimer(void)
{
- struct thread_info *ti = current_thread_info();
struct vtimer_queue *vq;
- S390_lowcore.user_timer = ti->user_timer;
- S390_lowcore.system_timer = ti->system_timer;
-
- /* kick the virtual timer */
- asm volatile ("STCK %0" : "=m" (S390_lowcore.last_update_clock));
- asm volatile ("STPT %0" : "=m" (S390_lowcore.last_update_timer));
-
/* initialize per cpu vtimer structure */
vq = &__get_cpu_var(virt_cpu_timer);
INIT_LIST_HEAD(&vq->list);
select HAVE_GENERIC_DMA_COHERENT
select HAVE_IOREMAP_PROT if MMU
select HAVE_ARCH_TRACEHOOK
+ select HAVE_DMA_API_DEBUG
help
The SuperH is a RISC processor targeted for use in embedded systems
and consumer electronics; it was also used in the Sega Dreamcast
<http://www.linux-sh.org/>.
config SUPERH32
- def_bool !SUPERH64
+ def_bool ARCH = "sh"
select HAVE_KPROBES
select HAVE_KRETPROBES
select HAVE_FUNCTION_TRACER
select ARCH_HIBERNATION_POSSIBLE if MMU
config SUPERH64
- def_bool y if CPU_SH5
+ def_bool ARCH = "sh64"
config ARCH_DEFCONFIG
string
bool
select ARCH_SUSPEND_POSSIBLE
+if SUPERH32
+
choice
prompt "Processor sub-type selection"
select SYS_SUPPORTS_NUMA
select SYS_SUPPORTS_CMT
+endchoice
+
+endif
+
+if SUPERH64
+
+choice
+ prompt "Processor sub-type selection"
+
# SH-5 Processor Support
config CPU_SUBTYPE_SH5_101
endchoice
+endif
+
source "arch/sh/mm/Kconfig"
source "arch/sh/Kconfig.cpu"
static struct ov772x_camera_info ov7725_info = {
.buswidth = SOCAM_DATAWIDTH_8,
.flags = OV772X_FLAG_VFLIP | OV772X_FLAG_HFLIP,
+ .edgectrl = OV772X_AUTO_EDGECTRL(0xf, 0),
.link = {
.power = ov7725_power,
},
* Renesas Technology Corp. SH7786 Urquell Support.
*
* Copyright (C) 2008 Kuninori Morimoto <morimoto.kuninori@renesas.com>
+ *
+ * Based on board-sh7785lcr.c
* Copyright (C) 2008 Yoshihiro Shimoda
*
* This file is subject to the terms and conditions of the GNU General Public
#include <asm/heartbeat.h>
#include <asm/sizes.h>
+/*
+ * bit 1234 5678
+ *----------------------------
+ * SW1 0101 0010 -> Pck 33MHz version
+ * (1101 0010) Pck 66MHz version
+ * SW2 0x1x xxxx -> little endian
+ * 29bit mode
+ * SW47 0001 1000 -> CS0 : on-board flash
+ * CS1 : SRAM, registers, LAN, PCMCIA
+ * 38400 bps for SCIF1
+ *
+ * Address
+ * 0x00000000 - 0x04000000 (CS0) Nor Flash
+ * 0x04000000 - 0x04200000 (CS1) SRAM
+ * 0x05000000 - 0x05800000 (CS1) on board register
+ * 0x05800000 - 0x06000000 (CS1) LAN91C111
+ * 0x06000000 - 0x06400000 (CS1) PCMCIA
+ * 0x08000000 - 0x10000000 (CS2-CS3) DDR3
+ * 0x10000000 - 0x14000000 (CS4) PCIe
+ * 0x14000000 - 0x14800000 (CS5) Core0 LRAM/URAM
+ * 0x14800000 - 0x15000000 (CS5) Core1 LRAM/URAM
+ * 0x18000000 - 0x1C000000 (CS6) ATA/NAND-Flash
+ * 0x1C000000 - (CS7) SH7786 Control register
+ */
+
+/* HeartBeat */
static struct resource heartbeat_resources[] = {
[0] = {
.start = BOARDREG(SLEDR),
.resource = heartbeat_resources,
};
+/* LAN91C111 */
static struct smc91x_platdata smc91x_info = {
.flags = SMC91X_USE_16BIT | SMC91X_NOWAIT,
};
},
};
+/* Nor Flash */
static struct mtd_partition nor_flash_partitions[] = {
{
.name = "loader",
static struct sh4_pci_address_map sh7785_pci_map = {
.window0 = {
+#if defined(CONFIG_32BIT)
+ .base = SH7780_32BIT_DDR_BASE_ADDR,
+ .size = 0x40000000,
+#else
.base = SH7780_CS0_BASE_ADDR,
.size = 0x20000000,
+#endif
},
.flags = SH4_PCIC_NO_RESET,
#define SH7780_CS5_BASE_ADDR (SH7780_CS4_BASE_ADDR + SH7780_MEM_REGION_SIZE)
#define SH7780_CS6_BASE_ADDR (SH7780_CS5_BASE_ADDR + SH7780_MEM_REGION_SIZE)
+#define SH7780_32BIT_DDR_BASE_ADDR 0x40000000
+
struct sh4_pci_address_map;
/* arch/sh/drivers/pci/pci-sh7780.c */
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/init.h>
+#include <linux/dma-debug.h>
#include <asm/io.h>
static int __init pcibios_init(void)
pci_fixup_irqs(pci_common_swizzle, pcibios_map_platform_irq);
+ dma_debug_add_bus(&pci_bus_type);
+
return 0;
}
subsys_initcall(pcibios_init);
#include <linux/mm.h>
#include <linux/scatterlist.h>
+#include <linux/dma-debug.h>
#include <asm/cacheflush.h>
#include <asm/io.h>
#include <asm-generic/dma-coherent.h>
void *ptr, size_t size,
enum dma_data_direction dir)
{
+ dma_addr_t addr = virt_to_phys(ptr);
+
#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
if (dev->bus == &pci_bus_type)
- return virt_to_phys(ptr);
+ return addr;
#endif
dma_cache_sync(dev, ptr, size, dir);
- return virt_to_phys(ptr);
+ debug_dma_map_page(dev, virt_to_page(ptr),
+ (unsigned long)ptr & ~PAGE_MASK, size,
+ dir, addr, true);
+
+ return addr;
}
-#define dma_unmap_single(dev, addr, size, dir) do { } while (0)
+static inline void dma_unmap_single(struct device *dev, dma_addr_t addr,
+ size_t size, enum dma_data_direction dir)
+{
+ debug_dma_unmap_page(dev, addr, size, dir, true);
+}
static inline int dma_map_sg(struct device *dev, struct scatterlist *sg,
int nents, enum dma_data_direction dir)
dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir);
#endif
sg[i].dma_address = sg_phys(&sg[i]);
+ sg[i].dma_length = sg[i].length;
}
+ debug_dma_map_sg(dev, sg, nents, i, dir);
+
return nents;
}
-#define dma_unmap_sg(dev, sg, nents, dir) do { } while (0)
+static inline void dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir)
+{
+ debug_dma_unmap_sg(dev, sg, nents, dir);
+}
static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
unsigned long offset, size_t size,
dma_cache_sync(dev, sg_virt(&sg[i]), sg[i].length, dir);
#endif
sg[i].dma_address = sg_phys(&sg[i]);
+ sg[i].dma_length = sg[i].length;
}
}
enum dma_data_direction dir)
{
dma_sync_single(dev, dma_handle, size, dir);
+ debug_dma_sync_single_for_cpu(dev, dma_handle, size, dir);
}
static inline void dma_sync_single_for_device(struct device *dev,
enum dma_data_direction dir)
{
dma_sync_single(dev, dma_handle, size, dir);
+ debug_dma_sync_single_for_device(dev, dma_handle, size, dir);
}
static inline void dma_sync_single_range_for_cpu(struct device *dev,
enum dma_data_direction direction)
{
dma_sync_single_for_cpu(dev, dma_handle+offset, size, direction);
+ debug_dma_sync_single_range_for_cpu(dev, dma_handle,
+ offset, size, direction);
}
static inline void dma_sync_single_range_for_device(struct device *dev,
enum dma_data_direction direction)
{
dma_sync_single_for_device(dev, dma_handle+offset, size, direction);
+ debug_dma_sync_single_range_for_device(dev, dma_handle,
+ offset, size, direction);
}
enum dma_data_direction dir)
{
dma_sync_sg(dev, sg, nelems, dir);
+ debug_dma_sync_sg_for_cpu(dev, sg, nelems, dir);
}
static inline void dma_sync_sg_for_device(struct device *dev,
enum dma_data_direction dir)
{
dma_sync_sg(dev, sg, nelems, dir);
+ debug_dma_sync_sg_for_device(dev, sg, nelems, dir);
}
-
static inline int dma_get_cache_alignment(void)
{
/*
struct scatterlist {
#ifdef CONFIG_DEBUG_SG
- unsigned long sg_magic;
+ unsigned long sg_magic;
#endif
- unsigned long page_link;
- unsigned int offset;/* for highmem, page offset */
- dma_addr_t dma_address;
- unsigned int length;
+ unsigned long page_link;
+ unsigned int offset; /* for highmem, page offset */
+ unsigned int length;
+ dma_addr_t dma_address;
+ unsigned int dma_length;
};
#define ISA_DMA_THRESHOLD PHYS_ADDR_MASK
#define pcibus_to_node(bus) ((void)(bus), -1)
#define pcibus_to_cpumask(bus) (pcibus_to_node(bus) == -1 ? \
CPU_MASK_ALL : \
- node_to_cpumask(pcibus_to_node(bus)) \
- )
+ node_to_cpumask(pcibus_to_node(bus)))
+#define cpumask_of_pcibus(bus) (pcibus_to_node(bus) == -1 ? \
+ CPU_MASK_ALL_PTR : \
+ cpumask_of_node(pcibus_to_node(bus)))
+
#endif
#include <asm-generic/topology.h>
#define __NR_dup3 330
#define __NR_pipe2 331
#define __NR_inotify_init1 332
+#define __NR_preadv 333
+#define __NR_pwritev 334
-#define NR_syscalls 333
+#define NR_syscalls 335
#ifdef __KERNEL__
#define __NR_dup3 358
#define __NR_pipe2 359
#define __NR_inotify_init1 360
+#define __NR_preadv 361
+#define __NR_pwritev 362
#ifdef __KERNEL__
-#define NR_syscalls 361
+#define NR_syscalls 363
#define __ARCH_WANT_IPC_PARSE_VERSION
#define __ARCH_WANT_OLD_READDIR
* Set the PHY and PLL enable bit
*/
__raw_writel(PHY_ENB | PLL_ENB, USBPCTL1);
- while (i-- &&
- ((__raw_readl(USBST) & ACT_PLL_STATUS) != ACT_PLL_STATUS))
+ while (i--) {
+ if (ACT_PLL_STATUS == (__raw_readl(USBST) & ACT_PLL_STATUS)) {
+ /* Set the PHY RST bit */
+ __raw_writel(PHY_ENB | PLL_ENB | PHY_RST, USBPCTL1);
+ printk(KERN_INFO "sh7786 usb setup done\n");
+ break;
+ }
cpu_relax();
-
- if (i) {
- /* Set the PHY RST bit */
- __raw_writel(PHY_ENB | PLL_ENB | PHY_RST, USBPCTL1);
- printk(KERN_INFO "sh7786 usb setup done\n");
}
}
.long sys_dup3 /* 330 */
.long sys_pipe2
.long sys_inotify_init1
+ .long sys_preadv
+ .long sys_pwritev
.long sys_dup3
.long sys_pipe2
.long sys_inotify_init1 /* 360 */
+ .long sys_preadv
+ .long sys_pwritev
* for more details.
*/
#include <linux/mm.h>
+#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
+#include <linux/dma-debug.h>
+#include <linux/io.h>
#include <asm/cacheflush.h>
#include <asm/addrspace.h>
-#include <asm/io.h>
+
+#define PREALLOC_DMA_DEBUG_ENTRIES 4096
+
+static int __init dma_init(void)
+{
+ dma_debug_init(PREALLOC_DMA_DEBUG_ENTRIES);
+ return 0;
+}
+fs_initcall(dma_init);
void *dma_alloc_coherent(struct device *dev, size_t size,
dma_addr_t *dma_handle, gfp_t gfp)
split_page(pfn_to_page(virt_to_phys(ret) >> PAGE_SHIFT), order);
*dma_handle = virt_to_phys(ret);
+
+ debug_dma_alloc_coherent(dev, size, *dma_handle, ret_nocache);
+
return ret_nocache;
}
EXPORT_SYMBOL(dma_alloc_coherent);
unsigned long pfn = dma_handle >> PAGE_SHIFT;
int k;
- if (!dma_release_from_coherent(dev, order, vaddr)) {
- WARN_ON(irqs_disabled()); /* for portability */
- for (k = 0; k < (1 << order); k++)
- __free_pages(pfn_to_page(pfn + k), 0);
- iounmap(vaddr);
- }
+ WARN_ON(irqs_disabled()); /* for portability */
+
+ if (dma_release_from_coherent(dev, order, vaddr))
+ return;
+
+ debug_dma_free_coherent(dev, size, vaddr, dma_handle);
+ for (k = 0; k < (1 << order); k++)
+ __free_pages(pfn_to_page(pfn + k), 0);
+ iounmap(vaddr);
}
EXPORT_SYMBOL(dma_free_coherent);
#ifdef __KERNEL__
+#include <asm/system.h>
+
#define ATOMIC_INIT(i) { (i) }
extern int __atomic_add_return(int, atomic_t *);
if (!strcmp(parent->name, "dma")) {
p = parport_pc_probe_port(base, base + 0x400,
op->irqs[0], PARPORT_DMA_NOFIFO,
- op->dev.parent->parent);
+ op->dev.parent->parent, 0);
if (!p)
return -ENOMEM;
dev_set_drvdata(&op->dev, p);
p = parport_pc_probe_port(base, base + 0x400,
op->irqs[0],
slot,
- op->dev.parent);
+ op->dev.parent,
+ 0);
err = -ENOMEM;
if (!p)
goto out_disable_irq;
free_queue(lp->tx_num_entries, lp->tx_base);
out_free_mssbuf:
- if (mssbuf)
- kfree(mssbuf);
+ kfree(mssbuf);
out_free_iommu:
ldc_iommu_release(lp);
hlist_del(&lp->list);
- if (lp->mssbuf)
- kfree(lp->mssbuf);
+ kfree(lp->mssbuf);
ldc_iommu_release(lp);
while (!cpu_isset(cpuid, smp_commenced_mask))
rmb();
- ipi_call_lock();
+ ipi_call_lock_irq();
cpu_set(cpuid, cpu_online_map);
- ipi_call_unlock();
+ ipi_call_unlock_irq();
/* idle thread is expected to have preempt disabled */
preempt_disable();
bool "SGI Ultraviolet"
depends on X86_64
depends on X86_EXTENDED_PLATFORM
+ depends on NUMA
select X86_X2APIC
---help---
This option is needed in order to support SGI Ultraviolet systems.
/* ========================================================================= */
/* UVH_BAU_DATA_CONFIG */
/* ========================================================================= */
+#define UVH_LB_BAU_MISC_CONTROL 0x320170UL
+#define UV_ENABLE_INTD_SOFT_ACK_MODE_SHIFT 15
+#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHIFT 16
+#define UV_INTD_SOFT_ACK_TIMEOUT_PERIOD 0x000000000bUL
+/* 1011 timebase 7 (168millisec) * 3 ticks -> 500ms */
#define UVH_BAU_DATA_CONFIG 0x61680UL
#define UVH_BAU_DATA_CONFIG_32 0x0438
unsigned long gnode_upper, lowmem_redir_base, lowmem_redir_size;
int bytes, nid, cpu, lcpu, pnode, blade, i, j, m_val, n_val;
int max_pnode = 0;
- unsigned long mmr_base, present;
+ unsigned long mmr_base, present, paddr;
+ unsigned short pnode_mask;
map_low_mmrs();
}
}
+ pnode_mask = (1 << n_val) - 1;
node_id.v = uv_read_local_mmr(UVH_NODE_ID);
gnode_upper = (((unsigned long)node_id.s.node_id) &
~((1 << n_val) - 1)) << m_val;
uv_cpu_hub_info(cpu)->numa_blade_id = blade;
uv_cpu_hub_info(cpu)->blade_processor_id = lcpu;
uv_cpu_hub_info(cpu)->pnode = pnode;
- uv_cpu_hub_info(cpu)->pnode_mask = (1 << n_val) - 1;
+ uv_cpu_hub_info(cpu)->pnode_mask = pnode_mask;
uv_cpu_hub_info(cpu)->gpa_mask = (1 << (m_val + n_val)) - 1;
uv_cpu_hub_info(cpu)->gnode_upper = gnode_upper;
uv_cpu_hub_info(cpu)->global_mmr_base = mmr_base;
lcpu, blade);
}
+ /* Add blade/pnode info for nodes without cpus */
+ for_each_online_node(nid) {
+ if (uv_node_to_blade[nid] >= 0)
+ continue;
+ paddr = node_start_pfn(nid) << PAGE_SHIFT;
+ pnode = (paddr >> m_val) & pnode_mask;
+ blade = boot_pnode_to_blade(pnode);
+ uv_node_to_blade[nid] = blade;
+ }
+
map_gru_high(max_pnode);
map_mmr_high(max_pnode);
map_config_high(max_pnode);
memcpy(&uv_systab, tab, sizeof(struct uv_systab));
iounmap(tab);
- printk(KERN_INFO "EFI UV System Table Revision %d\n", tab->revision);
+ printk(KERN_INFO "EFI UV System Table Revision %d\n",
+ uv_systab.revision);
}
#else /* !CONFIG_EFI */
static void drv_write(struct drv_cmd *cmd)
{
+ int this_cpu;
+
+ this_cpu = get_cpu();
+ if (cpumask_test_cpu(this_cpu, cmd->mask))
+ do_drv_write(cmd);
smp_call_function_many(cmd->mask, do_drv_write, cmd, 1);
+ put_cpu();
}
static u32 get_cur_val(const struct cpumask *mask)
EXPORT_SYMBOL_GPL(ucode_cpu_info);
#ifdef CONFIG_MICROCODE_OLD_INTERFACE
-struct update_for_cpu {
- const void __user *buf;
- size_t size;
-};
-
-static long update_for_cpu(void *_ufc)
-{
- struct update_for_cpu *ufc = _ufc;
- int error;
-
- error = microcode_ops->request_microcode_user(smp_processor_id(),
- ufc->buf, ufc->size);
- if (error < 0)
- return error;
- if (!error)
- microcode_ops->apply_microcode(smp_processor_id());
- return error;
-}
-
static int do_microcode_update(const void __user *buf, size_t size)
{
+ cpumask_t old;
int error = 0;
int cpu;
- struct update_for_cpu ufc = { .buf = buf, .size = size };
+
+ old = current->cpus_allowed;
for_each_online_cpu(cpu) {
struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
if (!uci->valid)
continue;
- error = work_on_cpu(cpu, update_for_cpu, &ufc);
+
+ set_cpus_allowed_ptr(current, &cpumask_of_cpu(cpu));
+ error = microcode_ops->request_microcode_user(cpu, buf, size);
if (error < 0)
- break;
+ goto out;
+ if (!error)
+ microcode_ops->apply_microcode(cpu);
}
+out:
+ set_cpus_allowed_ptr(current, &old);
return error;
}
/* position of pnode (which is nasid>>1): */
static int uv_nshift __read_mostly;
+/* base pnode in this partition */
+static int uv_partition_base_pnode __read_mostly;
static unsigned long uv_mmask __read_mostly;
static DEFINE_PER_CPU(struct bau_control, bau_control);
/*
+ * Determine the first node on a blade.
+ */
+static int __init blade_to_first_node(int blade)
+{
+ int node, b;
+
+ for_each_online_node(node) {
+ b = uv_node_to_blade_id(node);
+ if (blade == b)
+ return node;
+ }
+ return -1; /* shouldn't happen */
+}
+
+/*
+ * Determine the apicid of the first cpu on a blade.
+ */
+static int __init blade_to_first_apicid(int blade)
+{
+ int cpu;
+
+ for_each_present_cpu(cpu)
+ if (blade == uv_cpu_to_blade_id(cpu))
+ return per_cpu(x86_cpu_to_apicid, cpu);
+ return -1;
+}
+
+/*
* Free a software acknowledge hardware resource by clearing its Pending
* bit. This will return a reply to the sender.
* If the message has timed out, a reply has already been sent by the
msp = __get_cpu_var(bau_control).msg_statuses + msg_slot;
cpu = uv_blade_processor_id();
msg->number_of_cpus =
- uv_blade_nr_online_cpus(uv_node_to_blade_id(numa_node_id()));
+ uv_blade_nr_online_cpus(uv_node_to_blade_id(numa_node_id()));
this_cpu_mask = 1UL << cpu;
if (msp->seen_by.bits & this_cpu_mask)
return;
* Returns @flush_mask if some remote flushing remains to be done. The
* mask will have some bits still set.
*/
-const struct cpumask *uv_flush_send_and_wait(int cpu, int this_blade,
+const struct cpumask *uv_flush_send_and_wait(int cpu, int this_pnode,
struct bau_desc *bau_desc,
struct cpumask *flush_mask)
{
int completion_status = 0;
int right_shift;
int tries = 0;
- int blade;
+ int pnode;
int bit;
unsigned long mmr_offset;
unsigned long index;
* use the IPI method of shootdown on them.
*/
for_each_cpu(bit, flush_mask) {
- blade = uv_cpu_to_blade_id(bit);
- if (blade == this_blade)
+ pnode = uv_cpu_to_pnode(bit);
+ if (pnode == this_pnode)
continue;
cpumask_clear_cpu(bit, flush_mask);
}
struct cpumask *flush_mask = __get_cpu_var(uv_flush_tlb_mask);
int i;
int bit;
- int blade;
+ int pnode;
int uv_cpu;
- int this_blade;
+ int this_pnode;
int locals = 0;
struct bau_desc *bau_desc;
cpumask_andnot(flush_mask, cpumask, cpumask_of(cpu));
uv_cpu = uv_blade_processor_id();
- this_blade = uv_numa_blade_id();
+ this_pnode = uv_hub_info->pnode;
bau_desc = __get_cpu_var(bau_control).descriptor_base;
bau_desc += UV_ITEMS_PER_DESCRIPTOR * uv_cpu;
i = 0;
for_each_cpu(bit, flush_mask) {
- blade = uv_cpu_to_blade_id(bit);
- BUG_ON(blade > (UV_DISTRIBUTION_SIZE - 1));
- if (blade == this_blade) {
+ pnode = uv_cpu_to_pnode(bit);
+ BUG_ON(pnode > (UV_DISTRIBUTION_SIZE - 1));
+ if (pnode == this_pnode) {
locals++;
continue;
}
- bau_node_set(blade, &bau_desc->distribution);
+ bau_node_set(pnode - uv_partition_base_pnode,
+ &bau_desc->distribution);
i++;
}
if (i == 0) {
bau_desc->payload.address = va;
bau_desc->payload.sending_cpu = cpu;
- return uv_flush_send_and_wait(uv_cpu, this_blade, bau_desc, flush_mask);
+ return uv_flush_send_and_wait(uv_cpu, this_pnode, bau_desc, flush_mask);
}
/*
set_irq_regs(old_regs);
}
+/*
+ * uv_enable_timeouts
+ *
+ * Each target blade (i.e. blades that have cpus) needs to have
+ * shootdown message timeouts enabled. The timeout does not cause
+ * an interrupt, but causes an error message to be returned to
+ * the sender.
+ */
static void uv_enable_timeouts(void)
{
- int i;
int blade;
- int last_blade;
+ int nblades;
int pnode;
- int cur_cpu = 0;
- unsigned long apicid;
+ unsigned long mmr_image;
- last_blade = -1;
- for_each_online_node(i) {
- blade = uv_node_to_blade_id(i);
- if (blade == last_blade)
+ nblades = uv_num_possible_blades();
+
+ for (blade = 0; blade < nblades; blade++) {
+ if (!uv_blade_nr_possible_cpus(blade))
continue;
- last_blade = blade;
- apicid = per_cpu(x86_cpu_to_apicid, cur_cpu);
+
pnode = uv_blade_to_pnode(blade);
- cur_cpu += uv_blade_nr_possible_cpus(i);
+ mmr_image =
+ uv_read_global_mmr64(pnode, UVH_LB_BAU_MISC_CONTROL);
+ /*
+	 * Set the timeout period and then lock it in; the three
+	 * writes below capture and lock in the period.
+ *
+ * To program the period, the SOFT_ACK_MODE must be off.
+ */
+ mmr_image &= ~((unsigned long)1 <<
+ UV_ENABLE_INTD_SOFT_ACK_MODE_SHIFT);
+ uv_write_global_mmr64
+ (pnode, UVH_LB_BAU_MISC_CONTROL, mmr_image);
+ /*
+ * Set the 4-bit period.
+ */
+ mmr_image &= ~((unsigned long)0xf <<
+ UV_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHIFT);
+ mmr_image |= (UV_INTD_SOFT_ACK_TIMEOUT_PERIOD <<
+ UV_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHIFT);
+ uv_write_global_mmr64
+ (pnode, UVH_LB_BAU_MISC_CONTROL, mmr_image);
+ /*
+ * Subsequent reversals of the timebase bit (3) cause an
+ * immediate timeout of one or all INTD resources as
+ * indicated in bits 2:0 (7 causes all of them to timeout).
+ */
+ mmr_image |= ((unsigned long)1 <<
+ UV_ENABLE_INTD_SOFT_ACK_MODE_SHIFT);
+ uv_write_global_mmr64
+ (pnode, UVH_LB_BAU_MISC_CONTROL, mmr_image);
}
}
stat->requestee, stat->onetlb, stat->alltlb,
stat->s_retry, stat->d_retry, stat->ptc_i);
seq_printf(file, "%lx %ld %ld %ld %ld %ld %ld\n",
- uv_read_global_mmr64(uv_blade_to_pnode
- (uv_cpu_to_blade_id(cpu)),
+ uv_read_global_mmr64(uv_cpu_to_pnode(cpu),
UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE),
stat->sflush, stat->dflush,
stat->retriesok, stat->nomsg,
* finish the initialization of the per-blade control structures
*/
static void __init
-uv_table_bases_finish(int blade, int node, int cur_cpu,
+uv_table_bases_finish(int blade,
struct bau_control *bau_tablesp,
struct bau_desc *adp)
{
struct bau_control *bcp;
- int i;
+ int cpu;
- for (i = cur_cpu; i < cur_cpu + uv_blade_nr_possible_cpus(blade); i++) {
- bcp = (struct bau_control *)&per_cpu(bau_control, i);
+ for_each_present_cpu(cpu) {
+ if (blade != uv_cpu_to_blade_id(cpu))
+ continue;
+ bcp = (struct bau_control *)&per_cpu(bau_control, cpu);
bcp->bau_msg_head = bau_tablesp->va_queue_first;
bcp->va_queue_first = bau_tablesp->va_queue_first;
bcp->va_queue_last = bau_tablesp->va_queue_last;
struct bau_desc *adp;
struct bau_desc *ad2;
- adp = (struct bau_desc *)
- kmalloc_node(16384, GFP_KERNEL, node);
+ adp = (struct bau_desc *)kmalloc_node(16384, GFP_KERNEL, node);
BUG_ON(!adp);
- pa = __pa((unsigned long)adp);
+	pa = uv_gpa(adp); /* need the real nasid */
n = pa >> uv_nshift;
m = pa & uv_mmask;
for (i = 0, ad2 = adp; i < UV_ACTIVATION_DESCRIPTOR_SIZE; i++, ad2++) {
memset(ad2, 0, sizeof(struct bau_desc));
ad2->header.sw_ack_flag = 1;
- ad2->header.base_dest_nodeid =
- uv_blade_to_pnode(uv_cpu_to_blade_id(0));
+ /*
+ * base_dest_nodeid is the first node in the partition, so
+ * the bit map will indicate partition-relative node numbers.
+ * note that base_dest_nodeid is actually a nasid.
+ */
+ ad2->header.base_dest_nodeid = uv_partition_base_pnode << 1;
ad2->header.command = UV_NET_ENDPOINT_INTD;
ad2->header.int_both = 1;
/*
uv_payload_queue_init(int node, int pnode, struct bau_control *bau_tablesp)
{
struct bau_payload_queue_entry *pqp;
+ unsigned long pa;
+ int pn;
char *cp;
pqp = (struct bau_payload_queue_entry *) kmalloc_node(
cp = (char *)pqp + 31;
pqp = (struct bau_payload_queue_entry *)(((unsigned long)cp >> 5) << 5);
bau_tablesp->va_queue_first = pqp;
+ /*
+	 * need the pnode where the memory was actually allocated
+ */
+ pa = uv_gpa(pqp);
+ pn = pa >> uv_nshift;
uv_write_global_mmr64(pnode,
UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST,
- ((unsigned long)pnode <<
- UV_PAYLOADQ_PNODE_SHIFT) |
+ ((unsigned long)pn << UV_PAYLOADQ_PNODE_SHIFT) |
uv_physnodeaddr(pqp));
uv_write_global_mmr64(pnode, UVH_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL,
uv_physnodeaddr(pqp));
/*
* Initialization of each UV blade's structures
*/
-static int __init uv_init_blade(int blade, int node, int cur_cpu)
+static int __init uv_init_blade(int blade)
{
+ int node;
int pnode;
unsigned long pa;
unsigned long apicid;
struct bau_payload_queue_entry *pqp;
struct bau_control *bau_tablesp;
+ node = blade_to_first_node(blade);
bau_tablesp = uv_table_bases_init(blade, node);
pnode = uv_blade_to_pnode(blade);
adp = uv_activation_descriptor_init(node, pnode);
pqp = uv_payload_queue_init(node, pnode, bau_tablesp);
- uv_table_bases_finish(blade, node, cur_cpu, bau_tablesp, adp);
+ uv_table_bases_finish(blade, bau_tablesp, adp);
/*
* the below initialization can't be in firmware because the
* messaging IRQ will be determined by the OS
*/
- apicid = per_cpu(x86_cpu_to_apicid, cur_cpu);
+ apicid = blade_to_first_apicid(blade);
pa = uv_read_global_mmr64(pnode, UVH_BAU_DATA_CONFIG);
if ((pa & 0xff) != UV_BAU_MESSAGE) {
uv_write_global_mmr64(pnode, UVH_BAU_DATA_CONFIG,
static int __init uv_bau_init(void)
{
int blade;
- int node;
int nblades;
- int last_blade;
int cur_cpu;
if (!is_uv_system())
uv_bau_retry_limit = 1;
uv_nshift = uv_hub_info->n_val;
uv_mmask = (1UL << uv_hub_info->n_val) - 1;
- nblades = 0;
- last_blade = -1;
- cur_cpu = 0;
- for_each_online_node(node) {
- blade = uv_node_to_blade_id(node);
- if (blade == last_blade)
- continue;
- last_blade = blade;
- nblades++;
- }
+ nblades = uv_num_possible_blades();
+
uv_bau_table_bases = (struct bau_control **)
kmalloc(nblades * sizeof(struct bau_control *), GFP_KERNEL);
BUG_ON(!uv_bau_table_bases);
- last_blade = -1;
- for_each_online_node(node) {
- blade = uv_node_to_blade_id(node);
- if (blade == last_blade)
- continue;
- last_blade = blade;
- uv_init_blade(blade, node, cur_cpu);
- cur_cpu += uv_blade_nr_possible_cpus(blade);
- }
+ uv_partition_base_pnode = 0x7fffffff;
+ for (blade = 0; blade < nblades; blade++)
+ if (uv_blade_nr_possible_cpus(blade) &&
+ (uv_blade_to_pnode(blade) < uv_partition_base_pnode))
+ uv_partition_base_pnode = uv_blade_to_pnode(blade);
+ for (blade = 0; blade < nblades; blade++)
+ if (uv_blade_nr_possible_cpus(blade))
+ uv_init_blade(blade);
+
alloc_intr_gate(UV_BAU_MESSAGE, uv_bau_message_intr1);
uv_enable_timeouts();
#include <linux/sysdev.h>
#include <asm/uv/bios.h>
+#include <asm/uv/uv.h>
struct kobject *sgi_uv_kobj;
{
unsigned long ret;
+ if (!is_uv_system())
+ return -ENODEV;
+
if (!sgi_uv_kobj)
sgi_uv_kobj = kobject_create_and_add("sgi_uv", firmware_kobj);
if (!sgi_uv_kobj) {
#include <linux/rbtree.h>
#include <linux/interrupt.h>
-#define REQ_SYNC 1
-#define REQ_ASYNC 0
-
/*
* See Documentation/block/as-iosched.txt
*/
struct list_head fifo_list[2];
struct request *next_rq[2]; /* next in sort order */
- sector_t last_sector[2]; /* last REQ_SYNC & REQ_ASYNC sectors */
+ sector_t last_sector[2]; /* last SYNC & ASYNC sectors */
unsigned long exit_prob; /* probability a task will exit while
being waited on */
unsigned long last_check_fifo[2];
int changed_batch; /* 1: waiting for old batch to end */
int new_batch; /* 1: waiting on first read complete */
- int batch_data_dir; /* current batch REQ_SYNC / REQ_ASYNC */
+ int batch_data_dir; /* current batch SYNC / ASYNC */
int write_batch_count; /* max # of reqs in a write batch */
int current_write_count; /* how many requests left this batch */
int write_batch_idled; /* has the write batch gone idle? */
if (aic == NULL)
return;
- if (data_dir == REQ_SYNC) {
+ if (data_dir == BLK_RW_SYNC) {
unsigned long in_flight = atomic_read(&aic->nr_queued)
+ atomic_read(&aic->nr_dispatched);
spin_lock(&aic->lock);
*/
static void update_write_batch(struct as_data *ad)
{
- unsigned long batch = ad->batch_expire[REQ_ASYNC];
+ unsigned long batch = ad->batch_expire[BLK_RW_ASYNC];
long write_time;
write_time = (jiffies - ad->current_batch_expires) + batch;
kblockd_schedule_work(q, &ad->antic_work);
ad->changed_batch = 0;
- if (ad->batch_data_dir == REQ_SYNC)
+ if (ad->batch_data_dir == BLK_RW_SYNC)
ad->new_batch = 1;
}
WARN_ON(ad->nr_dispatched == 0);
if (ad->new_batch && ad->batch_data_dir == rq_is_sync(rq)) {
update_write_batch(ad);
ad->current_batch_expires = jiffies +
- ad->batch_expire[REQ_SYNC];
+ ad->batch_expire[BLK_RW_SYNC];
ad->new_batch = 0;
}
if (ad->changed_batch || ad->new_batch)
return 0;
- if (ad->batch_data_dir == REQ_SYNC)
+ if (ad->batch_data_dir == BLK_RW_SYNC)
/* TODO! add a check so a complete fifo gets written? */
return time_after(jiffies, ad->current_batch_expires);
*/
ad->last_sector[data_dir] = rq->sector + rq->nr_sectors;
- if (data_dir == REQ_SYNC) {
+ if (data_dir == BLK_RW_SYNC) {
struct io_context *ioc = RQ_IOC(rq);
/* In case we have to anticipate after this */
copy_io_context(&ad->io_context, &ioc);
static int as_dispatch_request(struct request_queue *q, int force)
{
struct as_data *ad = q->elevator->elevator_data;
- const int reads = !list_empty(&ad->fifo_list[REQ_SYNC]);
- const int writes = !list_empty(&ad->fifo_list[REQ_ASYNC]);
+ const int reads = !list_empty(&ad->fifo_list[BLK_RW_SYNC]);
+ const int writes = !list_empty(&ad->fifo_list[BLK_RW_ASYNC]);
struct request *rq;
if (unlikely(force)) {
/*
* Forced dispatch, accounting is useless. Reset
* accounting states and dump fifo_lists. Note that
- * batch_data_dir is reset to REQ_SYNC to avoid
+ * batch_data_dir is reset to BLK_RW_SYNC to avoid
* screwing write batch accounting as write batch
* accounting occurs on W->R transition.
*/
int dispatched = 0;
- ad->batch_data_dir = REQ_SYNC;
+ ad->batch_data_dir = BLK_RW_SYNC;
ad->changed_batch = 0;
ad->new_batch = 0;
- while (ad->next_rq[REQ_SYNC]) {
- as_move_to_dispatch(ad, ad->next_rq[REQ_SYNC]);
+ while (ad->next_rq[BLK_RW_SYNC]) {
+ as_move_to_dispatch(ad, ad->next_rq[BLK_RW_SYNC]);
dispatched++;
}
- ad->last_check_fifo[REQ_SYNC] = jiffies;
+ ad->last_check_fifo[BLK_RW_SYNC] = jiffies;
- while (ad->next_rq[REQ_ASYNC]) {
- as_move_to_dispatch(ad, ad->next_rq[REQ_ASYNC]);
+ while (ad->next_rq[BLK_RW_ASYNC]) {
+ as_move_to_dispatch(ad, ad->next_rq[BLK_RW_ASYNC]);
dispatched++;
}
- ad->last_check_fifo[REQ_ASYNC] = jiffies;
+ ad->last_check_fifo[BLK_RW_ASYNC] = jiffies;
return dispatched;
}
/* Signal that the write batch was uncontended, so we can't time it */
- if (ad->batch_data_dir == REQ_ASYNC && !reads) {
+ if (ad->batch_data_dir == BLK_RW_ASYNC && !reads) {
if (ad->current_write_count == 0 || !writes)
ad->write_batch_idled = 1;
}
*/
rq = ad->next_rq[ad->batch_data_dir];
- if (ad->batch_data_dir == REQ_SYNC && ad->antic_expire) {
- if (as_fifo_expired(ad, REQ_SYNC))
+ if (ad->batch_data_dir == BLK_RW_SYNC && ad->antic_expire) {
+ if (as_fifo_expired(ad, BLK_RW_SYNC))
goto fifo_expired;
if (as_can_anticipate(ad, rq)) {
/* we have a "next request" */
if (reads && !writes)
ad->current_batch_expires =
- jiffies + ad->batch_expire[REQ_SYNC];
+ jiffies + ad->batch_expire[BLK_RW_SYNC];
goto dispatch_request;
}
}
*/
if (reads) {
- BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[REQ_SYNC]));
+ BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[BLK_RW_SYNC]));
- if (writes && ad->batch_data_dir == REQ_SYNC)
+ if (writes && ad->batch_data_dir == BLK_RW_SYNC)
/*
* Last batch was a read, switch to writes
*/
goto dispatch_writes;
- if (ad->batch_data_dir == REQ_ASYNC) {
+ if (ad->batch_data_dir == BLK_RW_ASYNC) {
WARN_ON(ad->new_batch);
ad->changed_batch = 1;
}
- ad->batch_data_dir = REQ_SYNC;
- rq = rq_entry_fifo(ad->fifo_list[REQ_SYNC].next);
+ ad->batch_data_dir = BLK_RW_SYNC;
+ rq = rq_entry_fifo(ad->fifo_list[BLK_RW_SYNC].next);
ad->last_check_fifo[ad->batch_data_dir] = jiffies;
goto dispatch_request;
}
if (writes) {
dispatch_writes:
- BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[REQ_ASYNC]));
+ BUG_ON(RB_EMPTY_ROOT(&ad->sort_list[BLK_RW_ASYNC]));
- if (ad->batch_data_dir == REQ_SYNC) {
+ if (ad->batch_data_dir == BLK_RW_SYNC) {
ad->changed_batch = 1;
/*
*/
ad->new_batch = 0;
}
- ad->batch_data_dir = REQ_ASYNC;
+ ad->batch_data_dir = BLK_RW_ASYNC;
ad->current_write_count = ad->write_batch_count;
ad->write_batch_idled = 0;
- rq = rq_entry_fifo(ad->fifo_list[REQ_ASYNC].next);
- ad->last_check_fifo[REQ_ASYNC] = jiffies;
+ rq = rq_entry_fifo(ad->fifo_list[BLK_RW_ASYNC].next);
+ ad->last_check_fifo[BLK_RW_ASYNC] = jiffies;
goto dispatch_request;
}
if (ad->nr_dispatched)
return 0;
- if (ad->batch_data_dir == REQ_ASYNC)
+ if (ad->batch_data_dir == BLK_RW_ASYNC)
ad->current_batch_expires = jiffies +
- ad->batch_expire[REQ_ASYNC];
+ ad->batch_expire[BLK_RW_ASYNC];
else
ad->new_batch = 1;
{
struct as_data *ad = q->elevator->elevator_data;
- return list_empty(&ad->fifo_list[REQ_ASYNC])
- && list_empty(&ad->fifo_list[REQ_SYNC]);
+ return list_empty(&ad->fifo_list[BLK_RW_ASYNC])
+ && list_empty(&ad->fifo_list[BLK_RW_SYNC]);
}
static int
del_timer_sync(&ad->antic_timer);
cancel_work_sync(&ad->antic_work);
- BUG_ON(!list_empty(&ad->fifo_list[REQ_SYNC]));
- BUG_ON(!list_empty(&ad->fifo_list[REQ_ASYNC]));
+ BUG_ON(!list_empty(&ad->fifo_list[BLK_RW_SYNC]));
+ BUG_ON(!list_empty(&ad->fifo_list[BLK_RW_ASYNC]));
put_io_context(ad->io_context);
kfree(ad);
init_timer(&ad->antic_timer);
INIT_WORK(&ad->antic_work, as_work_handler);
- INIT_LIST_HEAD(&ad->fifo_list[REQ_SYNC]);
- INIT_LIST_HEAD(&ad->fifo_list[REQ_ASYNC]);
- ad->sort_list[REQ_SYNC] = RB_ROOT;
- ad->sort_list[REQ_ASYNC] = RB_ROOT;
- ad->fifo_expire[REQ_SYNC] = default_read_expire;
- ad->fifo_expire[REQ_ASYNC] = default_write_expire;
+ INIT_LIST_HEAD(&ad->fifo_list[BLK_RW_SYNC]);
+ INIT_LIST_HEAD(&ad->fifo_list[BLK_RW_ASYNC]);
+ ad->sort_list[BLK_RW_SYNC] = RB_ROOT;
+ ad->sort_list[BLK_RW_ASYNC] = RB_ROOT;
+ ad->fifo_expire[BLK_RW_SYNC] = default_read_expire;
+ ad->fifo_expire[BLK_RW_ASYNC] = default_write_expire;
ad->antic_expire = default_antic_expire;
- ad->batch_expire[REQ_SYNC] = default_read_batch_expire;
- ad->batch_expire[REQ_ASYNC] = default_write_batch_expire;
+ ad->batch_expire[BLK_RW_SYNC] = default_read_batch_expire;
+ ad->batch_expire[BLK_RW_ASYNC] = default_write_batch_expire;
- ad->current_batch_expires = jiffies + ad->batch_expire[REQ_SYNC];
- ad->write_batch_count = ad->batch_expire[REQ_ASYNC] / 10;
+ ad->current_batch_expires = jiffies + ad->batch_expire[BLK_RW_SYNC];
+ ad->write_batch_count = ad->batch_expire[BLK_RW_ASYNC] / 10;
if (ad->write_batch_count < 2)
ad->write_batch_count = 2;
struct as_data *ad = e->elevator_data; \
return as_var_show(jiffies_to_msecs((__VAR)), (page)); \
}
-SHOW_FUNCTION(as_read_expire_show, ad->fifo_expire[REQ_SYNC]);
-SHOW_FUNCTION(as_write_expire_show, ad->fifo_expire[REQ_ASYNC]);
+SHOW_FUNCTION(as_read_expire_show, ad->fifo_expire[BLK_RW_SYNC]);
+SHOW_FUNCTION(as_write_expire_show, ad->fifo_expire[BLK_RW_ASYNC]);
SHOW_FUNCTION(as_antic_expire_show, ad->antic_expire);
-SHOW_FUNCTION(as_read_batch_expire_show, ad->batch_expire[REQ_SYNC]);
-SHOW_FUNCTION(as_write_batch_expire_show, ad->batch_expire[REQ_ASYNC]);
+SHOW_FUNCTION(as_read_batch_expire_show, ad->batch_expire[BLK_RW_SYNC]);
+SHOW_FUNCTION(as_write_batch_expire_show, ad->batch_expire[BLK_RW_ASYNC]);
#undef SHOW_FUNCTION
#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX) \
*(__PTR) = msecs_to_jiffies(*(__PTR)); \
return ret; \
}
-STORE_FUNCTION(as_read_expire_store, &ad->fifo_expire[REQ_SYNC], 0, INT_MAX);
-STORE_FUNCTION(as_write_expire_store, &ad->fifo_expire[REQ_ASYNC], 0, INT_MAX);
+STORE_FUNCTION(as_read_expire_store, &ad->fifo_expire[BLK_RW_SYNC], 0, INT_MAX);
+STORE_FUNCTION(as_write_expire_store,
+ &ad->fifo_expire[BLK_RW_ASYNC], 0, INT_MAX);
STORE_FUNCTION(as_antic_expire_store, &ad->antic_expire, 0, INT_MAX);
STORE_FUNCTION(as_read_batch_expire_store,
- &ad->batch_expire[REQ_SYNC], 0, INT_MAX);
+ &ad->batch_expire[BLK_RW_SYNC], 0, INT_MAX);
STORE_FUNCTION(as_write_batch_expire_store,
- &ad->batch_expire[REQ_ASYNC], 0, INT_MAX);
+ &ad->batch_expire[BLK_RW_ASYNC], 0, INT_MAX);
#undef STORE_FUNCTION
#define AS_ATTR(name) \
return -ENXIO;
bio = bio_alloc(GFP_KERNEL, 0);
- if (!bio)
- return -ENOMEM;
-
bio->bi_end_io = bio_end_empty_barrier;
bio->bi_private = &wait;
bio->bi_bdev = bdev;
ssize_t ret = queue_var_store(&stats, page, count);
spin_lock_irq(q->queue_lock);
- elv_quisce_start(q);
+ elv_quiesce_start(q);
if (stats)
queue_flag_set(QUEUE_FLAG_IO_STAT, q);
else
queue_flag_clear(QUEUE_FLAG_IO_STAT, q);
- elv_quisce_end(q);
+ elv_quiesce_end(q);
spin_unlock_irq(q->queue_lock);
return ret;
int blk_dev_init(void);
-void elv_quisce_start(struct request_queue *q);
-void elv_quisce_end(struct request_queue *q);
+void elv_quiesce_start(struct request_queue *q);
+void elv_quiesce_end(struct request_queue *q);
/*
#define cfq_class_idle(cfqq) ((cfqq)->ioprio_class == IOPRIO_CLASS_IDLE)
#define cfq_class_rt(cfqq) ((cfqq)->ioprio_class == IOPRIO_CLASS_RT)
-#define ASYNC (0)
-#define SYNC (1)
-
#define sample_valid(samples) ((samples) > 80)
/*
* rr list of queues with requests and the count of them
*/
struct cfq_rb_root service_tree;
+
+ /*
+ * Each priority tree is sorted by next_request position. These
+ * trees are used when determining if two or more queues are
+ * interleaving requests (see cfq_close_cooperator).
+ */
+ struct rb_root prio_trees[CFQ_PRIO_LISTS];
+
unsigned int busy_queues;
/*
* Used to track any pending rt requests so we can pre-empt current
struct rb_node rb_node;
/* service_tree key */
unsigned long rb_key;
+ /* prio tree member */
+ struct rb_node p_node;
/* sorted list of pending requests */
struct rb_root sort_list;
/* if fifo isn't expired, next request to serve */
CFQ_CFQQ_FLAG_prio_changed, /* task priority has changed */
CFQ_CFQQ_FLAG_slice_new, /* no requests dispatched in slice */
CFQ_CFQQ_FLAG_sync, /* synchronous queue */
+ CFQ_CFQQ_FLAG_coop, /* has done a coop jump of the queue */
};
#define CFQ_CFQQ_FNS(name) \
CFQ_CFQQ_FNS(prio_changed);
CFQ_CFQQ_FNS(slice_new);
CFQ_CFQQ_FNS(sync);
+CFQ_CFQQ_FNS(coop);
#undef CFQ_CFQQ_FNS
#define cfq_log_cfqq(cfqd, cfqq, fmt, args...) \
return NULL;
}
+static void rb_erase_init(struct rb_node *n, struct rb_root *root)
+{
+ rb_erase(n, root);
+ RB_CLEAR_NODE(n);
+}
+
static void cfq_rb_erase(struct rb_node *n, struct cfq_rb_root *root)
{
if (root->left == n)
root->left = NULL;
-
- rb_erase(n, &root->rb);
- RB_CLEAR_NODE(n);
+ rb_erase_init(n, &root->rb);
}
/*
* requests waiting to be processed. It is sorted in the order that
* we will service the queues.
*/
-static void cfq_service_tree_add(struct cfq_data *cfqd,
- struct cfq_queue *cfqq, int add_front)
+static void cfq_service_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq,
+ int add_front)
{
struct rb_node **p, *parent;
struct cfq_queue *__cfqq;
rb_insert_color(&cfqq->rb_node, &cfqd->service_tree.rb);
}
+static struct cfq_queue *
+cfq_prio_tree_lookup(struct cfq_data *cfqd, int ioprio, sector_t sector,
+ struct rb_node **ret_parent, struct rb_node ***rb_link)
+{
+ struct rb_root *root = &cfqd->prio_trees[ioprio];
+ struct rb_node **p, *parent;
+ struct cfq_queue *cfqq = NULL;
+
+ parent = NULL;
+ p = &root->rb_node;
+ while (*p) {
+ struct rb_node **n;
+
+ parent = *p;
+ cfqq = rb_entry(parent, struct cfq_queue, p_node);
+
+ /*
+ * Sort strictly based on sector. Smallest to the left,
+ * largest to the right.
+ */
+ if (sector > cfqq->next_rq->sector)
+ n = &(*p)->rb_right;
+ else if (sector < cfqq->next_rq->sector)
+ n = &(*p)->rb_left;
+ else
+ break;
+ p = n;
+ }
+
+ *ret_parent = parent;
+ if (rb_link)
+ *rb_link = p;
+ return NULL;
+}
+
+static void cfq_prio_tree_add(struct cfq_data *cfqd, struct cfq_queue *cfqq)
+{
+ struct rb_root *root = &cfqd->prio_trees[cfqq->ioprio];
+ struct rb_node **p, *parent;
+ struct cfq_queue *__cfqq;
+
+ if (!RB_EMPTY_NODE(&cfqq->p_node))
+ rb_erase_init(&cfqq->p_node, root);
+
+ if (cfq_class_idle(cfqq))
+ return;
+ if (!cfqq->next_rq)
+ return;
+
+ __cfqq = cfq_prio_tree_lookup(cfqd, cfqq->ioprio, cfqq->next_rq->sector,
+ &parent, &p);
+ BUG_ON(__cfqq);
+
+ rb_link_node(&cfqq->p_node, parent, p);
+ rb_insert_color(&cfqq->p_node, root);
+}
+
/*
* Update cfqq's position in the service tree.
*/
/*
* Resorting requires the cfqq to be on the RR list already.
*/
- if (cfq_cfqq_on_rr(cfqq))
+ if (cfq_cfqq_on_rr(cfqq)) {
cfq_service_tree_add(cfqd, cfqq, 0);
+ cfq_prio_tree_add(cfqd, cfqq);
+ }
}
/*
if (!RB_EMPTY_NODE(&cfqq->rb_node))
cfq_rb_erase(&cfqq->rb_node, &cfqd->service_tree);
+ if (!RB_EMPTY_NODE(&cfqq->p_node))
+ rb_erase_init(&cfqq->p_node, &cfqd->prio_trees[cfqq->ioprio]);
BUG_ON(!cfqd->busy_queues);
cfqd->busy_queues--;
{
struct cfq_queue *cfqq = RQ_CFQQ(rq);
struct cfq_data *cfqd = cfqq->cfqd;
- struct request *__alias;
+ struct request *__alias, *prev;
cfqq->queued[rq_is_sync(rq)]++;
/*
* check if this request is a better next-serve candidate
*/
+ prev = cfqq->next_rq;
cfqq->next_rq = cfq_choose_req(cfqd, cfqq->next_rq, rq);
+
+ /*
+ * adjust priority tree position, if ->next_rq changes
+ */
+ if (prev != cfqq->next_rq)
+ cfq_prio_tree_add(cfqd, cfqq);
+
BUG_ON(!cfqq->next_rq);
}
/*
* Get and set a new active queue for service.
*/
-static struct cfq_queue *cfq_set_active_queue(struct cfq_data *cfqd)
+static struct cfq_queue *cfq_set_active_queue(struct cfq_data *cfqd,
+ struct cfq_queue *cfqq)
{
- struct cfq_queue *cfqq;
+ if (!cfqq) {
+ cfqq = cfq_get_next_queue(cfqd);
+ if (cfqq)
+ cfq_clear_cfqq_coop(cfqq);
+ }
- cfqq = cfq_get_next_queue(cfqd);
__cfq_set_active_queue(cfqd, cfqq);
return cfqq;
}
return cfq_dist_from_last(cfqd, rq) <= cic->seek_mean;
}
-static int cfq_close_cooperator(struct cfq_data *cfq_data,
- struct cfq_queue *cfqq)
+static struct cfq_queue *cfqq_close(struct cfq_data *cfqd,
+ struct cfq_queue *cur_cfqq)
+{
+ struct rb_root *root = &cfqd->prio_trees[cur_cfqq->ioprio];
+ struct rb_node *parent, *node;
+ struct cfq_queue *__cfqq;
+ sector_t sector = cfqd->last_position;
+
+ if (RB_EMPTY_ROOT(root))
+ return NULL;
+
+ /*
+	 * First, if we find a queue with a request starting right at the
+	 * end of the last request, choose it.
+ */
+ __cfqq = cfq_prio_tree_lookup(cfqd, cur_cfqq->ioprio,
+ sector, &parent, NULL);
+ if (__cfqq)
+ return __cfqq;
+
+ /*
+ * If the exact sector wasn't found, the parent of the NULL leaf
+ * will contain the closest sector.
+ */
+ __cfqq = rb_entry(parent, struct cfq_queue, p_node);
+ if (cfq_rq_close(cfqd, __cfqq->next_rq))
+ return __cfqq;
+
+ if (__cfqq->next_rq->sector < sector)
+ node = rb_next(&__cfqq->p_node);
+ else
+ node = rb_prev(&__cfqq->p_node);
+ if (!node)
+ return NULL;
+
+ __cfqq = rb_entry(node, struct cfq_queue, p_node);
+ if (cfq_rq_close(cfqd, __cfqq->next_rq))
+ return __cfqq;
+
+ return NULL;
+}
+
+/*
+ * cfqd - obvious
+ * cur_cfqq - passed in so that we don't decide that the current queue is
+ * closely cooperating with itself.
+ *
+ * So, basically we're assuming that cur_cfqq has dispatched at least
+ * one request, and that cfqd->last_position reflects a position on the disk
+ * associated with the I/O issued by cur_cfqq. I'm not sure this is a valid
+ * assumption.
+ */
+static struct cfq_queue *cfq_close_cooperator(struct cfq_data *cfqd,
+ struct cfq_queue *cur_cfqq,
+ int probe)
{
+ struct cfq_queue *cfqq;
+
+ /*
+ * A valid cfq_io_context is necessary to compare requests against
+ * the seek_mean of the current cfqq.
+ */
+ if (!cfqd->active_cic)
+ return NULL;
+
/*
* We should notice if some of the queues are cooperating, eg
* working closely on the same area of the disk. In that case,
* we can group them together and don't waste time idling.
*/
- return 0;
+ cfqq = cfqq_close(cfqd, cur_cfqq);
+ if (!cfqq)
+ return NULL;
+
+ if (cfq_cfqq_coop(cfqq))
+ return NULL;
+
+ if (!probe)
+ cfq_mark_cfqq_coop(cfqq);
+ return cfqq;
}
+
#define CIC_SEEKY(cic) ((cic)->seek_mean > (8 * 1024))
static void cfq_arm_slice_timer(struct cfq_data *cfqd)
if (!cic || !atomic_read(&cic->ioc->nr_tasks))
return;
- /*
- * See if this prio level has a good candidate
- */
- if (cfq_close_cooperator(cfqd, cfqq) &&
- (sample_valid(cic->ttime_samples) && cic->ttime_mean > 2))
- return;
-
cfq_mark_cfqq_wait_request(cfqq);
/*
sl = min(sl, msecs_to_jiffies(CFQ_MIN_TT));
mod_timer(&cfqd->idle_slice_timer, jiffies + sl);
- cfq_log(cfqd, "arm_idle: %lu", sl);
+ cfq_log_cfqq(cfqd, cfqq, "arm_idle: %lu", sl);
}
/*
*/
static struct cfq_queue *cfq_select_queue(struct cfq_data *cfqd)
{
- struct cfq_queue *cfqq;
+ struct cfq_queue *cfqq, *new_cfqq = NULL;
cfqq = cfqd->active_queue;
if (!cfqq)
goto keep_queue;
/*
+ * If another queue has a request waiting within our mean seek
+ * distance, let it run. The expire code will check for close
+ * cooperators and put the close queue at the front of the service
+ * tree.
+ */
+ new_cfqq = cfq_close_cooperator(cfqd, cfqq, 0);
+ if (new_cfqq)
+ goto expire;
+
+ /*
* No requests pending. If the active queue still has requests in
* flight or is idling for a new request, allow either of these
* conditions to happen (or time out) before selecting a new queue.
expire:
cfq_slice_expired(cfqd, 0);
new_queue:
- cfqq = cfq_set_active_queue(cfqd);
+ cfqq = cfq_set_active_queue(cfqd, new_cfqq);
keep_queue:
return cfqq;
}
if (ioc->ioc_data == cic)
rcu_assign_pointer(ioc->ioc_data, NULL);
- if (cic->cfqq[ASYNC]) {
- cfq_exit_cfqq(cfqd, cic->cfqq[ASYNC]);
- cic->cfqq[ASYNC] = NULL;
+ if (cic->cfqq[BLK_RW_ASYNC]) {
+ cfq_exit_cfqq(cfqd, cic->cfqq[BLK_RW_ASYNC]);
+ cic->cfqq[BLK_RW_ASYNC] = NULL;
}
- if (cic->cfqq[SYNC]) {
- cfq_exit_cfqq(cfqd, cic->cfqq[SYNC]);
- cic->cfqq[SYNC] = NULL;
+ if (cic->cfqq[BLK_RW_SYNC]) {
+ cfq_exit_cfqq(cfqd, cic->cfqq[BLK_RW_SYNC]);
+ cic->cfqq[BLK_RW_SYNC] = NULL;
}
}
spin_lock_irqsave(cfqd->queue->queue_lock, flags);
- cfqq = cic->cfqq[ASYNC];
+ cfqq = cic->cfqq[BLK_RW_ASYNC];
if (cfqq) {
struct cfq_queue *new_cfqq;
- new_cfqq = cfq_get_queue(cfqd, ASYNC, cic->ioc, GFP_ATOMIC);
+ new_cfqq = cfq_get_queue(cfqd, BLK_RW_ASYNC, cic->ioc,
+ GFP_ATOMIC);
if (new_cfqq) {
- cic->cfqq[ASYNC] = new_cfqq;
+ cic->cfqq[BLK_RW_ASYNC] = new_cfqq;
cfq_put_queue(cfqq);
}
}
- cfqq = cic->cfqq[SYNC];
+ cfqq = cic->cfqq[BLK_RW_SYNC];
if (cfqq)
cfq_mark_cfqq_prio_changed(cfqq);
}
RB_CLEAR_NODE(&cfqq->rb_node);
+ RB_CLEAR_NODE(&cfqq->p_node);
INIT_LIST_HEAD(&cfqq->fifo);
atomic_set(&cfqq->ref, 0);
* Remember that we saw a request from this process, but
* don't start queuing just yet. Otherwise we risk seeing lots
* of tiny requests, because we disrupt the normal plugging
- * and merging.
+ * and merging. If the request is already larger than a single
+ * page, let it rip immediately. For that case we assume that
+	 * merging is already done. Ditto for a busy system that
+	 * has other work pending; don't risk waiting for the
+	 * idle-timer unplug before continuing work.
*/
- if (cfq_cfqq_wait_request(cfqq))
+ if (cfq_cfqq_wait_request(cfqq)) {
+ if (blk_rq_bytes(rq) > PAGE_CACHE_SIZE ||
+ cfqd->busy_queues > 1) {
+ del_timer(&cfqd->idle_slice_timer);
+ blk_start_queueing(cfqd->queue);
+ }
cfq_mark_cfqq_must_dispatch(cfqq);
+ }
} else if (cfq_should_preempt(cfqd, cfqq, rq)) {
/*
* not the active queue - expire current slice if it is
* or if we want to idle in case it has no pending requests.
*/
if (cfqd->active_queue == cfqq) {
+ const bool cfqq_empty = RB_EMPTY_ROOT(&cfqq->sort_list);
+
if (cfq_cfqq_slice_new(cfqq)) {
cfq_set_prio_slice(cfqd, cfqq);
cfq_clear_cfqq_slice_new(cfqq);
}
+ /*
+ * If there are no requests waiting in this queue, and
+ * there are other queues ready to issue requests, AND
+ * those other queues are issuing requests within our
+ * mean seek distance, give them a chance to run instead
+ * of idling.
+ */
if (cfq_slice_used(cfqq) || cfq_class_idle(cfqq))
cfq_slice_expired(cfqd, 1);
- else if (sync && !rq_noidle(rq) &&
- RB_EMPTY_ROOT(&cfqq->sort_list)) {
+ else if (cfqq_empty && !cfq_close_cooperator(cfqd, cfqq, 1) &&
+ sync && !rq_noidle(rq))
cfq_arm_slice_timer(cfqd);
- }
}
if (!cfqd->rq_in_driver)
if (!cic)
return ELV_MQUEUE_MAY;
- cfqq = cic_to_cfqq(cic, rw & REQ_RW_SYNC);
+ cfqq = cic_to_cfqq(cic, rw_is_sync(rw));
if (cfqq) {
cfq_init_prio_data(cfqq, cic->ioc);
cfq_prio_boost(cfqq);
struct cfq_data *cfqd =
container_of(work, struct cfq_data, unplug_work);
struct request_queue *q = cfqd->queue;
- unsigned long flags;
- spin_lock_irqsave(q->queue_lock, flags);
+ spin_lock_irq(q->queue_lock);
blk_start_queueing(q);
- spin_unlock_irqrestore(q->queue_lock, flags);
+ spin_unlock_irq(q->queue_lock);
}
/*
/*
* Call with queue lock held, interrupts disabled
*/
-void elv_quisce_start(struct request_queue *q)
+void elv_quiesce_start(struct request_queue *q)
{
queue_flag_set(QUEUE_FLAG_ELVSWITCH, q);
}
}
-void elv_quisce_end(struct request_queue *q)
+void elv_quiesce_end(struct request_queue *q)
{
queue_flag_clear(QUEUE_FLAG_ELVSWITCH, q);
}
* Turn on BYPASS and drain all requests w/ elevator private data
*/
spin_lock_irq(q->queue_lock);
- elv_quisce_start(q);
+ elv_quiesce_start(q);
/*
* Remember old elevator.
*/
elevator_exit(old_elevator);
spin_lock_irq(q->queue_lock);
- elv_quisce_end(q);
+ elv_quiesce_end(q);
spin_unlock_irq(q->queue_lock);
blk_add_trace_msg(q, "elv switch: %s", e->elevator_type->elevator_name);
struct bio *bio;
bio = bio_alloc(GFP_KERNEL, 0);
- if (!bio)
- return -ENOMEM;
bio->bi_end_io = blk_ioc_discard_endio;
bio->bi_bdev = bdev;
static int blk_complete_sghdr_rq(struct request *rq, struct sg_io_hdr *hdr,
struct bio *bio)
{
- int ret = 0;
+ int r, ret = 0;
/*
* fill in all the output members
ret = -EFAULT;
}
- blk_rq_unmap_user(bio);
+ r = blk_rq_unmap_user(bio);
+ if (!ret)
+ ret = r;
blk_put_request(rq);
return ret;
*
* We follow the current spec and consider that 0x69/0x96
* identifies a port multiplier and 0x3c/0xc3 a SEMB device.
+	 * Unfortunately, WDC WD1600JS-62MHB5 (a hard drive) reports the
+	 * SEMB signature. This is worked around in
+ * ata_dev_read_id().
*/
if ((tf->lbam == 0) && (tf->lbah == 0)) {
DPRINTK("found ATA device by sig\n");
}
if ((tf->lbam == 0x3c) && (tf->lbah == 0xc3)) {
- printk(KERN_INFO "ata: SEMB device ignored\n");
- return ATA_DEV_SEMB_UNSUP; /* not yet */
+ DPRINTK("found SEMB device by sig (could be ATA device)\n");
+ return ATA_DEV_SEMB;
}
DPRINTK("unknown device\n");
/*
* Process compact flash extended modes
*/
- int pio = id[163] & 0x7;
- int dma = (id[163] >> 3) & 7;
+ int pio = (id[ATA_ID_CFA_MODES] >> 0) & 0x7;
+ int dma = (id[ATA_ID_CFA_MODES] >> 3) & 0x7;
if (pio)
pio_mask |= (1 << 5);
struct ata_taskfile tf;
unsigned int err_mask = 0;
const char *reason;
+ bool is_semb = class == ATA_DEV_SEMB;
int may_fallback = 1, tried_spinup = 0;
int rc;
ata_tf_init(dev, &tf);
switch (class) {
+ case ATA_DEV_SEMB:
+ class = ATA_DEV_ATA; /* some hard drives report SEMB sig */
case ATA_DEV_ATA:
tf.command = ATA_CMD_ID_ATA;
break;
return -ENOENT;
}
+ if (is_semb) {
+ ata_dev_printk(dev, KERN_INFO, "IDENTIFY failed on "
+ "device w/ SEMB sig, disabled\n");
+ /* SEMB is not supported yet */
+ *p_class = ATA_DEV_SEMB_UNSUP;
+ return 0;
+ }
+
if ((err_mask == AC_ERR_DEV) && (tf.feature & ATA_ABORTED)) {
/* Device or controller might have reported
* the wrong device class. Give a shot at the
/* ATA-specific feature tests */
if (dev->class == ATA_DEV_ATA) {
if (ata_id_is_cfa(id)) {
- if (id[162] & 1) /* CPRM may make this media unusable */
+ /* CPRM may make this media unusable */
+ if (id[ATA_ID_CFA_KEY_MGMT] & 1)
ata_dev_printk(dev, KERN_WARNING,
"supports DRM functions and may "
"not be fully accessable.\n");
return rc;
}
+static int ata_ioc32(struct ata_port *ap)
+{
+ if (ap->flags & ATA_FLAG_PIO_DMA)
+ return 1;
+ if (ap->pflags & ATA_PFLAG_PIO32)
+ return 1;
+ return 0;
+}
+
int ata_sas_scsi_ioctl(struct ata_port *ap, struct scsi_device *scsidev,
int cmd, void __user *arg)
{
int val = -EINVAL, rc = -EINVAL;
+ unsigned long flags;
switch (cmd) {
case ATA_IOC_GET_IO32:
- val = 0;
+ spin_lock_irqsave(ap->lock, flags);
+ val = ata_ioc32(ap);
+ spin_unlock_irqrestore(ap->lock, flags);
if (copy_to_user(arg, &val, 1))
return -EFAULT;
return 0;
case ATA_IOC_SET_IO32:
val = (unsigned long) arg;
- if (val != 0)
- return -EINVAL;
- return 0;
+ rc = 0;
+ spin_lock_irqsave(ap->lock, flags);
+ if (ap->pflags & ATA_PFLAG_PIO32CHANGE) {
+ if (val)
+ ap->pflags |= ATA_PFLAG_PIO32;
+ else
+ ap->pflags &= ~ATA_PFLAG_PIO32;
+ } else {
+ if (val != ata_ioc32(ap))
+ rc = -EINVAL;
+ }
+ spin_unlock_irqrestore(ap->lock, flags);
+ return rc;
case HDIO_GET_IDENTITY:
return ata_get_identity(ap, scsidev, arg);
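For context, the ioctl pair handled above is the interface hdparm's -c option
talks to (in libata.h, ATA_IOC_GET_IO32/ATA_IOC_SET_IO32 carry the
HDIO_GET_32BIT/HDIO_SET_32BIT numbers). A minimal user-space sketch, assuming
a hypothetical /dev/sda backed by a libata port; with the change above, SET
only succeeds when the port allows runtime changes (ATA_PFLAG_PIO32CHANGE) or
the requested value already matches the current state:

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/hdreg.h>

int main(void)
{
	long io32 = 0;
	int fd = open("/dev/sda", O_RDONLY);	/* hypothetical device node */

	if (fd < 0)
		return 1;
	/* query the current 32-bit PIO setting (what hdparm -c reports) */
	if (ioctl(fd, HDIO_GET_32BIT, &io32) == 0)
		printf("32-bit PIO: %ld\n", io32);
	/* try to enable 32-bit PIO; fails with EINVAL if the port can't change it */
	if (ioctl(fd, HDIO_SET_32BIT, 1) < 0)
		perror("HDIO_SET_32BIT");
	close(fd);
	return 0;
}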
.inherits = &ata_bmdma_port_ops,
.sff_data_xfer = ata_sff_data_xfer32,
+ .port_start = ata_sff_port_start32,
};
EXPORT_SYMBOL_GPL(ata_bmdma32_port_ops);
void __iomem *data_addr = ap->ioaddr.data_addr;
unsigned int words = buflen >> 2;
int slop = buflen & 3;
+
+ if (!(ap->pflags & ATA_PFLAG_PIO32))
+ return ata_sff_data_xfer(dev, buf, buflen, rw);
/* Transfer multiple of 4 bytes */
if (rw == READ)
EXPORT_SYMBOL_GPL(ata_sff_port_start);
/**
+ * ata_sff_port_start32 - Set port up for dma.
+ * @ap: Port to initialize
+ *
+ * Called just after data structures for each port are
+ * initialized. Allocates space for PRD table if the device
+ * is DMA capable SFF.
+ *
+ * May be used as the port_start() entry in ata_port_operations for
+ * devices that are capable of 32bit PIO.
+ *
+ * LOCKING:
+ * Inherited from caller.
+ */
+int ata_sff_port_start32(struct ata_port *ap)
+{
+ ap->pflags |= ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32CHANGE;
+ if (ap->ioaddr.bmdma_addr)
+ return ata_port_start(ap);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ata_sff_port_start32);
+
+/**
* ata_sff_std_ports - initialize ioaddr with standard port offsets.
* @ioaddr: IO address structure to be initialized
*
* Copyright (C) 1999-2003 Andre Hedrick <andre@linux-ide.org>
* Portions Copyright (C) 2001 Sun Microsystems, Inc.
* Portions Copyright (C) 2003 Red Hat Inc
- * Portions Copyright (C) 2005-2007 MontaVista Software, Inc.
+ * Portions Copyright (C) 2005-2009 MontaVista Software, Inc.
*
* TODO
* Look into engine reset on timeout errors. Should not be required.
#include <linux/libata.h>
#define DRV_NAME "pata_hpt37x"
-#define DRV_VERSION "0.6.11"
+#define DRV_VERSION "0.6.12"
struct hpt_clock {
u8 xfer_speed;
}
/**
- * hpt370_bmdma_start - DMA engine begin
- * @qc: ATA command
- *
- * The 370 and 370A want us to reset the DMA engine each time we
- * use it. The 372 and later are fine.
- */
-
-static void hpt370_bmdma_start(struct ata_queued_cmd *qc)
-{
- struct ata_port *ap = qc->ap;
- struct pci_dev *pdev = to_pci_dev(ap->host->dev);
- pci_write_config_byte(pdev, 0x50 + 4 * ap->port_no, 0x37);
- udelay(10);
- ata_bmdma_start(qc);
-}
-
-/**
* hpt370_bmdma_end - DMA engine stop
* @qc: ATA command
*
static struct ata_port_operations hpt370_port_ops = {
.inherits = &ata_bmdma_port_ops,
- .bmdma_start = hpt370_bmdma_start,
.bmdma_stop = hpt370_bmdma_stop,
.mode_filter = hpt370_filter,
struct ata_port_operations *ops;
unsigned int pio_mask;
unsigned int flags;
+ unsigned int pflags;
int (*setup)(struct platform_device *, struct legacy_probe *probe,
struct legacy_data *data);
};
{
int slop = buflen & 3;
/* 32bit I/O capable *and* we need to write a whole number of dwords */
- if (ata_id_has_dword_io(dev->id) && (slop == 0 || slop == 3)) {
+ if (ata_id_has_dword_io(dev->id) && (slop == 0 || slop == 3)
+ && (ap->pflags & ATA_PFLAG_PIO32)) {
struct ata_port *ap = dev->link->ap;
unsigned long flags;
struct ata_port *ap = adev->link->ap;
int slop = buflen & 3;
- if (ata_id_has_dword_io(adev->id) && (slop == 0 || slop == 3)) {
+ if (ata_id_has_dword_io(adev->id) && (slop == 0 || slop == 3)
+ && (ap->pflags & ATA_PFLAG_PIO32)) {
if (rw == WRITE)
iowrite32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
else
static struct legacy_controller controllers[] = {
{"BIOS", &legacy_port_ops, 0x1F,
- ATA_FLAG_NO_IORDY, NULL },
+ ATA_FLAG_NO_IORDY, 0, NULL },
{"Snooping", &simple_port_ops, 0x1F,
- 0 , NULL },
+ 0, 0, NULL },
{"PDC20230", &pdc20230_port_ops, 0x7,
- ATA_FLAG_NO_IORDY, NULL },
+ ATA_FLAG_NO_IORDY,
+ ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32_CHANGE, NULL },
{"HT6560A", &ht6560a_port_ops, 0x07,
- ATA_FLAG_NO_IORDY, NULL },
+ ATA_FLAG_NO_IORDY, 0, NULL },
{"HT6560B", &ht6560b_port_ops, 0x1F,
- ATA_FLAG_NO_IORDY, NULL },
+ ATA_FLAG_NO_IORDY, 0, NULL },
{"OPTI82C611A", &opti82c611a_port_ops, 0x0F,
- 0 , NULL },
+ 0, 0, NULL },
{"OPTI82C46X", &opti82c46x_port_ops, 0x0F,
- 0 , NULL },
+ 0, 0, NULL },
{"QDI6500", &qdi6500_port_ops, 0x07,
- ATA_FLAG_NO_IORDY, qdi_port },
+ ATA_FLAG_NO_IORDY,
+ ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32_CHANGE, qdi_port },
{"QDI6580", &qdi6580_port_ops, 0x1F,
- 0 , qdi_port },
+ 0, ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32_CHANGE, qdi_port },
{"QDI6580DP", &qdi6580dp_port_ops, 0x1F,
- 0 , qdi_port },
+ 0, ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32_CHANGE, qdi_port },
{"W83759A", &winbond_port_ops, 0x1F,
- 0 , winbond_port }
+ 0, ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32_CHANGE,
+ winbond_port }
};
/**
ap->ops = ops;
ap->pio_mask = pio_modes;
ap->flags |= ATA_FLAG_SLAVE_POSS | iordy;
+ ap->pflags |= controller->pflags;
ap->ioaddr.cmd_addr = io_addr;
ap->ioaddr.altstatus_addr = ctrl_addr;
ap->ioaddr.ctl_addr = ctrl_addr;
return 0;
}
}
+ ata_host_detach(host);
fail:
platform_device_unregister(pdev);
return ret;
#include <linux/libata.h>
#define DRV_NAME "pata_ninja32"
-#define DRV_VERSION "0.1.3"
+#define DRV_VERSION "0.1.5"
/**
.sff_dev_select = ninja32_dev_select,
.cable_detect = ata_cable_40wire,
.set_piomode = ninja32_set_piomode,
+ .sff_data_xfer = ata_sff_data_xfer32
};
static void ninja32_program(void __iomem *base)
ap->ioaddr.altstatus_addr = base + 0x1E;
ap->ioaddr.bmdma_addr = base;
ata_sff_std_ports(&ap->ioaddr);
+ ap->pflags = ATA_PFLAG_PIO32 | ATA_PFLAG_PIO32CHANGE;
ninja32_program(base);
/* FIXME: Should we disable them at remove ? */
if (rw == READ) {
copy_from_brd(mem + off, brd, sector, len);
flush_dcache_page(page);
- } else
+ } else {
+ flush_dcache_page(page);
copy_to_brd(brd, mem + off, sector, len);
+ }
kunmap_atomic(mem, KM_USER0);
out:
if (!brd->brd_queue)
goto out_free_dev;
blk_queue_make_request(brd->brd_queue, brd_make_request);
+ blk_queue_ordered(brd->brd_queue, QUEUE_ORDERED_TAG, NULL);
blk_queue_max_sectors(brd->brd_queue, 1024);
blk_queue_bounce_limit(brd->brd_queue, BLK_BOUNCE_ANY);
uint32_t tiling_mode;
uint32_t stride;
+ /** Record of address bit 17 of each page at last unbind. */
+ long *bit_17;
+
/** AGP mapping type (AGP_USER_MEMORY or AGP_USER_CACHED_MEMORY */
uint32_t agp_type;
void i915_gem_detach_phys_object(struct drm_device *dev,
struct drm_gem_object *obj);
void i915_gem_free_all_phys_object(struct drm_device *dev);
+int i915_gem_object_get_pages(struct drm_gem_object *obj);
+void i915_gem_object_put_pages(struct drm_gem_object *obj);
/* i915_gem_tiling.c */
void i915_gem_detect_bit_6_swizzle(struct drm_device *dev);
+void i915_gem_object_do_bit_17_swizzle(struct drm_gem_object *obj);
+void i915_gem_object_save_bit_17_swizzle(struct drm_gem_object *obj);
/* i915_gem_debug.c */
void i915_gem_dump_object(struct drm_gem_object *obj, int len,
uint64_t offset,
uint64_t size);
static void i915_gem_object_set_to_full_cpu_read_domain(struct drm_gem_object *obj);
-static int i915_gem_object_get_pages(struct drm_gem_object *obj);
-static void i915_gem_object_put_pages(struct drm_gem_object *obj);
static int i915_gem_object_wait_rendering(struct drm_gem_object *obj);
static int i915_gem_object_bind_to_gtt(struct drm_gem_object *obj,
unsigned alignment);
int length)
{
char __iomem *vaddr;
- int ret;
+ int unwritten;
vaddr = kmap_atomic(pages[page_base >> PAGE_SHIFT], KM_USER0);
if (vaddr == NULL)
return -ENOMEM;
- ret = __copy_to_user_inatomic(data, vaddr + page_offset, length);
+ unwritten = __copy_to_user_inatomic(data, vaddr + page_offset, length);
kunmap_atomic(vaddr, KM_USER0);
- return ret;
+ if (unwritten)
+ return -EFAULT;
+
+ return 0;
+}
+
+static int i915_gem_object_needs_bit17_swizzle(struct drm_gem_object *obj)
+{
+ drm_i915_private_t *dev_priv = obj->dev->dev_private;
+ struct drm_i915_gem_object *obj_priv = obj->driver_private;
+
+ return dev_priv->mm.bit_6_swizzle_x == I915_BIT_6_SWIZZLE_9_10_17 &&
+ obj_priv->tiling_mode != I915_TILING_NONE;
}
static inline int
return 0;
}
+static inline int
+slow_shmem_bit17_copy(struct page *gpu_page,
+ int gpu_offset,
+ struct page *cpu_page,
+ int cpu_offset,
+ int length,
+ int is_read)
+{
+ char *gpu_vaddr, *cpu_vaddr;
+
+ /* Use the unswizzled path if this page isn't affected. */
+ if ((page_to_phys(gpu_page) & (1 << 17)) == 0) {
+ if (is_read)
+ return slow_shmem_copy(cpu_page, cpu_offset,
+ gpu_page, gpu_offset, length);
+ else
+ return slow_shmem_copy(gpu_page, gpu_offset,
+ cpu_page, cpu_offset, length);
+ }
+
+ gpu_vaddr = kmap_atomic(gpu_page, KM_USER0);
+ if (gpu_vaddr == NULL)
+ return -ENOMEM;
+
+ cpu_vaddr = kmap_atomic(cpu_page, KM_USER1);
+ if (cpu_vaddr == NULL) {
+ kunmap_atomic(gpu_vaddr, KM_USER0);
+ return -ENOMEM;
+ }
+
+ /* Copy the data, XORing A6 with A17 (1). The user already knows he's
+ * XORing with the other bits (A9 for Y, A9 and A10 for X)
+ */
+ while (length > 0) {
+ int cacheline_end = ALIGN(gpu_offset + 1, 64);
+ int this_length = min(cacheline_end - gpu_offset, length);
+ int swizzled_gpu_offset = gpu_offset ^ 64;
+
+ if (is_read) {
+ memcpy(cpu_vaddr + cpu_offset,
+ gpu_vaddr + swizzled_gpu_offset,
+ this_length);
+ } else {
+ memcpy(gpu_vaddr + swizzled_gpu_offset,
+ cpu_vaddr + cpu_offset,
+ this_length);
+ }
+ cpu_offset += this_length;
+ gpu_offset += this_length;
+ length -= this_length;
+ }
+
+ kunmap_atomic(cpu_vaddr, KM_USER1);
+ kunmap_atomic(gpu_vaddr, KM_USER0);
+
+ return 0;
+}
+
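For intuition (an illustration, not part of the patch): the swizzle-mode names
suggest that bit 6 of a fetch address is XORed with bits 9, 10 and, on these
parts, 17 of the physical page address. When only bit 17 changes across an
unbind/rebind, each 64-byte block trades places with its partner in the same
128-byte pair, which is why the copy path simply flips bit 6 of the GPU-side
offset:

	swizzled_gpu_offset = gpu_offset ^ 64;	/* 0x00<->0x40, 0x80<->0xc0, ... */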
/**
* This is the fast shmem pread path, which attempts to copy_from_user directly
* from the backing pages of the object to the user's address space. On a
int page_length;
int ret;
uint64_t data_ptr = args->data_ptr;
+ int do_bit17_swizzling;
remain = args->size;
down_read(&mm->mmap_sem);
pinned_pages = get_user_pages(current, mm, (uintptr_t)args->data_ptr,
- num_pages, 0, 0, user_pages, NULL);
+ num_pages, 1, 0, user_pages, NULL);
up_read(&mm->mmap_sem);
if (pinned_pages < num_pages) {
ret = -EFAULT;
goto fail_put_user_pages;
}
+ do_bit17_swizzling = i915_gem_object_needs_bit17_swizzle(obj);
+
mutex_lock(&dev->struct_mutex);
ret = i915_gem_object_get_pages(obj);
if ((data_page_offset + page_length) > PAGE_SIZE)
page_length = PAGE_SIZE - data_page_offset;
- ret = slow_shmem_copy(user_pages[data_page_index],
- data_page_offset,
- obj_priv->pages[shmem_page_index],
- shmem_page_offset,
- page_length);
+ if (do_bit17_swizzling) {
+ ret = slow_shmem_bit17_copy(obj_priv->pages[shmem_page_index],
+ shmem_page_offset,
+ user_pages[data_page_index],
+ data_page_offset,
+ page_length,
+ 1);
+ } else {
+ ret = slow_shmem_copy(user_pages[data_page_index],
+ data_page_offset,
+ obj_priv->pages[shmem_page_index],
+ shmem_page_offset,
+ page_length);
+ }
if (ret)
goto fail_put_pages;
return -EINVAL;
}
- ret = i915_gem_shmem_pread_fast(dev, obj, args, file_priv);
- if (ret != 0)
+ if (i915_gem_object_needs_bit17_swizzle(obj)) {
ret = i915_gem_shmem_pread_slow(dev, obj, args, file_priv);
+ } else {
+ ret = i915_gem_shmem_pread_fast(dev, obj, args, file_priv);
+ if (ret != 0)
+ ret = i915_gem_shmem_pread_slow(dev, obj, args,
+ file_priv);
+ }
drm_gem_object_unreference(obj);
int page_length;
int ret;
uint64_t data_ptr = args->data_ptr;
+ int do_bit17_swizzling;
remain = args->size;
goto fail_put_user_pages;
}
+ do_bit17_swizzling = i915_gem_object_needs_bit17_swizzle(obj);
+
mutex_lock(&dev->struct_mutex);
ret = i915_gem_object_get_pages(obj);
if ((data_page_offset + page_length) > PAGE_SIZE)
page_length = PAGE_SIZE - data_page_offset;
- ret = slow_shmem_copy(obj_priv->pages[shmem_page_index],
- shmem_page_offset,
- user_pages[data_page_index],
- data_page_offset,
- page_length);
+ if (do_bit17_swizzling) {
+ ret = slow_shmem_bit17_copy(obj_priv->pages[shmem_page_index],
+ shmem_page_offset,
+ user_pages[data_page_index],
+ data_page_offset,
+ page_length,
+ 0);
+ } else {
+ ret = slow_shmem_copy(obj_priv->pages[shmem_page_index],
+ shmem_page_offset,
+ user_pages[data_page_index],
+ data_page_offset,
+ page_length);
+ }
if (ret)
goto fail_put_pages;
ret = i915_gem_gtt_pwrite_slow(dev, obj, args,
file_priv);
}
+ } else if (i915_gem_object_needs_bit17_swizzle(obj)) {
+ ret = i915_gem_shmem_pwrite_slow(dev, obj, args, file_priv);
} else {
ret = i915_gem_shmem_pwrite_fast(dev, obj, args, file_priv);
if (ret == -EFAULT) {
return 0;
}
-static void
+void
i915_gem_object_put_pages(struct drm_gem_object *obj)
{
struct drm_i915_gem_object *obj_priv = obj->driver_private;
if (--obj_priv->pages_refcount != 0)
return;
+ if (obj_priv->tiling_mode != I915_TILING_NONE)
+ i915_gem_object_save_bit_17_swizzle(obj);
+
for (i = 0; i < page_count; i++)
if (obj_priv->pages[i] != NULL) {
if (obj_priv->dirty)
if (obj->write_domain != 0)
i915_gem_object_move_to_flushing(obj);
- else
+ else {
+ /* Take a reference on the object so it won't be
+ * freed while the spinlock is held. The list
+ * protection for this spinlock is safe when breaking
+ * the lock like this since the next thing we do
+ * is just get the head of the list again.
+ */
+ drm_gem_object_reference(obj);
i915_gem_object_move_to_inactive(obj);
+ spin_unlock(&dev_priv->mm.active_list_lock);
+ drm_gem_object_unreference(obj);
+ spin_lock(&dev_priv->mm.active_list_lock);
+ }
}
out:
spin_unlock(&dev_priv->mm.active_list_lock);
return ret;
}
-static int
+int
i915_gem_object_get_pages(struct drm_gem_object *obj)
{
struct drm_i915_gem_object *obj_priv = obj->driver_private;
}
obj_priv->pages[i] = page;
}
+
+ if (obj_priv->tiling_mode != I915_TILING_NONE)
+ i915_gem_object_do_bit_17_swizzle(obj);
+
return 0;
}
drm_free(*relocs, reloc_count * sizeof(**relocs),
DRM_MEM_DRIVER);
*relocs = NULL;
- return ret;
+ return -EFAULT;
}
reloc_index += exec_list[i].relocation_count;
}
- return ret;
+ return 0;
}
static int
struct drm_i915_gem_relocation_entry *relocs)
{
uint32_t reloc_count = 0, i;
- int ret;
+ int ret = 0;
for (i = 0; i < buffer_count; i++) {
struct drm_i915_gem_relocation_entry __user *user_relocs;
+ int unwritten;
user_relocs = (void __user *)(uintptr_t)exec_list[i].relocs_ptr;
- if (ret == 0) {
- ret = copy_to_user(user_relocs,
- &relocs[reloc_count],
- exec_list[i].relocation_count *
- sizeof(*relocs));
+ unwritten = copy_to_user(user_relocs,
+ &relocs[reloc_count],
+ exec_list[i].relocation_count *
+ sizeof(*relocs));
+
+ if (unwritten) {
+ ret = -EFAULT;
+ goto err;
}
reloc_count += exec_list[i].relocation_count;
}
+err:
drm_free(relocs, reloc_count * sizeof(*relocs), DRM_MEM_DRIVER);
return ret;
exec_offset = exec_list[args->buffer_count - 1].offset;
#if WATCH_EXEC
- i915_gem_dump_object(object_list[args->buffer_count - 1],
+ i915_gem_dump_object(batch_obj,
args->batch_len,
__func__,
~0);
(uintptr_t) args->buffers_ptr,
exec_list,
sizeof(*exec_list) * args->buffer_count);
- if (ret)
+ if (ret) {
+ ret = -EFAULT;
DRM_ERROR("failed to copy %d exec entries "
"back to user (%d)\n",
args->buffer_count, ret);
+ }
}
/* Copy the updated relocations out regardless of current error
i915_gem_free_mmap_offset(obj);
drm_free(obj_priv->page_cpu_valid, 1, DRM_MEM_DRIVER);
+ kfree(obj_priv->bit_17);
drm_free(obj->driver_private, 1, DRM_MEM_DRIVER);
}
return 0;
}
+static void i915_dump_pages(struct seq_file *m, struct page **pages, int page_count)
+{
+ int page, i;
+ uint32_t *mem;
+
+ for (page = 0; page < page_count; page++) {
+ mem = kmap(pages[page]);
+ for (i = 0; i < PAGE_SIZE; i += 4)
+ seq_printf(m, "%08x : %08x\n", i, mem[i / 4]);
+ kunmap(pages[page]);
+ }
+}
+
+static int i915_batchbuffer_info(struct seq_file *m, void *data)
+{
+ struct drm_info_node *node = (struct drm_info_node *) m->private;
+ struct drm_device *dev = node->minor->dev;
+ drm_i915_private_t *dev_priv = dev->dev_private;
+ struct drm_gem_object *obj;
+ struct drm_i915_gem_object *obj_priv;
+ int ret;
+
+ spin_lock(&dev_priv->mm.active_list_lock);
+
+ list_for_each_entry(obj_priv, &dev_priv->mm.active_list, list) {
+ obj = obj_priv->obj;
+ if (obj->read_domains & I915_GEM_DOMAIN_COMMAND) {
+ ret = i915_gem_object_get_pages(obj);
+ if (ret) {
+ DRM_ERROR("Failed to get pages: %d\n", ret);
+ spin_unlock(&dev_priv->mm.active_list_lock);
+ return ret;
+ }
+
+ seq_printf(m, "--- gtt_offset = 0x%08x\n", obj_priv->gtt_offset);
+ i915_dump_pages(m, obj_priv->pages, obj->size / PAGE_SIZE);
+
+ i915_gem_object_put_pages(obj);
+ }
+ }
+
+ spin_unlock(&dev_priv->mm.active_list_lock);
+
+ return 0;
+}
+
+static int i915_ringbuffer_data(struct seq_file *m, void *data)
+{
+ struct drm_info_node *node = (struct drm_info_node *) m->private;
+ struct drm_device *dev = node->minor->dev;
+ drm_i915_private_t *dev_priv = dev->dev_private;
+ u8 *virt;
+ uint32_t *ptr, off;
+
+ if (!dev_priv->ring.ring_obj) {
+ seq_printf(m, "No ringbuffer setup\n");
+ return 0;
+ }
+
+ virt = dev_priv->ring.virtual_start;
+
+ for (off = 0; off < dev_priv->ring.Size; off += 4) {
+ ptr = (uint32_t *)(virt + off);
+ seq_printf(m, "%08x : %08x\n", off, *ptr);
+ }
+
+ return 0;
+}
+
+static int i915_ringbuffer_info(struct seq_file *m, void *data)
+{
+ struct drm_info_node *node = (struct drm_info_node *) m->private;
+ struct drm_device *dev = node->minor->dev;
+ drm_i915_private_t *dev_priv = dev->dev_private;
+ unsigned int head, tail, mask;
+
+ head = I915_READ(PRB0_HEAD) & HEAD_ADDR;
+ tail = I915_READ(PRB0_TAIL) & TAIL_ADDR;
+ mask = dev_priv->ring.tail_mask;
+
+ seq_printf(m, "RingHead : %08x\n", head);
+ seq_printf(m, "RingTail : %08x\n", tail);
+ seq_printf(m, "RingMask : %08x\n", mask);
+ seq_printf(m, "RingSize : %08lx\n", dev_priv->ring.Size);
+ seq_printf(m, "Acthd : %08x\n", I915_READ(IS_I965G(dev) ? ACTHD_I965 : ACTHD));
+
+ return 0;
+}
+
+
static struct drm_info_list i915_gem_debugfs_list[] = {
{"i915_gem_active", i915_gem_object_list_info, 0, (void *) ACTIVE_LIST},
{"i915_gem_flushing", i915_gem_object_list_info, 0, (void *) FLUSHING_LIST},
{"i915_gem_fence_regs", i915_gem_fence_regs_info, 0},
{"i915_gem_interrupt", i915_interrupt_info, 0},
{"i915_gem_hws", i915_hws_info, 0},
+ {"i915_ringbuffer_data", i915_ringbuffer_data, 0},
+ {"i915_ringbuffer_info", i915_ringbuffer_info, 0},
+ {"i915_batchbuffers", i915_batchbuffer_info, 0},
};
#define I915_GEM_DEBUGFS_ENTRIES ARRAY_SIZE(i915_gem_debugfs_list)
*
*/
+#include "linux/string.h"
+#include "linux/bitops.h"
#include "drmP.h"
#include "drm.h"
#include "i915_drm.h"
swizzle_y = I915_BIT_6_SWIZZLE_9_11;
} else {
/* Bit 17 swizzling by the CPU in addition. */
- swizzle_x = I915_BIT_6_SWIZZLE_UNKNOWN;
- swizzle_y = I915_BIT_6_SWIZZLE_UNKNOWN;
+ swizzle_x = I915_BIT_6_SWIZZLE_9_10_17;
+ swizzle_y = I915_BIT_6_SWIZZLE_9_17;
}
break;
}
args->swizzle_mode = dev_priv->mm.bit_6_swizzle_x;
else
args->swizzle_mode = dev_priv->mm.bit_6_swizzle_y;
+
+ /* Hide bit 17 swizzling from the user. This prevents old Mesa
+ * from aborting the application on sw fallbacks to bit 17,
+ * and we use the pread/pwrite bit17 paths to swizzle for it.
+ * If there was a user that was relying on the swizzle
+ * information for drm_intel_bo_map()ed reads/writes this would
+ * break it, but we don't have any of those.
+ */
+ if (args->swizzle_mode == I915_BIT_6_SWIZZLE_9_17)
+ args->swizzle_mode = I915_BIT_6_SWIZZLE_9;
+ if (args->swizzle_mode == I915_BIT_6_SWIZZLE_9_10_17)
+ args->swizzle_mode = I915_BIT_6_SWIZZLE_9_10;
+
/* If we can't handle the swizzling, make it untiled. */
if (args->swizzle_mode == I915_BIT_6_SWIZZLE_UNKNOWN) {
args->tiling_mode = I915_TILING_NONE;
DRM_ERROR("unknown tiling mode\n");
}
+ /* Hide bit 17 from the user -- see comment in i915_gem_set_tiling */
+ if (args->swizzle_mode == I915_BIT_6_SWIZZLE_9_17)
+ args->swizzle_mode = I915_BIT_6_SWIZZLE_9;
+ if (args->swizzle_mode == I915_BIT_6_SWIZZLE_9_10_17)
+ args->swizzle_mode = I915_BIT_6_SWIZZLE_9_10;
+
drm_gem_object_unreference(obj);
mutex_unlock(&dev->struct_mutex);
return 0;
}
+
+/**
+ * Swap every 64 bytes of this page around, to account for it having a new
+ * bit 17 of its physical address and therefore being interpreted differently
+ * by the GPU.
+ */
+static int
+i915_gem_swizzle_page(struct page *page)
+{
+ char *vaddr;
+ int i;
+ char temp[64];
+
+ vaddr = kmap(page);
+ if (vaddr == NULL)
+ return -ENOMEM;
+
+ for (i = 0; i < PAGE_SIZE; i += 128) {
+ memcpy(temp, &vaddr[i], 64);
+ memcpy(&vaddr[i], &vaddr[i + 64], 64);
+ memcpy(&vaddr[i + 64], temp, 64);
+ }
+
+ kunmap(page);
+
+ return 0;
+}
+
+void
+i915_gem_object_do_bit_17_swizzle(struct drm_gem_object *obj)
+{
+ struct drm_device *dev = obj->dev;
+ drm_i915_private_t *dev_priv = dev->dev_private;
+ struct drm_i915_gem_object *obj_priv = obj->driver_private;
+ int page_count = obj->size >> PAGE_SHIFT;
+ int i;
+
+ if (dev_priv->mm.bit_6_swizzle_x != I915_BIT_6_SWIZZLE_9_10_17)
+ return;
+
+ if (obj_priv->bit_17 == NULL)
+ return;
+
+ for (i = 0; i < page_count; i++) {
+ char new_bit_17 = page_to_phys(obj_priv->pages[i]) >> 17;
+ if ((new_bit_17 & 0x1) !=
+ (test_bit(i, obj_priv->bit_17) != 0)) {
+ int ret = i915_gem_swizzle_page(obj_priv->pages[i]);
+ if (ret != 0) {
+ DRM_ERROR("Failed to swizzle page\n");
+ return;
+ }
+ set_page_dirty(obj_priv->pages[i]);
+ }
+ }
+}
+
+void
+i915_gem_object_save_bit_17_swizzle(struct drm_gem_object *obj)
+{
+ struct drm_device *dev = obj->dev;
+ drm_i915_private_t *dev_priv = dev->dev_private;
+ struct drm_i915_gem_object *obj_priv = obj->driver_private;
+ int page_count = obj->size >> PAGE_SHIFT;
+ int i;
+
+ if (dev_priv->mm.bit_6_swizzle_x != I915_BIT_6_SWIZZLE_9_10_17)
+ return;
+
+ if (obj_priv->bit_17 == NULL) {
+ obj_priv->bit_17 = kmalloc(BITS_TO_LONGS(page_count) *
+ sizeof(long), GFP_KERNEL);
+ if (obj_priv->bit_17 == NULL) {
+ DRM_ERROR("Failed to allocate memory for bit 17 "
+ "record\n");
+ return;
+ }
+ }
+
+ for (i = 0; i < page_count; i++) {
+ if (page_to_phys(obj_priv->pages[i]) & (1 << 17))
+ __set_bit(i, obj_priv->bit_17);
+ else
+ __clear_bit(i, obj_priv->bit_17);
+ }
+}
.p1 = { .min = I9XX_P1_MIN, .max = I9XX_P1_MAX },
.p2 = { .dot_limit = I9XX_P2_SDVO_DAC_SLOW_LIMIT,
.p2_slow = I9XX_P2_SDVO_DAC_SLOW, .p2_fast = I9XX_P2_SDVO_DAC_FAST },
+ .find_pll = intel_find_best_PLL,
},
{ /* INTEL_LIMIT_IGD_LVDS */
.dot = { .min = I9XX_DOT_MIN, .max = I9XX_DOT_MAX },
/* IGD only supports single-channel mode. */
.p2 = { .dot_limit = I9XX_P2_LVDS_SLOW_LIMIT,
.p2_slow = I9XX_P2_LVDS_SLOW, .p2_fast = I9XX_P2_LVDS_SLOW },
+ .find_pll = intel_find_best_PLL,
},
};
static struct sysrq_key_op sysrq_intelfb_restore_op = {
.handler = intelfb_sysrq,
- .help_msg = "force fb",
- .action_msg = "force restore of fb console",
+ .help_msg = "force-fb(G)",
+ .action_msg = "Restore framebuffer console",
};
int intelfb_probe(struct drm_device *dev)
struct intel_hdmi_priv {
u32 sdvox_reg;
u32 save_SDVOX;
- int has_hdmi_sink;
+ bool has_hdmi_sink;
};
static void intel_hdmi_mode_set(struct drm_encoder *encoder,
return true;
}
+static void
+intel_hdmi_sink_detect(struct drm_connector *connector)
+{
+ struct intel_output *intel_output = to_intel_output(connector);
+ struct intel_hdmi_priv *hdmi_priv = intel_output->dev_priv;
+ struct edid *edid = NULL;
+
+ edid = drm_get_edid(&intel_output->base,
+ &intel_output->ddc_bus->adapter);
+ if (edid != NULL) {
+ hdmi_priv->has_hdmi_sink = drm_detect_hdmi_monitor(edid);
+ kfree(edid);
+ intel_output->base.display_info.raw_edid = NULL;
+ }
+}
+
static enum drm_connector_status
intel_hdmi_detect(struct drm_connector *connector)
{
return connector_status_unknown;
}
- if ((I915_READ(PORT_HOTPLUG_STAT) & bit) != 0)
+ if ((I915_READ(PORT_HOTPLUG_STAT) & bit) != 0) {
+ intel_hdmi_sink_detect(connector);
return connector_status_connected;
- else
+ } else
return connector_status_disconnected;
}
intel_sdvo_read_response(intel_output, &response, 2);
}
+static void
+intel_sdvo_hdmi_sink_detect(struct drm_connector *connector)
+{
+ struct intel_output *intel_output = to_intel_output(connector);
+ struct intel_sdvo_priv *sdvo_priv = intel_output->dev_priv;
+ struct edid *edid = NULL;
+
+ intel_sdvo_set_control_bus_switch(intel_output, sdvo_priv->ddc_bus);
+ edid = drm_get_edid(&intel_output->base,
+ &intel_output->ddc_bus->adapter);
+ if (edid != NULL) {
+ sdvo_priv->is_hdmi = drm_detect_hdmi_monitor(edid);
+ kfree(edid);
+ intel_output->base.display_info.raw_edid = NULL;
+ }
+}
+
static enum drm_connector_status intel_sdvo_detect(struct drm_connector *connector)
{
u8 response[2];
if (status != SDVO_CMD_STATUS_SUCCESS)
return connector_status_unknown;
- if ((response[0] != 0) || (response[1] != 0))
+ if ((response[0] != 0) || (response[1] != 0)) {
+ intel_sdvo_hdmi_sink_detect(connector);
return connector_status_connected;
- else
+ } else
return connector_status_disconnected;
}
+++ /dev/null
-/*
- * Copyright (C) 2004 Red Hat UK Ltd.
- *
- * This file is released under the GPL.
- */
-
-#ifndef DM_BIO_LIST_H
-#define DM_BIO_LIST_H
-
-#include <linux/bio.h>
-
-#ifdef CONFIG_BLOCK
-
-struct bio_list {
- struct bio *head;
- struct bio *tail;
-};
-
-static inline int bio_list_empty(const struct bio_list *bl)
-{
- return bl->head == NULL;
-}
-
-static inline void bio_list_init(struct bio_list *bl)
-{
- bl->head = bl->tail = NULL;
-}
-
-#define bio_list_for_each(bio, bl) \
- for (bio = (bl)->head; bio; bio = bio->bi_next)
-
-static inline unsigned bio_list_size(const struct bio_list *bl)
-{
- unsigned sz = 0;
- struct bio *bio;
-
- bio_list_for_each(bio, bl)
- sz++;
-
- return sz;
-}
-
-static inline void bio_list_add(struct bio_list *bl, struct bio *bio)
-{
- bio->bi_next = NULL;
-
- if (bl->tail)
- bl->tail->bi_next = bio;
- else
- bl->head = bio;
-
- bl->tail = bio;
-}
-
-static inline void bio_list_add_head(struct bio_list *bl, struct bio *bio)
-{
- bio->bi_next = bl->head;
-
- bl->head = bio;
-
- if (!bl->tail)
- bl->tail = bio;
-}
-
-static inline void bio_list_merge(struct bio_list *bl, struct bio_list *bl2)
-{
- if (!bl2->head)
- return;
-
- if (bl->tail)
- bl->tail->bi_next = bl2->head;
- else
- bl->head = bl2->head;
-
- bl->tail = bl2->tail;
-}
-
-static inline void bio_list_merge_head(struct bio_list *bl,
- struct bio_list *bl2)
-{
- if (!bl2->head)
- return;
-
- if (bl->head)
- bl2->tail->bi_next = bl->head;
- else
- bl->tail = bl2->tail;
-
- bl->head = bl2->head;
-}
-
-static inline struct bio *bio_list_pop(struct bio_list *bl)
-{
- struct bio *bio = bl->head;
-
- if (bio) {
- bl->head = bl->head->bi_next;
- if (!bl->head)
- bl->tail = NULL;
-
- bio->bi_next = NULL;
- }
-
- return bio;
-}
-
-static inline struct bio *bio_list_get(struct bio_list *bl)
-{
- struct bio *bio = bl->head;
-
- bl->head = bl->tail = NULL;
-
- return bio;
-}
-
-#endif /* CONFIG_BLOCK */
-#endif
#include <linux/device-mapper.h>
-#include "dm-bio-list.h"
-
#define DM_MSG_PREFIX "delay"
struct delay_c {
#include <linux/device-mapper.h>
#include "dm-path-selector.h"
-#include "dm-bio-list.h"
#include "dm-bio-record.h"
#include "dm-uevent.h"
* This file is released under the GPL.
*/
-#include "dm-bio-list.h"
#include "dm-bio-record.h"
#include <linux/init.h>
#include <linux/vmalloc.h>
#include "dm.h"
-#include "dm-bio-list.h"
#define DM_MSG_PREFIX "region hash"
#include <linux/workqueue.h>
#include "dm-exception-store.h"
-#include "dm-bio-list.h"
#define DM_MSG_PREFIX "snapshots"
*/
#include "dm.h"
-#include "dm-bio-list.h"
#include "dm-uevent.h"
#include <linux/init.h>
#include <linux/blkdev.h>
#include <linux/seq_file.h>
#include "md.h"
-#include "dm-bio-list.h"
#include "raid1.h"
#include "bitmap.h"
#include <linux/blkdev.h>
#include <linux/seq_file.h>
#include "md.h"
-#include "dm-bio-list.h"
#include "raid10.h"
#include "bitmap.h"
.remove = __devexit_p(a2065_remove_one),
};
+static const struct net_device_ops lance_netdev_ops = {
+ .ndo_open = lance_open,
+ .ndo_stop = lance_close,
+ .ndo_start_xmit = lance_start_xmit,
+ .ndo_tx_timeout = lance_tx_timeout,
+ .ndo_set_multicast_list = lance_set_multicast,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+};
+
static int __devinit a2065_init_one(struct zorro_dev *z,
const struct zorro_device_id *ent)
{
priv->rx_ring_mod_mask = RX_RING_MOD_MASK;
priv->tx_ring_mod_mask = TX_RING_MOD_MASK;
- dev->open = &lance_open;
- dev->stop = &lance_close;
- dev->hard_start_xmit = &lance_start_xmit;
- dev->tx_timeout = &lance_tx_timeout;
+ dev->netdev_ops = &lance_netdev_ops;
dev->watchdog_timeo = 5*HZ;
- dev->set_multicast_list = &lance_set_multicast;
dev->dma = 0;
init_timer(&priv->multicast_timer);
.remove = __devexit_p(ariadne_remove_one),
};
+static const struct net_device_ops ariadne_netdev_ops = {
+ .ndo_open = ariadne_open,
+ .ndo_stop = ariadne_close,
+ .ndo_start_xmit = ariadne_start_xmit,
+ .ndo_tx_timeout = ariadne_tx_timeout,
+ .ndo_get_stats = ariadne_get_stats,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+};
+
static int __devinit ariadne_init_one(struct zorro_dev *z,
const struct zorro_device_id *ent)
{
dev->mem_start = ZTWO_VADDR(mem_start);
dev->mem_end = dev->mem_start+ARIADNE_RAM_SIZE;
- dev->open = &ariadne_open;
- dev->stop = &ariadne_close;
- dev->hard_start_xmit = &ariadne_start_xmit;
- dev->tx_timeout = &ariadne_tx_timeout;
+ dev->netdev_ops = &ariadne_netdev_ops;
dev->watchdog_timeo = 5*HZ;
- dev->get_stats = &ariadne_get_stats;
- dev->set_multicast_list = &set_multicast_list;
err = register_netdev(dev);
if (err) {
if (net_debug && version_printed++ == 0)
printk(KERN_INFO "%s", version);
}
+static const struct net_device_ops am79c961_netdev_ops = {
+ .ndo_open = am79c961_open,
+ .ndo_stop = am79c961_close,
+ .ndo_start_xmit = am79c961_sendpacket,
+ .ndo_get_stats = am79c961_getstats,
+ .ndo_set_multicast_list = am79c961_setmulticastlist,
+ .ndo_tx_timeout = am79c961_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = am79c961_poll_controller,
+#endif
+};
static int __init am79c961_probe(struct platform_device *pdev)
{
if (am79c961_hw_init(dev))
goto release;
- dev->open = am79c961_open;
- dev->stop = am79c961_close;
- dev->hard_start_xmit = am79c961_sendpacket;
- dev->get_stats = am79c961_getstats;
- dev->set_multicast_list = am79c961_setmulticastlist;
- dev->tx_timeout = am79c961_timeout;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = am79c961_poll_controller;
-#endif
+ dev->netdev_ops = &am79c961_netdev_ops;
ret = register_netdev(dev);
if (ret == 0) {
/*
* Enable/Disable promiscuous and multicast modes.
*/
-static void at91ether_set_rx_mode(struct net_device *dev)
+static void at91ether_set_multicast_list(struct net_device *dev)
{
unsigned long cfg;
/*
* Transmit packet.
*/
-static int at91ether_tx(struct sk_buff *skb, struct net_device *dev)
+static int at91ether_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
struct at91_private *lp = netdev_priv(dev);
dev->trans_start = jiffies;
} else {
- printk(KERN_ERR "at91_ether.c: at91ether_tx() called, but device is busy!\n");
+ printk(KERN_ERR "at91_ether.c: at91ether_start_xmit() called, but device is busy!\n");
return 1; /* if we return anything but zero, dev.c:1055 calls kfree_skb(skb)
on this skb, he also reports -ENETDOWN and printk's, so either
we free and return(0) or don't free and return 1 */
}
#endif
+static const struct net_device_ops at91ether_netdev_ops = {
+ .ndo_open = at91ether_open,
+ .ndo_stop = at91ether_close,
+ .ndo_start_xmit = at91ether_start_xmit,
+ .ndo_get_stats = at91ether_stats,
+ .ndo_set_multicast_list = at91ether_set_multicast_list,
+ .ndo_set_mac_address = set_mac_address,
+ .ndo_do_ioctl = at91ether_ioctl,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = at91ether_poll_controller,
+#endif
+};
+
/*
* Initialize the ethernet interface
*/
spin_lock_init(&lp->lock);
ether_setup(dev);
- dev->open = at91ether_open;
- dev->stop = at91ether_close;
- dev->hard_start_xmit = at91ether_tx;
- dev->get_stats = at91ether_stats;
- dev->set_multicast_list = at91ether_set_rx_mode;
- dev->set_mac_address = set_mac_address;
+ dev->netdev_ops = &at91ether_netdev_ops;
dev->ethtool_ops = &at91ether_ethtool_ops;
- dev->do_ioctl = at91ether_ioctl;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = at91ether_poll_controller;
-#endif
SET_NETDEV_DEV(dev, &pdev->dev);
.get_link = ep93xx_get_link,
};
-struct net_device *ep93xx_dev_alloc(struct ep93xx_eth_data *data)
+static const struct net_device_ops ep93xx_netdev_ops = {
+ .ndo_open = ep93xx_open,
+ .ndo_stop = ep93xx_close,
+ .ndo_start_xmit = ep93xx_xmit,
+ .ndo_get_stats = ep93xx_get_stats,
+ .ndo_do_ioctl = ep93xx_ioctl,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+};
+
+static struct net_device *ep93xx_dev_alloc(struct ep93xx_eth_data *data)
{
struct net_device *dev;
memcpy(dev->dev_addr, data->dev_addr, ETH_ALEN);
- dev->get_stats = ep93xx_get_stats;
dev->ethtool_ops = &ep93xx_ethtool_ops;
- dev->hard_start_xmit = ep93xx_xmit;
- dev->open = ep93xx_open;
- dev->stop = ep93xx_close;
- dev->do_ioctl = ep93xx_ioctl;
+ dev->netdev_ops = &ep93xx_netdev_ops;
dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM;
printk(KERN_INFO "%s", version);
}
+static const struct net_device_ops ether1_netdev_ops = {
+ .ndo_open = ether1_open,
+ .ndo_stop = ether1_close,
+ .ndo_start_xmit = ether1_sendpacket,
+ .ndo_get_stats = ether1_getstats,
+ .ndo_set_multicast_list = ether1_setmulticastlist,
+ .ndo_tx_timeout = ether1_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+};
+
static int __devinit
ether1_probe(struct expansion_card *ec, const struct ecard_id *id)
{
goto free;
}
- dev->open = ether1_open;
- dev->stop = ether1_close;
- dev->hard_start_xmit = ether1_sendpacket;
- dev->get_stats = ether1_getstats;
- dev->set_multicast_list = ether1_setmulticastlist;
- dev->tx_timeout = ether1_timeout;
+ dev->netdev_ops = &ether1_netdev_ops;
dev->watchdog_timeo = 5 * HZ / 100;
ret = register_netdev(dev);
printk(KERN_INFO "%s", version);
}
+static const struct net_device_ops ether3_netdev_ops = {
+ .ndo_open = ether3_open,
+ .ndo_stop = ether3_close,
+ .ndo_start_xmit = ether3_sendpacket,
+ .ndo_get_stats = ether3_getstats,
+ .ndo_set_multicast_list = ether3_setmulticastlist,
+ .ndo_tx_timeout = ether3_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+};
+
static int __devinit
ether3_probe(struct expansion_card *ec, const struct ecard_id *id)
{
goto free;
}
- dev->open = ether3_open;
- dev->stop = ether3_close;
- dev->hard_start_xmit = ether3_sendpacket;
- dev->get_stats = ether3_getstats;
- dev->set_multicast_list = ether3_setmulticastlist;
- dev->tx_timeout = ether3_timeout;
+ dev->netdev_ops = &ether3_netdev_ops;
dev->watchdog_timeo = 5 * HZ / 100;
ret = register_netdev(dev);
return( ret );
}
+static const struct net_device_ops lance_netdev_ops = {
+ .ndo_open = lance_open,
+ .ndo_stop = lance_close,
+ .ndo_start_xmit = lance_start_xmit,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_set_mac_address = lance_set_mac_address,
+ .ndo_tx_timeout = lance_tx_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+};
static unsigned long __init lance_probe1( struct net_device *dev,
struct lance_addr *init_rec )
if (did_version++ == 0)
DPRINTK( 1, ( version ));
- /* The LANCE-specific entries in the device structure. */
- dev->open = &lance_open;
- dev->hard_start_xmit = &lance_start_xmit;
- dev->stop = &lance_close;
- dev->set_multicast_list = &set_multicast_list;
- dev->set_mac_address = &lance_set_mac_address;
+ dev->netdev_ops = &lance_netdev_ops;
/* XXX MSch */
- dev->tx_timeout = lance_tx_timeout;
dev->watchdog_timeo = TX_TIMEOUT;
return( 1 );
netif_wake_queue(dev);
}
-static void set_rx_mode(struct net_device *dev)
+static void au1000_multicast_list(struct net_device *dev)
{
struct au1000_private *aup = netdev_priv(dev);
if (au1000_debug > 4)
- printk("%s: set_rx_mode: flags=%x\n", dev->name, dev->flags);
+ printk("%s: au1000_multicast_list: flags=%x\n", dev->name, dev->flags);
if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
aup->mac->control |= MAC_PROMISCUOUS;
return phy_mii_ioctl(aup->phy_dev, if_mii(rq), cmd);
}
+static const struct net_device_ops au1000_netdev_ops = {
+ .ndo_open = au1000_open,
+ .ndo_stop = au1000_close,
+ .ndo_start_xmit = au1000_tx,
+ .ndo_set_multicast_list = au1000_multicast_list,
+ .ndo_do_ioctl = au1000_ioctl,
+ .ndo_tx_timeout = au1000_tx_timeout,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
static struct net_device * au1000_probe(int port_num)
{
static unsigned version_printed = 0;
dev->base_addr = base;
dev->irq = irq;
- dev->open = au1000_open;
- dev->hard_start_xmit = au1000_tx;
- dev->stop = au1000_close;
- dev->set_multicast_list = &set_rx_mode;
- dev->do_ioctl = &au1000_ioctl;
+ dev->netdev_ops = &au1000_netdev_ops;
SET_ETHTOOL_OPS(dev, &au1000_ethtool_ops);
- dev->tx_timeout = au1000_tx_timeout;
dev->watchdog_timeo = ETH_TX_TIMEOUT;
/*
be_cmd_get_flow_control(&adapter->ctrl, &ecmd->tx_pause,
&ecmd->rx_pause);
- ecmd->autoneg = AUTONEG_ENABLE;
+ ecmd->autoneg = 0;
}
static int
struct be_adapter *adapter = netdev_priv(netdev);
int status;
- if (ecmd->autoneg != AUTONEG_ENABLE)
+ if (ecmd->autoneg != 0)
return -EINVAL;
status = be_cmd_set_flow_control(&adapter->ctrl, ecmd->tx_pause,
return 0;
}
+static const struct net_device_ops bfin_mac_netdev_ops = {
+ .ndo_open = bfin_mac_open,
+ .ndo_stop = bfin_mac_close,
+ .ndo_start_xmit = bfin_mac_hard_start_xmit,
+ .ndo_set_mac_address = bfin_mac_set_mac_address,
+ .ndo_tx_timeout = bfin_mac_timeout,
+ .ndo_set_multicast_list = bfin_mac_set_multicast_list,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = bfin_mac_poll,
+#endif
+};
+
/*
*
* this makes the board clean up everything that it can
/* Fill in the fields of the device structure with ethernet values. */
ether_setup(ndev);
- ndev->open = bfin_mac_open;
- ndev->stop = bfin_mac_close;
- ndev->hard_start_xmit = bfin_mac_hard_start_xmit;
- ndev->set_mac_address = bfin_mac_set_mac_address;
- ndev->tx_timeout = bfin_mac_timeout;
- ndev->set_multicast_list = bfin_mac_set_multicast_list;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- ndev->poll_controller = bfin_mac_poll;
-#endif
+ ndev->netdev_ops = &bfin_mac_netdev_ops;
ndev->ethtool_ops = &bfin_mac_ethtool_ops;
spin_lock_init(&lp->lock);
if (arp->op_code == htons(ARPOP_REPLY)) {
/* update rx hash table for this ARP */
- printk("rar: update orig %s bond_dev %s\n", orig_dev->name,
- bond_dev->name);
bond = netdev_priv(bond_dev);
rlb_update_entry_from_arp(bond, arp);
pr_debug("Server received an ARP Reply from client\n");
for (i = 0; (i < BOND_MAX_ARP_TARGETS); i++) {
if (!targets[i])
- continue;
+ break;
pr_debug("basa: target %x\n", targets[i]);
if (list_empty(&bond->vlan_list)) {
pr_debug("basa: empty vlan: arp_send\n");
int i;
__be32 *targets = bond->params.arp_targets;
- targets = bond->params.arp_targets;
for (i = 0; (i < BOND_MAX_ARP_TARGETS) && targets[i]; i++) {
pr_debug("bva: sip %pI4 tip %pI4 t[%d] %pI4 bhti(tip) %d\n",
&sip, &tip, i, &targets[i], bond_has_this_ip(bond, tip));
for(i = 0; (i < BOND_MAX_ARP_TARGETS) ;i++) {
if (!bond->params.arp_targets[i])
- continue;
+ break;
if (printed)
seq_printf(seq, ",");
seq_printf(seq, " %pI4", &bond->params.arp_targets[i]);
goto out;
}
/* look for an empty slot to put the target in, and check for dupes */
- for (i = 0; (i < BOND_MAX_ARP_TARGETS); i++) {
+ for (i = 0; (i < BOND_MAX_ARP_TARGETS) && !done; i++) {
if (targets[i] == newtarget) { /* duplicate */
printk(KERN_ERR DRV_NAME
": %s: ARP target %pI4 is already present\n",
bond->dev->name, &newtarget);
- if (done)
- targets[i] = 0;
ret = -EINVAL;
goto out;
}
- if (targets[i] == 0 && !done) {
+ if (targets[i] == 0) {
printk(KERN_INFO DRV_NAME
": %s: adding ARP target %pI4.\n",
bond->dev->name, &newtarget);
goto out;
}
- for (i = 0; (i < BOND_MAX_ARP_TARGETS); i++) {
+ for (i = 0; (i < BOND_MAX_ARP_TARGETS) && !done; i++) {
if (targets[i] == newtarget) {
+ int j;
printk(KERN_INFO DRV_NAME
": %s: removing ARP target %pI4.\n",
bond->dev->name, &newtarget);
- targets[i] = 0;
+ for (j = i; (j < (BOND_MAX_ARP_TARGETS-1)) && targets[j+1]; j++)
+ targets[j] = targets[j+1];
+
+ targets[j] = 0;
done = 1;
}
}
struct transceiver_ops* transceiver = &transceivers[0];
+static const struct net_device_ops e100_netdev_ops = {
+ .ndo_open = e100_open,
+ .ndo_stop = e100_close,
+ .ndo_start_xmit = e100_send_packet,
+ .ndo_tx_timeout = e100_tx_timeout,
+ .ndo_get_stats = e100_get_stats,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_do_ioctl = e100_ioctl,
+ .ndo_set_mac_address = e100_set_mac_address,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_config = e100_set_config,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = e100_netpoll,
+#endif
+};
+
#define tx_done(dev) (*R_DMA_CH0_CMD == 0)
/*
/* fill in our handlers so the network layer can talk to us in the future */
- dev->open = e100_open;
- dev->hard_start_xmit = e100_send_packet;
- dev->stop = e100_close;
- dev->get_stats = e100_get_stats;
- dev->set_multicast_list = set_multicast_list;
- dev->set_mac_address = e100_set_mac_address;
dev->ethtool_ops = &e100_ethtool_ops;
- dev->do_ioctl = e100_ioctl;
- dev->set_config = e100_set_config;
- dev->tx_timeout = e100_tx_timeout;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = e100_netpoll;
-#endif
+ dev->netdev_ops = &e100_netdev_ops;
spin_lock_init(&np->lock);
spin_lock_init(&np->led_lock);
lance_set_multicast(dev);
}
+static const struct net_device_ops lance_netdev_ops = {
+ .ndo_open = lance_open,
+ .ndo_stop = lance_close,
+ .ndo_start_xmit = lance_start_xmit,
+ .ndo_tx_timeout = lance_tx_timeout,
+ .ndo_set_multicast_list = lance_set_multicast,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = eth_mac_addr,
+};
+
static int __init dec_lance_probe(struct device *bdev, const int type)
{
static unsigned version_printed;
printk(", addr = %pM, irq = %d\n", dev->dev_addr, dev->irq);
- dev->open = &lance_open;
- dev->stop = &lance_close;
- dev->hard_start_xmit = &lance_start_xmit;
- dev->tx_timeout = &lance_tx_timeout;
+ dev->netdev_ops = &lance_netdev_ops;
dev->watchdog_timeo = 5*HZ;
- dev->set_multicast_list = &lance_set_multicast;
/* lp->ll is the location of the registers for lance card */
lp->ll = ll;
static void e1000_vlan_rx_kill_vid(struct net_device *netdev, u16 vid);
static void e1000_restore_vlan(struct e1000_adapter *adapter);
-static int e1000_suspend(struct pci_dev *pdev, pm_message_t state);
#ifdef CONFIG_PM
+static int e1000_suspend(struct pci_dev *pdev, pm_message_t state);
static int e1000_resume(struct pci_dev *pdev);
#endif
static void e1000_shutdown(struct pci_dev *pdev);
struct e1000_buffer *buffer_info;
unsigned int i, eop;
unsigned int count = 0;
- bool cleaned;
+ bool cleaned = false;
unsigned int total_tx_bytes=0, total_tx_packets=0;
i = tx_ring->next_to_clean;
return 0;
}
-static int e1000_suspend(struct pci_dev *pdev, pm_message_t state)
+static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct e1000_adapter *adapter = netdev_priv(netdev);
ew32(WUC, E1000_WUC_PME_EN);
ew32(WUFC, wufc);
- pci_enable_wake(pdev, PCI_D3hot, 1);
- pci_enable_wake(pdev, PCI_D3cold, 1);
} else {
ew32(WUC, 0);
ew32(WUFC, 0);
- pci_enable_wake(pdev, PCI_D3hot, 0);
- pci_enable_wake(pdev, PCI_D3cold, 0);
}
e1000_release_manageability(adapter);
+ *enable_wake = !!wufc;
+
/* make sure adapter isn't asleep if manageability is enabled */
- if (adapter->en_mng_pt) {
- pci_enable_wake(pdev, PCI_D3hot, 1);
- pci_enable_wake(pdev, PCI_D3cold, 1);
- }
+ if (adapter->en_mng_pt)
+ *enable_wake = true;
if (hw->phy_type == e1000_phy_igp_3)
e1000_phy_powerdown_workaround(hw);
pci_disable_device(pdev);
- pci_set_power_state(pdev, pci_choose_state(pdev, state));
-
return 0;
}
#ifdef CONFIG_PM
+static int e1000_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ int retval;
+ bool wake;
+
+ retval = __e1000_shutdown(pdev, &wake);
+ if (retval)
+ return retval;
+
+ if (wake) {
+ pci_prepare_to_sleep(pdev);
+ } else {
+ pci_wake_from_d3(pdev, false);
+ pci_set_power_state(pdev, PCI_D3hot);
+ }
+
+ return 0;
+}
+
static int e1000_resume(struct pci_dev *pdev)
{
struct net_device *netdev = pci_get_drvdata(pdev);
static void e1000_shutdown(struct pci_dev *pdev)
{
- e1000_suspend(pdev, PMSG_SUSPEND);
+ bool wake;
+
+ __e1000_shutdown(pdev, &wake);
+
+ if (system_state == SYSTEM_POWER_OFF) {
+ pci_wake_from_d3(pdev, wake);
+ pci_set_power_state(pdev, PCI_D3hot);
+ }
}
#ifdef CONFIG_NET_POLL_CONTROLLER
struct e1000_buffer *buffer_info;
unsigned int i, eop;
unsigned int count = 0;
- bool cleaned;
+ bool cleaned = false;
unsigned int total_tx_bytes = 0, total_tx_packets = 0;
i = tx_ring->next_to_clean;
}
}
-static int e1000_suspend(struct pci_dev *pdev, pm_message_t state)
+static int __e1000_shutdown(struct pci_dev *pdev, bool *enable_wake)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct e1000_adapter *adapter = netdev_priv(netdev);
ew32(WUC, E1000_WUC_PME_EN);
ew32(WUFC, wufc);
- pci_enable_wake(pdev, PCI_D3hot, 1);
- pci_enable_wake(pdev, PCI_D3cold, 1);
} else {
ew32(WUC, 0);
ew32(WUFC, 0);
- pci_enable_wake(pdev, PCI_D3hot, 0);
- pci_enable_wake(pdev, PCI_D3cold, 0);
}
+ *enable_wake = !!wufc;
+
/* make sure adapter isn't asleep if manageability is enabled */
- if (adapter->flags & FLAG_MNG_PT_ENABLED) {
- pci_enable_wake(pdev, PCI_D3hot, 1);
- pci_enable_wake(pdev, PCI_D3cold, 1);
- }
+ if (adapter->flags & FLAG_MNG_PT_ENABLED)
+ *enable_wake = true;
if (adapter->hw.phy.type == e1000_phy_igp_3)
e1000e_igp3_phy_powerdown_workaround_ich8lan(&adapter->hw);
pci_disable_device(pdev);
+ return 0;
+}
+
+static void e1000_power_off(struct pci_dev *pdev, bool sleep, bool wake)
+{
+ if (sleep && wake) {
+ pci_prepare_to_sleep(pdev);
+ return;
+ }
+
+ pci_wake_from_d3(pdev, wake);
+ pci_set_power_state(pdev, PCI_D3hot);
+}
+
+static void e1000_complete_shutdown(struct pci_dev *pdev, bool sleep,
+ bool wake)
+{
+ struct net_device *netdev = pci_get_drvdata(pdev);
+ struct e1000_adapter *adapter = netdev_priv(netdev);
+
/*
* The pci-e switch on some quad port adapters will report a
* correctable error when the MAC transitions from D0 to D3. To
pci_write_config_word(us_dev, pos + PCI_EXP_DEVCTL,
(devctl & ~PCI_EXP_DEVCTL_CERE));
- pci_set_power_state(pdev, pci_choose_state(pdev, state));
+ e1000_power_off(pdev, sleep, wake);
pci_write_config_word(us_dev, pos + PCI_EXP_DEVCTL, devctl);
} else {
- pci_set_power_state(pdev, pci_choose_state(pdev, state));
+ e1000_power_off(pdev, sleep, wake);
}
-
- return 0;
}
static void e1000e_disable_l1aspm(struct pci_dev *pdev)
}
#ifdef CONFIG_PM
+static int e1000_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ int retval;
+ bool wake;
+
+ retval = __e1000_shutdown(pdev, &wake);
+ if (!retval)
+ e1000_complete_shutdown(pdev, true, wake);
+
+ return retval;
+}
+
static int e1000_resume(struct pci_dev *pdev)
{
struct net_device *netdev = pci_get_drvdata(pdev);
static void e1000_shutdown(struct pci_dev *pdev)
{
- e1000_suspend(pdev, PMSG_SUSPEND);
+ bool wake = false;
+
+ __e1000_shutdown(pdev, &wake);
+
+ if (system_state == SYSTEM_POWER_OFF)
+ e1000_complete_shutdown(pdev, false, wake);
}
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_change_mtu = ehea_change_mtu,
.ndo_vlan_rx_register = ehea_vlan_rx_register,
.ndo_vlan_rx_add_vid = ehea_vlan_rx_add_vid,
- .ndo_vlan_rx_kill_vid = ehea_vlan_rx_kill_vid
+ .ndo_vlan_rx_kill_vid = ehea_vlan_rx_kill_vid,
+ .ndo_tx_timeout = ehea_tx_watchdog,
};
struct ehea_port *ehea_setup_single_port(struct ehea_adapter *adapter,
| NETIF_F_HIGHDMA | NETIF_F_IP_CSUM | NETIF_F_HW_VLAN_TX
| NETIF_F_HW_VLAN_RX | NETIF_F_HW_VLAN_FILTER
| NETIF_F_LLTX;
- dev->tx_timeout = &ehea_tx_watchdog;
dev->watchdog_timeo = EHEA_WATCH_DOG_TIMEOUT;
INIT_WORK(&port->reset_task, ehea_reset_port);
mod_timer(&np->nic_poll, jiffies + POLL_WAIT);
}
spin_unlock_irqrestore(&np->lock, flags);
- __napi_complete(napi);
+ napi_complete(napi);
return rx_work;
}
if (rx_work < budget) {
/* re-enable interrupts
(msix not enabled in napi) */
- __napi_complete(napi);
+ napi_complete(napi);
writel(np->irqmask, base + NvRegIrqMask);
}
#define IS_FEC(match) 0
#endif
+static const struct net_device_ops fs_enet_netdev_ops = {
+ .ndo_open = fs_enet_open,
+ .ndo_stop = fs_enet_close,
+ .ndo_get_stats = fs_enet_get_stats,
+ .ndo_start_xmit = fs_enet_start_xmit,
+ .ndo_tx_timeout = fs_timeout,
+ .ndo_set_multicast_list = fs_set_multicast_list,
+ .ndo_do_ioctl = fs_ioctl,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_change_mtu = eth_change_mtu,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = fs_enet_netpoll,
+#endif
+};
+
static int __devinit fs_enet_probe(struct of_device *ofdev,
const struct of_device_id *match)
{
fep->tx_ring = fpi->tx_ring;
fep->rx_ring = fpi->rx_ring;
- ndev->open = fs_enet_open;
- ndev->hard_start_xmit = fs_enet_start_xmit;
- ndev->tx_timeout = fs_timeout;
+ ndev->netdev_ops = &fs_enet_netdev_ops;
ndev->watchdog_timeo = 2 * HZ;
- ndev->stop = fs_enet_close;
- ndev->get_stats = fs_enet_get_stats;
- ndev->set_multicast_list = fs_set_multicast_list;
-#ifdef CONFIG_NET_POLL_CONTROLLER
- ndev->poll_controller = fs_enet_netpoll;
-#endif
if (fpi->use_napi)
netif_napi_add(ndev, &fep->napi, fs_enet_rx_napi,
fpi->napi_weight);
ndev->ethtool_ops = &fs_ethtool_ops;
- ndev->do_ioctl = fs_ioctl;
init_timer(&fep->phy_timer_list);
struct net_device *dev = priv->ndev;
if (dev->flags & IFF_UP) {
+ netif_stop_queue(dev);
stop_gfar(dev);
startup_gfar(dev);
+ netif_start_queue(dev);
}
netif_tx_schedule_all(dev);
return 0;
}
+static const struct net_device_ops emac_netdev_ops = {
+ .ndo_open = emac_open,
+ .ndo_stop = emac_close,
+ .ndo_get_stats = emac_stats,
+ .ndo_set_multicast_list = emac_set_multicast_list,
+ .ndo_do_ioctl = emac_ioctl,
+ .ndo_tx_timeout = emac_tx_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_start_xmit = emac_start_xmit,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
+static const struct net_device_ops emac_gige_netdev_ops = {
+ .ndo_open = emac_open,
+ .ndo_stop = emac_close,
+ .ndo_get_stats = emac_stats,
+ .ndo_set_multicast_list = emac_set_multicast_list,
+ .ndo_do_ioctl = emac_ioctl,
+ .ndo_tx_timeout = emac_tx_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_start_xmit = emac_start_xmit_sg,
+ .ndo_change_mtu = emac_change_mtu,
+};
+
static int __devinit emac_probe(struct of_device *ofdev,
const struct of_device_id *match)
{
if (err != 0)
goto err_detach_tah;
- /* Fill in the driver function table */
- ndev->open = &emac_open;
if (dev->tah_dev)
ndev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
- ndev->tx_timeout = &emac_tx_timeout;
ndev->watchdog_timeo = 5 * HZ;
- ndev->stop = &emac_close;
- ndev->get_stats = &emac_stats;
- ndev->set_multicast_list = &emac_set_multicast_list;
- ndev->do_ioctl = &emac_ioctl;
if (emac_phy_supports_gige(dev->phy_mode)) {
- ndev->hard_start_xmit = &emac_start_xmit_sg;
- ndev->change_mtu = &emac_change_mtu;
+ ndev->netdev_ops = &emac_gige_netdev_ops;
dev->commac.ops = &emac_commac_sg_ops;
- } else {
- ndev->hard_start_xmit = &emac_start_xmit;
- }
+ } else
+ ndev->netdev_ops = &emac_netdev_ops;
SET_ETHTOOL_OPS(ndev, &emac_ethtool_ops);
netif_carrier_off(ndev);
* Writes value at the given offset in the register array which stores
* the VLAN filter table.
**/
-void igb_write_vfta(struct e1000_hw *hw, u32 offset, u32 value)
+static void igb_write_vfta(struct e1000_hw *hw, u32 offset, u32 value)
{
array_wr32(E1000_VFTA, offset, value);
wrfl();
s32 igb_check_alt_mac_addr(struct e1000_hw *hw);
void igb_reset_adaptive(struct e1000_hw *hw);
void igb_update_adaptive(struct e1000_hw *hw);
-void igb_write_vfta(struct e1000_hw *hw, u32 offset, u32 value);
bool igb_enable_mng_pass_thru(struct e1000_hw *hw);
* returns SUCCESS if it successfully received a message notification and
* copied it into the receive buffer.
**/
-s32 igb_read_posted_mbx(struct e1000_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+static s32 igb_read_posted_mbx(struct e1000_hw *hw, u32 *msg, u16 size, u16 mbx_id)
{
struct e1000_mbx_info *mbx = &hw->mbx;
s32 ret_val = -E1000_ERR_MBX;
* returns SUCCESS if it successfully copied message into the buffer and
* received an ack to that message within delay * timeout period
**/
-s32 igb_write_posted_mbx(struct e1000_hw *hw, u32 *msg, u16 size, u16 mbx_id)
+static s32 igb_write_posted_mbx(struct e1000_hw *hw, u32 *msg, u16 size, u16 mbx_id)
{
struct e1000_mbx_info *mbx = &hw->mbx;
s32 ret_val = 0;
return ret_val;
}
-/**
- * e1000_init_mbx_ops_generic - Initialize NVM function pointers
- * @hw: pointer to the HW structure
- *
- * Setups up the function pointers to no-op functions
- **/
-void e1000_init_mbx_ops_generic(struct e1000_hw *hw)
-{
- struct e1000_mbx_info *mbx = &hw->mbx;
- mbx->ops.read_posted = igb_read_posted_mbx;
- mbx->ops.write_posted = igb_write_posted_mbx;
-}
-
static s32 igb_check_for_bit_pf(struct e1000_hw *hw, u32 mask)
{
u32 mbvficr = rd32(E1000_MBVFICR);
s32 igb_read_mbx(struct e1000_hw *, u32 *, u16, u16);
s32 igb_write_mbx(struct e1000_hw *, u32 *, u16, u16);
-s32 igb_read_posted_mbx(struct e1000_hw *, u32 *, u16, u16);
-s32 igb_write_posted_mbx(struct e1000_hw *, u32 *, u16, u16);
s32 igb_check_for_msg(struct e1000_hw *, u16);
s32 igb_check_for_ack(struct e1000_hw *, u16);
s32 igb_check_for_rst(struct e1000_hw *, u16);
int i;
unsigned char mac_addr[ETH_ALEN];
- if (num_vfs)
+ if (num_vfs) {
adapter->vf_data = kcalloc(num_vfs,
sizeof(struct vf_data_storage),
GFP_KERNEL);
- if (!adapter->vf_data) {
- dev_err(&pdev->dev, "Could not allocate VF private "
- "data - IOV enable failed\n");
- } else {
- err = pci_enable_sriov(pdev, num_vfs);
- if (!err) {
- adapter->vfs_allocated_count = num_vfs;
- dev_info(&pdev->dev, "%d vfs allocated\n", num_vfs);
- for (i = 0; i < adapter->vfs_allocated_count; i++) {
- random_ether_addr(mac_addr);
- igb_set_vf_mac(adapter, i, mac_addr);
- }
+ if (!adapter->vf_data) {
+ dev_err(&pdev->dev,
+ "Could not allocate VF private data - "
+ "IOV enable failed\n");
} else {
- kfree(adapter->vf_data);
- adapter->vf_data = NULL;
+ err = pci_enable_sriov(pdev, num_vfs);
+ if (!err) {
+ adapter->vfs_allocated_count = num_vfs;
+ dev_info(&pdev->dev,
+ "%d vfs allocated\n",
+ num_vfs);
+ for (i = 0;
+ i < adapter->vfs_allocated_count;
+ i++) {
+ random_ether_addr(mac_addr);
+ igb_set_vf_mac(adapter, i,
+ mac_addr);
+ }
+ } else {
+ kfree(adapter->vf_data);
+ adapter->vf_data = NULL;
+ }
}
}
}
extern int igbvf_up(struct igbvf_adapter *);
extern void igbvf_down(struct igbvf_adapter *);
extern void igbvf_reinit_locked(struct igbvf_adapter *);
-extern void igbvf_reset(struct igbvf_adapter *);
extern int igbvf_setup_rx_resources(struct igbvf_adapter *, struct igbvf_ring *);
extern int igbvf_setup_tx_resources(struct igbvf_adapter *, struct igbvf_ring *);
extern void igbvf_free_rx_resources(struct igbvf_ring *);
extern void igbvf_free_tx_resources(struct igbvf_ring *);
extern void igbvf_update_stats(struct igbvf_adapter *);
-extern void igbvf_set_interrupt_capability(struct igbvf_adapter *);
-extern void igbvf_reset_interrupt_capability(struct igbvf_adapter *);
extern unsigned int copybreak;
static const char igbvf_copyright[] = "Copyright (c) 2009 Intel Corporation.";
static int igbvf_poll(struct napi_struct *napi, int budget);
+static void igbvf_reset(struct igbvf_adapter *);
+static void igbvf_set_interrupt_capability(struct igbvf_adapter *);
+static void igbvf_reset_interrupt_capability(struct igbvf_adapter *);
static struct igbvf_info igbvf_vf_info = {
.mac = e1000_vfadapt,
e1e_flush();
}
-void igbvf_reset_interrupt_capability(struct igbvf_adapter *adapter)
+static void igbvf_reset_interrupt_capability(struct igbvf_adapter *adapter)
{
if (adapter->msix_entries) {
pci_disable_msix(adapter->pdev);
* Attempt to configure interrupts using the best available
* capabilities of the hardware and kernel.
**/
-void igbvf_set_interrupt_capability(struct igbvf_adapter *adapter)
+static void igbvf_set_interrupt_capability(struct igbvf_adapter *adapter)
{
int err = -ENOMEM;
int i;
* set/changed during runtime. After reset the device needs to be
* properly configured for Rx, Tx etc.
*/
-void igbvf_reset(struct igbvf_adapter *adapter)
+static void igbvf_reset(struct igbvf_adapter *adapter)
{
struct e1000_mac_info *mac = &adapter->hw.mac;
struct net_device *netdev = adapter->netdev;
* e1000_init_mac_params_vf - Inits MAC params
* @hw: pointer to the HW structure
**/
-s32 e1000_init_mac_params_vf(struct e1000_hw *hw)
+static s32 e1000_init_mac_params_vf(struct e1000_hw *hw)
{
struct e1000_mac_info *mac = &hw->mac;
/* These functions must be implemented by drivers */
void e1000_rlpml_set_vf(struct e1000_hw *, u16);
void e1000_init_function_pointers_vf(struct e1000_hw *hw);
-s32 e1000_init_mac_params_vf(struct e1000_hw *hw);
#endif /* _E1000_VF_H_ */
}
#endif
+static const struct net_device_ops ioc3_netdev_ops = {
+ .ndo_open = ioc3_open,
+ .ndo_stop = ioc3_close,
+ .ndo_start_xmit = ioc3_start_xmit,
+ .ndo_tx_timeout = ioc3_timeout,
+ .ndo_get_stats = ioc3_get_stats,
+ .ndo_set_multicast_list = ioc3_set_multicast_list,
+ .ndo_do_ioctl = ioc3_ioctl,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = ioc3_set_mac_address,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
static int __devinit ioc3_probe(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
ioc3_get_eaddr(ip);
/* The IOC3-specific entries in the device structure. */
- dev->open = ioc3_open;
- dev->hard_start_xmit = ioc3_start_xmit;
- dev->tx_timeout = ioc3_timeout;
dev->watchdog_timeo = 5 * HZ;
- dev->stop = ioc3_close;
- dev->get_stats = ioc3_get_stats;
- dev->do_ioctl = ioc3_ioctl;
- dev->set_multicast_list = ioc3_set_multicast_list;
- dev->set_mac_address = ioc3_set_mac_address;
+ dev->netdev_ops = &ioc3_netdev_ops;
dev->ethtool_ops = &ioc3_ethtool_ops;
dev->features = NETIF_F_IP_CSUM;
}
#endif
+static const struct net_device_ops netcard_netdev_ops = {
+ .ndo_open = net_open,
+ .ndo_stop = net_close,
+ .ndo_start_xmit = net_send_packet,
+ .ndo_get_stats = net_get_stats,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_tx_timeout = net_tx_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
/*
* This is the real probe routine. Linux has a history of friendly device
* probes on the ISA bus. A good device probes avoids doing writes, and
np = netdev_priv(dev);
spin_lock_init(&np->lock);
- dev->open = net_open;
- dev->stop = net_close;
- dev->hard_start_xmit = net_send_packet;
- dev->get_stats = net_get_stats;
- dev->set_multicast_list = &set_multicast_list;
-
- dev->tx_timeout = &net_tx_timeout;
+ dev->netdev_ops = &netcard_netdev_ops;
dev->watchdog_timeo = MY_TX_TIMEOUT;
err = register_netdev(dev);
}
/**
- * ixgbe_blink_led_start_82598 - Blink LED based on index.
- * @hw: pointer to hardware structure
- * @index: led number to blink
- **/
-static s32 ixgbe_blink_led_start_82598(struct ixgbe_hw *hw, u32 index)
-{
- ixgbe_link_speed speed = 0;
- bool link_up = 0;
- u32 autoc_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC);
- u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
-
- /*
- * Link must be up to auto-blink the LEDs on the 82598EB MAC;
- * force it if link is down.
- */
- hw->mac.ops.check_link(hw, &speed, &link_up, false);
-
- if (!link_up) {
- autoc_reg |= IXGBE_AUTOC_FLU;
- IXGBE_WRITE_REG(hw, IXGBE_AUTOC, autoc_reg);
- msleep(10);
- }
-
- led_reg &= ~IXGBE_LED_MODE_MASK(index);
- led_reg |= IXGBE_LED_BLINK(index);
- IXGBE_WRITE_REG(hw, IXGBE_LEDCTL, led_reg);
- IXGBE_WRITE_FLUSH(hw);
-
- return 0;
-}
-
-/**
- * ixgbe_blink_led_stop_82598 - Stop blinking LED based on index.
- * @hw: pointer to hardware structure
- * @index: led number to stop blinking
- **/
-static s32 ixgbe_blink_led_stop_82598(struct ixgbe_hw *hw, u32 index)
-{
- u32 autoc_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC);
- u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
-
- autoc_reg &= ~IXGBE_AUTOC_FLU;
- autoc_reg |= IXGBE_AUTOC_AN_RESTART;
- IXGBE_WRITE_REG(hw, IXGBE_AUTOC, autoc_reg);
-
- led_reg &= ~IXGBE_LED_MODE_MASK(index);
- led_reg &= ~IXGBE_LED_BLINK(index);
- led_reg |= IXGBE_LED_LINK_ACTIVE << IXGBE_LED_MODE_SHIFT(index);
- IXGBE_WRITE_REG(hw, IXGBE_LEDCTL, led_reg);
- IXGBE_WRITE_FLUSH(hw);
-
- return 0;
-}
-
-/**
* ixgbe_read_analog_reg8_82598 - Reads 8 bit Atlas analog register
* @hw: pointer to hardware structure
* @reg: analog register to read
.get_link_capabilities = &ixgbe_get_link_capabilities_82598,
.led_on = &ixgbe_led_on_generic,
.led_off = &ixgbe_led_off_generic,
- .blink_led_start = &ixgbe_blink_led_start_82598,
- .blink_led_stop = &ixgbe_blink_led_stop_82598,
+ .blink_led_start = &ixgbe_blink_led_start_generic,
+ .blink_led_stop = &ixgbe_blink_led_stop_generic,
.set_rar = &ixgbe_set_rar_generic,
.clear_rar = &ixgbe_clear_rar_generic,
.set_vmdq = &ixgbe_set_vmdq_82598,
s32 ixgbe_set_vfta_82599(struct ixgbe_hw *hw, u32 vlan,
u32 vind, bool vlan_on);
s32 ixgbe_clear_vfta_82599(struct ixgbe_hw *hw);
-s32 ixgbe_blink_led_stop_82599(struct ixgbe_hw *hw, u32 index);
-s32 ixgbe_blink_led_start_82599(struct ixgbe_hw *hw, u32 index);
s32 ixgbe_init_uta_tables_82599(struct ixgbe_hw *hw);
s32 ixgbe_read_analog_reg8_82599(struct ixgbe_hw *hw, u32 reg, u8 *val);
s32 ixgbe_write_analog_reg8_82599(struct ixgbe_hw *hw, u32 reg, u8 val);
}
/**
- * ixgbe_blink_led_start_82599 - Blink LED based on index.
- * @hw: pointer to hardware structure
- * @index: led number to blink
- **/
-s32 ixgbe_blink_led_start_82599(struct ixgbe_hw *hw, u32 index)
-{
- u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
-
- led_reg &= ~IXGBE_LED_MODE_MASK(index);
- led_reg |= IXGBE_LED_BLINK(index);
- IXGBE_WRITE_REG(hw, IXGBE_LEDCTL, led_reg);
- IXGBE_WRITE_FLUSH(hw);
-
- return 0;
-}
-
-/**
- * ixgbe_blink_led_stop_82599 - Stop blinking LED based on index.
- * @hw: pointer to hardware structure
- * @index: led number to stop blinking
- **/
-s32 ixgbe_blink_led_stop_82599(struct ixgbe_hw *hw, u32 index)
-{
- u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
-
- led_reg &= ~IXGBE_LED_MODE_MASK(index);
- led_reg &= ~IXGBE_LED_BLINK(index);
- IXGBE_WRITE_REG(hw, IXGBE_LEDCTL, led_reg);
- IXGBE_WRITE_FLUSH(hw);
-
- return 0;
-}
-
-/**
* ixgbe_init_uta_tables_82599 - Initialize the Unicast Table Array
* @hw: pointer to hardware structure
**/
.get_link_capabilities = &ixgbe_get_link_capabilities_82599,
.led_on = &ixgbe_led_on_generic,
.led_off = &ixgbe_led_off_generic,
- .blink_led_start = &ixgbe_blink_led_start_82599,
- .blink_led_stop = &ixgbe_blink_led_stop_82599,
+ .blink_led_start = &ixgbe_blink_led_start_generic,
+ .blink_led_stop = &ixgbe_blink_led_stop_generic,
.set_rar = &ixgbe_set_rar_generic,
.clear_rar = &ixgbe_clear_rar_generic,
.set_vmdq = &ixgbe_set_vmdq_82599,
return 0;
}
+
+/**
+ * ixgbe_blink_led_start_generic - Blink LED based on index.
+ * @hw: pointer to hardware structure
+ * @index: led number to blink
+ **/
+s32 ixgbe_blink_led_start_generic(struct ixgbe_hw *hw, u32 index)
+{
+ ixgbe_link_speed speed = 0;
+ bool link_up = 0;
+ u32 autoc_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC);
+ u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
+
+ /*
+ * Link must be up to auto-blink the LEDs;
+ * Force it if link is down.
+ */
+ hw->mac.ops.check_link(hw, &speed, &link_up, false);
+
+ if (!link_up) {
+ autoc_reg |= IXGBE_AUTOC_FLU;
+ IXGBE_WRITE_REG(hw, IXGBE_AUTOC, autoc_reg);
+ msleep(10);
+ }
+
+ led_reg &= ~IXGBE_LED_MODE_MASK(index);
+ led_reg |= IXGBE_LED_BLINK(index);
+ IXGBE_WRITE_REG(hw, IXGBE_LEDCTL, led_reg);
+ IXGBE_WRITE_FLUSH(hw);
+
+ return 0;
+}
+
+/**
+ * ixgbe_blink_led_stop_generic - Stop blinking LED based on index.
+ * @hw: pointer to hardware structure
+ * @index: led number to stop blinking
+ **/
+s32 ixgbe_blink_led_stop_generic(struct ixgbe_hw *hw, u32 index)
+{
+ u32 autoc_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC);
+ u32 led_reg = IXGBE_READ_REG(hw, IXGBE_LEDCTL);
+
+ autoc_reg &= ~IXGBE_AUTOC_FLU;
+ autoc_reg |= IXGBE_AUTOC_AN_RESTART;
+ IXGBE_WRITE_REG(hw, IXGBE_AUTOC, autoc_reg);
+
+ led_reg &= ~IXGBE_LED_MODE_MASK(index);
+ led_reg &= ~IXGBE_LED_BLINK(index);
+ led_reg |= IXGBE_LED_LINK_ACTIVE << IXGBE_LED_MODE_SHIFT(index);
+ IXGBE_WRITE_REG(hw, IXGBE_LEDCTL, led_reg);
+ IXGBE_WRITE_FLUSH(hw);
+
+ return 0;
+}
s32 ixgbe_read_analog_reg8_generic(struct ixgbe_hw *hw, u32 reg, u8 *val);
s32 ixgbe_write_analog_reg8_generic(struct ixgbe_hw *hw, u32 reg, u8 val);
+s32 ixgbe_blink_led_start_generic(struct ixgbe_hw *hw, u32 index);
+s32 ixgbe_blink_led_stop_generic(struct ixgbe_hw *hw, u32 index);
+
#define IXGBE_WRITE_REG(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
#ifndef writeq
}
+static int ixgbe_wol_exclusion(struct ixgbe_adapter *adapter,
+ struct ethtool_wolinfo *wol)
+{
+ struct ixgbe_hw *hw = &adapter->hw;
+ int retval = 1;
+
+ switch (hw->device_id) {
+ case IXGBE_DEV_ID_82599_KX4:
+ retval = 0;
+ break;
+ default:
+ wol->supported = 0;
+ retval = 0;
+ }
+
+ return retval;
+}
+
static void ixgbe_get_wol(struct net_device *netdev,
struct ethtool_wolinfo *wol)
{
WAKE_BCAST | WAKE_MAGIC;
wol->wolopts = 0;
- if (!device_can_wakeup(&adapter->pdev->dev))
+ if (ixgbe_wol_exclusion(adapter, wol) ||
+ !device_can_wakeup(&adapter->pdev->dev))
return;
if (adapter->wol & IXGBE_WUFC_EX)
if (wol->wolopts & (WAKE_PHY | WAKE_ARP | WAKE_MAGICSECURE))
return -EOPNOTSUPP;
+ if (ixgbe_wol_exclusion(adapter, wol))
+ return wol->wolopts ? -EOPNOTSUPP : 0;
+
adapter->wol = 0;
if (wol->wolopts & WAKE_UCAST)
**/
static void ixgbe_set_num_queues(struct ixgbe_adapter *adapter)
{
- /* Start with base case */
- adapter->num_rx_queues = 1;
- adapter->num_tx_queues = 1;
-
#ifdef CONFIG_IXGBE_DCB
if (ixgbe_set_dcb_queues(adapter))
- return;
+ goto done;
#endif
if (ixgbe_set_rss_queues(adapter))
- return;
+ goto done;
+
+ /* fallback to base case */
+ adapter->num_rx_queues = 1;
+ adapter->num_tx_queues = 1;
+
+done:
+ /* Notify the stack of the (possibly) reduced Tx Queue count. */
+ adapter->netdev->real_num_tx_queues = adapter->num_tx_queues;
}
static void ixgbe_acquire_msix_vectors(struct ixgbe_adapter *adapter,
}
out:
- /* Notify the stack of the (possibly) reduced Tx Queue count. */
- adapter->netdev->real_num_tx_queues = adapter->num_tx_queues;
-
return err;
}
return 0;
}
-
#endif /* CONFIG_PM */
-static int ixgbe_suspend(struct pci_dev *pdev, pm_message_t state)
+
+static int __ixgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
{
struct net_device *netdev = pci_get_drvdata(pdev);
struct ixgbe_adapter *adapter = netdev_priv(netdev);
pci_enable_wake(pdev, PCI_D3cold, 0);
}
+ *enable_wake = !!wufc;
+
ixgbe_release_hw_control(adapter);
pci_disable_device(pdev);
- pci_set_power_state(pdev, pci_choose_state(pdev, state));
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int ixgbe_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ int retval;
+ bool wake;
+
+ retval = __ixgbe_shutdown(pdev, &wake);
+ if (retval)
+ return retval;
+
+ if (wake) {
+ pci_prepare_to_sleep(pdev);
+ } else {
+ pci_wake_from_d3(pdev, false);
+ pci_set_power_state(pdev, PCI_D3hot);
+ }
return 0;
}
+#endif /* CONFIG_PM */
static void ixgbe_shutdown(struct pci_dev *pdev)
{
- ixgbe_suspend(pdev, PMSG_SUSPEND);
+ bool wake;
+
+ __ixgbe_shutdown(pdev, &wake);
+
+ if (system_state == SYSTEM_POWER_OFF) {
+ pci_wake_from_d3(pdev, wake);
+ pci_set_power_state(pdev, PCI_D3hot);
+ }
}
/**
int count = 0;
unsigned int f;
- r_idx = (adapter->num_tx_queues - 1) & skb->queue_mapping;
+ r_idx = skb->queue_mapping;
tx_ring = &adapter->tx_ring[r_idx];
if (adapter->vlgrp && vlan_tx_tag_present(skb)) {
nubus_writew(swab16(value), dev->mem_start + portno);
}
+static const struct net_device_ops mac89x0_netdev_ops = {
+ .ndo_open = net_open,
+ .ndo_stop = net_close,
+ .ndo_start_xmit = net_send_packet,
+ .ndo_get_stats = net_get_stats,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_set_mac_address = set_mac_address,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
/* Probe for the CS8900 card in slot E. We won't bother looking
anywhere else until we have a really good reason to do so. */
struct net_device * __init mac89x0_probe(int unit)
printk(" IRQ %d ADDR %pM\n", dev->irq, dev->dev_addr);
- dev->open = net_open;
- dev->stop = net_close;
- dev->hard_start_xmit = net_send_packet;
- dev->get_stats = net_get_stats;
- dev->set_multicast_list = &set_multicast_list;
- dev->set_mac_address = &set_mac_address;
+ dev->netdev_ops = &mac89x0_netdev_ops;
err = register_netdev(dev);
if (err)
return phy_mii_ioctl(phydev, if_mii(rq), cmd);
}
+static const struct net_device_ops macb_netdev_ops = {
+ .ndo_open = macb_open,
+ .ndo_stop = macb_close,
+ .ndo_start_xmit = macb_start_xmit,
+ .ndo_set_multicast_list = macb_set_rx_mode,
+ .ndo_get_stats = macb_get_stats,
+ .ndo_do_ioctl = macb_ioctl,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+};
+
static int __init macb_probe(struct platform_device *pdev)
{
struct eth_platform_data *pdata;
goto err_out_iounmap;
}
- dev->open = macb_open;
- dev->stop = macb_close;
- dev->hard_start_xmit = macb_start_xmit;
- dev->get_stats = macb_get_stats;
- dev->set_multicast_list = macb_set_rx_mode;
- dev->do_ioctl = macb_ioctl;
+ dev->netdev_ops = &macb_netdev_ops;
netif_napi_add(dev, &bp->napi, macb_poll, 64);
dev->ethtool_ops = &macb_ethtool_ops;
return err;
}
+static const struct net_device_ops macsonic_netdev_ops = {
+ .ndo_open = macsonic_open,
+ .ndo_stop = macsonic_close,
+ .ndo_start_xmit = sonic_send_packet,
+ .ndo_set_multicast_list = sonic_multicast_list,
+ .ndo_tx_timeout = sonic_tx_timeout,
+ .ndo_get_stats = sonic_get_stats,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+};
+
static int __init macsonic_init(struct net_device *dev)
{
struct sonic_local* lp = netdev_priv(dev);
lp->rra_laddr = lp->rda_laddr + (SIZEOF_SONIC_RD * SONIC_NUM_RDS
* SONIC_BUS_SCALE(lp->dma_bitmode));
- dev->open = macsonic_open;
- dev->stop = macsonic_close;
- dev->hard_start_xmit = sonic_send_packet;
- dev->get_stats = sonic_get_stats;
- dev->set_multicast_list = &sonic_multicast_list;
- dev->tx_timeout = sonic_tx_timeout;
+ dev->netdev_ops = &macsonic_netdev_ops;
dev->watchdog_timeo = TX_TIMEOUT;
/*
lro_mgr->lro_arr = ss->rx_done.lro_desc;
lro_mgr->get_frag_header = myri10ge_get_frag_header;
lro_mgr->max_aggr = myri10ge_lro_max_pkts;
+ lro_mgr->frag_align_pad = 2;
if (lro_mgr->max_aggr > MAX_SKB_FRAGS)
lro_mgr->max_aggr = MAX_SKB_FRAGS;
#include <linux/mii.h>
#include <linux/phy.h>
#include <linux/phy_fixed.h>
+#include <linux/err.h>
#define MII_REGS_NUM 29
int ret;
pdev = platform_device_register_simple("Fixed MDIO bus", 0, NULL, 0);
- if (!pdev) {
- ret = -ENOMEM;
+ if (IS_ERR(pdev)) {
+ ret = PTR_ERR(pdev);
goto err_pdev;
}
#define MII_M1111_COPPER 0
#define MII_M1111_FIBER 1
+#define MII_88E1121_PHY_LED_CTRL 16
+#define MII_88E1121_PHY_LED_PAGE 3
+#define MII_88E1121_PHY_LED_DEF 0x0030
+#define MII_88E1121_PHY_PAGE 22
+
#define MII_M1011_PHY_STATUS 0x11
#define MII_M1011_PHY_STATUS_1000 0x8000
#define MII_M1011_PHY_STATUS_100 0x4000
return err;
}
+static int m88e1121_config_aneg(struct phy_device *phydev)
+{
+ int err, temp;
+
+ err = phy_write(phydev, MII_BMCR, BMCR_RESET);
+ if (err < 0)
+ return err;
+
+ err = phy_write(phydev, MII_M1011_PHY_SCR,
+ MII_M1011_PHY_SCR_AUTO_CROSS);
+ if (err < 0)
+ return err;
+
+ temp = phy_read(phydev, MII_88E1121_PHY_PAGE);
+
+ phy_write(phydev, MII_88E1121_PHY_PAGE, MII_88E1121_PHY_LED_PAGE);
+ phy_write(phydev, MII_88E1121_PHY_LED_CTRL, MII_88E1121_PHY_LED_DEF);
+ phy_write(phydev, MII_88E1121_PHY_PAGE, temp);
+
+ err = genphy_config_aneg(phydev);
+
+ return err;
+}
+
static int m88e1111_config_init(struct phy_device *phydev)
{
int err;
return 0;
}
+static int m88e1121_did_interrupt(struct phy_device *phydev)
+{
+ int imask;
+
+ imask = phy_read(phydev, MII_M1011_IEVENT);
+
+ if (imask & MII_M1011_IMASK_INIT)
+ return 1;
+
+ return 0;
+}
+
static struct phy_driver marvell_drivers[] = {
{
.phy_id = 0x01410c60,
.driver = {.owner = THIS_MODULE,},
},
{
+ .phy_id = 0x01410cb0,
+ .phy_id_mask = 0xfffffff0,
+ .name = "Marvell 88E1121R",
+ .features = PHY_GBIT_FEATURES,
+ .flags = PHY_HAS_INTERRUPT,
+ .config_aneg = &m88e1121_config_aneg,
+ .read_status = &marvell_read_status,
+ .ack_interrupt = &marvell_ack_interrupt,
+ .config_intr = &marvell_config_intr,
+ .did_interrupt = &m88e1121_did_interrupt,
+ .driver = { .owner = THIS_MODULE },
+ },
+ {
.phy_id = 0x01410cd0,
.phy_id_mask = 0xfffffff0,
.name = "Marvell 88E1145",
phydev->adjust_state = handler;
INIT_DELAYED_WORK(&phydev->state_queue, phy_state_machine);
- schedule_delayed_work(&phydev->state_queue, jiffies + HZ);
+ schedule_delayed_work(&phydev->state_queue, HZ);
}
/**
struct phy_device *phydev =
container_of(work, struct phy_device, phy_queue);
+ if (phydev->drv->did_interrupt &&
+ !phydev->drv->did_interrupt(phydev))
+ goto ignore;
+
err = phy_disable_interrupts(phydev);
if (err)
return;
+ignore:
+ atomic_dec(&phydev->irq_disable);
+ enable_irq(phydev->irq);
+ return;
+
irq_enable_err:
disable_irq(phydev->irq);
atomic_inc(&phydev->irq_disable);
if (err < 0)
phy_error(phydev);
- schedule_delayed_work(&phydev->state_queue,
- jiffies + PHY_STATE_TIME * HZ);
+ schedule_delayed_work(&phydev->state_queue, PHY_STATE_TIME * HZ);
}
WARN_ON(channel->rx_pkt != NULL);
efx_rx_strategy(channel);
-
- netif_napi_add(channel->napi_dev, &channel->napi_str,
- efx_poll, napi_weight);
}
}
efx_for_each_channel(channel, efx) {
channel->napi_dev = efx->net_dev;
+ netif_napi_add(channel->napi_dev, &channel->napi_str,
+ efx_poll, napi_weight);
}
return 0;
}
struct efx_channel *channel;
efx_for_each_channel(channel, efx) {
+ if (channel->napi_dev)
+ netif_napi_del(&channel->napi_str);
channel->napi_dev = NULL;
}
}
EFX_POPULATE_QWORD_1(phy_event, EV_CODE, GLOBAL_EV_DECODE);
if (EFX_IS10G(efx))
- EFX_SET_OWORD_FIELD(phy_event, XG_PHY_INTR, 1);
+ EFX_SET_QWORD_FIELD(phy_event, XG_PHY_INTR, 1);
else
- EFX_SET_OWORD_FIELD(phy_event, G_PHY0_INTR, 1);
+ EFX_SET_QWORD_FIELD(phy_event, G_PHY0_INTR, 1);
falcon_generate_event(&efx->channel[0], &phy_event);
}
return ret;
}
+static const struct net_device_ops sh_eth_netdev_ops = {
+ .ndo_open = sh_eth_open,
+ .ndo_stop = sh_eth_close,
+ .ndo_start_xmit = sh_eth_start_xmit,
+ .ndo_get_stats = sh_eth_get_stats,
+ .ndo_set_multicast_list = sh_eth_set_multicast_list,
+ .ndo_tx_timeout = sh_eth_tx_timeout,
+ .ndo_do_ioctl = sh_eth_do_ioctl,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
static int sh_eth_drv_probe(struct platform_device *pdev)
{
int ret, i, devno = 0;
mdp->edmac_endian = pd->edmac_endian;
/* set function */
- ndev->open = sh_eth_open;
- ndev->hard_start_xmit = sh_eth_start_xmit;
- ndev->stop = sh_eth_close;
- ndev->get_stats = sh_eth_get_stats;
- ndev->set_multicast_list = sh_eth_set_multicast_list;
- ndev->do_ioctl = sh_eth_do_ioctl;
- ndev->tx_timeout = sh_eth_tx_timeout;
+ ndev->netdev_ops = &sh_eth_netdev_ops;
ndev->watchdog_timeo = TX_TIMEOUT;
mdp->post_rx = POST_RX >> (devno << 1);
if (netif_msg_ifdown(skge))
printk(KERN_INFO PFX "%s: disabling interface\n", dev->name);
- netif_stop_queue(dev);
+ netif_tx_disable(dev);
if (hw->chip_id == CHIP_ID_GENESIS && hw->phy_type == SK_PHY_XMAC)
del_timer_sync(&skge->link_timer);
}
skge->tx_ring.to_clean = e;
- netif_wake_queue(dev);
}
static void skge_tx_timeout(struct net_device *dev)
skge_write8(skge->hw, Q_ADDR(txqaddr[skge->port], Q_CSR), CSR_STOP);
skge_tx_clean(dev);
+ netif_wake_queue(dev);
}
static int skge_change_mtu(struct net_device *dev, int new_mtu)
return ERR_PTR(err);
}
+static const struct net_device_ops sun3_82586_netdev_ops = {
+ .ndo_open = sun3_82586_open,
+ .ndo_stop = sun3_82586_close,
+ .ndo_start_xmit = sun3_82586_send_packet,
+ .ndo_set_multicast_list = set_multicast_list,
+ .ndo_tx_timeout = sun3_82586_timeout,
+ .ndo_get_stats = sun3_82586_get_stats,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = eth_mac_addr,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
static int __init sun3_82586_probe1(struct net_device *dev,int ioaddr)
{
int i, size, retval;
printk("Memaddr: 0x%lx, Memsize: %d, IRQ %d\n",dev->mem_start,size, dev->irq);
- dev->open = sun3_82586_open;
- dev->stop = sun3_82586_close;
- dev->get_stats = sun3_82586_get_stats;
- dev->tx_timeout = sun3_82586_timeout;
+ dev->netdev_ops = &sun3_82586_netdev_ops;
dev->watchdog_timeo = HZ/20;
- dev->hard_start_xmit = sun3_82586_send_packet;
- dev->set_multicast_list = set_multicast_list;
dev->if_port = 0;
return 0;
return 0;
}
+static const struct net_device_ops tc35815_netdev_ops = {
+ .ndo_open = tc35815_open,
+ .ndo_stop = tc35815_close,
+ .ndo_start_xmit = tc35815_send_packet,
+ .ndo_get_stats = tc35815_get_stats,
+ .ndo_set_multicast_list = tc35815_set_multicast_list,
+ .ndo_tx_timeout = tc35815_tx_timeout,
+ .ndo_do_ioctl = tc35815_ioctl,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = tc35815_poll_controller,
+#endif
+};
+
static int __devinit tc35815_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
ioaddr = pcim_iomap_table(pdev)[1];
/* Initialize the device structure. */
- dev->open = tc35815_open;
- dev->hard_start_xmit = tc35815_send_packet;
- dev->stop = tc35815_close;
- dev->get_stats = tc35815_get_stats;
- dev->set_multicast_list = tc35815_set_multicast_list;
- dev->do_ioctl = tc35815_ioctl;
+ dev->netdev_ops = &tc35815_netdev_ops;
dev->ethtool_ops = &tc35815_ethtool_ops;
- dev->tx_timeout = tc35815_tx_timeout;
dev->watchdog_timeo = TC35815_TX_TIMEOUT;
#ifdef TC35815_NAPI
netif_napi_add(dev, &lp->napi, tc35815_poll, NAPI_WEIGHT);
#endif
-#ifdef CONFIG_NET_POLL_CONTROLLER
- dev->poll_controller = tc35815_poll_controller;
-#endif
dev->irq = pdev->irq;
dev->base_addr = (unsigned long)ioaddr;
/* Next, try NVRAM. */
if (!tg3_nvram_read_be32(tp, mac_offset + 0, &hi) &&
!tg3_nvram_read_be32(tp, mac_offset + 4, &lo)) {
- memcpy(&dev->dev_addr[0], ((char *)&hi) + 2, 2);
- memcpy(&dev->dev_addr[2], (char *)&lo, sizeof(lo));
+ dev->dev_addr[0] = ((hi >> 16) & 0xff);
+ dev->dev_addr[1] = ((hi >> 24) & 0xff);
+ dev->dev_addr[2] = ((lo >> 0) & 0xff);
+ dev->dev_addr[3] = ((lo >> 8) & 0xff);
+ dev->dev_addr[4] = ((lo >> 16) & 0xff);
+ dev->dev_addr[5] = ((lo >> 24) & 0xff);
+
}
/* Finally just fetch it out of the MAC control regs. */
else {
.set_settings = tsi108_set_settings,
};
+static const struct net_device_ops tsi108_netdev_ops = {
+ .ndo_open = tsi108_open,
+ .ndo_stop = tsi108_close,
+ .ndo_start_xmit = tsi108_send_packet,
+ .ndo_set_multicast_list = tsi108_set_rx_mode,
+ .ndo_get_stats = tsi108_get_stats,
+ .ndo_do_ioctl = tsi108_do_ioctl,
+ .ndo_set_mac_address = tsi108_set_mac,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+};
+
static int
tsi108_init_one(struct platform_device *pdev)
{
data->phy_type = einfo->phy_type;
data->irq_num = einfo->irq_num;
data->id = pdev->id;
- dev->open = tsi108_open;
- dev->stop = tsi108_close;
- dev->hard_start_xmit = tsi108_send_packet;
- dev->set_mac_address = tsi108_set_mac;
- dev->set_multicast_list = tsi108_set_rx_mode;
- dev->get_stats = tsi108_get_stats;
netif_napi_add(dev, &data->napi, tsi108_poll, 64);
- dev->do_ioctl = tsi108_do_ioctl;
+ dev->netdev_ops = &tsi108_netdev_ops;
dev->ethtool_ops = &tsi108_ethtool_ops;
/* Apparently, the Linux networking code won't use scatter-gather
int err;
/* Under a page? Don't bother with paged skb. */
- if (prepad + len < PAGE_SIZE)
+ if (prepad + len < PAGE_SIZE || !linear)
linear = len;
skb = sock_alloc_send_pskb(sk, prepad + linear, len - linear, noblock,
if ((tun->flags & TUN_TYPE_MASK) == TUN_TAP_DEV) {
align = NET_IP_ALIGN;
- if (unlikely(len < ETH_HLEN))
+ if (unlikely(len < ETH_HLEN ||
+ (gso.hdr_len && gso.hdr_len < ETH_HLEN)))
return -EINVAL;
}
static int velocity_open(struct net_device *dev);
static int velocity_change_mtu(struct net_device *dev, int mtu);
static int velocity_xmit(struct sk_buff *skb, struct net_device *dev);
-static int velocity_intr(int irq, void *dev_instance);
+static irqreturn_t velocity_intr(int irq, void *dev_instance);
static void velocity_set_multi(struct net_device *dev);
static struct net_device_stats *velocity_get_stats(struct net_device *dev);
static int velocity_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
* efficiently as possible.
*/
-static int velocity_intr(int irq, void *dev_instance)
+static irqreturn_t velocity_intr(int irq, void *dev_instance)
{
struct net_device *dev = dev_instance;
struct velocity_info *vptr = netdev_priv(dev);
return err;
}
+static const struct net_device_ops xtsonic_netdev_ops = {
+ .ndo_open = xtsonic_open,
+ .ndo_stop = xtsonic_close,
+ .ndo_start_xmit = sonic_send_packet,
+ .ndo_get_stats = sonic_get_stats,
+ .ndo_set_multicast_list = sonic_multicast_list,
+ .ndo_tx_timeout = sonic_tx_timeout,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_change_mtu = eth_change_mtu,
+ .ndo_set_mac_address = eth_mac_addr,
+};
+
static int __init sonic_probe1(struct net_device *dev)
{
static unsigned version_printed = 0;
lp->rra_laddr = lp->rda_laddr + (SIZEOF_SONIC_RD * SONIC_NUM_RDS
* SONIC_BUS_SCALE(lp->dma_bitmode));
- dev->open = xtsonic_open;
- dev->stop = xtsonic_close;
- dev->hard_start_xmit = sonic_send_packet;
- dev->get_stats = sonic_get_stats;
- dev->set_multicast_list = &sonic_multicast_list;
- dev->tx_timeout = sonic_tx_timeout;
+ dev->netdev_ops = &xtsonic_netdev_ops;
dev->watchdog_timeo = TX_TIMEOUT;
/*
0 /*base_hi*/,
PAR_IRQ,
PARPORT_DMA_NONE /* dma */,
- NULL /*struct pci_dev* */) )
+ NULL /*struct pci_dev* */),
+ 0 /* shared irq flags */ )
printk(KERN_WARNING PFX "Probing parallel port failed.\n");
#endif /* CONFIG_PARPORT_PC */
#include <linux/slab.h>
#include <linux/buffer_head.h>
#include <linux/hdreg.h>
+#include <linux/async.h>
#include <asm/ccwdev.h>
#include <asm/ebcdic.h>
if (rc && rc != -EAGAIN)
device->target = device->state;
- if (device->state == device->target)
+ if (device->state == device->target) {
wake_up(&dasd_init_waitq);
+ dasd_put_device(device);
+ }
/* let user-space know that the device status changed */
kobject_uevent(&device->cdev->dev.kobj, KOBJ_CHANGE);
*/
void dasd_set_target_state(struct dasd_device *device, int target)
{
+ dasd_get_device(device);
/* If we are in probeonly mode stop at DASD_STATE_READY. */
if (dasd_probeonly && target > DASD_STATE_READY)
target = DASD_STATE_READY;
if (device->target != target) {
- if (device->state == target)
+ if (device->state == target) {
wake_up(&dasd_init_waitq);
+ dasd_put_device(device);
+ }
device->target = target;
}
if (device->state != device->target)
* SECTION: common functions for ccw_driver use
*/
+static void dasd_generic_auto_online(void *data, async_cookie_t cookie)
+{
+ struct ccw_device *cdev = data;
+ int ret;
+
+ ret = ccw_device_set_online(cdev);
+ if (ret)
+ pr_warning("%s: Setting the DASD online failed with rc=%d\n",
+ dev_name(&cdev->dev), ret);
+ else {
+ struct dasd_device *device = dasd_device_from_cdev(cdev);
+ wait_event(dasd_init_waitq, _wait_for_device(device));
+ dasd_put_device(device);
+ }
+}
+
/*
* Initial attempt at a probe function. this can be simplified once
* the other detection code is gone.
*/
if ((dasd_get_feature(cdev, DASD_FEATURE_INITIAL_ONLINE) > 0 ) ||
(dasd_autodetect && dasd_busid_known(dev_name(&cdev->dev)) != 0))
- ret = ccw_device_set_online(cdev);
- if (ret)
- pr_warning("%s: Setting the DASD online failed with rc=%d\n",
- dev_name(&cdev->dev), ret);
+ async_schedule(dasd_generic_auto_online, cdev);
return 0;
}
} else
pr_debug("dasd_generic device %s found\n",
dev_name(&cdev->dev));
-
- /* FIXME: we have to wait for the root device but we don't want
- * to wait for each single device but for all at once. */
- wait_event(dasd_init_waitq, _wait_for_device(device));
-
dasd_put_device(device);
-
return rc;
}
ccw++;
recid += count;
new_track = 0;
+ /* first idaw for a ccw may start anywhere */
+ if (!idaw_dst)
+ idaw_dst = dst;
}
- /* If we start a new idaw, everything is fine and the
- * start of the new idaw is the start of this segment.
+ /* If we start a new idaw, we must make sure that it
+ * starts on an IDA_BLOCK_SIZE boundary.
* If we continue an idaw, we must make sure that the
* current segment begins where the so far accumulated
* idaw ends
*/
- if (!idaw_dst)
- idaw_dst = dst;
+ if (!idaw_dst) {
+ if (__pa(dst) & (IDA_BLOCK_SIZE-1)) {
+ dasd_sfree_request(cqr, startdev);
+ return ERR_PTR(-ERANGE);
+ } else
+ idaw_dst = dst;
+ }
if ((idaw_dst + idaw_len) != dst) {
dasd_sfree_request(cqr, startdev);
return ERR_PTR(-ERANGE);
qdio_set_state(irq_ptr, QDIO_IRQ_STATE_STOPPED);
}
-static void qdio_call_shutdown(struct work_struct *work)
-{
- struct ccw_device_private *priv;
- struct ccw_device *cdev;
-
- priv = container_of(work, struct ccw_device_private, kick_work);
- cdev = priv->cdev;
- qdio_shutdown(cdev, QDIO_FLAG_CLEANUP_USING_CLEAR);
- put_device(&cdev->dev);
-}
-
-static void qdio_int_error(struct ccw_device *cdev)
-{
- struct qdio_irq *irq_ptr = cdev->private->qdio_data;
-
- switch (irq_ptr->state) {
- case QDIO_IRQ_STATE_INACTIVE:
- case QDIO_IRQ_STATE_CLEANUP:
- qdio_set_state(irq_ptr, QDIO_IRQ_STATE_ERR);
- break;
- case QDIO_IRQ_STATE_ESTABLISHED:
- case QDIO_IRQ_STATE_ACTIVE:
- qdio_set_state(irq_ptr, QDIO_IRQ_STATE_STOPPED);
- if (get_device(&cdev->dev)) {
- /* Can't call shutdown from interrupt context. */
- PREPARE_WORK(&cdev->private->kick_work,
- qdio_call_shutdown);
- queue_work(ccw_device_work, &cdev->private->kick_work);
- }
- break;
- default:
- WARN_ON(1);
- }
- wake_up(&cdev->private->wait_q);
-}
-
static int qdio_establish_check_errors(struct ccw_device *cdev, int cstat,
int dstat)
{
switch (PTR_ERR(irb)) {
case -EIO:
DBF_ERROR("%4x IO error", irq_ptr->schid.sch_no);
- return;
- case -ETIMEDOUT:
- DBF_ERROR("%4x IO timeout", irq_ptr->schid.sch_no);
- qdio_int_error(cdev);
+ qdio_set_state(irq_ptr, QDIO_IRQ_STATE_ERR);
+ wake_up(&cdev->private->wait_q);
return;
default:
WARN_ON(1);
case QDIO_IRQ_STATE_ACTIVE:
if (cstat & SCHN_STAT_PCI) {
qdio_int_handler_pci(irq_ptr);
- /* no state change so no need to wake up wait_q */
return;
}
if ((cstat & ~SCHN_STAT_PCI) || dstat) {
return 0;
}
-static int jsf_ioctl(struct inode *inode, struct file *f, unsigned int cmd,
- unsigned long arg)
+static long jsf_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
{
int error = -ENOTTY;
void __user *argp = (void __user *)arg;
+
+ lock_kernel();
- if (!capable(CAP_SYS_ADMIN))
+ if (!capable(CAP_SYS_ADMIN)) {
+ unlock_kernel();
return -EPERM;
+ }
switch (cmd) {
case JSFLASH_IDENT:
- if (copy_to_user(argp, &jsf0.id, JSFIDSZ))
+ if (copy_to_user(argp, &jsf0.id, JSFIDSZ)) {
+ unlock_kernel();
return -EFAULT;
+ }
break;
case JSFLASH_ERASE:
error = jsf_ioctl_erase(arg);
break;
}
+ unlock_kernel();
return error;
}
.llseek = jsf_lseek,
.read = jsf_read,
.write = jsf_write,
- .ioctl = jsf_ioctl,
+ .unlocked_ioctl = jsf_ioctl,
.mmap = jsf_mmap,
.open = jsf_open,
.release = jsf_release,
static void uctrl_get_event_status(struct uctrl_driver *);
static void uctrl_get_external_status(struct uctrl_driver *);
-static int
-uctrl_ioctl(struct inode *inode, struct file *file,
- unsigned int cmd, unsigned long arg)
+static long
+uctrl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
switch (cmd) {
default:
static const struct file_operations uctrl_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
- .ioctl = uctrl_ioctl,
+ .unlocked_ioctl = uctrl_ioctl,
.open = uctrl_open,
};
*/
cmd->sense_buffer[8] = 0; /* Information */
cmd->sense_buffer[9] = 0xa; /* Add. length */
- do_div(bghm, cmd->device->sector_size);
+ bghm /= cmd->device->sector_size;
failing_sector = scsi_get_lba(cmd);
failing_sector += bghm;
struct intc_desc_int {
struct list_head list;
struct sys_device sysdev;
+ pm_message_t state;
unsigned long *reg;
#ifdef CONFIG_SMP
unsigned long *smp;
/* get intc controller associated with this sysdev */
d = container_of(dev, struct intc_desc_int, sysdev);
- /* enable wakeup irqs belonging to this intc controller */
- for_each_irq_desc(irq, desc) {
- if ((desc->status & IRQ_WAKEUP) && (desc->chip == &d->chip))
- intc_enable(irq);
+ switch (state.event) {
+ case PM_EVENT_ON:
+ if (d->state.event != PM_EVENT_FREEZE)
+ break;
+ for_each_irq_desc(irq, desc) {
+ if (desc->chip != &d->chip)
+ continue;
+ if (desc->status & IRQ_DISABLED)
+ intc_disable(irq);
+ else
+ intc_enable(irq);
+ }
+ break;
+ case PM_EVENT_FREEZE:
+ /* nothing has to be done */
+ break;
+ case PM_EVENT_SUSPEND:
+ /* enable wakeup irqs belonging to this intc controller */
+ for_each_irq_desc(irq, desc) {
+ if ((desc->status & IRQ_WAKEUP) && (desc->chip == &d->chip))
+ intc_enable(irq);
+ }
+ break;
}
+ d->state = state;
return 0;
}
+static int intc_resume(struct sys_device *dev)
+{
+ return intc_suspend(dev, PMSG_ON);
+}
+
static struct sysdev_class intc_sysdev_class = {
.name = "intc",
.suspend = intc_suspend,
+ .resume = intc_resume,
};
/* register this intc as sysdev to allow suspend/resume */
tty->driver_data = acm;
acm->tty = tty;
- /* force low_latency on so that our tty_push actually forces the data through,
- otherwise it is scheduled, and with high data rates data can get lost. */
- tty->low_latency = 1;
-
if (usb_autopm_get_interface(acm->control) < 0)
goto early_bail;
else
}
tty = tty_port_tty_get(&port->port);
- if (tty && urb->actual_length) {
- usb_serial_debug_data(debug, dev, __func__,
- urb->actual_length, urb->transfer_buffer);
-
- if (!tport->tp_is_open)
- dbg("%s - port closed, dropping data", __func__);
- else
- ti_recv(&urb->dev->dev, tty,
+ if (tty) {
+ if (urb->actual_length) {
+ usb_serial_debug_data(debug, dev, __func__,
+ urb->actual_length, urb->transfer_buffer);
+
+ if (!tport->tp_is_open)
+ dbg("%s - port closed, dropping data",
+ __func__);
+ else
+ ti_recv(&urb->dev->dev, tty,
urb->transfer_buffer,
urb->actual_length);
-
- spin_lock(&tport->tp_lock);
- tport->tp_icount.rx += urb->actual_length;
- spin_unlock(&tport->tp_lock);
+ spin_lock(&tport->tp_lock);
+ tport->tp_icount.rx += urb->actual_length;
+ spin_unlock(&tport->tp_lock);
+ }
tty_kref_put(tty);
}
return NULL;
}
+/**
+ * bio_alloc - allocate a bio for I/O
+ * @gfp_mask: the GFP_ mask given to the slab allocator
+ * @nr_iovecs: number of iovecs to pre-allocate
+ *
+ * Description:
+ * bio_alloc will allocate a bio and associated bio_vec array that can hold
+ * at least @nr_iovecs entries. Allocations will be done from the
+ * fs_bio_set. Also see @bio_alloc_bioset.
+ *
+ * If %__GFP_WAIT is set, then bio_alloc will always be able to allocate
+ * a bio. This is due to the mempool guarantees. To make this work, callers
+ * must never allocate more than 1 bio at the time from this pool. Callers
+ * that need to allocate more than 1 bio must always submit the previously
+ * allocate bio for IO before attempting to allocate a new one. Failure to
+ * do so can cause livelocks under memory pressure.
+ *
+ **/
struct bio *bio_alloc(gfp_t gfp_mask, int nr_iovecs)
{
struct bio *bio = bio_alloc_bioset(gfp_mask, nr_iovecs, fs_bio_set);
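As a rough illustration of the rule spelled out in the comment above (a __GFP_WAIT caller must submit each bio drawn from fs_bio_set before allocating the next one), a caller could be structured like the sketch below. The function name and the omission of completion handling are illustrative assumptions, not part of this patch.

static void example_write_pages(struct block_device *bdev, sector_t sector,
				struct page **pages, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++) {
		/* GFP_KERNEL includes __GFP_WAIT, so this never returns NULL */
		struct bio *bio = bio_alloc(GFP_KERNEL, 1);

		bio->bi_bdev = bdev;
		bio->bi_sector = sector + i * (PAGE_SIZE >> 9);
		bio_add_page(bio, pages[i], PAGE_SIZE, 0);
		/* bi_end_io and bio_put handling are omitted for brevity */

		/*
		 * Submit before the next bio_alloc() from the same mempool,
		 * otherwise the guarantee described above no longer holds.
		 */
		submit_bio(WRITE, bio);
	}
}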
* Completion handler for block_write_full_page() - pages which are unlocked
* during I/O, and which have PageWriteback cleared upon I/O completion.
*/
-static void end_buffer_async_write(struct buffer_head *bh, int uptodate)
+void end_buffer_async_write(struct buffer_head *bh, int uptodate)
{
char b[BDEVNAME_SIZE];
unsigned long flags;
set_buffer_async_read(bh);
}
-void mark_buffer_async_write(struct buffer_head *bh)
+void mark_buffer_async_write_endio(struct buffer_head *bh,
+ bh_end_io_t *handler)
{
- bh->b_end_io = end_buffer_async_write;
+ bh->b_end_io = handler;
set_buffer_async_write(bh);
}
+
+void mark_buffer_async_write(struct buffer_head *bh)
+{
+ mark_buffer_async_write_endio(bh, end_buffer_async_write);
+}
EXPORT_SYMBOL(mark_buffer_async_write);
return err;
}
-void do_thaw_all(unsigned long unused)
+void do_thaw_all(struct work_struct *work)
{
struct super_block *sb;
char b[BDEVNAME_SIZE];
goto restart;
}
spin_unlock(&sb_lock);
+ kfree(work);
printk(KERN_WARNING "Emergency Thaw complete\n");
}
*/
void emergency_thaw_all(void)
{
- pdflush_operation(do_thaw_all, 0);
+ struct work_struct *work;
+
+ work = kmalloc(sizeof(*work), GFP_ATOMIC);
+ if (work) {
+ INIT_WORK(work, do_thaw_all);
+ schedule_work(work);
+ }
}
/**
* unplugging the device queue.
*/
static int __block_write_full_page(struct inode *inode, struct page *page,
- get_block_t *get_block, struct writeback_control *wbc)
+ get_block_t *get_block, struct writeback_control *wbc,
+ bh_end_io_t *handler)
{
int err;
sector_t block;
continue;
}
if (test_clear_buffer_dirty(bh)) {
- mark_buffer_async_write(bh);
+ mark_buffer_async_write_endio(bh, handler);
} else {
unlock_buffer(bh);
}
if (buffer_mapped(bh) && buffer_dirty(bh) &&
!buffer_delay(bh)) {
lock_buffer(bh);
- mark_buffer_async_write(bh);
+ mark_buffer_async_write_endio(bh, handler);
} else {
/*
* The buffer may have been set dirty during
out:
ret = mpage_writepage(page, get_block, wbc);
if (ret == -EAGAIN)
- ret = __block_write_full_page(inode, page, get_block, wbc);
+ ret = __block_write_full_page(inode, page, get_block, wbc,
+ end_buffer_async_write);
return ret;
}
EXPORT_SYMBOL(nobh_writepage);
/*
* The generic ->writepage function for buffer-backed address_spaces
+ * this form passes in the end_io handler used to finish the IO.
*/
-int block_write_full_page(struct page *page, get_block_t *get_block,
- struct writeback_control *wbc)
+int block_write_full_page_endio(struct page *page, get_block_t *get_block,
+ struct writeback_control *wbc, bh_end_io_t *handler)
{
struct inode * const inode = page->mapping->host;
loff_t i_size = i_size_read(inode);
/* Is the page fully inside i_size? */
if (page->index < end_index)
- return __block_write_full_page(inode, page, get_block, wbc);
+ return __block_write_full_page(inode, page, get_block, wbc,
+ handler);
/* Is the page fully outside i_size? (truncate in progress) */
offset = i_size & (PAGE_CACHE_SIZE-1);
* writes to that region are not written out to the file."
*/
zero_user_segment(page, offset, PAGE_CACHE_SIZE);
- return __block_write_full_page(inode, page, get_block, wbc);
+ return __block_write_full_page(inode, page, get_block, wbc, handler);
}
+/*
+ * The generic ->writepage function for buffer-backed address_spaces
+ */
+int block_write_full_page(struct page *page, get_block_t *get_block,
+ struct writeback_control *wbc)
+{
+ return block_write_full_page_endio(page, get_block, wbc,
+ end_buffer_async_write);
+}
+
+
sector_t generic_block_bmap(struct address_space *mapping, sector_t block,
get_block_t *get_block)
{
EXPORT_SYMBOL(block_sync_page);
EXPORT_SYMBOL(block_truncate_page);
EXPORT_SYMBOL(block_write_full_page);
+EXPORT_SYMBOL(block_write_full_page_endio);
EXPORT_SYMBOL(cont_write_begin);
EXPORT_SYMBOL(end_buffer_read_sync);
EXPORT_SYMBOL(end_buffer_write_sync);
+EXPORT_SYMBOL(end_buffer_async_write);
EXPORT_SYMBOL(file_fsync);
EXPORT_SYMBOL(generic_block_bmap);
EXPORT_SYMBOL(generic_cont_expand_simple);
struct bio *bio;
bio = bio_alloc(GFP_KERNEL, nr_vecs);
- if (bio == NULL)
- return -ENOMEM;
bio->bi_bdev = bdev;
bio->bi_sector = first_sector;
len = ee_len;
bio = bio_alloc(GFP_NOIO, len);
- if (!bio)
- return -ENOMEM;
bio->bi_sector = ee_pblock;
bio->bi_bdev = inode->i_sb->s_bdev;
}
static int fuse_get_user_pages(struct fuse_req *req, const char __user *buf,
- unsigned *nbytesp, int write)
+ size_t *nbytesp, int write)
{
- unsigned nbytes = *nbytesp;
+ size_t nbytes = *nbytesp;
unsigned long user_addr = (unsigned long) buf;
unsigned offset = user_addr & ~PAGE_MASK;
int npages;
return 0;
}
- nbytes = min(nbytes, (unsigned) FUSE_MAX_PAGES_PER_REQ << PAGE_SHIFT);
+ nbytes = min_t(size_t, nbytes, FUSE_MAX_PAGES_PER_REQ << PAGE_SHIFT);
npages = (nbytes + offset + PAGE_SIZE - 1) >> PAGE_SHIFT;
npages = clamp(npages, 1, FUSE_MAX_PAGES_PER_REQ);
down_read(&current->mm->mmap_sem);
if (vma->vm_flags & VM_MAYSHARE)
return -ENODEV;
+ invalidate_inode_pages2(file->f_mapping);
+
return generic_file_mmap(file, vma);
}
GLOCK_BUG_ON(gl, test_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags));
- down_read(&gfs2_umount_flush_sem);
if (test_bit(GLF_DEMOTE, &gl->gl_flags) &&
gl->gl_demote_state != gl->gl_state) {
if (find_first_holder(gl))
if (ret == 0)
goto out_unlock;
if (ret == 2)
- goto out_sem;
+ goto out;
gh = find_first_waiter(gl);
gl->gl_target = gh->gh_state;
if (!(gh->gh_flags & (LM_FLAG_TRY | LM_FLAG_TRY_1CB)))
do_error(gl, 0); /* Fail queued try locks */
}
do_xmote(gl, gh, gl->gl_target);
-out_sem:
- up_read(&gfs2_umount_flush_sem);
+out:
return;
out_sched:
gfs2_glock_put(gl);
out_unlock:
clear_bit(GLF_LOCK, &gl->gl_flags);
- goto out_sem;
+ goto out;
}
static void glock_work_func(struct work_struct *work)
if (test_and_clear_bit(GLF_REPLY_PENDING, &gl->gl_flags))
finish_xmote(gl, gl->gl_reply);
+ down_read(&gfs2_umount_flush_sem);
spin_lock(&gl->gl_spin);
if (test_and_clear_bit(GLF_PENDING_DEMOTE, &gl->gl_flags) &&
gl->gl_state != LM_ST_UNLOCKED &&
}
run_queue(gl, 0);
spin_unlock(&gl->gl_spin);
+ up_read(&gfs2_umount_flush_sem);
if (!delay ||
queue_delayed_work(glock_workqueue, &gl->gl_work, delay) == 0)
gfs2_glock_put(gl);
if (S_ISREG(mode)) {
inode->i_op = &gfs2_file_iops;
if (gfs2_localflocks(sdp))
- inode->i_fop = gfs2_file_fops_nolock;
+ inode->i_fop = &gfs2_file_fops_nolock;
else
- inode->i_fop = gfs2_file_fops;
+ inode->i_fop = &gfs2_file_fops;
} else if (S_ISDIR(mode)) {
inode->i_op = &gfs2_dir_iops;
if (gfs2_localflocks(sdp))
- inode->i_fop = gfs2_dir_fops_nolock;
+ inode->i_fop = &gfs2_dir_fops_nolock;
else
- inode->i_fop = gfs2_dir_fops;
+ inode->i_fop = &gfs2_dir_fops;
} else if (S_ISLNK(mode)) {
inode->i_op = &gfs2_symlink_iops;
} else {
extern const struct inode_operations gfs2_file_iops;
extern const struct inode_operations gfs2_dir_iops;
extern const struct inode_operations gfs2_symlink_iops;
-extern const struct file_operations *gfs2_file_fops_nolock;
-extern const struct file_operations *gfs2_dir_fops_nolock;
+extern const struct file_operations gfs2_file_fops_nolock;
+extern const struct file_operations gfs2_dir_fops_nolock;
extern void gfs2_set_inode_flags(struct inode *inode);
#ifdef CONFIG_GFS2_FS_LOCKING_DLM
-extern const struct file_operations *gfs2_file_fops;
-extern const struct file_operations *gfs2_dir_fops;
+extern const struct file_operations gfs2_file_fops;
+extern const struct file_operations gfs2_dir_fops;
+
static inline int gfs2_localflocks(const struct gfs2_sbd *sdp)
{
return sdp->sd_args.ar_localflocks;
}
#else /* Single node only */
-#define gfs2_file_fops NULL
-#define gfs2_dir_fops NULL
+#define gfs2_file_fops gfs2_file_fops_nolock
+#define gfs2_dir_fops gfs2_dir_fops_nolock
+
static inline int gfs2_localflocks(const struct gfs2_sbd *sdp)
{
return 1;
}
}
-const struct file_operations *gfs2_file_fops = &(const struct file_operations){
+const struct file_operations gfs2_file_fops = {
.llseek = gfs2_llseek,
.read = do_sync_read,
.aio_read = generic_file_aio_read,
.setlease = gfs2_setlease,
};
-const struct file_operations *gfs2_dir_fops = &(const struct file_operations){
+const struct file_operations gfs2_dir_fops = {
.readdir = gfs2_readdir,
.unlocked_ioctl = gfs2_ioctl,
.open = gfs2_open,
#endif /* CONFIG_GFS2_FS_LOCKING_DLM */
-const struct file_operations *gfs2_file_fops_nolock = &(const struct file_operations){
+const struct file_operations gfs2_file_fops_nolock = {
.llseek = gfs2_llseek,
.read = do_sync_read,
.aio_read = generic_file_aio_read,
.setlease = generic_setlease,
};
-const struct file_operations *gfs2_dir_fops_nolock = &(const struct file_operations){
+const struct file_operations gfs2_dir_fops_nolock = {
.readdir = gfs2_readdir,
.unlocked_ioctl = gfs2_ioctl,
.open = gfs2_open,
lock_page(page);
bio = bio_alloc(GFP_NOFS, 1);
- if (unlikely(!bio)) {
- __free_page(page);
- return -ENOBUFS;
- }
-
bio->bi_sector = sector * (sb->s_blocksize >> 9);
bio->bi_bdev = sb->s_bdev;
bio_add_page(bio, page, PAGE_SIZE, 0);
ip = ghs[1].gh_gl->gl_object;
ip->i_disksize = size;
+ i_size_write(inode, size);
error = gfs2_meta_inode_buffer(ip, &dibh);
static LIST_HEAD(qd_lru_list);
static atomic_t qd_lru_count = ATOMIC_INIT(0);
-static spinlock_t qd_lru_lock = SPIN_LOCK_UNLOCKED;
+static DEFINE_SPINLOCK(qd_lru_lock);
int gfs2_shrink_qd_memory(int nr, gfp_t gfp_mask)
{
refrigerator();
t = min(quotad_timeo, statfs_timeo);
- prepare_to_wait(&sdp->sd_quota_wait, &wait, TASK_UNINTERRUPTIBLE);
+ prepare_to_wait(&sdp->sd_quota_wait, &wait, TASK_INTERRUPTIBLE);
spin_lock(&sdp->sd_trunc_lock);
empty = list_empty(&sdp->sd_trunc_list);
spin_unlock(&sdp->sd_trunc_lock);
spin_lock(&inode_lock);
}
-/*
- * We rarely want to lock two inodes that do not have a parent/child
- * relationship (such as directory, child inode) simultaneously. The
- * vast majority of file systems should be able to get along fine
- * without this. Do not use these functions except as a last resort.
- */
-void inode_double_lock(struct inode *inode1, struct inode *inode2)
-{
- if (inode1 == NULL || inode2 == NULL || inode1 == inode2) {
- if (inode1)
- mutex_lock(&inode1->i_mutex);
- else if (inode2)
- mutex_lock(&inode2->i_mutex);
- return;
- }
-
- if (inode1 < inode2) {
- mutex_lock_nested(&inode1->i_mutex, I_MUTEX_PARENT);
- mutex_lock_nested(&inode2->i_mutex, I_MUTEX_CHILD);
- } else {
- mutex_lock_nested(&inode2->i_mutex, I_MUTEX_PARENT);
- mutex_lock_nested(&inode1->i_mutex, I_MUTEX_CHILD);
- }
-}
-EXPORT_SYMBOL(inode_double_lock);
-
-void inode_double_unlock(struct inode *inode1, struct inode *inode2)
-{
- if (inode1)
- mutex_unlock(&inode1->i_mutex);
-
- if (inode2 && inode2 != inode1)
- mutex_unlock(&inode2->i_mutex);
-}
-EXPORT_SYMBOL(inode_double_unlock);
-
static __initdata unsigned long ihash_entries;
static int __init set_ihash_entries(char *str)
{
.bpop_translate = NULL,
};
+static struct lock_class_key nilfs_bmap_dat_lock_key;
+
/**
* nilfs_bmap_read - read a bmap from an inode
* @bmap: bmap
bmap->b_pops = &nilfs_bmap_ptr_ops_p;
bmap->b_last_allocated_key = 0; /* XXX: use macro */
bmap->b_last_allocated_ptr = NILFS_BMAP_NEW_PTR_INIT;
+ lockdep_set_class(&bmap->b_sem, &nilfs_bmap_dat_lock_key);
break;
case NILFS_CPFILE_INO:
case NILFS_SUFILE_INO:
{
memcpy(gcbmap, bmap, sizeof(union nilfs_bmap_union));
init_rwsem(&gcbmap->b_sem);
+ lockdep_set_class(&bmap->b_sem, &nilfs_bmap_dat_lock_key);
gcbmap->b_inode = &NILFS_BMAP_I(gcbmap)->vfs_inode;
}
{
memcpy(bmap, gcbmap, sizeof(union nilfs_bmap_union));
init_rwsem(&bmap->b_sem);
+ lockdep_set_class(&bmap->b_sem, &nilfs_bmap_dat_lock_key);
bmap->b_inode = &NILFS_BMAP_I(bmap)->vfs_inode;
}
#include "bmap_union.h"
/*
- * NILFS filesystem version
- */
-#define NILFS_VERSION "2.0.5"
-
-/*
* nilfs inode data in memory
*/
struct nilfs_inode_info {
struct nilfs_segment_entry *ent, *n;
struct inode *sufile = nilfs->ns_sufile;
__u64 segnum[4];
- time_t mtime;
int err;
int i;
* Collecting segments written after the latest super root.
* These are marked dirty to avoid being reallocated in the next write.
*/
- mtime = get_seconds();
list_for_each_entry_safe(ent, n, head, list) {
- if (ent->segnum == segnum[0]) {
- list_del(&ent->list);
- nilfs_free_segment_entry(ent);
- continue;
- }
- err = nilfs_open_segment_entry(ent, sufile);
- if (unlikely(err))
- goto failed;
- if (!nilfs_segment_usage_dirty(ent->raw_su)) {
- /* make the segment garbage */
- ent->raw_su->su_nblocks = cpu_to_le32(0);
- ent->raw_su->su_lastmod = cpu_to_le32(mtime);
- nilfs_segment_usage_set_dirty(ent->raw_su);
+ if (ent->segnum != segnum[0]) {
+ err = nilfs_sufile_scrap(sufile, ent->segnum);
+ if (unlikely(err))
+ goto failed;
}
list_del(&ent->list);
- nilfs_close_segment_entry(ent, sufile);
nilfs_free_segment_entry(ent);
}
create, NULL, bhp);
}
+static void nilfs_sufile_mod_counter(struct buffer_head *header_bh,
+ u64 ncleanadd, u64 ndirtyadd)
+{
+ struct nilfs_sufile_header *header;
+ void *kaddr;
+
+ kaddr = kmap_atomic(header_bh->b_page, KM_USER0);
+ header = kaddr + bh_offset(header_bh);
+ le64_add_cpu(&header->sh_ncleansegs, ncleanadd);
+ le64_add_cpu(&header->sh_ndirtysegs, ndirtyadd);
+ kunmap_atomic(kaddr, KM_USER0);
+
+ nilfs_mdt_mark_buffer_dirty(header_bh);
+}
+
+int nilfs_sufile_update(struct inode *sufile, __u64 segnum, int create,
+ void (*dofunc)(struct inode *, __u64,
+ struct buffer_head *,
+ struct buffer_head *))
+{
+ struct buffer_head *header_bh, *bh;
+ int ret;
+
+ if (unlikely(segnum >= nilfs_sufile_get_nsegments(sufile))) {
+ printk(KERN_WARNING "%s: invalid segment number: %llu\n",
+ __func__, (unsigned long long)segnum);
+ return -EINVAL;
+ }
+ down_write(&NILFS_MDT(sufile)->mi_sem);
+
+ ret = nilfs_sufile_get_header_block(sufile, &header_bh);
+ if (ret < 0)
+ goto out_sem;
+
+ ret = nilfs_sufile_get_segment_usage_block(sufile, segnum, create, &bh);
+ if (!ret) {
+ dofunc(sufile, segnum, header_bh, bh);
+ brelse(bh);
+ }
+ brelse(header_bh);
+
+ out_sem:
+ up_write(&NILFS_MDT(sufile)->mi_sem);
+ return ret;
+}
+
/**
* nilfs_sufile_alloc - allocate a segment
* @sufile: inode of segment usage file
int nilfs_sufile_alloc(struct inode *sufile, __u64 *segnump)
{
struct buffer_head *header_bh, *su_bh;
- struct the_nilfs *nilfs;
struct nilfs_sufile_header *header;
struct nilfs_segment_usage *su;
size_t susz = NILFS_MDT(sufile)->mi_entry_size;
down_write(&NILFS_MDT(sufile)->mi_sem);
- nilfs = NILFS_MDT(sufile)->mi_nilfs;
-
ret = nilfs_sufile_get_header_block(sufile, &header_bh);
if (ret < 0)
goto out_sem;
return ret;
}
-/**
- * nilfs_sufile_cancel_free -
- * @sufile: inode of segment usage file
- * @segnum: segment number
- *
- * Description:
- *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- */
-int nilfs_sufile_cancel_free(struct inode *sufile, __u64 segnum)
+void nilfs_sufile_do_cancel_free(struct inode *sufile, __u64 segnum,
+ struct buffer_head *header_bh,
+ struct buffer_head *su_bh)
{
- struct buffer_head *header_bh, *su_bh;
- struct the_nilfs *nilfs;
- struct nilfs_sufile_header *header;
struct nilfs_segment_usage *su;
void *kaddr;
- int ret;
-
- down_write(&NILFS_MDT(sufile)->mi_sem);
-
- nilfs = NILFS_MDT(sufile)->mi_nilfs;
-
- ret = nilfs_sufile_get_header_block(sufile, &header_bh);
- if (ret < 0)
- goto out_sem;
-
- ret = nilfs_sufile_get_segment_usage_block(sufile, segnum, 0, &su_bh);
- if (ret < 0)
- goto out_header;
kaddr = kmap_atomic(su_bh->b_page, KM_USER0);
- su = nilfs_sufile_block_get_segment_usage(
- sufile, segnum, su_bh, kaddr);
+ su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
if (unlikely(!nilfs_segment_usage_clean(su))) {
printk(KERN_WARNING "%s: segment %llu must be clean\n",
__func__, (unsigned long long)segnum);
kunmap_atomic(kaddr, KM_USER0);
- goto out_su_bh;
+ return;
}
nilfs_segment_usage_set_dirty(su);
kunmap_atomic(kaddr, KM_USER0);
- kaddr = kmap_atomic(header_bh->b_page, KM_USER0);
- header = nilfs_sufile_block_get_header(sufile, header_bh, kaddr);
- le64_add_cpu(&header->sh_ncleansegs, -1);
- le64_add_cpu(&header->sh_ndirtysegs, 1);
- kunmap_atomic(kaddr, KM_USER0);
-
- nilfs_mdt_mark_buffer_dirty(header_bh);
+ nilfs_sufile_mod_counter(header_bh, -1, 1);
nilfs_mdt_mark_buffer_dirty(su_bh);
nilfs_mdt_mark_dirty(sufile);
-
- out_su_bh:
- brelse(su_bh);
- out_header:
- brelse(header_bh);
- out_sem:
- up_write(&NILFS_MDT(sufile)->mi_sem);
- return ret;
}
-/**
- * nilfs_sufile_freev - free segments
- * @sufile: inode of segment usage file
- * @segnum: array of segment numbers
- * @nsegs: number of segments
- *
- * Description: nilfs_sufile_freev() frees segments specified by @segnum and
- * @nsegs, which must have been returned by a previous call to
- * nilfs_sufile_alloc().
- *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- */
-#define NILFS_SUFILE_FREEV_PREALLOC 16
-int nilfs_sufile_freev(struct inode *sufile, __u64 *segnum, size_t nsegs)
+void nilfs_sufile_do_scrap(struct inode *sufile, __u64 segnum,
+ struct buffer_head *header_bh,
+ struct buffer_head *su_bh)
{
- struct buffer_head *header_bh, **su_bh,
- *su_bh_prealloc[NILFS_SUFILE_FREEV_PREALLOC];
- struct the_nilfs *nilfs;
- struct nilfs_sufile_header *header;
struct nilfs_segment_usage *su;
void *kaddr;
- int ret, i;
+ int clean, dirty;
- down_write(&NILFS_MDT(sufile)->mi_sem);
-
- nilfs = NILFS_MDT(sufile)->mi_nilfs;
-
- /* prepare resources */
- if (nsegs <= NILFS_SUFILE_FREEV_PREALLOC)
- su_bh = su_bh_prealloc;
- else {
- su_bh = kmalloc(sizeof(*su_bh) * nsegs, GFP_NOFS);
- if (su_bh == NULL) {
- ret = -ENOMEM;
- goto out_sem;
- }
- }
-
- ret = nilfs_sufile_get_header_block(sufile, &header_bh);
- if (ret < 0)
- goto out_su_bh;
- for (i = 0; i < nsegs; i++) {
- ret = nilfs_sufile_get_segment_usage_block(sufile, segnum[i],
- 0, &su_bh[i]);
- if (ret < 0)
- goto out_bh;
- }
-
- /* free segments */
- for (i = 0; i < nsegs; i++) {
- kaddr = kmap_atomic(su_bh[i]->b_page, KM_USER0);
- su = nilfs_sufile_block_get_segment_usage(
- sufile, segnum[i], su_bh[i], kaddr);
- WARN_ON(nilfs_segment_usage_error(su));
- nilfs_segment_usage_set_clean(su);
+ kaddr = kmap_atomic(su_bh->b_page, KM_USER0);
+ su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
+ if (su->su_flags == cpu_to_le32(1UL << NILFS_SEGMENT_USAGE_DIRTY) &&
+ su->su_nblocks == cpu_to_le32(0)) {
kunmap_atomic(kaddr, KM_USER0);
- nilfs_mdt_mark_buffer_dirty(su_bh[i]);
+ return;
}
- kaddr = kmap_atomic(header_bh->b_page, KM_USER0);
- header = nilfs_sufile_block_get_header(sufile, header_bh, kaddr);
- le64_add_cpu(&header->sh_ncleansegs, nsegs);
- le64_add_cpu(&header->sh_ndirtysegs, -(u64)nsegs);
+ clean = nilfs_segment_usage_clean(su);
+ dirty = nilfs_segment_usage_dirty(su);
+
+ /* make the segment garbage */
+ su->su_lastmod = cpu_to_le64(0);
+ su->su_nblocks = cpu_to_le32(0);
+ su->su_flags = cpu_to_le32(1UL << NILFS_SEGMENT_USAGE_DIRTY);
kunmap_atomic(kaddr, KM_USER0);
- nilfs_mdt_mark_buffer_dirty(header_bh);
+
+ nilfs_sufile_mod_counter(header_bh, clean ? (u64)-1 : 0, dirty ? 0 : 1);
+ nilfs_mdt_mark_buffer_dirty(su_bh);
nilfs_mdt_mark_dirty(sufile);
+}
- out_bh:
- for (i--; i >= 0; i--)
- brelse(su_bh[i]);
- brelse(header_bh);
+void nilfs_sufile_do_free(struct inode *sufile, __u64 segnum,
+ struct buffer_head *header_bh,
+ struct buffer_head *su_bh)
+{
+ struct nilfs_segment_usage *su;
+ void *kaddr;
+ int sudirty;
- out_su_bh:
- if (su_bh != su_bh_prealloc)
- kfree(su_bh);
+ kaddr = kmap_atomic(su_bh->b_page, KM_USER0);
+ su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
+ if (nilfs_segment_usage_clean(su)) {
+ printk(KERN_WARNING "%s: segment %llu is already clean\n",
+ __func__, (unsigned long long)segnum);
+ kunmap_atomic(kaddr, KM_USER0);
+ return;
+ }
+ WARN_ON(nilfs_segment_usage_error(su));
+ WARN_ON(!nilfs_segment_usage_dirty(su));
- out_sem:
- up_write(&NILFS_MDT(sufile)->mi_sem);
- return ret;
-}
+ sudirty = nilfs_segment_usage_dirty(su);
+ nilfs_segment_usage_set_clean(su);
+ kunmap_atomic(kaddr, KM_USER0);
+ nilfs_mdt_mark_buffer_dirty(su_bh);
-/**
- * nilfs_sufile_free -
- * @sufile:
- * @segnum:
- */
-int nilfs_sufile_free(struct inode *sufile, __u64 segnum)
-{
- return nilfs_sufile_freev(sufile, &segnum, 1);
+ nilfs_sufile_mod_counter(header_bh, 1, sudirty ? (u64)-1 : 0);
+ nilfs_mdt_mark_dirty(sufile);
}
/**
return ret;
}
-/**
- * nilfs_sufile_set_error - mark a segment as erroneous
- * @sufile: inode of segment usage file
- * @segnum: segment number
- *
- * Description: nilfs_sufile_set_error() marks the segment specified by
- * @segnum as erroneous. The error segment will never be used again.
- *
- * Return Value: On success, 0 is returned. On error, one of the following
- * negative error codes is returned.
- *
- * %-EIO - I/O error.
- *
- * %-ENOMEM - Insufficient amount of memory available.
- *
- * %-EINVAL - Invalid segment usage number.
- */
-int nilfs_sufile_set_error(struct inode *sufile, __u64 segnum)
+void nilfs_sufile_do_set_error(struct inode *sufile, __u64 segnum,
+ struct buffer_head *header_bh,
+ struct buffer_head *su_bh)
{
- struct buffer_head *header_bh, *su_bh;
struct nilfs_segment_usage *su;
- struct nilfs_sufile_header *header;
void *kaddr;
- int ret;
-
- if (unlikely(segnum >= nilfs_sufile_get_nsegments(sufile))) {
- printk(KERN_WARNING "%s: invalid segment number: %llu\n",
- __func__, (unsigned long long)segnum);
- return -EINVAL;
- }
- down_write(&NILFS_MDT(sufile)->mi_sem);
-
- ret = nilfs_sufile_get_header_block(sufile, &header_bh);
- if (ret < 0)
- goto out_sem;
- ret = nilfs_sufile_get_segment_usage_block(sufile, segnum, 0, &su_bh);
- if (ret < 0)
- goto out_header;
+ int suclean;
kaddr = kmap_atomic(su_bh->b_page, KM_USER0);
su = nilfs_sufile_block_get_segment_usage(sufile, segnum, su_bh, kaddr);
if (nilfs_segment_usage_error(su)) {
kunmap_atomic(kaddr, KM_USER0);
- brelse(su_bh);
- goto out_header;
+ return;
}
-
+ suclean = nilfs_segment_usage_clean(su);
nilfs_segment_usage_set_error(su);
kunmap_atomic(kaddr, KM_USER0);
- brelse(su_bh);
- kaddr = kmap_atomic(header_bh->b_page, KM_USER0);
- header = nilfs_sufile_block_get_header(sufile, header_bh, kaddr);
- le64_add_cpu(&header->sh_ndirtysegs, -1);
- kunmap_atomic(kaddr, KM_USER0);
- nilfs_mdt_mark_buffer_dirty(header_bh);
+ if (suclean)
+ nilfs_sufile_mod_counter(header_bh, -1, 0);
nilfs_mdt_mark_buffer_dirty(su_bh);
nilfs_mdt_mark_dirty(sufile);
- brelse(su_bh);
-
- out_header:
- brelse(header_bh);
-
- out_sem:
- up_write(&NILFS_MDT(sufile)->mi_sem);
- return ret;
}
/**
si[i + j].sui_nblocks = le32_to_cpu(su->su_nblocks);
si[i + j].sui_flags = le32_to_cpu(su->su_flags) &
~(1UL << NILFS_SEGMENT_USAGE_ACTIVE);
- if (nilfs_segment_is_active(nilfs, segnum + i + j))
+ if (nilfs_segment_is_active(nilfs, segnum + j))
si[i + j].sui_flags |=
(1UL << NILFS_SEGMENT_USAGE_ACTIVE);
}
}
int nilfs_sufile_alloc(struct inode *, __u64 *);
-int nilfs_sufile_cancel_free(struct inode *, __u64);
-int nilfs_sufile_freev(struct inode *, __u64 *, size_t);
-int nilfs_sufile_free(struct inode *, __u64);
int nilfs_sufile_get_segment_usage(struct inode *, __u64,
struct nilfs_segment_usage **,
struct buffer_head **);
struct buffer_head *);
int nilfs_sufile_get_stat(struct inode *, struct nilfs_sustat *);
int nilfs_sufile_get_ncleansegs(struct inode *, unsigned long *);
-int nilfs_sufile_set_error(struct inode *, __u64);
ssize_t nilfs_sufile_get_suinfo(struct inode *, __u64, struct nilfs_suinfo *,
size_t);
+int nilfs_sufile_update(struct inode *, __u64, int,
+ void (*dofunc)(struct inode *, __u64,
+ struct buffer_head *,
+ struct buffer_head *));
+void nilfs_sufile_do_cancel_free(struct inode *, __u64, struct buffer_head *,
+ struct buffer_head *);
+void nilfs_sufile_do_scrap(struct inode *, __u64, struct buffer_head *,
+ struct buffer_head *);
+void nilfs_sufile_do_free(struct inode *, __u64, struct buffer_head *,
+ struct buffer_head *);
+void nilfs_sufile_do_set_error(struct inode *, __u64, struct buffer_head *,
+ struct buffer_head *);
+
+/**
+ * nilfs_sufile_cancel_free -
+ * @sufile: inode of segment usage file
+ * @segnum: segment number
+ *
+ * Description: nilfs_sufile_cancel_free() takes the segment specified by
+ * @segnum back from the clean state, marking its usage entry dirty again.
+ *
+ * Return Value: On success, 0 is returned. On error, one of the following
+ * negative error codes is returned.
+ *
+ * %-EIO - I/O error.
+ *
+ * %-ENOMEM - Insufficient amount of memory available.
+ */
+static inline int nilfs_sufile_cancel_free(struct inode *sufile, __u64 segnum)
+{
+ return nilfs_sufile_update(sufile, segnum, 0,
+ nilfs_sufile_do_cancel_free);
+}
+
+/**
+ * nilfs_sufile_scrap - make a segment garbage
+ * @sufile: inode of segment usage file
+ * @segnum: segment number to be freed
+ */
+static inline int nilfs_sufile_scrap(struct inode *sufile, __u64 segnum)
+{
+ return nilfs_sufile_update(sufile, segnum, 1, nilfs_sufile_do_scrap);
+}
+
+/**
+ * nilfs_sufile_free - free segment
+ * @sufile: inode of segment usage file
+ * @segnum: segment number to be freed
+ */
+static inline int nilfs_sufile_free(struct inode *sufile, __u64 segnum)
+{
+ return nilfs_sufile_update(sufile, segnum, 0, nilfs_sufile_do_free);
+}
+
+/**
+ * nilfs_sufile_set_error - mark a segment as erroneous
+ * @sufile: inode of segment usage file
+ * @segnum: segment number
+ *
+ * Description: nilfs_sufile_set_error() marks the segment specified by
+ * @segnum as erroneous. The error segment will never be used again.
+ *
+ * Return Value: On success, 0 is returned. On error, one of the following
+ * negative error codes is returned.
+ *
+ * %-EIO - I/O error.
+ *
+ * %-ENOMEM - Insufficient amount of memory available.
+ *
+ * %-EINVAL - Invalid segment usage number.
+ */
+static inline int nilfs_sufile_set_error(struct inode *sufile, __u64 segnum)
+{
+ return nilfs_sufile_update(sufile, segnum, 0,
+ nilfs_sufile_do_set_error);
+}
#endif /* _NILFS_SUFILE_H */
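With the refactoring above, each exported segment-usage operation is reduced to a per-segment callback plus a thin wrapper around nilfs_sufile_update(). A hypothetical new operation (the names are made up and not part of this patch) would follow the same shape:

/* Hypothetical example: touch a segment usage entry so it gets written out. */
static void example_sufile_do_touch(struct inode *sufile, __u64 segnum,
				    struct buffer_head *header_bh,
				    struct buffer_head *su_bh)
{
	nilfs_mdt_mark_buffer_dirty(su_bh);
	nilfs_mdt_mark_dirty(sufile);
}

static inline int example_sufile_touch(struct inode *sufile, __u64 segnum)
{
	/* 0: do not create the segment usage block if it does not exist */
	return nilfs_sufile_update(sufile, segnum, 0, example_sufile_do_touch);
}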
MODULE_AUTHOR("NTT Corp.");
MODULE_DESCRIPTION("A New Implementation of the Log-structured Filesystem "
"(NILFS)");
-MODULE_VERSION(NILFS_VERSION);
MODULE_LICENSE("GPL");
static int nilfs_remount(struct super_block *sb, int *flags, char *data);
{
struct super_block *sb = dentry->d_sb;
struct nilfs_sb_info *sbi = NILFS_SB(sb);
+ struct the_nilfs *nilfs = sbi->s_nilfs;
+ u64 id = huge_encode_dev(sb->s_bdev->bd_dev);
unsigned long long blocks;
unsigned long overhead;
unsigned long nrsvblocks;
sector_t nfreeblocks;
- struct the_nilfs *nilfs = sbi->s_nilfs;
int err;
/*
buf->f_files = atomic_read(&sbi->s_inodes_count);
buf->f_ffree = 0; /* nilfs_count_free_inodes(sb); */
buf->f_namelen = NILFS_NAME_LEN;
+ buf->f_fsid.val[0] = (u32)id;
+ buf->f_fsid.val[1] = (u32)(id >> 32);
+
return 0;
}
static int nilfs_load_super_root(struct the_nilfs *nilfs,
struct nilfs_sb_info *sbi, sector_t sr_block)
{
+ static struct lock_class_key dat_lock_key;
struct buffer_head *bh_sr;
struct nilfs_super_root *raw_sr;
struct nilfs_super_block **sbp = nilfs->ns_sbp;
if (unlikely(err))
goto failed_sufile;
+ lockdep_set_class(&NILFS_MDT(nilfs->ns_dat)->mi_sem, &dat_lock_key);
+ lockdep_set_class(&NILFS_MDT(nilfs->ns_gc_dat)->mi_sem, &dat_lock_key);
+
nilfs_mdt_set_shadow(nilfs->ns_dat, nilfs->ns_gc_dat);
nilfs_mdt_set_entry_size(nilfs->ns_cpfile, checkpoint_size,
sizeof(struct nilfs_cpfile_header));
return written ? written : ret;
}
+static int ocfs2_splice_to_file(struct pipe_inode_info *pipe,
+ struct file *out,
+ struct splice_desc *sd)
+{
+ int ret;
+
+ ret = ocfs2_prepare_inode_for_write(out->f_path.dentry, &sd->pos,
+ sd->total_len, 0, NULL);
+ if (ret < 0) {
+ mlog_errno(ret);
+ return ret;
+ }
+
+ return splice_from_pipe_feed(pipe, sd, pipe_to_file);
+}
+
static ssize_t ocfs2_file_splice_write(struct pipe_inode_info *pipe,
struct file *out,
loff_t *ppos,
unsigned int flags)
{
int ret;
- struct inode *inode = out->f_path.dentry->d_inode;
+ struct address_space *mapping = out->f_mapping;
+ struct inode *inode = mapping->host;
+ struct splice_desc sd = {
+ .total_len = len,
+ .flags = flags,
+ .pos = *ppos,
+ .u.file = out,
+ };
mlog_entry("(0x%p, 0x%p, %u, '%.*s')\n", out, pipe,
(unsigned int)len,
out->f_path.dentry->d_name.len,
out->f_path.dentry->d_name.name);
- mutex_lock_nested(&inode->i_mutex, I_MUTEX_PARENT);
+ if (pipe->inode)
+ mutex_lock_nested(&pipe->inode->i_mutex, I_MUTEX_PARENT);
- ret = ocfs2_rw_lock(inode, 1);
- if (ret < 0) {
- mlog_errno(ret);
- goto out;
- }
+ splice_from_pipe_begin(&sd);
+ do {
+ ret = splice_from_pipe_next(pipe, &sd);
+ if (ret <= 0)
+ break;
- ret = ocfs2_prepare_inode_for_write(out->f_path.dentry, ppos, len, 0,
- NULL);
- if (ret < 0) {
- mlog_errno(ret);
- goto out_unlock;
- }
+ mutex_lock_nested(&inode->i_mutex, I_MUTEX_CHILD);
+ ret = ocfs2_rw_lock(inode, 1);
+ if (ret < 0)
+ mlog_errno(ret);
+ else {
+ ret = ocfs2_splice_to_file(pipe, out, &sd);
+ ocfs2_rw_unlock(inode, 1);
+ }
+ mutex_unlock(&inode->i_mutex);
+ } while (ret > 0);
+ splice_from_pipe_end(pipe, &sd);
if (pipe->inode)
- mutex_lock_nested(&pipe->inode->i_mutex, I_MUTEX_CHILD);
- ret = generic_file_splice_write_nolock(pipe, out, ppos, len, flags);
- if (pipe->inode)
mutex_unlock(&pipe->inode->i_mutex);
-out_unlock:
- ocfs2_rw_unlock(inode, 1);
-out:
- mutex_unlock(&inode->i_mutex);
+ if (sd.num_spliced)
+ ret = sd.num_spliced;
+
+ if (ret > 0) {
+ unsigned long nr_pages;
+
+ *ppos += ret;
+ nr_pages = (ret + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+
+ /*
+ * If file or inode is SYNC and we actually wrote some data,
+ * sync it.
+ */
+ if (unlikely((out->f_flags & O_SYNC) || IS_SYNC(inode))) {
+ int err;
+
+ mutex_lock(&inode->i_mutex);
+ err = ocfs2_rw_lock(inode, 1);
+ if (err < 0) {
+ mlog_errno(err);
+ } else {
+ err = generic_osync_inode(inode, mapping,
+ OSYNC_METADATA|OSYNC_DATA);
+ ocfs2_rw_unlock(inode, 1);
+ }
+ mutex_unlock(&inode->i_mutex);
+
+ if (err)
+ ret = err;
+ }
+ balance_dirty_pages_ratelimited_nr(mapping, nr_pages);
+ }
mlog_exit(ret);
return ret;
* -- Manfred Spraul <manfred@colorfullife.com> 2002-05-09
*/
+static void pipe_lock_nested(struct pipe_inode_info *pipe, int subclass)
+{
+ if (pipe->inode)
+ mutex_lock_nested(&pipe->inode->i_mutex, subclass);
+}
+
+void pipe_lock(struct pipe_inode_info *pipe)
+{
+ /*
+ * pipe_lock() nests non-pipe inode locks (for writing to a file)
+ */
+ pipe_lock_nested(pipe, I_MUTEX_PARENT);
+}
+EXPORT_SYMBOL(pipe_lock);
+
+void pipe_unlock(struct pipe_inode_info *pipe)
+{
+ if (pipe->inode)
+ mutex_unlock(&pipe->inode->i_mutex);
+}
+EXPORT_SYMBOL(pipe_unlock);
+
+void pipe_double_lock(struct pipe_inode_info *pipe1,
+ struct pipe_inode_info *pipe2)
+{
+ BUG_ON(pipe1 == pipe2);
+
+ if (pipe1 < pipe2) {
+ pipe_lock_nested(pipe1, I_MUTEX_PARENT);
+ pipe_lock_nested(pipe2, I_MUTEX_CHILD);
+ } else {
+ pipe_lock_nested(pipe2, I_MUTEX_CHILD);
+ pipe_lock_nested(pipe1, I_MUTEX_PARENT);
+ }
+}
+
/* Drop the inode semaphore and wait for a pipe event, atomically */
void pipe_wait(struct pipe_inode_info *pipe)
{
* is considered a noninteractive wait:
*/
prepare_to_wait(&pipe->wait, &wait, TASK_INTERRUPTIBLE);
- if (pipe->inode)
- mutex_unlock(&pipe->inode->i_mutex);
+ pipe_unlock(pipe);
schedule();
finish_wait(&pipe->wait, &wait);
- if (pipe->inode)
- mutex_lock(&pipe->inode->i_mutex);
+ pipe_lock(pipe);
}
static int
do_wakeup = 0;
page_nr = 0;
- if (pipe->inode)
- mutex_lock(&pipe->inode->i_mutex);
+ pipe_lock(pipe);
for (;;) {
if (!pipe->readers) {
pipe->waiting_writers--;
}
- if (pipe->inode) {
- mutex_unlock(&pipe->inode->i_mutex);
+ pipe_unlock(pipe);
- if (do_wakeup) {
- smp_mb();
- if (waitqueue_active(&pipe->wait))
- wake_up_interruptible(&pipe->wait);
- kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
- }
+ if (do_wakeup) {
+ smp_mb();
+ if (waitqueue_active(&pipe->wait))
+ wake_up_interruptible(&pipe->wait);
+ kill_fasync(&pipe->fasync_readers, SIGIO, POLL_IN);
}
while (page_nr < spd_pages)
* SPLICE_F_MOVE isn't set, or we cannot move the page, we simply create
* a new page in the output file page cache and fill/dirty that.
*/
-static int pipe_to_file(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
- struct splice_desc *sd)
+int pipe_to_file(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
+ struct splice_desc *sd)
{
struct file *file = sd->u.file;
struct address_space *mapping = file->f_mapping;
out:
return ret;
}
+EXPORT_SYMBOL(pipe_to_file);
+
+static void wakeup_pipe_writers(struct pipe_inode_info *pipe)
+{
+ smp_mb();
+ if (waitqueue_active(&pipe->wait))
+ wake_up_interruptible(&pipe->wait);
+ kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
+}
/**
- * __splice_from_pipe - splice data from a pipe to given actor
+ * splice_from_pipe_feed - feed available data from a pipe to a file
* @pipe: pipe to splice from
* @sd: information to @actor
* @actor: handler that splices the data
*
* Description:
- * This function does little more than loop over the pipe and call
- * @actor to do the actual moving of a single struct pipe_buffer to
- * the desired destination. See pipe_to_file, pipe_to_sendpage, or
- * pipe_to_user.
+ * This function loops over the pipe and calls @actor to do the
+ * actual moving of a single struct pipe_buffer to the desired
+ * destination. It returns when there's no more buffers left in
+ * the pipe or if the requested number of bytes (@sd->total_len)
+ * have been copied. It returns a positive number (one) if the
+ * pipe needs to be filled with more data, zero if the required
+ * number of bytes have been copied and -errno on error.
*
+ * This, together with splice_from_pipe_{begin,end,next}, may be
+ * used to implement the functionality of __splice_from_pipe() when
+ * locking is required around copying the pipe buffers to the
+ * destination.
*/
-ssize_t __splice_from_pipe(struct pipe_inode_info *pipe, struct splice_desc *sd,
- splice_actor *actor)
+int splice_from_pipe_feed(struct pipe_inode_info *pipe, struct splice_desc *sd,
+ splice_actor *actor)
{
- int ret, do_wakeup, err;
-
- ret = 0;
- do_wakeup = 0;
-
- for (;;) {
- if (pipe->nrbufs) {
- struct pipe_buffer *buf = pipe->bufs + pipe->curbuf;
- const struct pipe_buf_operations *ops = buf->ops;
+ int ret;
- sd->len = buf->len;
- if (sd->len > sd->total_len)
- sd->len = sd->total_len;
+ while (pipe->nrbufs) {
+ struct pipe_buffer *buf = pipe->bufs + pipe->curbuf;
+ const struct pipe_buf_operations *ops = buf->ops;
- err = actor(pipe, buf, sd);
- if (err <= 0) {
- if (!ret && err != -ENODATA)
- ret = err;
+ sd->len = buf->len;
+ if (sd->len > sd->total_len)
+ sd->len = sd->total_len;
- break;
- }
+ ret = actor(pipe, buf, sd);
+ if (ret <= 0) {
+ if (ret == -ENODATA)
+ ret = 0;
+ return ret;
+ }
+ buf->offset += ret;
+ buf->len -= ret;
- ret += err;
- buf->offset += err;
- buf->len -= err;
+ sd->num_spliced += ret;
+ sd->len -= ret;
+ sd->pos += ret;
+ sd->total_len -= ret;
- sd->len -= err;
- sd->pos += err;
- sd->total_len -= err;
- if (sd->len)
- continue;
+ if (!buf->len) {
+ buf->ops = NULL;
+ ops->release(pipe, buf);
+ pipe->curbuf = (pipe->curbuf + 1) & (PIPE_BUFFERS - 1);
+ pipe->nrbufs--;
+ if (pipe->inode)
+ sd->need_wakeup = true;
+ }
- if (!buf->len) {
- buf->ops = NULL;
- ops->release(pipe, buf);
- pipe->curbuf = (pipe->curbuf + 1) & (PIPE_BUFFERS - 1);
- pipe->nrbufs--;
- if (pipe->inode)
- do_wakeup = 1;
- }
+ if (!sd->total_len)
+ return 0;
+ }
- if (!sd->total_len)
- break;
- }
+ return 1;
+}
+EXPORT_SYMBOL(splice_from_pipe_feed);
- if (pipe->nrbufs)
- continue;
+/**
+ * splice_from_pipe_next - wait for some data to splice from
+ * @pipe: pipe to splice from
+ * @sd: information about the splice operation
+ *
+ * Description:
+ * This function will wait for some data and return a positive
+ * value (one) if pipe buffers are available. It will return zero
+ * or -errno if no more data needs to be spliced.
+ */
+int splice_from_pipe_next(struct pipe_inode_info *pipe, struct splice_desc *sd)
+{
+ while (!pipe->nrbufs) {
if (!pipe->writers)
- break;
- if (!pipe->waiting_writers) {
- if (ret)
- break;
- }
+ return 0;
- if (sd->flags & SPLICE_F_NONBLOCK) {
- if (!ret)
- ret = -EAGAIN;
- break;
- }
+ if (!pipe->waiting_writers && sd->num_spliced)
+ return 0;
- if (signal_pending(current)) {
- if (!ret)
- ret = -ERESTARTSYS;
- break;
- }
+ if (sd->flags & SPLICE_F_NONBLOCK)
+ return -EAGAIN;
- if (do_wakeup) {
- smp_mb();
- if (waitqueue_active(&pipe->wait))
- wake_up_interruptible_sync(&pipe->wait);
- kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
- do_wakeup = 0;
+ if (signal_pending(current))
+ return -ERESTARTSYS;
+
+ if (sd->need_wakeup) {
+ wakeup_pipe_writers(pipe);
+ sd->need_wakeup = false;
}
pipe_wait(pipe);
}
- if (do_wakeup) {
- smp_mb();
- if (waitqueue_active(&pipe->wait))
- wake_up_interruptible(&pipe->wait);
- kill_fasync(&pipe->fasync_writers, SIGIO, POLL_OUT);
- }
+ return 1;
+}
+EXPORT_SYMBOL(splice_from_pipe_next);
- return ret;
+/**
+ * splice_from_pipe_begin - start splicing from pipe
+ * @pipe: pipe to splice from
+ *
+ * Description:
+ * This function should be called before a loop containing
+ * splice_from_pipe_next() and splice_from_pipe_feed() to
+ * initialize the necessary fields of @sd.
+ */
+void splice_from_pipe_begin(struct splice_desc *sd)
+{
+ sd->num_spliced = 0;
+ sd->need_wakeup = false;
+}
+EXPORT_SYMBOL(splice_from_pipe_begin);
+
+/**
+ * splice_from_pipe_end - finish splicing from pipe
+ * @pipe: pipe to splice from
+ * @sd: information about the splice operation
+ *
+ * Description:
+ * This function will wake up pipe writers if necessary. It should
+ * be called after a loop containing splice_from_pipe_next() and
+ * splice_from_pipe_feed().
+ */
+void splice_from_pipe_end(struct pipe_inode_info *pipe, struct splice_desc *sd)
+{
+ if (sd->need_wakeup)
+ wakeup_pipe_writers(pipe);
+}
+EXPORT_SYMBOL(splice_from_pipe_end);
+
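Taken together, the begin/next/feed/end helpers let a caller re-take its own destination locks around each batch of buffers instead of holding them across pipe_wait(), which is how the generic_file_splice_write() and ocfs2 conversions in this series use them. A distilled, hypothetical caller (names assumed, not from the patch) would look roughly like:

static ssize_t example_splice_write(struct pipe_inode_info *pipe,
				    struct file *out, loff_t *ppos,
				    size_t len, unsigned int flags)
{
	struct inode *inode = out->f_mapping->host;
	struct splice_desc sd = {
		.total_len = len,
		.flags = flags,
		.pos = *ppos,
		.u.file = out,
	};
	ssize_t ret;

	pipe_lock(pipe);
	splice_from_pipe_begin(&sd);
	do {
		ret = splice_from_pipe_next(pipe, &sd);
		if (ret <= 0)
			break;

		/*
		 * The destination lock is only held while buffers are
		 * copied; it is dropped again before the next iteration,
		 * where splice_from_pipe_next() may sleep in pipe_wait().
		 */
		mutex_lock_nested(&inode->i_mutex, I_MUTEX_CHILD);
		ret = splice_from_pipe_feed(pipe, &sd, pipe_to_file);
		mutex_unlock(&inode->i_mutex);
	} while (ret > 0);
	splice_from_pipe_end(pipe, &sd);
	pipe_unlock(pipe);

	if (sd.num_spliced)
		ret = sd.num_spliced;
	if (ret > 0)
		*ppos += ret;
	return ret;
}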
+/**
+ * __splice_from_pipe - splice data from a pipe to given actor
+ * @pipe: pipe to splice from
+ * @sd: information to @actor
+ * @actor: handler that splices the data
+ *
+ * Description:
+ * This function does little more than loop over the pipe and call
+ * @actor to do the actual moving of a single struct pipe_buffer to
+ * the desired destination. See pipe_to_file, pipe_to_sendpage, or
+ * pipe_to_user.
+ *
+ */
+ssize_t __splice_from_pipe(struct pipe_inode_info *pipe, struct splice_desc *sd,
+ splice_actor *actor)
+{
+ int ret;
+
+ splice_from_pipe_begin(sd);
+ do {
+ ret = splice_from_pipe_next(pipe, sd);
+ if (ret > 0)
+ ret = splice_from_pipe_feed(pipe, sd, actor);
+ } while (ret > 0);
+ splice_from_pipe_end(pipe, sd);
+
+ return sd->num_spliced ? sd->num_spliced : ret;
}
EXPORT_SYMBOL(__splice_from_pipe);
* @actor: handler that splices the data
*
* Description:
- * See __splice_from_pipe. This function locks the input and output inodes,
+ * See __splice_from_pipe. This function locks the pipe inode,
* otherwise it's identical to __splice_from_pipe().
*
*/
splice_actor *actor)
{
ssize_t ret;
- struct inode *inode = out->f_mapping->host;
struct splice_desc sd = {
.total_len = len,
.flags = flags,
.u.file = out,
};
- /*
- * The actor worker might be calling ->write_begin and
- * ->write_end. Most of the time, these expect i_mutex to
- * be held. Since this may result in an ABBA deadlock with
- * pipe->inode, we have to order lock acquiry here.
- *
- * Outer lock must be inode->i_mutex, as pipe_wait() will
- * release and reacquire pipe->inode->i_mutex, AND inode must
- * never be a pipe.
- */
- WARN_ON(S_ISFIFO(inode->i_mode));
- mutex_lock_nested(&inode->i_mutex, I_MUTEX_PARENT);
- if (pipe->inode)
- mutex_lock_nested(&pipe->inode->i_mutex, I_MUTEX_CHILD);
+ pipe_lock(pipe);
ret = __splice_from_pipe(pipe, &sd, actor);
- if (pipe->inode)
- mutex_unlock(&pipe->inode->i_mutex);
- mutex_unlock(&inode->i_mutex);
+ pipe_unlock(pipe);
return ret;
}
/**
- * generic_file_splice_write_nolock - generic_file_splice_write without mutexes
+ * generic_file_splice_write - splice data from a pipe to a file
* @pipe: pipe info
* @out: file to write to
* @ppos: position in @out
*
* Description:
* Will either move or copy pages (determined by @flags options) from
- * the given pipe inode to the given file. The caller is responsible
- * for acquiring i_mutex on both inodes.
+ * the given pipe inode to the given file.
*
*/
ssize_t
-generic_file_splice_write_nolock(struct pipe_inode_info *pipe, struct file *out,
- loff_t *ppos, size_t len, unsigned int flags)
+generic_file_splice_write(struct pipe_inode_info *pipe, struct file *out,
+ loff_t *ppos, size_t len, unsigned int flags)
{
struct address_space *mapping = out->f_mapping;
struct inode *inode = mapping->host;
.u.file = out,
};
ssize_t ret;
- int err;
-
- err = file_remove_suid(out);
- if (unlikely(err))
- return err;
-
- ret = __splice_from_pipe(pipe, &sd, pipe_to_file);
- if (ret > 0) {
- unsigned long nr_pages;
- *ppos += ret;
- nr_pages = (ret + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+ pipe_lock(pipe);
- /*
- * If file or inode is SYNC and we actually wrote some data,
- * sync it.
- */
- if (unlikely((out->f_flags & O_SYNC) || IS_SYNC(inode))) {
- err = generic_osync_inode(inode, mapping,
- OSYNC_METADATA|OSYNC_DATA);
-
- if (err)
- ret = err;
- }
- balance_dirty_pages_ratelimited_nr(mapping, nr_pages);
- }
+ splice_from_pipe_begin(&sd);
+ do {
+ ret = splice_from_pipe_next(pipe, &sd);
+ if (ret <= 0)
+ break;
- return ret;
-}
+ mutex_lock_nested(&inode->i_mutex, I_MUTEX_CHILD);
+ ret = file_remove_suid(out);
+ if (!ret)
+ ret = splice_from_pipe_feed(pipe, &sd, pipe_to_file);
+ mutex_unlock(&inode->i_mutex);
+ } while (ret > 0);
+ splice_from_pipe_end(pipe, &sd);
-EXPORT_SYMBOL(generic_file_splice_write_nolock);
+ pipe_unlock(pipe);
-/**
- * generic_file_splice_write - splice data from a pipe to a file
- * @pipe: pipe info
- * @out: file to write to
- * @ppos: position in @out
- * @len: number of bytes to splice
- * @flags: splice modifier flags
- *
- * Description:
- * Will either move or copy pages (determined by @flags options) from
- * the given pipe inode to the given file.
- *
- */
-ssize_t
-generic_file_splice_write(struct pipe_inode_info *pipe, struct file *out,
- loff_t *ppos, size_t len, unsigned int flags)
-{
- struct address_space *mapping = out->f_mapping;
- struct inode *inode = mapping->host;
- struct splice_desc sd = {
- .total_len = len,
- .flags = flags,
- .pos = *ppos,
- .u.file = out,
- };
- ssize_t ret;
+ if (sd.num_spliced)
+ ret = sd.num_spliced;
- WARN_ON(S_ISFIFO(inode->i_mode));
- mutex_lock_nested(&inode->i_mutex, I_MUTEX_PARENT);
- ret = file_remove_suid(out);
- if (likely(!ret)) {
- if (pipe->inode)
- mutex_lock_nested(&pipe->inode->i_mutex, I_MUTEX_CHILD);
- ret = __splice_from_pipe(pipe, &sd, pipe_to_file);
- if (pipe->inode)
- mutex_unlock(&pipe->inode->i_mutex);
- }
- mutex_unlock(&inode->i_mutex);
if (ret > 0) {
unsigned long nr_pages;
if (!pipe)
return -EBADF;
- if (pipe->inode)
- mutex_lock(&pipe->inode->i_mutex);
+ pipe_lock(pipe);
error = ret = 0;
while (nr_segs) {
iov++;
}
- if (pipe->inode)
- mutex_unlock(&pipe->inode->i_mutex);
+ pipe_unlock(pipe);
if (!ret)
ret = error;
return 0;
ret = 0;
- mutex_lock(&pipe->inode->i_mutex);
+ pipe_lock(pipe);
while (!pipe->nrbufs) {
if (signal_pending(current)) {
pipe_wait(pipe);
}
- mutex_unlock(&pipe->inode->i_mutex);
+ pipe_unlock(pipe);
return ret;
}
return 0;
ret = 0;
- mutex_lock(&pipe->inode->i_mutex);
+ pipe_lock(pipe);
while (pipe->nrbufs >= PIPE_BUFFERS) {
if (!pipe->readers) {
pipe->waiting_writers--;
}
- mutex_unlock(&pipe->inode->i_mutex);
+ pipe_unlock(pipe);
return ret;
}
/*
* Potential ABBA deadlock, work around it by ordering lock
- * grabbing by inode address. Otherwise two different processes
+ * grabbing by pipe info address. Otherwise two different processes
* could deadlock (one doing tee from A -> B, the other from B -> A).
*/
- inode_double_lock(ipipe->inode, opipe->inode);
+ pipe_double_lock(ipipe, opipe);
do {
if (!opipe->readers) {
if (!ret && ipipe->waiting_writers && (flags & SPLICE_F_NONBLOCK))
ret = -EAGAIN;
- inode_double_unlock(ipipe->inode, opipe->inode);
+ pipe_unlock(ipipe);
+ pipe_unlock(opipe);
/*
* If we put data in the output pipe, wakeup any potential readers.
#else /* !CONFIG_BUG */
#ifndef HAVE_ARCH_BUG
-#define BUG()
+#define BUG() do {} while(0)
#endif
#ifndef HAVE_ARCH_BUG_ON
#define I915_BIT_6_SWIZZLE_9_10_11 4
/* Not seen by userland */
#define I915_BIT_6_SWIZZLE_UNKNOWN 5
+/* Seen by userland. */
+#define I915_BIT_6_SWIZZLE_9_17 6
+#define I915_BIT_6_SWIZZLE_9_10_17 7
struct drm_i915_gem_set_tiling {
/** Handle of the buffer to have its tiling state updated */
return bio && bio->bi_io_vec != NULL;
}
+/*
+ * BIO list management for use by remapping drivers (e.g. DM or MD).
+ *
+ * A bio_list anchors a singly-linked list of bios chained through the bi_next
+ * member of the bio. The bio_list also caches the last list member to allow
+ * fast access to the tail.
+ */
+struct bio_list {
+ struct bio *head;
+ struct bio *tail;
+};
+
+static inline int bio_list_empty(const struct bio_list *bl)
+{
+ return bl->head == NULL;
+}
+
+static inline void bio_list_init(struct bio_list *bl)
+{
+ bl->head = bl->tail = NULL;
+}
+
+#define bio_list_for_each(bio, bl) \
+ for (bio = (bl)->head; bio; bio = bio->bi_next)
+
+static inline unsigned bio_list_size(const struct bio_list *bl)
+{
+ unsigned sz = 0;
+ struct bio *bio;
+
+ bio_list_for_each(bio, bl)
+ sz++;
+
+ return sz;
+}
+
+static inline void bio_list_add(struct bio_list *bl, struct bio *bio)
+{
+ bio->bi_next = NULL;
+
+ if (bl->tail)
+ bl->tail->bi_next = bio;
+ else
+ bl->head = bio;
+
+ bl->tail = bio;
+}
+
+static inline void bio_list_add_head(struct bio_list *bl, struct bio *bio)
+{
+ bio->bi_next = bl->head;
+
+ bl->head = bio;
+
+ if (!bl->tail)
+ bl->tail = bio;
+}
+
+static inline void bio_list_merge(struct bio_list *bl, struct bio_list *bl2)
+{
+ if (!bl2->head)
+ return;
+
+ if (bl->tail)
+ bl->tail->bi_next = bl2->head;
+ else
+ bl->head = bl2->head;
+
+ bl->tail = bl2->tail;
+}
+
+static inline void bio_list_merge_head(struct bio_list *bl,
+ struct bio_list *bl2)
+{
+ if (!bl2->head)
+ return;
+
+ if (bl->head)
+ bl2->tail->bi_next = bl->head;
+ else
+ bl->tail = bl2->tail;
+
+ bl->head = bl2->head;
+}
+
+static inline struct bio *bio_list_pop(struct bio_list *bl)
+{
+ struct bio *bio = bl->head;
+
+ if (bio) {
+ bl->head = bl->head->bi_next;
+ if (!bl->head)
+ bl->tail = NULL;
+
+ bio->bi_next = NULL;
+ }
+
+ return bio;
+}
+
+static inline struct bio *bio_list_get(struct bio_list *bl)
+{
+ struct bio *bio = bl->head;
+
+ bl->head = bl->tail = NULL;
+
+ return bio;
+}
+
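A minimal usage sketch of the helpers added above, roughly how a remapping driver such as DM might park incoming bios and drain them later in FIFO order (bio_to_defer is just a placeholder for a bio the driver cannot service yet):

	struct bio_list pending;
	struct bio *bio;

	bio_list_init(&pending);			/* empty list: head == tail == NULL */
	bio_list_add(&pending, bio_to_defer);		/* append at the tail */

	while ((bio = bio_list_pop(&pending)))		/* detach from the head */
		generic_make_request(bio);		/* resubmit to the block layer */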
#if defined(CONFIG_BLK_DEV_INTEGRITY)
#define bip_vec_idx(bip, idx) (&(bip->bip_vec[(idx)]))
unsigned long b_state);
void end_buffer_read_sync(struct buffer_head *bh, int uptodate);
void end_buffer_write_sync(struct buffer_head *bh, int uptodate);
+void end_buffer_async_write(struct buffer_head *bh, int uptodate);
/* Things to do with buffers at mapping->private_list */
void mark_buffer_dirty_inode(struct buffer_head *bh, struct inode *inode);
void block_invalidatepage(struct page *page, unsigned long offset);
int block_write_full_page(struct page *page, get_block_t *get_block,
struct writeback_control *wbc);
+int block_write_full_page_endio(struct page *page, get_block_t *get_block,
+ struct writeback_control *wbc, bh_end_io_t *handler);
int block_read_full_page(struct page*, get_block_t*);
int block_is_partially_uptodate(struct page *page, read_descriptor_t *desc,
unsigned long from);
*/
#define FMODE_NOCMTIME ((__force fmode_t)2048)
+/*
+ * The below are the various read and write types that we support. Some of
+ * them include behavioral modifiers that send information down to the
+ * block layer and IO scheduler. Terminology:
+ *
+ * The block layer uses device plugging to defer IO a little bit, in
+ * the hope that we will see more IO very shortly. This increases
+ * coalescing of adjacent IO and thus reduces the number of IOs we
+ * have to send to the device. It also allows for better queuing,
+ * if the IO isn't mergeable. If the caller is going to be waiting
+ * for the IO, then he must ensure that the device is unplugged so
+ * that the IO is dispatched to the driver.
+ *
+ * All IO is handled async in Linux. This is fine for background
+ * writes, but for reads or writes that someone waits for completion
+ * on, we want to notify the block layer and IO scheduler so that they
+ * know about it. That allows them to make better scheduling
+ * decisions. So when the below references 'sync' and 'async', it
+ * is referencing this priority hint.
+ *
+ * With that in mind, the available types are:
+ *
+ * READ A normal read operation. Device will be plugged.
+ * READ_SYNC A synchronous read. Device is not plugged, caller can
+ * immediately wait on this read without caring about
+ * unplugging.
+ * READA Used for read-ahead operations. Lower priority, and the
+ * block layer could (in theory) choose to ignore this
+ * request if it runs into resource problems.
+ * WRITE A normal async write. Device will be plugged.
+ * SWRITE Like WRITE, but a special case for ll_rw_block() that
+ * tells it to lock the buffer first. Normally a buffer
+ * must be locked before doing IO.
+ * WRITE_SYNC_PLUG Synchronous write. Identical to WRITE, but passes down
+ * the hint that someone will be waiting on this IO
+ * shortly. The device must still be unplugged explicitly,
+ * WRITE_SYNC_PLUG does not do this as we could be
+ * submitting more writes before we actually wait on any
+ * of them.
+ * WRITE_SYNC Like WRITE_SYNC_PLUG, but also unplugs the device
+ * immediately after submission. The write equivalent
+ * of READ_SYNC.
+ * WRITE_ODIRECT Special case write for O_DIRECT only.
+ * SWRITE_SYNC
+ * SWRITE_SYNC_PLUG Like WRITE_SYNC/WRITE_SYNC_PLUG, but locks the buffer.
+ * See SWRITE.
+ * WRITE_BARRIER Like WRITE, but tells the block layer that all
+ * previously submitted writes must be safely on storage
+ * before this one is started. Also guarantees that when
+ * this write is complete, it itself is also safely on
+ * storage. Prevents reordering of writes on both sides
+ * of this IO.
+ *
+ */
#define RW_MASK 1
#define RWA_MASK 2
#define READ 0
(SWRITE | (1 << BIO_RW_SYNCIO) | (1 << BIO_RW_NOIDLE))
#define SWRITE_SYNC (SWRITE_SYNC_PLUG | (1 << BIO_RW_UNPLUG))
#define WRITE_BARRIER (WRITE | (1 << BIO_RW_BARRIER))
+
+/*
+ * These aren't really reads or writes; they pass down information about
+ * parts of the device that are now unused by the file system.
+ */
#define DISCARD_NOBARRIER (1 << BIO_RW_DISCARD)
#define DISCARD_BARRIER ((1 << BIO_RW_DISCARD) | (1 << BIO_RW_BARRIER))
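As a minimal illustration of the hints described above (a sketch closely following the sync_dirty_buffer() pattern of this era, not a new API): a buffer write that the caller waits on immediately is submitted with WRITE_SYNC, so it carries the sync priority hint and unplugs the queue right away:

	lock_buffer(bh);
	if (test_clear_buffer_dirty(bh)) {
		get_bh(bh);
		bh->b_end_io = end_buffer_write_sync;
		submit_bh(WRITE_SYNC, bh);	/* sync hint + immediate unplug */
		wait_on_buffer(bh);		/* no plug is holding the IO back */
	} else {
		unlock_buffer(bh);
	}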
I_MUTEX_QUOTA
};
-extern void inode_double_lock(struct inode *inode1, struct inode *inode2);
-extern void inode_double_unlock(struct inode *inode1, struct inode *inode2);
-
/*
* NOTE: in a 32bit arch with a preemptable kernel and
* an UP compile the i_size_read/write must be atomic
struct pipe_inode_info *, size_t, unsigned int);
extern ssize_t generic_file_splice_write(struct pipe_inode_info *,
struct file *, loff_t *, size_t, unsigned int);
-extern ssize_t generic_file_splice_write_nolock(struct pipe_inode_info *,
- struct file *, loff_t *, size_t, unsigned int);
extern ssize_t generic_splice_sendpage(struct pipe_inode_info *pipe,
struct file *out, loff_t *, size_t len, unsigned int flags);
extern long do_splice_direct(struct file *in, loff_t *ppos, struct file *out,
*
*/
-/* Flags related to I2C device features */
-#define FSL_I2C_DEV_SEPARATE_DFSRR 0x00000001
-#define FSL_I2C_DEV_CLOCK_5200 0x00000002
-
enum fsl_usb2_operating_modes {
FSL_USB2_MPH_HOST,
FSL_USB2_DR_HOST,
/* bits 24:31 of ap->flags are reserved for LLD specific flags */
+
/* struct ata_port pflags */
ATA_PFLAG_EH_PENDING = (1 << 0), /* EH pending */
ATA_PFLAG_EH_IN_PROGRESS = (1 << 1), /* EH in progress */
ATA_PFLAG_PM_PENDING = (1 << 18), /* PM operation pending */
ATA_PFLAG_INIT_GTM_VALID = (1 << 19), /* initial gtm data valid */
+ ATA_PFLAG_PIO32 = (1 << 20), /* 32bit PIO */
+ ATA_PFLAG_PIO32CHANGE = (1 << 21), /* 32bit PIO can be turned on/off */
+
/* struct ata_queued_cmd flags */
ATA_QCFLAG_ACTIVE = (1 << 0), /* cmd not yet ack'd to scsi layer */
ATA_QCFLAG_DMAMAP = (1 << 1), /* SG table is DMA mapped */
struct Scsi_Host *scsi_host; /* our co-allocated scsi host */
struct ata_port_operations *ops;
spinlock_t *lock;
+ /* Flags owned by the EH context. Only EH should touch these once the
+ port is active */
unsigned long flags; /* ATA_FLAG_xxx */
+ /* Flags that change dynamically, protected by ap->lock */
unsigned int pflags; /* ATA_PFLAG_xxx */
unsigned int print_id; /* user visible unique port ID */
unsigned int port_no; /* 0 based port no. inside the host */
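A minimal sketch of the locking rule stated in the comments above: pflags holds dynamic state and is only changed under ap->lock, for example when toggling the new 32bit PIO flag:

	unsigned long irqflags;

	spin_lock_irqsave(ap->lock, irqflags);
	ap->pflags |= ATA_PFLAG_PIO32;		/* dynamic state: must hold ap->lock */
	spin_unlock_irqrestore(ap->lock, irqflags);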
extern void ata_sff_error_handler(struct ata_port *ap);
extern void ata_sff_post_internal_cmd(struct ata_queued_cmd *qc);
extern int ata_sff_port_start(struct ata_port *ap);
+extern int ata_sff_port_start32(struct ata_port *ap);
extern void ata_sff_std_ports(struct ata_ioports *ioaddr);
extern unsigned long ata_bmdma_mode_filter(struct ata_device *dev,
unsigned long xfer_mask);
/* Enables or disables interrupts */
int (*config_intr)(struct phy_device *phydev);
+ /*
+ * Checks if the PHY generated an interrupt.
+ * For multi-PHY devices with shared PHY interrupt pin
+ */
+ int (*did_interrupt)(struct phy_device *phydev);
+
/* Clears up any memory if needed */
void (*remove)(struct phy_device *phydev);
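A minimal sketch of a did_interrupt() implementation for a multi-PHY package sharing one interrupt pin; MY_PHY_INTR_STATUS and MY_PHY_INTR_PENDING are made-up vendor register/bit names used only for illustration:

	static int my_phy_did_interrupt(struct phy_device *phydev)
	{
		int status = phy_read(phydev, MY_PHY_INTR_STATUS);

		if (status < 0)
			return 0;	/* read failed: assume this PHY did not interrupt */

		return (status & MY_PHY_INTR_PENDING) != 0;
	}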
memory allocation, whereas PIPE_BUF makes atomicity guarantees. */
#define PIPE_SIZE PAGE_SIZE
+/* Pipe lock and unlock operations */
+void pipe_lock(struct pipe_inode_info *);
+void pipe_unlock(struct pipe_inode_info *);
+void pipe_double_lock(struct pipe_inode_info *, struct pipe_inode_info *);
+
/* Drop the inode semaphore and wait for a pipe event, atomically */
void pipe_wait(struct pipe_inode_info *pipe);
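A minimal sketch of how these helpers are used in the splice paths above; a single pipe is simply locked around buffer manipulation, and pipe_double_lock() takes both pipes in address order so that concurrent A->B and B->A transfers cannot ABBA-deadlock:

	pipe_lock(pipe);
	/* ... inspect or modify pipe->nrbufs, pipe->bufs[], wait via pipe_wait() ... */
	pipe_unlock(pipe);

	pipe_double_lock(ipipe, opipe);		/* ordered internally by address */
	/* ... move or copy buffers between the two pipes ... */
	pipe_unlock(ipipe);
	pipe_unlock(opipe);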
void *data; /* cookie */
} u;
loff_t pos; /* file position */
+ size_t num_spliced; /* number of bytes already spliced */
+ bool need_wakeup; /* need to wake up writer */
};
struct partial_page {
splice_actor *);
extern ssize_t __splice_from_pipe(struct pipe_inode_info *,
struct splice_desc *, splice_actor *);
+extern int splice_from_pipe_feed(struct pipe_inode_info *, struct splice_desc *,
+ splice_actor *);
+extern int splice_from_pipe_next(struct pipe_inode_info *,
+ struct splice_desc *);
+extern void splice_from_pipe_begin(struct splice_desc *);
+extern void splice_from_pipe_end(struct pipe_inode_info *,
+ struct splice_desc *);
+extern int pipe_to_file(struct pipe_inode_info *, struct pipe_buffer *,
+ struct splice_desc *);
+
extern ssize_t splice_to_pipe(struct pipe_inode_info *,
struct splice_pipe_desc *);
extern ssize_t splice_direct_to_actor(struct file *, struct splice_desc *,
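A minimal sketch of the intended begin/next/feed/end loop, modeled on the generic_file_splice_write() rewrite earlier in this patch; my_actor stands in for whatever splice_actor the caller supplies:

	splice_from_pipe_begin(&sd);
	do {
		ret = splice_from_pipe_next(pipe, &sd);	/* wait for data, handle signals */
		if (ret <= 0)
			break;
		ret = splice_from_pipe_feed(pipe, &sd, my_actor);
	} while (ret > 0);
	splice_from_pipe_end(pipe, &sd);

	if (sd.num_spliced)
		ret = sd.num_spliced;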
/**
* usb_serial_port: structure for the specific ports of a device.
* @serial: pointer back to the struct usb_serial owner of this port.
- * @tty: pointer to the corresponding tty for this port.
+ * @port: pointer to the corresponding tty_port for this port.
* @lock: spinlock to grab when updating portions of this structure.
* @mutex: mutex used to synchronize serial_open() and serial_close()
* access for this port.
* @interrupt_out_endpointAddress: endpoint address for the interrupt out pipe
* for this port.
* @bulk_in_buffer: pointer to the bulk in buffer for this port.
+ * @bulk_in_size: the size of the bulk_in_buffer, in bytes.
* @read_urb: pointer to the bulk in struct urb for this port.
* @bulk_in_endpointAddress: endpoint address for the bulk in pipe for this
* port.
* @bulk_out_buffer: pointer to the bulk out buffer for this port.
* @bulk_out_size: the size of the bulk_out_buffer, in bytes.
* @write_urb: pointer to the bulk out struct urb for this port.
+ * @write_urb_busy: port's writing status
* @bulk_out_endpointAddress: endpoint address for the bulk out pipe for this
* port.
* @write_wait: a wait_queue_head_t used by the port.
* @work: work queue entry for the line discipline waking up.
- * @open_count: number of times this port has been opened.
* @throttled: nonzero if the read urb is inactive to throttle the device
* @throttle_req: nonzero if the tty wants to throttle us
+ * @console: attached usb serial console
+ * @dev: pointer to the serial device
*
* This structure is used by the usb-serial core and drivers for the specific
* ports of a device.
sk_common_release(sk);
}
-extern int ipv4_rcv_saddr_equal(const struct sock *sk1,
- const struct sock *sk2);
extern int udp_lib_get_port(struct sock *sk, unsigned short snum,
int (*)(const struct sock*,const struct sock*));
int type;
const char *id;
char name[100];
+ void *private_data;
+ void (*private_free)(struct snd_jack *);
};
#ifdef CONFIG_SND_JACK
int overrange;
snd_pcm_uframes_t avail_max;
snd_pcm_uframes_t hw_ptr_base; /* Position at buffer restart */
- snd_pcm_uframes_t hw_ptr_interrupt; /* Position at interrupt time*/
+ snd_pcm_uframes_t hw_ptr_interrupt; /* Position at interrupt time */
+ unsigned long hw_ptr_jiffies; /* Time when hw_ptr is updated */
/* -- HW params -- */
snd_pcm_access_t access; /* access mode */
struct bio *bio;
bio = bio_alloc(__GFP_WAIT | __GFP_HIGH, 1);
- if (!bio)
- return -ENOMEM;
bio->bi_sector = page_off * (PAGE_SIZE >> 9);
bio->bi_bdev = resume_bdev;
bio->bi_end_io = end_swap_bio_read;
static struct completion rcu_barrier_completion;
int rcu_scheduler_active __read_mostly;
+static atomic_t rcu_migrate_type_count = ATOMIC_INIT(0);
+static struct rcu_head rcu_migrate_head[3];
+static DECLARE_WAIT_QUEUE_HEAD(rcu_migrate_wq);
+
/*
* Awaken the corresponding synchronize_rcu() instance now that a
* grace period has elapsed.
}
}
-static inline void wait_migrated_callbacks(void);
+static inline void wait_migrated_callbacks(void)
+{
+ wait_event(rcu_migrate_wq, !atomic_read(&rcu_migrate_type_count));
+}
/*
* Orchestrate the specified type of RCU barrier, waiting for all
}
EXPORT_SYMBOL_GPL(rcu_barrier_sched);
-static atomic_t rcu_migrate_type_count = ATOMIC_INIT(0);
-static struct rcu_head rcu_migrate_head[3];
-static DECLARE_WAIT_QUEUE_HEAD(rcu_migrate_wq);
-
static void rcu_migrate_callback(struct rcu_head *notused)
{
if (atomic_dec_and_test(&rcu_migrate_type_count))
wake_up(&rcu_migrate_wq);
}
-static inline void wait_migrated_callbacks(void)
-{
- wait_event(rcu_migrate_wq, !atomic_read(&rcu_migrate_type_count));
-}
-
static int __cpuinit rcu_barrier_cpu_hotplug(struct notifier_block *self,
unsigned long action, void *hcpu)
{
goto exit;
retval = call_usermodehelper(argv[0], argv,
- env->envp, UMH_NO_WAIT);
+ env->envp, UMH_WAIT_EXEC);
}
exit:
}
return err;
}
+EXPORT_SYMBOL(filemap_write_and_wait_range);
/**
* add_to_page_cache_locked - add a locked page to the pagecache
* Overcommit.. This must be the final test, as it will
* update security statistics.
*/
- if (security_vm_enough_memory(grow))
+ if (security_vm_enough_memory_mm(mm, grow))
return -ENOMEM;
/* Ok, everything looks good - let it rip */
}
seq_putc(seq, '\n');
}
+
+ if (dev)
+ dev_put(dev);
}
return 0;
}
if (!skb)
return NET_RX_DROP;
- if (netpoll_rx_on(skb))
+ if (netpoll_rx_on(skb)) {
+ skb->protocol = eth_type_trans(skb, skb->dev);
return vlan_hwaccel_receive_skb(skb, grp, vlan_tci);
+ }
return napi_frags_finish(napi, skb,
vlan_gro_common(napi, grp, vlan_tci, skb));
{
if (test_and_clear_bit(__LINK_STATE_PRESENT, &dev->state) &&
netif_running(dev)) {
- netif_stop_queue(dev);
+ netif_tx_stop_all_queues(dev);
}
}
EXPORT_SYMBOL(netif_device_detach);
{
if (!test_and_set_bit(__LINK_STATE_PRESENT, &dev->state) &&
netif_running(dev)) {
- netif_wake_queue(dev);
+ netif_tx_wake_all_queues(dev);
__netdev_watchdog_up(dev);
}
}
struct list_head *head = &ptype_base[ntohs(type) & PTYPE_HASH_MASK];
int err = -ENOENT;
- if (NAPI_GRO_CB(skb)->count == 1)
+ if (NAPI_GRO_CB(skb)->count == 1) {
+ skb_shinfo(skb)->gso_size = 0;
goto out;
+ }
rcu_read_lock();
list_for_each_entry_rcu(ptype, head, list) {
}
out:
- skb_shinfo(skb)->gso_size = 0;
return netif_receive_skb(skb);
}
tcp_set_rto(sk);
if (inet_csk(sk)->icsk_rto < TCP_TIMEOUT_INIT && !tp->rx_opt.saw_tstamp)
goto reset;
+
+cwnd:
tp->snd_cwnd = tcp_init_cwnd(tp, dst);
tp->snd_cwnd_stamp = tcp_time_stamp;
return;
tp->mdev = tp->mdev_max = tp->rttvar = TCP_TIMEOUT_INIT;
inet_csk(sk)->icsk_rto = TCP_TIMEOUT_INIT;
}
+ goto cwnd;
}
static void tcp_update_reordering(struct sock *sk, const int metric,
return error;
}
-int ipv4_rcv_saddr_equal(const struct sock *sk1, const struct sock *sk2)
+static int ipv4_rcv_saddr_equal(const struct sock *sk1, const struct sock *sk2)
{
struct inet_sock *inet1 = inet_sk(sk1), *inet2 = inet_sk(sk2);
EXPORT_SYMBOL(udp_lib_setsockopt);
EXPORT_SYMBOL(udp_poll);
EXPORT_SYMBOL(udp_lib_get_port);
-EXPORT_SYMBOL(ipv4_rcv_saddr_equal);
#ifdef CONFIG_PROC_FS
EXPORT_SYMBOL(udp_proc_register);
default:
goto sticky_done;
}
-
- if ((rthdr->hdrlen & 1) ||
- (rthdr->hdrlen >> 1) != rthdr->segments_left)
- goto sticky_done;
}
retv = 0;
{
const struct in6_addr *sk_rcv_saddr6 = &inet6_sk(sk)->rcv_saddr;
const struct in6_addr *sk2_rcv_saddr6 = inet6_rcv_saddr(sk2);
+ __be32 sk_rcv_saddr = inet_sk(sk)->rcv_saddr;
+ __be32 sk2_rcv_saddr = inet_rcv_saddr(sk2);
int sk_ipv6only = ipv6_only_sock(sk);
int sk2_ipv6only = inet_v6_ipv6only(sk2);
int addr_type = ipv6_addr_type(sk_rcv_saddr6);
/* if both are mapped, treat as IPv4 */
if (addr_type == IPV6_ADDR_MAPPED && addr_type2 == IPV6_ADDR_MAPPED)
- return ipv4_rcv_saddr_equal(sk, sk2);
+ return (!sk2_ipv6only &&
+ (!sk_rcv_saddr || !sk2_rcv_saddr ||
+ sk_rcv_saddr == sk2_rcv_saddr));
if (addr_type2 == IPV6_ADDR_ANY &&
!(sk2_ipv6only && addr_type == IPV6_ADDR_MAPPED))
static inline char *alloc_one_pg_vec_page(unsigned long order)
{
- return (char *) __get_free_pages(GFP_KERNEL | __GFP_COMP | __GFP_ZERO,
- order);
+ gfp_t gfp_flags = GFP_KERNEL | __GFP_COMP | __GFP_ZERO | __GFP_NOWARN;
+
+ return (char *) __get_free_pages(gfp_flags, order);
}
static char **alloc_pg_vec(struct tpacket_req *req, int order)
unsigned char *asmptr;
int n, size, qbit = 0;
- /* ROSE empty frame has no meaning : don't send */
- if (len == 0)
- return 0;
-
if (msg->msg_flags & ~(MSG_DONTWAIT|MSG_EOR|MSG_CMSG_COMPAT))
return -EINVAL;
skb_reset_transport_header(skb);
copied = skb->len;
- /* ROSE empty frame has no meaning : ignore it */
- if (copied == 0) {
- skb_free_datagram(sk, skb);
- return copied;
- }
-
if (copied > size) {
copied = size;
msg->msg_flags |= MSG_TRUNC;
META_COLLECTOR(int_vlan_tag)
{
- unsigned short uninitialized_var(tag);
- if (vlan_get_tag(skb, &tag) < 0)
+ unsigned short tag;
+
+ tag = vlan_tx_tag_get(skb);
+ if (!tag && __vlan_get_tag(skb, &tag))
*err = -1;
else
dst->value = tag;
{
struct snd_ctl_elem_value *control;
int result;
-
- control = kmalloc(sizeof(*control), GFP_KERNEL);
- if (control == NULL)
- return -ENOMEM;
- if (copy_from_user(control, _control, sizeof(*control))) {
- kfree(control);
- return -EFAULT;
- }
+
+ control = memdup_user(_control, sizeof(*control));
+ if (IS_ERR(control))
+ return PTR_ERR(control);
+
snd_power_lock(card);
result = snd_power_wait(card, SNDRV_CTL_POWER_D0);
if (result >= 0)
struct snd_card *card;
int result;
- control = kmalloc(sizeof(*control), GFP_KERNEL);
- if (control == NULL)
- return -ENOMEM;
- if (copy_from_user(control, _control, sizeof(*control))) {
- kfree(control);
- return -EFAULT;
- }
+ control = memdup_user(_control, sizeof(*control));
+ if (IS_ERR(control))
+ return PTR_ERR(control);
+
card = file->card;
snd_power_lock(card);
result = snd_power_wait(card, SNDRV_CTL_POWER_D0);
if (op_flag > 0) {
if (size > 1024 * 128) /* sane value */
return -EINVAL;
- new_data = kmalloc(size, GFP_KERNEL);
- if (new_data == NULL)
- return -ENOMEM;
- if (copy_from_user(new_data, tlv, size)) {
- kfree(new_data);
- return -EFAULT;
- }
+
+ new_data = memdup_user(tlv, size);
+ if (IS_ERR(new_data))
+ return PTR_ERR(new_data);
change = ue->tlv_data_size != size;
if (!change)
change = memcmp(ue->tlv_data, new_data, size);
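The conversions above and below all follow the same shape: the open-coded kmalloc()/copy_from_user()/cleanup sequence becomes a single memdup_user() call that returns either the kernel copy or an ERR_PTR. A minimal sketch of the resulting pattern (user_ptr and len are placeholders):

	buf = memdup_user(user_ptr, len);
	if (IS_ERR(buf))
		return PTR_ERR(buf);

	/* ... use buf ... */

	kfree(buf);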
{
struct snd_jack *jack = device->device_data;
+ if (jack->private_free)
+ jack->private_free(jack);
+
/* If the input device is registered with the input subsystem
* then we need to use a different deallocator. */
if (jack->registered)
if (! (runtime = substream->runtime))
return -ENOTTY;
- data = kmalloc(sizeof(*data), GFP_KERNEL);
- if (data == NULL)
- return -ENOMEM;
/* only fifo_size is different, so just copy all */
- if (copy_from_user(data, data32, sizeof(*data32))) {
- err = -EFAULT;
- goto error;
- }
+ data = memdup_user(data32, sizeof(*data32));
+ if (IS_ERR(data))
+ return PTR_ERR(data);
+
if (refine)
err = snd_pcm_hw_refine(substream, data);
else
{
struct snd_pcm_runtime *runtime = substream->runtime;
snd_pcm_uframes_t pos;
- snd_pcm_uframes_t new_hw_ptr, hw_ptr_interrupt, hw_base;
- snd_pcm_sframes_t delta;
+ snd_pcm_uframes_t old_hw_ptr, new_hw_ptr, hw_ptr_interrupt, hw_base;
+ snd_pcm_sframes_t hdelta, delta;
+ unsigned long jdelta;
+ old_hw_ptr = runtime->status->hw_ptr;
pos = snd_pcm_update_hw_ptr_pos(substream, runtime);
if (pos == SNDRV_PCM_POS_XRUN) {
xrun(substream);
new_hw_ptr = hw_base + pos;
}
}
- if (delta > runtime->period_size) {
+ hdelta = new_hw_ptr - old_hw_ptr;
+ jdelta = jiffies - runtime->hw_ptr_jiffies;
+ if (((hdelta * HZ) / runtime->rate) > jdelta + HZ/100) {
+ delta = jdelta /
+ (((runtime->period_size * HZ) / runtime->rate)
+ + HZ/100);
+ hw_ptr_error(substream,
+ "hw_ptr skipping! [Q] "
+ "(pos=%ld, delta=%ld, period=%ld, "
+ "jdelta=%lu/%lu/%lu)\n",
+ (long)pos, (long)hdelta,
+ (long)runtime->period_size, jdelta,
+ ((hdelta * HZ) / runtime->rate), delta);
+ hw_ptr_interrupt = runtime->hw_ptr_interrupt +
+ runtime->period_size * delta;
+ if (hw_ptr_interrupt >= runtime->boundary)
+ hw_ptr_interrupt -= runtime->boundary;
+ /* rebase to interrupt position */
+ hw_base = new_hw_ptr = hw_ptr_interrupt;
+ /* align hw_base to buffer_size */
+ hw_base -= hw_base % runtime->buffer_size;
+ delta = 0;
+ }
+ if (delta > runtime->period_size + runtime->period_size / 2) {
hw_ptr_error(substream,
"Lost interrupts? "
"(stream=%i, delta=%ld, intr_ptr=%ld)\n",
runtime->hw_ptr_base = hw_base;
runtime->status->hw_ptr = new_hw_ptr;
+ runtime->hw_ptr_jiffies = jiffies;
runtime->hw_ptr_interrupt = hw_ptr_interrupt;
return snd_pcm_update_hw_ptr_post(substream, runtime);
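For reference, the skip detection above converts the apparent pointer advance into the wall-clock time it should have taken, (hdelta * HZ) / rate, and compares that with the jiffies that actually elapsed plus a 10 ms fudge (HZ/100). For example, at 48000 Hz with HZ=1000, an advance of 4800 frames should take roughly 100 jiffies; if only a handful of jiffies have passed, the reported position is bogus and the pointer is rebased to the expected interrupt position instead.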
snd_pcm_uframes_t pos;
snd_pcm_uframes_t old_hw_ptr, new_hw_ptr, hw_base;
snd_pcm_sframes_t delta;
+ unsigned long jdelta;
old_hw_ptr = runtime->status->hw_ptr;
pos = snd_pcm_update_hw_ptr_pos(substream, runtime);
new_hw_ptr = hw_base + pos;
delta = new_hw_ptr - old_hw_ptr;
+ jdelta = jiffies - runtime->hw_ptr_jiffies;
if (delta < 0) {
delta += runtime->buffer_size;
if (delta < 0) {
hw_ptr_error(substream,
"Unexpected hw_pointer value [2] "
- "(stream=%i, pos=%ld, old_ptr=%ld)\n",
+ "(stream=%i, pos=%ld, old_ptr=%ld, jdelta=%li)\n",
substream->stream, (long)pos,
- (long)old_hw_ptr);
+ (long)old_hw_ptr, jdelta);
return 0;
}
hw_base += runtime->buffer_size;
hw_base = 0;
new_hw_ptr = hw_base + pos;
}
- if (delta > runtime->period_size && runtime->periods > 1) {
+ if (((delta * HZ) / runtime->rate) > jdelta + HZ/100) {
hw_ptr_error(substream,
"hw_ptr skipping! "
- "(pos=%ld, delta=%ld, period=%ld)\n",
+ "(pos=%ld, delta=%ld, period=%ld, jdelta=%lu/%lu)\n",
(long)pos, (long)delta,
- (long)runtime->period_size);
+ (long)runtime->period_size, jdelta,
+ ((delta * HZ) / runtime->rate));
return 0;
}
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK &&
runtime->hw_ptr_base = hw_base;
runtime->status->hw_ptr = new_hw_ptr;
+ runtime->hw_ptr_jiffies = jiffies;
return snd_pcm_update_hw_ptr_post(substream, runtime);
}
runtime->status->hw_ptr %= runtime->buffer_size;
else
runtime->status->hw_ptr = 0;
+ runtime->hw_ptr_jiffies = jiffies;
snd_pcm_stream_unlock_irqrestore(substream, flags);
return 0;
}
struct snd_pcm_hw_params *params;
int err;
- params = kmalloc(sizeof(*params), GFP_KERNEL);
- if (!params) {
- err = -ENOMEM;
- goto out;
- }
- if (copy_from_user(params, _params, sizeof(*params))) {
- err = -EFAULT;
- goto out;
- }
+ params = memdup_user(_params, sizeof(*params));
+ if (IS_ERR(params))
+ return PTR_ERR(params);
+
err = snd_pcm_hw_refine(substream, params);
if (copy_to_user(_params, params, sizeof(*params))) {
if (!err)
err = -EFAULT;
}
-out:
+
kfree(params);
return err;
}
struct snd_pcm_hw_params *params;
int err;
- params = kmalloc(sizeof(*params), GFP_KERNEL);
- if (!params) {
- err = -ENOMEM;
- goto out;
- }
- if (copy_from_user(params, _params, sizeof(*params))) {
- err = -EFAULT;
- goto out;
- }
+ params = memdup_user(_params, sizeof(*params));
+ if (IS_ERR(params))
+ return PTR_ERR(params);
+
err = snd_pcm_hw_params(substream, params);
if (copy_to_user(_params, params, sizeof(*params))) {
if (!err)
err = -EFAULT;
}
-out:
+
kfree(params);
return err;
}
return -EFAULT;
if (copy_from_user(&xfern, _xfern, sizeof(xfern)))
return -EFAULT;
- bufs = kmalloc(sizeof(void *) * runtime->channels, GFP_KERNEL);
- if (bufs == NULL)
- return -ENOMEM;
- if (copy_from_user(bufs, xfern.bufs, sizeof(void *) * runtime->channels)) {
- kfree(bufs);
- return -EFAULT;
- }
+
+ bufs = memdup_user(xfern.bufs,
+ sizeof(void *) * runtime->channels);
+ if (IS_ERR(bufs))
+ return PTR_ERR(bufs);
result = snd_pcm_lib_writev(substream, bufs, xfern.frames);
kfree(bufs);
__put_user(result, &_xfern->result);
return -EFAULT;
if (copy_from_user(&xfern, _xfern, sizeof(xfern)))
return -EFAULT;
- bufs = kmalloc(sizeof(void *) * runtime->channels, GFP_KERNEL);
- if (bufs == NULL)
- return -ENOMEM;
- if (copy_from_user(bufs, xfern.bufs, sizeof(void *) * runtime->channels)) {
- kfree(bufs);
- return -EFAULT;
- }
+
+ bufs = memdup_user(xfern.bufs,
+ sizeof(void *) * runtime->channels);
+ if (IS_ERR(bufs))
+ return PTR_ERR(bufs);
result = snd_pcm_lib_readv(substream, bufs, xfern.frames);
kfree(bufs);
__put_user(result, &_xfern->result);
int err;
params = kmalloc(sizeof(*params), GFP_KERNEL);
- if (!params) {
- err = -ENOMEM;
- goto out;
- }
- oparams = kmalloc(sizeof(*oparams), GFP_KERNEL);
- if (!oparams) {
- err = -ENOMEM;
- goto out;
- }
+ if (!params)
+ return -ENOMEM;
- if (copy_from_user(oparams, _oparams, sizeof(*oparams))) {
- err = -EFAULT;
+ oparams = memdup_user(_oparams, sizeof(*oparams));
+ if (IS_ERR(oparams)) {
+ err = PTR_ERR(oparams);
goto out;
}
snd_pcm_hw_convert_from_old_params(params, oparams);
if (!err)
err = -EFAULT;
}
+
+ kfree(oparams);
out:
kfree(params);
- kfree(oparams);
return err;
}
int err;
params = kmalloc(sizeof(*params), GFP_KERNEL);
- if (!params) {
- err = -ENOMEM;
- goto out;
- }
- oparams = kmalloc(sizeof(*oparams), GFP_KERNEL);
- if (!oparams) {
- err = -ENOMEM;
- goto out;
- }
- if (copy_from_user(oparams, _oparams, sizeof(*oparams))) {
- err = -EFAULT;
+ if (!params)
+ return -ENOMEM;
+
+ oparams = memdup_user(_oparams, sizeof(*oparams));
+ if (IS_ERR(oparams)) {
+ err = PTR_ERR(oparams);
goto out;
}
snd_pcm_hw_convert_from_old_params(params, oparams);
if (!err)
err = -EFAULT;
}
+
+ kfree(oparams);
out:
kfree(params);
- kfree(oparams);
return err;
}
#endif /* CONFIG_SND_SUPPORT_OLD_API */
struct snd_seq_port_info *data;
mm_segment_t fs;
- data = kmalloc(sizeof(*data), GFP_KERNEL);
- if (! data)
- return -ENOMEM;
+ data = memdup_user(data32, sizeof(*data32));
+ if (IS_ERR(data))
+ return PTR_ERR(data);
- if (copy_from_user(data, data32, sizeof(*data32)) ||
- get_user(data->flags, &data32->flags) ||
+ if (get_user(data->flags, &data32->flags) ||
get_user(data->time_queue, &data32->time_queue))
goto error;
data->kernel = NULL;
struct list_head *p;
int err = 0;
- ginfo = kmalloc(sizeof(*ginfo), GFP_KERNEL);
- if (! ginfo)
- return -ENOMEM;
- if (copy_from_user(ginfo, _ginfo, sizeof(*ginfo))) {
- kfree(ginfo);
- return -EFAULT;
- }
+ ginfo = memdup_user(_ginfo, sizeof(*ginfo));
+ if (IS_ERR(ginfo))
+ return PTR_ERR(ginfo);
+
tid = ginfo->tid;
memset(ginfo, 0, sizeof(*ginfo));
ginfo->tid = tid;
static int snd_sb_csp_load_user(struct snd_sb_csp * p, const unsigned char __user *buf, int size, int load_flags)
{
- int err = -ENOMEM;
- unsigned char *kbuf = kmalloc(size, GFP_KERNEL);
- if (kbuf) {
- if (copy_from_user(kbuf, buf, size))
- err = -EFAULT;
- else
- err = snd_sb_csp_load(p, kbuf, size, load_flags);
- kfree(kbuf);
- }
+ int err;
+ unsigned char *kbuf;
+
+ kbuf = memdup_user(buf, size);
+ if (IS_ERR(kbuf))
+ return PTR_ERR(kbuf);
+
+ err = snd_sb_csp_load(p, kbuf, size, load_flags);
+
+ kfree(kbuf);
return err;
}
"> 512 bytes to FX\n");
return -EIO;
}
- page_data = kmalloc(r.data[2] * sizeof(short), GFP_KERNEL);
- if (!page_data)
- return -ENOMEM;
- if (copy_from_user (page_data,
- (unsigned char __user *) r.data[3],
- r.data[2] * sizeof(short))) {
- kfree(page_data);
- return -EFAULT;
- }
+ page_data = memdup_user((unsigned char __user *)
+ r.data[3],
+ r.data[2] * sizeof(short));
+ if (IS_ERR(page_data))
+ return PTR_ERR(page_data);
pd = page_data;
}
break;
case WFCTL_WFCMD:
- wc = kmalloc(sizeof(*wc), GFP_KERNEL);
- if (! wc)
- return -ENOMEM;
- if (copy_from_user (wc, argp, sizeof (*wc)))
- err = -EFAULT;
- else if (wavefront_synth_control (acard, wc) < 0)
+ wc = memdup_user(argp, sizeof(*wc));
+ if (IS_ERR(wc))
+ return PTR_ERR(wc);
+
+ if (wavefront_synth_control (acard, wc) < 0)
err = -EIO;
else if (copy_to_user (argp, wc, sizeof (*wc)))
err = -EFAULT;
case SNDRV_EMU10K1_IOCTL_CODE_POKE:
if (!capable(CAP_SYS_ADMIN))
return -EPERM;
- icode = kmalloc(sizeof(*icode), GFP_KERNEL);
- if (icode == NULL)
- return -ENOMEM;
- if (copy_from_user(icode, argp, sizeof(*icode))) {
- kfree(icode);
- return -EFAULT;
- }
+
+ icode = memdup_user(argp, sizeof(*icode));
+ if (IS_ERR(icode))
+ return PTR_ERR(icode);
res = snd_emu10k1_icode_poke(emu, icode);
kfree(icode);
return res;
case SNDRV_EMU10K1_IOCTL_CODE_PEEK:
- icode = kmalloc(sizeof(*icode), GFP_KERNEL);
- if (icode == NULL)
- return -ENOMEM;
- if (copy_from_user(icode, argp, sizeof(*icode))) {
- kfree(icode);
- return -EFAULT;
- }
+ icode = memdup_user(argp, sizeof(*icode));
+ if (IS_ERR(icode))
+ return PTR_ERR(icode);
res = snd_emu10k1_icode_peek(emu, icode);
if (res == 0 && copy_to_user(argp, icode, sizeof(*icode))) {
kfree(icode);
kfree(icode);
return res;
case SNDRV_EMU10K1_IOCTL_PCM_POKE:
- ipcm = kmalloc(sizeof(*ipcm), GFP_KERNEL);
- if (ipcm == NULL)
- return -ENOMEM;
- if (copy_from_user(ipcm, argp, sizeof(*ipcm))) {
- kfree(ipcm);
- return -EFAULT;
- }
+ ipcm = memdup_user(argp, sizeof(*ipcm));
+ if (IS_ERR(ipcm))
+ return PTR_ERR(ipcm);
res = snd_emu10k1_ipcm_poke(emu, ipcm);
kfree(ipcm);
return res;
case SNDRV_EMU10K1_IOCTL_PCM_PEEK:
- ipcm = kzalloc(sizeof(*ipcm), GFP_KERNEL);
- if (ipcm == NULL)
- return -ENOMEM;
- if (copy_from_user(ipcm, argp, sizeof(*ipcm))) {
- kfree(ipcm);
- return -EFAULT;
- }
+ ipcm = memdup_user(argp, sizeof(*ipcm));
+ if (IS_ERR(ipcm))
+ return PTR_ERR(ipcm);
res = snd_emu10k1_ipcm_peek(emu, ipcm);
if (res == 0 && copy_to_user(argp, ipcm, sizeof(*ipcm))) {
kfree(ipcm);
err = bus->ops.command(bus, res);
if (!err) {
struct hda_cache_head *c;
- u32 key = build_cmd_cache_key(nid, verb);
+ u32 key;
+ /* parm may contain the verb stuff for get/set amp */
+ verb = verb | (parm >> 8);
+ parm &= 0xff;
+ key = build_cmd_cache_key(nid, verb);
c = get_alloc_hash(&codec->cmd_cache, key);
if (c)
c->val = parm;
unsigned int period_bytes; /* size of the period in bytes */
unsigned int frags; /* number for period in the play buffer */
unsigned int fifo_size; /* FIFO size */
+ unsigned int start_flag: 1; /* stream full start flag */
+ unsigned long start_jiffies; /* start + minimum jiffies */
+ unsigned long min_jiffies; /* minimum jiffies before position is valid */
void __iomem *sd_addr; /* stream descriptor pointer */
unsigned int opened :1;
unsigned int running :1;
unsigned int irq_pending :1;
- unsigned int irq_ignore :1;
/*
* For VIA:
* A flag to ensure DMA position is 0
struct azx *chip = dev_id;
struct azx_dev *azx_dev;
u32 status;
- int i;
+ int i, ok;
spin_lock(&chip->reg_lock);
azx_sd_writeb(azx_dev, SD_STS, SD_INT_MASK);
if (!azx_dev->substream || !azx_dev->running)
continue;
- /* ignore the first dummy IRQ (due to pos_adj) */
- if (azx_dev->irq_ignore) {
- azx_dev->irq_ignore = 0;
- continue;
- }
/* check whether this IRQ is really acceptable */
- if (azx_position_ok(chip, azx_dev)) {
+ ok = azx_position_ok(chip, azx_dev);
+ if (ok == 1) {
azx_dev->irq_pending = 0;
spin_unlock(&chip->reg_lock);
snd_pcm_period_elapsed(azx_dev->substream);
spin_lock(&chip->reg_lock);
- } else if (chip->bus && chip->bus->workq) {
+ } else if (ok == 0 && chip->bus && chip->bus->workq) {
/* bogus IRQ, process it later */
azx_dev->irq_pending = 1;
queue_work(chip->bus->workq,
bdl = (u32 *)azx_dev->bdl.area;
ofs = 0;
azx_dev->frags = 0;
- azx_dev->irq_ignore = 0;
pos_adj = bdl_pos_adj[chip->dev_index];
if (pos_adj > 0) {
struct snd_pcm_runtime *runtime = substream->runtime;
&bdl, ofs, pos_adj, 1);
if (ofs < 0)
goto error;
- azx_dev->irq_ignore = 1;
}
} else
pos_adj = 0;
while (((val = azx_sd_readb(azx_dev, SD_CTL)) & SD_CTL_STREAM_RESET) &&
--timeout)
;
+
+ /* reset first position - may not be synced with hw at this time */
+ *azx_dev->posbuf = 0;
}
/*
snd_pcm_set_sync(substream);
mutex_unlock(&chip->open_mutex);
- azx_stream_reset(chip, azx_dev);
return 0;
}
unsigned int bufsize, period_bytes, format_val;
int err;
+ azx_stream_reset(chip, azx_dev);
format_val = snd_hda_calc_stream_format(runtime->rate,
runtime->channels,
runtime->format,
return err;
}
+ azx_dev->min_jiffies = (runtime->period_size * HZ) /
+ (runtime->rate * 2);
azx_setup_controller(chip, azx_dev);
if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
azx_dev->fifo_size = azx_sd_readw(azx_dev, SD_FIFOSIZE) + 1;
struct azx *chip = apcm->chip;
struct azx_dev *azx_dev;
struct snd_pcm_substream *s;
- int start, nsync = 0, sbits = 0;
+ int rstart = 0, start, nsync = 0, sbits = 0;
int nwait, timeout;
switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ rstart = 1;
case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
case SNDRV_PCM_TRIGGER_RESUME:
- case SNDRV_PCM_TRIGGER_START:
start = 1;
break;
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
if (s->pcm->card != substream->pcm->card)
continue;
azx_dev = get_azx_dev(s);
+ if (rstart) {
+ azx_dev->start_flag = 1;
+ azx_dev->start_jiffies = jiffies + azx_dev->min_jiffies;
+ }
if (start)
azx_stream_start(chip, azx_dev);
else
{
unsigned int pos;
+ if (azx_dev->start_flag &&
+ time_before_eq(jiffies, azx_dev->start_jiffies))
+ return -1; /* bogus (too early) interrupt */
+ azx_dev->start_flag = 0;
+
pos = azx_get_position(chip, azx_dev);
if (chip->position_fix == POS_FIX_AUTO) {
if (!pos) {
}
#ifdef CONFIG_SND_JACK
+static void conexant_free_jack_priv(struct snd_jack *jack)
+{
+ struct conexant_jack *jacks = jack->private_data;
+ jacks->nid = 0;
+ jacks->jack = NULL;
+}
+
static int conexant_add_jack(struct hda_codec *codec,
hda_nid_t nid, int type)
{
struct conexant_spec *spec;
struct conexant_jack *jack;
const char *name;
+ int err;
spec = codec->spec;
snd_array_init(&spec->jacks, sizeof(*jack), 32);
jack->nid = nid;
jack->type = type;
- return snd_jack_new(codec->bus->card, name, type, &jack->jack);
+ err = snd_jack_new(codec->bus->card, name, type, &jack->jack);
+ if (err < 0)
+ return err;
+ jack->jack->private_data = jack;
+ jack->jack->private_free = conexant_free_jack_priv;
+ return 0;
}
static void conexant_report_jack(struct hda_codec *codec, hda_nid_t nid)
if (spec->jacks.list) {
struct conexant_jack *jacks = spec->jacks.list;
int i;
- for (i = 0; i < spec->jacks.used; i++)
- snd_device_free(codec->bus->card, &jacks[i].jack);
+ for (i = 0; i < spec->jacks.used; i++, jacks++) {
+ if (jacks->jack)
+ snd_device_free(codec->bus->card, jacks->jack);
+ }
snd_array_free(&spec->jacks);
}
#endif
SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC883_LAPTOP_EAPD),
SND_PCI_QUIRK(0x15d9, 0x8780, "Supermicro PDSBA", ALC883_3ST_6ch),
SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_MEDION),
- SND_PCI_QUIRK(0x1734, 0x1107, "FSC AMILO Xi2550",
+ SND_PCI_QUIRK_MASK(0x1734, 0xfff0, 0x1100, "FSC AMILO Xi/Pi25xx",
ALC883_FUJITSU_PI2515),
- SND_PCI_QUIRK(0x1734, 0x1108, "Fujitsu AMILO Pi2515", ALC883_FUJITSU_PI2515),
- SND_PCI_QUIRK(0x1734, 0x113d, "Fujitsu AMILO Xa3530",
+ SND_PCI_QUIRK_MASK(0x1734, 0xfff0, 0x1130, "Fujitsu AMILO Xa35xx",
ALC888_FUJITSU_XA3530),
SND_PCI_QUIRK(0x17aa, 0x101e, "Lenovo 101e", ALC883_LENOVO_101E_2ch),
SND_PCI_QUIRK(0x17aa, 0x2085, "Lenovo NB0763", ALC883_LENOVO_NB0763),
AC_VERB_SET_GPIO_DATA, gpiostate); /* sync */
}
+#ifdef CONFIG_SND_JACK
+static void stac92xx_free_jack_priv(struct snd_jack *jack)
+{
+ struct sigmatel_jack *jacks = jack->private_data;
+ jacks->nid = 0;
+ jacks->jack = NULL;
+}
+#endif
+
static int stac92xx_add_jack(struct hda_codec *codec,
hda_nid_t nid, int type)
{
int def_conf = snd_hda_codec_get_pincfg(codec, nid);
int connectivity = get_defcfg_connect(def_conf);
char name[32];
+ int err;
if (connectivity && connectivity != AC_JACK_PORT_FIXED)
return 0;
snd_hda_get_jack_connectivity(def_conf),
snd_hda_get_jack_location(def_conf));
- return snd_jack_new(codec->bus->card, name, type, &jack->jack);
-#else
- return 0;
+ err = snd_jack_new(codec->bus->card, name, type, &jack->jack);
+ if (err < 0) {
+ jack->nid = 0;
+ return err;
+ }
+ jack->jack->private_data = jack;
+ jack->jack->private_free = stac92xx_free_jack_priv;
#endif
+ return 0;
}
static int stac_add_event(struct sigmatel_spec *spec, hda_nid_t nid,
if (!codec->bus->shutdown && spec->jacks.list) {
struct sigmatel_jack *jacks = spec->jacks.list;
int i;
- for (i = 0; i < spec->jacks.used; i++)
- snd_device_free(codec->bus->card, &jacks[i].jack);
+ for (i = 0; i < spec->jacks.used; i++, jacks++) {
+ if (jacks->jack)
+ snd_device_free(codec->bus->card, jacks->jack);
+ }
}
snd_array_free(&spec->jacks);
#endif
unsigned int fragsize1;
unsigned int position;
unsigned int pos_shift;
+ unsigned int last_pos;
+ unsigned long last_pos_jiffies;
+ unsigned int jiffy_to_bytes;
int frags;
int lvi;
int lvi_frag;
ichdev->suspended = 0;
/* fallthru */
case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
val = ICH_IOCE | ICH_STARTBM;
+ ichdev->last_pos = ichdev->position;
+ ichdev->last_pos_jiffies = jiffies;
break;
case SNDRV_PCM_TRIGGER_SUSPEND:
ichdev->suspended = 1;
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
val = ICH_IOCE;
break;
- case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
- val = ICH_IOCE | ICH_STARTBM;
- break;
default:
return -EINVAL;
}
ichdev->pos_shift = (runtime->sample_bits > 16) ? 2 : 1;
}
snd_intel8x0_setup_periods(chip, ichdev);
+ ichdev->jiffy_to_bytes = (runtime->rate * 4 * ichdev->pos_shift) / HZ;
return 0;
}
struct intel8x0 *chip = snd_pcm_substream_chip(substream);
struct ichdev *ichdev = get_ichdev(substream);
size_t ptr1, ptr;
- int civ, timeout = 100;
+ int civ, timeout = 10;
unsigned int position;
spin_lock(&chip->reg_lock);
ptr1 == igetword(chip, ichdev->reg_offset + ichdev->roff_picb))
break;
} while (timeout--);
- ptr1 <<= ichdev->pos_shift;
- ptr = ichdev->fragsize1 - ptr1;
- ptr += position;
+ if (ptr1 != 0) {
+ ptr1 <<= ichdev->pos_shift;
+ ptr = ichdev->fragsize1 - ptr1;
+ ptr += position;
+ ichdev->last_pos = ptr;
+ ichdev->last_pos_jiffies = jiffies;
+ } else {
+ ptr1 = jiffies - ichdev->last_pos_jiffies;
+ if (ptr1)
+ ptr1 -= 1;
+ ptr = ichdev->last_pos + ptr1 * ichdev->jiffy_to_bytes;
+ ptr %= ichdev->size;
+ }
spin_unlock(&chip->reg_lock);
if (ptr >= ichdev->size)
return 0;
struct snd_pcm_substream *subs;
struct ichdev *ichdev;
unsigned long port;
- unsigned long pos, t;
- struct timeval start_time, stop_time;
+ unsigned long pos, pos1, t;
+ int civ, timeout = 1000, attempt = 1;
+ struct timespec start_time, stop_time;
if (chip->ac97_bus->clock != 48000)
return; /* specified in module option */
+ __again:
subs = chip->pcm[0]->streams[0].substream;
if (! subs || subs->dma_buffer.bytes < INTEL8X0_TESTBUF_SIZE) {
snd_printk(KERN_WARNING "no playback buffer allocated - aborting measure ac97 clock\n");
}
ichdev = &chip->ichd[ICHD_PCMOUT];
ichdev->physbuf = subs->dma_buffer.addr;
- ichdev->size = chip->ichd[ICHD_PCMOUT].fragsize = INTEL8X0_TESTBUF_SIZE;
+ ichdev->size = ichdev->fragsize = INTEL8X0_TESTBUF_SIZE;
ichdev->substream = NULL; /* don't process interrupts */
/* set rate */
iputbyte(chip, port + ICH_REG_OFF_CR, ICH_IOCE);
iputdword(chip, ICHREG(ALI_DMACR), 1 << ichdev->ali_slot);
}
- do_gettimeofday(&start_time);
+ do_posix_clock_monotonic_gettime(&start_time);
spin_unlock_irq(&chip->reg_lock);
msleep(50);
spin_lock_irq(&chip->reg_lock);
/* check the position */
- pos = ichdev->fragsize1;
- pos -= igetword(chip, ichdev->reg_offset + ichdev->roff_picb) << ichdev->pos_shift;
- pos += ichdev->position;
+ do {
+ civ = igetbyte(chip, ichdev->reg_offset + ICH_REG_OFF_CIV);
+ pos1 = igetword(chip, ichdev->reg_offset + ichdev->roff_picb);
+ if (pos1 == 0) {
+ udelay(10);
+ continue;
+ }
+ if (civ == igetbyte(chip, ichdev->reg_offset + ICH_REG_OFF_CIV) &&
+ pos1 == igetword(chip, ichdev->reg_offset + ichdev->roff_picb))
+ break;
+ } while (timeout--);
+ if (pos1 == 0) { /* oops, this value is not reliable */
+ pos = 0;
+ } else {
+ pos = ichdev->fragsize1;
+ pos -= pos1 << ichdev->pos_shift;
+ pos += ichdev->position;
+ }
chip->in_measurement = 0;
- do_gettimeofday(&stop_time);
+ do_posix_clock_monotonic_gettime(&stop_time);
/* stop */
if (chip->device_type == DEVICE_ALI) {
iputdword(chip, ICHREG(ALI_DMACR), 1 << (ichdev->ali_slot + 16));
iputbyte(chip, port + ICH_REG_OFF_CR, ICH_RESETREGS);
spin_unlock_irq(&chip->reg_lock);
+ if (pos == 0) {
+ snd_printk(KERN_ERR "intel8x0: measure - unreliable DMA position..\n");
+ __retry:
+ if (attempt < 2) {
+ attempt++;
+ goto __again;
+ }
+ return;
+ }
+
+ pos /= 4;
t = stop_time.tv_sec - start_time.tv_sec;
t *= 1000000;
- t += stop_time.tv_usec - start_time.tv_usec;
- printk(KERN_INFO "%s: measured %lu usecs\n", __func__, t);
+ t += (stop_time.tv_nsec - start_time.tv_nsec) / 1000;
+ printk(KERN_INFO "%s: measured %lu usecs (%lu samples)\n", __func__, t, pos);
if (t == 0) {
- snd_printk(KERN_ERR "?? calculation error..\n");
- return;
+ snd_printk(KERN_ERR "intel8x0: ?? calculation error..\n");
+ goto __retry;
}
- pos = (pos / 4) * 1000;
+ pos *= 1000;
pos = (pos / t) * 1000 + ((pos % t) * 1000) / t;
- if (pos < 40000 || pos >= 60000)
+ if (pos < 40000 || pos >= 60000) {
/* abnormal value. hw problem? */
printk(KERN_INFO "intel8x0: measured clock %ld rejected\n", pos);
+ goto __retry;
+ } else if (pos > 40500 && pos < 41500)
+ /* first exception - 41000Hz reference clock */
+ chip->ac97_bus->clock = 41000;
+ else if (pos > 43600 && pos < 44600)
+ /* second exception - 44100HZ reference clock */
+ chip->ac97_bus->clock = 44100;
else if (pos < 47500 || pos > 48500)
/* not 48000Hz, tuning the clock.. */
chip->ac97_bus->clock = (chip->ac97_bus->clock * 48000) / pos;
#include <sound/soc.h>
#include <sound/soc-dapm.h>
-#include <mach/pxa-regs.h>
-#include <mach/hardware.h>
#include <mach/magician.h>
#include <asm/mach-types.h>
#include "../codecs/uda1380.h"
config SND_S3C24XX_SOC
tristate "SoC Audio for the Samsung S3CXXXX chips"
- depends on ARCH_S3C2410 || ARCH_S3C64XX
+ depends on ARCH_S3C2410
help
Say Y or M if you want to add support for codecs attached to
- the S3C24XX and S3C64XX AC97, I2S or SSP interface. You will
- also need to select the audio interfaces to support below.
+ the S3C24XX AC97 or I2S interfaces. You will also need to
+ select the audio interfaces to support below.
config SND_S3C24XX_SOC_I2S
tristate
-snd-usb-caiaq-y := caiaq-device.o caiaq-audio.o caiaq-midi.o caiaq-control.o
-snd-usb-caiaq-$(CONFIG_SND_USB_CAIAQ_INPUT) += caiaq-input.o
+snd-usb-caiaq-y := device.o audio.o midi.o control.o
+snd-usb-caiaq-$(CONFIG_SND_USB_CAIAQ_INPUT) += input.o
obj-$(CONFIG_SND_USB_CAIAQ) += snd-usb-caiaq.o
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
+#include <linux/spinlock.h>
#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/interrupt.h>
#include <linux/usb.h>
-#include <linux/spinlock.h>
#include <sound/core.h>
-#include <sound/initval.h>
#include <sound/pcm.h>
-#include <sound/rawmidi.h>
-#include <linux/input.h>
-#include "caiaq-device.h"
-#include "caiaq-audio.h"
+#include "device.h"
+#include "audio.h"
#define N_URBS 32
#define CLOCK_DRIFT_TOLERANCE 5
*/
#include <linux/init.h>
-#include <linux/interrupt.h>
#include <linux/usb.h>
+#include <sound/control.h>
#include <sound/core.h>
-#include <sound/initval.h>
#include <sound/pcm.h>
-#include <sound/rawmidi.h>
-#include <sound/control.h>
-#include <linux/input.h>
-#include "caiaq-device.h"
-#include "caiaq-control.h"
+#include "device.h"
+#include "control.h"
#define CNT_INTVAL 0x10000
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
-#include <linux/init.h>
-#include <linux/module.h>
#include <linux/moduleparam.h>
#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/init.h>
#include <linux/usb.h>
-#include <linux/input.h>
-#include <linux/spinlock.h>
-#include <sound/core.h>
#include <sound/initval.h>
+#include <sound/core.h>
#include <sound/pcm.h>
-#include <sound/rawmidi.h>
-#include <sound/control.h>
-
-#include "caiaq-device.h"
-#include "caiaq-audio.h"
-#include "caiaq-midi.h"
-#include "caiaq-control.h"
-#ifdef CONFIG_SND_USB_CAIAQ_INPUT
-#include "caiaq-input.h"
-#endif
+#include "device.h"
+#include "audio.h"
+#include "midi.h"
+#include "control.h"
+#include "input.h"
MODULE_AUTHOR("Daniel Mack <daniel@caiaq.de>");
MODULE_DESCRIPTION("caiaq USB audio, version 1.3.13");
*/
#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/input.h>
#include <linux/usb.h>
#include <linux/usb/input.h>
-#include <linux/spinlock.h>
-#include <sound/core.h>
-#include <sound/rawmidi.h>
#include <sound/pcm.h>
-#include "caiaq-device.h"
-#include "caiaq-input.h"
+
+#include "device.h"
+#include "input.h"
static unsigned short keycode_ak1[] = { KEY_C, KEY_B, KEY_A };
static unsigned short keycode_rk2[] = { KEY_1, KEY_2, KEY_3, KEY_4,
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/moduleparam.h>
-#include <linux/interrupt.h>
#include <linux/usb.h>
-#include <linux/input.h>
-#include <linux/spinlock.h>
-#include <sound/core.h>
#include <sound/rawmidi.h>
+#include <sound/core.h>
#include <sound/pcm.h>
-#include "caiaq-device.h"
-#include "caiaq-midi.h"
-
+#include "device.h"
+#include "midi.h"
static int snd_usb_caiaq_midi_input_open(struct snd_rawmidi_substream *substream)
{
if (cmd != SNDRV_USB_STREAM_IOCTL_SET_PARAMS)
return -ENOTTY;
- cfg = kmalloc(sizeof(*cfg), GFP_KERNEL);
- if (!cfg)
- return -ENOMEM;
+ cfg = memdup_user((void *)arg, sizeof(*cfg));
+ if (IS_ERR(cfg))
+ return PTR_ERR(cfg);
- if (copy_from_user(cfg, (void *)arg, sizeof(*cfg))) {
- err = -EFAULT;
- goto free;
- }
if (cfg->version != USB_STREAM_INTERFACE_VERSION) {
err = -ENXIO;
goto free;
if (access_ok(VERIFY_READ, dsp->image, dsp->length)) {
struct usb_device* dev = priv->chip.dev;
- char *buf = kmalloc(dsp->length, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
- if (copy_from_user(buf, dsp->image, dsp->length)) {
- kfree(buf);
- return -EFAULT;
- }
+ char *buf;
+
+ buf = memdup_user(dsp->image, dsp->length);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
+
err = usb_set_interface(dev, 0, 1);
if (err)
snd_printk(KERN_ERR "usb_set_interface error \n");