M: Mail patches to: FullName <address@domain>
L: Mailing list that is relevant to this area
W: Web-page with status/info
Q: Patchwork web based patch tracking system site
T: SCM tree type and location. Type is one of: git, hg, quilt, stgit.
S: Status, one of the following:
Supported: Someone is actually paid to look after this.
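The tag legend above describes a simple line-oriented record format: one `TAG:` line per attribute, with entries separated by blank lines. As an illustration only, here is a minimal parser sketch for such records (the helper name and dictionary layout are assumptions for this example; the kernel itself uses scripts/get_maintainer.pl):

```python
import re

def parse_maintainers(text):
    """Parse MAINTAINERS-style records: blank-line-separated blocks whose
    first line is the subsystem title, followed by 'TAG: value' lines."""
    entries = []
    for block in re.split(r"\n\s*\n", text.strip()):
        lines = block.splitlines()
        entry = {"title": lines[0], "fields": []}
        for line in lines[1:]:
            m = re.match(r"([A-Z]):\s*(.*)", line)
            if m:
                entry["fields"].append((m.group(1), m.group(2)))
        entries.append(entry)
    return entries

sample = """\
3WARE SAS/SATA-RAID SCSI DRIVERS (3W-XXXX, 3W-9XXX, 3W-SAS)
M: Adam Radford <linuxraid@lsi.com>
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/3w-*
"""

for e in parse_maintainers(sample):
    print(e["title"])
    for tag, value in e["fields"]:
        print(f"  {tag} -> {value}")
```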
S: Maintained
F: drivers/net/typhoon*
3WARE SAS/SATA-RAID SCSI DRIVERS (3W-XXXX, 3W-9XXX, 3W-SAS)
M: Adam Radford <linuxraid@lsi.com>
L: linux-scsi@vger.kernel.org
W: http://www.lsi.com
S: Supported
F: drivers/scsi/3w-*
53C700 AND 53C700-66 SCSI DRIVER
M: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
M: Latchesar Ionkov <lucho@ionkov.net>
L: v9fs-developer@lists.sourceforge.net
W: http://swik.net/v9fs
Q: http://patchwork.kernel.org/project/v9fs-devel/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ericvh/v9fs.git
S: Maintained
F: Documentation/filesystems/9p.txt
ACER ASPIRE ONE TEMPERATURE AND FAN DRIVER
M: Peter Feuerer <peter@piie.net>
L: platform-driver-x86@vger.kernel.org
W: http://piie.net/?section=acerhdf
S: Maintained
F: drivers/platform/x86/acerhdf.c
ACER WMI LAPTOP EXTRAS
M: Carlos Corbacho <carlos@strangeworlds.co.uk>
L: aceracpi@googlegroups.com (subscribers-only)
L: platform-driver-x86@vger.kernel.org
W: http://code.google.com/p/aceracpi
S: Maintained
F: drivers/platform/x86/acer-wmi.c
M: Len Brown <lenb@kernel.org>
L: linux-acpi@vger.kernel.org
W: http://www.lesswatts.org/projects/acpi/
Q: http://patchwork.kernel.org/project/linux-acpi/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-acpi-2.6.git
S: Supported
F: drivers/acpi/
ACPI WMI DRIVER
M: Carlos Corbacho <carlos@strangeworlds.co.uk>
L: platform-driver-x86@vger.kernel.org
W: http://www.lesswatts.org/projects/acpi/
S: Maintained
F: drivers/platform/x86/wmi.c
L: linux-geode@lists.infradead.org (moderated for non-subscribers)
W: http://www.amd.com/us-en/ConnectivitySolutions/TechnicalResources/0,,50_2334_2452_11363,00.html
S: Supported
F: drivers/char/hw_random/geode-rng.c
F: drivers/crypto/geode*
F: drivers/video/geode/
F: drivers/input/mouse/bcm5974.c
APPLE SMC DRIVER
M: Henrik Rydberg <rydberg@euromail.se>
L: lm-sensors@lm-sensors.org
S: Maintained
F: drivers/hwmon/applesmc.c
F: drivers/mtd/nand/bcm_umi_hamming.c
F: drivers/mtd/nand/nand_bcm_umi.h
ARM/CAVIUM NETWORKS CNS3XXX MACHINE SUPPORT
M: Anton Vorontsov <avorontsov@mvista.com>
S: Maintained
F: arch/arm/mach-cns3xxx/
T: git git://git.infradead.org/users/cbou/linux-cns3xxx.git

ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE
M: Hartley Sweeten <hsweeten@visionengravers.com>
M: Ryan Mallon <ryan@bluewatersys.com>
F: arch/arm/mach-mx*/
F: arch/arm/plat-mxc/
ARM/FREESCALE IMX51
M: Amit Kucheria <amit.kucheria@canonical.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-mx5/

ARM/GLOMATION GESBC9312SX MACHINE SUPPORT
M: Lennert Buytenhek <kernel@wantstofly.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
ARM/Marvell Loki/Kirkwood/MV78xx0/Orion SOC support
M: Lennert Buytenhek <kernel@wantstofly.org>
M: Nicolas Pitre <nico@fluxnic.net>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Odd Fixes
F: arch/arm/mach-loki/
F: arch/arm/mach-kirkwood/
F: arch/arm/mach-mv78xx0/
S: Maintained
ARM/NOMADIK ARCHITECTURE
M: Alessandro Rubini <rubini@unipv.it>
M: STEricsson <STEricsson_nomadik_linux@list.st.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-nomadik/
F: arch/arm/plat-nomadik/
ARM/OPENMOKO NEO FREERUNNER (GTA02) MACHINE SUPPORT
M: Nelson Castillo <arhuaco@freaks-unidos.net>
M: David Brown <davidb@codeaurora.org>
M: Daniel Walker <dwalker@codeaurora.org>
M: Bryan Huntsman <bryanh@codeaurora.org>
L: linux-arm-msm@vger.kernel.org
F: arch/arm/mach-msm/
F: drivers/video/msm/
F: drivers/mmc/host/msm_sdcc.c
ARM/SAMSUNG ARM ARCHITECTURES
M: Ben Dooks <ben-linux@fluff.org>
M: Kukjin Kim <kgene.kim@samsung.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.fluff.org/ben/linux/
S: Maintained
F: arch/arm/plat-samsung/
F: arch/arm/plat-s3c24xx/
F: arch/arm/plat-s5p/
ARM/S3C2410 ARM ARCHITECTURE
M: Ben Dooks <ben-linux@fluff.org>
S: Maintained
F: arch/arm/mach-s3c6410/
ARM/SHMOBILE ARM ARCHITECTURE
M: Paul Mundt <lethal@linux-sh.org>
M: Magnus Damm <magnus.damm@gmail.com>
L: linux-sh@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lethal/genesis-2.6.git
W: http://oss.renesas.com
S: Supported
F: arch/arm/mach-shmobile/
F: drivers/sh/

ARM/TECHNOLOGIC SYSTEMS TS7250 MACHINE SUPPORT
M: Lennert Buytenhek <kernel@wantstofly.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.mcuos.com
S: Maintained
F: arch/arm/mach-w90x900/
F: arch/arm/mach-nuc93x/
F: drivers/input/keyboard/w90p910_keypad.c
F: drivers/input/touchscreen/w90p910_ts.c
F: drivers/watchdog/nuc900_wdt.c
F: drivers/net/arm/w90p910_ether.c
F: drivers/mtd/nand/w90p910_nand.c
F: drivers/rtc/rtc-nuc900.c
F: drivers/spi/spi_nuc900.c
F: drivers/usb/host/ehci-w90x900.c
F: drivers/video/nuc900fb.c
F: drivers/sound/soc/nuc900/

ARM/U300 MACHINE SUPPORT
M: Linus Walleij <linus.walleij@stericsson.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: arch/arm/mach-u300/
F: drivers/i2c/busses/i2c-stu300.c
F: drivers/rtc/rtc-coh901331.c
F: drivers/watchdog/coh901327_wdt.c
F: drivers/dma/coh901318*
ARM/U8500 ARM ARCHITECTURE
M: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
S: Maintained
F: arch/arm/vfp/
ARM/VOIPAC PXA270 SUPPORT
M: Marek Vasut <marek.vasut@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-pxa/vpac270.c
F: arch/arm/mach-pxa/include/mach-pxa/vpac270.h

ARM/ZIPIT Z2 SUPPORT
M: Marek Vasut <marek.vasut@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-pxa/z2.c
F: arch/arm/mach-pxa/include/mach-pxa/z2.h

ASC7621 HARDWARE MONITOR DRIVER
M: George Joseph <george.joseph@fairview5.com>
L: lm-sensors@lm-sensors.org
S: Maintained
F: Documentation/hwmon/asc7621
F: drivers/hwmon/asc7621.c

ASUS ACPI EXTRAS DRIVER
M: Corentin Chary <corentincj@iksaif.net>
M: Karol Kozimor <sziwan@users.sourceforge.net>
L: acpi4asus-user@lists.sourceforge.net
L: platform-driver-x86@vger.kernel.org
W: http://acpi4asus.sf.net
S: Maintained
F: drivers/platform/x86/asus_acpi.c
ASUS LAPTOP EXTRAS DRIVER
M: Corentin Chary <corentincj@iksaif.net>
L: acpi4asus-user@lists.sourceforge.net
L: platform-driver-x86@vger.kernel.org
W: http://acpi4asus.sf.net
S: Maintained
F: drivers/platform/x86/asus-laptop.c
F: drivers/mmc/host/atmel-mci-regs.h
ATMEL AT91 / AT32 SERIAL DRIVER
M: Nicolas Ferre <nicolas.ferre@atmel.com>
S: Supported
F: drivers/serial/atmel_serial.c
F: include/video/atmel_lcdc.h
ATMEL MACB ETHERNET DRIVER
M: Nicolas Ferre <nicolas.ferre@atmel.com>
S: Supported
F: drivers/net/macb.*
ATMEL SPI DRIVER
M: Nicolas Ferre <nicolas.ferre@atmel.com>
S: Supported
F: drivers/spi/atmel_spi.*
ATMEL USBA UDC DRIVER
M: Nicolas Ferre <nicolas.ferre@atmel.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://avr32linux.org/twiki/bin/view/Main/AtmelUsbDeviceDriver
S: Supported
F: drivers/usb/gadget/atmel_usba_udc.*
S: Supported
F: drivers/rtc/rtc-bfin.c
BLACKFIN SDH DRIVER
M: Cliff Cai <cliff.cai@analog.com>
L: uclinux-dist-devel@blackfin.uclinux.org
W: http://blackfin.uclinux.org
S: Supported
F: drivers/mmc/host/bfin_sdh.c

BLACKFIN SERIAL DRIVER
M: Sonic Zhang <sonic.zhang@analog.com>
L: uclinux-dist-devel@blackfin.uclinux.org
M: Chris Mason <chris.mason@oracle.com>
L: linux-btrfs@vger.kernel.org
W: http://btrfs.wiki.kernel.org/
Q: http://patchwork.kernel.org/project/linux-btrfs/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable.git
S: Maintained
F: Documentation/filesystems/btrfs.txt
F: arch/x86/include/asm/tce.h
CAN NETWORK LAYER
M: Oliver Hartkopp <socketcan@hartkopp.net>
M: Oliver Hartkopp <oliver.hartkopp@volkswagen.de>
M: Urs Thuermann <urs.thuermann@volkswagen.de>
L: socketcan-core@lists.berlios.de
L: netdev@vger.kernel.org
W: http://developer.berlios.de/projects/socketcan/
S: Maintained
F: net/can/
F: include/linux/can.h
F: include/linux/can/core.h
F: include/linux/can/bcm.h
F: include/linux/can/raw.h
CAN NETWORK DRIVERS
M: Wolfgang Grandegger <wg@grandegger.com>
L: socketcan-core@lists.berlios.de
L: netdev@vger.kernel.org
W: http://developer.berlios.de/projects/socketcan/
S: Maintained
F: drivers/net/can/
F: include/linux/can/dev.h
F: include/linux/can/error.h
F: include/linux/can/netlink.h
F: include/linux/can/platform/
CELL BROADBAND ENGINE ARCHITECTURE
M: Arnd Bergmann <arnd@arndb.de>
F: arch/powerpc/oprofile/*cell*
F: arch/powerpc/platforms/cell/
CEPH DISTRIBUTED FILE SYSTEM CLIENT
M: Sage Weil <sage@newdream.net>
L: ceph-devel@vger.kernel.org
W: http://ceph.newdream.net/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git
S: Supported
F: Documentation/filesystems/ceph.txt
F: fs/ceph

CERTIFIED WIRELESS USB (WUSB) SUBSYSTEM:
M: David Vrabel <david.vrabel@csr.com>
L: linux-usb@vger.kernel.org
S: Supported
F: scripts/checkpatch.pl
CISCO VIC ETHERNET NIC DRIVER
M: Scott Feldman <scofeldm@cisco.com>
M: Vasanthy Kolluri <vkolluri@cisco.com>
M: Roopa Prabhu <roprabhu@cisco.com>
S: Supported
F: drivers/net/enic/
CMPC ACPI DRIVER
M: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
M: Daniel Oliveira Nascimento <don@syst.com.br>
L: platform-driver-x86@vger.kernel.org
S: Supported
F: drivers/platform/x86/classmate-laptop.c
COMMON INTERNET FILE SYSTEM (CIFS)
M: Steve French <sfrench@samba.org>
L: linux-cifs@vger.kernel.org
L: samba-technical@lists.samba.org (moderated for non-subscribers)
W: http://linux-cifs.samba.org/
Q: http://patchwork.ozlabs.org/project/linux-cifs-client/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6.git
S: Supported
F: Documentation/filesystems/cifs.txt
COMPAL LAPTOP SUPPORT
M: Cezary Jackiewicz <cezary.jackiewicz@gmail.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/compal-laptop.c
F: sound/pci/cs5535audio/
CX18 VIDEO4LINUX DRIVER
M: Andy Walls <awalls@md.metrocast.net>
L: ivtv-devel@ivtvdriver.org (moderated for non-subscribers)
L: linux-media@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
S: Supported
F: drivers/infiniband/hw/cxgb3/
CXGB4 ETHERNET DRIVER (CXGB4)
M: Dimitris Michailidis <dm@chelsio.com>
L: netdev@vger.kernel.org
W: http://www.chelsio.com
S: Supported
F: drivers/net/cxgb4/

CXGB4 IWARP RNIC DRIVER (IW_CXGB4)
M: Steve Wise <swise@chelsio.com>
L: linux-rdma@vger.kernel.org
W: http://www.openfabrics.org
S: Supported
F: drivers/infiniband/hw/cxgb4/

CYBERPRO FB DRIVER
M: Russell King <linux@arm.linux.org.uk>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
DELL LAPTOP DRIVER
M: Matthew Garrett <mjg59@srcf.ucam.org>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/dell-laptop.c
P: Alasdair Kergon
L: dm-devel@redhat.com
W: http://sources.redhat.com/dm
Q: http://patchwork.kernel.org/project/dm-devel/list/
S: Maintained
F: Documentation/device-mapper/
F: drivers/md/dm*
F: drivers/scsi/dpt/
DRBD DRIVER
P: Philipp Reisner
P: Lars Ellenberg
M: drbd-dev@lists.linbit.com
L: drbd-user@lists.linbit.com
W: http://www.drbd.org
T: git git://git.drbd.org/linux-2.6-drbd.git drbd
T: git git://git.drbd.org/drbd-8.3.git
S: Supported
F: drivers/block/drbd/
F: lib/lru_cache.c
F: Documentation/blockdev/drbd/
DRIVER CORE, KOBJECTS, AND SYSFS
M: Greg Kroah-Hartman <gregkh@suse.de>
DRM DRIVERS
M: David Airlie <airlied@linux.ie>
L: dri-devel@lists.freedesktop.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/airlied/drm-2.6.git
S: Maintained
F: drivers/gpu/drm/
EDAC-I5400
M: Mauro Carvalho Chehab <mchehab@redhat.com>
L: linux-edac@vger.kernel.org
W: bluesmoke.sourceforge.net
S: Maintained
F: drivers/edac/i5400_edac.c
EDAC-I7CORE
M: Mauro Carvalho Chehab <mchehab@redhat.com>
L: linux-edac@vger.kernel.org
W: bluesmoke.sourceforge.net
S: Maintained
F: drivers/edac/i7core_edac.c
F: drivers/edac/edac_mce.c
F: include/linux/edac_mce.h

EDAC-I82975X
M: Ranganathan Desikan <ravi@jetztechnologies.com>
M: "Arvind R." <arvind@jetztechnologies.com>
EEEPC LAPTOP EXTRAS DRIVER
M: Corentin Chary <corentincj@iksaif.net>
L: acpi4asus-user@lists.sourceforge.net
L: platform-driver-x86@vger.kernel.org
W: http://acpi4asus.sf.net
S: Maintained
F: drivers/platform/x86/eeepc-laptop.c
ETHERNET BRIDGE
M: Stephen Hemminger <shemminger@linux-foundation.org>
L: bridge@lists.linux-foundation.org
L: netdev@vger.kernel.org
W: http://www.linux-foundation.org/en/Net:Bridge
S: Maintained
F: include/linux/netfilter_bridge/
M: Andreas Dilger <adilger@sun.com>
L: linux-ext4@vger.kernel.org
W: http://ext4.wiki.kernel.org
Q: http://patchwork.ozlabs.org/project/linux-ext4/list/
S: Maintained
F: Documentation/filesystems/ext4.txt
F: fs/ext4/
F: Documentation/fault-injection/
F: lib/fault-inject.c
FCOE SUBSYSTEM (libfc, libfcoe, fcoe)
M: Robert Love <robert.w.love@intel.com>
L: devel@open-fcoe.org
W: www.Open-FCoE.org
S: Supported
F: drivers/scsi/libfc/
F: drivers/scsi/fcoe/
F: include/scsi/fc/
F: include/scsi/libfc.h
F: include/scsi/libfcoe.h

FILE LOCKING (flock() and fcntl()/lockf())
M: Matthew Wilcox <matthew@wil.cx>
L: linux-fsdevel@vger.kernel.org
S: Maintained
F: drivers/firewire/
F: include/linux/firewire*.h
F: tools/firewire/
FIRMWARE LOADER (request_firmware)
S: Orphan
FUJITSU LAPTOP EXTRAS
M: Jonathan Woithe <jwoithe@physics.adelaide.edu.au>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/fujitsu-laptop.c
F: drivers/isdn/gigaset/
F: include/linux/gigaset_dev.h
GRETH 10/100/1G Ethernet MAC device driver
M: Kristoffer Glembo <kristoffer@gaisler.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/greth*

HARD DRIVE ACTIVE PROTECTION SYSTEM (HDAPS) DRIVER
M: Frank Seidel <frank@f-seidel.de>
L: lm-sensors@lm-sensors.org
S: Odd Fixes
F: drivers/char/hvc_*
iSCSI BOOT FIRMWARE TABLE (iBFT) DRIVER
M: Peter Jones <pjones@redhat.com>
M: Konrad Rzeszutek Wilk <konrad@kernel.org>
S: Maintained
F: drivers/firmware/iscsi_ibft*

GSPCA FINEPIX SUBDRIVER
M: Frank Zago <frank@zago.net>
L: linux-media@vger.kernel.org
S: Maintained
F: sound/parisc/harmony.*
HEWLETT-PACKARD SMART2 RAID DRIVER
M: Chirag Kantharia <chirag.kantharia@hp.com>
L: iss_storagedev@hp.com
HP COMPAQ TC1100 TABLET WMI EXTRAS DRIVER
M: Carlos Corbacho <carlos@strangeworlds.co.uk>
L: platform-driver-x86@vger.kernel.org
S: Odd Fixes
F: drivers/platform/x86/tc1100-wmi.c
F: drivers/char/hpet.c
F: include/linux/hpet.h
HPET: x86
M: "Venkatesh Pallipadi (Venki)" <venki@google.com>
S: Maintained
F: arch/x86/kernel/hpet.c
F: arch/x86/include/asm/hpet.h
HPET: ACPI
M: Bob Picco <bob.picco@hp.com>
S: Maintained
L: linux-i2c@vger.kernel.org
W: http://i2c.wiki.kernel.org/
T: quilt kernel.org/pub/linux/kernel/people/jdelvare/linux-2.6/jdelvare-i2c/
T: git git://git.fluff.org/bjdooks/linux.git
S: Maintained
F: Documentation/i2c/
F: drivers/i2c/
IDE SUBSYSTEM
M: "David S. Miller" <davem@davemloft.net>
L: linux-ide@vger.kernel.org
Q: http://patchwork.ozlabs.org/project/linux-ide/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/ide-2.6.git
S: Maintained
F: Documentation/ide/
M: Hal Rosenstock <hal.rosenstock@gmail.com>
L: linux-rdma@vger.kernel.org
W: http://www.openib.org/
Q: http://patchwork.kernel.org/project/linux-rdma/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband.git
S: Supported
F: Documentation/infiniband/
M: Dmitry Torokhov <dmitry.torokhov@gmail.com>
L: linux-input@vger.kernel.org
Q: http://patchwork.kernel.org/project/linux-input/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/dtor/input.git
S: Maintained
F: drivers/input/
INPUT MULTITOUCH (MT) PROTOCOL
M: Henrik Rydberg <rydberg@euromail.se>
L: linux-input@vger.kernel.org
S: Maintained
F: Documentation/input/multi-touch-protocol.txt
K: \b(ABS|SYN)_MT_

INTEL IDLE DRIVER
M: Len Brown <lenb@kernel.org>
L: linux-pm@lists.linux-foundation.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux-idle-2.6.git
S: Supported
F: drivers/idle/intel_idle.c

INTEL FRAMEBUFFER DRIVER (excluding 810 and 815)
M: Maik Broemme <mbroemme@plusserver.de>
L: linux-fbdev@vger.kernel.org
S: Maintained
F: Documentation/fb/intelfb.txt
INTEL MENLOW THERMAL DRIVER
M: Sujith Thomas <sujith.thomas@intel.com>
L: platform-driver-x86@vger.kernel.org
W: http://www.lesswatts.org/projects/acpi/
S: Supported
F: drivers/platform/x86/intel_menlow.c
F: drivers/net/ixgbe/
INTEL PRO/WIRELESS 2100 NETWORK CONNECTION SUPPORT
L: linux-wireless@vger.kernel.org
S: Orphan
F: Documentation/networking/README.ipw2100
F: drivers/net/wireless/ipw2x00/ipw2100.*
INTEL PRO/WIRELESS 2915ABG NETWORK CONNECTION SUPPORT
L: linux-wireless@vger.kernel.org
S: Orphan
F: Documentation/networking/README.ipw2200
F: drivers/net/wireless/ipw2x00/ipw2200.*
INTEL(R) TRUSTED EXECUTION TECHNOLOGY (TXT)
M: Joseph Cihula <joseph.cihula@intel.com>
M: Shane Wang <shane.wang@intel.com>
L: tboot-devel@lists.sourceforge.net
W: http://tboot.sourceforge.net
T: Mercurial http://www.bughost.org/repos.hg/tboot.hg
S: Supported
F: Documentation/intel_txt.txt
F: include/linux/tboot.h
F: arch/x86/kernel/tboot.c

INTEL WIRELESS WIMAX CONNECTION 2400
M: Inaky Perez-Gonzalez <inaky.perez-gonzalez@intel.com>
M: linux-wimax@intel.com
F: include/linux/wimax/i2400m.h
INTEL WIRELESS WIFI LINK (iwlwifi)
M: Reinette Chatre <reinette.chatre@intel.com>
M: Wey-Yi Guy <wey-yi.w.guy@intel.com>
M: Intel Linux Wireless <ilw@linux.intel.com>
L: linux-wireless@vger.kernel.org
W: http://intellinuxwireless.org
INTEL WIRELESS MULTICOMM 3200 WIFI (iwmc3200wifi)
M: Samuel Ortiz <samuel.ortiz@intel.com>
M: Intel Linux Wireless <ilw@linux.intel.com>
L: linux-wireless@vger.kernel.org
S: Supported
IP1000A 10/100/1000 GIGABIT ETHERNET DRIVER
M: Francois Romieu <romieu@fr.zoreil.com>
M: Sorbica Shieh <sorbica@icplus.com.tw>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/ipg.*
IPATH DRIVER
M: Ralph Campbell <infinipath@qlogic.com>
ISDN SUBSYSTEM
M: Karsten Keil <isdn@linux-pingi.de>
L: isdn4linux@listserv.isdn4linux.de (subscribers-only)
L: netdev@vger.kernel.org
W: http://www.isdn4linux.de
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kkeil/isdn-2.6.git
S: Maintained
S: Maintained
F: drivers/isdn/hardware/eicon/
IT87 HARDWARE MONITORING DRIVER
M: Jean Delvare <khali@linux-fr.org>
L: lm-sensors@lm-sensors.org
S: Maintained
F: Documentation/hwmon/it87
F: drivers/hwmon/it87.c

IVTV VIDEO4LINUX DRIVER
M: Andy Walls <awalls@md.metrocast.net>
L: ivtv-devel@ivtvdriver.org (moderated for non-subscribers)
L: linux-media@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
KCONFIG
M: Roman Zippel <zippel@linux-m68k.org>
L: linux-kbuild@vger.kernel.org
Q: http://patchwork.kernel.org/project/linux-kbuild/list/
S: Maintained
F: Documentation/kbuild/kconfig-language.txt
F: scripts/kconfig/
S: Maintained
F: fs/autofs4/
KERNEL BUILD + files below scripts/ (unless maintained elsewhere)
M: Michal Marek <mmarek@suse.cz>
T: git git://repo.or.cz/linux-kbuild.git for-next
T: git git://repo.or.cz/linux-kbuild.git for-linus
F: Documentation/kbuild/
F: Makefile
F: scripts/Makefile.*
F: scripts/basic/
F: scripts/mk*
F: scripts/package/
KERNEL JANITORS
L: kernel-janitors@vger.kernel.org
F: arch/x86/kvm/svm.c
KERNEL VIRTUAL MACHINE (KVM) FOR POWERPC
M: Alexander Graf <agraf@suse.de>
L: kvm-ppc@vger.kernel.org
W: http://kvm.qumranet.com
S: Supported
F: include/linux/kexec.h
F: kernel/kexec.c
KEYS/KEYRINGS:
M: David Howells <dhowells@redhat.com>
L: keyrings@linux-nfs.org
S: Maintained
F: Documentation/keys.txt
F: include/linux/key.h
F: include/linux/key-type.h
F: include/keys/
F: security/keys/

KGDB / KDB /debug_core
M: Jason Wessel <jason.wessel@windriver.com>
W: http://kgdb.wiki.kernel.org/
L: kgdb-bugreport@lists.sourceforge.net
S: Maintained
F: Documentation/DocBook/kgdb.tmpl
F: drivers/misc/kgdbts.c
F: drivers/serial/kgdboc.c
F: include/linux/kdb.h
F: include/linux/kgdb.h
F: kernel/debug/
KMEMCHECK
M: Vegard Nossum <vegardno@ifi.uio.no>
M: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
M: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
M: "David S. Miller" <davem@davemloft.net>
M: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
S: Maintained
F: Documentation/kprobes.txt
F: include/linux/kprobes.h
M: Paul Mackerras <paulus@samba.org>
W: http://www.penguinppc.org/
L: linuxppc-dev@ozlabs.org
Q: http://patchwork.ozlabs.org/project/linuxppc-dev/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc.git
S: Supported
F: Documentation/powerpc/
LINUX FOR POWERPC PA SEMI PWRFICIENT
M: Olof Johansson <olof@lixom.net>
L: linuxppc-dev@ozlabs.org
S: Maintained
F: arch/powerpc/platforms/pasemi/
F: drivers/*/*pasemi*
F: drivers/*/*/*pasemi*
F: Documentation/ldm.txt
F: fs/partitions/ldm.*
LogFS
M: Joern Engel <joern@logfs.org>
L: logfs@logfs.org
W: logfs.org
S: Maintained
F: fs/logfs/

LSILOGIC MPT FUSION DRIVERS (FC/SAS/SPI)
M: Eric Moore <Eric.Moore@lsi.com>
M: support@lsi.com
LTP (Linux Test Project)
M: Rishikesh K Rajak <risrajak@linux.vnet.ibm.com>
M: Garrett Cooper <yanegomi@gmail.com>
M: Mike Frysinger <vapier@gentoo.org>
M: Subrata Modak <subrata@linux.vnet.ibm.com>
L: ltp-list@lists.sourceforge.net (subscribers-only)
W: http://ltp.sourceforge.net/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/galak/ltp.git
F: include/linux/mv643xx.h
MARVELL MWL8K WIRELESS DRIVER
M: Lennert Buytenhek <buytenh@wantstofly.org>
L: linux-wireless@vger.kernel.org
S: Maintained
F: drivers/net/wireless/mwl8k.c
MARVELL SOC MMC/SD/SDIO CONTROLLER DRIVER
M: Nicolas Pitre <nico@fluxnic.net>
S: Odd Fixes
F: drivers/mmc/host/mvsdio.*
MARVELL YUKON / SYSKONNECT DRIVER
M: Mirko Lindner <mlindner@syskonnect.de>
P: LinuxTV.org Project
L: linux-media@vger.kernel.org
W: http://linuxtv.org
Q: http://patchwork.kernel.org/project/linux-media/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
S: Maintained
F: Documentation/dvb/
MEMORY RESOURCE CONTROLLER
M: Balbir Singh <balbir@linux.vnet.ibm.com>
M: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
M: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
L: linux-mm@kvack.org
S: Maintained
MEMORY TECHNOLOGY DEVICES (MTD)
M: David Woodhouse <dwmw2@infradead.org>
L: linux-mtd@lists.infradead.org
W: http://www.linux-mtd.infradead.org/
Q: http://patchwork.ozlabs.org/project/linux-mtd/list/
T: git git://git.infradead.org/mtd-2.6.git
S: Maintained
F: drivers/mtd/
MSI LAPTOP SUPPORT
M: Lennart Poettering <mzxreary@0pointer.de>
L: platform-driver-x86@vger.kernel.org
W: https://tango.0pointer.de/mailman/listinfo/s270-linux
W: http://0pointer.de/lennart/tchibo.html
S: Maintained
MSI WMI SUPPORT
M: Anisse Astier <anisse@astier.eu>
L: platform-driver-x86@vger.kernel.org
S: Supported
F: drivers/platform/x86/msi-wmi.c
M: Rastapur Santosh <santosh.rastapur@neterion.com>
M: Sivakumar Subramani <sivakumar.subramani@neterion.com>
M: Sreenivasa Honnur <sreenivasa.honnur@neterion.com>
L: netdev@vger.kernel.org
W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/Linux?Anonymous
W: http://trac.neterion.com/cgi-bin/trac.cgi/wiki/X3100Linux?Anonymous
NETWORKING [WIRELESS]
M: "John W. Linville" <linville@tuxdriver.com>
L: linux-wireless@vger.kernel.org
Q: http://patchwork.kernel.org/project/linux-wireless/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-2.6.git
S: Maintained
F: net/mac80211/
F: net/wireless/
F: include/net/ieee80211*
F: include/linux/wireless.h
F: include/linux/iw_handler.h
F: drivers/net/wireless/
NETWORKING DRIVERS
L: linux-omap@vger.kernel.org
W: http://www.muru.com/linux/omap/
W: http://linux.omap.com/
Q: http://patchwork.kernel.org/project/linux-omap/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap-2.6.git
S: Maintained
F: arch/arm/*omap*/
M: Grant Likely <grant.likely@secretlab.ca>
L: devicetree-discuss@lists.ozlabs.org
W: http://fdt.secretlab.ca
T: git git://git.secretlab.ca/git/linux-2.6.git
S: Maintained
F: drivers/of
F: include/linux/of*.h
M: Robert Richter <robert.richter@amd.com>
L: oprofile-list@lists.sf.net
S: Maintained
F: arch/*/include/asm/oprofile*.h
F: arch/*/oprofile/
F: drivers/oprofile/
F: include/linux/oprofile.h
PANASONIC LAPTOP ACPI EXTRAS DRIVER
M: Harald Welte <laforge@gnumonks.org>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/panasonic-laptop.c
M: "James E.J. Bottomley" <jejb@parisc-linux.org>
L: linux-parisc@vger.kernel.org
W: http://www.parisc-linux.org/
Q: http://patchwork.kernel.org/project/linux-parisc/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kyle/parisc-2.6.git
S: Maintained
F: arch/parisc/
PCI SUBSYSTEM
M: Jesse Barnes <jbarnes@virtuousgeek.org>
L: linux-pci@vger.kernel.org
Q: http://patchwork.kernel.org/project/linux-pci/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jbarnes/pci-2.6.git
S: Supported
F: Documentation/PCI/
M: Peter Zijlstra <a.p.zijlstra@chello.nl>
M: Paul Mackerras <paulus@samba.org>
M: Ingo Molnar <mingo@elte.hu>
M: Arnaldo Carvalho de Melo <acme@redhat.com>
S: Supported
F: kernel/perf_event*.c
F: include/linux/perf_event.h
F: arch/*/kernel/perf_event*.c
F: arch/*/kernel/*/perf_event*.c
F: arch/*/kernel/*/*/perf_event*.c
F: arch/*/include/asm/perf_event.h
F: arch/*/lib/perf_event*.c
F: arch/*/kernel/perf_callchain.c
F: tools/perf/
F: drivers/ata/sata_promise.*
PS3 NETWORK SUPPORT
M: Geoff Levand <geoff@infradead.org>
L: netdev@vger.kernel.org
L: cbe-oss-dev@ozlabs.org
S: Maintained
F: drivers/net/ps3_gelic_net.*
PS3 PLATFORM SUPPORT
M: Geoff Levand <geoff@infradead.org>
L: linuxppc-dev@ozlabs.org
L: cbe-oss-dev@ozlabs.org
S: Maintained
F: arch/powerpc/boot/ps3*
F: arch/powerpc/include/asm/lv1call.h
F: arch/powerpc/include/asm/ps3*.h
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6.git
S: Maintained
MMP2 SUPPORT (aka ARMADA610)
M: Haojian Zhuang <haojian.zhuang@marvell.com>
M: Eric Miao <eric.y.miao@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ycmiao/pxa-linux-2.6.git
S: Maintained

PXA MMCI DRIVER
S: Orphan
L: rtc-linux@googlegroups.com
S: Maintained
QLOGIC QLA1280 SCSI DRIVER
M: Michael Reed <mdr@sgi.com>
L: linux-scsi@vger.kernel.org
S: Maintained
F: drivers/scsi/qla1280.[ch]

QLOGIC QLA2XXX FC-SCSI DRIVER
M: Andrew Vasquez <andrew.vasquez@qlogic.com>
M: linux-driver@qlogic.com
F: Documentation/scsi/LICENSE.qla2xxx
F: drivers/scsi/qla2xxx/
QLOGIC QLA4XXX iSCSI DRIVER
M: Ravi Anand <ravi.anand@qlogic.com>
M: Vikas Chaudhary <vikas.chaudhary@qlogic.com>
M: iscsi-driver@qlogic.com
L: linux-scsi@vger.kernel.org
S: Supported
F: drivers/scsi/qla4xxx/

QLOGIC QLA3XXX NETWORK DRIVER
M: Ron Mercer <ron.mercer@qlogic.com>
M: linux-driver@qlogic.com
F: Documentation/networking/LICENSE.qla3xxx
F: drivers/net/qla3xxx.*
QLOGIC QLCNIC (1/10)Gb ETHERNET DRIVER
M: Amit Kumar Salecha <amit.salecha@qlogic.com>
M: Anirban Chakraborty <anirban.chakraborty@qlogic.com>
M: linux-driver@qlogic.com
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/qlcnic/

QLOGIC QLGE 10Gb ETHERNET DRIVER
M: Ron Mercer <ron.mercer@qlogic.com>
M: linux-driver@qlogic.com
RCUTORTURE MODULE
M: Josh Triplett <josh@freedesktop.org>
M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
S: Supported
F: Documentation/RCU/torture.txt
F: kernel/rcutorture.c
M: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
W: http://www.rdrop.com/users/paulmck/rclock/
S: Supported
-F: Documentation/RCU/rcu.txt
-F: Documentation/RCU/rcuref.txt
-F: include/linux/rcupdate.h
-F: include/linux/srcu.h
-F: kernel/rcupdate.c
+F: Documentation/RCU/
+F: include/linux/rcu*
+F: include/linux/srcu*
+F: kernel/rcu*
+F: kernel/srcu*
+X: kernel/rcutorture.c
REAL TIME CLOCK DRIVER
M: Paul Gortmaker <p_gortmaker@yahoo.com>
REAL TIME CLOCK (RTC) SUBSYSTEM
M: Alessandro Zummo <a.zummo@towertech.it>
L: rtc-linux@googlegroups.com
+Q: http://patchwork.ozlabs.org/project/rtc-linux/list/
S: Maintained
F: Documentation/rtc.txt
F: drivers/rtc/
F: Documentation/rfkill.txt
F: net/rfkill/
++RICOH SMARTMEDIA/XD DRIVER
++M: Maxim Levitsky <maximlevitsky@gmail.com>
++S: Maintained
++F: drivers/mtd/nand/r852.c
++F: drivers/mtd/nand/r852.h
++
RISCOM8 DRIVER
S: Orphan
F: Documentation/serial/riscom8.txt
S: Supported
F: arch/s390/
F: drivers/s390/
++F: fs/partitions/ibm.c
++F: Documentation/s390/
++F: Documentation/DocBook/s390*
S390 NETWORK DRIVERS
M: Ursula Braun <ursula.braun@de.ibm.com>
S390 ZFCP DRIVER
M: Christof Schmitt <christof.schmitt@de.ibm.com>
-M: Martin Peschke <mp3@de.ibm.com>
+M: Swen Schillig <swen@vnet.ibm.com>
M: linux390@de.ibm.com
L: linux-s390@vger.kernel.org
W: http://www.ibm.com/developerworks/linux/linux390/
S: Supported
-F: Documentation/s390/zfcpdump.txt
F: drivers/s390/scsi/zfcp_*
S390 IUCV NETWORK LAYER
F: drivers/media/video/*7146*
F: include/media/*7146*
+TLG2300 VIDEO4LINUX-2 DRIVER
+M: Huang Shijie <shijie8@gmail.com>
+M: Kang Yong <kangyong@telegent.com>
+M: Zhang Xiaobing <xbzhang@telegent.com>
+S: Supported
+F: drivers/media/video/tlg2300
+
SC1200 WDT DRIVER
M: Zwane Mwaikambo <zwane@arm.linux.org.uk>
S: Maintained
S: Maintained
F: drivers/mmc/host/sdhci-s3c.c
++SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) ST SPEAR DRIVER
++M: Viresh Kumar <viresh.kumar@st.com>
++L: linux-mmc@vger.kernel.org
++S: Maintained
++F: drivers/mmc/host/sdhci-spear.c
++
SECURITY SUBSYSTEM
M: James Morris <jmorris@namei.org>
L: linux-security-module@vger.kernel.org (suggested Cc:)
SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER
M: Sathya Perla <sathyap@serverengines.com>
M: Subbu Seetharaman <subbus@serverengines.com>
+M: Sarveshwar Bandi <sarveshwarb@serverengines.com>
+M: Ajit Khaparde <ajitk@serverengines.com>
L: netdev@vger.kernel.org
W: http://www.serverengines.com
S: Supported
TI DAVINCI MACHINE SUPPORT
P: Kevin Hilman
M: davinci-linux-open-source@linux.davincidsp.com
+Q: http://patchwork.kernel.org/project/linux-davinci/list/
S: Supported
F: arch/arm/mach-davinci
SMC91x ETHERNET DRIVER
M: Nicolas Pitre <nico@fluxnic.net>
--S: Maintained
++S: Odd Fixes
F: drivers/net/smc91x.*
SMSC47B397 HARDWARE MONITOR DRIVER
SONY VAIO CONTROL DEVICE DRIVER
M: Mattia Dongili <malattia@linux.it>
-L: linux-acpi@vger.kernel.org
+L: platform-driver-x86@vger.kernel.org
W: http://www.linux.it/~malattia/wiki/index.php/Sony_drivers
S: Maintained
F: Documentation/laptops/sony-laptop.txt
SPARC + UltraSPARC (sparc/sparc64)
M: "David S. Miller" <davem@davemloft.net>
L: sparclinux@vger.kernel.org
+Q: http://patchwork.ozlabs.org/project/sparclinux/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next-2.6.git
S: Maintained
F: arch/sparc/
++F: drivers/sbus
+
+SPARC SERIAL DRIVERS
+M: "David S. Miller" <davem@davemloft.net>
+L: sparclinux@vger.kernel.org
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-2.6.git
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/davem/sparc-next-2.6.git
+S: Maintained
+F: drivers/serial/suncore.c
+F: drivers/serial/suncore.h
+F: drivers/serial/sunhv.c
+F: drivers/serial/sunsab.c
+F: drivers/serial/sunsab.h
+F: drivers/serial/sunsu.c
+F: drivers/serial/sunzilog.c
+F: drivers/serial/sunzilog.h
+
++SPEAR PLATFORM SUPPORT
++M: Viresh Kumar <viresh.kumar@st.com>
++W: http://www.st.com/spear
++S: Maintained
++F: arch/arm/plat-spear/
++
++SPEAR3XX MACHINE SUPPORT
++M: Viresh Kumar <viresh.kumar@st.com>
++W: http://www.st.com/spear
++S: Maintained
++F: arch/arm/mach-spear3xx/
++
++SPEAR6XX MACHINE SUPPORT
++M: Rajeev Kumar <rajeev-dlh.kumar@st.com>
++W: http://www.st.com/spear
++S: Maintained
++F: arch/arm/mach-spear6xx/
++
++SPEAR CLOCK FRAMEWORK SUPPORT
++M: Viresh Kumar <viresh.kumar@st.com>
++W: http://www.st.com/spear
++S: Maintained
++F: arch/arm/mach-spear*/clock.c
++F: arch/arm/mach-spear*/include/mach/clkdev.h
++F: arch/arm/plat-spear/clock.c
++F: arch/arm/plat-spear/include/plat/clkdev.h
++F: arch/arm/plat-spear/include/plat/clock.h
++
++SPEAR PAD MULTIPLEXING SUPPORT
++M: Viresh Kumar <viresh.kumar@st.com>
++W: http://www.st.com/spear
++S: Maintained
++F: arch/arm/plat-spear/include/plat/padmux.h
++F: arch/arm/plat-spear/padmux.c
++F: arch/arm/mach-spear*/spear*xx.c
++F: arch/arm/mach-spear*/include/mach/generic.h
++F: arch/arm/mach-spear3xx/spear3*0.c
++F: arch/arm/mach-spear3xx/spear3*0_evb.c
++F: arch/arm/mach-spear6xx/spear600.c
++F: arch/arm/mach-spear6xx/spear600_evb.c
+
SPECIALIX IO8+ MULTIPORT SERIAL CARD DRIVER
M: Roger Wolff <R.E.Wolff@BitWizard.nl>
S: Supported
M: David Brownell <dbrownell@users.sourceforge.net>
M: Grant Likely <grant.likely@secretlab.ca>
L: spi-devel-general@lists.sourceforge.net
+Q: http://patchwork.kernel.org/project/spi-devel-general/list/
++T: git git://git.secretlab.ca/git/linux-2.6.git
S: Maintained
F: Documentation/spi/
F: drivers/spi/
STARMODE RADIO IP (STRIP) PROTOCOL DRIVER
S: Orphan
-F: drivers/net/wireless/strip.c
+F: drivers/staging/strip/strip.c
F: include/linux/if_strip.h
STRADIS MPEG-2 DECODER DRIVER
M: Paul Mundt <lethal@linux-sh.org>
L: linux-sh@vger.kernel.org
W: http://www.linux-sh.org
+Q: http://patchwork.kernel.org/project/linux-sh/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6.git
S: Supported
F: Documentation/sh/
THINKPAD ACPI EXTRAS DRIVER
M: Henrique de Moraes Holschuh <ibm-acpi@hmh.eng.br>
L: ibm-acpi-devel@lists.sourceforge.net
+L: platform-driver-x86@vger.kernel.org
W: http://ibm-acpi.sourceforge.net
W: http://thinkwiki.org/wiki/Ibm-acpi
T: git git://repo.or.cz/linux-2.6/linux-acpi-2.6/ibm-acpi-2.6.git
F: sound/soc/codecs/twl4030*
TIPC NETWORK LAYER
-M: Per Liden <per.liden@ericsson.com>
M: Jon Maloy <jon.maloy@ericsson.com>
M: Allan Stephens <allan.stephens@windriver.com>
L: tipc-discussion@lists.sourceforge.net
TOPSTAR LAPTOP EXTRAS DRIVER
M: Herton Ronaldo Krzesinski <herton@mandriva.com.br>
+L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/topstar-laptop.c
TOSHIBA ACPI EXTRAS DRIVER
+L: platform-driver-x86@vger.kernel.org
S: Orphan
F: drivers/platform/x86/toshiba_acpi.c
F: drivers/mmc/host/tmio_mmc.*
TMPFS (SHMEM FILESYSTEM)
-M: Hugh Dickins <hugh.dickins@tiscali.co.uk>
+M: Hugh Dickins <hughd@google.com>
L: linux-mm@kvack.org
S: Maintained
F: include/linux/shmem_fs.h
L: linux-uvc-devel@lists.berlios.de (subscribers-only)
L: linux-media@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-2.6.git
--W: http://linux-uvc.berlios.de
++W: http://www.ideasonboard.org/uvc/
S: Maintained
F: drivers/media/video/uvc/
F: Documentation/filesystems/vfat.txt
F: fs/fat/
+VIRTIO CONSOLE DRIVER
+M: Amit Shah <amit.shah@redhat.com>
+L: virtualization@lists.linux-foundation.org
+S: Maintained
+F: drivers/char/virtio_console.c
+F: include/linux/virtio_console.h
+
+VIRTIO HOST (VHOST)
+M: "Michael S. Tsirkin" <mst@redhat.com>
+L: kvm@vger.kernel.org
+L: virtualization@lists.osdl.org
+L: netdev@vger.kernel.org
+S: Maintained
+F: drivers/vhost/
+F: include/linux/vhost.h
+
VIA RHINE NETWORK DRIVER
M: Roger Luethi <rl@hellgate.ch>
S: Maintained
WATCHDOG DEVICE DRIVERS
M: Wim Van Sebroeck <wim@iguana.be>
++L: linux-watchdog@vger.kernel.org
++W: http://www.linux-watchdog.org/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/wim/linux-2.6-watchdog.git
S: Maintained
F: Documentation/watchdog/
W: http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/
S: Maintained
F: Documentation/networking/wavelan.txt
-F: drivers/net/wireless/wavelan*
+F: drivers/staging/wavelan/
WD7000 SCSI DRIVER
M: Miroslav Zagorac <zaga@fly.cc.fer.hr>
F: drivers/input/misc/wistron_btns.c
WL1251 WIRELESS DRIVER
-M: Kalle Valo <kalle.valo@nokia.com>
+M: Kalle Valo <kalle.valo@iki.fi>
L: linux-wireless@vger.kernel.org
W: http://wireless.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-testing.git
F: Documentation/x86/
F: arch/x86/
+X86 PLATFORM DRIVERS
+M: Matthew Garrett <mjg@redhat.com>
+L: platform-driver-x86@vger.kernel.org
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/mjg59/platform-drivers-x86.git
+S: Maintained
+F: drivers/platform/x86
+
XEN HYPERVISOR INTERFACE
M: Jeremy Fitzhardinge <jeremy@xensource.com>
M: Chris Wright <chrisw@sous-sol.org>
THE REST
M: Linus Torvalds <torvalds@linux-foundation.org>
L: linux-kernel@vger.kernel.org
+Q: http://patchwork.kernel.org/project/LKML/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
S: Buried alive in reporters
F: *
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
- #include <linux/timer.h>
#include <linux/workqueue.h>
#include <asm/atomic.h>
#define BIB_CRC(v) ((v) << 0)
#define BIB_CRC_LENGTH(v) ((v) << 16)
#define BIB_INFO_LENGTH(v) ((v) << 24)
-
+ #define BIB_BUS_NAME 0x31333934 /* "1394" */
#define BIB_LINK_SPEED(v) ((v) << 0)
#define BIB_GENERATION(v) ((v) << 4)
#define BIB_MAX_ROM(v) ((v) << 8)
#define BIB_BMC ((1) << 28)
#define BIB_ISC ((1) << 29)
#define BIB_CMC ((1) << 30)
- #define BIB_IMC ((1) << 31)
+ #define BIB_IRMC ((1) << 31)
+ #define NODE_CAPABILITIES 0x0c0083c0 /* per IEEE 1394 clause 8.3.2.6.5.2 */
static void generate_config_rom(struct fw_card *card, __be32 *config_rom)
{
config_rom[0] = cpu_to_be32(
BIB_CRC_LENGTH(4) | BIB_INFO_LENGTH(4) | BIB_CRC(0));
- config_rom[1] = cpu_to_be32(0x31333934);
+ config_rom[1] = cpu_to_be32(BIB_BUS_NAME);
config_rom[2] = cpu_to_be32(
BIB_LINK_SPEED(card->link_speed) |
BIB_GENERATION(card->config_rom_generation++ % 14 + 2) |
BIB_MAX_ROM(2) |
BIB_MAX_RECEIVE(card->max_receive) |
- BIB_BMC | BIB_ISC | BIB_CMC | BIB_IMC);
+ BIB_BMC | BIB_ISC | BIB_CMC | BIB_IRMC);
config_rom[3] = cpu_to_be32(card->guid >> 32);
config_rom[4] = cpu_to_be32(card->guid);
/* Generate root directory. */
- config_rom[6] = cpu_to_be32(0x0c0083c0); /* node capabilities */
+ config_rom[6] = cpu_to_be32(NODE_CAPABILITIES);
i = 7;
j = 7 + descriptor_count;
}
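The `BIB_*` shift macros above pack independent bus-info-block fields into single quadlets before byte-swapping. The same packing idiom, reproduced as a standalone sketch (only the three header macros are copied; the helper function is an illustration, not kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Same field-packing style as the bus info block header: each macro
 * shifts a small field into its slot within one 32-bit quadlet. */
#define BIB_CRC(v)         ((uint32_t)(v) << 0)
#define BIB_CRC_LENGTH(v)  ((uint32_t)(v) << 16)
#define BIB_INFO_LENGTH(v) ((uint32_t)(v) << 24)

static uint32_t pack_bib_header(void)
{
	/* info_length = 4 quadlets, crc_length = 4, crc = 0 */
	return BIB_INFO_LENGTH(4) | BIB_CRC_LENGTH(4) | BIB_CRC(0);
}
```

Because the fields occupy disjoint bit ranges, OR-ing the macro results composes them losslessly.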
EXPORT_SYMBOL(fw_core_remove_descriptor);
++ static int reset_bus(struct fw_card *card, bool short_reset)
++ {
++ int reg = short_reset ? 5 : 1;
++ int bit = short_reset ? PHY_BUS_SHORT_RESET : PHY_BUS_RESET;
++
++ return card->driver->update_phy_reg(card, reg, 0, bit);
++ }
++
++ void fw_schedule_bus_reset(struct fw_card *card, bool delayed, bool short_reset)
++ {
++ /* We don't try hard to sort out requests of long vs. short resets. */
++ card->br_short = short_reset;
++
++ /* Use an arbitrary short delay to combine multiple reset requests. */
++ fw_card_get(card);
++ if (!schedule_delayed_work(&card->br_work,
++ delayed ? DIV_ROUND_UP(HZ, 100) : 0))
++ fw_card_put(card);
++ }
++ EXPORT_SYMBOL(fw_schedule_bus_reset);
++
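`fw_schedule_bus_reset()` above takes a card reference before scheduling and drops it again when `schedule_delayed_work()` reports the work was already pending, so exactly one reference backs each queued work item. A simplified model of that get/schedule/put pattern (the struct and helpers are stand-ins, not kernel API):

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the fw_card_get()/schedule_delayed_work()/fw_card_put()
 * pattern: the pending work item owns exactly one reference. */
struct card { int refcount; bool work_pending; };

static void card_get(struct card *c) { c->refcount++; }
static void card_put(struct card *c) { c->refcount--; }

/* Returns false if the work was already queued, like
 * schedule_delayed_work() does. */
static bool schedule_work_once(struct card *c)
{
	if (c->work_pending)
		return false;
	c->work_pending = true;
	return true;
}

static void request_work(struct card *c)
{
	card_get(c);
	if (!schedule_work_once(c))
		card_put(c);	/* already queued: drop our extra reference */
}
```

Repeated requests coalesce into one pending work item without inflating the refcount, which is why the real code can safely combine multiple reset requests under one short delay.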
++ static void br_work(struct work_struct *work)
++ {
++ struct fw_card *card = container_of(work, struct fw_card, br_work.work);
++
++ /* Delay for 2s after last reset per IEEE 1394 clause 8.2.1. */
++ if (card->reset_jiffies != 0 &&
++ time_is_after_jiffies(card->reset_jiffies + 2 * HZ)) {
++ if (!schedule_delayed_work(&card->br_work, 2 * HZ))
++ fw_card_put(card);
++ return;
++ }
++
++ fw_send_phy_config(card, FW_PHY_CONFIG_NO_NODE_ID, card->generation,
++ FW_PHY_CONFIG_CURRENT_GAP_COUNT);
++ reset_bus(card, card->br_short);
++ fw_card_put(card);
++ }
++
static void allocate_broadcast_channel(struct fw_card *card, int generation)
{
int channel, bandwidth = 0;
-- fw_iso_resource_manage(card, generation, 1ULL << 31, &channel,
-- &bandwidth, true, card->bm_transaction_data);
-- if (channel == 31) {
++ if (!card->broadcast_channel_allocated) {
++ fw_iso_resource_manage(card, generation, 1ULL << 31,
++ &channel, &bandwidth, true,
++ card->bm_transaction_data);
++ if (channel != 31) {
++ fw_notify("failed to allocate broadcast channel\n");
++ return;
++ }
card->broadcast_channel_allocated = true;
-- device_for_each_child(card->device, (void *)(long)generation,
-- fw_device_set_broadcast_channel);
}
++
++ device_for_each_child(card->device, (void *)(long)generation,
++ fw_device_set_broadcast_channel);
}
static const char gap_count_table[] = {
void fw_schedule_bm_work(struct fw_card *card, unsigned long delay)
{
fw_card_get(card);
-- if (!schedule_delayed_work(&card->work, delay))
++ if (!schedule_delayed_work(&card->bm_work, delay))
fw_card_put(card);
}
-- static void fw_card_bm_work(struct work_struct *work)
++ static void bm_work(struct work_struct *work)
{
-- struct fw_card *card = container_of(work, struct fw_card, work.work);
++ struct fw_card *card = container_of(work, struct fw_card, bm_work.work);
-- struct fw_device *root_device;
++ struct fw_device *root_device, *irm_device;
struct fw_node *root_node;
-- unsigned long flags;
-- int root_id, new_root_id, irm_id, local_id;
++ int root_id, new_root_id, irm_id, bm_id, local_id;
int gap_count, generation, grace, rcode;
bool do_reset = false;
bool root_device_is_running;
bool root_device_is_cmc;
++ bool irm_is_1394_1995_only;
-- spin_lock_irqsave(&card->lock, flags);
++ spin_lock_irq(&card->lock);
if (card->local_node == NULL) {
-- spin_unlock_irqrestore(&card->lock, flags);
++ spin_unlock_irq(&card->lock);
goto out_put_card;
}
generation = card->generation;
++
root_node = card->root_node;
fw_node_get(root_node);
root_device = root_node->data;
root_device_is_running = root_device &&
atomic_read(&root_device->state) == FW_DEVICE_RUNNING;
root_device_is_cmc = root_device && root_device->cmc;
++
++ irm_device = card->irm_node->data;
++ irm_is_1394_1995_only = irm_device && irm_device->config_rom &&
++ (irm_device->config_rom[2] & 0x000000f0) == 0;
++
root_id = root_node->node_id;
irm_id = card->irm_node->node_id;
local_id = card->local_node->node_id;
grace = time_after(jiffies, card->reset_jiffies + DIV_ROUND_UP(HZ, 8));
-- if (is_next_generation(generation, card->bm_generation) ||
++ if ((is_next_generation(generation, card->bm_generation) &&
++ !card->bm_abdicate) ||
(card->bm_generation != generation && grace)) {
/*
* This first step is to figure out who is IRM and
if (!card->irm_node->link_on) {
new_root_id = local_id;
-- fw_notify("IRM has link off, making local node (%02x) root.\n",
-- new_root_id);
++ fw_notify("%s, making local node (%02x) root.\n",
++ "IRM has link off", new_root_id);
++ goto pick_me;
++ }
++
++ if (irm_is_1394_1995_only) {
++ new_root_id = local_id;
++ fw_notify("%s, making local node (%02x) root.\n",
++ "IRM is not 1394a compliant", new_root_id);
goto pick_me;
}
card->bm_transaction_data[0] = cpu_to_be32(0x3f);
card->bm_transaction_data[1] = cpu_to_be32(local_id);
-- spin_unlock_irqrestore(&card->lock, flags);
++ spin_unlock_irq(&card->lock);
rcode = fw_run_transaction(card, TCODE_LOCK_COMPARE_SWAP,
irm_id, generation, SCODE_100,
CSR_REGISTER_BASE + CSR_BUS_MANAGER_ID,
-- card->bm_transaction_data,
-- sizeof(card->bm_transaction_data));
++ card->bm_transaction_data, 8);
if (rcode == RCODE_GENERATION)
/* Another bus reset, BM work has been rescheduled. */
goto out;
-- if (rcode == RCODE_COMPLETE &&
-- card->bm_transaction_data[0] != cpu_to_be32(0x3f)) {
++ bm_id = be32_to_cpu(card->bm_transaction_data[0]);
+
++ spin_lock_irq(&card->lock);
++ if (rcode == RCODE_COMPLETE && generation == card->generation)
++ card->bm_node_id =
++ bm_id == 0x3f ? local_id : 0xffc0 | bm_id;
++ spin_unlock_irq(&card->lock);
+
++ if (rcode == RCODE_COMPLETE && bm_id != 0x3f) {
/* Somebody else is BM. Only act as IRM. */
if (local_id == irm_id)
allocate_broadcast_channel(card, generation);
goto out;
}
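The lock transaction above is the IEEE 1394 bus-manager election: a compare-and-swap on the BUS_MANAGER_ID register, where 0x3f means "no bus manager yet" and the old register contents come back in `bm_transaction_data[0]`. A host-side sketch of just that register semantic (the register is modeled as a plain variable; this illustrates the compare-and-swap rule, not the wire protocol):

```c
#include <assert.h>
#include <stdint.h>

/* Compare-and-swap on a modeled BUS_MANAGER_ID register. 0x3f means
 * "no bus manager yet"; the old contents are returned, so a caller
 * that reads back 0x3f knows its own swap won the election. */
static uint32_t bm_id_compare_swap(uint32_t *reg, uint32_t expect,
				   uint32_t new_id)
{
	uint32_t old = *reg;

	if (old == expect)
		*reg = new_id;
	return old;
}
```

This is why the code checks `bm_id != 0x3f` after a completed lock: any other value is the node id of a manager that got there first.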
-- spin_lock_irqsave(&card->lock, flags);
++ if (rcode == RCODE_SEND_ERROR) {
++ /*
++ * We have been unable to send the lock request due to
++ * some local problem. Let's try again later and hope
++ * that the problem has gone away by then.
++ */
++ fw_schedule_bm_work(card, DIV_ROUND_UP(HZ, 8));
++ goto out;
++ }
++
++ spin_lock_irq(&card->lock);
if (rcode != RCODE_COMPLETE) {
/*
* root, and thus, IRM.
*/
new_root_id = local_id;
-- fw_notify("BM lock failed, making local node (%02x) root.\n",
-- new_root_id);
++ fw_notify("%s, making local node (%02x) root.\n",
++ "BM lock failed", new_root_id);
goto pick_me;
}
} else if (card->bm_generation != generation) {
* We weren't BM in the last generation, and the last
* bus reset is less than 125ms ago. Reschedule this job.
*/
-- spin_unlock_irqrestore(&card->lock, flags);
++ spin_unlock_irq(&card->lock);
fw_schedule_bm_work(card, DIV_ROUND_UP(HZ, 8));
goto out;
}
* If we haven't probed this device yet, bail out now
* and let's try again once that's done.
*/
-- spin_unlock_irqrestore(&card->lock, flags);
++ spin_unlock_irq(&card->lock);
goto out;
} else if (root_device_is_cmc) {
/*
-- * FIXME: I suppose we should set the cmstr bit in the
-- * STATE_CLEAR register of this node, as described in
-- * 1394-1995, 8.4.2.6. Also, send out a force root
-- * packet for this node.
++ * We will send out a force root packet for this
++ * node as part of the gap count optimization.
*/
new_root_id = root_id;
} else {
(card->gap_count != gap_count || new_root_id != root_id))
do_reset = true;
-- spin_unlock_irqrestore(&card->lock, flags);
++ spin_unlock_irq(&card->lock);
if (do_reset) {
fw_notify("phy config: card %d, new root=%x, gap_count=%d\n",
card->index, new_root_id, gap_count);
fw_send_phy_config(card, new_root_id, generation, gap_count);
-- fw_core_initiate_bus_reset(card, 1);
++ reset_bus(card, true);
/* Will allocate broadcast channel after the reset. */
-- } else {
-- if (local_id == irm_id)
-- allocate_broadcast_channel(card, generation);
++ goto out;
++ }
++
++ if (root_device_is_cmc) {
++ /*
++ * Make sure that the cycle master sends cycle start packets.
++ */
++ card->bm_transaction_data[0] = cpu_to_be32(CSR_STATE_BIT_CMSTR);
++ rcode = fw_run_transaction(card, TCODE_WRITE_QUADLET_REQUEST,
++ root_id, generation, SCODE_100,
++ CSR_REGISTER_BASE + CSR_STATE_SET,
++ card->bm_transaction_data, 4);
++ if (rcode == RCODE_GENERATION)
++ goto out;
}
++ if (local_id == irm_id)
++ allocate_broadcast_channel(card, generation);
++
out:
fw_node_put(root_node);
out_put_card:
fw_card_put(card);
}
- static void flush_timer_callback(unsigned long data)
- {
- struct fw_card *card = (struct fw_card *)data;
-
- fw_flush_transactions(card);
- }
-
void fw_card_initialize(struct fw_card *card,
const struct fw_card_driver *driver,
struct device *device)
card->device = device;
card->current_tlabel = 0;
card->tlabel_mask = 0;
++ card->split_timeout_hi = 0;
++ card->split_timeout_lo = 800 << 19;
++ card->split_timeout_cycles = 800;
++ card->split_timeout_jiffies = DIV_ROUND_UP(HZ, 10);
card->color = 0;
card->broadcast_channel = BROADCAST_CHANNEL_INITIAL;
kref_init(&card->kref);
init_completion(&card->done);
INIT_LIST_HEAD(&card->transaction_list);
++ INIT_LIST_HEAD(&card->phy_receiver_list);
spin_lock_init(&card->lock);
- setup_timer(&card->flush_timer,
- flush_timer_callback, (unsigned long)card);
card->local_node = NULL;
-- INIT_DELAYED_WORK(&card->work, fw_card_bm_work);
++ INIT_DELAYED_WORK(&card->br_work, br_work);
++ INIT_DELAYED_WORK(&card->bm_work, bm_work);
}
EXPORT_SYMBOL(fw_card_initialize);
}
EXPORT_SYMBOL(fw_card_add);
--
/*
* The next few functions implement a dummy driver that is used once a card
* driver shuts down an fw_card. This allows the driver to cleanly unload,
* as all IO to the card will be handled (and failed) by the dummy driver
* instead of calling into the module. Only functions for iso context
* shutdown still need to be provided by the card driver.
++ *
++ * .read/write_csr() should never be called anymore after the dummy driver
++ * was bound since they are only used within request handler context.
++ * .set_config_rom() is never called since the card is taken out of card_list
++ * before switching to the dummy driver.
*/
-- static int dummy_enable(struct fw_card *card,
-- const __be32 *config_rom, size_t length)
++ static int dummy_read_phy_reg(struct fw_card *card, int address)
{
-- BUG();
-- return -1;
++ return -ENODEV;
}
static int dummy_update_phy_reg(struct fw_card *card, int address,
return -ENODEV;
}
-- static int dummy_set_config_rom(struct fw_card *card,
-- const __be32 *config_rom, size_t length)
-- {
-- /*
-- * We take the card out of card_list before setting the dummy
-- * driver, so this should never get called.
-- */
-- BUG();
-- return -1;
-- }
--
static void dummy_send_request(struct fw_card *card, struct fw_packet *packet)
{
-- packet->callback(packet, card, -ENODEV);
++ packet->callback(packet, card, RCODE_CANCELLED);
}
static void dummy_send_response(struct fw_card *card, struct fw_packet *packet)
{
-- packet->callback(packet, card, -ENODEV);
++ packet->callback(packet, card, RCODE_CANCELLED);
}
static int dummy_cancel_packet(struct fw_card *card, struct fw_packet *packet)
return -ENODEV;
}
++ static struct fw_iso_context *dummy_allocate_iso_context(struct fw_card *card,
++ int type, int channel, size_t header_size)
++ {
++ return ERR_PTR(-ENODEV);
++ }
++
++ static int dummy_start_iso(struct fw_iso_context *ctx,
++ s32 cycle, u32 sync, u32 tags)
++ {
++ return -ENODEV;
++ }
++
++ static int dummy_set_iso_channels(struct fw_iso_context *ctx, u64 *channels)
++ {
++ return -ENODEV;
++ }
++
++ static int dummy_queue_iso(struct fw_iso_context *ctx, struct fw_iso_packet *p,
++ struct fw_iso_buffer *buffer, unsigned long payload)
++ {
++ return -ENODEV;
++ }
++
static const struct fw_card_driver dummy_driver_template = {
-- .enable = dummy_enable,
-- .update_phy_reg = dummy_update_phy_reg,
-- .set_config_rom = dummy_set_config_rom,
-- .send_request = dummy_send_request,
-- .cancel_packet = dummy_cancel_packet,
-- .send_response = dummy_send_response,
-- .enable_phys_dma = dummy_enable_phys_dma,
++ .read_phy_reg = dummy_read_phy_reg,
++ .update_phy_reg = dummy_update_phy_reg,
++ .send_request = dummy_send_request,
++ .send_response = dummy_send_response,
++ .cancel_packet = dummy_cancel_packet,
++ .enable_phys_dma = dummy_enable_phys_dma,
++ .allocate_iso_context = dummy_allocate_iso_context,
++ .start_iso = dummy_start_iso,
++ .set_iso_channels = dummy_set_iso_channels,
++ .queue_iso = dummy_queue_iso,
};
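The dummy driver above swaps a live ops table for stubs that fail cleanly with `-ENODEV`, so in-flight callers get ordinary errors instead of calling into an unloaded module. A condensed sketch of that pattern (the struct and function names are invented for illustration):

```c
#include <assert.h>
#include <errno.h>

/* A minimal ops table whose entries all fail with -ENODEV, standing in
 * for a driver that has been shut down. */
struct ops {
	int (*read_reg)(int addr);
	int (*write_reg)(int addr, int val);
};

static int dummy_read_reg(int addr)
{
	(void)addr;
	return -ENODEV;
}

static int dummy_write_reg(int addr, int val)
{
	(void)addr;
	(void)val;
	return -ENODEV;
}

static const struct ops dummy_ops = {
	.read_reg  = dummy_read_reg,
	.write_reg = dummy_write_reg,
};
```

Keeping the table `const` and fully populated means callers need no NULL checks; every path through the old interface still resolves to a function.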
void fw_card_release(struct kref *kref)
card->driver->update_phy_reg(card, 4,
PHY_LINK_ACTIVE | PHY_CONTENDER, 0);
-- fw_core_initiate_bus_reset(card, 1);
++ fw_schedule_bus_reset(card, false, true);
mutex_lock(&card_mutex);
list_del_init(&card->link);
wait_for_completion(&card->done);
WARN_ON(!list_empty(&card->transaction_list));
- del_timer_sync(&card->flush_timer);
}
EXPORT_SYMBOL(fw_core_remove_card);
--
-- int fw_core_initiate_bus_reset(struct fw_card *card, int short_reset)
-- {
-- int reg = short_reset ? 5 : 1;
-- int bit = short_reset ? PHY_BUS_SHORT_RESET : PHY_BUS_RESET;
--
-- return card->driver->update_phy_reg(card, reg, 0, bit);
-- }
-- EXPORT_SYMBOL(fw_core_initiate_bus_reset);
* Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
++ #include <linux/bug.h>
#include <linux/compat.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/poll.h>
-- #include <linux/sched.h>
++ #include <linux/sched.h> /* required for linux/wait.h */
+#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/time.h>
#include "core.h"
++ /*
++ * ABI version history is documented in linux/firewire-cdev.h.
++ */
++ #define FW_CDEV_KERNEL_VERSION 4
++ #define FW_CDEV_VERSION_EVENT_REQUEST2 4
++ #define FW_CDEV_VERSION_ALLOCATE_REGION_END 4
++
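With per-feature version constants like the ones defined above, the character device can keep old clients on the layouts they were compiled against while newer clients get extended ones. A schematic of that gate (constant values mirror the defines above; the function is hypothetical):

```c
#include <assert.h>

#define KERNEL_ABI_VERSION     4
#define VERSION_EVENT_REQUEST2 4

/* Pick an event format based on the ABI version a client declared:
 * clients predating VERSION_EVENT_REQUEST2 get the legacy layout. */
static int event_format_for(int client_version)
{
	return client_version < VERSION_EVENT_REQUEST2 ? 1 : 2;
}
```

Gating on per-feature constants rather than one global version number lets each extension be adopted independently by userspace.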
struct client {
u32 version;
struct fw_device *device;
struct fw_iso_buffer buffer;
unsigned long vm_start;
++ struct list_head phy_receiver_link;
++ u64 phy_receiver_closure;
++
struct list_head link;
struct kref kref;
};
struct inbound_transaction_resource {
struct client_resource resource;
++ struct fw_card *card;
struct fw_request *request;
void *data;
size_t length;
struct inbound_transaction_event {
struct event event;
-- struct fw_cdev_event_request request;
++ union {
++ struct fw_cdev_event_request request;
++ struct fw_cdev_event_request2 request2;
++ } req;
};
struct iso_interrupt_event {
struct fw_cdev_event_iso_interrupt interrupt;
};
++ struct iso_interrupt_mc_event {
++ struct event event;
++ struct fw_cdev_event_iso_interrupt_mc interrupt;
++ };
++
struct iso_resource_event {
struct event event;
struct fw_cdev_event_iso_resource iso_resource;
};
++ struct outbound_phy_packet_event {
++ struct event event;
++ struct client *client;
++ struct fw_packet p;
++ struct fw_cdev_event_phy_packet phy_packet;
++ };
++
++ struct inbound_phy_packet_event {
++ struct event event;
++ struct fw_cdev_event_phy_packet phy_packet;
++ };
++
static inline void __user *u64_to_uptr(__u64 value)
{
return (void __user *)(unsigned long)value;
idr_init(&client->resource_idr);
INIT_LIST_HEAD(&client->event_list);
init_waitqueue_head(&client->wait);
++ INIT_LIST_HEAD(&client->phy_receiver_link);
kref_init(&client->kref);
file->private_data = client;
list_add_tail(&client->link, &device->client_list);
mutex_unlock(&device->client_list_mutex);
- return 0;
+ return nonseekable_open(inode, file);
}
static void queue_event(struct client *client, struct event *event,
event->generation = client->device->generation;
event->node_id = client->device->node_id;
event->local_node_id = card->local_node->node_id;
-- event->bm_node_id = 0; /* FIXME: We don't track the BM. */
++ event->bm_node_id = card->bm_node_id;
event->irm_node_id = card->irm_node->node_id;
event->root_node_id = card->root_node->node_id;
e = kzalloc(sizeof(*e), GFP_KERNEL);
if (e == NULL) {
-- fw_notify("Out of memory when allocating bus reset event\n");
++ fw_notify("Out of memory when allocating event\n");
return;
}
struct fw_cdev_allocate_iso_resource allocate_iso_resource;
struct fw_cdev_send_stream_packet send_stream_packet;
struct fw_cdev_get_cycle_timer2 get_cycle_timer2;
++ struct fw_cdev_send_phy_packet send_phy_packet;
++ struct fw_cdev_receive_phy_packets receive_phy_packets;
++ struct fw_cdev_set_iso_channels set_iso_channels;
};
static int ioctl_get_info(struct client *client, union ioctl_arg *arg)
unsigned long ret = 0;
client->version = a->version;
-- a->version = FW_CDEV_VERSION;
++ a->version = FW_CDEV_KERNEL_VERSION;
a->card = client->device->card->index;
down_read(&fw_device_rwsem);
(request->length > 4096 || request->length > 512 << speed))
return -EIO;
++ if (request->tcode == TCODE_WRITE_QUADLET_REQUEST &&
++ request->length < 4)
++ return -EINVAL;
++
e = kmalloc(sizeof(*e) + request->length, GFP_KERNEL);
if (e == NULL)
return -ENOMEM;
if (is_fcp_request(r->request))
kfree(r->data);
else
-- fw_send_response(client->device->card, r->request,
-- RCODE_CONFLICT_ERROR);
++ fw_send_response(r->card, r->request, RCODE_CONFLICT_ERROR);
++
++ fw_card_put(r->card);
kfree(r);
}
static void handle_request(struct fw_card *card, struct fw_request *request,
int tcode, int destination, int source,
-- int generation, int speed,
-- unsigned long long offset,
++ int generation, unsigned long long offset,
void *payload, size_t length, void *callback_data)
{
struct address_handler_resource *handler = callback_data;
struct inbound_transaction_resource *r;
struct inbound_transaction_event *e;
++ size_t event_size0;
void *fcp_frame = NULL;
int ret;
++ /* card may be different from handler->client->device->card */
++ fw_card_get(card);
++
r = kmalloc(sizeof(*r), GFP_ATOMIC);
e = kmalloc(sizeof(*e), GFP_ATOMIC);
-- if (r == NULL || e == NULL)
++ if (r == NULL || e == NULL) {
++ fw_notify("Out of memory when allocating event\n");
goto failed;
--
++ }
++ r->card = card;
r->request = request;
r->data = payload;
r->length = length;
if (ret < 0)
goto failed;
-- e->request.type = FW_CDEV_EVENT_REQUEST;
-- e->request.tcode = tcode;
-- e->request.offset = offset;
-- e->request.length = length;
-- e->request.handle = r->resource.handle;
-- e->request.closure = handler->closure;
++ if (handler->client->version < FW_CDEV_VERSION_EVENT_REQUEST2) {
++ struct fw_cdev_event_request *req = &e->req.request;
++
++ if (tcode & 0x10)
++ tcode = TCODE_LOCK_REQUEST;
++
++ req->type = FW_CDEV_EVENT_REQUEST;
++ req->tcode = tcode;
++ req->offset = offset;
++ req->length = length;
++ req->handle = r->resource.handle;
++ req->closure = handler->closure;
++ event_size0 = sizeof(*req);
++ } else {
++ struct fw_cdev_event_request2 *req = &e->req.request2;
++
++ req->type = FW_CDEV_EVENT_REQUEST2;
++ req->tcode = tcode;
++ req->offset = offset;
++ req->source_node_id = source;
++ req->destination_node_id = destination;
++ req->card = card->index;
++ req->generation = generation;
++ req->length = length;
++ req->handle = r->resource.handle;
++ req->closure = handler->closure;
++ event_size0 = sizeof(*req);
++ }
queue_event(handler->client, &e->event,
-- &e->request, sizeof(e->request), r->data, length);
++ &e->req, event_size0, r->data, length);
return;
failed:
if (!is_fcp_request(request))
fw_send_response(card, request, RCODE_CONFLICT_ERROR);
++
++ fw_card_put(card);
}
static void release_address_handler(struct client *client,
return -ENOMEM;
region.start = a->offset;
-- region.end = a->offset + a->length;
++ if (client->version < FW_CDEV_VERSION_ALLOCATE_REGION_END)
++ region.end = a->offset + a->length;
++ else
++ region.end = a->region_end;
++
r->handler.length = a->length;
r->handler.address_callback = handle_request;
r->handler.callback_data = r;
kfree(r);
return ret;
}
++ a->offset = r->handler.offset;
r->resource.release = release_address_handler;
ret = add_client_resource(client, &r->resource, GFP_KERNEL);
if (is_fcp_request(r->request))
goto out;
-- if (a->length < r->length)
-- r->length = a->length;
-- if (copy_from_user(r->data, u64_to_uptr(a->data), r->length)) {
++ if (a->length != fw_get_response_length(r->request)) {
++ ret = -EINVAL;
++ kfree(r->request);
++ goto out;
++ }
++ if (copy_from_user(r->data, u64_to_uptr(a->data), a->length)) {
ret = -EFAULT;
kfree(r->request);
goto out;
}
-- fw_send_response(client->device->card, r->request, a->rcode);
++ fw_send_response(r->card, r->request, a->rcode);
out:
++ fw_card_put(r->card);
kfree(r);
return ret;
static int ioctl_initiate_bus_reset(struct client *client, union ioctl_arg *arg)
{
-- return fw_core_initiate_bus_reset(client->device->card,
++ fw_schedule_bus_reset(client->device->card, true,
arg->initiate_bus_reset.type == FW_CDEV_SHORT_RESET);
++ return 0;
}
static void release_descriptor(struct client *client,
struct client *client = data;
struct iso_interrupt_event *e;
-- e = kzalloc(sizeof(*e) + header_length, GFP_ATOMIC);
-- if (e == NULL)
++ e = kmalloc(sizeof(*e) + header_length, GFP_ATOMIC);
++ if (e == NULL) {
++ fw_notify("Out of memory when allocating event\n");
return;
--
++ }
e->interrupt.type = FW_CDEV_EVENT_ISO_INTERRUPT;
e->interrupt.closure = client->iso_closure;
e->interrupt.cycle = cycle;
sizeof(e->interrupt) + header_length, NULL, 0);
}
++ static void iso_mc_callback(struct fw_iso_context *context,
++ dma_addr_t completed, void *data)
++ {
++ struct client *client = data;
++ struct iso_interrupt_mc_event *e;
++
++ e = kmalloc(sizeof(*e), GFP_ATOMIC);
++ if (e == NULL) {
++ fw_notify("Out of memory when allocating event\n");
++ return;
++ }
++ e->interrupt.type = FW_CDEV_EVENT_ISO_INTERRUPT_MULTICHANNEL;
++ e->interrupt.closure = client->iso_closure;
++ e->interrupt.completed = fw_iso_buffer_lookup(&client->buffer,
++ completed);
++ queue_event(client, &e->event, &e->interrupt,
++ sizeof(e->interrupt), NULL, 0);
++ }
++
static int ioctl_create_iso_context(struct client *client, union ioctl_arg *arg)
{
struct fw_cdev_create_iso_context *a = &arg->create_iso_context;
struct fw_iso_context *context;
++ fw_iso_callback_t cb;
-- /* We only support one context at this time. */
-- if (client->iso_context != NULL)
-- return -EBUSY;
--
-- if (a->channel > 63)
-- return -EINVAL;
++ BUILD_BUG_ON(FW_CDEV_ISO_CONTEXT_TRANSMIT != FW_ISO_CONTEXT_TRANSMIT ||
++ FW_CDEV_ISO_CONTEXT_RECEIVE != FW_ISO_CONTEXT_RECEIVE ||
++ FW_CDEV_ISO_CONTEXT_RECEIVE_MULTICHANNEL !=
++ FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL);
switch (a->type) {
-- case FW_ISO_CONTEXT_RECEIVE:
-- if (a->header_size < 4 || (a->header_size & 3))
++ case FW_ISO_CONTEXT_TRANSMIT:
++ if (a->speed > SCODE_3200 || a->channel > 63)
return -EINVAL;
++
++ cb = iso_callback;
break;
-- case FW_ISO_CONTEXT_TRANSMIT:
-- if (a->speed > SCODE_3200)
++ case FW_ISO_CONTEXT_RECEIVE:
++ if (a->header_size < 4 || (a->header_size & 3) ||
++ a->channel > 63)
return -EINVAL;
++
++ cb = iso_callback;
++ break;
++
++ case FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
++ cb = (fw_iso_callback_t)iso_mc_callback;
break;
default:
}
context = fw_iso_context_create(client->device->card, a->type,
-- a->channel, a->speed, a->header_size,
-- iso_callback, client);
++ a->channel, a->speed, a->header_size, cb, client);
if (IS_ERR(context))
return PTR_ERR(context);
++ /* We only support one context at this time. */
++ spin_lock_irq(&client->lock);
++ if (client->iso_context != NULL) {
++ spin_unlock_irq(&client->lock);
++ fw_iso_context_destroy(context);
++ return -EBUSY;
++ }
client->iso_closure = a->closure;
client->iso_context = context;
++ spin_unlock_irq(&client->lock);
-- /* We only support one context at this time. */
a->handle = 0;
return 0;
}
++ static int ioctl_set_iso_channels(struct client *client, union ioctl_arg *arg)
++ {
++ struct fw_cdev_set_iso_channels *a = &arg->set_iso_channels;
++ struct fw_iso_context *ctx = client->iso_context;
++
++ if (ctx == NULL || a->handle != 0)
++ return -EINVAL;
++
++ return fw_iso_context_set_channels(ctx, &a->channels);
++ }
++
/* Macros for decoding the iso packet control header. */
#define GET_PAYLOAD_LENGTH(v) ((v) & 0xffff)
#define GET_INTERRUPT(v) (((v) >> 16) & 0x01)
struct fw_cdev_queue_iso *a = &arg->queue_iso;
struct fw_cdev_iso_packet __user *p, *end, *next;
struct fw_iso_context *ctx = client->iso_context;
-- unsigned long payload, buffer_end, header_length;
++ unsigned long payload, buffer_end, transmit_header_bytes = 0;
u32 control;
int count;
struct {
* use the indirect payload, the iso buffer need not be mapped
* and the a->data pointer is ignored.
*/
--
payload = (unsigned long)a->data - client->vm_start;
buffer_end = client->buffer.page_count << PAGE_SHIFT;
if (a->data == 0 || client->buffer.pages == NULL ||
buffer_end = 0;
}
-- p = (struct fw_cdev_iso_packet __user *)u64_to_uptr(a->packets);
++ if (ctx->type == FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL && payload & 3)
++ return -EINVAL;
++ p = (struct fw_cdev_iso_packet __user *)u64_to_uptr(a->packets);
if (!access_ok(VERIFY_READ, p, a->size))
return -EFAULT;
u.packet.sy = GET_SY(control);
u.packet.header_length = GET_HEADER_LENGTH(control);
-- if (ctx->type == FW_ISO_CONTEXT_TRANSMIT) {
-- if (u.packet.header_length % 4 != 0)
++ switch (ctx->type) {
++ case FW_ISO_CONTEXT_TRANSMIT:
++ if (u.packet.header_length & 3)
++ return -EINVAL;
- header_length = u.packet.header_length;
- } else {
- /*
- * We require that header_length is a multiple of
- * the fixed header size, ctx->header_size.
- */
- if (ctx->header_size == 0) {
- if (u.packet.header_length > 0)
- return -EINVAL;
- } else if (u.packet.header_length == 0 ||
- u.packet.header_length % ctx->header_size != 0) {
++ transmit_header_bytes = u.packet.header_length;
++ break;
++
++ case FW_ISO_CONTEXT_RECEIVE:
++ if (u.packet.header_length == 0 ||
++ u.packet.header_length % ctx->header_size != 0)
++ return -EINVAL;
++ break;
++
++ case FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
++ if (u.packet.payload_length == 0 ||
++ u.packet.payload_length & 3)
return -EINVAL;
-- }
-- header_length = 0;
++ break;
}
next = (struct fw_cdev_iso_packet __user *)
-- &p->header[header_length / 4];
++ &p->header[transmit_header_bytes / 4];
if (next > end)
return -EINVAL;
if (__copy_from_user
-- (u.packet.header, p->header, header_length))
++ (u.packet.header, p->header, transmit_header_bytes))
return -EFAULT;
if (u.packet.skip && ctx->type == FW_ISO_CONTEXT_TRANSMIT &&
u.packet.header_length + u.packet.payload_length > 0)
{
struct fw_cdev_start_iso *a = &arg->start_iso;
++ BUILD_BUG_ON(
++ FW_CDEV_ISO_CONTEXT_MATCH_TAG0 != FW_ISO_CONTEXT_MATCH_TAG0 ||
++ FW_CDEV_ISO_CONTEXT_MATCH_TAG1 != FW_ISO_CONTEXT_MATCH_TAG1 ||
++ FW_CDEV_ISO_CONTEXT_MATCH_TAG2 != FW_ISO_CONTEXT_MATCH_TAG2 ||
++ FW_CDEV_ISO_CONTEXT_MATCH_TAG3 != FW_ISO_CONTEXT_MATCH_TAG3 ||
++ FW_CDEV_ISO_CONTEXT_MATCH_ALL_TAGS != FW_ISO_CONTEXT_MATCH_ALL_TAGS);
++
if (client->iso_context == NULL || a->handle != 0)
return -EINVAL;
local_irq_disable();
-- cycle_time = card->driver->get_cycle_time(card);
++ cycle_time = card->driver->read_csr(card, CSR_CYCLE_TIME);
switch (a->clk_id) {
case CLOCK_REALTIME: getnstimeofday(&ts); break;
return init_request(client, &request, dest, a->speed);
}
++ static void outbound_phy_packet_callback(struct fw_packet *packet,
++ struct fw_card *card, int status)
++ {
++ struct outbound_phy_packet_event *e =
++ container_of(packet, struct outbound_phy_packet_event, p);
++
++ switch (status) {
++ /* expected: */
++ case ACK_COMPLETE: e->phy_packet.rcode = RCODE_COMPLETE; break;
++ /* should never happen with PHY packets: */
++ case ACK_PENDING: e->phy_packet.rcode = RCODE_COMPLETE; break;
++ case ACK_BUSY_X:
++ case ACK_BUSY_A:
++ case ACK_BUSY_B: e->phy_packet.rcode = RCODE_BUSY; break;
++ case ACK_DATA_ERROR: e->phy_packet.rcode = RCODE_DATA_ERROR; break;
++ case ACK_TYPE_ERROR: e->phy_packet.rcode = RCODE_TYPE_ERROR; break;
++ /* stale generation; cancelled; on certain controllers: no ack */
++ default: e->phy_packet.rcode = status; break;
++ }
++ e->phy_packet.data[0] = packet->timestamp;
++
++ queue_event(e->client, &e->event, &e->phy_packet,
++ sizeof(e->phy_packet) + e->phy_packet.length, NULL, 0);
++ client_put(e->client);
++ }
++
++ static int ioctl_send_phy_packet(struct client *client, union ioctl_arg *arg)
++ {
++ struct fw_cdev_send_phy_packet *a = &arg->send_phy_packet;
++ struct fw_card *card = client->device->card;
++ struct outbound_phy_packet_event *e;
++
++ /* Access policy: Allow this ioctl only on local nodes' device files. */
++ if (!client->device->is_local)
++ return -ENOSYS;
++
++ e = kzalloc(sizeof(*e) + 4, GFP_KERNEL);
++ if (e == NULL)
++ return -ENOMEM;
++
++ client_get(client);
++ e->client = client;
++ e->p.speed = SCODE_100;
++ e->p.generation = a->generation;
++ e->p.header[0] = a->data[0];
++ e->p.header[1] = a->data[1];
++ e->p.header_length = 8;
++ e->p.callback = outbound_phy_packet_callback;
++ e->phy_packet.closure = a->closure;
++ e->phy_packet.type = FW_CDEV_EVENT_PHY_PACKET_SENT;
++ if (is_ping_packet(a->data))
++ e->phy_packet.length = 4;
++
++ card->driver->send_request(card, &e->p);
++
++ return 0;
++ }
++
++ static int ioctl_receive_phy_packets(struct client *client, union ioctl_arg *arg)
++ {
++ struct fw_cdev_receive_phy_packets *a = &arg->receive_phy_packets;
++ struct fw_card *card = client->device->card;
++
++ /* Access policy: Allow this ioctl only on local nodes' device files. */
++ if (!client->device->is_local)
++ return -ENOSYS;
++
++ spin_lock_irq(&card->lock);
++
++ list_move_tail(&client->phy_receiver_link, &card->phy_receiver_list);
++ client->phy_receiver_closure = a->closure;
++
++ spin_unlock_irq(&card->lock);
++
++ return 0;
++ }
++
++ void fw_cdev_handle_phy_packet(struct fw_card *card, struct fw_packet *p)
++ {
++ struct client *client;
++ struct inbound_phy_packet_event *e;
++ unsigned long flags;
++
++ spin_lock_irqsave(&card->lock, flags);
++
++ list_for_each_entry(client, &card->phy_receiver_list, phy_receiver_link) {
++ e = kmalloc(sizeof(*e) + 8, GFP_ATOMIC);
++ if (e == NULL) {
++ fw_notify("Out of memory when allocating event\n");
++ break;
++ }
++ e->phy_packet.closure = client->phy_receiver_closure;
++ e->phy_packet.type = FW_CDEV_EVENT_PHY_PACKET_RECEIVED;
++ e->phy_packet.rcode = RCODE_COMPLETE;
++ e->phy_packet.length = 8;
++ e->phy_packet.data[0] = p->header[1];
++ e->phy_packet.data[1] = p->header[2];
++ queue_event(client, &e->event,
++ &e->phy_packet, sizeof(e->phy_packet) + 8, NULL, 0);
++ }
++
++ spin_unlock_irqrestore(&card->lock, flags);
++ }
++
static int (* const ioctl_handlers[])(struct client *, union ioctl_arg *) = {
-- ioctl_get_info,
-- ioctl_send_request,
-- ioctl_allocate,
-- ioctl_deallocate,
-- ioctl_send_response,
-- ioctl_initiate_bus_reset,
-- ioctl_add_descriptor,
-- ioctl_remove_descriptor,
-- ioctl_create_iso_context,
-- ioctl_queue_iso,
-- ioctl_start_iso,
-- ioctl_stop_iso,
-- ioctl_get_cycle_timer,
-- ioctl_allocate_iso_resource,
-- ioctl_deallocate_iso_resource,
-- ioctl_allocate_iso_resource_once,
-- ioctl_deallocate_iso_resource_once,
-- ioctl_get_speed,
-- ioctl_send_broadcast_request,
-- ioctl_send_stream_packet,
-- ioctl_get_cycle_timer2,
++ [0x00] = ioctl_get_info,
++ [0x01] = ioctl_send_request,
++ [0x02] = ioctl_allocate,
++ [0x03] = ioctl_deallocate,
++ [0x04] = ioctl_send_response,
++ [0x05] = ioctl_initiate_bus_reset,
++ [0x06] = ioctl_add_descriptor,
++ [0x07] = ioctl_remove_descriptor,
++ [0x08] = ioctl_create_iso_context,
++ [0x09] = ioctl_queue_iso,
++ [0x0a] = ioctl_start_iso,
++ [0x0b] = ioctl_stop_iso,
++ [0x0c] = ioctl_get_cycle_timer,
++ [0x0d] = ioctl_allocate_iso_resource,
++ [0x0e] = ioctl_deallocate_iso_resource,
++ [0x0f] = ioctl_allocate_iso_resource_once,
++ [0x10] = ioctl_deallocate_iso_resource_once,
++ [0x11] = ioctl_get_speed,
++ [0x12] = ioctl_send_broadcast_request,
++ [0x13] = ioctl_send_stream_packet,
++ [0x14] = ioctl_get_cycle_timer2,
++ [0x15] = ioctl_send_phy_packet,
++ [0x16] = ioctl_receive_phy_packets,
++ [0x17] = ioctl_set_iso_channels,
};
static int dispatch_ioctl(struct client *client,
struct client *client = file->private_data;
struct event *event, *next_event;
++ spin_lock_irq(&client->device->card->lock);
++ list_del(&client->phy_receiver_link);
++ spin_unlock_irq(&client->device->card->lock);
++
mutex_lock(&client->device->client_list_mutex);
list_del(&client->link);
mutex_unlock(&client->device->client_list_mutex);
const struct file_operations fw_device_ops = {
.owner = THIS_MODULE,
+ .llseek = no_llseek,
.open = fw_device_op_open,
.read = fw_device_op_read,
.unlocked_ioctl = fw_device_op_ioctl,
- .poll = fw_device_op_poll,
- .release = fw_device_op_release,
.mmap = fw_device_op_mmap,
-
+ .release = fw_device_op_release,
+ .poll = fw_device_op_poll,
#ifdef CONFIG_COMPAT
.compat_ioctl = fw_device_op_compat_ioctl,
#endif
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/rwsem.h>
-#include <linux/semaphore.h>
+#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/workqueue.h>
}
/**
-- * fw_csr_string - reads a string from the configuration ROM
-- * @directory: e.g. root directory or unit directory
-- * @key: the key of the preceding directory entry
-- * @buf: where to put the string
-- * @size: size of @buf, in bytes
++ * fw_csr_string() - reads a string from the configuration ROM
++ * @directory: e.g. root directory or unit directory
++ * @key: the key of the preceding directory entry
++ * @buf: where to put the string
++ * @size: size of @buf, in bytes
*
* The string is taken from a minimal ASCII text descriptor leaf after
* the immediate entry with @key. The string is zero-terminated.
struct fw_driver *driver = (struct fw_driver *)dev->driver;
if (is_fw_unit(dev) && driver != NULL && driver->update != NULL) {
- down(&dev->sem);
+ device_lock(dev);
driver->update(unit);
- up(&dev->sem);
+ device_unlock(dev);
}
return 0;
goto give_up;
}
++ fw_device_cdev_update(device);
create_units(device);
/* Userspace may want to re-read attributes. */
#include <linux/firewire-constants.h>
#include <linux/kernel.h>
#include <linux/mm.h>
+#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
}
EXPORT_SYMBOL(fw_iso_buffer_destroy);
++ /* Convert DMA address to offset into virtually contiguous buffer. */
++ size_t fw_iso_buffer_lookup(struct fw_iso_buffer *buffer, dma_addr_t completed)
++ {
++ int i;
++ dma_addr_t address;
++ ssize_t offset;
++
++ for (i = 0; i < buffer->page_count; i++) {
++ address = page_private(buffer->pages[i]);
++ offset = (ssize_t)completed - (ssize_t)address;
++ if (offset > 0 && offset <= PAGE_SIZE)
++ return (i << PAGE_SHIFT) + offset;
++ }
++
++ return 0;
++ }
++
struct fw_iso_context *fw_iso_context_create(struct fw_card *card,
int type, int channel, int speed, size_t header_size,
fw_iso_callback_t callback, void *callback_data)
ctx->channel = channel;
ctx->speed = speed;
ctx->header_size = header_size;
-- ctx->callback = callback;
++ ctx->callback.sc = callback;
ctx->callback_data = callback_data;
return ctx;
void fw_iso_context_destroy(struct fw_iso_context *ctx)
{
-- struct fw_card *card = ctx->card;
--
-- card->driver->free_iso_context(ctx);
++ ctx->card->driver->free_iso_context(ctx);
}
EXPORT_SYMBOL(fw_iso_context_destroy);
}
EXPORT_SYMBOL(fw_iso_context_start);
++ int fw_iso_context_set_channels(struct fw_iso_context *ctx, u64 *channels)
++ {
++ return ctx->card->driver->set_iso_channels(ctx, channels);
++ }
++
int fw_iso_context_queue(struct fw_iso_context *ctx,
struct fw_iso_packet *packet,
struct fw_iso_buffer *buffer,
unsigned long payload)
{
-- struct fw_card *card = ctx->card;
--
-- return card->driver->queue_iso(ctx, packet, buffer, payload);
++ return ctx->card->driver->queue_iso(ctx, packet, buffer, payload);
}
EXPORT_SYMBOL(fw_iso_context_queue);
for (try = 0; try < 5; try++) {
new = allocate ? old - bandwidth : old + bandwidth;
if (new < 0 || new > BANDWIDTH_AVAILABLE_INITIAL)
- break;
+ return -EBUSY;
data[0] = cpu_to_be32(old);
data[1] = cpu_to_be32(new);
u32 channels_mask, u64 offset, bool allocate, __be32 data[2])
{
__be32 c, all, old;
- int i, retry = 5;
+ int i, ret = -EIO, retry = 5;
old = all = allocate ? cpu_to_be32(~0) : 0;
if (!(channels_mask & 1 << i))
continue;
+ ret = -EBUSY;
+
c = cpu_to_be32(1 << (31 - i));
if ((old & c) != (all & c))
continue;
/* 1394-1995 IRM, fall through to retry. */
default:
- if (retry--)
+ if (retry) {
+ retry--;
i--;
+ } else {
+ ret = -EIO;
+ }
}
}
- return -EIO;
+ return ret;
}
static void deallocate_channel(struct fw_card *card, int irm_id,
}
/**
-- * fw_iso_resource_manage - Allocate or deallocate a channel and/or bandwidth
++ * fw_iso_resource_manage() - Allocate or deallocate a channel and/or bandwidth
*
* In parameters: card, generation, channels_mask, bandwidth, allocate
* Out parameters: channel, bandwidth
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
+#include <linux/slab.h>
#include <linux/spinlock.h>
#include <asm/unaligned.h>
static void fwnet_receive_packet(struct fw_card *card, struct fw_request *r,
int tcode, int destination, int source, int generation,
-- int speed, unsigned long long offset, void *payload,
-- size_t length, void *callback_data)
++ unsigned long long offset, void *payload, size_t length,
++ void *callback_data)
{
struct fwnet_device *dev = callback_data;
int rcode;
* Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
++ #include <linux/bug.h>
#include <linux/compiler.h>
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/firewire.h>
#include <linux/firewire-constants.h>
-#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
++ #include <linux/mutex.h>
#include <linux/pci.h>
#include <linux/pci_ids.h>
+#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/string.h>
+#include <linux/time.h>
#include <asm/byteorder.h>
#include <asm/page.h>
int generation;
int request_generation; /* for timestamping incoming requests */
unsigned quirks;
++ unsigned int pri_req_max;
++ u32 bus_time;
++ bool is_root;
++ bool csr_state_setclear_abdicate;
/*
* Spinlock for accessing fw_ohci data. Never call out of
*/
spinlock_t lock;
++ struct mutex phy_reg_mutex;
++
struct ar_context ar_request_ctx;
struct ar_context ar_response_ctx;
struct context at_request_ctx;
struct context at_response_ctx;
-- u32 it_context_mask;
++ u32 it_context_mask; /* unoccupied IT contexts */
struct iso_context *it_context_list;
-- u64 ir_context_channels;
-- u32 ir_context_mask;
++ u64 ir_context_channels; /* unoccupied channels */
++ u32 ir_context_mask; /* unoccupied IR contexts */
struct iso_context *ir_context_list;
++ u64 mc_channels; /* channels in use by the multichannel IR context */
++ bool mc_allocated;
__be32 *config_rom;
dma_addr_t config_rom_bus;
static char ohci_driver_name[] = KBUILD_MODNAME;
++ #define PCI_DEVICE_ID_JMICRON_JMB38X_FW 0x2380
#define PCI_DEVICE_ID_TI_TSB12LV22 0x8009
#define QUIRK_CYCLE_TIMER 1
#define QUIRK_RESET_PACKET 2
#define QUIRK_BE_HEADERS 4
+ #define QUIRK_NO_1394A 8
++ #define QUIRK_NO_MSI 16
/* In case of multiple matches in ohci_quirks[], only the first one is used. */
static const struct {
unsigned short vendor, device, flags;
} ohci_quirks[] = {
{PCI_VENDOR_ID_TI, PCI_DEVICE_ID_TI_TSB12LV22, QUIRK_CYCLE_TIMER |
- QUIRK_RESET_PACKET},
+ QUIRK_RESET_PACKET |
+ QUIRK_NO_1394A},
{PCI_VENDOR_ID_TI, PCI_ANY_ID, QUIRK_RESET_PACKET},
{PCI_VENDOR_ID_AL, PCI_ANY_ID, QUIRK_CYCLE_TIMER},
++ {PCI_VENDOR_ID_JMICRON, PCI_DEVICE_ID_JMICRON_JMB38X_FW, QUIRK_NO_MSI},
{PCI_VENDOR_ID_NEC, PCI_ANY_ID, QUIRK_CYCLE_TIMER},
{PCI_VENDOR_ID_VIA, PCI_ANY_ID, QUIRK_CYCLE_TIMER},
{PCI_VENDOR_ID_APPLE, PCI_DEVICE_ID_APPLE_UNI_N_FW, QUIRK_BE_HEADERS},
", nonatomic cycle timer = " __stringify(QUIRK_CYCLE_TIMER)
", reset packet generation = " __stringify(QUIRK_RESET_PACKET)
", AR/selfID endianess = " __stringify(QUIRK_BE_HEADERS)
+ ", no 1394a enhancements = " __stringify(QUIRK_NO_1394A)
++ ", disable MSI = " __stringify(QUIRK_NO_MSI)
")");
- #ifdef CONFIG_FIREWIRE_OHCI_DEBUG
-
#define OHCI_PARAM_DEBUG_AT_AR 1
#define OHCI_PARAM_DEBUG_SELFIDS 2
#define OHCI_PARAM_DEBUG_IRQS 4
#define OHCI_PARAM_DEBUG_BUSRESETS 8 /* only effective before chip init */
+ #ifdef CONFIG_FIREWIRE_OHCI_DEBUG
+
static int param_debug;
module_param_named(debug, param_debug, int, 0644);
MODULE_PARM_DESC(debug, "Verbose logging (default = 0"
!(evt & OHCI1394_busReset))
return;
-- fw_notify("IRQ %08x%s%s%s%s%s%s%s%s%s%s%s%s%s\n", evt,
++ fw_notify("IRQ %08x%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n", evt,
evt & OHCI1394_selfIDComplete ? " selfID" : "",
evt & OHCI1394_RQPkt ? " AR_req" : "",
evt & OHCI1394_RSPkt ? " AR_resp" : "",
evt & OHCI1394_isochTx ? " IT" : "",
evt & OHCI1394_postedWriteErr ? " postedWriteErr" : "",
evt & OHCI1394_cycleTooLong ? " cycleTooLong" : "",
++ evt & OHCI1394_cycle64Seconds ? " cycle64Seconds" : "",
evt & OHCI1394_cycleInconsistent ? " cycleInconsistent" : "",
evt & OHCI1394_regAccessFail ? " regAccessFail" : "",
evt & OHCI1394_busReset ? " busReset" : "",
OHCI1394_RSPkt | OHCI1394_reqTxComplete |
OHCI1394_respTxComplete | OHCI1394_isochRx |
OHCI1394_isochTx | OHCI1394_postedWriteErr |
-- OHCI1394_cycleTooLong | OHCI1394_cycleInconsistent |
++ OHCI1394_cycleTooLong | OHCI1394_cycle64Seconds |
++ OHCI1394_cycleInconsistent |
OHCI1394_regAccessFail | OHCI1394_busReset)
? " ?" : "");
}
#else
- #define log_irqs(evt)
- #define log_selfids(node_id, generation, self_id_count, sid)
- #define log_ar_at_event(dir, speed, header, evt)
+ #define param_debug 0
+ static inline void log_irqs(u32 evt) {}
+ static inline void log_selfids(int node_id, int generation, int self_id_count, u32 *s) {}
+ static inline void log_ar_at_event(char dir, int speed, u32 *header, int evt) {}
#endif /* CONFIG_FIREWIRE_OHCI_DEBUG */
reg_read(ohci, OHCI1394_Version);
}
- static int ohci_update_phy_reg(struct fw_card *card, int addr,
- int clear_bits, int set_bits)
+ static int read_phy_reg(struct fw_ohci *ohci, int addr)
{
- struct fw_ohci *ohci = fw_ohci(card);
- u32 val, old;
+ u32 val;
+ int i;
reg_write(ohci, OHCI1394_PhyControl, OHCI1394_PhyControl_Read(addr));
- for (i = 0; i < 10; i++) {
- flush_writes(ohci);
- msleep(2);
- val = reg_read(ohci, OHCI1394_PhyControl);
- if ((val & OHCI1394_PhyControl_ReadDone) == 0) {
- fw_error("failed to set phy reg bits.\n");
- return -EBUSY;
++ for (i = 0; i < 3 + 100; i++) {
+ val = reg_read(ohci, OHCI1394_PhyControl);
+ if (val & OHCI1394_PhyControl_ReadDone)
+ return OHCI1394_PhyControl_ReadData(val);
+
- msleep(1);
++ /*
++ * Try a few times without waiting. Sleeping is necessary
++ * only when the link/PHY interface is busy.
++ */
++ if (i >= 3)
++ msleep(1);
}
+ fw_error("failed to read phy reg\n");
+
+ return -EBUSY;
+ }
+
+ static int write_phy_reg(const struct fw_ohci *ohci, int addr, u32 val)
+ {
+ int i;
- old = OHCI1394_PhyControl_ReadData(val);
- old = (old & ~clear_bits) | set_bits;
reg_write(ohci, OHCI1394_PhyControl,
- OHCI1394_PhyControl_Write(addr, old));
+ OHCI1394_PhyControl_Write(addr, val));
- for (i = 0; i < 100; i++) {
++ for (i = 0; i < 3 + 100; i++) {
+ val = reg_read(ohci, OHCI1394_PhyControl);
+ if (!(val & OHCI1394_PhyControl_WritePending))
+ return 0;
- msleep(1);
- return 0;
++ if (i >= 3)
++ msleep(1);
+ }
+ fw_error("failed to write phy reg\n");
+
+ return -EBUSY;
+ }
+
- static int ohci_update_phy_reg(struct fw_card *card, int addr,
- int clear_bits, int set_bits)
++ static int update_phy_reg(struct fw_ohci *ohci, int addr,
++ int clear_bits, int set_bits)
+ {
- struct fw_ohci *ohci = fw_ohci(card);
- int ret;
-
- ret = read_phy_reg(ohci, addr);
++ int ret = read_phy_reg(ohci, addr);
+ if (ret < 0)
+ return ret;
+
+ /*
+ * The interrupt status bits are cleared by writing a one bit.
+ * Avoid clearing them unless explicitly requested in set_bits.
+ */
+ if (addr == 5)
+ clear_bits |= PHY_INT_STATUS_BITS;
+
+ return write_phy_reg(ohci, addr, (ret & ~clear_bits) | set_bits);
+ }
+
+ static int read_paged_phy_reg(struct fw_ohci *ohci, int page, int addr)
+ {
+ int ret;
+
- ret = ohci_update_phy_reg(&ohci->card, 7, PHY_PAGE_SELECT, page << 5);
++ ret = update_phy_reg(ohci, 7, PHY_PAGE_SELECT, page << 5);
+ if (ret < 0)
+ return ret;
+
+ return read_phy_reg(ohci, addr);
+ }
+
++ static int ohci_read_phy_reg(struct fw_card *card, int addr)
++ {
++ struct fw_ohci *ohci = fw_ohci(card);
++ int ret;
++
++ mutex_lock(&ohci->phy_reg_mutex);
++ ret = read_phy_reg(ohci, addr);
++ mutex_unlock(&ohci->phy_reg_mutex);
++
++ return ret;
++ }
++
++ static int ohci_update_phy_reg(struct fw_card *card, int addr,
++ int clear_bits, int set_bits)
++ {
++ struct fw_ohci *ohci = fw_ohci(card);
++ int ret;
++
++ mutex_lock(&ohci->phy_reg_mutex);
++ ret = update_phy_reg(ohci, addr, clear_bits, set_bits);
++ mutex_unlock(&ohci->phy_reg_mutex);
++
++ return ret;
+ }
+
static int ar_context_add_page(struct ar_context *ctx)
{
struct device *dev = ctx->ohci->card.device;
ab->descriptor.res_count = cpu_to_le16(PAGE_SIZE - offset);
ab->descriptor.branch_address = 0;
++ wmb(); /* finish init of new descriptors before branch_address update */
ctx->last_buffer->descriptor.branch_address = cpu_to_le32(ab_bus | 1);
ctx->last_buffer->next = ab;
ctx->last_buffer = ab;
d_bus = desc->buffer_bus + (d - desc->buffer) * sizeof(*d);
desc->used += (z + extra) * sizeof(*d);
++
++ wmb(); /* finish init of new descriptors before branch_address update */
ctx->prev->branch_address = cpu_to_le32(d_bus | z);
ctx->prev = find_branch_descriptor(d, z);
header[1] = cpu_to_le32(packet->header[0]);
header[2] = cpu_to_le32(packet->header[1]);
d[0].req_count = cpu_to_le16(12);
++
++ if (is_ping_packet(packet->header))
++ d[0].control |= cpu_to_le16(DESCRIPTOR_PING);
break;
case 4:
struct fw_packet *packet, u32 csr)
{
struct fw_packet response;
- int tcode, length, ext_tcode, sel;
+ int tcode, length, ext_tcode, sel, try;
__be32 *payload, lock_old;
u32 lock_arg, lock_data;
reg_write(ohci, OHCI1394_CSRCompareData, lock_arg);
reg_write(ohci, OHCI1394_CSRControl, sel);
- if (reg_read(ohci, OHCI1394_CSRControl) & 0x80000000)
- lock_old = cpu_to_be32(reg_read(ohci, OHCI1394_CSRData));
- else
- fw_notify("swap not done yet\n");
+ for (try = 0; try < 20; try++)
+ if (reg_read(ohci, OHCI1394_CSRControl) & 0x80000000) {
+ lock_old = cpu_to_be32(reg_read(ohci,
+ OHCI1394_CSRData));
+ fw_fill_response(&response, packet->header,
+ RCODE_COMPLETE,
+ &lock_old, sizeof(lock_old));
+ goto out;
+ }
+
+ fw_error("swap not done (CSR lock timeout)\n");
+ fw_fill_response(&response, packet->header, RCODE_BUSY, NULL, 0);
- fw_fill_response(&response, packet->header,
- RCODE_COMPLETE, &lock_old, sizeof(lock_old));
out:
fw_core_handle_response(&ohci->card, &response);
}
static void handle_local_request(struct context *ctx, struct fw_packet *packet)
{
- u64 offset;
- u32 csr;
+ u64 offset, csr;
if (ctx == &ctx->ohci->at_request_ctx) {
packet->ack = ACK_PENDING;
}
++ static u32 cycle_timer_ticks(u32 cycle_timer)
++ {
++ u32 ticks;
++
++ ticks = cycle_timer & 0xfff;
++ ticks += 3072 * ((cycle_timer >> 12) & 0x1fff);
++ ticks += (3072 * 8000) * (cycle_timer >> 25);
++
++ return ticks;
++ }
++
++ /*
++ * Some controllers exhibit one or more of the following bugs when updating the
++ * iso cycle timer register:
++ * - When the lowest six bits are wrapping around to zero, a read that happens
++ * at the same time will return garbage in the lowest ten bits.
++ * - When the cycleOffset field wraps around to zero, the cycleCount field is
++ * not incremented for about 60 ns.
++ * - Occasionally, the entire register reads zero.
++ *
++ * To catch these, we read the register three times and ensure that the
++ * difference between each two consecutive reads is approximately the same, i.e.
++ * less than twice the other. Furthermore, any negative difference indicates an
++ * error. (A PCI read should take at least 20 ticks of the 24.576 MHz timer to
++ * execute, so we have enough precision to compute the ratio of the differences.)
++ */
++ static u32 get_cycle_time(struct fw_ohci *ohci)
++ {
++ u32 c0, c1, c2;
++ u32 t0, t1, t2;
++ s32 diff01, diff12;
++ int i;
++
++ c2 = reg_read(ohci, OHCI1394_IsochronousCycleTimer);
++
++ if (ohci->quirks & QUIRK_CYCLE_TIMER) {
++ i = 0;
++ c1 = c2;
++ c2 = reg_read(ohci, OHCI1394_IsochronousCycleTimer);
++ do {
++ c0 = c1;
++ c1 = c2;
++ c2 = reg_read(ohci, OHCI1394_IsochronousCycleTimer);
++ t0 = cycle_timer_ticks(c0);
++ t1 = cycle_timer_ticks(c1);
++ t2 = cycle_timer_ticks(c2);
++ diff01 = t1 - t0;
++ diff12 = t2 - t1;
++ } while ((diff01 <= 0 || diff12 <= 0 ||
++ diff01 / diff12 >= 2 || diff12 / diff01 >= 2)
++ && i++ < 20);
++ }
++
++ return c2;
++ }
++
++ /*
++ * This function has to be called at least every 64 seconds. The bus_time
++ * field stores not only the upper 25 bits of the BUS_TIME register but also
++ * the most significant bit of the cycle timer in bit 6 so that we can detect
++ * changes in this bit.
++ */
++ static u32 update_bus_time(struct fw_ohci *ohci)
++ {
++ u32 cycle_time_seconds = get_cycle_time(ohci) >> 25;
++
++ if ((ohci->bus_time & 0x40) != (cycle_time_seconds & 0x40))
++ ohci->bus_time += 0x40;
++
++ return ohci->bus_time | cycle_time_seconds;
++ }
++
static void bus_reset_tasklet(unsigned long data)
{
struct fw_ohci *ohci = (struct fw_ohci *)data;
unsigned long flags;
void *free_rom = NULL;
dma_addr_t free_rom_bus = 0;
++ bool is_new_root;
reg = reg_read(ohci, OHCI1394_NodeID);
if (!(reg & OHCI1394_NodeID_idValid)) {
ohci->node_id = reg & (OHCI1394_NodeID_busNumber |
OHCI1394_NodeID_nodeNumber);
++ is_new_root = (reg & OHCI1394_NodeID_root) != 0;
++ if (!(ohci->is_root && is_new_root))
++ reg_write(ohci, OHCI1394_LinkControlSet,
++ OHCI1394_LinkControl_cycleMaster);
++ ohci->is_root = is_new_root;
++
reg = reg_read(ohci, OHCI1394_SelfIDCount);
if (reg & OHCI1394_SelfIDCount_selfIDError) {
fw_notify("inconsistent self IDs\n");
* was set up before this reset, the old one is now no longer
* in use and we can free it. Update the config rom pointers
* to point to the current config rom and clear the
-- * next_config_rom pointer so a new udpate can take place.
++ * next_config_rom pointer so a new update can take place.
*/
if (ohci->next_config_rom != NULL) {
self_id_count, ohci->self_id_buffer);
fw_core_handle_bus_reset(&ohci->card, ohci->node_id, generation,
-- self_id_count, ohci->self_id_buffer);
++ self_id_count, ohci->self_id_buffer,
++ ohci->csr_state_setclear_abdicate);
++ ohci->csr_state_setclear_abdicate = false;
}
static irqreturn_t irq_handler(int irq, void *data)
fw_notify("isochronous cycle inconsistent\n");
}
++ if (event & OHCI1394_cycle64Seconds) {
++ spin_lock(&ohci->lock);
++ update_bus_time(ohci);
++ spin_unlock(&ohci->lock);
++ }
++
return IRQ_HANDLED;
}
memset(&dest[length], 0, CONFIG_ROM_SIZE - size);
}
- ret = ohci_update_phy_reg(&ohci->card, 5, clear, set);
+ static int configure_1394a_enhancements(struct fw_ohci *ohci)
+ {
+ bool enable_1394a;
+ int ret, clear, set, offset;
+
+ /* Check if the driver should configure link and PHY. */
+ if (!(reg_read(ohci, OHCI1394_HCControlSet) &
+ OHCI1394_HCControl_programPhyEnable))
+ return 0;
+
+ /* Paranoia: check whether the PHY supports 1394a, too. */
+ enable_1394a = false;
+ ret = read_phy_reg(ohci, 2);
+ if (ret < 0)
+ return ret;
+ if ((ret & PHY_EXTENDED_REGISTERS) == PHY_EXTENDED_REGISTERS) {
+ ret = read_paged_phy_reg(ohci, 1, 8);
+ if (ret < 0)
+ return ret;
+ if (ret >= 1)
+ enable_1394a = true;
+ }
+
+ if (ohci->quirks & QUIRK_NO_1394A)
+ enable_1394a = false;
+
+ /* Configure PHY and link consistently. */
+ if (enable_1394a) {
+ clear = 0;
+ set = PHY_ENABLE_ACCEL | PHY_ENABLE_MULTI;
+ } else {
+ clear = PHY_ENABLE_ACCEL | PHY_ENABLE_MULTI;
+ set = 0;
+ }
++ ret = update_phy_reg(ohci, 5, clear, set);
+ if (ret < 0)
+ return ret;
+
+ if (enable_1394a)
+ offset = OHCI1394_HCControlSet;
+ else
+ offset = OHCI1394_HCControlClear;
+ reg_write(ohci, offset, OHCI1394_HCControl_aPhyEnhanceEnable);
+
+ /* Clean up: configuration has been taken care of. */
+ reg_write(ohci, OHCI1394_HCControlClear,
+ OHCI1394_HCControl_programPhyEnable);
+
+ return 0;
+ }
+
static int ohci_enable(struct fw_card *card,
const __be32 *config_rom, size_t length)
{
struct fw_ohci *ohci = fw_ohci(card);
struct pci_dev *dev = to_pci_dev(card->device);
-- u32 lps;
- int i;
++ u32 lps, seconds, version, irqs;
+ int i, ret;
if (software_reset(ohci)) {
fw_error("Failed to reset ohci card.\n");
OHCI1394_HCControl_noByteSwapData);
reg_write(ohci, OHCI1394_SelfIDBuffer, ohci->self_id_bus);
-- reg_write(ohci, OHCI1394_LinkControlClear,
-- OHCI1394_LinkControl_rcvPhyPkt);
reg_write(ohci, OHCI1394_LinkControlSet,
OHCI1394_LinkControl_rcvSelfID |
++ OHCI1394_LinkControl_rcvPhyPkt |
OHCI1394_LinkControl_cycleTimerEnable |
OHCI1394_LinkControl_cycleMaster);
reg_write(ohci, OHCI1394_ATRetries,
OHCI1394_MAX_AT_REQ_RETRIES |
(OHCI1394_MAX_AT_RESP_RETRIES << 4) |
-- (OHCI1394_MAX_PHYS_RESP_RETRIES << 8));
++ (OHCI1394_MAX_PHYS_RESP_RETRIES << 8) |
++ (200 << 16));
++
++ seconds = lower_32_bits(get_seconds());
++ reg_write(ohci, OHCI1394_IsochronousCycleTimer, seconds << 25);
++ ohci->bus_time = seconds & ~0x3f;
++
++ version = reg_read(ohci, OHCI1394_Version) & 0x00ff00ff;
++ if (version >= OHCI_VERSION_1_1) {
++ reg_write(ohci, OHCI1394_InitialChannelsAvailableHi,
++ 0xfffffffe);
++ card->broadcast_channel_auto_allocated = true;
++ }
++
++ /* Get implemented bits of the priority arbitration request counter. */
++ reg_write(ohci, OHCI1394_FairnessControl, 0x3f);
++ ohci->pri_req_max = reg_read(ohci, OHCI1394_FairnessControl) & 0x3f;
++ reg_write(ohci, OHCI1394_FairnessControl, 0);
++ card->priority_budget_implemented = ohci->pri_req_max != 0;
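The three FairnessControl accesses above use a common hardware-probe pattern: write all ones into a field, read back which bits the controller actually latched, then restore zero. A minimal sketch of that pattern against a toy register model (`struct toy_reg` and `implemented_mask` are illustrative, not driver API):

```c
#include <stdint.h>

/* Toy register that implements only the bits in `implemented_mask`
 * (an assumption of the example, not a driver structure). */
struct toy_reg {
	uint32_t value;
	uint32_t implemented;
};

void toy_write(struct toy_reg *r, uint32_t v)
{
	r->value = v & r->implemented;  /* unimplemented bits read as 0 */
}

uint32_t toy_read(const struct toy_reg *r)
{
	return r->value;
}

/* Same probe as above: write all ones into the 6-bit field, read back
 * which bits stuck, then restore the register to zero. */
uint32_t probe_implemented_bits(uint32_t implemented_mask)
{
	struct toy_reg r = { 0, implemented_mask };
	uint32_t max;

	toy_write(&r, 0x3f);
	max = toy_read(&r) & 0x3f;
	toy_write(&r, 0);
	return max;
}
```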
ar_context_run(&ohci->ar_request_ctx);
ar_context_run(&ohci->ar_response_ctx);
reg_write(ohci, OHCI1394_PhyUpperBound, 0x00010000);
reg_write(ohci, OHCI1394_IntEventClear, ~0);
reg_write(ohci, OHCI1394_IntMaskClear, ~0);
-- reg_write(ohci, OHCI1394_IntMaskSet,
-- OHCI1394_selfIDComplete |
-- OHCI1394_RQPkt | OHCI1394_RSPkt |
-- OHCI1394_reqTxComplete | OHCI1394_respTxComplete |
-- OHCI1394_isochRx | OHCI1394_isochTx |
-- OHCI1394_postedWriteErr | OHCI1394_cycleTooLong |
-- OHCI1394_cycleInconsistent | OHCI1394_regAccessFail |
-- OHCI1394_masterIntEnable);
-- if (param_debug & OHCI_PARAM_DEBUG_BUSRESETS)
-- reg_write(ohci, OHCI1394_IntMaskSet, OHCI1394_busReset);
+
+ ret = configure_1394a_enhancements(ohci);
+ if (ret < 0)
+ return ret;
/* Activate link_on bit and contender bit in our self ID packets.*/
- if (ohci_update_phy_reg(card, 4, 0,
- PHY_LINK_ACTIVE | PHY_CONTENDER) < 0)
- return -EIO;
+ ret = ohci_update_phy_reg(card, 4, 0, PHY_LINK_ACTIVE | PHY_CONTENDER);
+ if (ret < 0)
+ return ret;
/*
* When the link is not yet enabled, the atomic config rom
reg_write(ohci, OHCI1394_AsReqFilterHiSet, 0x80000000);
++ if (!(ohci->quirks & QUIRK_NO_MSI))
++ pci_enable_msi(dev);
if (request_irq(dev->irq, irq_handler,
-- IRQF_SHARED, ohci_driver_name, ohci)) {
-- fw_error("Failed to allocate shared interrupt %d.\n",
-- dev->irq);
++ pci_dev_msi_enabled(dev) ? 0 : IRQF_SHARED,
++ ohci_driver_name, ohci)) {
++ fw_error("Failed to allocate interrupt %d.\n", dev->irq);
++ pci_disable_msi(dev);
dma_free_coherent(ohci->card.device, CONFIG_ROM_SIZE,
ohci->config_rom, ohci->config_rom_bus);
return -EIO;
}
++ irqs = OHCI1394_reqTxComplete | OHCI1394_respTxComplete |
++ OHCI1394_RQPkt | OHCI1394_RSPkt |
++ OHCI1394_isochTx | OHCI1394_isochRx |
++ OHCI1394_postedWriteErr |
++ OHCI1394_selfIDComplete |
++ OHCI1394_regAccessFail |
++ OHCI1394_cycle64Seconds |
++ OHCI1394_cycleInconsistent | OHCI1394_cycleTooLong |
++ OHCI1394_masterIntEnable;
++ if (param_debug & OHCI_PARAM_DEBUG_BUSRESETS)
++ irqs |= OHCI1394_busReset;
++ reg_write(ohci, OHCI1394_IntMaskSet, irqs);
++
reg_write(ohci, OHCI1394_HCControlSet,
OHCI1394_HCControl_linkEnable |
OHCI1394_HCControl_BIBimageValid);
flush_writes(ohci);
-- /*
-- * We are ready to go, initiate bus reset to finish the
-- * initialization.
-- */
--
-- fw_core_initiate_bus_reset(&ohci->card, 1);
++ /* We are ready to go, reset bus to finish initialization. */
++ fw_schedule_bus_reset(&ohci->card, false, true);
return 0;
}
* takes effect.
*/
if (ret == 0)
-- fw_core_initiate_bus_reset(&ohci->card, 1);
++ fw_schedule_bus_reset(&ohci->card, true, true);
else
dma_free_coherent(ohci->card.device, CONFIG_ROM_SIZE,
next_config_rom, next_config_rom_bus);
#endif /* CONFIG_FIREWIRE_OHCI_REMOTE_DMA */
}
-- static u32 cycle_timer_ticks(u32 cycle_timer)
++ static u32 ohci_read_csr(struct fw_card *card, int csr_offset)
{
-- u32 ticks;
++ struct fw_ohci *ohci = fw_ohci(card);
++ unsigned long flags;
++ u32 value;
++
++ switch (csr_offset) {
++ case CSR_STATE_CLEAR:
++ case CSR_STATE_SET:
++ if (ohci->is_root &&
++ (reg_read(ohci, OHCI1394_LinkControlSet) &
++ OHCI1394_LinkControl_cycleMaster))
++ value = CSR_STATE_BIT_CMSTR;
++ else
++ value = 0;
-- ticks = cycle_timer & 0xfff;
-- ticks += 3072 * ((cycle_timer >> 12) & 0x1fff);
-- ticks += (3072 * 8000) * (cycle_timer >> 25);
-- return ticks;
++ if (ohci->csr_state_setclear_abdicate)
++ value |= CSR_STATE_BIT_ABDICATE;
++ return value;
++ case CSR_NODE_IDS:
++ return reg_read(ohci, OHCI1394_NodeID) << 16;
++
++ case CSR_CYCLE_TIME:
++ return get_cycle_time(ohci);
++
++ case CSR_BUS_TIME:
++ /*
++ * We might be called just after the cycle timer has wrapped
++ * around but just before the cycle64Seconds handler, so we
++ * better check here, too, if the bus time needs to be updated.
++ */
++ spin_lock_irqsave(&ohci->lock, flags);
++ value = update_bus_time(ohci);
++ spin_unlock_irqrestore(&ohci->lock, flags);
++ return value;
++
++ case CSR_BUSY_TIMEOUT:
++ value = reg_read(ohci, OHCI1394_ATRetries);
++ return (value >> 4) & 0x0ffff00f;
++
++ case CSR_PRIORITY_BUDGET:
++ return (reg_read(ohci, OHCI1394_FairnessControl) & 0x3f) |
++ (ohci->pri_req_max << 8);
++
++ default:
++ WARN_ON(1);
++ return 0;
++ }
}
-- /*
-- * Some controllers exhibit one or more of the following bugs when updating the
-- * iso cycle timer register:
-- * - When the lowest six bits are wrapping around to zero, a read that happens
-- * at the same time will return garbage in the lowest ten bits.
-- * - When the cycleOffset field wraps around to zero, the cycleCount field is
-- * not incremented for about 60 ns.
-- * - Occasionally, the entire register reads zero.
-- *
-- * To catch these, we read the register three times and ensure that the
-- * difference between each two consecutive reads is approximately the same, i.e.
-- * less than twice the other. Furthermore, any negative difference indicates an
-- * error. (A PCI read should take at least 20 ticks of the 24.576 MHz timer to
-- * execute, so we have enough precision to compute the ratio of the differences.)
-- */
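The comment above compares consecutive register reads as tick counts; the helper it relies on, cycle_timer_ticks() (removed elsewhere in this patch), converts the register's seconds/cycleCount/cycleOffset fields into ticks of the 24.576 MHz cycle clock. Reproduced here as a standalone sketch:

```c
#include <stdint.h>

/* Convert the OHCI IsochronousCycleTimer register layout
 * (seconds:7 | cycleCount:13 | cycleOffset:12) into ticks of the
 * 24.576 MHz cycle clock: 3072 ticks per cycle, 8000 cycles/second. */
uint32_t cycle_timer_ticks(uint32_t cycle_timer)
{
	uint32_t ticks;

	ticks  = cycle_timer & 0xfff;                   /* cycleOffset */
	ticks += 3072 * ((cycle_timer >> 12) & 0x1fff); /* cycleCount  */
	ticks += (3072 * 8000u) * (cycle_timer >> 25);  /* seconds     */

	return ticks;
}
```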
-- static u32 ohci_get_cycle_time(struct fw_card *card)
++ static void ohci_write_csr(struct fw_card *card, int csr_offset, u32 value)
{
struct fw_ohci *ohci = fw_ohci(card);
-- u32 c0, c1, c2;
-- u32 t0, t1, t2;
-- s32 diff01, diff12;
-- int i;
++ unsigned long flags;
-- c2 = reg_read(ohci, OHCI1394_IsochronousCycleTimer);
++ switch (csr_offset) {
++ case CSR_STATE_CLEAR:
++ if ((value & CSR_STATE_BIT_CMSTR) && ohci->is_root) {
++ reg_write(ohci, OHCI1394_LinkControlClear,
++ OHCI1394_LinkControl_cycleMaster);
++ flush_writes(ohci);
++ }
++ if (value & CSR_STATE_BIT_ABDICATE)
++ ohci->csr_state_setclear_abdicate = false;
++ break;
-- if (ohci->quirks & QUIRK_CYCLE_TIMER) {
-- i = 0;
-- c1 = c2;
-- c2 = reg_read(ohci, OHCI1394_IsochronousCycleTimer);
-- do {
-- c0 = c1;
-- c1 = c2;
-- c2 = reg_read(ohci, OHCI1394_IsochronousCycleTimer);
-- t0 = cycle_timer_ticks(c0);
-- t1 = cycle_timer_ticks(c1);
-- t2 = cycle_timer_ticks(c2);
-- diff01 = t1 - t0;
-- diff12 = t2 - t1;
-- } while ((diff01 <= 0 || diff12 <= 0 ||
-- diff01 / diff12 >= 2 || diff12 / diff01 >= 2)
-- && i++ < 20);
-- }
++ case CSR_STATE_SET:
++ if ((value & CSR_STATE_BIT_CMSTR) && ohci->is_root) {
++ reg_write(ohci, OHCI1394_LinkControlSet,
++ OHCI1394_LinkControl_cycleMaster);
++ flush_writes(ohci);
++ }
++ if (value & CSR_STATE_BIT_ABDICATE)
++ ohci->csr_state_setclear_abdicate = true;
++ break;
-- return c2;
++ case CSR_NODE_IDS:
++ reg_write(ohci, OHCI1394_NodeID, value >> 16);
++ flush_writes(ohci);
++ break;
++
++ case CSR_CYCLE_TIME:
++ reg_write(ohci, OHCI1394_IsochronousCycleTimer, value);
++ reg_write(ohci, OHCI1394_IntEventSet,
++ OHCI1394_cycleInconsistent);
++ flush_writes(ohci);
++ break;
++
++ case CSR_BUS_TIME:
++ spin_lock_irqsave(&ohci->lock, flags);
++ ohci->bus_time = (ohci->bus_time & 0x7f) | (value & ~0x7f);
++ spin_unlock_irqrestore(&ohci->lock, flags);
++ break;
++
++ case CSR_BUSY_TIMEOUT:
++ value = (value & 0xf) | ((value & 0xf) << 4) |
++ ((value & 0xf) << 8) | ((value & 0x0ffff000) << 4);
++ reg_write(ohci, OHCI1394_ATRetries, value);
++ flush_writes(ohci);
++ break;
++
++ case CSR_PRIORITY_BUDGET:
++ reg_write(ohci, OHCI1394_FairnessControl, value & 0x3f);
++ flush_writes(ohci);
++ break;
++
++ default:
++ WARN_ON(1);
++ break;
++ }
}
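The CSR_BUSY_TIMEOUT cases above convert between the CSR layout (retry limit in the low nibble, cycle limit at bits 12..27) and the ATRetries register, which wants the retry limit replicated into three fields. The two mappings in isolation, assuming the bit layout implied by the masks:

```c
#include <stdint.h>

/* CSR -> ATRetries: replicate the 4-bit retry limit into the three
 * retry fields and shift the cycle-limit field up past them,
 * mirroring the write side above. */
uint32_t busy_timeout_to_at_retries(uint32_t csr)
{
	return (csr & 0xf) | ((csr & 0xf) << 4) |
	       ((csr & 0xf) << 8) | ((csr & 0x0ffff000) << 4);
}

/* ATRetries -> CSR: the read-side extraction; inverse of the above. */
uint32_t at_retries_to_busy_timeout(uint32_t reg)
{
	return (reg >> 4) & 0x0ffff00f;
}
```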
static void copy_iso_headers(struct iso_context *ctx, void *p)
__le32 *ir_header;
void *p;
-- for (pd = d; pd <= last; pd++) {
++ for (pd = d; pd <= last; pd++)
if (pd->transfer_status)
break;
-- }
if (pd > last)
/* Descriptor(s) not done yet, stop iteration */
return 0;
if (le16_to_cpu(last->control) & DESCRIPTOR_IRQ_ALWAYS) {
ir_header = (__le32 *) p;
-- ctx->base.callback(&ctx->base,
-- le32_to_cpu(ir_header[0]) & 0xffff,
-- ctx->header_length, ctx->header,
-- ctx->base.callback_data);
++ ctx->base.callback.sc(&ctx->base,
++ le32_to_cpu(ir_header[0]) & 0xffff,
++ ctx->header_length, ctx->header,
++ ctx->base.callback_data);
ctx->header_length = 0;
}
return 1;
}
++ /* d == last because each descriptor block is only a single descriptor. */
++ static int handle_ir_buffer_fill(struct context *context,
++ struct descriptor *d,
++ struct descriptor *last)
++ {
++ struct iso_context *ctx =
++ container_of(context, struct iso_context, context);
++
++ if (!last->transfer_status)
++ /* Descriptor(s) not done yet, stop iteration */
++ return 0;
++
++ if (le16_to_cpu(last->control) & DESCRIPTOR_IRQ_ALWAYS)
++ ctx->base.callback.mc(&ctx->base,
++ le32_to_cpu(last->data_address) +
++ le16_to_cpu(last->req_count) -
++ le16_to_cpu(last->res_count),
++ ctx->base.callback_data);
++
++ return 1;
++ }
++
static int handle_it_packet(struct context *context,
struct descriptor *d,
struct descriptor *last)
ctx->header_length += 4;
}
if (le16_to_cpu(last->control) & DESCRIPTOR_IRQ_ALWAYS) {
-- ctx->base.callback(&ctx->base, le16_to_cpu(last->res_count),
-- ctx->header_length, ctx->header,
-- ctx->base.callback_data);
++ ctx->base.callback.sc(&ctx->base, le16_to_cpu(last->res_count),
++ ctx->header_length, ctx->header,
++ ctx->base.callback_data);
ctx->header_length = 0;
}
return 1;
}
++ static void set_multichannel_mask(struct fw_ohci *ohci, u64 channels)
++ {
++ u32 hi = channels >> 32, lo = channels;
++
++ reg_write(ohci, OHCI1394_IRMultiChanMaskHiClear, ~hi);
++ reg_write(ohci, OHCI1394_IRMultiChanMaskLoClear, ~lo);
++ reg_write(ohci, OHCI1394_IRMultiChanMaskHiSet, hi);
++ reg_write(ohci, OHCI1394_IRMultiChanMaskLoSet, lo);
++ mmiowb();
++ ohci->mc_channels = channels;
++ }
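set_multichannel_mask() relies on OHCI's set/clear register pairs: a write to the Set address ORs bits in, a write to the Clear address removes them, so clearing ~mask and then setting mask programs an exact value. A sketch of that idiom against a toy register (`struct setclear_reg` is illustrative, not driver API):

```c
#include <stdint.h>

struct setclear_reg {
	uint32_t bits;
};

void reg_set(struct setclear_reg *r, uint32_t mask)   { r->bits |= mask; }
void reg_clear(struct setclear_reg *r, uint32_t mask) { r->bits &= ~mask; }

/* Program an exact 64-bit channel mask into a hi/lo register pair the
 * same way set_multichannel_mask() does: clear the unwanted bits first,
 * then set the wanted ones. */
void program_channels(struct setclear_reg *hi_reg,
		      struct setclear_reg *lo_reg, uint64_t channels)
{
	uint32_t hi = channels >> 32, lo = (uint32_t)channels;

	reg_clear(hi_reg, ~hi);
	reg_clear(lo_reg, ~lo);
	reg_set(hi_reg, hi);
	reg_set(lo_reg, lo);
}
```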
++
static struct fw_iso_context *ohci_allocate_iso_context(struct fw_card *card,
int type, int channel, size_t header_size)
{
struct fw_ohci *ohci = fw_ohci(card);
-- struct iso_context *ctx, *list;
-- descriptor_callback_t callback;
-- u64 *channels, dont_care = ~0ULL;
-- u32 *mask, regs;
++ struct iso_context *uninitialized_var(ctx);
++ descriptor_callback_t uninitialized_var(callback);
++ u64 *uninitialized_var(channels);
++ u32 *uninitialized_var(mask), uninitialized_var(regs);
unsigned long flags;
-- int index, ret = -ENOMEM;
++ int index, ret = -EBUSY;
+
- if (type == FW_ISO_CONTEXT_TRANSMIT) {
- channels = &dont_care;
- mask = &ohci->it_context_mask;
- list = ohci->it_context_list;
++ spin_lock_irqsave(&ohci->lock, flags);
+
++ switch (type) {
++ case FW_ISO_CONTEXT_TRANSMIT:
++ mask = &ohci->it_context_mask;
callback = handle_it_packet;
-- } else {
++ index = ffs(*mask) - 1;
++ if (index >= 0) {
++ *mask &= ~(1 << index);
++ regs = OHCI1394_IsoXmitContextBase(index);
++ ctx = &ohci->it_context_list[index];
++ }
++ break;
++
++ case FW_ISO_CONTEXT_RECEIVE:
channels = &ohci->ir_context_channels;
-- mask = &ohci->ir_context_mask;
-- list = ohci->ir_context_list;
++ mask = &ohci->ir_context_mask;
callback = handle_ir_packet_per_buffer;
-- }
++ index = *channels & 1ULL << channel ? ffs(*mask) - 1 : -1;
++ if (index >= 0) {
++ *channels &= ~(1ULL << channel);
++ *mask &= ~(1 << index);
++ regs = OHCI1394_IsoRcvContextBase(index);
++ ctx = &ohci->ir_context_list[index];
++ }
++ break;
-- spin_lock_irqsave(&ohci->lock, flags);
-- index = *channels & 1ULL << channel ? ffs(*mask) - 1 : -1;
-- if (index >= 0) {
-- *channels &= ~(1ULL << channel);
-- *mask &= ~(1 << index);
++ case FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
++ mask = &ohci->ir_context_mask;
++ callback = handle_ir_buffer_fill;
++ index = !ohci->mc_allocated ? ffs(*mask) - 1 : -1;
++ if (index >= 0) {
++ ohci->mc_allocated = true;
++ *mask &= ~(1 << index);
++ regs = OHCI1394_IsoRcvContextBase(index);
++ ctx = &ohci->ir_context_list[index];
++ }
++ break;
++
++ default:
++ index = -1;
++ ret = -ENOSYS;
}
++
spin_unlock_irqrestore(&ohci->lock, flags);
if (index < 0)
-- return ERR_PTR(-EBUSY);
++ return ERR_PTR(ret);
- if (type == FW_ISO_CONTEXT_TRANSMIT)
- regs = OHCI1394_IsoXmitContextBase(index);
- else
- regs = OHCI1394_IsoRcvContextBase(index);
-
-- ctx = &list[index];
memset(ctx, 0, sizeof(*ctx));
ctx->header_length = 0;
ctx->header = (void *) __get_free_page(GFP_KERNEL);
-- if (ctx->header == NULL)
++ if (ctx->header == NULL) {
++ ret = -ENOMEM;
goto out;
--
++ }
ret = context_init(&ctx->context, ohci, regs, callback);
if (ret < 0)
goto out_with_header;
++ if (type == FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL)
++ set_multichannel_mask(ohci, 0);
++
return &ctx->base;
out_with_header:
free_page((unsigned long)ctx->header);
out:
spin_lock_irqsave(&ohci->lock, flags);
++
++ switch (type) {
++ case FW_ISO_CONTEXT_RECEIVE:
++ *channels |= 1ULL << channel;
++ break;
++
++ case FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
++ ohci->mc_allocated = false;
++ break;
++ }
*mask |= 1 << index;
++
spin_unlock_irqrestore(&ohci->lock, flags);
return ERR_PTR(ret);
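Each case in the allocation switch finds a free context with ffs(*mask) - 1 and clears that bit to claim the slot; the free path sets it again. The bitmask bookkeeping in isolation (a sketch, not driver code):

```c
#include <stdint.h>
#include <strings.h>  /* ffs() */

/* Claim the lowest free slot in a context bitmask; returns -1 when no
 * slot is free, mirroring the "index < 0" checks above. */
int alloc_context_index(uint32_t *mask)
{
	int index = ffs(*mask) - 1;  /* ffs(0) == 0, hence -1 */

	if (index >= 0)
		*mask &= ~(1u << index);
	return index;
}

/* Release path: just set the bit again. */
void free_context_index(uint32_t *mask, int index)
{
	*mask |= 1u << index;
}
```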
{
struct iso_context *ctx = container_of(base, struct iso_context, base);
struct fw_ohci *ohci = ctx->context.ohci;
-- u32 control, match;
++ u32 control = IR_CONTEXT_ISOCH_HEADER, match;
int index;
-- if (ctx->base.type == FW_ISO_CONTEXT_TRANSMIT) {
++ switch (ctx->base.type) {
++ case FW_ISO_CONTEXT_TRANSMIT:
index = ctx - ohci->it_context_list;
match = 0;
if (cycle >= 0)
reg_write(ohci, OHCI1394_IsoXmitIntEventClear, 1 << index);
reg_write(ohci, OHCI1394_IsoXmitIntMaskSet, 1 << index);
context_run(&ctx->context, match);
-- } else {
++ break;
++
++ case FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
++ control |= IR_CONTEXT_BUFFER_FILL|IR_CONTEXT_MULTI_CHANNEL_MODE;
++ /* fall through */
++ case FW_ISO_CONTEXT_RECEIVE:
index = ctx - ohci->ir_context_list;
-- control = IR_CONTEXT_ISOCH_HEADER;
match = (tags << 28) | (sync << 8) | ctx->base.channel;
if (cycle >= 0) {
match |= (cycle & 0x07fff) << 12;
reg_write(ohci, OHCI1394_IsoRecvIntMaskSet, 1 << index);
reg_write(ohci, CONTEXT_MATCH(ctx->context.regs), match);
context_run(&ctx->context, control);
++ break;
}
return 0;
struct iso_context *ctx = container_of(base, struct iso_context, base);
int index;
-- if (ctx->base.type == FW_ISO_CONTEXT_TRANSMIT) {
++ switch (ctx->base.type) {
++ case FW_ISO_CONTEXT_TRANSMIT:
index = ctx - ohci->it_context_list;
reg_write(ohci, OHCI1394_IsoXmitIntMaskClear, 1 << index);
-- } else {
++ break;
++
++ case FW_ISO_CONTEXT_RECEIVE:
++ case FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
index = ctx - ohci->ir_context_list;
reg_write(ohci, OHCI1394_IsoRecvIntMaskClear, 1 << index);
++ break;
}
flush_writes(ohci);
context_stop(&ctx->context);
spin_lock_irqsave(&ohci->lock, flags);
-- if (ctx->base.type == FW_ISO_CONTEXT_TRANSMIT) {
++ switch (base->type) {
++ case FW_ISO_CONTEXT_TRANSMIT:
index = ctx - ohci->it_context_list;
ohci->it_context_mask |= 1 << index;
-- } else {
++ break;
++
++ case FW_ISO_CONTEXT_RECEIVE:
index = ctx - ohci->ir_context_list;
ohci->ir_context_mask |= 1 << index;
ohci->ir_context_channels |= 1ULL << base->channel;
++ break;
++
++ case FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
++ index = ctx - ohci->ir_context_list;
++ ohci->ir_context_mask |= 1 << index;
++ ohci->ir_context_channels |= ohci->mc_channels;
++ ohci->mc_channels = 0;
++ ohci->mc_allocated = false;
++ break;
}
spin_unlock_irqrestore(&ohci->lock, flags);
}
-- static int ohci_queue_iso_transmit(struct fw_iso_context *base,
-- struct fw_iso_packet *packet,
-- struct fw_iso_buffer *buffer,
-- unsigned long payload)
++ static int ohci_set_iso_channels(struct fw_iso_context *base, u64 *channels)
++ {
++ struct fw_ohci *ohci = fw_ohci(base->card);
++ unsigned long flags;
++ int ret;
++
++ switch (base->type) {
++ case FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
++
++ spin_lock_irqsave(&ohci->lock, flags);
++
++ /* Don't allow multichannel to grab other contexts' channels. */
++ if (~ohci->ir_context_channels & ~ohci->mc_channels & *channels) {
++ *channels = ohci->ir_context_channels;
++ ret = -EBUSY;
++ } else {
++ set_multichannel_mask(ohci, *channels);
++ ret = 0;
++ }
++
++ spin_unlock_irqrestore(&ohci->lock, flags);
++
++ break;
++ default:
++ ret = -EINVAL;
++ }
++
++ return ret;
++ }
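The conflict test above reads: a requested channel is acceptable only if it is still in the free pool (ir_context_channels) or already owned by the multichannel context (mc_channels). The predicate in isolation, as a sketch:

```c
#include <stdbool.h>
#include <stdint.h>

/* True iff every requested channel is either still free or already
 * owned by the multichannel context -- the same expression as the
 * driver's "~ir & ~mc & requested" busy check, inverted. */
bool channels_available(uint64_t ir_context_channels,
			uint64_t mc_channels, uint64_t requested)
{
	return !(~ir_context_channels & ~mc_channels & requested);
}
```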
++
++ static int queue_iso_transmit(struct iso_context *ctx,
++ struct fw_iso_packet *packet,
++ struct fw_iso_buffer *buffer,
++ unsigned long payload)
{
-- struct iso_context *ctx = container_of(base, struct iso_context, base);
struct descriptor *d, *last, *pd;
struct fw_iso_packet *p;
__le32 *header;
return 0;
}
-- static int ohci_queue_iso_receive_packet_per_buffer(struct fw_iso_context *base,
-- struct fw_iso_packet *packet,
-- struct fw_iso_buffer *buffer,
-- unsigned long payload)
++ static int queue_iso_packet_per_buffer(struct iso_context *ctx,
++ struct fw_iso_packet *packet,
++ struct fw_iso_buffer *buffer,
++ unsigned long payload)
{
-- struct iso_context *ctx = container_of(base, struct iso_context, base);
struct descriptor *d, *pd;
-- struct fw_iso_packet *p = packet;
dma_addr_t d_bus, page_bus;
u32 z, header_z, rest;
int i, j, length;
* The OHCI controller puts the isochronous header and trailer in the
* buffer, so we need at least 8 bytes.
*/
-- packet_count = p->header_length / ctx->base.header_size;
++ packet_count = packet->header_length / ctx->base.header_size;
header_size = max(ctx->base.header_size, (size_t)8);
/* Get header size in number of descriptors. */
header_z = DIV_ROUND_UP(header_size, sizeof(*d));
page = payload >> PAGE_SHIFT;
offset = payload & ~PAGE_MASK;
-- payload_per_buffer = p->payload_length / packet_count;
++ payload_per_buffer = packet->payload_length / packet_count;
for (i = 0; i < packet_count; i++) {
/* d points to the header descriptor */
d->control = cpu_to_le16(DESCRIPTOR_STATUS |
DESCRIPTOR_INPUT_MORE);
-- if (p->skip && i == 0)
++ if (packet->skip && i == 0)
d->control |= cpu_to_le16(DESCRIPTOR_WAIT);
d->req_count = cpu_to_le16(header_size);
d->res_count = d->req_count;
pd->control = cpu_to_le16(DESCRIPTOR_STATUS |
DESCRIPTOR_INPUT_LAST |
DESCRIPTOR_BRANCH_ALWAYS);
-- if (p->interrupt && i == packet_count - 1)
++ if (packet->interrupt && i == packet_count - 1)
pd->control |= cpu_to_le16(DESCRIPTOR_IRQ_ALWAYS);
context_append(&ctx->context, d, z, header_z);
return 0;
}
++ static int queue_iso_buffer_fill(struct iso_context *ctx,
++ struct fw_iso_packet *packet,
++ struct fw_iso_buffer *buffer,
++ unsigned long payload)
++ {
++ struct descriptor *d;
++ dma_addr_t d_bus, page_bus;
++ int page, offset, rest, z, i, length;
++
++ page = payload >> PAGE_SHIFT;
++ offset = payload & ~PAGE_MASK;
++ rest = packet->payload_length;
++
++ /* We need one descriptor for each page in the buffer. */
++ z = DIV_ROUND_UP(offset + rest, PAGE_SIZE);
++
++ if (WARN_ON(offset & 3 || rest & 3 || page + z > buffer->page_count))
++ return -EFAULT;
++
++ for (i = 0; i < z; i++) {
++ d = context_get_descriptors(&ctx->context, 1, &d_bus);
++ if (d == NULL)
++ return -ENOMEM;
++
++ d->control = cpu_to_le16(DESCRIPTOR_INPUT_MORE |
++ DESCRIPTOR_BRANCH_ALWAYS);
++ if (packet->skip && i == 0)
++ d->control |= cpu_to_le16(DESCRIPTOR_WAIT);
++ if (packet->interrupt && i == z - 1)
++ d->control |= cpu_to_le16(DESCRIPTOR_IRQ_ALWAYS);
++
++ if (offset + rest < PAGE_SIZE)
++ length = rest;
++ else
++ length = PAGE_SIZE - offset;
++ d->req_count = cpu_to_le16(length);
++ d->res_count = d->req_count;
++ d->transfer_status = 0;
++
++ page_bus = page_private(buffer->pages[page]);
++ d->data_address = cpu_to_le32(page_bus + offset);
++
++ rest -= length;
++ offset = 0;
++ page++;
++
++ context_append(&ctx->context, d, 1, 0);
++ }
++
++ return 0;
++ }
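queue_iso_buffer_fill() emits one INPUT_MORE descriptor per page touched by the payload: a partial first page of PAGE_SIZE - offset bytes, then full pages until the payload runs out. The arithmetic in isolation, assuming a 4 KiB page size for the example:

```c
#include <stddef.h>

#define EX_PAGE_SIZE 4096u  /* assumed page size for this sketch */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Number of descriptors needed: one per page the payload touches. */
int descriptors_needed(size_t offset, size_t rest)
{
	return (int)DIV_ROUND_UP(offset + rest, EX_PAGE_SIZE);
}

/* Byte length of descriptor i, walking pages the same way the loop
 * above does: partial first page, then page-sized chunks, then the
 * remaining tail. */
size_t descriptor_length(size_t offset, size_t rest, int i)
{
	size_t length = 0;

	while (i-- >= 0) {
		if (offset + rest < EX_PAGE_SIZE)
			length = rest;
		else
			length = EX_PAGE_SIZE - offset;
		rest -= length;
		offset = 0;
	}
	return length;
}
```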
++
static int ohci_queue_iso(struct fw_iso_context *base,
struct fw_iso_packet *packet,
struct fw_iso_buffer *buffer,
{
struct iso_context *ctx = container_of(base, struct iso_context, base);
unsigned long flags;
-- int ret;
++ int ret = -ENOSYS;
spin_lock_irqsave(&ctx->context.ohci->lock, flags);
-- if (base->type == FW_ISO_CONTEXT_TRANSMIT)
-- ret = ohci_queue_iso_transmit(base, packet, buffer, payload);
-- else
-- ret = ohci_queue_iso_receive_packet_per_buffer(base, packet,
-- buffer, payload);
++ switch (base->type) {
++ case FW_ISO_CONTEXT_TRANSMIT:
++ ret = queue_iso_transmit(ctx, packet, buffer, payload);
++ break;
++ case FW_ISO_CONTEXT_RECEIVE:
++ ret = queue_iso_packet_per_buffer(ctx, packet, buffer, payload);
++ break;
++ case FW_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
++ ret = queue_iso_buffer_fill(ctx, packet, buffer, payload);
++ break;
++ }
spin_unlock_irqrestore(&ctx->context.ohci->lock, flags);
return ret;
static const struct fw_card_driver ohci_driver = {
.enable = ohci_enable,
++ .read_phy_reg = ohci_read_phy_reg,
.update_phy_reg = ohci_update_phy_reg,
.set_config_rom = ohci_set_config_rom,
.send_request = ohci_send_request,
.send_response = ohci_send_response,
.cancel_packet = ohci_cancel_packet,
.enable_phys_dma = ohci_enable_phys_dma,
-- .get_cycle_time = ohci_get_cycle_time,
++ .read_csr = ohci_read_csr,
++ .write_csr = ohci_write_csr,
.allocate_iso_context = ohci_allocate_iso_context,
.free_iso_context = ohci_free_iso_context,
++ .set_iso_channels = ohci_set_iso_channels,
.queue_iso = ohci_queue_iso,
.start_iso = ohci_start_iso,
.stop_iso = ohci_stop_iso,
};
#ifdef CONFIG_PPC_PMAC
- static void ohci_pmac_on(struct pci_dev *dev)
+ static void pmac_ohci_on(struct pci_dev *dev)
{
if (machine_is(powermac)) {
struct device_node *ofn = pci_device_to_OF_node(dev);
}
}
- static void ohci_pmac_off(struct pci_dev *dev)
+ static void pmac_ohci_off(struct pci_dev *dev)
{
if (machine_is(powermac)) {
struct device_node *ofn = pci_device_to_OF_node(dev);
}
}
#else
- #define ohci_pmac_on(dev)
- #define ohci_pmac_off(dev)
+ static inline void pmac_ohci_on(struct pci_dev *dev) {}
+ static inline void pmac_ohci_off(struct pci_dev *dev) {}
#endif /* CONFIG_PPC_PMAC */
static int __devinit pci_probe(struct pci_dev *dev,
const struct pci_device_id *ent)
{
struct fw_ohci *ohci;
- u32 bus_options, max_receive, link_speed, version;
+ u32 bus_options, max_receive, link_speed, version, link_enh;
u64 guid;
int i, err, n_ir, n_it;
size_t size;
fw_card_initialize(&ohci->card, &ohci_driver, &dev->dev);
- ohci_pmac_on(dev);
+ pmac_ohci_on(dev);
err = pci_enable_device(dev);
if (err) {
pci_set_drvdata(dev, ohci);
spin_lock_init(&ohci->lock);
++ mutex_init(&ohci->phy_reg_mutex);
tasklet_init(&ohci->bus_reset_tasklet,
bus_reset_tasklet, (unsigned long)ohci);
if (param_quirks)
ohci->quirks = param_quirks;
+ /* TI OHCI-Lynx and compatible: set recommended configuration bits. */
+ if (dev->vendor == PCI_VENDOR_ID_TI) {
+ pci_read_config_dword(dev, PCI_CFG_TI_LinkEnh, &link_enh);
+
+ /* adjust latency of ATx FIFO: use 1.7 KB threshold */
+ link_enh &= ~TI_LinkEnh_atx_thresh_mask;
+ link_enh |= TI_LinkEnh_atx_thresh_1_7K;
+
+ /* use priority arbitration for asynchronous responses */
+ link_enh |= TI_LinkEnh_enab_unfair;
+
+ /* required for aPhyEnhanceEnable to work */
+ link_enh |= TI_LinkEnh_enab_accel;
+
+ pci_write_config_dword(dev, PCI_CFG_TI_LinkEnh, link_enh);
+ }
+
ar_context_init(&ohci->ar_request_ctx, ohci,
OHCI1394_AsReqRcvContextControlSet);
pci_disable_device(dev);
fail_free:
kfree(&ohci->card);
- ohci_pmac_off(dev);
+ pmac_ohci_off(dev);
fail:
if (err == -ENOMEM)
fw_error("Out of memory\n");
context_release(&ohci->at_response_ctx);
kfree(ohci->it_context_list);
kfree(ohci->ir_context_list);
++ pci_disable_msi(dev);
pci_iounmap(dev, ohci->registers);
pci_release_region(dev, 0);
pci_disable_device(dev);
kfree(&ohci->card);
- ohci_pmac_off(dev);
+ pmac_ohci_off(dev);
fw_notify("Removed fw-ohci device.\n");
}
software_reset(ohci);
free_irq(dev->irq, ohci);
++ pci_disable_msi(dev);
err = pci_save_state(dev);
if (err) {
fw_error("pci_save_state failed\n");
err = pci_set_power_state(dev, pci_choose_state(dev, state));
if (err)
fw_error("pci_set_power_state failed with %d\n", err);
- ohci_pmac_off(dev);
+ pmac_ohci_off(dev);
return 0;
}
struct fw_ohci *ohci = pci_get_drvdata(dev);
int err;
- ohci_pmac_on(dev);
+ pmac_ohci_on(dev);
pci_set_power_state(dev, PCI_D0);
pci_restore_state(dev);
err = pci_enable_device(dev);
static void sbp2_status_write(struct fw_card *card, struct fw_request *request,
int tcode, int destination, int source,
-- int generation, int speed,
-- unsigned long long offset,
++ int generation, unsigned long long offset,
void *payload, size_t length, void *callback_data)
{
struct sbp2_logical_unit *lu = callback_data;
fw_send_request(device->card, &orb->t, TCODE_WRITE_BLOCK_REQUEST,
node_id, generation, device->max_speed, offset,
-- &orb->pointer, sizeof(orb->pointer),
-- complete_transaction, orb);
++ &orb->pointer, 8, complete_transaction, orb);
}
static int sbp2_cancel_orbs(struct sbp2_logical_unit *lu)
fw_run_transaction(device->card, TCODE_WRITE_QUADLET_REQUEST,
lu->tgt->node_id, lu->generation, device->max_speed,
lu->command_block_agent_address + SBP2_AGENT_RESET,
-- &d, sizeof(d));
++ &d, 4);
}
static void complete_agent_reset_write_no_wait(struct fw_card *card,
fw_send_request(device->card, t, TCODE_WRITE_QUADLET_REQUEST,
lu->tgt->node_id, lu->generation, device->max_speed,
lu->command_block_agent_address + SBP2_AGENT_RESET,
-- &d, sizeof(d), complete_agent_reset_write_no_wait, t);
++ &d, 4, complete_agent_reset_write_no_wait, t);
}
static inline void sbp2_allow_block(struct sbp2_logical_unit *lu)
fw_run_transaction(device->card, TCODE_WRITE_QUADLET_REQUEST,
lu->tgt->node_id, lu->generation, device->max_speed,
-- CSR_REGISTER_BASE + CSR_BUSY_TIMEOUT,
-- &d, sizeof(d));
++ CSR_REGISTER_BASE + CSR_BUSY_TIMEOUT, &d, 4);
}
static void sbp2_reconnect(struct work_struct *work);
sdev->start_stop_pwr_cond = 1;
if (lu->tgt->workarounds & SBP2_WORKAROUND_128K_MAX_TRANS)
- blk_queue_max_sectors(sdev->request_queue, 128 * 1024 / 512);
+ blk_queue_max_hw_sectors(sdev->request_queue, 128 * 1024 / 512);
blk_queue_max_segment_size(sdev->request_queue, SBP2_MAX_SEG_SIZE);
#include <linux/delay.h>
#include <linux/device.h>
#include <linux/dma-mapping.h>
-#include <linux/gfp.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/list.h>
struct csr1212_keyval *kv;
struct csr1212_dentry *dentry;
u64 management_agent_addr;
-- u32 unit_characteristics, firmware_revision, model;
++ u32 firmware_revision, model;
unsigned workarounds;
int i;
management_agent_addr = 0;
-- unit_characteristics = 0;
firmware_revision = SBP2_ROM_VALUE_MISSING;
model = ud->flags & UNIT_DIRECTORY_MODEL_ID ?
ud->model_id : SBP2_ROM_VALUE_MISSING;
lu->lun = ORB_SET_LUN(kv->value.immediate);
break;
-- case SBP2_UNIT_CHARACTERISTICS_KEY:
-- /* FIXME: This is ignored so far.
-- * See SBP-2 clause 7.4.8. */
-- unit_characteristics = kv->value.immediate;
-- break;
case SBP2_FIRMWARE_REVISION_KEY:
firmware_revision = kv->value.immediate;
break;
default:
++ /* FIXME: Check for SBP2_UNIT_CHARACTERISTICS_KEY
++ * mgt_ORB_timeout and ORB_size, SBP-2 clause 7.4.8. */
++
/* FIXME: Check for SBP2_DEVICE_TYPE_AND_LUN_KEY.
* Its "ordered" bit has consequences for command ORB
* list handling. See SBP-2 clauses 4.6, 7.4.11, 10.2 */
if (lu->workarounds & SBP2_WORKAROUND_POWER_CONDITION)
sdev->start_stop_pwr_cond = 1;
if (lu->workarounds & SBP2_WORKAROUND_128K_MAX_TRANS)
- blk_queue_max_sectors(sdev->request_queue, 128 * 1024 / 512);
+ blk_queue_max_hw_sectors(sdev->request_queue, 128 * 1024 / 512);
blk_queue_max_segment_size(sdev->request_queue, SBP2_MAX_SEG_SIZE);
return 0;
return rcode != RCODE_COMPLETE ? -EIO : 0;
}
-static int node_lock(struct firedtv *fdtv, u64 addr, __be32 data[])
+static int node_lock(struct firedtv *fdtv, u64 addr, void *data)
{
return node_req(fdtv, addr, data, 8, TCODE_LOCK_COMPARE_SWAP);
}
static void handle_fcp(struct fw_card *card, struct fw_request *request,
int tcode, int destination, int source, int generation,
-- int speed, unsigned long long offset,
-- void *payload, size_t length, void *callback_data)
++ unsigned long long offset, void *payload, size_t length,
++ void *callback_data)
{
struct firedtv *f, *fdtv = NULL;
struct fw_device *device;
/*
* Char device interface.
*
- * Copyright (C) 2005-2006 Kristian Hoegsberg <krh@bitplanet.net>
+ * Copyright (C) 2005-2007 Kristian Hoegsberg <krh@bitplanet.net>
*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
*
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
*
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software Foundation,
- * Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
*/
#ifndef _LINUX_FIREWIRE_CDEV_H
#include <linux/types.h>
#include <linux/firewire-constants.h>
-- #define FW_CDEV_EVENT_BUS_RESET 0x00
-- #define FW_CDEV_EVENT_RESPONSE 0x01
-- #define FW_CDEV_EVENT_REQUEST 0x02
-- #define FW_CDEV_EVENT_ISO_INTERRUPT 0x03
-- #define FW_CDEV_EVENT_ISO_RESOURCE_ALLOCATED 0x04
-- #define FW_CDEV_EVENT_ISO_RESOURCE_DEALLOCATED 0x05
++ #define FW_CDEV_EVENT_BUS_RESET 0x00
++ #define FW_CDEV_EVENT_RESPONSE 0x01
++ #define FW_CDEV_EVENT_REQUEST 0x02
++ #define FW_CDEV_EVENT_ISO_INTERRUPT 0x03
++ #define FW_CDEV_EVENT_ISO_RESOURCE_ALLOCATED 0x04
++ #define FW_CDEV_EVENT_ISO_RESOURCE_DEALLOCATED 0x05
++
++ /* available since kernel version 2.6.36 */
++ #define FW_CDEV_EVENT_REQUEST2 0x06
++ #define FW_CDEV_EVENT_PHY_PACKET_SENT 0x07
++ #define FW_CDEV_EVENT_PHY_PACKET_RECEIVED 0x08
++ #define FW_CDEV_EVENT_ISO_INTERRUPT_MULTICHANNEL 0x09
/**
* struct fw_cdev_event_common - Common part of all fw_cdev_event_ types
* This event is sent when the bus the device belongs to goes through a bus
* reset. It provides information about the new bus configuration, such as
* new node ID for this device, new root ID, and others.
++ *
++ * If @bm_node_id is 0xffff right after bus reset it can be reread by an
++ * %FW_CDEV_IOC_GET_INFO ioctl after bus manager selection was finished.
++ * Kernels with ABI version < 4 do not set @bm_node_id.
*/
struct fw_cdev_event_bus_reset {
__u64 closure;
/**
* struct fw_cdev_event_response - Sent when a response packet was received
-- * @closure: See &fw_cdev_event_common;
-- * set by %FW_CDEV_IOC_SEND_REQUEST ioctl
++ * @closure: See &fw_cdev_event_common; set by %FW_CDEV_IOC_SEND_REQUEST
++ * or %FW_CDEV_IOC_SEND_BROADCAST_REQUEST
++ * or %FW_CDEV_IOC_SEND_STREAM_PACKET ioctl
* @type: See &fw_cdev_event_common; always %FW_CDEV_EVENT_RESPONSE
* @rcode: Response code returned by the remote node
* @length: Data length, i.e. the response's payload size in bytes
* sent by %FW_CDEV_IOC_SEND_REQUEST ioctl. The payload data for responses
* carrying data (read and lock responses) follows immediately and can be
* accessed through the @data field.
++ *
++ * The event is also generated after conclusions of transactions that do not
++ * involve response packets. This includes unified write transactions,
++ * broadcast write transactions, and transmission of asynchronous stream
++ * packets. @rcode indicates success or failure of such transmissions.
*/
struct fw_cdev_event_response {
__u64 closure;
};
/**
-- * struct fw_cdev_event_request - Sent on incoming request to an address region
++ * struct fw_cdev_event_request - Old version of &fw_cdev_event_request2
* @closure: See &fw_cdev_event_common; set by %FW_CDEV_IOC_ALLOCATE ioctl
* @type: See &fw_cdev_event_common; always %FW_CDEV_EVENT_REQUEST
++ * @tcode: See &fw_cdev_event_request2
++ * @offset: See &fw_cdev_event_request2
++ * @handle: See &fw_cdev_event_request2
++ * @length: See &fw_cdev_event_request2
++ * @data: See &fw_cdev_event_request2
++ *
++ * This event is sent instead of &fw_cdev_event_request2 if the kernel or
++ * the client implements ABI version <= 3.
++ *
++ * Unlike &fw_cdev_event_request2, the sender identity cannot be established,
++ * broadcast write requests cannot be distinguished from unicast writes, and
++ * @tcode of lock requests is %TCODE_LOCK_REQUEST.
++ *
++ * Requests to the FCP_REQUEST or FCP_RESPONSE register are responded to as
++ * with &fw_cdev_event_request2, except in kernel 2.6.32 and older, which
++ * send the response packet of the client's %FW_CDEV_IOC_SEND_RESPONSE ioctl.
++ */
++ struct fw_cdev_event_request {
++ __u64 closure;
++ __u32 type;
++ __u32 tcode;
++ __u64 offset;
++ __u32 handle;
++ __u32 length;
++ __u32 data[0];
++ };
++
++ /**
++ * struct fw_cdev_event_request2 - Sent on incoming request to an address region
++ * @closure: See &fw_cdev_event_common; set by %FW_CDEV_IOC_ALLOCATE ioctl
++ * @type: See &fw_cdev_event_common; always %FW_CDEV_EVENT_REQUEST2
* @tcode: Transaction code of the incoming request
* @offset: The offset into the 48-bit per-node address space
++ * @source_node_id: Sender node ID
++ * @destination_node_id: Destination node ID
++ * @card: The index of the card from which the request came
++ * @generation: Bus generation in which the request is valid
* @handle: Reference to the kernel-side pending request
* @length: Data length, i.e. the request's payload size in bytes
* @data: Incoming data, if any
*
* The payload data for requests carrying data (write and lock requests)
* follows immediately and can be accessed through the @data field.
++ *
++ * Unlike &fw_cdev_event_request, @tcode of lock requests is one of the
++ * firewire-core specific %TCODE_LOCK_MASK_SWAP...%TCODE_LOCK_VENDOR_DEPENDENT,
++ * i.e. encodes the extended transaction code.
++ *
++ * @card may differ from &fw_cdev_get_info.card because requests are received
++ * from all cards of the Linux host. @source_node_id, @destination_node_id, and
++ * @generation pertain to that card. Destination node ID and bus generation may
++ * therefore differ from the corresponding fields of the last
++ * &fw_cdev_event_bus_reset.
++ *
++ * @destination_node_id may also differ from the current node ID because of a
++ * non-local bus ID part or in case of a broadcast write request. Note that a
++ * client must call an %FW_CDEV_IOC_SEND_RESPONSE ioctl even in case of a
++ * broadcast write request; the kernel will then release the kernel-side pending
++ * request but will not actually send a response packet.
++ *
++ * In case of a write request to FCP_REQUEST or FCP_RESPONSE, the kernel already
++ * sent a write response immediately after the request was received; in this
++ * case the client must still call an %FW_CDEV_IOC_SEND_RESPONSE ioctl to
++ * release the kernel-side pending request, though another response won't be
++ * sent.
++ *
++ * If the client subsequently needs to initiate requests to the sender node of
++ * an &fw_cdev_event_request2, it needs to use a device file with matching
++ * card index, node ID, and generation for outbound requests.
*/
-- struct fw_cdev_event_request {
++ struct fw_cdev_event_request2 {
__u64 closure;
__u32 type;
__u32 tcode;
__u64 offset;
++ __u32 source_node_id;
++ __u32 destination_node_id;
++ __u32 card;
++ __u32 generation;
__u32 handle;
__u32 length;
__u32 data[0];
* @header: Stripped headers, if any
*
* This event is sent when the controller has completed an &fw_cdev_iso_packet
-- * with the %FW_CDEV_ISO_INTERRUPT bit set. In the receive case, the headers
-- * stripped of all packets up until and including the interrupt packet are
-- * returned in the @header field. The amount of header data per packet is as
-- * specified at iso context creation by &fw_cdev_create_iso_context.header_size.
++ * with the %FW_CDEV_ISO_INTERRUPT bit set.
*
-- * In version 1 of this ABI, header data consisted of the 1394 isochronous
-- * packet header, followed by quadlets from the packet payload if
-- * &fw_cdev_create_iso_context.header_size > 4.
++ * Isochronous transmit events (context type %FW_CDEV_ISO_CONTEXT_TRANSMIT):
*
-- * In version 2 of this ABI, header data consist of the 1394 isochronous
-- * packet header, followed by a timestamp quadlet if
-- * &fw_cdev_create_iso_context.header_size > 4, followed by quadlets from the
-- * packet payload if &fw_cdev_create_iso_context.header_size > 8.
++ * In version 3 and some implementations of version 2 of the ABI, &header_length
++ * is a multiple of 4 and &header contains timestamps of all packets up until
++ * the interrupt packet. The format of the timestamps is as described below for
++ * isochronous reception. In version 1 of the ABI, &header_length was 0.
*
-- * Behaviour of ver. 1 of this ABI is no longer available since ABI ver. 2.
++ * Isochronous receive events (context type %FW_CDEV_ISO_CONTEXT_RECEIVE):
++ *
++ * The headers stripped of all packets up until and including the interrupt
++ * packet are returned in the @header field. The amount of header data per
++ * packet is as specified at iso context creation by
++ * &fw_cdev_create_iso_context.header_size.
++ *
++ * Hence, _interrupt.header_length / _context.header_size is the number of
++ * packets received in this interrupt event. The client can now iterate
++ * through the mmap()'ed DMA buffer according to this number of packets and
++ * to the buffer sizes as the client specified in &fw_cdev_queue_iso.
++ *
++ * Since version 2 of this ABI, the portion for each packet in _interrupt.header
++ * consists of the 1394 isochronous packet header, followed by a timestamp
++ * quadlet if &fw_cdev_create_iso_context.header_size > 4, followed by quadlets
++ * from the packet payload if &fw_cdev_create_iso_context.header_size > 8.
*
-- * Format of 1394 iso packet header: 16 bits len, 2 bits tag, 6 bits channel,
-- * 4 bits tcode, 4 bits sy, in big endian byte order. Format of timestamp:
-- * 16 bits invalid, 3 bits cycleSeconds, 13 bits cycleCount, in big endian byte
-- * order.
++ * Format of 1394 iso packet header: 16 bits data_length, 2 bits tag, 6 bits
++ * channel, 4 bits tcode, 4 bits sy, in big endian byte order.
++ * data_length is the actual received size of the packet without the four
++ * 1394 iso packet header bytes.
++ *
++ * Format of timestamp: 16 bits invalid, 3 bits cycleSeconds, 13 bits
++ * cycleCount, in big endian byte order.
++ *
++ * In version 1 of the ABI, no timestamp quadlet was inserted; instead, payload
++ * data followed directly after the 1394 iso header if header_size > 4.
++ * Behaviour of ver. 1 of this ABI is no longer available since ABI ver. 2.
*/
struct fw_cdev_event_iso_interrupt {
__u64 closure;
};
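The header and timestamp bit layouts above can be unpacked in user space. A minimal sketch, not part of the header; the struct and helper names are hypothetical, and both quadlets are assumed to have already been converted from big-endian to host byte order (e.g. with ntohl()):

```c
#include <stdint.h>

/* Unpack one stripped 1394 iso packet header quadlet and its timestamp
 * quadlet, following the bit layout documented above. */
struct iso_rx_header {
	unsigned data_length;             /* payload bytes, without the header */
	unsigned tag, channel, tcode, sy;
	unsigned cycle_seconds, cycle_count;
};

static struct iso_rx_header parse_iso_rx_header(uint32_t h, uint32_t t)
{
	struct iso_rx_header r = {
		.data_length   = h >> 16,          /* 16 uppermost bits */
		.tag           = (h >> 14) & 0x3,  /* 2 bits */
		.channel       = (h >> 8)  & 0x3f, /* 6 bits */
		.tcode         = (h >> 4)  & 0xf,  /* 4 bits */
		.sy            = h & 0xf,          /* 4 lowermost bits */
		.cycle_seconds = (t >> 13) & 0x7,  /* 3 bits */
		.cycle_count   = t & 0x1fff,       /* 13 bits */
	};
	return r;
}
```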
/**
++ * struct fw_cdev_event_iso_interrupt_mc - An iso buffer chunk was completed
++ * @closure: See &fw_cdev_event_common;
++ * set by %FW_CDEV_CREATE_ISO_CONTEXT ioctl
++ * @type: %FW_CDEV_EVENT_ISO_INTERRUPT_MULTICHANNEL
++ * @completed: Offset into the receive buffer; data before this offset is valid
++ *
++ * This event is sent in multichannel contexts (context type
++ * %FW_CDEV_ISO_CONTEXT_RECEIVE_MULTICHANNEL) for &fw_cdev_iso_packet buffer
++ * chunks that have the %FW_CDEV_ISO_INTERRUPT bit set. Whether this happens
++ * when a packet is completed and/or when a buffer chunk is completed depends
++ * on the hardware implementation.
++ *
++ * The buffer is continuously filled with the following data, per packet:
++ * - the 1394 iso packet header as described at &fw_cdev_event_iso_interrupt,
++ * but in little endian byte order,
++ * - packet payload (as many bytes as specified in the data_length field of
++ * the 1394 iso packet header) in big endian byte order,
++ * - 0...3 padding bytes as needed to align the following trailer quadlet,
++ * - trailer quadlet, containing the reception timestamp as described at
++ * &fw_cdev_event_iso_interrupt, but in little endian byte order.
++ *
++ * Hence the per-packet size is data_length (rounded up to a multiple of 4) + 8.
++ * When processing the data, stop before a packet that would cross the
++ * @completed offset.
++ *
++ * A packet near the end of a buffer chunk will typically spill over into the
++ * next queued buffer chunk. It is the responsibility of the client to check
++ * for this condition, assemble a broken-up packet from its parts, and not to
++ * re-queue any buffer chunks in which as yet unread packet parts reside.
++ */
++ struct fw_cdev_event_iso_interrupt_mc {
++ __u64 closure;
++ __u32 type;
++ __u32 completed;
++ };
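The per-packet layout above implies a simple stride computation when walking the buffer. A sketch, not part of the header and with a hypothetical helper name:

```c
#include <stddef.h>

/* Stride of one packet in a multichannel receive buffer, per the layout
 * above: 4 header bytes + payload padded to a quadlet boundary + 4 trailer
 * (timestamp) bytes, i.e. data_length rounded up to a multiple of 4, + 8. */
static size_t mc_packet_stride(size_t data_length)
{
	return ((data_length + 3) & ~(size_t)3) + 8;
}
```

When iterating, stop before any packet whose stride would cross the @completed offset.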
++
++ /**
* struct fw_cdev_event_iso_resource - Iso resources were allocated or freed
* @closure: See &fw_cdev_event_common;
* set by %FW_CDEV_IOC_(DE)ALLOCATE_ISO_RESOURCE(_ONCE) ioctl
};
/**
++ * struct fw_cdev_event_phy_packet - A PHY packet was transmitted or received
++ * @closure: See &fw_cdev_event_common; set by %FW_CDEV_IOC_SEND_PHY_PACKET
++ * or %FW_CDEV_IOC_RECEIVE_PHY_PACKETS ioctl
++ * @type: %FW_CDEV_EVENT_PHY_PACKET_SENT or %..._RECEIVED
++ * @rcode: %RCODE_..., indicates success or failure of transmission
++ * @length: Data length in bytes
++ * @data: Incoming data
++ *
++ * If @type is %FW_CDEV_EVENT_PHY_PACKET_SENT, @length is 0 and @data empty,
++ * except in case of a ping packet: Then, @length is 4, and @data[0] is the
++ * ping time in 49.152MHz clocks if @rcode is %RCODE_COMPLETE.
++ *
++ * If @type is %FW_CDEV_EVENT_PHY_PACKET_RECEIVED, @length is 8 and @data
++ * consists of the two PHY packet quadlets, in host byte order.
++ */
++ struct fw_cdev_event_phy_packet {
++ __u64 closure;
++ __u32 type;
++ __u32 rcode;
++ __u32 length;
++ __u32 data[0];
++ };
++
++ /**
* union fw_cdev_event - Convenience union of fw_cdev_event_ types
-- * @common: Valid for all types
-- * @bus_reset: Valid if @common.type == %FW_CDEV_EVENT_BUS_RESET
-- * @response: Valid if @common.type == %FW_CDEV_EVENT_RESPONSE
-- * @request: Valid if @common.type == %FW_CDEV_EVENT_REQUEST
-- * @iso_interrupt: Valid if @common.type == %FW_CDEV_EVENT_ISO_INTERRUPT
-- * @iso_resource: Valid if @common.type ==
++ * @common: Valid for all types
++ * @bus_reset: Valid if @common.type == %FW_CDEV_EVENT_BUS_RESET
++ * @response: Valid if @common.type == %FW_CDEV_EVENT_RESPONSE
++ * @request: Valid if @common.type == %FW_CDEV_EVENT_REQUEST
++ * @request2: Valid if @common.type == %FW_CDEV_EVENT_REQUEST2
++ * @iso_interrupt: Valid if @common.type == %FW_CDEV_EVENT_ISO_INTERRUPT
++ * @iso_interrupt_mc: Valid if @common.type ==
++ * %FW_CDEV_EVENT_ISO_INTERRUPT_MULTICHANNEL
++ * @iso_resource: Valid if @common.type ==
* %FW_CDEV_EVENT_ISO_RESOURCE_ALLOCATED or
* %FW_CDEV_EVENT_ISO_RESOURCE_DEALLOCATED
++ * @phy_packet: Valid if @common.type ==
++ * %FW_CDEV_EVENT_PHY_PACKET_SENT or
++ * %FW_CDEV_EVENT_PHY_PACKET_RECEIVED
*
* Convenience union for userspace use. Events could be read(2) into an
* appropriately aligned char buffer and then cast to this union for further
struct fw_cdev_event_bus_reset bus_reset;
struct fw_cdev_event_response response;
struct fw_cdev_event_request request;
++ struct fw_cdev_event_request2 request2; /* added in 2.6.36 */
struct fw_cdev_event_iso_interrupt iso_interrupt;
-- struct fw_cdev_event_iso_resource iso_resource;
++ struct fw_cdev_event_iso_interrupt_mc iso_interrupt_mc; /* added in 2.6.36 */
++ struct fw_cdev_event_iso_resource iso_resource; /* added in 2.6.30 */
++ struct fw_cdev_event_phy_packet phy_packet; /* added in 2.6.36 */
};
/* available since kernel version 2.6.22 */
/* available since kernel version 2.6.34 */
#define FW_CDEV_IOC_GET_CYCLE_TIMER2 _IOWR('#', 0x14, struct fw_cdev_get_cycle_timer2)
++ /* available since kernel version 2.6.36 */
++ #define FW_CDEV_IOC_SEND_PHY_PACKET _IOWR('#', 0x15, struct fw_cdev_send_phy_packet)
++ #define FW_CDEV_IOC_RECEIVE_PHY_PACKETS _IOW('#', 0x16, struct fw_cdev_receive_phy_packets)
++ #define FW_CDEV_IOC_SET_ISO_CHANNELS _IOW('#', 0x17, struct fw_cdev_set_iso_channels)
++
/*
-- * FW_CDEV_VERSION History
++ * ABI version history
* 1 (2.6.22) - initial version
++ * (2.6.24) - added %FW_CDEV_IOC_GET_CYCLE_TIMER
* 2 (2.6.30) - changed &fw_cdev_event_iso_interrupt.header if
* &fw_cdev_create_iso_context.header_size is 8 or more
++ * - added %FW_CDEV_IOC_*_ISO_RESOURCE*,
++ * %FW_CDEV_IOC_GET_SPEED, %FW_CDEV_IOC_SEND_BROADCAST_REQUEST,
++ * %FW_CDEV_IOC_SEND_STREAM_PACKET
* (2.6.32) - added time stamp to xmit &fw_cdev_event_iso_interrupt
* (2.6.33) - IR has always packet-per-buffer semantics now, not one of
* dual-buffer or packet-per-buffer depending on hardware
++ * - shared use and auto-response for FCP registers
* 3 (2.6.34) - made &fw_cdev_get_cycle_timer reliable
++ * - added %FW_CDEV_IOC_GET_CYCLE_TIMER2
++ * 4 (2.6.36) - added %FW_CDEV_EVENT_REQUEST2, %FW_CDEV_EVENT_PHY_PACKET_*,
++ * and &fw_cdev_allocate.region_end
++ * - implemented &fw_cdev_event_bus_reset.bm_node_id
++ * - added %FW_CDEV_IOC_SEND_PHY_PACKET, _RECEIVE_PHY_PACKETS
++ * - added %FW_CDEV_EVENT_ISO_INTERRUPT_MULTICHANNEL,
++ * %FW_CDEV_ISO_CONTEXT_RECEIVE_MULTICHANNEL, and
++ * %FW_CDEV_IOC_SET_ISO_CHANNELS
*/
-- #define FW_CDEV_VERSION 3
++ #define FW_CDEV_VERSION 3 /* Meaningless; don't use this macro. */
/**
* struct fw_cdev_get_info - General purpose information ioctl
-- * @version: The version field is just a running serial number.
-- * We never break backwards compatibility, but may add more
-- * structs and ioctls in later revisions.
++ * @version: The version field is just a running serial number. Both an
++ * input parameter (ABI version implemented by the client) and
++ * output parameter (ABI version implemented by the kernel).
++ * A client must not fill in the %FW_CDEV_VERSION defined in an
++ * included kernel header file, but the actual version for which
++ * the client was implemented. This is necessary for forward
++ * compatibility. We never break backwards compatibility, but
++ * may add more structs, events, and ioctls in later revisions.
* @rom_length: If @rom is non-zero, at most rom_length bytes of configuration
* ROM will be copied into that user space address. In either
* case, @rom_length is updated with the actual length of the
};
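The @version handshake described above amounts to taking the lower of the two implemented versions. A sketch, not part of the header; the constant and helper are hypothetical, illustrating that a client hard-codes the version it was written against rather than using %FW_CDEV_VERSION:

```c
/* ABI version this (hypothetical) client was written for -- deliberately a
 * private constant, not the FW_CDEV_VERSION macro from a kernel header. */
#define CLIENT_ABI_VERSION 4

/* After FW_CDEV_IOC_GET_INFO, @version holds the kernel's ABI version;
 * behavior common to both sides is that of the lower version. */
static unsigned effective_abi_version(unsigned kernel_version)
{
	return kernel_version < CLIENT_ABI_VERSION ?
	       kernel_version : CLIENT_ABI_VERSION;
}
```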
/**
-- * struct fw_cdev_allocate - Allocate a CSR address range
++ * struct fw_cdev_allocate - Allocate a CSR in an address range
* @offset: Start offset of the address range
* @closure: To be passed back to userspace in request events
-- * @length: Length of the address range, in bytes
++ * @length: Length of the CSR, in bytes
* @handle: Handle to the allocation, written by the kernel
++ * @region_end: First address above the address range (added in ABI v4, 2.6.36)
*
* Allocate an address range in the 48-bit address space on the local node
* (the controller). This allows userspace to listen for requests with an
-- * offset within that address range. When the kernel receives a request
-- * within the range, an &fw_cdev_event_request event will be written back.
-- * The @closure field is passed back to userspace in the response event.
++ * offset within that address range. Each time the kernel receives a
++ * request within the range, an &fw_cdev_event_request2 event will be emitted.
++ * (If the kernel or the client implements ABI version <= 3, an
++ * &fw_cdev_event_request will be generated instead.)
++ *
++ * The @closure field is passed back to userspace in these request events.
* The @handle field is an out parameter, returning a handle to the allocated
* range to be used for later deallocation of the range.
*
* The address range is allocated on all local nodes. The address allocation
-- * is exclusive except for the FCP command and response registers.
++ * is exclusive except for the FCP command and response registers. If an
++ * exclusive address region is already in use, the ioctl fails with errno set
++ * to %EBUSY.
++ *
++ * If kernel and client implement ABI version >= 4, the kernel looks up a free
++ * spot of size @length inside [@offset..@region_end) and, if found, writes
++ * the start address of the new CSR back in @offset. I.e. @offset is an
++ * in and out parameter. If this automatic placement of a CSR in a bigger
++ * address range is not desired, the client simply needs to set @region_end
++ * = @offset + @length.
++ *
++ * If the kernel or the client implements ABI version <= 3, @region_end is
++ * ignored and effectively assumed to be @offset + @length.
++ *
++ * @region_end is only present in a kernel header >= 2.6.36. If necessary,
++ * this can for example be tested by #ifdef FW_CDEV_EVENT_REQUEST2.
*/
struct fw_cdev_allocate {
__u64 offset;
__u64 closure;
__u32 length;
__u32 handle;
++ __u64 region_end; /* available since kernel version 2.6.36 */
};
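Disabling automatic CSR placement, as described above, just means making the search window exactly as large as the CSR. A sketch, not part of the header; the struct is mirrored locally with stdint types and the helper name is hypothetical:

```c
#include <stdint.h>

/* Local mirror of struct fw_cdev_allocate above (__u64/__u32 -> stdint). */
struct fw_cdev_allocate {
	uint64_t offset;
	uint64_t closure;
	uint32_t length;
	uint32_t handle;
	uint64_t region_end;
};

/* Request a CSR at a fixed address: leave the kernel no room to slide
 * the CSR within [offset..region_end). */
static void fw_cdev_allocate_fixed(struct fw_cdev_allocate *a,
				   uint64_t offset, uint32_t length)
{
	a->offset = offset;
	a->length = length;
	a->region_end = offset + length;
}
```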
/**
* Initiate a bus reset for the bus this device is on. The bus reset can be
* either the original (long) bus reset or the arbitrated (short) bus reset
* introduced in 1394a-2000.
++ *
++ * The ioctl returns immediately. A subsequent &fw_cdev_event_bus_reset
++ * indicates when the reset actually happened. Since ABI v4, this may be
++ * considerably later than the ioctl because the kernel ensures a grace period
++ * between subsequent bus resets as per IEEE 1394 bus management specification.
*/
struct fw_cdev_initiate_bus_reset {
-- __u32 type; /* FW_CDEV_SHORT_RESET or FW_CDEV_LONG_RESET */
++ __u32 type;
};
/**
*
* @immediate, @key, and @data array elements are CPU-endian quadlets.
*
-- * If successful, the kernel adds the descriptor and writes back a handle to the
-- * kernel-side object to be used for later removal of the descriptor block and
-- * immediate key.
++ * If successful, the kernel adds the descriptor and writes back a @handle to
++ * the kernel-side object to be used for later removal of the descriptor block
++ * and immediate key. The kernel will also generate a bus reset to signal the
++ * change of the configuration ROM to other nodes.
*
* This ioctl affects the configuration ROMs of all local nodes.
* The ioctl only succeeds on device files which represent a local node.
* descriptor was added
*
* Remove a descriptor block and accompanying immediate key from the local
-- * nodes' configuration ROMs.
++ * nodes' configuration ROMs. The kernel will also generate a bus reset to
++ * signal the change of the configuration ROM to other nodes.
*/
struct fw_cdev_remove_descriptor {
__u32 handle;
};
-- #define FW_CDEV_ISO_CONTEXT_TRANSMIT 0
-- #define FW_CDEV_ISO_CONTEXT_RECEIVE 1
++ #define FW_CDEV_ISO_CONTEXT_TRANSMIT 0
++ #define FW_CDEV_ISO_CONTEXT_RECEIVE 1
++ #define FW_CDEV_ISO_CONTEXT_RECEIVE_MULTICHANNEL 2 /* added in 2.6.36 */
/**
-- * struct fw_cdev_create_iso_context - Create a context for isochronous IO
-- * @type: %FW_CDEV_ISO_CONTEXT_TRANSMIT or %FW_CDEV_ISO_CONTEXT_RECEIVE
-- * @header_size: Header size to strip for receive contexts
-- * @channel: Channel to bind to
-- * @speed: Speed for transmit contexts
-- * @closure: To be returned in &fw_cdev_event_iso_interrupt
++ * struct fw_cdev_create_iso_context - Create a context for isochronous I/O
++ * @type: %FW_CDEV_ISO_CONTEXT_TRANSMIT or %FW_CDEV_ISO_CONTEXT_RECEIVE or
++ * %FW_CDEV_ISO_CONTEXT_RECEIVE_MULTICHANNEL
++ * @header_size: Header size to strip in single-channel reception
++ * @channel: Channel to bind to in single-channel reception or transmission
++ * @speed: Transmission speed
++ * @closure: To be returned in &fw_cdev_event_iso_interrupt or
++ * &fw_cdev_event_iso_interrupt_multichannel
* @handle: Handle to context, written back by kernel
*
* Prior to sending or receiving isochronous I/O, a context must be created.
* The context records information about the transmit or receive configuration
* and typically maps to an underlying hardware resource. A context is set up
* for either sending or receiving. It is bound to a specific isochronous
-- * channel.
++ * @channel.
++ *
++ * In case of multichannel reception, @header_size and @channel are ignored
++ * and the channels are selected by %FW_CDEV_IOC_SET_ISO_CHANNELS.
++ *
++ * For %FW_CDEV_ISO_CONTEXT_RECEIVE contexts, @header_size must be at least 4
++ * and must be a multiple of 4. It is ignored in other context types.
++ *
++ * @speed is ignored in receive context types.
*
* If a context was successfully created, the kernel writes back a handle to the
* context, which must be passed in for subsequent operations on that context.
*
-- * For receive contexts, @header_size must be at least 4 and must be a multiple
-- * of 4.
-- *
-- * Note that the effect of a @header_size > 4 depends on
-- * &fw_cdev_get_info.version, as documented at &fw_cdev_event_iso_interrupt.
++ * Limitations:
++ * No more than one iso context can be created per fd.
++ * The total number of contexts that all userspace and kernelspace drivers can
++ * create on a card at a time is a hardware limit, typically 4 or 8 contexts
++ * per direction; at most one of them can be a multichannel receive context.
*/
struct fw_cdev_create_iso_context {
__u32 type;
__u32 handle;
};
++ /**
++ * struct fw_cdev_set_iso_channels - Select channels in multichannel reception
++ * @channels: Bitmask of channels to listen to
++ * @handle: Handle of the multichannel receive context
++ *
++ * @channels is the bitwise OR of 1ULL << n for each channel n to listen to.
++ *
++ * The ioctl fails with errno %EBUSY if there is already another receive context
++ * on a channel in @channels. In that case, the bitmask of all unoccupied
++ * channels is returned in @channels.
++ */
++ struct fw_cdev_set_iso_channels {
++ __u64 channels;
++ __u32 handle;
++ };
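Building the @channels mask from a list of channel numbers follows directly from the 1ULL << n rule above. A sketch with a hypothetical helper name:

```c
#include <stdint.h>

/* Bitwise OR of 1ULL << n for each channel number n (0..63) to listen to. */
static uint64_t iso_channel_mask(const unsigned *channels, unsigned count)
{
	uint64_t mask = 0;

	while (count--)
		mask |= UINT64_C(1) << channels[count];
	return mask;
}
```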
++
#define FW_CDEV_ISO_PAYLOAD_LENGTH(v) (v)
#define FW_CDEV_ISO_INTERRUPT (1 << 16)
#define FW_CDEV_ISO_SKIP (1 << 17)
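The three macros above cover only part of the @control word. A sketch of packing a transmit @control value; the tag, sy, and header-length shifts are assumptions derived from the bit layout documented for &fw_cdev_iso_packet, and the helper name is hypothetical:

```c
#include <stdint.h>

#define FW_CDEV_ISO_PAYLOAD_LENGTH(v)  (v)           /* from the header */
#define FW_CDEV_ISO_INTERRUPT          (1 << 16)     /* from the header */
#define FW_CDEV_ISO_SKIP               (1 << 17)     /* from the header */
#define FW_CDEV_ISO_TAG(v)             ((v) << 18)   /* assumed shift */
#define FW_CDEV_ISO_SY(v)              ((v) << 20)   /* assumed shift */
#define FW_CDEV_ISO_HEADER_LENGTH(v)   ((v) << 24)   /* assumed shift */

/* Pack @control for a transmit packet: header length in the 8 uppermost
 * bits, sy and tag fields, interrupt flag, payload length in the 16
 * lowermost bits. */
static uint32_t iso_tx_control(unsigned header_len, unsigned sy, unsigned tag,
			       int interrupt, unsigned payload_len)
{
	return FW_CDEV_ISO_HEADER_LENGTH(header_len) |
	       FW_CDEV_ISO_SY(sy) | FW_CDEV_ISO_TAG(tag) |
	       (interrupt ? FW_CDEV_ISO_INTERRUPT : 0) |
	       FW_CDEV_ISO_PAYLOAD_LENGTH(payload_len);
}
```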
/**
* struct fw_cdev_iso_packet - Isochronous packet
-- * @control: Contains the header length (8 uppermost bits), the sy field
-- * (4 bits), the tag field (2 bits), a sync flag (1 bit),
-- * a skip flag (1 bit), an interrupt flag (1 bit), and the
++ * @control: Contains the header length (8 uppermost bits),
++ * the sy field (4 bits), the tag field (2 bits), a sync flag
++ * or a skip flag (1 bit), an interrupt flag (1 bit), and the
* payload length (16 lowermost bits)
-- * @header: Header and payload
++ * @header: Header and payload in case of a transmit context.
*
* &struct fw_cdev_iso_packet is used to describe isochronous packet queues.
-- *
* Use the FW_CDEV_ISO_ macros to fill in @control.
++ * The @header array is empty in case of receive contexts.
++ *
++ * Context type %FW_CDEV_ISO_CONTEXT_TRANSMIT:
++ *
++ * @control.HEADER_LENGTH must be a multiple of 4. It specifies the numbers of
++ * bytes in @header that will be prepended to the packet's payload. These bytes
++ * are copied into the kernel and will not be accessed after the ioctl has
++ * returned.
++ *
++ * The @control.SY and TAG fields are copied to the iso packet header. These
++ * fields are specified by IEEE 1394a and IEC 61883-1.
++ *
++ * The @control.SKIP flag specifies that no packet is to be sent in a frame.
++ * When using this, all other fields except @control.INTERRUPT must be zero.
++ *
++ * When a packet with the @control.INTERRUPT flag set has been completed, an
++ * &fw_cdev_event_iso_interrupt event will be sent.
++ *
++ * Context type %FW_CDEV_ISO_CONTEXT_RECEIVE:
++ *
++ * @control.HEADER_LENGTH must be a multiple of the context's header_size.
++ * If the HEADER_LENGTH is larger than the context's header_size, multiple
++ * packets are queued for this entry.
++ *
++ * The @control.SY and TAG fields are ignored.
++ *
++ * If the @control.SYNC flag is set, the context drops all packets until a
++ * packet whose sy field matches &fw_cdev_start_iso.sync is received.
++ *
++ * @control.PAYLOAD_LENGTH defines how many payload bytes can be received for
++ * one packet (in addition to payload quadlets that have been defined as headers
++ * and are stripped and returned in the &fw_cdev_event_iso_interrupt structure).
++ * If more bytes are received, the additional bytes are dropped. If fewer bytes
++ * are received, the remaining bytes in this part of the payload buffer will not
++ * be written to, not even by the next packet. I.e., packets received in
++ * consecutive frames will not necessarily be consecutive in memory. If an
++ * entry has queued multiple packets, the PAYLOAD_LENGTH is divided equally
++ * among them.
*
-- * For transmit packets, the header length must be a multiple of 4 and specifies
-- * the numbers of bytes in @header that will be prepended to the packet's
-- * payload; these bytes are copied into the kernel and will not be accessed
-- * after the ioctl has returned. The sy and tag fields are copied to the iso
-- * packet header (these fields are specified by IEEE 1394a and IEC 61883-1).
-- * The skip flag specifies that no packet is to be sent in a frame; when using
-- * this, all other fields except the interrupt flag must be zero.
-- *
-- * For receive packets, the header length must be a multiple of the context's
-- * header size; if the header length is larger than the context's header size,
-- * multiple packets are queued for this entry. The sy and tag fields are
-- * ignored. If the sync flag is set, the context drops all packets until
-- * a packet with a matching sy field is received (the sync value to wait for is
-- * specified in the &fw_cdev_start_iso structure). The payload length defines
-- * how many payload bytes can be received for one packet (in addition to payload
-- * quadlets that have been defined as headers and are stripped and returned in
-- * the &fw_cdev_event_iso_interrupt structure). If more bytes are received, the
-- * additional bytes are dropped. If less bytes are received, the remaining
-- * bytes in this part of the payload buffer will not be written to, not even by
-- * the next packet, i.e., packets received in consecutive frames will not
-- * necessarily be consecutive in memory. If an entry has queued multiple
-- * packets, the payload length is divided equally among them.
-- *
-- * When a packet with the interrupt flag set has been completed, the
++ * When a packet with the @control.INTERRUPT flag set has been completed, an
* &fw_cdev_event_iso_interrupt event will be sent. An entry that has queued
* multiple receive packets is completed when its last packet is completed.
++ *
++ * Context type %FW_CDEV_ISO_CONTEXT_RECEIVE_MULTICHANNEL:
++ *
++ * Here, &fw_cdev_iso_packet would be more aptly named _iso_buffer_chunk since
++ * it specifies a chunk of the mmap()'ed buffer, while the number and alignment
++ * of packets to be placed into the buffer chunk is not known beforehand.
++ *
++ * @control.PAYLOAD_LENGTH is the size of the buffer chunk and specifies room
++ * for header, payload, padding, and trailer bytes of one or more packets.
++ * It must be a multiple of 4.
++ *
++ * @control.HEADER_LENGTH, TAG and SY are ignored. SYNC is treated as described
++ * for single-channel reception.
++ *
++ * When a buffer chunk with the @control.INTERRUPT flag set has been filled
++ * entirely, an &fw_cdev_event_iso_interrupt_mc event will be sent.
*/
struct fw_cdev_iso_packet {
__u32 control;
/**
* struct fw_cdev_queue_iso - Queue isochronous packets for I/O
-- * @packets: Userspace pointer to packet data
++ * @packets: Userspace pointer to an array of &fw_cdev_iso_packet
* @data: Pointer into mmap()'ed payload buffer
-- * @size: Size of packet data in bytes
++ * @size: Size of the @packets array, in bytes
* @handle: Isochronous context handle
*
* Queue a number of isochronous packets for reception or transmission.
* The kernel may or may not queue all packets, but will write back updated
* values of the @packets, @data and @size fields, so the ioctl can be
* resubmitted easily.
++ *
++ * In case of a multichannel receive context, @data must be quadlet-aligned
++ * relative to the buffer start.
*/
struct fw_cdev_queue_iso {
__u64 packets;
__u32 speed;
};
++ /**
++ * struct fw_cdev_send_phy_packet - send a PHY packet
++ * @closure: Passed back to userspace in the PHY-packet-sent event
++ * @data: First and second quadlet of the PHY packet
++ * @generation: The bus generation where the packet is valid
++ *
++ * The %FW_CDEV_IOC_SEND_PHY_PACKET ioctl sends a PHY packet to all nodes
++ * on the same card as this device. After transmission, an
++ * %FW_CDEV_EVENT_PHY_PACKET_SENT event is generated.
++ *
++ * The payload @data[] shall be specified in host byte order. Usually,
++ * @data[1] needs to be the bitwise inverse of @data[0]. VersaPHY packets
++ * are an exception to this rule.
++ *
++ * The ioctl is only permitted on device files which represent a local node.
++ */
++ struct fw_cdev_send_phy_packet {
++ __u64 closure;
++ __u32 data[2];
++ __u32 generation;
++ };
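Filling @data[] per the inverse-quadlet rule above; a sketch with a hypothetical helper name:

```c
#include <stdint.h>

/* Fill the two PHY packet quadlets in host byte order.  The second quadlet
 * is the bitwise inverse of the first, except for VersaPHY packets. */
static void fill_phy_packet(uint32_t data[2], uint32_t first_quadlet)
{
	data[0] = first_quadlet;
	data[1] = ~first_quadlet;
}
```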
++
++ /**
++ * struct fw_cdev_receive_phy_packets - start reception of PHY packets
++ * @closure: Passed back to userspace in PHY packet events
++ *
++ * This ioctl activates issuing of %FW_CDEV_EVENT_PHY_PACKET_RECEIVED due to
++ * incoming PHY packets from any node on the same bus as the device.
++ *
++ * The ioctl is only permitted on device files which represent a local node.
++ */
++ struct fw_cdev_receive_phy_packets {
++ __u64 closure;
++ };
++
#endif /* _LINUX_FIREWIRE_CDEV_H */