Format: <area>[,<node>]
See also Documentation/networking/decnet.txt.
- default_blu= [VT]
+ vt.default_blu= [VT]
Format: <blue0>,<blue1>,<blue2>,...,<blue15>
Change the default blue palette of the console.
This is a 16-member array composed of values
ranging from 0-255.
- default_grn= [VT]
+ vt.default_grn= [VT]
Format: <green0>,<green1>,<green2>,...,<green15>
Change the default green palette of the console.
This is a 16-member array composed of values
ranging from 0-255.
- default_red= [VT]
+ vt.default_red= [VT]
Format: <red0>,<red1>,<red2>,...,<red15>
Change the default red palette of the console.
This is a 16-member array composed of values
ranging from 0-255.
- default_utf8= [VT]
+ vt.default_utf8=
+ [VT]
Format=<0|1>
Set system-wide default UTF-8 mode for all tty's.
- Default is 0 and by setting to 1, it enables UTF-8
- mode for all newly opened or allocated terminals.
+ Default is 1, i.e. UTF-8 mode is enabled for all
+ newly opened terminals.
dhash_entries= [KNL]
Set number of hash buckets for dentry cache.
lapic_timer_c2_ok [X86-32,x86-64,APIC] trust the local apic timer in
C2 power state.
+ libata.dma= [LIBATA] DMA control
+ libata.dma=0 Disable all PATA and SATA DMA
+ libata.dma=1 PATA and SATA Disk DMA only
+ libata.dma=2 ATAPI (CDROM) DMA only
+ libata.dma=4 Compact Flash DMA only
+ Combinations also work, so libata.dma=3 enables DMA
+ for disks and CDROMs, but not CFs.
+
libata.noacpi [LIBATA] Disables use of ACPI in libata suspend/resume
when set.
Format: <int>
variable can be read when reading some _other_ cpu's variables.
-* Rules to follow when using local atomic operations
-
-- Variables touched by local ops must be per cpu variables.
-- _Only_ the CPU owner of these variables must write to them.
-- This CPU can use local ops from any context (process, irq, softirq, nmi, ...)
- to update its local_t variables.
-- Preemption (or interrupts) must be disabled when using local ops in
- process context to make sure the process won't be migrated to a
- different CPU between getting the per-cpu variable and doing the
- actual local op.
-- When using local ops in interrupt context, no special care must be
- taken on a mainline kernel, since they will run on the local CPU with
- preemption already disabled. I suggest, however, to explicitly
- disable preemption anyway to make sure it will still work correctly on
- -rt kernels.
-- Reading the local cpu variable will provide the current copy of the
- variable.
-- Reads of these variables can be done from any CPU, because updates to
- "long", aligned, variables are always atomic. Since no memory
- synchronization is done by the writer CPU, an outdated copy of the
- variable can be read when reading some _other_ cpu's variables.
-
-
* How to use local atomic operations
#include <linux/percpu.h>
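A minimal sketch of the pattern (the counter name is illustrative): keep
one local_t per CPU and let only the owning CPU write it, with preemption
disabled around the update.

	#include <linux/percpu.h>
	#include <asm/local.h>

	static DEFINE_PER_CPU(local_t, counters) = LOCAL_INIT(0);

	static void increment_counter(void)
	{
		/* get_cpu_var() disables preemption around the update */
		local_inc(&get_cpu_var(counters));
		put_cpu_var(counters);
	}

	/* Reads may come from any CPU, but the copy may be outdated. */
	static long read_counter(int cpu)
	{
		return local_read(&per_cpu(counters, cpu));
	}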
2) Do not forget to update netdev->trans_start to jiffies after
each new tx packet is given to the hardware.
-3) Do not forget that once you return 0 from your hard_start_xmit
+3) A hard_start_xmit method must not modify the shared parts of a
+ cloned SKB.
+
+4) Do not forget that once you return 0 from your hard_start_xmit
method, it is your driver's responsibility to free up the SKB
   and to do so within some finite amount of time, as sketched below.
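A hypothetical skeleton illustrating rules 2) and 4); the code that hands
the packet to the hardware is omitted, and a real driver would typically
free the SKB from its TX-completion interrupt rather than inline:

	static int mydev_hard_start_xmit(struct sk_buff *skb,
					 struct net_device *dev)
	{
		/* ... hand the packet to the hardware here ... */

		dev->trans_start = jiffies;		/* rule 2) */

		/*
		 * Rule 4): having returned 0, the driver owns the SKB
		 * and must free it (e.g. dev_kfree_skb_irq() from the
		 * TX-completion handler) within a finite amount of time.
		 */
		return 0;
	}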
"wavelan" driver (old ISA Wavelan)
----------------
o Config : Network device -> Wireless LAN -> AT&T WaveLAN
- o Location : .../drivers/net/wavelan*
- o in-line doc : .../drivers/net/wavelan.p.h
+ o Location : .../drivers/net/wireless/wavelan*
+ o in-line doc : .../drivers/net/wireless/wavelan.p.h
o on-line doc :
http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/Wavelan.html
autoconfiguration will take place. The most common way to use this
is "ip=dhcp".
- Note that "ip=off" is not the same thing as "ip=::::::off", because in
- the latter autoconfiguration will take place if any of DHCP, BOOTP or RARP
- are compiled in the kernel.
-
<client-ip> IP address of the client.
Default: Determined using autoconfiguration.
this option.
off or none: don't use autoconfiguration
+ (do static IP assignment instead)
on or any: use any protocol available in the kernel
+ (default)
dhcp: use DHCP
bootp: use BOOTP
rarp: use RARP
A more advanced driver could for example check that a HTTP server is
still responding before doing the write call to ping the watchdog.
-When the device is closed, the watchdog is disabled. This is not
-always such a good idea, since if there is a bug in the watchdog
-daemon and it crashes the system will not reboot. Because of this,
-some of the drivers support the configuration option "Disable watchdog
-shutdown on close", CONFIG_WATCHDOG_NOWAYOUT. If it is set to Y when
-compiling the kernel, there is no way of disabling the watchdog once
-it has been started. So, if the watchdog daemon crashes, the system
-will reboot after the timeout has passed. Watchdog devices also usually
-support the nowayout module parameter so that this option can be controlled
-at runtime.
-
-Drivers will not disable the watchdog, unless a specific magic character 'V'
-has been sent /dev/watchdog just before closing the file. If the userspace
-daemon closes the file without sending this special character, the driver
-will assume that the daemon (and userspace in general) died, and will stop
-pinging the watchdog without disabling it first. This will then cause a
-reboot if the watchdog is not re-opened in sufficient time.
+When the device is closed, the watchdog is disabled, unless the "Magic
+Close" feature is supported (see below). This is not always such a
+good idea, since if there is a bug in the watchdog daemon and it
+crashes the system will not reboot. Because of this, some of the
+drivers support the configuration option "Disable watchdog shutdown on
+close", CONFIG_WATCHDOG_NOWAYOUT. If it is set to Y when compiling
+the kernel, there is no way of disabling the watchdog once it has been
+started. So, if the watchdog daemon crashes, the system will reboot
+after the timeout has passed. Watchdog devices also usually support
+the nowayout module parameter so that this option can be controlled at
+runtime.
+
+Magic Close feature:
+
+If a driver supports "Magic Close", the driver will not disable the
+watchdog unless a specific magic character 'V' has been sent to
+/dev/watchdog just before closing the file. If the userspace daemon
+closes the file without sending this special character, the driver
+will assume that the daemon (and userspace in general) died, and will
+stop pinging the watchdog without disabling it first. This will then
+cause a reboot if the watchdog is not re-opened in sufficient time.
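+
+A minimal userspace sketch of the Magic Close handshake (error handling
+trimmed; the 'V' has no effect when nowayout is active):
+
+	#include <fcntl.h>
+	#include <unistd.h>
+
+	int main(void)
+	{
+		int fd = open("/dev/watchdog", O_WRONLY);
+
+		if (fd < 0)
+			return 1;
+		/* ... write(fd, "\0", 1) periodically to ping ... */
+		write(fd, "V", 1);	/* magic character: allow clean stop */
+		close(fd);		/* driver now disables the watchdog */
+		return 0;
+	}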
The ioctl API:
ATMEL AT91 MCI DRIVER
P: Nicolas Ferre
-M: nicolas.ferre@rfo.atmel.com
+M: nicolas.ferre@atmel.com
L: linux-arm-kernel@lists.arm.linux.org.uk (subscribers-only)
W: http://www.atmel.com/products/AT91/
W: http://www.at91.com/
S: Maintained
+ATMEL LCDFB DRIVER
+P: Nicolas Ferre
+M: nicolas.ferre@atmel.com
+L: linux-fbdev-devel@lists.sourceforge.net (subscribers-only)
+S: Maintained
+
ATMEL MACB ETHERNET DRIVER
P: Haavard Skinnemoen
M: hskinnemoen@atmel.com
S: Maintained
IDE/ATAPI CDROM DRIVER
+P: Borislav Petkov
+M: bbpetkov@yahoo.de
L: linux-ide@vger.kernel.org
-S: Unmaintained
+S: Maintained
IDE/ATAPI FLOPPY DRIVERS
P: Paul Bristow
P: Roland Dreier
M: rolandd@cisco.com
P: Sean Hefty
-M: mshefty@ichips.intel.com
+M: sean.hefty@intel.com
P: Hal Rosenstock
M: hal.rosenstock@gmail.com
L: general@lists.openfabrics.org
S: Maintained
INTEL PRO/100 ETHERNET SUPPORT
-P: John Ronciak
-M: john.ronciak@intel.com
+P: Auke Kok
+M: auke-jan.h.kok@intel.com
P: Jesse Brandeburg
M: jesse.brandeburg@intel.com
P: Jeff Kirsher
M: jeffrey.t.kirsher@intel.com
-P: Auke Kok
-M: auke-jan.h.kok@intel.com
+P: John Ronciak
+M: john.ronciak@intel.com
L: e1000-devel@lists.sourceforge.net
W: http://sourceforge.net/projects/e1000/
S: Supported
INTEL PRO/1000 GIGABIT ETHERNET SUPPORT
-P: Jeb Cramer
-M: cramerj@intel.com
-P: John Ronciak
-M: john.ronciak@intel.com
+P: Auke Kok
+M: auke-jan.h.kok@intel.com
P: Jesse Brandeburg
M: jesse.brandeburg@intel.com
P: Jeff Kirsher
M: jeffrey.t.kirsher@intel.com
-P: Auke Kok
-M: auke-jan.h.kok@intel.com
+P: John Ronciak
+M: john.ronciak@intel.com
L: e1000-devel@lists.sourceforge.net
W: http://sourceforge.net/projects/e1000/
S: Supported
M: jketreno@linux.intel.com
L: linux-wireless@vger.kernel.org
L: ipw2100-devel@lists.sourceforge.net
-L: http://lists.sourceforge.net/mailman/listinfo/ipw2100-devel
+W: http://lists.sourceforge.net/mailman/listinfo/ipw2100-devel
W: http://ipw2100.sourceforge.net
S: Supported
M: jketreno@linux.intel.com
L: linux-wireless@vger.kernel.org
L: ipw2100-devel@lists.sourceforge.net
-L: http://lists.sourceforge.net/mailman/listinfo/ipw2100-devel
+W: http://lists.sourceforge.net/mailman/listinfo/ipw2100-devel
W: http://ipw2200.sourceforge.net
S: Supported
MSI LAPTOP SUPPORT
P: Lennart Poettering
M: mzxreary@0pointer.de
-L: https://tango.0pointer.de/mailman/listinfo/s270-linux
+W: https://tango.0pointer.de/mailman/listinfo/s270-linux
W: http://0pointer.de/lennart/tchibo.html
S: Maintained
S: Maintained
NETXEN (1/10) GbE SUPPORT
-P: Amit S. Kale
-M: amitkale@netxen.com
+P: Dhananjay Phadke
+M: dhananjay@netxen.com
L: netdev@vger.kernel.org
W: http://www.netxen.com
S: Supported
S: Supported
SPIDERNET NETWORK DRIVER for CELL
-P: Linas Vepstas
-M: linas@austin.ibm.com
+P: Ishizaki Kou
+M: kou.ishizaki@toshiba.co.jp
+P: Jens Osterkamp
+M: jens@de.ibm.com
L: netdev@vger.kernel.org
S: Supported
W: http://user-mode-linux.sourceforge.net
S: Maintained
+USERSPACE I/O (UIO)
+P: Hans J. Koch
+M: hjk@linutronix.de
+P: Greg Kroah-Hartman
+M: gregkh@suse.de
+L: linux-kernel@vger.kernel.org
+S: Maintained
+
FAT/VFAT/MSDOS FILESYSTEM:
P: OGAWA Hirofumi
M: hirofumi@mail.parknet.co.jp
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 24
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc8
NAME = Arr Matey! A Hairy Bilge Rat!
# *DOCUMENTATION*
# Do not:
# o use make's built-in rules and variables
-# (this increases performance and avoid hard-to-debug behavour);
+# (this increases performance and avoids hard-to-debug behaviour);
# o print "Entering directory ...";
MAKEFLAGS += -rR --no-print-directory
ALLINCLUDE_ARCHS := $(SRCARCH)
endif
else
-#Allow user to specify only ALLSOURCE_PATHS on the command line, keeping existing behavour.
+#Allow user to specify only ALLSOURCE_PATHS on the command line, keeping existing behaviour.
ALLINCLUDE_ARCHS := $(ALLSOURCE_ARCHS)
endif
--- /dev/null
+i386
+x86_64
FP_UNPACK_SP(SB, &vb);
DR_c = DB_c;
DR_s = DB_s;
- DR_e = DB_e;
+ DR_e = DB_e + (1024 - 128);
DR_f = SB_f << (52 - 23);
goto pack_d;
}
source "drivers/dma/Kconfig"
+source "drivers/dca/Kconfig"
+
endmenu
source "fs/Kconfig"
-source "kernel/Kconfig.instrumentation"
+source "arch/arm/Kconfig.instrumentation"
source "arch/arm/Kconfig.debug"
--- /dev/null
+menuconfig INSTRUMENTATION
+ bool "Instrumentation Support"
+ default y
+ ---help---
+ Say Y here to get to see options related to performance measurement,
+ system-wide debugging, and testing. This option alone does not add any
+ kernel code.
+
+ If you say N, all options in this submenu will be skipped and
+ disabled. If you're trying to debug the kernel itself, go see the
+ Kernel Hacking menu.
+
+if INSTRUMENTATION
+
+config PROFILING
+ bool "Profiling support (EXPERIMENTAL)"
+ help
+ Say Y here to enable the extended profiling support mechanisms used
+ by profilers such as OProfile.
+
+config OPROFILE
+ tristate "OProfile system profiling (EXPERIMENTAL)"
+ depends on PROFILING && !UML
+ help
+ OProfile is a profiling system capable of profiling the
+	  whole system, including the kernel, kernel modules, libraries,
+ and applications.
+
+ If unsure, say N.
+
+config OPROFILE_ARMV6
+ bool
+ depends on OPROFILE && CPU_V6 && !SMP
+ default y
+ select OPROFILE_ARM11_CORE
+
+config OPROFILE_MPCORE
+ bool
+ depends on OPROFILE && CPU_V6 && SMP
+ default y
+ select OPROFILE_ARM11_CORE
+
+config OPROFILE_ARM11_CORE
+ bool
+
+config MARKERS
+ bool "Activate markers"
+ help
+ Place an empty function call at each marker site. Can be
+ dynamically changed for a probe function.
+
+endif # INSTRUMENTATION
#endif
};
+static struct i2c_board_info __initdata ek_i2c_devices[] = {
+ {
+ I2C_BOARD_INFO("ics1523", 0x26),
+ },
+ {
+ I2C_BOARD_INFO("dac3550", 0x4d),
+ }
+};
+
#define EK_FLASH_BASE AT91_CHIPSELECT_0
#define EK_FLASH_SIZE 0x200000
KEY(0,1,KEY_RIGHT),
KEY(0,2,KEY_LEFT),
KEY(0,3,KEY_DOWN),
- KEY(0,4,KEY_CENTER),
- KEY(0,5,KEY_0_5),
- KEY(1,0,KEY_SOFT2),
+ KEY(0,4,KEY_ENTER),
+ KEY(1,0,KEY_F10),
KEY(1,1,KEY_SEND),
KEY(1,2,KEY_END),
KEY(1,3,KEY_VOLUMEDOWN),
KEY(1,4,KEY_VOLUMEUP),
KEY(1,5,KEY_RECORD),
- KEY(2,0,KEY_SOFT1),
+ KEY(2,0,KEY_F9),
KEY(2,1,KEY_3),
KEY(2,2,KEY_6),
KEY(2,3,KEY_9),
- KEY(2,4,KEY_SHARP),
- KEY(2,5,KEY_2_5),
+ KEY(2,4,KEY_KPDOT),
KEY(3,0,KEY_BACK),
KEY(3,1,KEY_2),
KEY(3,2,KEY_5),
KEY(3,3,KEY_8),
KEY(3,4,KEY_0),
- KEY(3,5,KEY_HEADSETHOOK),
+ KEY(3,5,KEY_KPSLASH),
KEY(4,0,KEY_HOME),
KEY(4,1,KEY_1),
KEY(4,2,KEY_4),
KEY(4,3,KEY_7),
- KEY(4,4,KEY_STAR),
+ KEY(4,4,KEY_KPASTERISK),
KEY(4,5,KEY_POWER),
0
};
#include <asm/arch/omapfb.h>
#include <asm/arch/lcd_mipid.h>
-#include "../plat-omap/dsp/dsp_common.h"
-
#define ADS7846_PENDOWN_GPIO 15
static void __init omap_nokia770_init_irq(void)
out:
return ret;
}
+#else
+#define omap_dsp_init() do {} while (0)
#endif /* CONFIG_OMAP_DSP */
static void __init omap_nokia770_init(void)
KEY(0,1,KEY_RIGHT),
KEY(0,2,KEY_LEFT),
KEY(0,3,KEY_DOWN),
- KEY(0,4,KEY_CENTER),
- KEY(0,5,KEY_0_5),
- KEY(1,0,KEY_SOFT2),
+ KEY(0,4,KEY_ENTER),
+ KEY(1,0,KEY_F10),
KEY(1,1,KEY_SEND),
KEY(1,2,KEY_END),
KEY(1,3,KEY_VOLUMEDOWN),
KEY(1,4,KEY_VOLUMEUP),
KEY(1,5,KEY_RECORD),
- KEY(2,0,KEY_SOFT1),
+ KEY(2,0,KEY_F9),
KEY(2,1,KEY_3),
KEY(2,2,KEY_6),
KEY(2,3,KEY_9),
- KEY(2,4,KEY_SHARP),
- KEY(2,5,KEY_2_5),
+ KEY(2,4,KEY_KPDOT),
KEY(3,0,KEY_BACK),
KEY(3,1,KEY_2),
KEY(3,2,KEY_5),
KEY(3,3,KEY_8),
KEY(3,4,KEY_0),
- KEY(3,5,KEY_HEADSETHOOK),
+ KEY(3,5,KEY_KPSLASH),
KEY(4,0,KEY_HOME),
KEY(4,1,KEY_1),
KEY(4,2,KEY_4),
KEY(4,3,KEY_7),
- KEY(4,4,KEY_STAR),
+ KEY(4,4,KEY_KPASTERISK),
KEY(4,5,KEY_POWER),
0
};
SAVE(GAFR1_L); SAVE(GAFR1_U);
SAVE(GAFR2_L); SAVE(GAFR2_U);
- SAVE(ICMR);
+ SAVE(ICMR); ICMR = 0;
SAVE(CKEN);
SAVE(PSTR);
+
+ /* Clear GPIO transition detect bits */
+ GEDR0 = GEDR0; GEDR1 = GEDR1; GEDR2 = GEDR2;
}
static void pxa25x_cpu_pm_restore(unsigned long *sleep_save)
{
+ /* ensure not to come back here if it wasn't intended */
+ PSPR = 0;
+
/* restore registers */
RESTORE_GPLEVEL(0); RESTORE_GPLEVEL(1); RESTORE_GPLEVEL(2);
RESTORE(GPDR0); RESTORE(GPDR1); RESTORE(GPDR2);
RESTORE(GFER0); RESTORE(GFER1); RESTORE(GFER2);
RESTORE(PGSR0); RESTORE(PGSR1); RESTORE(PGSR2);
+ PSSR = PSSR_RDH | PSSR_PH;
+
RESTORE(CKEN);
+
+ ICLR = 0;
+ ICCR = 1;
RESTORE(ICMR);
RESTORE(PSTR);
}
pxa_cpu_save_sp:
@ preserve phys address of stack
mov r0, sp
- mov r2, lr
+ str lr, [sp, #-4]!
bl sleep_phys_sp
ldr r1, =sleep_save_sp
str r0, [r1]
- mov pc, r2
+ ldr pc, [sp], #4
/*
* pxa27x_cpu_suspend()
mar acc0, r2, r3
#endif
ldmfd sp!, {r4 - r12, pc} @ return to caller
-
-
* OP_SCALAR - this operation always operates in scalar mode
* OP_SD - the instruction exceptionally writes to a single precision result.
* OP_DD - the instruction exceptionally writes to a double precision result.
+ * OP_SM - the instruction exceptionally reads from a single precision operand.
*/
#define OP_SCALAR (1 << 0)
#define OP_SD (1 << 1)
#define OP_DD (1 << 1)
+#define OP_SM (1 << 2)
struct op {
u32 (* const fn)(int dd, int dn, int dm, u32 fpscr);
[FEXT_TO_IDX(FEXT_FCMPZ)] = { vfp_double_fcmpz, OP_SCALAR },
[FEXT_TO_IDX(FEXT_FCMPEZ)] = { vfp_double_fcmpez, OP_SCALAR },
[FEXT_TO_IDX(FEXT_FCVT)] = { vfp_double_fcvts, OP_SCALAR|OP_SD },
- [FEXT_TO_IDX(FEXT_FUITO)] = { vfp_double_fuito, OP_SCALAR },
- [FEXT_TO_IDX(FEXT_FSITO)] = { vfp_double_fsito, OP_SCALAR },
+ [FEXT_TO_IDX(FEXT_FUITO)] = { vfp_double_fuito, OP_SCALAR|OP_SM },
+ [FEXT_TO_IDX(FEXT_FSITO)] = { vfp_double_fsito, OP_SCALAR|OP_SM },
[FEXT_TO_IDX(FEXT_FTOUI)] = { vfp_double_ftoui, OP_SCALAR|OP_SD },
[FEXT_TO_IDX(FEXT_FTOUIZ)] = { vfp_double_ftouiz, OP_SCALAR|OP_SD },
[FEXT_TO_IDX(FEXT_FTOSI)] = { vfp_double_ftosi, OP_SCALAR|OP_SD },
u32 exceptions = 0;
unsigned int dest;
unsigned int dn = vfp_get_dn(inst);
- unsigned int dm = vfp_get_dm(inst);
+ unsigned int dm;
unsigned int vecitr, veclen, vecstride;
struct op *fop;
dest = vfp_get_dd(inst);
/*
+	 * f[us]ito takes an sN operand, not a dN operand.
+ */
+ if (fop->flags & OP_SM)
+ dm = vfp_get_sm(inst);
+ else
+ dm = vfp_get_dm(inst);
+
+ /*
* If destination bank is zero, vector length is always '1'.
* ARM DDI0100F C5.1.3, C5.3.2.
*/
bool
default y
+config HARDWARE_PM
+ def_bool y
+ depends on OPROFILE
+
source "init/Kconfig"
source "kernel/Kconfig.preempt"
*!
*! Functions exported: ds1302_readreg, ds1302_writereg, ds1302_init
*!
-*! $Log: ds1302.c,v $
-*! Revision 1.18 2005/01/24 09:11:26 mikaelam
-*! Minor changes to get DS1302 RTC chip driver to work
-*!
-*! Revision 1.17 2005/01/05 06:11:22 starvik
-*! No need to do local_irq_disable after local_irq_save.
-*!
-*! Revision 1.16 2004/12/13 12:21:52 starvik
-*! Added I/O and DMA allocators from Linux 2.4
-*!
-*! Revision 1.14 2004/08/24 06:48:43 starvik
-*! Whitespace cleanup
-*!
-*! Revision 1.13 2004/05/28 09:26:59 starvik
-*! Modified I2C initialization to work in 2.6.
-*!
-*! Revision 1.12 2004/05/14 07:58:03 starvik
-*! Merge of changes from 2.4
-*!
-*! Revision 1.10 2004/02/04 09:25:12 starvik
-*! Merge of Linux 2.6.2
-*!
-*! Revision 1.9 2003/07/04 08:27:37 starvik
-*! Merge of Linux 2.5.74
-*!
-*! Revision 1.8 2003/04/09 05:20:47 starvik
-*! Merge of Linux 2.5.67
-*!
-*! Revision 1.6 2003/01/09 14:42:51 starvik
-*! Merge of Linux 2.5.55
-*!
-*! Revision 1.4 2002/12/11 13:13:57 starvik
-*! Added arch/ to v10 specific includes
-*! Added fix from Linux 2.4 in serial.c (flush_to_flip_buffer)
-*!
-*! Revision 1.3 2002/11/20 11:56:10 starvik
-*! Merge of Linux 2.5.48
-*!
-*! Revision 1.2 2002/11/18 13:16:06 starvik
-*! Linux 2.5 port of latest 2.4 drivers
-*!
-*! Revision 1.15 2002/10/11 16:14:33 johana
-*! Added CONFIG_ETRAX_DS1302_TRICKLE_CHARGE and initial setting of the
-*! trcklecharge register.
-*!
-*! Revision 1.14 2002/10/10 12:15:38 magnusmn
-*! Added support for having the RST signal on bit g0
-*!
-*! Revision 1.13 2002/05/29 15:16:08 johana
-*! Removed unused variables.
-*!
-*! Revision 1.12 2002/04/10 15:35:25 johana
-*! Moved probe function closer to init function and marked it __init.
-*!
-*! Revision 1.11 2001/06/14 12:35:52 jonashg
-*! The ATA hack is back. It is unfortunately the only way to set g27 to output.
-*!
-*! Revision 1.9 2001/06/14 10:00:14 jonashg
-*! No need for tempudelay to be inline anymore (had to adjust the usec to
-*! loops conversion because of this to make it slow enough to be a udelay).
-*!
-*! Revision 1.8 2001/06/14 08:06:32 jonashg
-*! Made tempudelay delay usecs (well, just a tad more).
-*!
-*! Revision 1.7 2001/06/13 14:18:11 jonashg
-*! Only allow processes with SYS_TIME capability to set time and charge.
-*!
-*! Revision 1.6 2001/06/12 15:22:07 jonashg
-*! * Made init function __init.
-*! * Parameter to out_byte() is unsigned char.
-*! * The magic number 42 has got a name.
-*! * Removed comment about /proc (nothing is exported there).
-*!
-*! Revision 1.5 2001/06/12 14:35:13 jonashg
-*! Gave the module a name and added it to printk's.
-*!
-*! Revision 1.4 2001/05/31 14:53:40 jonashg
-*! Made tempudelay() inline so that the watchdog doesn't reset (see
-*! function comment).
-*!
-*! Revision 1.3 2001/03/26 16:03:06 bjornw
-*! Needs linux/config.h
-*!
-*! Revision 1.2 2001/03/20 19:42:00 bjornw
-*! Use the ETRAX prefix on the DS1302 options
-*!
-*! Revision 1.1 2001/03/20 09:13:50 magnusmn
-*! Linux 2.4 port
-*!
-*! Revision 1.10 2000/07/05 15:38:23 bjornw
-*! Dont update kernel time when a RTC_SET_TIME is done
-*!
-*! Revision 1.9 2000/03/02 15:42:59 macce
-*! * Hack to make RTC work on all 2100/2400
-*!
-*! Revision 1.8 2000/02/23 16:59:18 torbjore
-*! added setup of R_GEN_CONFIG when RTC is connected to the generic port.
-*!
-*! Revision 1.7 2000/01/17 15:51:43 johana
-*! Added RTC_SET_CHARGE ioctl to enable trickle charger.
-*!
-*! Revision 1.6 1999/10/27 13:19:47 bjornw
-*! Added update_xtime_from_cmos which reads back the updated RTC into the kernel.
-*! /dev/rtc calls it now.
-*!
-*! Revision 1.5 1999/10/27 12:39:37 bjornw
-*! Disabled superuser check. Anyone can now set the time.
-*!
-*! Revision 1.4 1999/09/02 13:27:46 pkj
-*! Added shadow for R_PORT_PB_CONFIG.
-*! Renamed port_g_shadow to port_g_data_shadow.
-*!
-*! Revision 1.3 1999/09/02 08:28:06 pkj
-*! Made it possible to select either port PB or the generic port for the RST
-*! signal line to the DS1302 RTC.
-*! Also make sure the RST bit is configured as output on Port PB (if used).
-*!
-*! Revision 1.2 1999/09/01 14:47:20 bjornw
-*! Added support for /dev/rtc operations with ioctl RD_TIME and SET_TIME to read
-*! and set the date. Register as major 121.
-*!
-*! Revision 1.1 1999/09/01 09:45:29 bjornw
-*! Implemented a DS1302 RTC driver.
-*!
-*!
*! ---------------------------------------------------------------------------
*!
-*! (C) Copyright 1999, 2000, 2001, 2002, 2003, 2004 Axis Communications AB, LUND, SWEDEN
-*!
-*! $Id: ds1302.c,v 1.18 2005/01/24 09:11:26 mikaelam Exp $
+*! (C) Copyright 1999-2007 Axis Communications AB, LUND, SWEDEN
*!
*!***************************************************************************/
#include <asm/rtc.h>
#include <asm/arch/io_interface_mux.h>
+#include "i2c.h"
+
#define RTC_MAJOR_NR 121 /* local major, change later */
static const char ds1302_name[] = "ds1302";
if (((interfaces[ioif].gpio_g_in & gpio_in_pins) != interfaces[ioif].gpio_g_in) ||
((interfaces[ioif].gpio_g_out & gpio_out_pins) != interfaces[ioif].gpio_g_out) ||
((interfaces[ioif].gpio_b & gpio_pb_pins) != interfaces[ioif].gpio_b)) {
+ local_irq_restore(flags);
printk(KERN_CRIT "cris_request_io_interface: Could not get required pins for interface %u\n",
ioif);
return -EBUSY;
*
* Ideas also taken from arch/arm.
*
- * Copyright (C) 2000, 2001 Axis Communications AB
+ * Copyright (C) 2000-2007 Axis Communications AB
*
* Authors: Bjorn Wesen (bjornw@axis.com)
*
*/
#define RESTART_CRIS_SYS(regs) regs->r10 = regs->orig_r10; regs->irp -= 2;
-int do_signal(int canrestart, sigset_t *oldset, struct pt_regs *regs);
+void do_signal(int canrestart, struct pt_regs *regs);
/*
- * Atomically swap in the new signal mask, and wait for a signal. Define
+ * Atomically swap in the new signal mask, and wait for a signal. Define
* dummy arguments to be able to reach the regs argument. (Note that this
* arrangement relies on old_sigset_t occupying one register.)
*/
-int
-sys_sigsuspend(old_sigset_t mask, long r11, long r12, long r13, long mof,
- long srp, struct pt_regs *regs)
+int sys_sigsuspend(old_sigset_t mask, long r11, long r12, long r13, long mof,
+ long srp, struct pt_regs *regs)
{
- sigset_t saveset;
-
mask &= _BLOCKABLE;
spin_lock_irq(¤t->sighand->siglock);
- saveset = current->blocked;
+ current->saved_sigmask = current->blocked;
siginitset(¤t->blocked, mask);
recalc_sigpending();
spin_unlock_irq(¤t->sighand->siglock);
-
- regs->r10 = -EINTR;
- while (1) {
- current->state = TASK_INTERRUPTIBLE;
- schedule();
- if (do_signal(0, &saveset, regs))
- /* We will get here twice: once to call the signal
- handler, then again to return from the
- sigsuspend system call. When calling the
- signal handler, R10 holds the signal number as
- set through do_signal. The sigsuspend call
- will return with the restored value set above;
- always -EINTR. */
- return regs->r10;
- }
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ set_thread_flag(TIF_RESTORE_SIGMASK);
+ return -ERESTARTNOHAND;
}
-/* Define dummy arguments to be able to reach the regs argument. (Note that
- * this arrangement relies on size_t occupying one register.)
- */
-int
-sys_rt_sigsuspend(sigset_t *unewset, size_t sigsetsize, long r12, long r13,
- long mof, long srp, struct pt_regs *regs)
-{
- sigset_t saveset, newset;
-
- /* XXX: Don't preclude handling different sized sigset_t's. */
- if (sigsetsize != sizeof(sigset_t))
- return -EINVAL;
-
- if (copy_from_user(&newset, unewset, sizeof(newset)))
- return -EFAULT;
- sigdelsetmask(&newset, ~_BLOCKABLE);
-
- spin_lock_irq(¤t->sighand->siglock);
- saveset = current->blocked;
- current->blocked = newset;
- recalc_sigpending();
- spin_unlock_irq(¤t->sighand->siglock);
-
- regs->r10 = -EINTR;
- while (1) {
- current->state = TASK_INTERRUPTIBLE;
- schedule();
- if (do_signal(0, &saveset, regs))
- /* We will get here twice: once to call the signal
- handler, then again to return from the
- sigsuspend system call. When calling the
- signal handler, R10 holds the signal number as
- set through do_signal. The sigsuspend call
- will return with the restored value set above;
- always -EINTR. */
- return regs->r10;
- }
-}
-
-int
-sys_sigaction(int sig, const struct old_sigaction __user *act,
- struct old_sigaction *oact)
+int sys_sigaction(int sig, const struct old_sigaction __user *act,
+ struct old_sigaction *oact)
{
struct k_sigaction new_ka, old_ka;
int ret;
return ret;
}
-int
-sys_sigaltstack(const stack_t *uss, stack_t __user *uoss)
+int sys_sigaltstack(const stack_t *uss, stack_t __user *uoss)
{
return do_sigaltstack(uss, uoss, rdusp());
}
/* TODO: the other ports use regs->orig_XX to disable syscall checks
* after this completes, but we don't use that mechanism. maybe we can
- * use it now ?
+ * use it now ?
*/
return err;
/* Define dummy arguments to be able to reach the regs argument. */
-asmlinkage int sys_sigreturn(long r10, long r11, long r12, long r13, long mof,
+asmlinkage int sys_sigreturn(long r10, long r11, long r12, long r13, long mof,
long srp, struct pt_regs *regs)
{
struct sigframe __user *frame = (struct sigframe *)rdusp();
current->blocked = set;
recalc_sigpending();
spin_unlock_irq(¤t->sighand->siglock);
-
+
if (restore_sigcontext(regs, &frame->sc))
goto badframe;
badframe:
force_sig(SIGSEGV, current);
return 0;
-}
+}
/* Define dummy arguments to be able to reach the regs argument. */
-asmlinkage int sys_rt_sigreturn(long r10, long r11, long r12, long r13,
+asmlinkage int sys_rt_sigreturn(long r10, long r11, long r12, long r13,
long mof, long srp, struct pt_regs *regs)
{
struct rt_sigframe __user *frame = (struct rt_sigframe *)rdusp();
current->blocked = set;
recalc_sigpending();
spin_unlock_irq(¤t->sighand->siglock);
-
+
if (restore_sigcontext(regs, &frame->uc.uc_mcontext))
goto badframe;
badframe:
force_sig(SIGSEGV, current);
return 0;
-}
+}
/*
* Set up a signal frame.
*/
-static int
-setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs, unsigned long mask)
+static int setup_sigcontext(struct sigcontext __user *sc,
+ struct pt_regs *regs, unsigned long mask)
{
int err = 0;
unsigned long usp = rdusp();
return err;
}
-/* figure out where we want to put the new signal frame - usually on the stack */
+/* Figure out where we want to put the new signal frame
+ * - usually on the stack. */
static inline void __user *
-get_sigframe(struct k_sigaction *ka, struct pt_regs * regs, size_t frame_size)
+get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size)
{
unsigned long sp = rdusp();
}
/* grab and setup a signal frame.
- *
+ *
* basically we stack a lot of state info, and arrange for the
* user-mode program to return to the kernel using either a
* trampoline which performs the syscall sigreturn, or a provided
* user-mode trampoline.
*/
-static void setup_frame(int sig, struct k_sigaction *ka,
- sigset_t *set, struct pt_regs * regs)
+static int setup_frame(int sig, struct k_sigaction *ka,
+ sigset_t *set, struct pt_regs *regs)
{
struct sigframe __user *frame;
unsigned long return_ip;
wrusp((unsigned long)frame);
- return;
+ return 0;
give_sigsegv:
force_sigsegv(sig, current);
+ return -EFAULT;
}
-static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
- sigset_t *set, struct pt_regs * regs)
+static int setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs *regs)
{
struct rt_sigframe __user *frame;
unsigned long return_ip;
/* trampoline - the desired return ip is the retcode itself */
return_ip = (unsigned long)&frame->retcode;
/* This is movu.w __NR_rt_sigreturn, r9; break 13; */
- err |= __put_user(0x9c5f, (short __user*)(frame->retcode+0));
- err |= __put_user(__NR_rt_sigreturn, (short __user*)(frame->retcode+2));
- err |= __put_user(0xe93d, (short __user*)(frame->retcode+4));
+ err |= __put_user(0x9c5f, (short __user *)(frame->retcode+0));
+ err |= __put_user(__NR_rt_sigreturn,
+ (short __user *)(frame->retcode+2));
+ err |= __put_user(0xe93d, (short __user *)(frame->retcode+4));
}
if (err)
/* Set up registers for signal handler */
- regs->irp = (unsigned long) ka->sa.sa_handler; /* what we enter NOW */
- regs->srp = return_ip; /* what we enter LATER */
- regs->r10 = sig; /* first argument is signo */
- regs->r11 = (unsigned long) &frame->info; /* second argument is (siginfo_t *) */
- regs->r12 = 0; /* third argument is unused */
-
- /* actually move the usp to reflect the stacked frame */
-
+ /* What we enter NOW */
+ regs->irp = (unsigned long) ka->sa.sa_handler;
+ /* What we enter LATER */
+ regs->srp = return_ip;
+ /* First argument is signo */
+ regs->r10 = sig;
+ /* Second argument is (siginfo_t *) */
+ regs->r11 = (unsigned long)&frame->info;
+ /* Third argument is unused */
+ regs->r12 = 0;
+
+ /* Actually move the usp to reflect the stacked frame */
wrusp((unsigned long)frame);
- return;
+ return 0;
give_sigsegv:
force_sigsegv(sig, current);
+ return -EFAULT;
}
/*
* OK, we're invoking a handler
- */
+ */
-static inline void
-handle_signal(int canrestart, unsigned long sig,
- siginfo_t *info, struct k_sigaction *ka,
- sigset_t *oldset, struct pt_regs * regs)
+static inline int handle_signal(int canrestart, unsigned long sig,
+ siginfo_t *info, struct k_sigaction *ka,
+ sigset_t *oldset, struct pt_regs *regs)
{
+ int ret;
+
/* Are we from a system call? */
if (canrestart) {
/* If so, check system call restarting.. */
switch (regs->r10) {
- case -ERESTART_RESTARTBLOCK:
- case -ERESTARTNOHAND:
- /* ERESTARTNOHAND means that the syscall should only be
- restarted if there was no handler for the signal, and since
- we only get here if there is a handler, we don't restart */
+ case -ERESTART_RESTARTBLOCK:
+ case -ERESTARTNOHAND:
+ /* ERESTARTNOHAND means that the syscall should
+ * only be restarted if there was no handler for
+ * the signal, and since we only get here if there
+ * is a handler, we don't restart */
+ regs->r10 = -EINTR;
+ break;
+ case -ERESTARTSYS:
+ /* ERESTARTSYS means to restart the syscall if
+ * there is no handler or the handler was
+ * registered with SA_RESTART */
+ if (!(ka->sa.sa_flags & SA_RESTART)) {
regs->r10 = -EINTR;
break;
-
- case -ERESTARTSYS:
- /* ERESTARTSYS means to restart the syscall if there is no
- handler or the handler was registered with SA_RESTART */
- if (!(ka->sa.sa_flags & SA_RESTART)) {
- regs->r10 = -EINTR;
- break;
- }
- /* fallthrough */
- case -ERESTARTNOINTR:
- /* ERESTARTNOINTR means that the syscall should be called again
- after the signal handler returns. */
- RESTART_CRIS_SYS(regs);
+ }
+ /* fallthrough */
+ case -ERESTARTNOINTR:
+ /* ERESTARTNOINTR means that the syscall should
+ * be called again after the signal handler returns. */
+ RESTART_CRIS_SYS(regs);
}
}
/* Set up the stack frame */
if (ka->sa.sa_flags & SA_SIGINFO)
- setup_rt_frame(sig, ka, info, oldset, regs);
+ ret = setup_rt_frame(sig, ka, info, oldset, regs);
else
- setup_frame(sig, ka, oldset, regs);
-
- if (ka->sa.sa_flags & SA_ONESHOT)
- ka->sa.sa_handler = SIG_DFL;
-
- spin_lock_irq(¤t->sighand->siglock);
- sigorsets(¤t->blocked,¤t->blocked,&ka->sa.sa_mask);
- if (!(ka->sa.sa_flags & SA_NODEFER))
- sigaddset(¤t->blocked,sig);
- recalc_sigpending();
- spin_unlock_irq(¤t->sighand->siglock);
+ ret = setup_frame(sig, ka, oldset, regs);
+
+ if (ret == 0) {
+ spin_lock_irq(¤t->sighand->siglock);
+ sigorsets(¤t->blocked, ¤t->blocked,
+ &ka->sa.sa_mask);
+ if (!(ka->sa.sa_flags & SA_NODEFER))
+ sigaddset(¤t->blocked, sig);
+ recalc_sigpending();
+ spin_unlock_irq(¤t->sighand->siglock);
+ }
+ return ret;
}
/*
* mode below.
*/
-int do_signal(int canrestart, sigset_t *oldset, struct pt_regs *regs)
+void do_signal(int canrestart, struct pt_regs *regs)
{
siginfo_t info;
int signr;
struct k_sigaction ka;
+ sigset_t *oldset;
/*
* We want the common case to go fast, which
* if so.
*/
if (!user_mode(regs))
- return 1;
+ return;
- if (!oldset)
+ if (test_thread_flag(TIF_RESTORE_SIGMASK))
+ oldset = ¤t->saved_sigmask;
+ else
oldset = ¤t->blocked;
signr = get_signal_to_deliver(&info, &ka, regs, NULL);
if (signr > 0) {
/* Whee! Actually deliver the signal. */
- handle_signal(canrestart, signr, &info, &ka, oldset, regs);
- return 1;
+ if (handle_signal(canrestart, signr, &info, &ka,
+ oldset, regs)) {
+ /* a signal was successfully delivered; the saved
+ * sigmask will have been stored in the signal frame,
+ * and will be restored by sigreturn, so we can simply
+ * clear the TIF_RESTORE_SIGMASK flag */
+ if (test_thread_flag(TIF_RESTORE_SIGMASK))
+ clear_thread_flag(TIF_RESTORE_SIGMASK);
+ }
+ return;
}
/* Did we come from a system call? */
regs->r10 == -ERESTARTNOINTR) {
RESTART_CRIS_SYS(regs);
}
- if (regs->r10 == -ERESTART_RESTARTBLOCK){
+ if (regs->r10 == -ERESTART_RESTARTBLOCK) {
regs->r10 = __NR_restart_syscall;
regs->irp -= 2;
}
}
- return 0;
+
+ /* if there's no signal to deliver, we just put the saved sigmask
+ * back */
+ if (test_thread_flag(TIF_RESTORE_SIGMASK)) {
+ clear_thread_flag(TIF_RESTORE_SIGMASK);
+ sigprocmask(SIG_SETMASK, ¤t->saved_sigmask, NULL);
+ }
}
#include <linux/swap.h>
#include <linux/sched.h>
#include <linux/init.h>
+#include <linux/vmstat.h>
#include <asm/arch/svinto.h>
#include <asm/types.h>
#include <asm/signal.h>
*/
#include <asm-generic/vmlinux.lds.h>
-
+#include <asm/page.h>
+
jiffies = jiffies_64;
SECTIONS
{
_stext = .;
__stext = .;
.text : {
- *(.text)
+ TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
*(.fixup)
__edata = . ; /* End of data section */
_edata = . ;
- . = ALIGN(8192); /* init_task and stack, must be aligned */
+ . = ALIGN(PAGE_SIZE); /* init_task and stack, must be aligned */
.data.init_task : { *(.data.init_task) }
- . = ALIGN(8192); /* Init code and data */
+ . = ALIGN(PAGE_SIZE); /* Init code and data */
__init_begin = .;
.init.text : {
_sinittext = .;
__setup_end = .;
.initcall.init : {
__initcall_start = .;
- *(.initcall1.init);
- *(.initcall2.init);
- *(.initcall3.init);
- *(.initcall4.init);
- *(.initcall5.init);
- *(.initcall6.init);
- *(.initcall7.init);
+ INITCALLS
__initcall_end = .;
}
__initramfs_start = .;
*(.init.ramfs)
__initramfs_end = .;
- /* We fill to the next page, so we can discard all init
- pages without needing to consider what payload might be
- appended to the kernel image. */
- FILL (0);
- . = ALIGN (8192);
}
#endif
-
__vmlinux_end = .; /* last address of the physical file */
- __init_end = .;
+
+ /*
+ * We fill to the next page, so we can discard all init
+ * pages without needing to consider what payload might be
+ * appended to the kernel image.
+ */
+ . = ALIGN(PAGE_SIZE);
+
+ __init_end = .;
__data_end = . ; /* Move to _edata ? */
__bss_start = .; /* BSS */
case LDFA_OP:
case LDFCCLR_OP:
case LDFCNC_OP:
- case LDF_IMM_OP:
- case LDFA_IMM_OP:
- case LDFCCLR_IMM_OP:
- case LDFCNC_IMM_OP:
if (u.insn.x)
ret = emulate_load_floatpair(ifa, u.insn, regs);
else
ret = emulate_load_float(ifa, u.insn, regs);
break;
+ case LDF_IMM_OP:
+ case LDFA_IMM_OP:
+ case LDFCCLR_IMM_OP:
+ case LDFCNC_IMM_OP:
+ ret = emulate_load_float(ifa, u.insn, regs);
+ break;
+
case STF_OP:
case STF_IMM_OP:
ret = emulate_store_float(ifa, u.insn, regs);
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (c) 2004-2005 Silicon Graphics, Inc. All Rights Reserved.
+ * Copyright (c) 2004-2007 Silicon Graphics, Inc. All Rights Reserved.
*/
* PIO read fails, the MCA handler will force the error to look
* corrected and vector to the xp_error_PIOR which will return an error.
*
+ * The definition of "consumption" and the time it takes for an MCA
+ * to surface is processor implementation specific. This code
+ * is sufficient on Itanium through the Montvale processor family.
+ * It may need to be adjusted for future processor implementations.
+ *
* extern int xp_nofault_PIOR(void *remote_register);
*/
mov r8=r0 // Stage a success return value
ld8.acq r9=[r32];; // PIO Read the specified register
adds r9=1,r9;; // Add to force consumption
- or r9=r9,r9;; // Or to force consumption
+ srlz.i;; // Allow time for MCA to surface
br.ret.sptk.many b0;; // Return success
.global xp_error_PIOR
xp_error_PIOR:
mov r8=1 // Return value of 1
br.ret.sptk.many b0;; // Return failure
-
select SYS_SUPPORTS_LITTLE_ENDIAN
select SSB
select SSB_DRIVER_MIPS
+ select SSB_DRIVER_EXTIF
+ select SSB_PCICORE_HOSTMODE if PCI
select GENERIC_GPIO
select SYS_HAS_EARLY_PRINTK
select CFE
menu "CPU selection"
-source "kernel/time/Kconfig"
-
choice
prompt "CPU type"
default CPU_R4X00
performance should round up your number of processors to the next
power of two.
+source "kernel/time/Kconfig"
+
#
# Timer Interrupt Frequency Configuration
#
/*
* BRIEF MODULE DESCRIPTION
- * Alchemy/AMD Au1x00 pci support.
+ * Alchemy/AMD Au1x00 PCI support.
*
- * Copyright 2001,2002,2003 MontaVista Software Inc.
+ * Copyright 2001-2003, 2007 MontaVista Software Inc.
* Author: MontaVista Software, Inc.
* ppopov@mvista.com or source@mvista.com
*
static int __init au1x_pci_setup(void)
{
+ extern void au1x_pci_cfg_init(void);
+
#if defined(CONFIG_SOC_AU1500) || defined(CONFIG_SOC_AU1550)
virt_io_addr = (unsigned long)ioremap(Au1500_PCI_IO_START,
Au1500_PCI_IO_END - Au1500_PCI_IO_START + 1);
set_io_port_base(virt_io_addr);
#endif
+ au1x_pci_cfg_init();
+
register_pci_controller(&au1x_controller);
return 0;
}
#include <linux/io.h>
#include <linux/serial_reg.h>
+#include <cobalt.h>
+
#define UART_BASE ((void __iomem *)CKSEG1ADDR(0x1c800000))
void prom_putchar(char c)
{
+ if (cobalt_board_id <= COBALT_BRD_ID_QUBE1)
+ return;
+
while (!(readb(UART_BASE + UART_LSR) & UART_LSR_THRE))
;
* kernel load address. This is needed because this platform does
 * not have an ELF loader yet.
*/
- __INIT
+FEXPORT(__kernel_entry)
+ j kernel_entry
#endif
__INIT_REFOK
static void __init bootmem_init(void)
{
- unsigned long init_begin, reserved_end;
+ unsigned long reserved_end;
unsigned long mapstart = ~0UL;
unsigned long bootmap_size;
int i;
min_low_pfn, max_low_pfn);
- init_begin = PFN_UP(__pa_symbol(&__init_begin));
for (i = 0; i < boot_mem_map.nr_map; i++) {
unsigned long start, end;
end = PFN_DOWN(boot_mem_map.map[i].addr
+ boot_mem_map.map[i].size);
- if (start <= init_begin)
- start = init_begin;
+ if (start <= min_low_pfn)
+ start = min_low_pfn;
if (start >= end)
continue;
return 1;
/*
- * I don't have erratas for newer R4400 so be paranoid.
+ * we assume newer revisions are ok
*/
- return 1;
+ return 0;
}
return 0;
MKLASATIMG = mklasatimg
MKLASATIMG_ARCH = mq2,mqpro,sp100,sp200
-KERNEL_IMAGE = $(TOPDIR)/vmlinux
+KERNEL_IMAGE = vmlinux
KERNEL_START = $(shell $(NM) $(KERNEL_IMAGE) | grep " _text" | cut -f1 -d\ )
KERNEL_ENTRY = $(shell $(NM) $(KERNEL_IMAGE) | grep kernel_entry | cut -f1 -d\ )
-LDSCRIPT= -L$(obj) -Tromscript.normal
+LDSCRIPT= -L$(srctree)/$(src) -Tromscript.normal
HEAD_DEFINES := -D_kernel_start=0x$(KERNEL_START) \
-D_kernel_entry=0x$(KERNEL_ENTRY) \
-D TIMESTAMP=$(shell date +%s)
$(obj)/head.o: $(obj)/head.S $(KERNEL_IMAGE)
- $(CC) -fno-pic $(HEAD_DEFINES) -I$(TOPDIR)/include -c -o $@ $<
+ $(CC) -fno-pic $(HEAD_DEFINES) $(LINUXINCLUDE) -c -o $@ $<
OBJECTS = head.o kImage.o
void __init prom_free_prom_memory(void)
{
-#if 0 /* for now ... */
unsigned long addr;
int i;
free_init_pages("prom memory",
addr, addr + boot_mem_map.map[i].size);
}
-#endif
}
static void mips_machine_restart(char *command)
{
- unsigned int __iomem *softres_reg = ioremap(SOFTRES_REG, sizeof(unsigned int));
+ unsigned int __iomem *softres_reg =
+ ioremap(SOFTRES_REG, sizeof(unsigned int));
- writew(GORESET, softres_reg);
+ __raw_writel(GORESET, softres_reg);
}
static void mips_machine_halt(void)
{
- unsigned int __iomem *softres_reg = ioremap(SOFTRES_REG, sizeof(unsigned int));
+ unsigned int __iomem *softres_reg =
+ ioremap(SOFTRES_REG, sizeof(unsigned int));
- writew(GORESET, softres_reg);
+ __raw_writel(GORESET, softres_reg);
}
#if defined(CONFIG_MIPS_ATLAS)
/* Check PCI clock */
{
unsigned int __iomem *jmpr_p = (unsigned int *) ioremap(MALTA_JMPRS_REG, sizeof(unsigned int));
- int jmpr = (readw(jmpr_p) >> 2) & 0x07;
+ int jmpr = (__raw_readl(jmpr_p) >> 2) & 0x07;
static const int pciclocks[] __initdata = {
33, 20, 25, 30, 12, 16, 37, 10
};
/* ignore region specifiers */
gfp &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);
-#ifdef CONFIG_ZONE_DMA32
+#ifdef CONFIG_ZONE_DMA
if (dev == NULL)
gfp |= __GFP_DMA;
else if (dev->coherent_dma_mask < DMA_BIT_MASK(24))
int __init pcibios_map_irq(const struct pci_dev *dev, u8 slot, u8 pin)
{
- if (cobalt_board_id < COBALT_BRD_ID_QUBE2)
+ if (cobalt_board_id <= COBALT_BRD_ID_QUBE1)
return irq_tab_qube1[slot];
if (cobalt_board_id == COBALT_BRD_ID_RAQ2)
/*
* BRIEF MODULE DESCRIPTION
- * Alchemy/AMD Au1x00 pci support.
+ * Alchemy/AMD Au1x00 PCI support.
*
- * Copyright 2001,2002,2003 MontaVista Software Inc.
+ * Copyright 2001-2003, 2007 MontaVista Software Inc.
* Author: MontaVista Software, Inc.
* ppopov@mvista.com or source@mvista.com
*
write_c0_pagemask(old_pagemask);
}
-struct vm_struct *pci_cfg_vm;
+static struct vm_struct *pci_cfg_vm;
static int pci_cfg_wired_entry;
-static int first_cfg = 1;
-unsigned long last_entryLo0, last_entryLo1;
+static unsigned long last_entryLo0, last_entryLo1;
+
+/*
+ * We can't ioremap the entire pci config space because it's too large.
+ * Nor can we call ioremap dynamically because some device drivers use
+ * the PCI config routines from within interrupt handlers and that
+ * becomes a problem in get_vm_area(). We use one wired TLB to handle
+ * all config accesses for all busses.
+ */
+void __init au1x_pci_cfg_init(void)
+{
+ /* Reserve a wired entry for PCI config accesses */
+ pci_cfg_vm = get_vm_area(0x2000, VM_IOREMAP);
+ if (!pci_cfg_vm)
+ panic(KERN_ERR "PCI unable to get vm area\n");
+ pci_cfg_wired_entry = read_c0_wired();
+ add_wired_entry(0, 0, (unsigned long)pci_cfg_vm->addr, PM_4K);
+ last_entryLo0 = last_entryLo1 = 0xffffffff;
+}
static int config_access(unsigned char access_type, struct pci_bus *bus,
unsigned int dev_fn, unsigned char where,
Au1500_PCI_STATCMD);
au_sync_udelay(1);
- /*
- * We can't ioremap the entire pci config space because it's
- * too large. Nor can we call ioremap dynamically because some
- * device drivers use the pci config routines from within
- * interrupt handlers and that becomes a problem in get_vm_area().
- * We use one wired tlb to handle all config accesses for all
- * busses. To improve performance, if the current device
- * is the same as the last device accessed, we don't touch the
- * tlb.
- */
- if (first_cfg) {
- /* reserve a wired entry for pci config accesses */
- first_cfg = 0;
- pci_cfg_vm = get_vm_area(0x2000, VM_IOREMAP);
- if (!pci_cfg_vm)
- panic(KERN_ERR "PCI unable to get vm area\n");
- pci_cfg_wired_entry = read_c0_wired();
- add_wired_entry(0, 0, (unsigned long)pci_cfg_vm->addr, PM_4K);
- last_entryLo0 = last_entryLo1 = 0xffffffff;
- }
-
/* Allow board vendors to implement their own off-chip idsel.
* If it doesn't succeed, may as well bail out at this point.
*/
/* page boundary */
cfg_base = cfg_base & PAGE_MASK;
+ /*
+ * To improve performance, if the current device is the same as
+ * the last device accessed, we don't touch the TLB.
+ */
entryLo0 = (6 << 26) | (cfg_base >> 6) | (2 << 3) | 7;
entryLo1 = (6 << 26) | (cfg_base >> 6) | (0x1000 >> 6) | (2 << 3) | 7;
-
if ((entryLo0 != last_entryLo0) || (entryLo1 != last_entryLo1)) {
mod_wired_entry(pci_cfg_wired_entry, entryLo0, entryLo1,
(unsigned long)pci_cfg_vm->addr, PM_4K);
mace_pci_read_config(struct pci_bus *bus, unsigned int devfn,
int reg, int size, u32 *val)
{
+ u32 control = mace->pci.control;
+
+ /* disable master aborts interrupts during config read */
+ mace->pci.control = control & ~MACEPCI_CONTROL_MAR_INT;
mace->pci.config_addr = mkaddr(bus, devfn, reg);
switch (size) {
case 1:
*val = mace->pci.config_data.l;
break;
}
+ /* ack possible master abort */
+ mace->pci.error &= ~MACEPCI_ERROR_MASTER_ABORT;
+ mace->pci.control = control;
DPRINTK("read%d: reg=%08x,val=%02x\n", size * 8, reg, *val);
.iommu = 0,
.mem_offset = MACE_PCI_MEM_OFFSET,
.io_offset = 0,
+ .io_map_base = CKSEG1ADDR(MACEPCI_LOW_IO),
};
static int __init mace_init(void)
BUG_ON(request_irq(MACE_PCI_BRIDGE_IRQ, macepci_error, 0,
"MACE PCI error", NULL));
- iomem_resource = mace_pci_mem_resource;
+ /* extend memory resources */
+ iomem_resource.end = mace_pci_mem_resource.end;
ioport_resource = mace_pci_io_resource;
register_pci_controller(&mace_pci_controller);
#include <linux/kernel_stat.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
-#include <linux/module.h>
#include <asm/bootinfo.h>
#include <asm/cpu.h>
return read_c0_count2();
}
+static struct clocksource pnx_clocksource = {
+ .name = "pnx8xxx",
+ .rating = 200,
+ .read = hpt_read,
+ .flags = CLOCK_SOURCE_IS_CONTINUOUS,
+};
+
static void timer_ack(void)
{
write_c0_compare(cpj);
}
+static irqreturn_t pnx8xxx_timer_interrupt(int irq, void *dev_id)
+{
+ struct clock_event_device *c = dev_id;
+
+ /* clear MATCH, signal the event */
+ c->event_handler(c);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction pnx8xxx_timer_irq = {
+ .handler = pnx8xxx_timer_interrupt,
+ .flags = IRQF_DISABLED | IRQF_PERCPU,
+ .name = "pnx8xxx_timer",
+};
+
+static irqreturn_t monotonic_interrupt(int irq, void *dev_id)
+{
+ /* Timer 2 clear interrupt */
+ write_c0_compare2(-1);
+ return IRQ_HANDLED;
+}
+
+static struct irqaction monotonic_irqaction = {
+ .handler = monotonic_interrupt,
+ .flags = IRQF_DISABLED,
+ .name = "Monotonic timer",
+};
+
+static int pnx8xxx_set_next_event(unsigned long delta,
+ struct clock_event_device *evt)
+{
+ write_c0_compare(delta);
+ return 0;
+}
+
+static struct clock_event_device pnx8xxx_clockevent = {
+ .name = "pnx8xxx_clockevent",
+ .features = CLOCK_EVT_FEAT_ONESHOT,
+ .set_next_event = pnx8xxx_set_next_event,
+};
+
/*
* plat_time_init() - it does the following things:
*
__init void plat_time_init(void)
{
+ unsigned int configPR;
unsigned int n;
unsigned int m;
unsigned int p;
unsigned int pow2p;
+ clockevents_register_device(&pnx8xxx_clockevent);
+ clocksource_register(&pnx_clocksource);
+
+ setup_irq(PNX8550_INT_TIMER1, &pnx8xxx_timer_irq);
+ setup_irq(PNX8550_INT_TIMER2, &monotonic_irqaction);
+
+ /* Timer 1 start */
+ configPR = read_c0_config7();
+ configPR &= ~0x00000008;
+ write_c0_config7(configPR);
+
+ /* Timer 2 start */
+ configPR = read_c0_config7();
+ configPR &= ~0x00000010;
+ write_c0_config7(configPR);
+
+ /* Timer 3 stop */
+ configPR = read_c0_config7();
+ configPR |= 0x00000020;
+ write_c0_config7(configPR);
+
+
/* PLL0 sets MIPS clock (PLL1 <=> TM1, PLL6 <=> TM2, PLL5 <=> mem) */
/* (but only if CLK_MIPS_CTL select value [bits 3:1] is 1: FIXME) */
write_c0_count2(0);
write_c0_compare2(0xffffffff);
- clocksource_mips.read = hpt_read;
- mips_timer_ack = timer_ack;
-}
-
-static irqreturn_t monotonic_interrupt(int irq, void *dev_id)
-{
- /* Timer 2 clear interrupt */
- write_c0_compare2(-1);
- return IRQ_HANDLED;
}
-static struct irqaction monotonic_irqaction = {
- .handler = monotonic_interrupt,
- .flags = IRQF_DISABLED,
- .name = "Monotonic timer",
-};
-void __init plat_timer_setup(struct irqaction *irq)
-{
- int configPR;
-
- setup_irq(PNX8550_INT_TIMER1, irq);
- setup_irq(PNX8550_INT_TIMER2, &monotonic_irqaction);
-
- /* Timer 1 start */
- configPR = read_c0_config7();
- configPR &= ~0x00000008;
- write_c0_config7(configPR);
-
- /* Timer 2 start */
- configPR = read_c0_config7();
- configPR &= ~0x00000010;
- write_c0_config7(configPR);
-
- /* Timer 3 stop */
- configPR = read_c0_config7();
- configPR |= 0x00000020;
- write_c0_config7(configPR);
-}
crime_int = crime->istat & crime_mask;
irq = MACE_VID_IN1_IRQ + __ffs(crime_int);
- crime_int = 1 << irq;
if (crime_int & CRIME_MACEISA_INT_MASK) {
unsigned long mace_int = mace->perif.ctrl.istat;
#include <asm/ip32/mace.h>
#include <asm/ip32/ip32_ints.h>
-/*
- * .iobase isn't a constant (in the sense of C) so we fill it in at runtime.
- */
-#define MACE_PORT(int) \
+#define MACEISA_SERIAL1_OFFS offsetof(struct sgi_mace, isa.serial1)
+#define MACEISA_SERIAL2_OFFS offsetof(struct sgi_mace, isa.serial2)
+
+#define MACE_PORT(offset,_irq) \
{ \
- .irq = int, \
+ .mapbase = MACE_BASE + offset, \
+ .irq = _irq, \
.uartclk = 1843200, \
.iotype = UPIO_MEM, \
- .flags = UPF_SKIP_TEST, \
+ .flags = UPF_SKIP_TEST|UPF_IOREMAP, \
.regshift = 8, \
}
static struct plat_serial8250_port uart8250_data[] = {
- MACE_PORT(MACEISA_SERIAL1_IRQ),
- MACE_PORT(MACEISA_SERIAL2_IRQ),
+ MACE_PORT(MACEISA_SERIAL1_OFFS, MACEISA_SERIAL1_IRQ),
+ MACE_PORT(MACEISA_SERIAL2_OFFS, MACEISA_SERIAL2_IRQ),
{ },
};
static int __init uart8250_init(void)
{
- uart8250_data[0].membase = (void __iomem *) &mace->isa.serial1;
- uart8250_data[1].membase = (void __iomem *) &mace->isa.serial2;
-
return platform_device_register(&uart8250_device);
}
printk(KERN_WARNING "seeprom: bad checksum.\n");
}
for (i = 0; i < 2; i++) {
- unsigned int slot = TX4938_PCIC_IDSEL_AD_TO_SLOT(31 - i);
- unsigned int id = (1 << 8) | PCI_DEVFN(slot, 0); /* bus 1 */
+ unsigned int id =
+ TXX9_IRQ_BASE + (i ? TX4938_IR_ETH1 : TX4938_IR_ETH0);
struct platform_device *pdev;
if (!(tx4938_ccfgptr->pcfg &
(i ? TX4938_PCFG_ETH1_SEL : TX4938_PCFG_ETH0_SEL)))
* This file adds the header file glue so that the shared files
* flatdevicetree.[ch] can compile and work in the powerpc bootwrapper.
*
- * strncmp & strchr copied from <file:lib/strings.c>
+ * strncmp & strchr copied from <file:lib/string.c>
* Copyright (C) 1991, 1992 Linus Torvalds
*
* Maintained by: Mark A. Greer <mgreer@mvista.com>
unsigned long flags;
struct scatterlist *s, *outs, *segstart;
int outcount, incount, i;
+ unsigned int align;
unsigned long handle;
BUG_ON(direction == DMA_NONE);
/* Allocate iommu entries for that segment */
vaddr = (unsigned long) sg_virt(s);
npages = iommu_num_pages(vaddr, slen);
- entry = iommu_range_alloc(tbl, npages, &handle, mask >> IOMMU_PAGE_SHIFT, 0);
+ align = 0;
+ if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && slen >= PAGE_SIZE &&
+ (vaddr & ~PAGE_MASK) == 0)
+ align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+ entry = iommu_range_alloc(tbl, npages, &handle,
+ mask >> IOMMU_PAGE_SHIFT, align);
DBG(" - vaddr: %lx, size: %lx\n", vaddr, slen);
{
dma_addr_t dma_handle = DMA_ERROR_CODE;
unsigned long uaddr;
- unsigned int npages;
+ unsigned int npages, align;
BUG_ON(direction == DMA_NONE);
npages = iommu_num_pages(uaddr, size);
if (tbl) {
+ align = 0;
+ if (IOMMU_PAGE_SHIFT < PAGE_SHIFT && size >= PAGE_SIZE &&
+ ((unsigned long)vaddr & ~PAGE_MASK) == 0)
+ align = PAGE_SHIFT - IOMMU_PAGE_SHIFT;
+
dma_handle = iommu_alloc(tbl, vaddr, npages, direction,
- mask >> IOMMU_PAGE_SHIFT, 0);
+ mask >> IOMMU_PAGE_SHIFT, align);
if (dma_handle == DMA_ERROR_CODE) {
if (printk_ratelimit()) {
printk(KERN_INFO "iommu_alloc failed, "
prom_printf("fixup_device_tree_efika: ",
"skipped entry %x - setprop error\n", i);
}
+
+ /* Make sure ethernet mdio bus node exists */
+ node = call_prom("finddevice", 1, 1, ADDR("/builtin/mdio"));
+ if (!PHANDLE_VALID(node)) {
+ prom_printf("Adding Ethernet MDIO node\n");
+ call_prom("interpret", 1, 1,
+ " s\" /builtin\" find-device"
+ " new-device"
+ " 1 encode-int s\" #address-cells\" property"
+ " 0 encode-int s\" #size-cells\" property"
+ " s\" mdio\" 2dup device-name device-type"
+ " s\" mpc5200b-fec-phy\" encode-string"
+ " s\" compatible\" property"
+ " 0xf0003000 0x400 reg"
+ " 0x2 encode-int"
+ " 0x5 encode-int encode+"
+ " 0x3 encode-int encode+"
+ " s\" interrupts\" property"
+ " finish-device");
+	}
+
+	/* Make sure ethernet phy device node exists */
+ node = call_prom("finddevice", 1, 1, ADDR("/builtin/mdio/ethernet-phy"));
+ if (!PHANDLE_VALID(node)) {
+ prom_printf("Adding Ethernet PHY node\n");
+ call_prom("interpret", 1, 1,
+ " s\" /builtin/mdio\" find-device"
+ " new-device"
+ " s\" ethernet-phy\" device-name"
+ " 0x10 encode-int s\" reg\" property"
+ " my-self"
+ " ihandle>phandle"
+ " finish-device"
+ " s\" /builtin/ethernet\" find-device"
+ " encode-int"
+ " s\" phy-handle\" property"
+ " device-end");
+ }
+
}
#else
#define fixup_device_tree_efika()
create_shadowed_slbe(VMALLOC_START, mmu_kernel_ssize, vflags, 1);
+ slb_shadow_clear(2);
+
/* We don't bolt the stack for the time being - we're in boot,
* so the stack is in the bolted segment. By the time it goes
* elsewhere, we'll call _switch() which will bolt in the new
but also at lower core voltage.
endmenu
+
+config OPROFILE_CELL
+ def_bool y
+ depends on PPC_CELL_NATIVE && (OPROFILE = m || OPROFILE = y)
+
spu-manage-$(CONFIG_PPC_CELL_NATIVE) += spu_manage.o
obj-$(CONFIG_SPU_BASE) += spu_callbacks.o spu_base.o \
+ spu_notify.o \
spu_syscalls.o spu_fault.o \
$(spu-priv1-y) \
$(spu-manage-y) \
--- /dev/null
+/*
+ * Move OProfile dependencies from spufs module to the kernel so it
+ * can run on non-cell PPC.
+ *
+ * Copyright (C) IBM 2005
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#undef DEBUG
+
+#include <linux/module.h>
+#include <asm/spu.h>
+#include "spufs/spufs.h"
+
+static BLOCKING_NOTIFIER_HEAD(spu_switch_notifier);
+
+void spu_switch_notify(struct spu *spu, struct spu_context *ctx)
+{
+ blocking_notifier_call_chain(&spu_switch_notifier,
+ ctx ? ctx->object_id : 0, spu);
+}
+EXPORT_SYMBOL_GPL(spu_switch_notify);
+
+int spu_switch_event_register(struct notifier_block *n)
+{
+ int ret;
+ ret = blocking_notifier_chain_register(&spu_switch_notifier, n);
+ if (!ret)
+ notify_spus_active();
+ return ret;
+}
+EXPORT_SYMBOL_GPL(spu_switch_event_register);
+
+int spu_switch_event_unregister(struct notifier_block *n)
+{
+ return blocking_notifier_chain_unregister(&spu_switch_notifier, n);
+}
+EXPORT_SYMBOL_GPL(spu_switch_event_unregister);
+
+void spu_set_profile_private_kref(struct spu_context *ctx,
+ struct kref *prof_info_kref,
+ void (* prof_info_release) (struct kref *kref))
+{
+ ctx->prof_priv_kref = prof_info_kref;
+ ctx->prof_priv_release = prof_info_release;
+}
+EXPORT_SYMBOL_GPL(spu_set_profile_private_kref);
+
+void *spu_get_profile_private_kref(struct spu_context *ctx)
+{
+ return ctx->prof_priv_kref;
+}
+EXPORT_SYMBOL_GPL(spu_get_profile_private_kref);
+
return ret;
}
+void notify_spus_active(void)
+{
+ struct spufs_calls *calls;
+
+ calls = spufs_calls_get();
+ if (!calls)
+ return;
+
+ calls->notify_spus_active();
+ spufs_calls_put(calls);
+
+ return;
+}
+
int register_spu_syscalls(struct spufs_calls *calls)
{
if (spufs_calls)
spu_release(ctx);
}
-void spu_set_profile_private_kref(struct spu_context *ctx,
- struct kref *prof_info_kref,
- void ( * prof_info_release) (struct kref *kref))
-{
- ctx->prof_priv_kref = prof_info_kref;
- ctx->prof_priv_release = prof_info_release;
-}
-EXPORT_SYMBOL_GPL(spu_set_profile_private_kref);
-
-void *spu_get_profile_private_kref(struct spu_context *ctx)
-{
- return ctx->prof_priv_kref;
-}
-EXPORT_SYMBOL_GPL(spu_get_profile_private_kref);
-
-
return rval;
}
-static BLOCKING_NOTIFIER_HEAD(spu_switch_notifier);
-
-void spu_switch_notify(struct spu *spu, struct spu_context *ctx)
-{
- blocking_notifier_call_chain(&spu_switch_notifier,
- ctx ? ctx->object_id : 0, spu);
-}
-
-static void notify_spus_active(void)
+void do_notify_spus_active(void)
{
int node;
}
}
-int spu_switch_event_register(struct notifier_block * n)
-{
- int ret;
- ret = blocking_notifier_chain_register(&spu_switch_notifier, n);
- if (!ret)
- notify_spus_active();
- return ret;
-}
-EXPORT_SYMBOL_GPL(spu_switch_event_register);
-
-int spu_switch_event_unregister(struct notifier_block * n)
-{
- return blocking_notifier_chain_unregister(&spu_switch_notifier, n);
-}
-EXPORT_SYMBOL_GPL(spu_switch_event_unregister);
-
/**
* spu_bind_context - bind spu context to physical spu
* @spu: physical spu to bind to
.spu_run = do_spu_run,
.coredump_extra_notes_size = spufs_coredump_extra_notes_size,
.coredump_extra_notes_write = spufs_coredump_extra_notes_write,
+ .notify_spus_active = do_notify_spus_active,
.owner = THIS_MODULE,
};
#include <linux/workqueue.h>
#include <linux/fs.h>
#include <linux/syscalls.h>
+#include <linux/ctype.h>
#include <asm/lmb.h>
HEADER_LDR_FORMAT_GZIP = 1,
};
+#define OS_AREA_HEADER_MAGIC_NUM "cell_ext_os_area"
+
/**
* struct os_area_header - os area header segment.
* @magic_num: Always 'cell_ext_os_area'.
u8 _reserved_5[8];
};
-enum {
- OS_AREA_DB_MAGIC_NUM = 0x2d64622dU,
-};
+#define OS_AREA_DB_MAGIC_NUM "-db-"
/**
* struct os_area_db - Shared flash memory database.
- * @magic_num: Always '-db-' = 0x2d64622d.
+ * @magic_num: Always '-db-'.
* @version: os_area_db format version number.
* @index_64: byte offset of the database id index for 64 bit variables.
* @count_64: number of usable 64 bit index entries
*/
struct os_area_db {
- u32 magic_num;
+ u8 magic_num[4];
u16 version;
u16 _reserved_1;
u16 index_64;
prop->name);
}
+static void dump_field(char *s, const u8 *field, int size_of_field)
+{
+#if defined(DEBUG)
+ int i;
+
+ for (i = 0; i < size_of_field; i++)
+ s[i] = isprint(field[i]) ? field[i] : '.';
+ s[i] = 0;
+#endif
+}
+
#define dump_header(_a) _dump_header(_a, __func__, __LINE__)
static void _dump_header(const struct os_area_header *h, const char *func,
int line)
{
+ char str[sizeof(h->magic_num) + 1];
+
+ dump_field(str, h->magic_num, sizeof(h->magic_num));
pr_debug("%s:%d: h.magic_num: '%s'\n", func, line,
- h->magic_num);
+ str);
pr_debug("%s:%d: h.hdr_version: %u\n", func, line,
h->hdr_version);
pr_debug("%s:%d: h.db_area_offset: %u\n", func, line,
static int verify_header(const struct os_area_header *header)
{
- if (memcmp(header->magic_num, "cell_ext_os_area", 16)) {
+ if (memcmp(header->magic_num, OS_AREA_HEADER_MAGIC_NUM,
+ sizeof(header->magic_num))) {
pr_debug("%s:%d magic_num failed\n", __func__, __LINE__);
return -1;
}
static int db_verify(const struct os_area_db *db)
{
- if (db->magic_num != OS_AREA_DB_MAGIC_NUM) {
+ if (memcmp(db->magic_num, OS_AREA_DB_MAGIC_NUM,
+ sizeof(db->magic_num))) {
pr_debug("%s:%d magic_num failed\n", __func__, __LINE__);
return -1;
}
static void _dump_db(const struct os_area_db *db, const char *func,
int line)
{
+ char str[sizeof(db->magic_num) + 1];
+
+ dump_field(str, db->magic_num, sizeof(db->magic_num));
pr_debug("%s:%d: db.magic_num: '%s'\n", func, line,
- (const char*)&db->magic_num);
+ str);
pr_debug("%s:%d: db.version: %u\n", func, line,
db->version);
pr_debug("%s:%d: db.index_64: %u\n", func, line,
memset(db, 0, sizeof(struct os_area_db));
- db->magic_num = OS_AREA_DB_MAGIC_NUM;
+ memcpy(db->magic_num, OS_AREA_DB_MAGIC_NUM, sizeof(db->magic_num));
db->version = 1;
db->index_64 = HEADER_SIZE;
db->count_64 = VALUES_64_COUNT;
#include <asm/vdso_datapage.h>
#include <asm/pSeries_reconfig.h>
#include "xics.h"
+#include "plpar_wrappers.h"
/* This version can't take the spinlock, because it never returns */
static struct rtas_args rtas_stop_self_args = {
local_irq_disable();
idle_task_exit();
xics_teardown_cpu(0);
+ unregister_slb_shadow(hard_smp_processor_id(), __pa(get_slb_shadow()));
rtas_stop_self();
/* Should never get here... */
BUG();
* Based upon code written by Ross Biro, Linus Torvalds, Bob Manson,
* and David Mosberger.
*
- * Added Linux support -miguel (weird, eh?, the orignal code was meant
+ * Added Linux support -miguel (weird, eh?, the original code was meant
* to emulate SunOS).
*/
static inline unsigned long do_gettimeoffset(void)
{
- return (*master_l10_counter >> 10) & 0x1fffff;
+ unsigned long val = *master_l10_counter;
+ unsigned long usec = (val >> 10) & 0x1fffff;
+
+ /* Limit hit? */
+ if (val & 0x80000000)
+ usec += 1000000 / HZ;
+
+ return usec;
}
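(A worked example of the limit-bit handling above, assuming HZ=100 as on sparc32: the masked counter field counts microseconds within the current tick, 0..9999. If bit 31, the limit bit, is set, the counter has already wrapped but the pending tick has not been serviced yet, so 1000000/HZ = 10000 usec is added on top; without that correction, two back-to-back gettimeofday() calls straddling the wrap could see time step backwards by up to one tick.)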
/* Ok, my cute asm atomicity trick doesn't work anymore.
/* arch/sparc64/kernel/ktlb.S: Kernel mapping TLB miss handling.
*
- * Copyright (C) 1995, 1997, 2005 David S. Miller <davem@davemloft.net>
+ * Copyright (C) 1995, 1997, 2005, 2008 David S. Miller <davem@davemloft.net>
* Copyright (C) 1996 Eddie C. Dost (ecd@brainaid.de)
* Copyright (C) 1996 Miguel de Icaza (miguel@nuclecu.unam.mx)
* Copyright (C) 1996,98,99 Jakub Jelinek (jj@sunsite.mff.cuni.cz)
ba,pt %xcc, sun4v_dtlb_load
mov %g5, %g3
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
kvmap_vmemmap:
sub %g4, %g5, %g5
srlx %g5, 22, %g5
or %g1, %lo(vmemmap_table), %g1
ba,pt %xcc, kvmap_dtlb_load
ldx [%g1 + %g5], %g5
+#endif
kvmap_dtlb_nonlinear:
/* Catch kernel NULL pointer derefs. */
bleu,pn %xcc, kvmap_dtlb_longpath
nop
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
/* Do not use the TSB for vmemmap. */
mov (VMEMMAP_BASE >> 24), %g5
sllx %g5, 24, %g5
cmp %g4,%g5
bgeu,pn %xcc, kvmap_vmemmap
nop
+#endif
KERN_TSB_LOOKUP_TL1(%g4, %g6, %g5, %g1, %g2, %g3, kvmap_dtlb_load)
return (device_mask & dma_addr_mask) == dma_addr_mask;
}
+void pci_resource_to_user(const struct pci_dev *pdev, int bar,
+ const struct resource *rp, resource_size_t *start,
+ resource_size_t *end)
+{
+ struct pci_pbm_info *pbm = pdev->dev.archdata.host_controller;
+ unsigned long offset;
+
+ if (rp->flags & IORESOURCE_IO)
+ offset = pbm->io_space.start;
+ else
+ offset = pbm->mem_space.start;
+
+ *start = rp->start - offset;
+ *end = rp->end - offset;
+}
+
#endif /* !(CONFIG_PCI) */
/* How the Tomatillo IRQs are routed around is pure guesswork here.
*
* All the Tomatillo devices I see in prtconf dumps seem to have only
- * a single PCI bus unit attached to it. It would seem they are seperate
+ * a single PCI bus unit attached to it. It would seem they are separate
* devices because their PortID (ie. JBUS ID) values are all different
* and thus the registers are mapped to totally different locations.
*
1: ba,pt %xcc, etrap
2: or %g7, %lo(2b), %g7
+ mov %l4, %o1
call sun4v_itlb_error_report
add %sp, PTREGS_OFF, %o0
1: ba,pt %xcc, etrap
2: or %g7, %lo(2b), %g7
+ mov %l4, %o1
call sun4v_dtlb_error_report
add %sp, PTREGS_OFF, %o0
printk(KERN_EMERG "SUN4V-ITLB: Error at TPC[%lx], tl %d\n",
regs->tpc, tl);
print_symbol(KERN_EMERG "SUN4V-ITLB: TPC<%s>\n", regs->tpc);
+ printk(KERN_EMERG "SUN4V-ITLB: O7[%lx]\n", regs->u_regs[UREG_I7]);
+ print_symbol(KERN_EMERG "SUN4V-ITLB: O7<%s>\n", regs->u_regs[UREG_I7]);
printk(KERN_EMERG "SUN4V-ITLB: vaddr[%lx] ctx[%lx] "
"pte[%lx] error[%lx]\n",
sun4v_err_itlb_vaddr, sun4v_err_itlb_ctx,
printk(KERN_EMERG "SUN4V-DTLB: Error at TPC[%lx], tl %d\n",
regs->tpc, tl);
print_symbol(KERN_EMERG "SUN4V-DTLB: TPC<%s>\n", regs->tpc);
+ printk(KERN_EMERG "SUN4V-DTLB: O7[%lx]\n", regs->u_regs[UREG_I7]);
+ print_symbol(KERN_EMERG "SUN4V-DTLB: O7<%s>\n", regs->u_regs[UREG_I7]);
printk(KERN_EMERG "SUN4V-DTLB: vaddr[%lx] ctx[%lx] "
"pte[%lx] error[%lx]\n",
sun4v_err_dtlb_vaddr, sun4v_err_dtlb_ctx,
n = read(in_fds[0], &c, sizeof(c));
if (n == 0) {
printk("harddog_open - EOF on watchdog pipe\n");
- helper_wait(pid);
+ helper_wait(pid, 1, NULL);
err = -EIO;
goto out_close_out;
}
else if (n < 0) {
printk("harddog_open - read of watchdog pipe failed, "
"err = %d\n", errno);
- helper_wait(pid);
+ helper_wait(pid, 1, NULL);
err = n;
goto out_close_out;
}
apm_info.disabled = 1;
return -ENODEV;
}
- if (PM_IS_ACTIVE()) {
+ if (pm_flags & PM_ACPI) {
printk(KERN_NOTICE "apm: overridden by ACPI.\n");
apm_info.disabled = 1;
return -ENODEV;
}
-#ifdef CONFIG_PM_LEGACY
- pm_active = 1;
-#endif
+ pm_flags |= PM_APM;
/*
* Set up a segment that references the real mode segment 0x40
kthread_stop(kapmd_task);
kapmd_task = NULL;
}
-#ifdef CONFIG_PM_LEGACY
- pm_active = 0;
-#endif
+ pm_flags &= ~PM_APM;
}
module_init(apm_init);
/* Do an early initialization of the fixmap area */
movl $(swapper_pg_dir - __PAGE_OFFSET), %edx
movl $(swapper_pg_pmd - __PAGE_OFFSET), %eax
- addl $0x007, %eax /* 0x007 = PRESENT+RW+USER */
+ addl $0x67, %eax /* 0x67 == _PAGE_TABLE */
movl %eax, 4092(%edx)
xorl %ebx,%ebx /* This is the boot CPU (BSP) */
hpet_pie_count = 0;
}
- if (hpet_rtc_flags & RTC_PIE &&
+ if (hpet_rtc_flags & RTC_AIE &&
(curr_time.tm_sec == hpet_alarm_time.tm_sec) &&
(curr_time.tm_min == hpet_alarm_time.tm_min) &&
(curr_time.tm_hour == hpet_alarm_time.tm_hour))
{
int apic1, pin1, apic2, pin2;
int vector;
- unsigned int ver;
unsigned long flags;
local_irq_save(flags);
- ver = apic_read(APIC_LVR);
- ver = GET_APIC_VERSION(ver);
-
/*
* get/set the timer IRQ vector:
*/
* mode for the 8259A whenever interrupts are routed
* through I/O APICs. Also IRQ0 has to be enabled in
* the 8259A which implies the virtual wire has to be
- * disabled in the local APIC. Finally timer interrupts
- * need to be acknowledged manually in the 8259A for
- * timer_interrupt() and for the i82489DX when using
- * the NMI watchdog.
+ * disabled in the local APIC.
*/
apic_write_around(APIC_LVT0, APIC_LVT_MASKED | APIC_DM_EXTINT);
init_8259A(1);
- timer_ack = !cpu_has_tsc;
- timer_ack |= (nmi_watchdog == NMI_IO_APIC && !APIC_INTEGRATED(ver));
+ timer_ack = 1;
if (timer_over_8254 > 0)
enable_8259A_irq(0);
static irqreturn_t mfgpt_tick(int irq, void *dev_id)
{
+ /* Turn off the clock (and clear the event) */
+ mfgpt_disable_timer(mfgpt_event_clock);
+
if (mfgpt_tick_mode == CLOCK_EVT_MODE_SHUTDOWN)
return IRQ_HANDLED;
- /* Turn off the clock */
- mfgpt_disable_timer(mfgpt_event_clock);
-
/* Clear the counter */
geode_mfgpt_write(mfgpt_event_clock, MFGPT_REG_COUNTER, 0);
}
mfgpt_event_clock = timer;
- /* Set the clock scale and enable the event mode for CMP2 */
- val = MFGPT_SCALE | (3 << 8);
-
- geode_mfgpt_write(mfgpt_event_clock, MFGPT_REG_SETUP, val);
/* Set up the IRQ on the MFGPT side */
if (geode_mfgpt_setup_irq(mfgpt_event_clock, MFGPT_CMP2, irq)) {
goto err;
}
+ /* Set the clock scale and enable the event mode for CMP2 */
+ val = MFGPT_SCALE | (3 << 8);
+
+ geode_mfgpt_write(mfgpt_event_clock, MFGPT_REG_SETUP, val);
+
/* Set up the clock event */
mfgpt_clockevent.mult = div_sc(MFGPT_HZ, NSEC_PER_SEC, 32);
mfgpt_clockevent.min_delta_ns = clockevent_delta2ns(0xF,
#include <asm/smp.h>
#include <asm/nmi.h>
-#include <asm/timer.h>
#include "mach_traps.h"
prev_nmi_count = kmalloc(NR_CPUS * sizeof(int), GFP_KERNEL);
if (!prev_nmi_count)
- goto error;
+ return -1;
printk(KERN_INFO "Testing NMI watchdog ... ");
if (!atomic_read(&nmi_active)) {
kfree(prev_nmi_count);
atomic_set(&nmi_active, -1);
- goto error;
+ return -1;
}
printk("OK.\n");
kfree(prev_nmi_count);
return 0;
-error:
- timer_ack = !cpu_has_tsc;
-
- return -1;
}
/* This needs to happen later in boot so counters are working */
late_initcall(check_nmi_watchdog);
}
}
+static void do_nothing(void *unused)
+{
+}
+
void cpu_idle_wait(void)
{
unsigned int cpu, this_cpu = get_cpu();
cpu_clear(cpu, map);
}
cpus_and(map, map, cpu_online_map);
+ /*
+ * We waited 1 sec; if a CPU still did not call idle,
+ * it may be because it is in idle and not waking up
+ * because it has nothing to do.
+ * Give all the remaining CPUs a kick.
+ */
+ smp_call_function_mask(map, do_nothing, 0, 0);
} while (!cpus_empty(map));
set_cpus_allowed(current, tmp);
cpu_relax();
}
+static void do_nothing(void *unused)
+{
+}
+
void cpu_idle_wait(void)
{
unsigned int cpu, this_cpu = get_cpu();
cpu_clear(cpu, map);
}
cpus_and(map, map, cpu_online_map);
+ /*
+ * We waited 1 sec; if a CPU still did not call idle,
+ * it may be because it is in idle and not waking up
+ * because it has nothing to do.
+ * Give all the remaining CPUs a kick.
+ */
+ smp_call_function_mask(map, do_nothing, 0, 0);
} while (!cpus_empty(map));
set_cpus_allowed(current, tmp);
struct cpuinfo_x86 *c = &cpu_data(id);
*c = boot_cpu_data;
- identify_cpu(c);
c->cpu_index = id;
+ identify_cpu(c);
print_cpu_info(c);
}
int cpu;
};
-void do_fork_idle(struct work_struct *work)
+static void __cpuinit do_fork_idle(struct work_struct *work)
{
struct create_idle *c_idle =
container_of(work, struct create_idle, work);
info.si_errno = 0; \
info.si_code = sicode; \
info.si_addr = (void __user *)siaddr; \
+ trace_hardirqs_fixup(); \
if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
== NOTIFY_STOP) \
return; \
info.si_errno = 0; \
info.si_code = sicode; \
info.si_addr = (void __user *)siaddr; \
+ trace_hardirqs_fixup(); \
if (notify_die(DIE_TRAP, str, regs, error_code, trapnr, signr) \
== NOTIFY_STOP) \
return; \
static void __init set_highmem_pages_init(int bad_ppro)
{
int pfn;
- for (pfn = highstart_pfn; pfn < highend_pfn; pfn++)
- add_one_highpage_init(pfn_to_page(pfn), pfn, bad_ppro);
+ for (pfn = highstart_pfn; pfn < highend_pfn; pfn++) {
+ /*
+ * Holes under sparsemem might not have a mem_map[]:
+ */
+ if (pfn_valid(pfn))
+ add_one_highpage_init(pfn_to_page(pfn), pfn, bad_ppro);
+ }
totalram_pages += totalhigh_pages;
}
#endif /* CONFIG_FLATMEM */
if (cpu_model == 14)
*cpu_type = "i386/core";
- else if (cpu_model == 15)
+ else if (cpu_model == 15 || cpu_model == 23)
*cpu_type = "i386/core_2";
else if (cpu_model > 0xd)
return 0;
#include <linux/time.h>
#include <asm/uaccess.h>
-static DEFINE_PER_CPU(unsigned long long, blk_trace_cpu_offset) = { 0, };
static unsigned int blktrace_seq __read_mostly = 1;
/*
const int cpu = smp_processor_id();
t->magic = BLK_IO_TRACE_MAGIC | BLK_IO_TRACE_VERSION;
- t->time = cpu_clock(cpu) - per_cpu(blk_trace_cpu_offset, cpu);
+ t->time = ktime_to_ns(ktime_get());
t->device = bt->dev;
t->action = action;
t->pid = pid;
t->magic = BLK_IO_TRACE_MAGIC | BLK_IO_TRACE_VERSION;
t->sequence = ++(*sequence);
- t->time = cpu_clock(cpu) - per_cpu(blk_trace_cpu_offset, cpu);
+ t->time = ktime_to_ns(ktime_get());
t->sector = sector;
t->bytes = bytes;
t->action = what;
EXPORT_SYMBOL_GPL(__blk_add_trace);
static struct dentry *blk_tree_root;
-static struct mutex blk_tree_mutex;
+static DEFINE_MUTEX(blk_tree_mutex);
static unsigned int root_users;
static inline void blk_remove_root(void)
blk_trace_remove(q);
}
}
-
-/*
- * Average offset over two calls to cpu_clock() with a gettimeofday()
- * in the middle
- */
-static void blk_check_time(unsigned long long *t, int this_cpu)
-{
- unsigned long long a, b;
- struct timeval tv;
-
- a = cpu_clock(this_cpu);
- do_gettimeofday(&tv);
- b = cpu_clock(this_cpu);
-
- *t = tv.tv_sec * 1000000000 + tv.tv_usec * 1000;
- *t -= (a + b) / 2;
-}
-
-/*
- * calibrate our inter-CPU timings
- */
-static void blk_trace_check_cpu_time(void *data)
-{
- unsigned long long *t;
- int this_cpu = get_cpu();
-
- t = &per_cpu(blk_trace_cpu_offset, this_cpu);
-
- /*
- * Just call it twice, hopefully the second call will be cache hot
- * and a little more precise
- */
- blk_check_time(t, this_cpu);
- blk_check_time(t, this_cpu);
-
- put_cpu();
-}
-
-static void blk_trace_set_ht_offsets(void)
-{
-#if defined(CONFIG_SCHED_SMT)
- int cpu, i;
-
- /*
- * now make sure HT siblings have the same time offset
- */
- preempt_disable();
- for_each_online_cpu(cpu) {
- unsigned long long *cpu_off, *sibling_off;
-
- for_each_cpu_mask(i, per_cpu(cpu_sibling_map, cpu)) {
- if (i == cpu)
- continue;
-
- cpu_off = &per_cpu(blk_trace_cpu_offset, cpu);
- sibling_off = &per_cpu(blk_trace_cpu_offset, i);
- *sibling_off = *cpu_off;
- }
- }
- preempt_enable();
-#endif
-}
-
-static __init int blk_trace_init(void)
-{
- mutex_init(&blk_tree_mutex);
- on_each_cpu(blk_trace_check_cpu_time, NULL, 1, 1);
- blk_trace_set_ht_offsets();
-
- return 0;
-}
-
-module_init(blk_trace_init);
-
and functions, which do not yet exist in /sys
Say N to delete power /proc/acpi/ folders that have moved to /sys/
+config ACPI_SYSFS_POWER
+ bool "Future power /sys interface"
+ select POWER_SUPPLY
+ default y
+ ---help---
+ Say N to disable power /sys interface
config ACPI_PROC_EVENT
bool "Deprecated /proc/acpi/event support"
depends on PROC_FS
config ACPI_AC
tristate "AC Adapter"
depends on X86
- select POWER_SUPPLY
default y
help
This driver adds support for the AC Adapter object, which indicates
config ACPI_BATTERY
tristate "Battery"
depends on X86
- select POWER_SUPPLY
default y
help
This driver adds support for battery information through
config ACPI_SBS
tristate "Smart Battery System"
depends on X86
- select POWER_SUPPLY
help
This driver adds support for the Smart Battery System, another
type of access to battery information, found on some laptops.
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#endif
+#ifdef CONFIG_ACPI_SYSFS_POWER
#include <linux/power_supply.h>
+#endif
#include <acpi/acpi_bus.h>
#include <acpi/acpi_drivers.h>
};
struct acpi_ac {
+#ifdef CONFIG_ACPI_SYSFS_POWER
struct power_supply charger;
+#endif
struct acpi_device * device;
unsigned long state;
};
.release = single_release,
};
#endif
-
+#ifdef CONFIG_ACPI_SYSFS_POWER
static int get_ac_property(struct power_supply *psy,
enum power_supply_property psp,
union power_supply_propval *val)
static enum power_supply_property ac_props[] = {
POWER_SUPPLY_PROP_ONLINE,
};
-
+#endif
/* --------------------------------------------------------------------------
AC Adapter Management
-------------------------------------------------------------------------- */
acpi_bus_generate_netlink_event(device->pnp.device_class,
device->dev.bus_id, event,
(u32) ac->state);
+#ifdef CONFIG_ACPI_SYSFS_POWER
kobject_uevent(&ac->charger.dev->kobj, KOBJ_CHANGE);
+#endif
break;
default:
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
#endif
if (result)
goto end;
+#ifdef CONFIG_ACPI_SYSFS_POWER
ac->charger.name = acpi_device_bid(device);
ac->charger.type = POWER_SUPPLY_TYPE_MAINS;
ac->charger.properties = ac_props;
ac->charger.num_properties = ARRAY_SIZE(ac_props);
ac->charger.get_property = get_ac_property;
power_supply_register(&ac->device->dev, &ac->charger);
+#endif
status = acpi_install_notify_handler(device->handle,
ACPI_ALL_NOTIFY, acpi_ac_notify,
ac);
old_state = ac->state;
if (acpi_ac_get_state(ac))
return 0;
+#ifdef CONFIG_ACPI_SYSFS_POWER
if (old_state != ac->state)
kobject_uevent(&ac->charger.dev->kobj, KOBJ_CHANGE);
+#endif
return 0;
}
status = acpi_remove_notify_handler(device->handle,
ACPI_ALL_NOTIFY, acpi_ac_notify);
+#ifdef CONFIG_ACPI_SYSFS_POWER
if (ac->charger.dev)
power_supply_unregister(&ac->charger);
+#endif
#ifdef CONFIG_ACPI_PROCFS_POWER
acpi_ac_remove_fs(device);
#endif
#include <acpi/acpi_bus.h>
#include <acpi/acpi_drivers.h>
+#ifdef CONFIG_ACPI_SYSFS_POWER
#include <linux/power_supply.h>
+#endif
#define ACPI_BATTERY_VALUE_UNKNOWN 0xFFFFFFFF
struct acpi_battery {
struct mutex lock;
+#ifdef CONFIG_ACPI_SYSFS_POWER
struct power_supply bat;
+#endif
struct acpi_device *device;
unsigned long update_time;
int current_now;
return battery->device->status.battery_present;
}
+#ifdef CONFIG_ACPI_SYSFS_POWER
static int acpi_battery_technology(struct acpi_battery *battery)
{
if (!strcasecmp("NiCd", battery->type))
POWER_SUPPLY_PROP_MODEL_NAME,
POWER_SUPPLY_PROP_MANUFACTURER,
};
+#endif
#ifdef CONFIG_ACPI_PROCFS_POWER
inline char *acpi_battery_units(struct acpi_battery *battery)
return acpi_battery_set_alarm(battery);
}
+#ifdef CONFIG_ACPI_SYSFS_POWER
static ssize_t acpi_battery_alarm_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
int result;
- battery->update_time = 0;
- result = acpi_battery_get_info(battery);
- acpi_battery_init_alarm(battery);
- if (result)
- return result;
if (battery->power_unit) {
battery->bat.properties = charge_battery_props;
battery->bat.num_properties =
power_supply_unregister(&battery->bat);
battery->bat.dev = NULL;
}
+#endif
static int acpi_battery_update(struct acpi_battery *battery)
{
- int result = acpi_battery_get_status(battery);
+ int result;
+ result = acpi_battery_get_status(battery);
if (result)
return result;
+#ifdef CONFIG_ACPI_SYSFS_POWER
if (!acpi_battery_present(battery)) {
sysfs_remove_battery(battery);
+ battery->update_time = 0;
return 0;
}
+#endif
+ if (!battery->update_time) {
+ result = acpi_battery_get_info(battery);
+ if (result)
+ return result;
+ acpi_battery_init_alarm(battery);
+ }
+#ifdef CONFIG_ACPI_SYSFS_POWER
if (!battery->bat.dev)
sysfs_add_battery(battery);
+#endif
return acpi_battery_get_state(battery);
}
acpi_bus_generate_netlink_event(device->pnp.device_class,
device->dev.bus_id, event,
acpi_battery_present(battery));
+#ifdef CONFIG_ACPI_SYSFS_POWER
/* acpi_battery_update could remove the power_supply object */
if (battery->bat.dev)
kobject_uevent(&battery->bat.dev->kobj, KOBJ_CHANGE);
+#endif
}
static int acpi_battery_add(struct acpi_device *device)
#ifdef CONFIG_ACPI_PROCFS_POWER
acpi_battery_remove_fs(device);
#endif
+#ifdef CONFIG_ACPI_SYSFS_POWER
sysfs_remove_battery(battery);
+#endif
mutex_destroy(&battery->lock);
kfree(battery);
return 0;
#include <linux/list.h>
#include <linux/sched.h>
#include <linux/pm.h>
-#include <linux/pm_legacy.h>
#include <linux/device.h>
#include <linux/proc_fs.h>
#ifdef CONFIG_X86
result = acpi_bus_init();
if (!result) {
-#ifdef CONFIG_PM_LEGACY
- if (!PM_IS_ACTIVE())
- pm_active = 1;
+ if (!(pm_flags & PM_APM))
+ pm_flags |= PM_ACPI;
else {
printk(KERN_INFO PREFIX
"APM is already active, exiting\n");
disable_acpi();
result = -ENODEV;
}
-#endif
} else
disable_acpi();
return 0;
}
+int __init acpi_boot_ec_enable(void)
+{
+ if (!boot_ec || boot_ec->handlers_installed)
+ return 0;
+ if (!ec_install_handlers(boot_ec)) {
+ first_ec = boot_ec;
+ return 0;
+ }
+ return -EFAULT;
+}
+
int __init acpi_ec_ecdt_probe(void)
{
int ret;
goto error;
/* We really need to limit this workaround, the only ASUS,
* which needs it, has fake EC._INI method, so use it as flag.
+ * Keep boot_ec struct as it will be needed soon.
*/
if (ACPI_FAILURE(acpi_get_handle(boot_ec->handle, "_INI", &x)))
- goto error;
+ return -ENODEV;
}
ret = ec_install_handlers(boot_ec);
* setup will potentially execute control methods
* (e.g., _REG method for this region)
*/
- acpi_ex_relinquish_interpreter();
+ acpi_ex_exit_interpreter();
status = region_setup(region_obj, ACPI_REGION_ACTIVATE,
handler_desc->address_space.context,
/* Re-enter the interpreter */
- acpi_ex_reacquire_interpreter();
+ acpi_ex_enter_interpreter();
/* Check for failure of the Region Setup */
* exit the interpreter because the handler *might* block -- we don't
* know what it will do, so we can't hold the lock on the interpreter.
*/
- acpi_ex_relinquish_interpreter();
+ acpi_ex_exit_interpreter();
}
/* Call the handler */
* We just returned from a non-default handler, we must re-enter the
* interpreter
*/
- acpi_ex_reacquire_interpreter();
+ acpi_ex_enter_interpreter();
}
return_ACPI_STATUS(status);
&polarity, &link,
acpi_pci_allocate_irq);
+ if (irq < 0) {
+ /*
+ * IDE legacy mode controller IRQs are magic. Why do compat
+ * extensions always make such a nasty mess.
+ */
+ if (dev->class >> 8 == PCI_CLASS_STORAGE_IDE &&
+ (dev->class & 0x05) == 0)
+ return 0;
+ }
/*
* No IRQ known to the ACPI subsystem - maybe the BIOS /
* driver reported one, then use it. Exit in any case.
#define PM_TIMER_TICKS_TO_US(p) (((p) * 1000)/(PM_TIMER_FREQUENCY/1000))
static unsigned int max_cstate __read_mostly = ACPI_PROCESSOR_MAX_POWER;
+#ifdef CONFIG_CPU_IDLE
module_param(max_cstate, uint, 0000);
+#else
+module_param(max_cstate, uint, 0644);
+#endif
static unsigned int nocst __read_mostly;
module_param(nocst, uint, 0000);
#include <linux/jiffies.h>
#include <linux/delay.h>
+#ifdef CONFIG_ACPI_SYSFS_POWER
#include <linux/power_supply.h>
+#endif
#include "sbshc.h"
MODULE_DEVICE_TABLE(acpi, sbs_device_ids);
struct acpi_battery {
+#ifdef CONFIG_ACPI_SYSFS_POWER
struct power_supply bat;
+#endif
struct acpi_sbs *sbs;
#ifdef CONFIG_ACPI_PROCFS_POWER
struct proc_dir_entry *proc_entry;
#define to_acpi_battery(x) container_of(x, struct acpi_battery, bat);
struct acpi_sbs {
+#ifdef CONFIG_ACPI_SYSFS_POWER
struct power_supply charger;
+#endif
struct acpi_device *device;
struct acpi_smb_hc *hc;
struct mutex lock;
acpi_battery_ipscale(battery);
}
+#ifdef CONFIG_ACPI_SYSFS_POWER
static int sbs_get_ac_property(struct power_supply *psy,
enum power_supply_property psp,
union power_supply_propval *val)
POWER_SUPPLY_PROP_MODEL_NAME,
POWER_SUPPLY_PROP_MANUFACTURER,
};
+#endif
/* --------------------------------------------------------------------------
Smart Battery System Management
return result;
}
+#ifdef CONFIG_ACPI_SYSFS_POWER
static ssize_t acpi_battery_alarm_show(struct device *dev,
struct device_attribute *attr,
char *buf)
.show = acpi_battery_alarm_show,
.store = acpi_battery_alarm_store,
};
+#endif
/* --------------------------------------------------------------------------
FS Interface (/proc/acpi)
&acpi_battery_state_fops, &acpi_battery_alarm_fops,
battery);
#endif
+#ifdef CONFIG_ACPI_SYSFS_POWER
battery->bat.name = battery->name;
battery->bat.type = POWER_SUPPLY_TYPE_BATTERY;
if (!acpi_battery_mode(battery)) {
goto end;
battery->have_sysfs_alarm = 1;
end:
+#endif
printk(KERN_INFO PREFIX "%s [%s]: Battery Slot [%s] (battery %s)\n",
ACPI_SBS_DEVICE_NAME, acpi_device_bid(sbs->device),
battery->name, sbs->battery->present ? "present" : "absent");
static void acpi_battery_remove(struct acpi_sbs *sbs, int id)
{
struct acpi_battery *battery = &sbs->battery[id];
-
+#ifdef CONFIG_ACPI_SYSFS_POWER
if (battery->bat.dev) {
if (battery->have_sysfs_alarm)
device_remove_file(battery->bat.dev, &alarm_attr);
power_supply_unregister(&battery->bat);
}
+#endif
#ifdef CONFIG_ACPI_PROCFS_POWER
if (battery->proc_entry)
acpi_sbs_remove_fs(&battery->proc_entry, acpi_battery_dir);
if (result)
goto end;
#endif
+#ifdef CONFIG_ACPI_SYSFS_POWER
sbs->charger.name = "sbs-charger";
sbs->charger.type = POWER_SUPPLY_TYPE_MAINS;
sbs->charger.properties = sbs_ac_props;
sbs->charger.num_properties = ARRAY_SIZE(sbs_ac_props);
sbs->charger.get_property = sbs_get_ac_property;
power_supply_register(&sbs->device->dev, &sbs->charger);
+#endif
printk(KERN_INFO PREFIX "%s [%s]: AC Adapter [%s] (%s)\n",
ACPI_SBS_DEVICE_NAME, acpi_device_bid(sbs->device),
ACPI_AC_DIR_NAME, sbs->charger_present ? "on-line" : "off-line");
static void acpi_charger_remove(struct acpi_sbs *sbs)
{
+#ifdef CONFIG_ACPI_SYSFS_POWER
if (sbs->charger.dev)
power_supply_unregister(&sbs->charger);
+#endif
#ifdef CONFIG_ACPI_PROCFS_POWER
if (sbs->charger_entry)
acpi_sbs_remove_fs(&sbs->charger_entry, acpi_ac_dir);
ACPI_SBS_NOTIFY_STATUS,
sbs->charger_present);
#endif
+#ifdef CONFIG_ACPI_SYSFS_POWER
kobject_uevent(&sbs->charger.dev->kobj, KOBJ_CHANGE);
+#endif
}
if (sbs->manager_present) {
for (id = 0; id < MAX_SBS_BAT; ++id) {
ACPI_SBS_NOTIFY_STATUS,
bat->present);
#endif
+#ifdef CONFIG_ACPI_SYSFS_POWER
kobject_uevent(&bat->bat.dev->kobj, KOBJ_CHANGE);
+#endif
}
}
}
return result;
}
+int __init acpi_boot_ec_enable(void);
+
static int __init acpi_scan_init(void)
{
int result;
* Enumerate devices in the ACPI namespace.
*/
result = acpi_bus_scan_fixed(acpi_root);
+
+ /* EC region might be needed at bus_scan, so enable it now */
+ acpi_boot_ec_enable();
+
if (!result)
result = acpi_bus_scan(acpi_root, &ops);
ich8_2port_sata,
ich8m_apple_sata_ahci, /* locks up on second port enable */
tolapai_sata_ahci,
+ piix_pata_vmw, /* PIIX4 for VMware, spurious DMA_ERR */
/* constants for mapping table */
P0 = 0, /* port 0 */
static void piix_set_dmamode(struct ata_port *ap, struct ata_device *adev);
static void ich_set_dmamode(struct ata_port *ap, struct ata_device *adev);
static int ich_pata_cable_detect(struct ata_port *ap);
+static u8 piix_vmw_bmdma_status(struct ata_port *ap);
#ifdef CONFIG_PM
static int piix_pci_device_suspend(struct pci_dev *pdev, pm_message_t mesg);
static int piix_pci_device_resume(struct pci_dev *pdev);
static const struct pci_device_id piix_pci_tbl[] = {
/* Intel PIIX3 for the 430HX etc */
{ 0x8086, 0x7010, PCI_ANY_ID, PCI_ANY_ID, 0, 0, piix_pata_mwdma },
+ /* VMware ICH4 */
+ { 0x8086, 0x7111, 0x15ad, 0x1976, 0, 0, piix_pata_vmw },
/* Intel PIIX4 for the 430TX/440BX/MX chipset: UDMA 33 */
/* Also PIIX4E (fn3 rev 2) and PIIX4M (fn3 rev 3) */
{ 0x8086, 0x7111, PCI_ANY_ID, PCI_ANY_ID, 0, 0, piix_pata_33 },
.port_start = ata_port_start,
};
+static const struct ata_port_operations piix_vmw_ops = {
+ .set_piomode = piix_set_piomode,
+ .set_dmamode = piix_set_dmamode,
+ .mode_filter = ata_pci_default_filter,
+
+ .tf_load = ata_tf_load,
+ .tf_read = ata_tf_read,
+ .check_status = ata_check_status,
+ .exec_command = ata_exec_command,
+ .dev_select = ata_std_dev_select,
+
+ .bmdma_setup = ata_bmdma_setup,
+ .bmdma_start = ata_bmdma_start,
+ .bmdma_stop = ata_bmdma_stop,
+ .bmdma_status = piix_vmw_bmdma_status,
+ .qc_prep = ata_qc_prep,
+ .qc_issue = ata_qc_issue_prot,
+ .data_xfer = ata_data_xfer,
+
+ .freeze = ata_bmdma_freeze,
+ .thaw = ata_bmdma_thaw,
+ .error_handler = piix_pata_error_handler,
+ .post_internal_cmd = ata_bmdma_post_internal_cmd,
+ .cable_detect = ata_cable_40wire,
+
+ .irq_handler = ata_interrupt,
+ .irq_clear = ata_bmdma_irq_clear,
+ .irq_on = ata_irq_on,
+
+ .port_start = ata_port_start,
+};
+
static const struct piix_map_db ich5_map_db = {
.mask = 0x7,
.port_enable = 0x3,
.port_ops = &piix_sata_ops,
},
+ [piix_pata_vmw] =
+ {
+ .sht = &piix_sht,
+ .flags = PIIX_PATA_FLAGS,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x06, /* mwdma1-2 ?? CHECK 0 should be ok but slow */
+ .udma_mask = ATA_UDMA_MASK_40C,
+ .port_ops = &piix_vmw_ops,
+ },
+
};
static struct pci_bits piix_enable_bits[] = {
}
#endif
+static u8 piix_vmw_bmdma_status(struct ata_port *ap)
+{
+ return ata_bmdma_status(ap) & ~ATA_DMA_ERR;
+}
+
#define AHCI_PCI_BAR 5
#define AHCI_GLOBAL_CTL 0x04
#define AHCI_ENABLE (1 << 31)
rc = ap->ops->port_start(ap);
if (rc) {
if (rc != -ENODEV)
- dev_printk(KERN_ERR, host->dev, "failed to start port %d (errno=%d)\n", i, rc);
+ dev_printk(KERN_ERR, host->dev,
+ "failed to start port %d "
+ "(errno=%d)\n", i, rc);
goto err_out;
}
}
ehc->i.action &= ~ATA_EH_PERDEV_MASK;
}
- /* consider speeding down */
+ /* propagate timeout to host link */
+ if ((all_err_mask & AC_ERR_TIMEOUT) && !ata_is_host_link(link))
+ ap->link.eh_context.i.err_mask |= AC_ERR_TIMEOUT;
+
+ /* record error and consider speeding down */
dev = ehc->i.dev;
- if (!dev && ata_link_max_devices(link) == 1 &&
- ata_dev_enabled(link->device))
- dev = link->device;
+ if (!dev && ((ata_link_max_devices(link) == 1 &&
+ ata_dev_enabled(link->device))))
+ dev = link->device;
if (dev)
ehc->i.action |= ata_eh_speed_down(dev, is_io, all_err_mask);
{
struct ata_link *link;
- __ata_port_for_each_link(link, ap)
+ ata_port_for_each_link(link, ap)
ata_eh_link_autopsy(link);
+
+ /* Autopsy of fanout ports can affect host link autopsy.
+ * Perform host link autopsy last.
+ */
+ if (ap->nr_pmp_links)
+ ata_eh_link_autopsy(&ap->link);
}
/**
if (ata_link_offline(link))
continue;
- /* apply class override and convert UNKNOWN to NONE */
+ /* apply class override */
if (lflags & ATA_LFLAG_ASSUME_ATA)
classes[dev->devno] = ATA_DEV_ATA;
else if (lflags & ATA_LFLAG_ASSUME_SEMB)
classes[dev->devno] = ATA_DEV_SEMB_UNSUP; /* not yet */
- else if (classes[dev->devno] == ATA_DEV_UNKNOWN)
- classes[dev->devno] = ATA_DEV_NONE;
}
/* record current link speed */
/* SError.N need a kick in the ass to get working */
link->flags |= ATA_LFLAG_HRST_TO_RESUME;
- /* class code report is unreliable */
- if (link->pmp < 5)
- link->flags |= ATA_LFLAG_ASSUME_ATA;
-
- /* The config device, which can be either at
- * port 0 or 5, locks up on SRST.
+ /* Class code report is unreliable and SRST
+ * times out under certain configurations.
+ * Config device can be at port 0 or 5 and
+ * locks up on SRST.
*/
- if (link->pmp == 0 || link->pmp == 5)
+ if (link->pmp <= 5)
link->flags |= ATA_LFLAG_NO_SRST |
ATA_LFLAG_ASSUME_ATA;
blk_queue_max_hw_segments(q, q->max_hw_segments - 1);
}
+ if (dev->class == ATA_DEV_ATA)
+ sdev->manage_start_stop = 1;
+
if (dev->flags & ATA_DFLAG_AN)
set_bit(SDEV_EVT_MEDIA_CHANGE, sdev->supported_events);
ata_scsi_sdev_config(sdev);
- sdev->manage_start_stop = 1;
-
if (dev)
ata_scsi_dev_config(sdev, dev);
if (rc)
goto err_out;
- if (!legacy_mode) {
+ if (!legacy_mode && pdev->irq) {
+ /* We may have no IRQ assigned in which case we can poll. This
+ shouldn't happen on a sane system but robustness is cheap
+ in this case */
rc = devm_request_irq(dev, pdev->irq, pi->port_ops->irq_handler,
IRQF_SHARED, DRV_NAME, host);
if (rc)
ata_port_desc(host->ports[0], "irq %d", pdev->irq);
ata_port_desc(host->ports[1], "irq %d", pdev->irq);
- } else {
+ } else if (legacy_mode) {
if (!ata_port_is_dummy(host->ports[0])) {
rc = devm_request_irq(dev, ATA_PRIMARY_IRQ(pdev),
pi->port_ops->irq_handler,
if (res == NULL)
return -EINVAL;
- while (bfin_port_info[board_idx].udma_mask>0 && udma_fsclk[udma_mode] > fsclk) {
+ while (bfin_port_info[board_idx].udma_mask > 0 &&
+ udma_fsclk[udma_mode] > fsclk) {
udma_mode--;
bfin_port_info[board_idx].udma_mask >>= 1;
}
.port_start = ata_port_start,
};
-static void ixp4xx_setup_port(struct ata_ioports *ioaddr,
+static void ixp4xx_setup_port(struct ata_port *ap,
struct ixp4xx_pata_data *data,
unsigned long raw_cs0, unsigned long raw_cs1)
{
+ struct ata_ioports *ioaddr = &ap->ioaddr;
unsigned long raw_cmd = raw_cs0;
unsigned long raw_ctl = raw_cs1 + 0x06;
ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
- u32 pad;
+ __le32 pad = 0;
if (write_data) {
memcpy(&pad, buf + buflen - slop, slop);
- pad = le32_to_cpu(pad);
- iowrite32(pad, ap->ioaddr.data_addr);
+ iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
} else {
- pad = ioread32(ap->ioaddr.data_addr);
- pad = cpu_to_le16(pad);
+ pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
memcpy(buf + buflen - slop, &pad, slop);
}
}
/* Flip back to 33Mhz for PIO */
if (adev->dma_mode >= XFER_UDMA_2)
iowrite8(ioread8(clock) & ~sel66, clock);
-
ata_bmdma_stop(qc);
+ pdc202xx_set_piomode(ap, adev);
}
/**
adev->max_sectors = 256;
}
+static int pdc2026x_port_start(struct ata_port *ap)
+{
+ void __iomem *bmdma = ap->ioaddr.bmdma_addr;
+ if (bmdma) {
+ /* Enable burst mode */
+ u8 burst = ioread8(bmdma + 0x1f);
+ iowrite8(burst | 0x01, bmdma + 0x1f);
+ }
+ return ata_sff_port_start(ap);
+}
+
+/**
+ * pdc2026x_check_atapi_dma - Check whether ATAPI DMA can be supported for this command
+ * @qc: Metadata associated with taskfile to check
+ *
+ * Just say no - not supported on older Promise.
+ *
+ * LOCKING:
+ * None (inherited from caller).
+ *
+ * RETURNS: 0 when ATAPI DMA can be used
+ * 1 otherwise
+ */
+
+static int pdc2026x_check_atapi_dma(struct ata_queued_cmd *qc)
+{
+ return 1;
+}
+
static struct scsi_host_template pdc202xx_sht = {
.module = THIS_MODULE,
.name = DRV_NAME,
.post_internal_cmd = ata_bmdma_post_internal_cmd,
.cable_detect = pdc2026x_cable_detect,
+ .check_atapi_dma= pdc2026x_check_atapi_dma,
.bmdma_setup = ata_bmdma_setup,
.bmdma_start = pdc2026x_bmdma_start,
.bmdma_stop = pdc2026x_bmdma_stop,
.irq_clear = ata_bmdma_irq_clear,
.irq_on = ata_irq_on,
- .port_start = ata_sff_port_start,
+ .port_start = pdc2026x_port_start,
};
static int pdc202xx_init_one(struct pci_dev *dev, const struct pci_device_id *id)
ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
- u32 pad;
+ __le32 pad = 0;
if (write_data) {
memcpy(&pad, buf + buflen - slop, slop);
- pad = le32_to_cpu(pad);
- iowrite32(pad, ap->ioaddr.data_addr);
+ iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
} else {
- pad = ioread32(ap->ioaddr.data_addr);
- pad = cpu_to_le32(pad);
+ pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
memcpy(buf + buflen - slop, &pad, slop);
}
}
ioread32_rep(ap->ioaddr.data_addr, buf, buflen >> 2);
if (unlikely(slop)) {
- u32 pad;
+ __le32 pad = 0;
if (write_data) {
memcpy(&pad, buf + buflen - slop, slop);
- pad = le32_to_cpu(pad);
- iowrite32(pad, ap->ioaddr.data_addr);
+ iowrite32(le32_to_cpu(pad), ap->ioaddr.data_addr);
} else {
- pad = ioread32(ap->ioaddr.data_addr);
- pad = cpu_to_le16(pad);
+ pad = cpu_to_le32(ioread32(ap->ioaddr.data_addr));
memcpy(buf + buflen - slop, &pad, slop);
}
}
static void qs_error_handler(struct ata_port *ap)
{
qs_enter_reg_mode(ap);
- ata_do_eh(ap, qs_prereset, ata_std_softreset, NULL,
+ ata_do_eh(ap, qs_prereset, NULL, sata_std_hardreset,
ata_std_postreset);
}
[PORT_CERR_PKT_PROT] = { AC_ERR_HSM, ATA_EH_SOFTRESET,
"invalid data directon for ATAPI CDB" },
[PORT_CERR_SGT_BOUNDARY] = { AC_ERR_SYSTEM, ATA_EH_SOFTRESET,
- "SGT no on qword boundary" },
+ "SGT not on qword boundary" },
[PORT_CERR_SGT_TGTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET,
"PCI target abort while fetching SGT" },
[PORT_CERR_SGT_MSTABRT] = { AC_ERR_HOST_BUS, ATA_EH_SOFTRESET,
struct ata_link *link = qc->dev->link;
struct ata_port *ap = link->ap;
u8 prot = qc->tf.protocol;
- int is_atapi = (prot == ATA_PROT_ATAPI ||
- prot == ATA_PROT_ATAPI_NODATA ||
- prot == ATA_PROT_ATAPI_DMA);
-
- /* ATAPI commands completing with CHECK_SENSE cause various
- * weird problems if other commands are active. PMP DMA CS
- * errata doesn't cover all and HSM violation occurs even with
- * only one other device active. Always run an ATAPI command
- * by itself.
- */
+
+ /*
+ * There is a bug in the chip:
+ * Port LRAM Causes the PRB/SGT Data to be Corrupted
+ * If the host issues a read request for LRAM and SActive registers
+ * while active commands are available in the port, PRB/SGT data in
+ * the LRAM can become corrupted. This issue applies only when
+ * reading from, but not writing to, the LRAM.
+ *
+ * Therefore, reading LRAM when there is no particular error [and
+ * other commands may be outstanding] is prohibited.
+ *
+ * To avoid this bug there are two situations where a command must run
+ * exclusive of any other commands on the port:
+ *
+ * - ATAPI commands which check the sense data
+ * - Passthrough ATA commands which always have ATA_QCFLAG_RESULT_TF
+ * set.
+ *
+ */
+ int is_excl = (prot == ATA_PROT_ATAPI ||
+ prot == ATA_PROT_ATAPI_NODATA ||
+ prot == ATA_PROT_ATAPI_DMA ||
+ (qc->flags & ATA_QCFLAG_RESULT_TF));
+
if (unlikely(ap->excl_link)) {
if (link == ap->excl_link) {
if (ap->nr_active_links)
qc->flags |= ATA_QCFLAG_CLEAR_EXCL;
} else
return ATA_DEFER_PORT;
- } else if (unlikely(is_atapi)) {
+ } else if (unlikely(is_excl)) {
ap->excl_link = link;
if (ap->nr_active_links)
return ATA_DEFER_PORT;
if (ci && ci->desc) {
err_mask |= ci->err_mask;
action |= ci->action;
+ if (action & ATA_EH_RESET_MASK)
+ freeze = 1;
ata_ehi_push_desc(ehi, "%s", ci->desc);
} else {
err_mask |= AC_ERR_OTHER;
action |= ATA_EH_SOFTRESET;
+ freeze = 1;
ata_ehi_push_desc(ehi, "unknown command error %d",
cerr);
}
"packet purged",
"packet ageing timeout",
"channel ageing timeout",
- "calculated lenght error",
- "programmed lenght limit error",
+ "calculated length error",
+ "programmed length limit error",
"aal5 crc32 error",
"oam transp or transpc crc10 error",
"reserved 25",
};
-int __devinit idt77105_init(struct atm_dev *dev)
+int idt77105_init(struct atm_dev *dev)
{
dev->phy = &idt77105_ops;
return 0;
if (mac[i] == NULL)
nicstar_init_eprom(card->membase);
- if (request_irq(pcidev->irq, &ns_irq_handler, IRQF_DISABLED | IRQF_SHARED, "nicstar", card) != 0)
- {
- printk("nicstar%d: can't allocate IRQ %d.\n", i, pcidev->irq);
- error = 9;
- ns_init_card_error(card, error);
- return error;
- }
-
/* Set the VPI/VCI MSb mask to zero so we can receive OAM cells */
writel(0x00000000, card->membase + VPM);
card->iovpool.count++;
}
- card->intcnt = 0;
-
/* Configure NICStAR */
if (card->rct_size == 4096)
ns_cfg_rctsize = NS_CFG_RCTSIZE_4096_ENTRIES;
card->efbie = 1;
+ card->intcnt = 0;
+ if (request_irq(pcidev->irq, &ns_irq_handler, IRQF_DISABLED | IRQF_SHARED, "nicstar", card) != 0)
+ {
+ printk("nicstar%d: can't allocate IRQ %d.\n", i, pcidev->irq);
+ error = 9;
+ ns_init_card_error(card, error);
+ return error;
+ }
+
/* Register device */
card->atmdev = atm_dev_register("nicstar", &atm_ops, -1, NULL);
if (card->atmdev == NULL)
};
-int __devinit suni_init(struct atm_dev *dev)
+int suni_init(struct atm_dev *dev)
{
unsigned char mri;
return;
}
-static int cciss_pci_init(ctlr_info_t *c, struct pci_dev *pdev)
+static int __devinit cciss_pci_init(ctlr_info_t *c, struct pci_dev *pdev)
{
ushort subsystem_vendor_id, subsystem_device_id, command;
__u32 board_id, scratchpad = 0;
static int loop_switch(struct loop_device *lo, struct file *file)
{
struct switch_request w;
- struct bio *bio = bio_alloc(GFP_KERNEL, 1);
+ struct bio *bio = bio_alloc(GFP_KERNEL, 0);
if (!bio)
return -ENOMEM;
init_completion(&w.wait);
spin_lock_irqsave(&ll->hcill_lock, flags);
switch (ll->hcill_state) {
+ case HCILL_ASLEEP_TO_AWAKE:
+ /*
+ * This state means that both the host and the BRF chip
+ * have simultaneously sent a wake-up-indication packet.
+ * Traditionally, in this case, receiving a wake-up-indication
+ * was enough and an additional wake-up-ack wasn't needed.
+ * This has changed with the BRF6350, which does require an
+ * explicit wake-up-ack. Other BRF versions, which do not
+ * require an explicit ack here, do accept it, thus it is
+ * perfectly safe to always send one.
+ */
+ BT_DBG("dual wake-up-indication");
+ /* deliberate fall-through - do not add break */
case HCILL_ASLEEP:
/* acknowledge device wake up */
if (send_hcill_cmd(HCILL_WAKE_UP_ACK, hu) < 0) {
goto out;
}
break;
- case HCILL_ASLEEP_TO_AWAKE:
- /*
- * this state means that a wake-up-indication
- * is already on its way to the device,
- * and will serve as the required wake-up-ack
- */
- BT_DBG("dual wake-up-indication");
- break;
default:
- /* any other state are illegal */
+ /* any other state is illegal */
BT_ERR("received HCILL_WAKE_UP_IND in state %ld", ll->hcill_state);
break;
}
your Linux box, for instance in order to become a dial-in server.
For information about the Cyclades-Z card, read
- <file:drivers/char/README.cycladesZ>.
+ <file:Documentation/README.cycladesZ>.
To compile this driver as a module, choose M here: the
module will be called cyclades.
}
EXPORT_SYMBOL_GPL(tpm_remove_hardware);
-static u8 savestate[] = {
- 0, 193, /* TPM_TAG_RQU_COMMAND */
- 0, 0, 0, 10, /* blob length (in bytes) */
- 0, 0, 0, 152 /* TPM_ORD_SaveState */
-};
-
/*
* We are about to suspend. Save the TPM state
* so that it can be restored.
int tpm_pm_suspend(struct device *dev, pm_message_t pm_state)
{
struct tpm_chip *chip = dev_get_drvdata(dev);
+ u8 savestate[] = {
+ 0, 193, /* TPM_TAG_RQU_COMMAND */
+ 0, 0, 0, 10, /* blob length (in bytes) */
+ 0, 0, 0, 152 /* TPM_ORD_SaveState */
+ };
+
if (chip == NULL)
return -ENODEV;
if (!timeout)
timeout = MAX_SCHEDULE_TIMEOUT;
if (wait_event_interruptible_timeout(tty->write_wait,
- !tty->driver->chars_in_buffer(tty), timeout))
+ !tty->driver->chars_in_buffer(tty), timeout) < 0)
return;
if (tty->driver->wait_until_sent)
tty->driver->wait_until_sent(tty, timeout);
EXPORT_SYMBOL(tty_termios_copy_hw);
/**
+ * tty_termios_hw_change - check for setting change
+ * @a: termios
+ * @b: termios to compare
+ *
+ * Check if any of the bits that affect a dumb device have changed
+ * between the two termios structures, or a speed change is needed.
+ */
+
+int tty_termios_hw_change(struct ktermios *a, struct ktermios *b)
+{
+ if (a->c_ispeed != b->c_ispeed || a->c_ospeed != b->c_ospeed)
+ return 1;
+ if ((a->c_cflag ^ b->c_cflag) & ~(HUPCL | CREAD | CLOCAL))
+ return 1;
+ return 0;
+}
+EXPORT_SYMBOL(tty_termios_hw_change);
+
+/**
* change_termios - update termios values
* @tty: tty to update
* @new_termios: desired new value
spin_unlock_bh(&dev->queue_lock);
if (found) {
- atomic_dec(&dev->refcnt);
cn_queue_free_callback(cbq);
+ atomic_dec(&dev->refcnt);
return -EINVAL;
}
if (queue_work(dev->cbdev->cn_queue,
&__cbq->work))
err = 0;
+ else
+ err = -EINVAL;
} else {
struct cn_callback_data *d;
"optimised for use in a battery environment");
MODULE_LICENSE ("GPL");
+#ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE
+fs_initcall(cpufreq_gov_dbs_init);
+#else
module_init(cpufreq_gov_dbs_init);
+#endif
module_exit(cpufreq_gov_dbs_exit);
"Low Latency Frequency Transition capable processors");
MODULE_LICENSE("GPL");
+#ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND
+fs_initcall(cpufreq_gov_dbs_init);
+#else
module_init(cpufreq_gov_dbs_init);
+#endif
module_exit(cpufreq_gov_dbs_exit);
-
MODULE_DESCRIPTION ("CPUfreq policy governor 'userspace'");
MODULE_LICENSE ("GPL");
+#ifdef CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE
fs_initcall(cpufreq_gov_userspace_init);
+#else
+module_init(cpufreq_gov_userspace_init);
+#endif
module_exit(cpufreq_gov_userspace_exit);
/* ====== Encryption/decryption routines ====== */
/* These are the real call to PadLock. */
+static inline void padlock_xcrypt(const u8 *input, u8 *output, void *key,
+ void *control_word)
+{
+ asm volatile (".byte 0xf3,0x0f,0xa7,0xc8" /* rep xcryptecb */
+ : "+S"(input), "+D"(output)
+ : "d"(control_word), "b"(key), "c"(1));
+}
+
+static void aes_crypt_copy(const u8 *in, u8 *out, u32 *key, struct cword *cword)
+{
+ u8 buf[AES_BLOCK_SIZE * 2 + PADLOCK_ALIGNMENT - 1];
+ u8 *tmp = PTR_ALIGN(&buf[0], PADLOCK_ALIGNMENT);
+
+ memcpy(tmp, in, AES_BLOCK_SIZE);
+ padlock_xcrypt(tmp, out, key, cword);
+}
+
+static inline void aes_crypt(const u8 *in, u8 *out, u32 *key,
+ struct cword *cword)
+{
+ asm volatile ("pushfl; popfl");
+
+ /* padlock_xcrypt requires at least two blocks of data. */
+ if (unlikely(!(((unsigned long)in ^ (PAGE_SIZE - AES_BLOCK_SIZE)) &
+ (PAGE_SIZE - 1)))) {
+ aes_crypt_copy(in, out, key, cword);
+ return;
+ }
+
+ padlock_xcrypt(in, out, key, cword);
+}
+
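(Unpacking the boundary test in aes_crypt() above, under the standard assumption that PAGE_SIZE is a power of two: the expression ((unsigned long)in ^ (PAGE_SIZE - AES_BLOCK_SIZE)) & (PAGE_SIZE - 1) is zero exactly when in % PAGE_SIZE == PAGE_SIZE - AES_BLOCK_SIZE, i.e. when the single 16-byte block is the last one in its page; with 4096-byte pages that is offset 4080. Because the xcrypt instruction fetches at least two blocks, crypting such a block in place could touch the next, possibly unmapped, page, so it is bounced through the stack buffer in aes_crypt_copy() instead.)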
static inline void padlock_xcrypt_ecb(const u8 *input, u8 *output, void *key,
void *control_word, u32 count)
{
+ if (count == 1) {
+ aes_crypt(input, output, key, control_word);
+ return;
+ }
+
asm volatile ("pushfl; popfl"); /* enforce key reload. */
- asm volatile (".byte 0xf3,0x0f,0xa7,0xc8" /* rep xcryptecb */
+ asm volatile ("test $1, %%cl;"
+ "je 1f;"
+ "lea -1(%%ecx), %%eax;"
+ "mov $1, %%ecx;"
+ ".byte 0xf3,0x0f,0xa7,0xc8;" /* rep xcryptecb */
+ "mov %%eax, %%ecx;"
+ "1:"
+ ".byte 0xf3,0x0f,0xa7,0xc8" /* rep xcryptecb */
: "+S"(input), "+D"(output)
- : "d"(control_word), "b"(key), "c"(count));
+ : "d"(control_word), "b"(key), "c"(count)
+ : "ax");
}
static inline u8 *padlock_xcrypt_cbc(const u8 *input, u8 *output, void *key,
static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
struct aes_ctx *ctx = aes_ctx(tfm);
- padlock_xcrypt_ecb(in, out, ctx->E, &ctx->cword.encrypt, 1);
+ aes_crypt(in, out, ctx->E, &ctx->cword.encrypt);
}
static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
struct aes_ctx *ctx = aes_ctx(tfm);
- padlock_xcrypt_ecb(in, out, ctx->D, &ctx->cword.decrypt, 1);
+ aes_crypt(in, out, ctx->D, &ctx->cword.decrypt);
}
static struct crypto_alg aes_alg = {
extern int dmi_available;
-static int __init dmi_id_init(void)
+/* In a separate function to keep gcc 3.2 happy - do NOT merge this in
+ dmi_id_init! */
+static void __init dmi_id_init_attr_table(void)
{
- int ret, i;
-
- if (!dmi_available)
- return -ENODEV;
+ int i;
/* Not necessarily all DMI fields are available on all
* systems, hence let's build an attribute table of just
ADD_DMI_ATTR(chassis_serial, DMI_CHASSIS_SERIAL);
ADD_DMI_ATTR(chassis_asset_tag, DMI_CHASSIS_ASSET_TAG);
sys_dmi_attributes[i++] = &sys_dmi_modalias_attr.attr;
+}
+
+static int __init dmi_id_init(void)
+{
+ int ret;
+
+ if (!dmi_available)
+ return -ENODEV;
+
+ dmi_id_init_attr_table();
ret = class_register(&dmi_class);
if (ret)
it87.c - Part of lm_sensors, Linux kernel modules for hardware
monitoring.
+ The IT8705F is an LPC-based Super I/O part that contains UARTs, a
+ parallel port, an IR port, a MIDI port, a floppy controller, etc., in
+ addition to an Environment Controller (Enhanced Hardware Monitor and
+ Fan Controller).
+
+ This driver supports only the Environment Controller in the IT8705F and
+ similar parts. The other devices are supported by different drivers.
+
Supports: IT8705F Super I/O chip w/LPC interface
IT8712F Super I/O chip w/LPC interface
IT8716F Super I/O chip w/LPC interface
/* Length of ISA address segment */
#define IT87_EXTENT 8
-/* Where are the ISA address/data registers relative to the base address */
-#define IT87_ADDR_REG_OFFSET 5
-#define IT87_DATA_REG_OFFSET 6
+/* Length of ISA address segment for Environmental Controller */
+#define IT87_EC_EXTENT 2
+
+/* Offset of EC registers from ISA base address */
+#define IT87_EC_OFFSET 5
+
+/* Where are the ISA address/data registers relative to the EC base address */
+#define IT87_ADDR_REG_OFFSET 0
+#define IT87_DATA_REG_OFFSET 1
/*----- The IT87 registers -----*/
};
res = platform_get_resource(pdev, IORESOURCE_IO, 0);
- if (!request_region(res->start, IT87_EXTENT, DRVNAME)) {
+ if (!request_region(res->start, IT87_EC_EXTENT, DRVNAME)) {
dev_err(dev, "Failed to request region 0x%lx-0x%lx\n",
(unsigned long)res->start,
- (unsigned long)(res->start + IT87_EXTENT - 1));
+ (unsigned long)(res->start + IT87_EC_EXTENT - 1));
err = -EBUSY;
goto ERROR0;
}
platform_set_drvdata(pdev, NULL);
kfree(data);
ERROR1:
- release_region(res->start, IT87_EXTENT);
+ release_region(res->start, IT87_EC_EXTENT);
ERROR0:
return err;
}
sysfs_remove_group(&pdev->dev.kobj, &it87_group);
sysfs_remove_group(&pdev->dev.kobj, &it87_group_opt);
- release_region(data->addr, IT87_EXTENT);
+ release_region(data->addr, IT87_EC_EXTENT);
platform_set_drvdata(pdev, NULL);
kfree(data);
const struct it87_sio_data *sio_data)
{
struct resource res = {
- .start = address ,
- .end = address + IT87_EXTENT - 1,
+ .start = address + IT87_EC_OFFSET,
+ .end = address + IT87_EC_OFFSET + IT87_EC_EXTENT - 1,
.name = DRVNAME,
.flags = IORESOURCE_IO,
};
data->vrm = vid_which_vrm();
superio_enter(sio_data->sioreg);
- /* Set VID input sensibility if needed. In theory the BIOS should
- have set it, but in practice it's not always the case. */
- en_vrm10 = superio_inb(sio_data->sioreg, SIO_REG_EN_VRM10);
- if ((en_vrm10 & 0x08) && data->vrm != 100) {
- dev_warn(dev, "Setting VID input voltage to TTL\n");
- superio_outb(sio_data->sioreg, SIO_REG_EN_VRM10,
- en_vrm10 & ~0x08);
- } else if (!(en_vrm10 & 0x08) && data->vrm == 100) {
- dev_warn(dev, "Setting VID input voltage to VRM10\n");
- superio_outb(sio_data->sioreg, SIO_REG_EN_VRM10,
- en_vrm10 | 0x08);
- }
/* Read VID value */
superio_select(sio_data->sioreg, W83627EHF_LD_HWM);
- if (superio_inb(sio_data->sioreg, SIO_REG_VID_CTRL) & 0x80)
+ if (superio_inb(sio_data->sioreg, SIO_REG_VID_CTRL) & 0x80) {
+ /* Set VID input sensibility if needed. In theory the BIOS
+ should have set it, but in practice it's not always the
+ case. We only do it for the W83627EHF/EHG because the
+ W83627DHG is more complex in this respect. */
+ if (sio_data->kind == w83627ehf) {
+ en_vrm10 = superio_inb(sio_data->sioreg,
+ SIO_REG_EN_VRM10);
+ if ((en_vrm10 & 0x08) && data->vrm == 90) {
+ dev_warn(dev, "Setting VID input voltage to "
+ "TTL\n");
+ superio_outb(sio_data->sioreg, SIO_REG_EN_VRM10,
+ en_vrm10 & ~0x08);
+ } else if (!(en_vrm10 & 0x08) && data->vrm == 100) {
+ dev_warn(dev, "Setting VID input voltage to "
+ "VRM10\n");
+ superio_outb(sio_data->sioreg, SIO_REG_EN_VRM10,
+ en_vrm10 | 0x08);
+ }
+ }
+
data->vid = superio_inb(sio_data->sioreg, SIO_REG_VID_DATA) & 0x3f;
- else {
+ } else {
dev_info(dev, "VID pins in output mode, CPU VID not "
"available\n");
data->vid = 0x3f;
* Generic i2c master transfer entrypoint.
*
* Note: We do not use Atmel's feature of storing the "internal device address".
- * Instead the "internal device address" has to be written using a seperate
+ * Instead the "internal device address" has to be written using a separate
* i2c message.
* http://lists.arm.linux.org.uk/pipermail/linux-arm-kernel/2004-September/024411.html
*/
err_free_irq:
free_irq(dev->irq, dev);
err_unuse_clocks:
+ omap_i2c_write_reg(dev, OMAP_I2C_CON_REG, 0);
omap_i2c_disable_clocks(dev);
omap_i2c_put_clocks(dev);
err_free_mem:
platform_set_drvdata(pdev, NULL);
kfree(dev);
err_release_region:
- omap_i2c_write_reg(dev, OMAP_I2C_CON_REG, 0);
release_mem_region(mem->start, (mem->end - mem->start) + 1);
return r;
break;
/* Note that these are broken vs. the expected smbus API where
- * on reads, the lenght is actually returned from the function,
+ * on reads, the length is actually returned from the function,
* but I think the current API makes no sense and I don't want
* any driver that I haven't verified for correctness to go
* anywhere near a pmac i2c bus anyway ...
static int __init i2c_sibyte_init(void)
{
- printk("i2c-swarm.o: i2c SMBus adapter module for SiByte board\n");
+ pr_info("i2c-sibyte: i2c SMBus adapter module for SiByte board\n");
if (i2c_sibyte_add_bus(&sibyte_board_adapter[0], K_SMB_FREQ_100KHZ) < 0)
return -ENODEV;
- if (i2c_sibyte_add_bus(&sibyte_board_adapter[1], K_SMB_FREQ_400KHZ) < 0)
+ if (i2c_sibyte_add_bus(&sibyte_board_adapter[1],
+ K_SMB_FREQ_400KHZ) < 0) {
+ i2c_del_adapter(&sibyte_board_adapter[0]);
return -ENODEV;
+ }
return 0;
}
/* This address checking function differs from the one in i2c-core
in that it considers an address with a registered device, but no
- bounded driver, as NOT busy. */
+ bound driver, as NOT busy. */
static int i2cdev_check_addr(struct i2c_adapter *adapter, unsigned int addr)
{
struct list_head *item;
#include <acpi/acpi.h>
#include <linux/ide.h>
#include <linux/pci.h>
+#include <linux/dmi.h>
#include <acpi/acpi_bus.h>
#include <acpi/acnames.h>
extern int ide_noacpitfs;
extern int ide_noacpionboot;
+static bool ide_noacpi_psx;
+static int no_acpi_psx(const struct dmi_system_id *id)
+{
+ ide_noacpi_psx = true;
+ printk(KERN_NOTICE "%s detected - disable ACPI _PSx.\n", id->ident);
+ return 0;
+}
+
+static const struct dmi_system_id ide_acpi_dmi_table[] = {
+ /* Bug 9673. */
+ /* We should check if this is because ACPI NVS isn't saved/restored. */
+ {
+ .callback = no_acpi_psx,
+ .ident = "HP nx9005",
+ .matches = {
+ DMI_MATCH(DMI_BIOS_VENDOR, "Phoenix Technologies Ltd."),
+ DMI_MATCH(DMI_BIOS_VERSION, "KAM1.60")
+ },
+ },
+
+ { } /* terminate list */
+};
+
+static int ide_acpi_blacklist(void)
+{
+ static int done;
+ if (done)
+ return 0;
+ done = 1;
+ dmi_check_system(ide_acpi_dmi_table);
+ return 0;
+}
+
/**
* ide_get_dev_handle - finds acpi_handle and PCI device.function
* @dev: device to locate
{
int unit;
- if (ide_noacpi)
+ if (ide_noacpi || ide_noacpi_psx)
return;
DEBPRINT("ENTER:\n");
struct ide_acpi_drive_link *master;
struct ide_acpi_drive_link *slave;
+ ide_acpi_blacklist();
+
hwif->acpidata = kzalloc(sizeof(struct ide_acpi_hwif_link), GFP_KERNEL);
if (!hwif->acpidata)
return;
return 0;
else if (ireason == 0) {
/* Whoops... The drive is expecting to receive data from us! */
- printk(KERN_ERR "%s: read_intr: Drive wants to transfer data the "
- "wrong way!\n", drive->name);
+ printk(KERN_ERR "%s: %s: wrong transfer direction!\n",
+ drive->name, __FUNCTION__);
/* Throw some data at the drive so it doesn't hang
and quit this request. */
return 0;
} else {
/* Drive wants a command packet, or invalid ireason... */
- printk(KERN_ERR "%s: read_intr: bad interrupt reason %x\n", drive->name,
- ireason);
+ printk(KERN_ERR "%s: %s: bad interrupt reason 0x%02x\n",
+ drive->name, __FUNCTION__, ireason);
}
cdrom_end_request(drive, 0);
*/
if (dma) {
info->dma = 0;
- if ((dma_error = HWIF(drive)->ide_dma_end(drive)))
+ dma_error = HWIF(drive)->ide_dma_end(drive);
+ if (dma_error) {
+ printk(KERN_ERR "%s: DMA read error\n", drive->name);
ide_dma_off(drive);
+ }
}
if (cdrom_decode_status(drive, 0, &stat))
return ide_stopped;
/* Read the interrupt reason and the transfer length. */
- ireason = HWIF(drive)->INB(IDE_IREASON_REG);
+ ireason = HWIF(drive)->INB(IDE_IREASON_REG) & 0x3;
lowcyl = HWIF(drive)->INB(IDE_BCOUNTL_REG);
highcyl = HWIF(drive)->INB(IDE_BCOUNTH_REG);
if (thislen > len) thislen = len;
/* The drive wants to be written to. */
- if ((ireason & 3) == 0) {
+ if (ireason == 0) {
if (!rq->data) {
blk_dump_rq_flags(rq, "cdrom_pc_intr, write");
goto confused;
}
/* Same drill for reading. */
- else if ((ireason & 3) == 2) {
+ else if (ireason == 2) {
if (!rq->data) {
- blk_dump_rq_flags(rq, "cdrom_pc_intr, write");
+ blk_dump_rq_flags(rq, "cdrom_pc_intr, read");
goto confused;
}
/* Transfer the data. */
return 0;
else if (ireason == 2) {
/* Whoops... The drive wants to send data. */
- printk(KERN_ERR "%s: write_intr: wrong transfer direction!\n",
- drive->name);
+ printk(KERN_ERR "%s: %s: wrong transfer direction!\n",
+ drive->name, __FUNCTION__);
while (len > 0) {
int dum = 0;
}
} else {
/* Drive wants a command packet, or invalid ireason... */
- printk(KERN_ERR "%s: write_intr: bad interrupt reason %x\n",
- drive->name, ireason);
+ printk(KERN_ERR "%s: %s: bad interrupt reason 0x%02x\n",
+ drive->name, __FUNCTION__, ireason);
}
cdrom_end_request(drive, 0);
/* Check for errors. */
if (dma) {
info->dma = 0;
- if ((dma_error = HWIF(drive)->ide_dma_end(drive))) {
- printk(KERN_ERR "ide-cd: write dma error\n");
+ dma_error = HWIF(drive)->ide_dma_end(drive);
+ if (dma_error) {
+ printk(KERN_ERR "%s: DMA write error\n", drive->name);
ide_dma_off(drive);
}
}
}
/* Read the interrupt reason and the transfer length. */
- ireason = HWIF(drive)->INB(IDE_IREASON_REG);
+ ireason = HWIF(drive)->INB(IDE_IREASON_REG) & 0x3;
lowcyl = HWIF(drive)->INB(IDE_BCOUNTL_REG);
highcyl = HWIF(drive)->INB(IDE_BCOUNTH_REG);
*/
uptodate = 1;
if (rq->current_nr_sectors > 0) {
- printk(KERN_ERR "%s: write_intr: data underrun (%d blocks)\n",
- drive->name, rq->current_nr_sectors);
+ printk(KERN_ERR "%s: %s: data underrun (%d blocks)\n",
+ drive->name, __FUNCTION__,
+ rq->current_nr_sectors);
uptodate = 0;
}
cdrom_end_request(drive, uptodate);
int this_transfer;
if (!rq->current_nr_sectors) {
- printk(KERN_ERR "ide-cd: write_intr: oops\n");
+ printk(KERN_ERR "%s: %s: confused, missing data\n",
+ drive->name, __FUNCTION__);
break;
}
if (!drive->id->model[0] &&
!strncmp(drive->id->fw_rev, "241N", 4)) {
CDROM_STATE_FLAGS(drive)->current_speed =
- (((unsigned int)cap->curspeed) + (176/2)) / 176;
+ (le16_to_cpu(cap->curspeed) + (176/2)) / 176;
CDROM_CONFIG_FLAGS(drive)->max_speed =
- (((unsigned int)cap->maxspeed) + (176/2)) / 176;
+ (le16_to_cpu(cap->maxspeed) + (176/2)) / 176;
} else {
CDROM_STATE_FLAGS(drive)->current_speed =
- (ntohs(cap->curspeed) + (176/2)) / 176;
+ (be16_to_cpu(cap->curspeed) + (176/2)) / 176;
CDROM_CONFIG_FLAGS(drive)->max_speed =
- (ntohs(cap->maxspeed) + (176/2)) / 176;
+ (be16_to_cpu(cap->maxspeed) + (176/2)) / 176;
}
}
if (!CDROM_CONFIG_FLAGS(drive)->ram)
devinfo->mask |= CDC_RAM;
+ if (CDROM_CONFIG_FLAGS(drive)->no_speed_select)
+ devinfo->mask |= CDC_SELECT_SPEED;
+
devinfo->disk = info->disk;
return register_cdrom(devinfo);
}
CDROM_CONFIG_FLAGS(drive)->limit_nframes = 1;
/* the 3231 model does not support the SET_CD_SPEED command */
else if (!strcmp(drive->id->model, "SAMSUNG CD-ROM SCR-3231"))
- cdi->mask |= CDC_SELECT_SPEED;
+ CDROM_CONFIG_FLAGS(drive)->no_speed_select = 1;
#if ! STANDARD_ATAPI
/* by default Sanyo 3 CD changer support is turned off and
g->driverfs_dev = &drive->gendev;
g->flags = GENHD_FL_CD | GENHD_FL_REMOVABLE;
if (ide_cdrom_setup(drive)) {
- struct cdrom_device_info *devinfo = &info->devinfo;
ide_proc_unregister_driver(drive, &ide_cdrom_driver);
- kfree(info->buffer);
- kfree(info->toc);
- kfree(info->changer_info);
- if (devinfo->handle == drive && unregister_cdrom(devinfo))
- printk (KERN_ERR "%s: ide_cdrom_cleanup failed to unregister device from the cdrom driver.\n", drive->name);
- kfree(info);
- drive->driver_data = NULL;
+ ide_cd_release(&info->kref);
goto failed;
}
__u8 close_tray : 1; /* can close the tray */
__u8 writing : 1; /* pseudo write in progress */
__u8 mo_drive : 1; /* drive is an MO device */
- __u8 reserved : 2;
+ __u8 no_speed_select : 1; /* SET_CD_SPEED command is unsupported. */
+ __u8 reserved : 1;
byte max_speed; /* Max speed of the drive */
};
#define CDROM_CONFIG_FLAGS(drive) (&(((struct cdrom_info *)(drive->driver_data))->config_flags))
printk(KERN_DEBUG "%s: skipping word 93 validity check\n",
drive->name);
+ if (ide_dev_is_sata(id) && !ivb)
+ return 1;
+
if (hwif->cbl != ATA_CBL_PATA80 && !ivb)
goto no_80w;
- if (ide_dev_is_sata(id))
- return 1;
-
/*
* FIXME:
* - force bit13 (80c cable present) check also for !ivb devices
/*
- * linux/drivers/ide/pci/cmd64x.c Version 1.51 Nov 8, 2007
+ * linux/drivers/ide/pci/cmd64x.c Version 1.52 Dec 24, 2007
*
* cmd64x.c: Enable interrupts at initialization time on Ultra/PCI machines.
* Due to massive hardware bugs, UltraDMA is only supported
.init_chipset = init_chipset_cmd64x,
.init_hwif = init_hwif_cmd64x,
.enablebits = {{0x51,0x04,0x04}, {0x51,0x08,0x08}},
+ .chipset = ide_cmd646,
.host_flags = IDE_HFLAG_ABUSE_PREFETCH | IDE_HFLAG_BOOTABLE,
.pio_mask = ATA_PIO5,
.mwdma_mask = ATA_MWDMA2,
.init_chipset = init_chipset_cmd64x,
.init_hwif = init_hwif_cmd64x,
.enablebits = {{0x51,0x04,0x04}, {0x51,0x08,0x08}},
- .chipset = ide_cmd646,
.host_flags = IDE_HFLAG_ABUSE_PREFETCH | IDE_HFLAG_BOOTABLE,
.pio_mask = ATA_PIO5,
.mwdma_mask = ATA_MWDMA2,
#define ATAC_BM0_PRD 0x04
#define CS5535_CABLE_DETECT 0x48
-/* Format I PIO settings. We seperate out cmd and data for safer timings */
+/* Format I PIO settings. We separate out cmd and data for safer timings */
static unsigned int cs5535_pio_cmd_timings[5] =
{ 0xF7F4, 0x53F3, 0x13F1, 0x5131, 0x1131 };
/*
- * linux/drivers/ide/pci/trm290.c Version 1.02 Mar. 18, 2000
+ * linux/drivers/ide/pci/trm290.c Version 1.05 Dec. 26, 2007
*
* Copyright (c) 1997-1998 Mark Lord
+ * Copyright (c) 2007 MontaVista Software, Inc. <source@mvista.com>
* May be copied or modified under the terms of the GNU General Public License
*
* June 22, 2004 - get rid of check_region
trm290_prepare_drive(drive, drive->using_dma);
}
-static void trm290_ide_dma_exec_cmd(ide_drive_t *drive, u8 command)
+static void trm290_dma_exec_cmd(ide_drive_t *drive, u8 command)
{
BUG_ON(HWGROUP(drive)->handler != NULL); /* paranoia check */
ide_set_handler(drive, &ide_dma_intr, WAIT_CMD, NULL);
outb(command, IDE_COMMAND_REG);
}
-static int trm290_ide_dma_setup(ide_drive_t *drive)
+static int trm290_dma_setup(ide_drive_t *drive)
{
ide_hwif_t *hwif = drive->hwif;
struct request *rq = hwif->hwgroup->rq;
return 0;
}
-static void trm290_ide_dma_start(ide_drive_t *drive)
+static void trm290_dma_start(ide_drive_t *drive)
{
}
return (status == 0x00ff);
}
+static void trm290_dma_host_on(ide_drive_t *drive)
+{
+}
+
+static void trm290_dma_host_off(ide_drive_t *drive)
+{
+}
+
static void __devinit init_hwif_trm290(ide_hwif_t *hwif)
{
unsigned int cfgbase = 0;
ide_setup_dma(hwif, (hwif->config_data + 4) ^ (hwif->channel ? 0x0080 : 0x0000), 3);
- hwif->dma_setup = &trm290_ide_dma_setup;
- hwif->dma_exec_cmd = &trm290_ide_dma_exec_cmd;
- hwif->dma_start = &trm290_ide_dma_start;
- hwif->ide_dma_end = &trm290_ide_dma_end;
- hwif->ide_dma_test_irq = &trm290_ide_dma_test_irq;
+ hwif->dma_host_off = &trm290_dma_host_off;
+ hwif->dma_host_on = &trm290_dma_host_on;
+ hwif->dma_setup = &trm290_dma_setup;
+ hwif->dma_exec_cmd = &trm290_dma_exec_cmd;
+ hwif->dma_start = &trm290_dma_start;
+ hwif->ide_dma_end = &trm290_ide_dma_end;
+ hwif->ide_dma_test_irq = &trm290_ide_dma_test_irq;
hwif->selectproc = &trm290_selectproc;
#if 1
}
}
+ /*
+ * The opcode is in the low byte when it's in network order
+ * (top byte when in host order).
+ */
+ opcode = be32_to_cpu(ohdr->bth[0]) >> 24;
+ if (qp->ibqp.qp_num > 1 &&
+ opcode == IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE) {
+ if (header_in_data) {
+ wc.imm_data = *(__be32 *) data;
+ data += sizeof(__be32);
+ } else
+ wc.imm_data = ohdr->u.ud.imm_data;
+ wc.wc_flags = IB_WC_WITH_IMM;
+ hdrsize += sizeof(u32);
+ } else if (opcode == IB_OPCODE_UD_SEND_ONLY) {
+ wc.imm_data = 0;
+ wc.wc_flags = 0;
+ } else {
+ dev->n_pkt_drops++;
+ goto bail;
+ }
+
/* Get the number of bytes the message was padded by. */
pad = (be32_to_cpu(ohdr->bth[0]) >> 20) & 3;
if (unlikely(tlen < (hdrsize + pad + 4))) {
wc.byte_len = tlen + sizeof(struct ib_grh);
/*
- * The opcode is in the low byte when its in network order
- * (top byte when in host order).
- */
- opcode = be32_to_cpu(ohdr->bth[0]) >> 24;
- if (qp->ibqp.qp_num > 1 &&
- opcode == IB_OPCODE_UD_SEND_ONLY_WITH_IMMEDIATE) {
- if (header_in_data) {
- wc.imm_data = *(__be32 *) data;
- data += sizeof(__be32);
- } else
- wc.imm_data = ohdr->u.ud.imm_data;
- wc.wc_flags = IB_WC_WITH_IMM;
- hdrsize += sizeof(u32);
- } else if (opcode == IB_OPCODE_UD_SEND_ONLY) {
- wc.imm_data = 0;
- wc.wc_flags = 0;
- } else {
- dev->n_pkt_drops++;
- goto bail;
- }
-
- /*
* Get the next work request entry to find where to put the data.
*/
if (qp->r_reuse_sge)
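The relocated block above and the pad extraction a few lines earlier both decode fields of the first BTH word after a single be32_to_cpu() conversion. A sketch of the layout being relied on (bit positions per the InfiniBand BTH definition; the local names are illustrative):

	u32 bth0 = be32_to_cpu(ohdr->bth[0]);
	u8 opcode = bth0 >> 24;		/* OpCode, bits 31:24 */
	u8 pad = (bth0 >> 20) & 3;	/* PadCnt, bits 21:20 */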
wc->dlid_path_bits = (be32_to_cpu(cqe->g_mlpath_rqpn) >> 24) & 0x7f;
wc->wc_flags |= be32_to_cpu(cqe->g_mlpath_rqpn) & 0x80000000 ?
IB_WC_GRH : 0;
- wc->pkey_index = be32_to_cpu(cqe->immed_rss_invalid) >> 16;
+ wc->pkey_index = be32_to_cpu(cqe->immed_rss_invalid) & 0x7f;
}
return 0;
list_for_each_entry_safe(target, tmp_target,
&host->target_list, list) {
+ srp_remove_host(target->scsi_host);
scsi_remove_host(target->scsi_host);
srp_disconnect_target(target);
ib_destroy_cm_id(target->cm_id);
EXPORT_SYMBOL(gameport_open);
EXPORT_SYMBOL(gameport_close);
EXPORT_SYMBOL(gameport_rescan);
-EXPORT_SYMBOL(gameport_cooked_read);
-EXPORT_SYMBOL(gameport_set_name);
EXPORT_SYMBOL(gameport_set_phys);
EXPORT_SYMBOL(gameport_start_polling);
EXPORT_SYMBOL(gameport_stop_polling);
if (value >= 0)
disposition = INPUT_PASS_TO_ALL;
break;
+
+ case EV_PWR:
+ disposition = INPUT_PASS_TO_ALL;
+ break;
}
if (type != EV_SYN)
__set_bit(code, dev->ffbit);
break;
+ case EV_PWR:
+ /* do nothing */
+ break;
+
default:
printk(KERN_ERR
"input_set_capability: unknown type %u (code %u)\n",
to your machine, so normally you should say Y here.
config KEYBOARD_HP6XX
- tristate "HP Jornada 6XX Keyboard support"
+ tristate "HP Jornada 6xx keyboard"
depends on SH_HP6XX
select INPUT_POLLDEV
help
- This adds support for the onboard keyboard found on
- HP Jornada 620/660/680/690.
+ Say Y here if you have an HP Jornada 620/660/680/690 and want to
+ support the built-in keyboard.
To compile this driver as a module, choose M here: the
module will be called jornada680_kbd.
config KEYBOARD_HP7XX
- tristate "HP Jornada 7XX Keyboard Driver"
+ tristate "HP Jornada 7xx keyboard"
depends on SA1100_JORNADA720_SSP && SA1100_SSP
help
- Say Y here to add support for the HP Jornada 7xx (710/720/728)
- onboard keyboard.
+ Say Y here if you have an HP Jornada 710/720/728 and want to
+ support the built-in keyboard.
To compile this driver as a module, choose M here: the
module will be called jornada720_kbd.
* published by the Free Software Foundation.
*/
-#include <linux/input.h>
-#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/init.h>
+#include <linux/input.h>
#include <linux/input-polldev.h>
+#include <linux/interrupt.h>
#include <linux/jiffies.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
#include <linux/platform_device.h>
-#include <linux/interrupt.h>
#include <asm/delay.h>
#include <asm/io.h>
#define PLDR 0xa4000134
static const unsigned short jornada_scancodes[] = {
-/* PTD1 */ KEY_CAPSLOCK, KEY_MACRO, KEY_LEFTCTRL, 0, KEY_ESC, 0, 0, 0, /* 1 -> 8 */
- KEY_F1, KEY_F2, KEY_F3, KEY_F8, KEY_F7, KEY_F2, KEY_F4, KEY_F5, /* 9 -> 16 */
-/* PTD5 */ KEY_SLASH, KEY_APOSTROPHE, KEY_ENTER, 0, KEY_Z, 0, 0, 0, /* 17 -> 24 */
- KEY_X, KEY_C, KEY_V, KEY_DOT, KEY_COMMA, KEY_M, KEY_B, KEY_N, /* 25 -> 32 */
-/* PTD7 */ KEY_KP2, KEY_KP6, 0, 0, 0, 0, 0, 0, /* 33 -> 40 */
- 0, 0, 0, KEY_KP4, 0, 0, KEY_LEFTALT, KEY_HANJA, /* 41 -> 48 */
-/* PTE0 */ 0, 0, 0, 0, KEY_FINANCE, 0, 0, 0, /* 49 -> 56 */
- KEY_LEFTCTRL, 0, KEY_SPACE, KEY_KPDOT, KEY_VOLUMEUP, 249, 0, 0, /* 57 -> 64 */
-/* PTE1 */ KEY_SEMICOLON, KEY_RIGHTBRACE, KEY_BACKSLASH, 0, KEY_A, 0, 0, 0,/* 65 -> 72 */
- KEY_S, KEY_D, KEY_F, KEY_L, KEY_K, KEY_J, KEY_G, KEY_H, /* 73 -> 80 */
-/* PTE3 */ KEY_KP8, KEY_LEFTMETA, KEY_RIGHTSHIFT, 0, KEY_TAB, 0, 0,0, /* 81 -> 88 */
- 0, KEY_LEFTSHIFT, 0, 0, 0, 0, 0, 0, /* 89 -> 96 */
-/* PTE6 */ KEY_P, KEY_LEFTBRACE, KEY_BACKSPACE, 0, KEY_Q, 0, 0, 0, /* 97 -> 104 */
- KEY_W, KEY_E, KEY_R, KEY_O, KEY_I, KEY_U, KEY_T, KEY_R, /* 105 -> 112 */
-/* PTE7 */ KEY_0, KEY_MINUS, KEY_EQUAL, 0, KEY_1, 0, 0, 0, /* 113 -> 120 */
- KEY_2, KEY_3, KEY_4, KEY_9, KEY_8, KEY_7, KEY_5, KEY_6, /* 121 -> 128 */
+/* PTD1 */ KEY_CAPSLOCK, KEY_MACRO, KEY_LEFTCTRL, 0, KEY_ESC, KEY_KP5, 0, 0, /* 1 -> 8 */
+ KEY_F1, KEY_F2, KEY_F3, KEY_F8, KEY_F7, KEY_F6, KEY_F4, KEY_F5, /* 9 -> 16 */
+/* PTD5 */ KEY_SLASH, KEY_APOSTROPHE, KEY_ENTER, 0, KEY_Z, 0, 0, 0, /* 17 -> 24 */
+ KEY_X, KEY_C, KEY_V, KEY_DOT, KEY_COMMA, KEY_M, KEY_B, KEY_N, /* 25 -> 32 */
+/* PTD7 */ KEY_KP2, KEY_KP6, KEY_KP3, 0, 0, 0, 0, 0, /* 33 -> 40 */
+ KEY_F10, KEY_RO, KEY_F9, KEY_KP4, KEY_NUMLOCK, KEY_SCROLLLOCK, KEY_LEFTALT, KEY_HANJA, /* 41 -> 48 */
+/* PTE0 */ KEY_KATAKANA, KEY_KP0, KEY_GRAVE, 0, KEY_FINANCE, 0, 0, 0, /* 49 -> 56 */
+ KEY_KPMINUS, KEY_HIRAGANA, KEY_SPACE, KEY_KPDOT, KEY_VOLUMEUP, 249, 0, 0, /* 57 -> 64 */
+/* PTE1 */ KEY_SEMICOLON, KEY_RIGHTBRACE, KEY_BACKSLASH, 0, KEY_A, 0, 0, 0, /* 65 -> 72 */
+ KEY_S, KEY_D, KEY_F, KEY_L, KEY_K, KEY_J, KEY_G, KEY_H, /* 73 -> 80 */
+/* PTE3 */ KEY_KP8, KEY_LEFTMETA, KEY_RIGHTSHIFT, 0, KEY_TAB, 0, 0, 0, /* 81 -> 88 */
+ 0, KEY_LEFTSHIFT, KEY_KP7, KEY_KP9, KEY_KP1, KEY_F11, KEY_KPPLUS, KEY_KPASTERISK, /* 89 -> 96 */
+/* PTE6 */ KEY_P, KEY_LEFTBRACE, KEY_BACKSPACE, 0, KEY_Q, 0, 0, 0, /* 97 -> 104 */
+ KEY_W, KEY_E, KEY_R, KEY_O, KEY_I, KEY_U, KEY_T, KEY_Y, /* 105 -> 112 */
+/* PTE7 */ KEY_0, KEY_MINUS, KEY_EQUAL, 0, KEY_1, 0, 0, 0, /* 113 -> 120 */
+ KEY_2, KEY_3, KEY_4, KEY_9, KEY_8, KEY_7, KEY_5, KEY_6, /* 121 -> 128 */
/* **** */ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0
};
for (i = 0; i < ARRAY_SIZE(spitzkbd_keycode); i++)
set_bit(spitzkbd->keycode[i], input_dev->keybit);
clear_bit(0, input_dev->keybit);
+ set_bit(KEY_SUSPEND, input_dev->keybit);
set_bit(SW_LID, input_dev->swbit);
set_bit(SW_TABLET_MODE, input_dev->swbit);
set_bit(SW_HEADPHONE_INSERT, input_dev->swbit);
{ { 0x20, 0x02, 0x0e }, 0xf8, 0xf8, ALPS_PASS | ALPS_DUALPOINT }, /* XXX */
{ { 0x22, 0x02, 0x0a }, 0xf8, 0xf8, ALPS_PASS | ALPS_DUALPOINT },
{ { 0x22, 0x02, 0x14 }, 0xff, 0xff, ALPS_PASS | ALPS_DUALPOINT }, /* Dell Latitude D600 */
- { { 0x73, 0x02, 0x50 }, 0xcf, 0xff, ALPS_FW_BK_1 } /* Dell Vostro 1400 */
+ { { 0x73, 0x02, 0x50 }, 0xcf, 0xcf, ALPS_FW_BK_1 } /* Dell Vostro 1400 */
};
/*
static void lifebook_disconnect(struct psmouse *psmouse)
{
+ struct lifebook_data *priv = psmouse->private;
+
psmouse_reset(psmouse);
- kfree(psmouse->private);
+ if (priv) {
+ input_unregister_device(priv->dev2);
+ kfree(priv);
+ }
psmouse->private = NULL;
}
err_pt_deactivate:
if (parent && parent->pt_deactivate)
parent->pt_deactivate(parent);
+ input_unregister_device(psmouse->dev);
+ input_dev = NULL; /* so we don't try to free it below */
err_protocol_disconnect:
if (psmouse->disconnect)
psmouse->disconnect(psmouse);
BIT_MASK(ABS_PRESSURE) |
BIT_MASK(ABS_TOOL_WIDTH) },
}, /* A touchpad */
+ {
+ .flags = INPUT_DEVICE_ID_MATCH_EVBIT |
+ INPUT_DEVICE_ID_MATCH_KEYBIT |
+ INPUT_DEVICE_ID_MATCH_ABSBIT,
+ .evbit = { BIT(EV_KEY) | BIT(EV_ABS) | BIT(EV_SYN) },
+ .keybit = { [BIT_WORD(BTN_LEFT)] = BIT_MASK(BTN_LEFT) },
+ .absbit = { BIT_MASK(ABS_X) | BIT_MASK(ABS_Y) },
+ }, /* Mouse-like device with absolute X and Y but ordinary
+ clicks, like hp ILO2 High Performance mouse */
{ }, /* Terminating entry */
};
module will be called mk712.
config TOUCHSCREEN_HP600
- tristate "HP Jornada 680/690 touchscreen"
+ tristate "HP Jornada 6xx touchscreen"
depends on SH_HP6XX && SH_ADC
help
- Say Y here if you have a HP Jornada 680 or 690 and want to
+ Say Y here if you have an HP Jornada 620/660/680/690 and want to
support the built-in touchscreen.
- If unsure, say N.
-
To compile this driver as a module, choose M here: the
module will be called hp680_ts_input.
config TOUCHSCREEN_HP7XX
- tristate "HP Jornada 710/720/728 touchscreen"
+ tristate "HP Jornada 7xx touchscreen"
depends on SA1100_JORNADA720_SSP
help
Say Y here if you have an HP Jornada 710/720/728 and want
* - DMC TSC-10/25
* - IRTOUCHSYSTEMS/UNITOP
* - IdealTEK URTC1000
+ * - General Touch
* - GoTop Super_Q2/GogoPen/PenPower tablets
*
* Copyright (C) 2004-2007 by Daniel Ritz <daniel.ritz@gmx.ch>
#include <linux/usb/input.h>
-#define DRIVER_VERSION "v0.5"
+#define DRIVER_VERSION "v0.6"
#define DRIVER_AUTHOR "Daniel Ritz <daniel.ritz@gmx.ch>"
#define DRIVER_DESC "USB Touchscreen Driver"
int min_yc, max_yc;
int min_press, max_press;
int rept_size;
- int flags;
void (*process_pkt) (struct usbtouch_usb *usbtouch, unsigned char *pkt, int len);
+
+ /*
+ * Used to get the packet length. Possible return values
+ * (a sketch follows the struct definition):
+ * > 0: packet length
+ * = 0: skip one byte
+ * < 0: -(return value) more bytes are needed
+ */
int (*get_pkt_len) (unsigned char *pkt, int len);
+
int (*read_data) (struct usbtouch_usb *usbtouch, unsigned char *pkt);
int (*init) (struct usbtouch_usb *usbtouch);
};
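A minimal sketch of that get_pkt_len() contract, assuming a hypothetical protocol whose packets begin with a 0xFE marker byte followed by a one-byte length field:

	static int example_get_pkt_len(unsigned char *pkt, int len)
	{
		if (pkt[0] != 0xFE)
			return 0;	/* not a packet start: caller skips one byte */
		if (len < 2)
			return -1;	/* length byte not yet buffered: need one more */
		return pkt[1];		/* complete packet length */
	}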
-#define USBTOUCH_FLG_BUFFER 0x01
-
-
/* a usbtouch device */
struct usbtouch_usb {
unsigned char *data;
};
-#if defined(CONFIG_TOUCHSCREEN_USB_EGALAX) || defined(CONFIG_TOUCHSCREEN_USB_ETURBO) || defined(CONFIG_TOUCHSCREEN_USB_IDEALTEK)
-#define MULTI_PACKET
-#endif
-
-#ifdef MULTI_PACKET
-static void usbtouch_process_multi(struct usbtouch_usb *usbtouch,
- unsigned char *pkt, int len);
-#endif
-
/* device types */
enum {
DEVTYPE_DUMMY = -1,
#ifdef CONFIG_TOUCHSCREEN_USB_EGALAX
+#ifndef MULTI_PACKET
+#define MULTI_PACKET
+#endif
+
#define EGALAX_PKT_TYPE_MASK 0xFE
#define EGALAX_PKT_TYPE_REPT 0x80
#define EGALAX_PKT_TYPE_DIAG 0x0A
* eTurboTouch part
*/
#ifdef CONFIG_TOUCHSCREEN_USB_ETURBO
+#ifndef MULTI_PACKET
+#define MULTI_PACKET
+#endif
static int eturbo_read_data(struct usbtouch_usb *dev, unsigned char *pkt)
{
unsigned int shift;
* IdealTEK URTC1000 Part
*/
#ifdef CONFIG_TOUCHSCREEN_USB_IDEALTEK
+#ifndef MULTI_PACKET
+#define MULTI_PACKET
+#endif
static int idealtek_get_pkt_len(unsigned char *buf, int len)
{
if (buf[0] & 0x80)
/*****************************************************************************
* the different device descriptors
*/
+#ifdef MULTI_PACKET
+static void usbtouch_process_multi(struct usbtouch_usb *usbtouch,
+ unsigned char *pkt, int len);
+#endif
+
static struct usbtouch_device_info usbtouch_dev_info[] = {
#ifdef CONFIG_TOUCHSCREEN_USB_EGALAX
[DEVTYPE_EGALAX] = {
.min_yc = 0x0,
.max_yc = 0x07ff,
.rept_size = 16,
- .flags = USBTOUCH_FLG_BUFFER,
.process_pkt = usbtouch_process_multi,
.get_pkt_len = egalax_get_pkt_len,
.read_data = egalax_read_data,
.min_yc = 0x0,
.max_yc = 0x07ff,
.rept_size = 8,
- .flags = USBTOUCH_FLG_BUFFER,
.process_pkt = usbtouch_process_multi,
.get_pkt_len = eturbo_get_pkt_len,
.read_data = eturbo_read_data,
.min_yc = 0x0,
.max_yc = 0x0fff,
.rept_size = 8,
- .flags = USBTOUCH_FLG_BUFFER,
.process_pkt = usbtouch_process_multi,
.get_pkt_len = idealtek_get_pkt_len,
.read_data = idealtek_read_data,
pos = 0;
while (pos < buf_len) {
/* get packet len */
- pkt_len = usbtouch->type->get_pkt_len(buffer + pos, len);
+ pkt_len = usbtouch->type->get_pkt_len(buffer + pos,
+ buf_len - pos);
- /* unknown packet: drop everything */
- if (unlikely(!pkt_len))
- goto out_flush_buf;
+ /* unknown packet: skip one byte */
+ if (unlikely(!pkt_len)) {
+ pos++;
+ continue;
+ }
/* full packet: process */
if (likely((pkt_len > 0) && (pkt_len <= buf_len - pos))) {
if (!usbtouch->data)
goto out_free;
- if (type->flags & USBTOUCH_FLG_BUFFER) {
+ if (type->get_pkt_len) {
usbtouch->buffer = kmalloc(type->rept_size, GFP_KERNEL);
if (!usbtouch->buffer)
goto out_free_buffers;
dflag = 0;
count_pull = count_put = 0;
while ((count_pull < skb->len) && (len > 0)) {
+ /* push every character but the last to the tty buffer directly */
+ if (count_put)
+ tty_insert_flip_char(tty, last, TTY_NORMAL);
len--;
if (dev->drv[di]->DLEflag & DLEmask) {
last = DLE;
tty_insert_flip_char(tty, DLE, 0);
tty_insert_flip_char(tty, *dp++, 0);
}
+ if (*dp == DLE)
+ tty_insert_flip_char(tty, DLE, 0);
last = *dp;
} else {
#endif
if ((info->flags & ISDN_ASYNC_CLOSING) || (!info->tty)) {
return;
}
+#ifdef CONFIG_ISDN_AUDIO
+ if (!info->vonline)
+ tty_ldisc_flush(info->tty);
+#else
tty_ldisc_flush(info->tty);
+#endif
if ((info->flags & ISDN_ASYNC_CHECK_CD) &&
(!((info->flags & ISDN_ASYNC_CALLOUT_ACTIVE) &&
(info->flags & ISDN_ASYNC_CALLOUT_NOHUP)))) {
goto err_out;
/* add to the list of leds */
- write_lock(&leds_list_lock);
+ down_write(&leds_list_lock);
list_add_tail(&led_cdev->node, &leds_list);
- write_unlock(&leds_list_lock);
+ up_write(&leds_list_lock);
#ifdef CONFIG_LEDS_TRIGGERS
init_rwsem(&led_cdev->trigger_lock);
device_unregister(led_cdev->dev);
- write_lock(&leds_list_lock);
+ down_write(&leds_list_lock);
list_del(&led_cdev->node);
- write_unlock(&leds_list_lock);
+ up_write(&leds_list_lock);
}
EXPORT_SYMBOL_GPL(led_classdev_unregister);
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
-#include <linux/spinlock.h>
+#include <linux/rwsem.h>
#include <linux/leds.h>
#include "leds.h"
-DEFINE_RWLOCK(leds_list_lock);
+DECLARE_RWSEM(leds_list_lock);
LIST_HEAD(leds_list);
EXPORT_SYMBOL_GPL(leds_list);
up_write(&triggers_list_lock);
/* Register with any LEDs that have this as a default trigger */
- read_lock(&leds_list_lock);
+ down_read(&leds_list_lock);
list_for_each_entry(led_cdev, &leds_list, node) {
down_write(&led_cdev->trigger_lock);
if (!led_cdev->trigger && led_cdev->default_trigger &&
led_trigger_set(led_cdev, trigger);
up_write(&led_cdev->trigger_lock);
}
- read_unlock(&leds_list_lock);
+ up_read(&leds_list_lock);
return 0;
}
up_write(&triggers_list_lock);
/* Remove anyone actively using this trigger */
- read_lock(&leds_list_lock);
+ down_read(&leds_list_lock);
list_for_each_entry(led_cdev, &leds_list, node) {
down_write(&led_cdev->trigger_lock);
if (led_cdev->trigger == trigger)
led_trigger_set(led_cdev, NULL);
up_write(&led_cdev->trigger_lock);
}
- read_unlock(&leds_list_lock);
+ up_read(&leds_list_lock);
}
void led_trigger_unregister_simple(struct led_trigger *trigger)
static void locomoled_brightness_set(struct led_classdev *led_cdev,
enum led_brightness value, int offset)
{
- struct locomo_dev *locomo_dev = LOCOMO_DEV(led_cdev->dev);
+ struct locomo_dev *locomo_dev = LOCOMO_DEV(led_cdev->dev->parent);
unsigned long flags;
local_irq_save(flags);
#define __LEDS_H_INCLUDED
#include <linux/device.h>
+#include <linux/rwsem.h>
#include <linux/leds.h>
static inline void led_set_brightness(struct led_classdev *led_cdev,
led_cdev->brightness_set(led_cdev, value);
}
-extern rwlock_t leds_list_lock;
+extern struct rw_semaphore leds_list_lock;
extern struct list_head leds_list;
#ifdef CONFIG_LEDS_TRIGGERS
not "rustyvisor". See Documentation/lguest/lguest.txt.
If unsure, say N. If curious, say M. If masochistic, say Y.
-
-config LGUEST_GUEST
- bool
- help
- The guest needs code built-in, even if the host has lguest
- support as a module. The drivers are tiny, so we build them
- in too.
input_sync(ahid->input);
input_report_key(ahid->input, KEY_CAPSLOCK, 0);
input_sync(ahid->input);
+ return;
}
- return;
+ break;
#ifdef CONFIG_PPC_PMAC
case ADB_KEY_POWER_OLD: /* Power key on PBook 3400 needs remapping */
switch(pmac_call_feature(PMAC_FTR_GET_MB_INFO,
md_done_sync(conf->mddev, STRIPE_SECTORS, 1);
}
- if (s.expanding && s.locked == 0)
+ if (s.expanding && s.locked == 0 &&
+ !test_bit(STRIPE_OP_COMPUTE_BLK, &sh->ops.pending))
handle_stripe_expansion(conf, sh, NULL);
if (sh->ops.count)
md_done_sync(conf->mddev, STRIPE_SECTORS, 1);
}
- if (s.expanding && s.locked == 0)
+ if (s.expanding && s.locked == 0 &&
+ !test_bit(STRIPE_OP_COMPUTE_BLK, &sh->ops.pending))
handle_stripe_expansion(conf, sh, &r6s);
spin_unlock(&sh->lock);
}
-static struct saa7146_extension av7110_extension;
+static struct saa7146_extension av7110_extension_driver;
#define MAKE_AV7110_INFO(x_var,x_name) \
static struct saa7146_pci_extension_data x_var = { \
.ext_priv = x_name, \
- .ext = &av7110_extension }
+ .ext = &av7110_extension_driver }
MAKE_AV7110_INFO(tts_1_X_fsc,"Technotrend/Hauppauge WinTV DVB-S rev1.X or Fujitsu Siemens DVB-C");
MAKE_AV7110_INFO(ttt_1_X, "Technotrend/Hauppauge WinTV DVB-T rev1.X");
MODULE_DEVICE_TABLE(pci, pci_tbl);
-static struct saa7146_extension av7110_extension = {
+static struct saa7146_extension av7110_extension_driver = {
.name = "dvb",
.flags = SAA7146_USE_I2C_IRQ,
static int __init av7110_init(void)
{
int retval;
- retval = saa7146_register_extension(&av7110_extension);
+ retval = saa7146_register_extension(&av7110_extension_driver);
return retval;
}
static void __exit av7110_exit(void)
{
- saa7146_unregister_extension(&av7110_extension);
+ saa7146_unregister_extension(&av7110_extension_driver);
}
module_init(av7110_init);
struct video_mbuf *mbuf = arg;
unsigned int i;
- mutex_lock(&fh->cap.lock);
retval = videobuf_mmap_setup(&fh->cap,gbuffers,gbufsize,
V4L2_MEMORY_MMAP);
if (retval < 0)
- goto fh_unlock_and_return;
+ return retval;
gbuffers = retval;
memset(mbuf,0,sizeof(*mbuf));
mbuf->size = gbuffers * gbufsize;
for (i = 0; i < gbuffers; i++)
mbuf->offsets[i] = i * gbufsize;
- mutex_unlock(&fh->cap.lock);
return 0;
}
case VIDIOCMCAPTURE:
select VIDEOBUF_DVB
select DVB_TUNER_MT2131 if !DVB_FE_CUSTOMISE
select DVB_S5H1409 if !DVB_FE_CUSTOMISE
+ select DVB_LGDT330X if !DVB_FE_CUSTOMISE
select DVB_PLL if !DVB_FE_CUSTOMISE
---help---
This is a video4linux driver for Conexant 23885 based
.setscl = ivtv_setscl_old,
.getsda = ivtv_getsda_old,
.getscl = ivtv_getscl_old,
- .udelay = 5,
+ .udelay = 10,
.timeout = 200,
};
static int saa7134_resume(struct pci_dev *pci_dev)
{
-
struct saa7134_dev *dev = pci_get_drvdata(pci_dev);
- unsigned int flags;
+ unsigned long flags;
pci_restore_state(pci_dev);
pci_set_power_state(pci_dev, PCI_D0);
int ret, wbufsize, word_gap, words;
const struct kvec *vec;
unsigned long vec_seek;
+ unsigned long initial_adr;
+ int initial_len = len;
wbufsize = cfi_interleave(cfi) << cfi->cfiq->MaxBufWriteSize;
adr += chip->start;
+ initial_adr = adr;
cmd_adr = adr & ~(wbufsize-1);
/* Let's determine this according to the interleave only once */
return ret;
}
- XIP_INVAL_CACHED_RANGE(map, adr, len);
+ XIP_INVAL_CACHED_RANGE(map, initial_adr, initial_len);
ENABLE_VPP(map);
xip_disable(map, chip, cmd_adr);
chip->state = FL_WRITING;
ret = INVAL_CACHE_AND_WAIT(map, chip, cmd_adr,
- adr, len,
+ initial_adr, initial_len,
chip->buffer_write_time);
if (ret) {
map_write(map, CMD(0x70), cmd_adr);
#if defined(__ISAPNP__)
static int pnp_cards;
struct pnp_dev *idev = NULL;
+ int pnp_found = 0;
if (nopnp == 1)
goto no_pnp;
pnp_cards++;
netdev_boot_setup_check(dev);
+ pnp_found = 1;
goto found;
}
}
lp = netdev_priv(dev);
#if defined(__ISAPNP__)
lp->dev = &idev->dev;
+ if (pnp_found)
+ lp->type = EL3_PNP;
#endif
err = el3_common_init(dev);
enum Window3 { /* Window 3: MAC/config bits. */
Wn3_Config = 0, Wn3_MAC_Ctrl = 6, Wn3_Options = 8,
};
-union wn3_config {
- int i;
- struct w3_config_fields {
- unsigned int ram_size:3, ram_width:1, ram_speed:2, rom_size:2;
- int pad8:8;
- unsigned int ram_split:2, pad18:2, xcvr:3, pad21:1, autoselect:1;
- int pad24:7;
- } u;
+enum wn3_config {
+ Ram_size = 7,
+ Ram_width = 8,
+ Ram_speed = 0x30,
+ Rom_size = 0xc0,
+ Ram_split_shift = 16,
+ Ram_split = 3 << Ram_split_shift,
+ Xcvr_shift = 20,
+ Xcvr = 7 << Xcvr_shift,
+ Autoselect = 0x1000000,
};
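With the bitfield union gone, each field is recovered from the raw configuration dword by masking and shifting. A sketch of the intended decoding, using the enum values above (the local names are illustrative):

	u32 config = inl(ioaddr + Wn3_Config);
	int ram_size = config & Ram_size;				/* bits 2:0 */
	int ram_split = (config & Ram_split) >> Ram_split_shift;	/* bits 17:16 */
	int xcvr = (config & Xcvr) >> Xcvr_shift;			/* bits 22:20 */
	int autosel = (config & Autoselect) ? 1 : 0;			/* bit 24 */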
enum Window4 {
/* Read the station address from the EEPROM. */
EL3WINDOW(0);
for (i = 0; i < 0x18; i++) {
- short *phys_addr = (short *) dev->dev_addr;
+ __be16 *phys_addr = (__be16 *) dev->dev_addr;
int timer;
outw(EEPROM_Read + i, ioaddr + Wn0EepromCmd);
/* Pause for at least 162 us. for the read to take place. */
{
char *ram_split[] = { "5:3", "3:1", "1:1", "3:5" };
- union wn3_config config;
+ __u32 config;
EL3WINDOW(3);
vp->available_media = inw(ioaddr + Wn3_Options);
- config.i = inl(ioaddr + Wn3_Config);
+ config = inl(ioaddr + Wn3_Config);
if (corkscrew_debug > 1)
printk(KERN_INFO " Internal config register is %4.4x, transceivers %#x.\n",
- config.i, inw(ioaddr + Wn3_Options));
+ config, inw(ioaddr + Wn3_Options));
printk(KERN_INFO " %dK %s-wide RAM %s Rx:Tx split, %s%s interface.\n",
- 8 << config.u.ram_size,
- config.u.ram_width ? "word" : "byte",
- ram_split[config.u.ram_split],
- config.u.autoselect ? "autoselect/" : "",
- media_tbl[config.u.xcvr].name);
- dev->if_port = config.u.xcvr;
- vp->default_media = config.u.xcvr;
- vp->autoselect = config.u.autoselect;
+ 8 << (config & Ram_size),
+ config & Ram_width ? "word" : "byte",
+ ram_split[(config & Ram_split) >> Ram_split_shift],
+ config & Autoselect ? "autoselect/" : "",
+ media_tbl[(config & Xcvr) >> Xcvr_shift].name);
+ vp->default_media = (config & Xcvr) >> Xcvr_shift;
+ vp->autoselect = config & Autoselect ? 1 : 0;
+ dev->if_port = vp->default_media;
}
if (vp->media_override != 7) {
printk(KERN_INFO " Media override to transceiver type %d (%s).\n",
{
int ioaddr = dev->base_addr;
struct corkscrew_private *vp = netdev_priv(dev);
- union wn3_config config;
+ __u32 config;
int i;
/* Before initializing select the active media port. */
EL3WINDOW(3);
if (vp->full_duplex)
outb(0x20, ioaddr + Wn3_MAC_Ctrl); /* Set the full-duplex bit. */
- config.i = inl(ioaddr + Wn3_Config);
+ config = inl(ioaddr + Wn3_Config);
if (vp->media_override != 7) {
if (corkscrew_debug > 1)
} else
dev->if_port = vp->default_media;
- config.u.xcvr = dev->if_port;
- outl(config.i, ioaddr + Wn3_Config);
+ config = (config & ~Xcvr) | (dev->if_port << Xcvr_shift);
+ outl(config, ioaddr + Wn3_Config);
if (corkscrew_debug > 1) {
printk("%s: corkscrew_open() InternalConfig %8.8x.\n",
- dev->name, config.i);
+ dev->name, config);
}
outw(TxReset, ioaddr + EL3_CMD);
ok = 1;
}
if (!ok) {
- union wn3_config config;
+ __u32 config;
do {
dev->if_port =
ioaddr + Wn4_Media);
EL3WINDOW(3);
- config.i = inl(ioaddr + Wn3_Config);
- config.u.xcvr = dev->if_port;
- outl(config.i, ioaddr + Wn3_Config);
+ config = inl(ioaddr + Wn3_Config);
+ config = (config & ~Xcvr) | (dev->if_port << Xcvr_shift);
+ outl(config, ioaddr + Wn3_Config);
outw(dev->if_port == 3 ? StartCoax : StopCoax,
ioaddr + EL3_CMD);
If you don't have this card, of course say N.
-config IP1000
- tristate "IP1000 Gigabit Ethernet support"
- depends on PCI && EXPERIMENTAL
- select MII
- ---help---
- This driver supports IP1000 gigabit Ethernet cards.
-
- To compile this driver as a module, choose M here: the module
- will be called ipg. This is recommended.
-
source "drivers/net/arcnet/Kconfig"
source "drivers/net/phy/Kconfig"
<http://support.intel.com>
- More specific information on configuring the driver is in
- <file:Documentation/networking/e1000e.txt>.
-
To compile this driver as a module, choose M here. The module
will be called e1000e.
+config IP1000
+ tristate "IP1000 Gigabit Ethernet support"
+ depends on PCI && EXPERIMENTAL
+ select MII
+ ---help---
+ This driver supports IP1000 gigabit Ethernet cards.
+
+ To compile this driver as a module, choose M here: the module
+ will be called ipg. This is recommended.
+
source "drivers/net/ixp2000/Kconfig"
config MYRI_SBUS
<http://support.intel.com>
- More specific information on configuring the driver is in
- <file:Documentation/networking/ixgbe.txt>.
-
To compile this driver as a module, choose M here. The module
will be called ixgbe.
struct atl1_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
- hw->max_frame_size = netdev->mtu + ETH_HLEN + ETH_FCS_LEN;
+ hw->max_frame_size = netdev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
hw->min_frame_size = ETH_ZLEN + ETH_FCS_LEN;
adapter->wol = 0;
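For reference, with the standard 1500-byte MTU the new formula gives 1500 + 14 (ETH_HLEN) + 4 (ETH_FCS_LEN) + 4 (VLAN_HLEN) = 1522 bytes, the canonical maximum length of a VLAN-tagged Ethernet frame, which is exactly what the MAC must now be prepared to accept.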
{
struct atl1_adapter *adapter = netdev_priv(netdev);
int old_mtu = netdev->mtu;
- int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;
+ int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
if ((max_frame < ETH_ZLEN + ETH_FCS_LEN) ||
(max_frame > MAX_JUMBO_FRAME_SIZE)) {
/* set Interrupt Clear Timer */
iowrite16(adapter->ict, hw->hw_addr + REG_CMBDISDMA_TIMER);
- /* set MTU, 4 : VLAN */
- iowrite32(hw->max_frame_size + 4, hw->hw_addr + REG_MTU);
+ /* set max frame size hw will accept */
+ iowrite32(hw->max_frame_size, hw->hw_addr + REG_MTU);
/* jumbo size & rrd retirement timer */
value = (((u32) hw->rx_jumbo_th & RXQ_JMBOSZ_TH_MASK)
/*
* Send learning packets after MAC address swap.
*
- * Called with RTNL and bond->lock held for read.
+ * Called with RTNL and no other locks
*/
static void alb_fasten_mac_swap(struct bonding *bond, struct slave *slave1,
struct slave *slave2)
int slaves_state_differ = (SLAVE_IS_OK(slave1) != SLAVE_IS_OK(slave2));
struct slave *disabled_slave = NULL;
+ ASSERT_RTNL();
+
/* fasten the change in the switch */
if (SLAVE_IS_OK(slave1)) {
alb_send_learning_packets(slave1, slave1->dev->dev_addr);
* a slave that has @slave's permanent address as its current address.
* We'll make sure that that slave no longer uses @slave's permanent address.
*
- * Caller must hold bond lock
+ * Caller must hold RTNL and no other locks
*/
static void alb_change_hw_addr_on_detach(struct bonding *bond, struct slave *slave)
{
return 0;
}
-/* Caller must hold bond lock for write */
+/*
+ * Remove slave from tlb and rlb hash tables, and fix up MAC addresses
+ * if necessary.
+ *
+ * Caller must hold RTNL and no other locks
+ */
void bond_alb_deinit_slave(struct bonding *bond, struct slave *slave)
{
if (bond->slave_cnt > 1) {
struct slave *swap_slave;
int i;
- if (new_slave)
- ASSERT_RTNL();
-
if (bond->curr_active_slave == new_slave) {
return;
}
write_unlock_bh(&bond->curr_slave_lock);
read_unlock(&bond->lock);
+ ASSERT_RTNL();
+
/* curr_active_slave must be set before calling alb_swap_mac_addr */
if (swap_slave) {
/* swap mac address */
bond->alb_info.rlb_enabled);
}
- read_lock(&bond->lock);
-
if (swap_slave) {
alb_fasten_mac_swap(bond, swap_slave, new_slave);
+ read_lock(&bond->lock);
} else {
- /* fasten bond mac on new current slave */
+ read_lock(&bond->lock);
alb_send_learning_packets(new_slave, bond->dev->dev_addr);
}
* has been cleared (if our_slave == old_current),
* but before a new active slave is selected.
*/
+ write_unlock_bh(&bond->lock);
bond_alb_deinit_slave(bond, slave);
+ write_lock_bh(&bond->lock);
}
if (oldcurrent == slave) {
slave_dev = slave->dev;
bond_detach_slave(bond, slave);
+ /* now that the slave is detached, unlock and perform
+ * all the undo steps that should not be called from
+ * within a lock.
+ */
+ write_unlock_bh(&bond->lock);
+
if ((bond->params.mode == BOND_MODE_TLB) ||
(bond->params.mode == BOND_MODE_ALB)) {
/* must be called only after the slave
bond_compute_features(bond);
- /* now that the slave is detached, unlock and perform
- * all the undo steps that should not be called from
- * within a lock.
- */
- write_unlock_bh(&bond->lock);
-
bond_destroy_slave_symlinks(bond_dev, slave_dev);
bond_del_vlans_from_slave(bond, slave_dev);
rtnl_lock();
read_lock(&bond->lock);
__bond_mii_monitor(bond, 1);
- rtnl_unlock();
+ read_unlock(&bond->lock);
+ rtnl_unlock(); /* might sleep, hold no other locks */
+ read_lock(&bond->lock);
}
delay = ((bond->params.miimon * HZ) / 1000) ? : 1;
case NETDEV_CHANGENAME:
return bond_event_changename(event_bond);
case NETDEV_UNREGISTER:
- /*
- * TODO: remove a bond from the list?
- */
+ bond_release_all(event_bond->dev);
break;
default:
break;
/*
* Convert string input module parms. Accept either the
- * number of the mode or its string name.
+ * number of the mode or its string name. A bit complicated because
+ * some mode names are substrings of other names, and calls from sysfs
+ * may have whitespace in the name (trailing newlines, for example).
*/
-int bond_parse_parm(char *mode_arg, struct bond_parm_tbl *tbl)
+int bond_parse_parm(const char *buf, struct bond_parm_tbl *tbl)
{
- int i;
+ int mode = -1, i, rv;
+ char modestr[BOND_MAX_MODENAME_LEN + 1] = { 0, };
+ const char *p;
+
+ /* Parse as a number only when the string is all digits (plus
+ * whitespace); names such as "802.3ad" begin with digits and
+ * must not be misread as the number 802. */
+ for (p = buf; *p; p++)
+ if (!(isdigit(*p) || isspace(*p)))
+ break;
+
+ if (*p)
+ rv = sscanf(buf, "%20s", modestr);
+ else
+ rv = sscanf(buf, "%d", &mode);
+ if (!rv)
+ return -1;
for (i = 0; tbl[i].modename; i++) {
- if ((isdigit(*mode_arg) &&
- tbl[i].mode == simple_strtol(mode_arg, NULL, 0)) ||
- (strcmp(mode_arg, tbl[i].modename) == 0)) {
+ if (mode == tbl[i].mode)
+ return tbl[i].mode;
+ if (strcmp(modestr, tbl[i].modename) == 0)
return tbl[i].mode;
- }
}
return -1;
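Illustratively, assuming the standard bonding mode table, both invocation styles now resolve to the same mode, and the trailing newline carried by sysfs writes is harmless because %20s stops at whitespace:

	int mode;

	mode = bond_parse_parm("4", bond_mode_tbl);		/* by number */
	mode = bond_parse_parm("802.3ad\n", bond_mode_tbl);	/* by name */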
int bond_create(char *name, struct bond_params *params, struct bonding **newbond)
{
struct net_device *bond_dev;
+ struct bonding *bond, *nxt;
int res;
rtnl_lock();
+ down_write(&bonding_rwsem);
+
+ /* Check to see if the bond already exists. */
+ list_for_each_entry_safe(bond, nxt, &bond_dev_list, bond_list)
+ if (strnicmp(bond->dev->name, name, IFNAMSIZ) == 0) {
+ printk(KERN_ERR DRV_NAME
+ ": cannot add bond %s; it already exists\n",
+ name);
+ res = -EPERM;
+ goto out_rtnl;
+ }
+
bond_dev = alloc_netdev(sizeof(struct bonding), name ? name : "",
ether_setup);
if (!bond_dev) {
netif_carrier_off(bond_dev);
+ up_write(&bonding_rwsem);
rtnl_unlock(); /* allows sysfs registration of net device */
res = bond_create_sysfs_entry(bond_dev->priv);
if (res < 0) {
rtnl_lock();
+ down_write(&bonding_rwsem);
goto out_bond;
}
out_netdev:
free_netdev(bond_dev);
out_rtnl:
+ up_write(&bonding_rwsem);
rtnl_unlock();
return res;
}
#ifdef CONFIG_PROC_FS
bond_create_proc_dir();
#endif
+
+ init_rwsem(&bonding_rwsem);
+
for (i = 0; i < max_bonds; i++) {
res = bond_create(NULL, &bonding_defaults, NULL);
if (res)
{
char command[IFNAMSIZ + 1] = {0, };
char *ifname;
- int res = count;
+ int rv, res = count;
struct bonding *bond;
struct bonding *nxt;
- down_write(&(bonding_rwsem));
sscanf(buffer, "%16s", command); /* IFNAMSIZ*/
ifname = command + 1;
if ((strlen(command) <= 1) ||
goto err_no_cmd;
if (command[0] == '+') {
-
- /* Check to see if the bond already exists. */
- list_for_each_entry_safe(bond, nxt, &bond_dev_list, bond_list)
- if (strnicmp(bond->dev->name, ifname, IFNAMSIZ) == 0) {
- printk(KERN_ERR DRV_NAME
- ": cannot add bond %s; it already exists\n",
- ifname);
- res = -EPERM;
- goto out;
- }
-
printk(KERN_INFO DRV_NAME
": %s is being created...\n", ifname);
- if (bond_create(ifname, &bonding_defaults, &bond)) {
- printk(KERN_INFO DRV_NAME
- ": %s interface already exists. Bond creation failed.\n",
- ifname);
- res = -EPERM;
+ rv = bond_create(ifname, &bonding_defaults, &bond);
+ if (rv) {
+ printk(KERN_INFO DRV_NAME ": Bond creation failed.\n");
+ res = rv;
}
goto out;
}
if (command[0] == '-') {
+ rtnl_lock();
+ down_write(&bonding_rwsem);
+
list_for_each_entry_safe(bond, nxt, &bond_dev_list, bond_list)
if (strnicmp(bond->dev->name, ifname, IFNAMSIZ) == 0) {
- rtnl_lock();
/* check the ref count on the bond's kobject.
* If it's > expected, then there's a file open,
* and we have to fail.
*/
if (atomic_read(&bond->dev->dev.kobj.kref.refcount)
> expected_refcount){
- rtnl_unlock();
printk(KERN_INFO DRV_NAME
": Unable to remove bond %s due to open references.\n",
ifname);
": %s is being deleted...\n",
bond->dev->name);
bond_destroy(bond);
+ up_write(&bonding_rwsem);
rtnl_unlock();
goto out;
}
printk(KERN_ERR DRV_NAME
": unable to delete non-existent bond %s\n", ifname);
res = -ENODEV;
+ up_write(&bonding_rwsem);
+ rtnl_unlock();
goto out;
}
* get called forever, which is bad.
*/
out:
- up_write(&(bonding_rwsem));
return res;
}
/* class attribute for bond_masters file. This ends up in /sys/class/net */
/* Note: We can't hold bond->lock here, as bond_create grabs it. */
+ rtnl_lock();
+ down_write(&(bonding_rwsem));
+
sscanf(buffer, "%16s", command); /* IFNAMSIZ*/
ifname = command + 1;
if ((strlen(command) <= 1) ||
dev->mtu = bond->dev->mtu;
}
}
- rtnl_lock();
res = bond_enslave(bond->dev, dev);
bond_for_each_slave(bond, slave, i)
if (strnicmp(slave->dev->name, ifname, IFNAMSIZ) == 0)
slave->original_mtu = original_mtu;
- rtnl_unlock();
if (res) {
ret = res;
}
if (dev) {
printk(KERN_INFO DRV_NAME ": %s: Removing slave %s\n",
bond->dev->name, dev->name);
- rtnl_lock();
if (bond->setup_by_slave)
res = bond_release_and_destroy(bond->dev, dev);
else
res = bond_release(bond->dev, dev);
- rtnl_unlock();
if (res) {
ret = res;
goto out;
ret = -EPERM;
out:
+ up_write(&(bonding_rwsem));
+ rtnl_unlock();
return ret;
}
goto out;
}
- new_value = bond_parse_parm((char *)buf, bond_mode_tbl);
+ new_value = bond_parse_parm(buf, bond_mode_tbl);
if (new_value < 0) {
printk(KERN_ERR DRV_NAME
": %s: Ignoring invalid mode value %.*s.\n",
goto out;
}
- new_value = bond_parse_parm((char *)buf, xmit_hashtype_tbl);
+ new_value = bond_parse_parm(buf, xmit_hashtype_tbl);
if (new_value < 0) {
printk(KERN_ERR DRV_NAME
": %s: Ignoring invalid xmit hash policy value %.*s.\n",
int new_value;
struct bonding *bond = to_bond(d);
- new_value = bond_parse_parm((char *)buf, arp_validate_tbl);
+ new_value = bond_parse_parm(buf, arp_validate_tbl);
if (new_value < 0) {
printk(KERN_ERR DRV_NAME
": %s: Ignoring invalid arp_validate value %s\n",
goto out;
}
- new_value = bond_parse_parm((char *)buf, bond_lacp_tbl);
+ new_value = bond_parse_parm(buf, bond_lacp_tbl);
if ((new_value == 1) || (new_value == 0)) {
bond->params.lacp_fast = new_value;
struct slave *slave;
struct bonding *bond = to_bond(d);
- write_lock_bh(&bond->lock);
+ rtnl_lock();
+ read_lock(&bond->lock);
+ write_lock_bh(&bond->curr_slave_lock);
+
if (!USES_PRIMARY(bond->params.mode)) {
printk(KERN_INFO DRV_NAME
": %s: Unable to set primary slave; %s is in mode %d\n",
}
}
out:
- write_unlock_bh(&bond->lock);
-
+ write_unlock_bh(&bond->curr_slave_lock);
+ read_unlock(&bond->lock);
rtnl_unlock();
return count;
struct bonding *bond = to_bond(d);
rtnl_lock();
- write_lock_bh(&bond->lock);
+ read_lock(&bond->lock);
+ write_lock_bh(&bond->curr_slave_lock);
if (!USES_PRIMARY(bond->params.mode)) {
printk(KERN_INFO DRV_NAME
}
}
out:
- write_unlock_bh(&bond->lock);
+ write_unlock_bh(&bond->curr_slave_lock);
+ read_unlock(&bond->lock);
rtnl_unlock();
return count;
int ret = 0;
struct bonding *firstbond;
- init_rwsem(&bonding_rwsem);
-
/* get the netdev class pointer */
firstbond = container_of(bond_dev_list.next, struct bonding, bond_list);
if (!firstbond)
int mode;
};
+#define BOND_MAX_MODENAME_LEN 20
+
struct vlan_entry {
struct list_head vlan_list;
__be32 vlan_ip;
void bond_loadbalance_arp_mon(struct work_struct *);
void bond_activebackup_arp_mon(struct work_struct *);
void bond_set_mode_ops(struct bonding *bond, int mode);
-int bond_parse_parm(char *mode_arg, struct bond_parm_tbl *tbl);
+int bond_parse_parm(const char *mode_arg, struct bond_parm_tbl *tbl);
void bond_select_active_slave(struct bonding *bond);
void bond_change_active_slave(struct bonding *bond, struct slave *new_active);
void bond_register_arp(struct bonding *);
#define DRV_MODULE_NAME "cassini"
#define PFX DRV_MODULE_NAME ": "
-#define DRV_MODULE_VERSION "1.4"
-#define DRV_MODULE_RELDATE "1 July 2004"
+#define DRV_MODULE_VERSION "1.5"
+#define DRV_MODULE_RELDATE "4 Jan 2008"
#define CAS_DEF_MSG_ENABLE \
(NETIF_MSG_DRV | \
cas_disable_irq(cp, i);
}
-static inline void cas_buffer_init(cas_page_t *cp)
-{
- struct page *page = cp->buffer;
- atomic_set((atomic_t *)&page->lru.next, 1);
-}
-
-static inline int cas_buffer_count(cas_page_t *cp)
-{
- struct page *page = cp->buffer;
- return atomic_read((atomic_t *)&page->lru.next);
-}
-
-static inline void cas_buffer_inc(cas_page_t *cp)
-{
- struct page *page = cp->buffer;
- atomic_inc((atomic_t *)&page->lru.next);
-}
-
-static inline void cas_buffer_dec(cas_page_t *cp)
-{
- struct page *page = cp->buffer;
- atomic_dec((atomic_t *)&page->lru.next);
-}
-
static void cas_enable_irq(struct cas *cp, const int ring)
{
if (ring == 0) { /* all but TX_DONE */
{
pci_unmap_page(cp->pdev, page->dma_addr, cp->page_size,
PCI_DMA_FROMDEVICE);
- cas_buffer_dec(page);
__free_pages(page->buffer, cp->page_order);
kfree(page);
return 0;
page->buffer = alloc_pages(flags, cp->page_order);
if (!page->buffer)
goto page_err;
- cas_buffer_init(page);
page->dma_addr = pci_map_page(cp->pdev, page->buffer, 0,
cp->page_size, PCI_DMA_FROMDEVICE);
return page;
list_for_each_safe(elem, tmp, &list) {
cas_page_t *page = list_entry(elem, cas_page_t, list);
- if (cas_buffer_count(page) > 1)
+ if (page_count(page->buffer) > 1)
continue;
list_del(elem);
cas_page_t *page = cp->rx_pages[1][index];
cas_page_t *new;
- if (cas_buffer_count(page) == 1)
+ if (page_count(page->buffer) == 1)
return page;
new = cas_page_dequeue(cp);
cas_page_t **page1 = cp->rx_pages[1];
/* swap if buffer is in use */
- if (cas_buffer_count(page0[index]) > 1) {
+ if (page_count(page0[index]->buffer) > 1) {
cas_page_t *new = cas_page_spare(cp, index);
if (new) {
page1[index] = page0[index];
struct cas_page *page;
struct sk_buff *skb;
void *addr, *crcaddr;
+ __sum16 csum;
char *p;
hlen = CAS_VAL(RX_COMP2_HDR_SIZE, words[1]);
skb_shinfo(skb)->nr_frags++;
skb->data_len += hlen - swivel;
+ skb->truesize += hlen - swivel;
skb->len += hlen - swivel;
get_page(page->buffer);
- cas_buffer_inc(page);
frag->page = page->buffer;
frag->page_offset = off;
frag->size = hlen - swivel;
frag++;
get_page(page->buffer);
- cas_buffer_inc(page);
frag->page = page->buffer;
frag->page_offset = 0;
frag->size = hlen;
skb_put(skb, alloclen);
}
- i = CAS_VAL(RX_COMP4_TCP_CSUM, words[3]);
+ csum = (__force __sum16)htons(CAS_VAL(RX_COMP4_TCP_CSUM, words[3]));
if (cp->crc_size) {
/* checksum includes FCS. strip it out. */
- i = csum_fold(csum_partial(crcaddr, cp->crc_size, i));
+ csum = csum_fold(csum_partial(crcaddr, cp->crc_size,
+ csum_unfold(csum)));
if (addr)
cas_page_unmap(addr);
}
- skb->csum = ntohs(i ^ 0xffff);
+ skb->csum = csum_unfold(~csum);
skb->ip_summed = CHECKSUM_COMPLETE;
skb->protocol = eth_type_trans(skb, cp->dev);
return len;
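The typed checksum handling above leans on the kernel's ones-complement helpers. As a reference sketch of what the fold step amounts to (this mirrors csum_fold(); the helper name is illustrative):

	static inline __sum16 example_fold(u32 sum)
	{
		sum = (sum & 0xffff) + (sum >> 16);	/* fold high half into low */
		sum = (sum & 0xffff) + (sum >> 16);	/* absorb any carry */
		return (__force __sum16)~sum;		/* complement the result */
	}

csum_unfold(~csum) then undoes the complement and widens the 16-bit value back to a __wsum, which is what CHECKSUM_COMPLETE expects in skb->csum.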
released = 0;
while (entry != last) {
/* make a new buffer if it's still in use */
- if (cas_buffer_count(page[entry]) > 1) {
+ if (page_count(page[entry]->buffer) > 1) {
cas_page_t *new = cas_page_dequeue(cp);
if (!new) {
/* let the timer know that we need to
{
struct cas *cp = container_of(napi, struct cas, napi);
struct net_device *dev = cp->dev;
- int i, enable_intr, todo, credits;
+ int i, enable_intr, credits;
u32 status = readl(cp->regs + REG_INTR_STATUS);
unsigned long flags;
struct cas *cp = netdev_priv(dev);
#ifdef USE_NAPI
- napi_enable(&cp->napi);
+ napi_disable(&cp->napi);
#endif
/* Make sure we don't get distracted by suspend/resume */
mutex_lock(&cp->pm_mutex);
return rc;
}
+/* When this chip sits underneath an Intel 31154 bridge, it is the
+ * only subordinate device and we can tweak the bridge settings to
+ * reflect that fact.
+ */
+static void __devinit cas_program_bridge(struct pci_dev *cas_pdev)
+{
+ struct pci_dev *pdev = cas_pdev->bus->self;
+ u32 val;
+
+ if (!pdev)
+ return;
+
+ if (pdev->vendor != 0x8086 || pdev->device != 0x537c)
+ return;
+
+ /* Clear bit 10 (Bus Parking Control) in the Secondary
+ * Arbiter Control/Status Register which lives at offset
+ * 0x41. Using a 32-bit word read/modify/write at 0x40
+ * is much simpler so that's how we do this.
+ */
+ pci_read_config_dword(pdev, 0x40, &val);
+ val &= ~0x00040000;
+ pci_write_config_dword(pdev, 0x40, val);
+
+ /* Max out the Multi-Transaction Timer settings since
+ * Cassini is the only device present.
+ *
+ * The register is 16-bit and lives at 0x50. When the
+ * settings are enabled, it extends the GRANT# signal
+ * for a requestor after a transaction is complete. This
+ * allows the next request to run without first needing
+ * to negotiate the GRANT# signal back.
+ *
+ * Bits 12:10 define the grant duration:
+ *
+ * 1 -- 16 clocks
+ * 2 -- 32 clocks
+ * 3 -- 64 clocks
+ * 4 -- 128 clocks
+ * 5 -- 256 clocks
+ *
+ * All other values are illegal.
+ *
+ * Bits 09:00 define which REQ/GNT signal pairs get the
+ * GRANT# signal treatment. We set them all.
+ */
+ pci_write_config_word(pdev, 0x50, (5 << 10) | 0x3ff);
+
+ /* The Read Prefetch Policy register is 16-bit and sits at
+ * offset 0x52. It enables a "smart" pre-fetch policy. We
+ * enable it and max out all of the settings since only one
+ * device is sitting underneath and thus bandwidth sharing is
+ * not an issue.
+ *
+ * The register has several 3-bit fields, each of which indicates
+ * a multiplier applied to the base amount of prefetching the
+ * chip would do. These fields are at:
+ *
+ * 15:13 --- ReRead Primary Bus
+ * 12:10 --- FirstRead Primary Bus
+ * 09:07 --- ReRead Secondary Bus
+ * 06:04 --- FirstRead Secondary Bus
+ *
+ * Bits 03:00 control which REQ/GNT pairs the prefetch settings
+ * get enabled on. Bit 3 is a grouped enabler which controls
+ * all of the REQ/GNT pairs from [8:3]. Bits 2 to 0 control
+ * the individual REQ/GNT pairs [2:0].
+ */
+ pci_write_config_word(pdev, 0x52,
+ (0x7 << 13) |
+ (0x7 << 10) |
+ (0x7 << 7) |
+ (0x7 << 4) |
+ (0xf << 0));
+
+ /* Force cacheline size to 0x8 */
+ pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x08);
+
+ /* Force latency timer to maximum setting so Cassini can
+ * sit on the bus as long as it likes.
+ */
+ pci_write_config_byte(pdev, PCI_LATENCY_TIMER, 0xff);
+}
+
static int __devinit cas_init_one(struct pci_dev *pdev,
const struct pci_device_id *ent)
{
printk(KERN_WARNING PFX "Could not enable MWI for %s\n",
pci_name(pdev));
+ cas_program_bridge(pdev);
+
/*
* On some architectures, the default cache line size set
* by pci_try_set_mwi reduces performance. We have to increase
inserted into
outgoing frame. */
struct cas_tx_desc {
- u64 control;
- u64 buffer;
+ __le64 control;
+ __le64 buffer;
};
/* descriptor ring for free buffers contains page-sized buffers. the index
* the completion ring.
*/
struct cas_rx_desc {
- u64 index;
- u64 buffer;
+ __le64 index;
+ __le64 buffer;
};
/* received packets are put on the completion ring. */
#define RX_INDEX_RELEASE 0x0000000000002000ULL
struct cas_rx_comp {
- u64 word1;
- u64 word2;
- u64 word3;
- u64 word4;
+ __le64 word1;
+ __le64 word2;
+ __le64 word3;
+ __le64 word4;
};
enum link_state {
struct cas_rx_comp rxcs[N_RX_COMP_RINGS][INIT_BLOCK_RX_COMP];
struct cas_rx_desc rxds[N_RX_DESC_RINGS][INIT_BLOCK_RX_DESC];
struct cas_tx_desc txds[N_TX_RINGS][INIT_BLOCK_TX];
- u64 tx_compwb;
+ __le64 tx_compwb;
};
/* tiny buffers to deal with target abort issue. we allocate a bit
return NETDEV_TX_OK;
len = max(skb->len, ETH_ZLEN);
- queue = skb->queue_mapping;
+ queue = skb_get_queue_mapping(skb);
#ifdef CONFIG_NETDEVICES_MULTIQUEUE
netif_stop_subqueue(dev, queue);
#else
#endif
/* Read eeprom */
for (i = 0; i < 128; i++) {
- ((u16 *) sromdata)[i] = le16_to_cpu (read_eeprom (ioaddr, i));
+ ((__le16 *) sromdata)[i] = cpu_to_le16(read_eeprom (ioaddr, i));
}
#ifdef MEM_MAPPING
ioaddr = dev->base_addr;
PCI_DMA_FROMDEVICE));
}
np->rx_ring[entry].fraginfo |=
- cpu_to_le64 (np->rx_buf_sz) << 48;
+ cpu_to_le64((u64)np->rx_buf_sz << 48);
np->rx_ring[entry].status = 0;
} /* end for */
} /* end if */
cpu_to_le64 ( pci_map_single (
np->pdev, skb->data, np->rx_buf_sz,
PCI_DMA_FROMDEVICE));
- np->rx_ring[i].fraginfo |= cpu_to_le64 (np->rx_buf_sz) << 48;
+ np->rx_ring[i].fraginfo |= cpu_to_le64((u64)np->rx_buf_sz << 48);
}
/* Set RFDListPtr */
- writel (cpu_to_le32 (np->rx_ring_dma), dev->base_addr + RFDListPtr0);
+ writel (np->rx_ring_dma, dev->base_addr + RFDListPtr0);
writel (0, dev->base_addr + RFDListPtr1);
return;
}
#endif
if (np->vlan) {
- tfc_vlan_tag =
- cpu_to_le64 (VLANTagInsert) |
- (cpu_to_le64 (np->vlan) << 32) |
- (cpu_to_le64 (skb->priority) << 45);
+ tfc_vlan_tag = VLANTagInsert |
+ ((u64)np->vlan << 32) |
+ ((u64)skb->priority << 45);
}
txdesc->fraginfo = cpu_to_le64 (pci_map_single (np->pdev, skb->data,
skb->len,
PCI_DMA_TODEVICE));
- txdesc->fraginfo |= cpu_to_le64 (skb->len) << 48;
+ txdesc->fraginfo |= cpu_to_le64((u64)skb->len << 48);
/* DL2K bug: DMA fails to get next descriptor ptr in 10Mbps mode
* Work around: Always use 1 descriptor in 10Mbps mode */
return IRQ_RETVAL(handled);
}
+static inline dma_addr_t desc_to_dma(struct netdev_desc *desc)
+{
+ return le64_to_cpu(desc->fraginfo) & DMA_48BIT_MASK;
+}
+
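desc_to_dma() extracts the address half of fraginfo; the inverse packing, which the surrounding hunks now spell out with explicit (u64) casts, can be pictured as follows (an illustrative helper, not part of the driver):

	static inline __le64 example_pack_frag(dma_addr_t addr, u16 len)
	{
		/* low 48 bits: DMA address; high 16 bits: fragment length */
		return cpu_to_le64(((u64)len << 48) | (addr & DMA_48BIT_MASK));
	}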
static void
rio_free_tx (struct net_device *dev, int irq)
{
while (entry != np->cur_tx) {
struct sk_buff *skb;
- if (!(np->tx_ring[entry].status & TFDDone))
+ if (!(np->tx_ring[entry].status & cpu_to_le64(TFDDone)))
break;
skb = np->tx_skbuff[entry];
pci_unmap_single (np->pdev,
- np->tx_ring[entry].fraginfo & DMA_48BIT_MASK,
+ desc_to_dma(&np->tx_ring[entry]),
skb->len, PCI_DMA_TODEVICE);
if (irq)
dev_kfree_skb_irq (skb);
int pkt_len;
u64 frame_status;
- if (!(desc->status & RFDDone) ||
- !(desc->status & FrameStart) || !(desc->status & FrameEnd))
+ if (!(desc->status & cpu_to_le64(RFDDone)) ||
+ !(desc->status & cpu_to_le64(FrameStart)) ||
+ !(desc->status & cpu_to_le64(FrameEnd)))
break;
/* Chip omits the CRC. */
- pkt_len = le64_to_cpu (desc->status & 0xffff);
- frame_status = le64_to_cpu (desc->status);
+ frame_status = le64_to_cpu(desc->status);
+ pkt_len = frame_status & 0xffff;
if (--cnt < 0)
break;
/* Update rx error statistics, drop packet. */
/* Small skbuffs for short packets */
if (pkt_len > copy_thresh) {
pci_unmap_single (np->pdev,
- desc->fraginfo & DMA_48BIT_MASK,
+ desc_to_dma(desc),
np->rx_buf_sz,
PCI_DMA_FROMDEVICE);
skb_put (skb = np->rx_skbuff[entry], pkt_len);
np->rx_skbuff[entry] = NULL;
} else if ((skb = dev_alloc_skb (pkt_len + 2)) != NULL) {
pci_dma_sync_single_for_cpu(np->pdev,
- desc->fraginfo &
- DMA_48BIT_MASK,
+ desc_to_dma(desc),
np->rx_buf_sz,
PCI_DMA_FROMDEVICE);
/* 16 byte align the IP header */
pkt_len);
skb_put (skb, pkt_len);
pci_dma_sync_single_for_device(np->pdev,
- desc->fraginfo &
- DMA_48BIT_MASK,
+ desc_to_dma(desc),
np->rx_buf_sz,
PCI_DMA_FROMDEVICE);
}
PCI_DMA_FROMDEVICE));
}
np->rx_ring[entry].fraginfo |=
- cpu_to_le64 (np->rx_buf_sz) << 48;
+ cpu_to_le64((u64)np->rx_buf_sz << 48);
np->rx_ring[entry].status = 0;
entry = (entry + 1) % RX_RING_SIZE;
}
hash_table[0] = hash_table[1] = 0;
/* RxFlowcontrol DA: 01-80-C2-00-00-01. Hash index=0x39 */
- hash_table[1] |= cpu_to_le32(0x02000000);
+ hash_table[1] |= 0x02000000;
if (dev->flags & IFF_PROMISC) {
/* Receive all frames promiscuously. */
rx_mode = ReceiveAllFrames;
("%02x:cur:%08x next:%08x status:%08x frag1:%08x frag0:%08x",
i,
(u32) (np->tx_ring_dma + i * sizeof (*desc)),
- (u32) desc->next_desc,
- (u32) desc->status, (u32) (desc->fraginfo >> 32),
- (u32) desc->fraginfo);
+ (u32)le64_to_cpu(desc->next_desc),
+ (u32)le64_to_cpu(desc->status),
+ (u32)(le64_to_cpu(desc->fraginfo) >> 32),
+ (u32)le64_to_cpu(desc->fraginfo));
printk ("\n");
}
printk ("\n");
static int
mii_wait_link (struct net_device *dev, int wait)
{
- BMSR_t bmsr;
+ __u16 bmsr;
int phy_addr;
struct netdev_private *np;
phy_addr = np->phy_addr;
do {
- bmsr.image = mii_read (dev, phy_addr, MII_BMSR);
- if (bmsr.bits.link_status)
+ bmsr = mii_read (dev, phy_addr, MII_BMSR);
+ if (bmsr & MII_BMSR_LINK_STATUS)
return 0;
mdelay (1);
} while (--wait > 0);
static int
mii_get_media (struct net_device *dev)
{
- ANAR_t negotiate;
- BMSR_t bmsr;
- BMCR_t bmcr;
- MSCR_t mscr;
- MSSR_t mssr;
+ __u16 negotiate;
+ __u16 bmsr;
+ __u16 mscr;
+ __u16 mssr;
int phy_addr;
struct netdev_private *np;
np = netdev_priv(dev);
phy_addr = np->phy_addr;
- bmsr.image = mii_read (dev, phy_addr, MII_BMSR);
+ bmsr = mii_read (dev, phy_addr, MII_BMSR);
if (np->an_enable) {
- if (!bmsr.bits.an_complete) {
+ if (!(bmsr & MII_BMSR_AN_COMPLETE)) {
/* Auto-Negotiation not completed */
return -1;
}
- negotiate.image = mii_read (dev, phy_addr, MII_ANAR) &
+ negotiate = mii_read (dev, phy_addr, MII_ANAR) &
mii_read (dev, phy_addr, MII_ANLPAR);
- mscr.image = mii_read (dev, phy_addr, MII_MSCR);
- mssr.image = mii_read (dev, phy_addr, MII_MSSR);
- if (mscr.bits.media_1000BT_FD & mssr.bits.lp_1000BT_FD) {
+ mscr = mii_read (dev, phy_addr, MII_MSCR);
+ mssr = mii_read (dev, phy_addr, MII_MSSR);
+ if (mscr & MII_MSCR_1000BT_FD && mssr & MII_MSSR_LP_1000BT_FD) {
np->speed = 1000;
np->full_duplex = 1;
printk (KERN_INFO "Auto 1000 Mbps, Full duplex\n");
- } else if (mscr.bits.media_1000BT_HD & mssr.bits.lp_1000BT_HD) {
+ } else if (mscr & MII_MSCR_1000BT_HD && mssr & MII_MSSR_LP_1000BT_HD) {
np->speed = 1000;
np->full_duplex = 0;
printk (KERN_INFO "Auto 1000 Mbps, Half duplex\n");
- } else if (negotiate.bits.media_100BX_FD) {
+ } else if (negotiate & MII_ANAR_100BX_FD) {
np->speed = 100;
np->full_duplex = 1;
printk (KERN_INFO "Auto 100 Mbps, Full duplex\n");
- } else if (negotiate.bits.media_100BX_HD) {
+ } else if (negotiate & MII_ANAR_100BX_HD) {
np->speed = 100;
np->full_duplex = 0;
printk (KERN_INFO "Auto 100 Mbps, Half duplex\n");
- } else if (negotiate.bits.media_10BT_FD) {
+ } else if (negotiate & MII_ANAR_10BT_FD) {
np->speed = 10;
np->full_duplex = 1;
printk (KERN_INFO "Auto 10 Mbps, Full duplex\n");
- } else if (negotiate.bits.media_10BT_HD) {
+ } else if (negotiate & MII_ANAR_10BT_HD) {
np->speed = 10;
np->full_duplex = 0;
printk (KERN_INFO "Auto 10 Mbps, Half duplex\n");
}
- if (negotiate.bits.pause) {
+ if (negotiate & MII_ANAR_PAUSE) {
np->tx_flow &= 1;
np->rx_flow &= 1;
- } else if (negotiate.bits.asymmetric) {
+ } else if (negotiate & MII_ANAR_ASYMMETRIC) {
np->tx_flow = 0;
np->rx_flow &= 1;
}
/* else tx_flow, rx_flow = user select */
} else {
- bmcr.image = mii_read (dev, phy_addr, MII_BMCR);
- if (bmcr.bits.speed100 == 1 && bmcr.bits.speed1000 == 0) {
+ __u16 bmcr = mii_read (dev, phy_addr, MII_BMCR);
+ switch (bmcr & (MII_BMCR_SPEED_100 | MII_BMCR_SPEED_1000)) {
+ case MII_BMCR_SPEED_1000:
+ printk (KERN_INFO "Operating at 1000 Mbps, ");
+ break;
+ case MII_BMCR_SPEED_100:
printk (KERN_INFO "Operating at 100 Mbps, ");
- } else if (bmcr.bits.speed100 == 0 && bmcr.bits.speed1000 == 0) {
+ break;
+ case 0:
printk (KERN_INFO "Operating at 10 Mbps, ");
- } else if (bmcr.bits.speed100 == 0 && bmcr.bits.speed1000 == 1) {
- printk (KERN_INFO "Operating at 1000 Mbps, ");
}
- if (bmcr.bits.duplex_mode) {
+ if (bmcr & MII_BMCR_DUPLEX_MODE) {
printk ("Full duplex\n");
} else {
printk ("Half duplex\n");
static int
mii_set_media (struct net_device *dev)
{
- PHY_SCR_t pscr;
- BMCR_t bmcr;
- BMSR_t bmsr;
- ANAR_t anar;
+ __u16 pscr;
+ __u16 bmcr;
+ __u16 bmsr;
+ __u16 anar;
int phy_addr;
struct netdev_private *np;
np = netdev_priv(dev);
/* Does user set speed? */
if (np->an_enable) {
/* Advertise capabilities */
- bmsr.image = mii_read (dev, phy_addr, MII_BMSR);
- anar.image = mii_read (dev, phy_addr, MII_ANAR);
- anar.bits.media_100BX_FD = bmsr.bits.media_100BX_FD;
- anar.bits.media_100BX_HD = bmsr.bits.media_100BX_HD;
- anar.bits.media_100BT4 = bmsr.bits.media_100BT4;
- anar.bits.media_10BT_FD = bmsr.bits.media_10BT_FD;
- anar.bits.media_10BT_HD = bmsr.bits.media_10BT_HD;
- anar.bits.pause = 1;
- anar.bits.asymmetric = 1;
- mii_write (dev, phy_addr, MII_ANAR, anar.image);
+ bmsr = mii_read (dev, phy_addr, MII_BMSR);
+ anar = mii_read (dev, phy_addr, MII_ANAR) &
+ ~MII_ANAR_100BX_FD &
+ ~MII_ANAR_100BX_HD &
+ ~MII_ANAR_100BT4 &
+ ~MII_ANAR_10BT_FD &
+ ~MII_ANAR_10BT_HD;
+ if (bmsr & MII_BMSR_100BX_FD)
+ anar |= MII_ANAR_100BX_FD;
+ if (bmsr & MII_BMSR_100BX_HD)
+ anar |= MII_ANAR_100BX_HD;
+ if (bmsr & MII_BMSR_100BT4)
+ anar |= MII_ANAR_100BT4;
+ if (bmsr & MII_BMSR_10BT_FD)
+ anar |= MII_ANAR_10BT_FD;
+ if (bmsr & MII_BMSR_10BT_HD)
+ anar |= MII_ANAR_10BT_HD;
+ anar |= MII_ANAR_PAUSE | MII_ANAR_ASYMMETRIC;
+ mii_write (dev, phy_addr, MII_ANAR, anar);
/* Enable Auto crossover */
- pscr.image = mii_read (dev, phy_addr, MII_PHY_SCR);
- pscr.bits.mdi_crossover_mode = 3; /* 11'b */
- mii_write (dev, phy_addr, MII_PHY_SCR, pscr.image);
+ pscr = mii_read (dev, phy_addr, MII_PHY_SCR);
+ pscr |= 3 << 5; /* 11'b */
+ mii_write (dev, phy_addr, MII_PHY_SCR, pscr);
/* Soft reset PHY */
mii_write (dev, phy_addr, MII_BMCR, MII_BMCR_RESET);
- bmcr.image = 0;
- bmcr.bits.an_enable = 1;
- bmcr.bits.restart_an = 1;
- bmcr.bits.reset = 1;
- mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
+ bmcr = MII_BMCR_AN_ENABLE | MII_BMCR_RESTART_AN | MII_BMCR_RESET;
+ mii_write (dev, phy_addr, MII_BMCR, bmcr);
mdelay(1);
} else {
/* Force speed setting */
/* 1) Disable Auto crossover */
- pscr.image = mii_read (dev, phy_addr, MII_PHY_SCR);
- pscr.bits.mdi_crossover_mode = 0;
- mii_write (dev, phy_addr, MII_PHY_SCR, pscr.image);
+ pscr = mii_read (dev, phy_addr, MII_PHY_SCR);
+ pscr &= ~(3 << 5);
+ mii_write (dev, phy_addr, MII_PHY_SCR, pscr);
/* 2) PHY Reset */
- bmcr.image = mii_read (dev, phy_addr, MII_BMCR);
- bmcr.bits.reset = 1;
- mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
+ bmcr = mii_read (dev, phy_addr, MII_BMCR);
+ bmcr |= MII_BMCR_RESET;
+ mii_write (dev, phy_addr, MII_BMCR, bmcr);
/* 3) Power Down */
- bmcr.image = 0x1940; /* must be 0x1940 */
- mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
+ bmcr = 0x1940; /* must be 0x1940 */
+ mii_write (dev, phy_addr, MII_BMCR, bmcr);
mdelay (100); /* wait a certain time */
/* 4) Advertise nothing */
mii_write (dev, phy_addr, MII_ANAR, 0);
/* 5) Set media and Power Up */
- bmcr.image = 0;
- bmcr.bits.power_down = 1;
+ bmcr = MII_BMCR_POWER_DOWN;
if (np->speed == 100) {
- bmcr.bits.speed100 = 1;
- bmcr.bits.speed1000 = 0;
+ bmcr |= MII_BMCR_SPEED_100;
printk (KERN_INFO "Manual 100 Mbps, ");
} else if (np->speed == 10) {
- bmcr.bits.speed100 = 0;
- bmcr.bits.speed1000 = 0;
printk (KERN_INFO "Manual 10 Mbps, ");
}
if (np->full_duplex) {
- bmcr.bits.duplex_mode = 1;
+ bmcr |= MII_BMCR_DUPLEX_MODE;
printk ("Full duplex\n");
} else {
- bmcr.bits.duplex_mode = 0;
printk ("Half duplex\n");
}
#if 0
/* Set 1000BaseT Master/Slave setting */
- mscr.image = mii_read (dev, phy_addr, MII_MSCR);
- mscr.bits.cfg_enable = 1;
- mscr.bits.cfg_value = 0;
+ mscr = mii_read (dev, phy_addr, MII_MSCR);
+ mscr |= MII_MSCR_CFG_ENABLE;
+ mscr &= ~MII_MSCR_CFG_VALUE;
#endif
- mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
+ mii_write (dev, phy_addr, MII_BMCR, bmcr);
mdelay(10);
}
return 0;
static int
mii_get_media_pcs (struct net_device *dev)
{
- ANAR_PCS_t negotiate;
- BMSR_t bmsr;
- BMCR_t bmcr;
+ __u16 negotiate;
+ __u16 bmsr;
int phy_addr;
struct netdev_private *np;
np = netdev_priv(dev);
phy_addr = np->phy_addr;
- bmsr.image = mii_read (dev, phy_addr, PCS_BMSR);
+ bmsr = mii_read (dev, phy_addr, PCS_BMSR);
if (np->an_enable) {
- if (!bmsr.bits.an_complete) {
+ if (!(bmsr & MII_BMSR_AN_COMPLETE)) {
/* Auto-Negotiation not completed */
return -1;
}
- negotiate.image = mii_read (dev, phy_addr, PCS_ANAR) &
+ negotiate = mii_read (dev, phy_addr, PCS_ANAR) &
mii_read (dev, phy_addr, PCS_ANLPAR);
np->speed = 1000;
- if (negotiate.bits.full_duplex) {
+ if (negotiate & PCS_ANAR_FULL_DUPLEX) {
printk (KERN_INFO "Auto 1000 Mbps, Full duplex\n");
np->full_duplex = 1;
} else {
printk (KERN_INFO "Auto 1000 Mbps, half duplex\n");
np->full_duplex = 0;
}
- if (negotiate.bits.pause) {
+ if (negotiate & PCS_ANAR_PAUSE) {
np->tx_flow &= 1;
np->rx_flow &= 1;
- } else if (negotiate.bits.asymmetric) {
+ } else if (negotiate & PCS_ANAR_ASYMMETRIC) {
np->tx_flow = 0;
np->rx_flow &= 1;
}
/* else tx_flow, rx_flow = user select */
} else {
- bmcr.image = mii_read (dev, phy_addr, PCS_BMCR);
+ __u16 bmcr = mii_read (dev, phy_addr, PCS_BMCR);
printk (KERN_INFO "Operating at 1000 Mbps, ");
- if (bmcr.bits.duplex_mode) {
+ if (bmcr & MII_BMCR_DUPLEX_MODE) {
printk ("Full duplex\n");
} else {
printk ("Half duplex\n");
static int
mii_set_media_pcs (struct net_device *dev)
{
- BMCR_t bmcr;
- ESR_t esr;
- ANAR_PCS_t anar;
+ __u16 bmcr;
+ __u16 esr;
+ __u16 anar;
int phy_addr;
struct netdev_private *np;
np = netdev_priv(dev);
/* Auto-Negotiation? */
if (np->an_enable) {
/* Advertise capabilities */
- esr.image = mii_read (dev, phy_addr, PCS_ESR);
- anar.image = mii_read (dev, phy_addr, MII_ANAR);
- anar.bits.half_duplex =
- esr.bits.media_1000BT_HD | esr.bits.media_1000BX_HD;
- anar.bits.full_duplex =
- esr.bits.media_1000BT_FD | esr.bits.media_1000BX_FD;
- anar.bits.pause = 1;
- anar.bits.asymmetric = 1;
- mii_write (dev, phy_addr, MII_ANAR, anar.image);
+ esr = mii_read (dev, phy_addr, PCS_ESR);
+ anar = mii_read (dev, phy_addr, MII_ANAR) &
+ ~PCS_ANAR_HALF_DUPLEX &
+ ~PCS_ANAR_FULL_DUPLEX;
+ if (esr & (MII_ESR_1000BT_HD | MII_ESR_1000BX_HD))
+ anar |= PCS_ANAR_HALF_DUPLEX;
+ if (esr & (MII_ESR_1000BT_FD | MII_ESR_1000BX_FD))
+ anar |= PCS_ANAR_FULL_DUPLEX;
+ anar |= PCS_ANAR_PAUSE | PCS_ANAR_ASYMMETRIC;
+ mii_write (dev, phy_addr, MII_ANAR, anar);
/* Soft reset PHY */
mii_write (dev, phy_addr, MII_BMCR, MII_BMCR_RESET);
- bmcr.image = 0;
- bmcr.bits.an_enable = 1;
- bmcr.bits.restart_an = 1;
- bmcr.bits.reset = 1;
- mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
+ bmcr = MII_BMCR_AN_ENABLE | MII_BMCR_RESTART_AN |
+ MII_BMCR_RESET;
+ mii_write (dev, phy_addr, MII_BMCR, bmcr);
mdelay(1);
} else {
/* Force speed setting */
/* PHY Reset */
- bmcr.image = 0;
- bmcr.bits.reset = 1;
- mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
+ bmcr = MII_BMCR_RESET;
+ mii_write (dev, phy_addr, MII_BMCR, bmcr);
mdelay(10);
- bmcr.image = 0;
- bmcr.bits.an_enable = 0;
if (np->full_duplex) {
- bmcr.bits.duplex_mode = 1;
+ bmcr = MII_BMCR_DUPLEX_MODE;
printk (KERN_INFO "Manual full duplex\n");
} else {
- bmcr.bits.duplex_mode = 0;
+ bmcr = 0;
printk (KERN_INFO "Manual half duplex\n");
}
- mii_write (dev, phy_addr, MII_BMCR, bmcr.image);
+ mii_write (dev, phy_addr, MII_BMCR, bmcr);
mdelay(10);
/* Advertise nothing */
skb = np->rx_skbuff[i];
if (skb) {
pci_unmap_single(np->pdev,
- np->rx_ring[i].fraginfo & DMA_48BIT_MASK,
+ desc_to_dma(&np->rx_ring[i]),
skb->len, PCI_DMA_FROMDEVICE);
dev_kfree_skb (skb);
np->rx_skbuff[i] = NULL;
skb = np->tx_skbuff[i];
if (skb) {
pci_unmap_single(np->pdev,
- np->tx_ring[i].fraginfo & DMA_48BIT_MASK,
+ desc_to_dma(&np->tx_ring[i]),
skb->len, PCI_DMA_TODEVICE);
dev_kfree_skb (skb);
np->tx_skbuff[i] = NULL;
};
/* Basic Mode Control Register */
-typedef union t_MII_BMCR {
- u16 image;
- struct {
- u16 _bit_5_0:6; // bit 5:0
- u16 speed1000:1; // bit 6
- u16 col_test_enable:1; // bit 7
- u16 duplex_mode:1; // bit 8
- u16 restart_an:1; // bit 9
- u16 isolate:1; // bit 10
- u16 power_down:1; // bit 11
- u16 an_enable:1; // bit 12
- u16 speed100:1; // bit 13
- u16 loopback:1; // bit 14
- u16 reset:1; // bit 15
- } bits;
-} BMCR_t, *PBMCR_t;
-
enum _mii_bmcr {
MII_BMCR_RESET = 0x8000,
MII_BMCR_LOOP_BACK = 0x4000,
};
/* Basic Mode Status Register */
-typedef union t_MII_BMSR {
- u16 image;
- struct {
- u16 ext_capability:1; // bit 0
- u16 japper_detect:1; // bit 1
- u16 link_status:1; // bit 2
- u16 an_ability:1; // bit 3
- u16 remote_fault:1; // bit 4
- u16 an_complete:1; // bit 5
- u16 preamble_supp:1; // bit 6
- u16 _bit_7:1; // bit 7
- u16 ext_status:1; // bit 8
- u16 media_100BT2_HD:1; // bit 9
- u16 media_100BT2_FD:1; // bit 10
- u16 media_10BT_HD:1; // bit 11
- u16 media_10BT_FD:1; // bit 12
- u16 media_100BX_HD:1; // bit 13
- u16 media_100BX_FD:1; // bit 14
- u16 media_100BT4:1; // bit 15
- } bits;
-} BMSR_t, *PBMSR_t;
-
enum _mii_bmsr {
MII_BMSR_100BT4 = 0x8000,
MII_BMSR_100BX_FD = 0x4000,
};
/* ANAR */
-typedef union t_MII_ANAR {
- u16 image;
- struct {
- u16 selector:5; // bit 4:0
- u16 media_10BT_HD:1; // bit 5
- u16 media_10BT_FD:1; // bit 6
- u16 media_100BX_HD:1; // bit 7
- u16 media_100BX_FD:1; // bit 8
- u16 media_100BT4:1; // bit 9
- u16 pause:1; // bit 10
- u16 asymmetric:1; // bit 11
- u16 _bit12:1; // bit 12
- u16 remote_fault:1; // bit 13
- u16 _bit14:1; // bit 14
- u16 next_page:1; // bit 15
- } bits;
-} ANAR_t, *PANAR_t;
-
enum _mii_anar {
MII_ANAR_NEXT_PAGE = 0x8000,
MII_ANAR_REMOTE_FAULT = 0x4000,
};
/* ANLPAR */
-typedef union t_MII_ANLPAR {
- u16 image;
- struct {
- u16 selector:5; // bit 4:0
- u16 media_10BT_HD:1; // bit 5
- u16 media_10BT_FD:1; // bit 6
- u16 media_100BX_HD:1; // bit 7
- u16 media_100BX_FD:1; // bit 8
- u16 media_100BT4:1; // bit 9
- u16 pause:1; // bit 10
- u16 asymmetric:1; // bit 11
- u16 _bit12:1; // bit 12
- u16 remote_fault:1; // bit 13
- u16 _bit14:1; // bit 14
- u16 next_page:1; // bit 15
- } bits;
-} ANLPAR_t, *PANLPAR_t;
-
enum _mii_anlpar {
MII_ANLPAR_NEXT_PAGE = MII_ANAR_NEXT_PAGE,
MII_ANLPAR_REMOTE_FAULT = MII_ANAR_REMOTE_FAULT,
};
/* Auto-Negotiation Expansion Register */
-typedef union t_MII_ANER {
- u16 image;
- struct {
- u16 lp_negotiable:1; // bit 0
- u16 page_received:1; // bit 1
- u16 nextpagable:1; // bit 2
- u16 lp_nextpagable:1; // bit 3
- u16 pdetect_fault:1; // bit 4
- u16 _bit15_5:11; // bit 15:5
- } bits;
-} ANER_t, *PANER_t;
-
enum _mii_aner {
MII_ANER_PAR_DETECT_FAULT = 0x0010,
MII_ANER_LP_NEXTPAGABLE = 0x0008,
};
/* MASTER-SLAVE Control Register */
-typedef union t_MII_MSCR {
- u16 image;
- struct {
- u16 _bit_7_0:8; // bit 7:0
- u16 media_1000BT_HD:1; // bit 8
- u16 media_1000BT_FD:1; // bit 9
- u16 port_type:1; // bit 10
- u16 cfg_value:1; // bit 11
- u16 cfg_enable:1; // bit 12
- u16 test_mode:3; // bit 15:13
- } bits;
-} MSCR_t, *PMSCR_t;
-
enum _mii_mscr {
MII_MSCR_TEST_MODE = 0xe000,
MII_MSCR_CFG_ENABLE = 0x1000,
};
/* MASTER-SLAVE Status Register */
-typedef union t_MII_MSSR {
- u16 image;
- struct {
- u16 idle_err_count:8; // bit 7:0
- u16 _bit_9_8:2; // bit 9:8
- u16 lp_1000BT_HD:1; // bit 10
- u16 lp_1000BT_FD:1; // bit 11
- u16 remote_rcv_status:1; // bit 12
- u16 local_rcv_status:1; // bit 13
- u16 cfg_resolution:1; // bit 14
- u16 cfg_fault:1; // bit 15
- } bits;
-} MSSR_t, *PMSSR_t;
-
enum _mii_mssr {
MII_MSSR_CFG_FAULT = 0x8000,
MII_MSSR_CFG_RES = 0x4000,
};
/* IEEE Extended Status Register */
-typedef union t_MII_ESR {
- u16 image;
- struct {
- u16 _bit_11_0:12; // bit 11:0
- u16 media_1000BT_HD:2; // bit 12
- u16 media_1000BT_FD:1; // bit 13
- u16 media_1000BX_HD:1; // bit 14
- u16 media_1000BX_FD:1; // bit 15
- } bits;
-} ESR_t, *PESR_t;
-
enum _mii_esr {
MII_ESR_1000BX_FD = 0x8000,
MII_ESR_1000BX_HD = 0x4000,
MII_ESR_1000BT_HD = 0x1000,
};
/* PHY Specific Control Register */
+#if 0
typedef union t_MII_PHY_SCR {
u16 image;
struct {
u16 xmit_fifo_depth:2; // bit 15:14
} bits;
} PHY_SCR_t, *PPHY_SCR_t;
+#endif
typedef enum t_MII_ADMIN_STATUS {
adm_reset,
/* PCS control and status registers bitmap as the same as MII */
/* PCS Extended Status register bitmap as the same as MII */
/* PCS ANAR */
-typedef union t_PCS_ANAR {
- u16 image;
- struct {
- u16 _bit_4_0:5; // bit 4:0
- u16 full_duplex:1; // bit 5
- u16 half_duplex:1; // bit 6
- u16 asymmetric:1; // bit 7
- u16 pause:1; // bit 8
- u16 _bit_11_9:3; // bit 11:9
- u16 remote_fault:2; // bit 13:12
- u16 _bit_14:1; // bit 14
- u16 next_page:1; // bit 15
- } bits;
-} ANAR_PCS_t, *PANAR_PCS_t;
-
enum _pcs_anar {
PCS_ANAR_NEXT_PAGE = 0x8000,
PCS_ANAR_REMOTE_FAULT = 0x3000,
PCS_ANAR_FULL_DUPLEX = 0x0020,
};
/* PCS ANLPAR */
-typedef union t_PCS_ANLPAR {
- u16 image;
- struct {
- u16 _bit_4_0:5; // bit 4:0
- u16 full_duplex:1; // bit 5
- u16 half_duplex:1; // bit 6
- u16 asymmetric:1; // bit 7
- u16 pause:1; // bit 8
- u16 _bit_11_9:3; // bit 11:9
- u16 remote_fault:2; // bit 13:12
- u16 _bit_14:1; // bit 14
- u16 next_page:1; // bit 15
- } bits;
-} ANLPAR_PCS_t, *PANLPAR_PCS_t;
-
enum _pcs_anlpar {
PCS_ANLPAR_NEXT_PAGE = PCS_ANAR_NEXT_PAGE,
PCS_ANLPAR_REMOTE_FAULT = PCS_ANAR_REMOTE_FAULT,
/* The Rx and Tx buffer descriptors. */
struct netdev_desc {
- u64 next_desc;
- u64 status;
- u64 fraginfo;
+ __le64 next_desc;
+ __le64 status;
+ __le64 fraginfo;
};
#define PRIV_ALIGN 15 /* Required alignment mask */
struct nic *nic = container_of(napi, struct nic, napi);
struct net_device *netdev = nic->netdev;
unsigned int work_done = 0;
- int tx_cleaned;
e100_rx_clean(nic, &work_done, budget);
- tx_cleaned = e100_tx_clean(nic);
+ e100_tx_clean(nic);
- /* If no Rx and Tx cleanup work was done, exit polling mode. */
- if((!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
+ /* If budget not fully consumed, exit the polling mode */
+ if (work_done < budget) {
netif_rx_complete(netdev, napi);
e100_enable_irq(nic);
}
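
For reference, the poll-method shape all of these NAPI hunks converge on. This is a sketch against the NAPI API of this kernel generation, not a buildable driver: the foo_* names are placeholders, while netif_rx_complete() is the completion helper the hunks themselves use.

    static int foo_poll(struct napi_struct *napi, int budget)
    {
        /* foo_adapter and the foo_* helpers are placeholders */
        struct foo_adapter *adapter = container_of(napi, struct foo_adapter, napi);
        int work_done = 0;

        /* TX completion is not budgeted; only RX consumes the budget. */
        foo_clean_tx(adapter);
        foo_clean_rx(adapter, &work_done, budget);

        /* Returning less than the budget is the only signal NAPI needs:
         * leave polling mode and re-arm the device interrupt. */
        if (work_done < budget) {
            netif_rx_complete(adapter->netdev, napi);
            foo_enable_irq(adapter);
        }
        return work_done;
    }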
#ifdef CONFIG_E1000_NAPI
napi_disable(&adapter->napi);
+ atomic_set(&adapter->irq_sem, 0);
#endif
e1000_irq_disable(adapter);
/* Must NOT use netdev_priv macro here. */
adapter = poll_dev->priv;
- /* Keep link state information with original netdev */
- if (!netif_carrier_ok(poll_dev))
- goto quit_polling;
-
/* e1000_clean is called per-cpu. This lock protects
* tx_ring[0] from being cleaned by multiple cpus
* simultaneously. A failure obtaining the lock means
* tx_ring[0] is currently being cleaned anyway. */
if (spin_trylock(&adapter->tx_queue_lock)) {
tx_cleaned = e1000_clean_tx_irq(adapter,
- &adapter->tx_ring[0]);
+ &adapter->tx_ring[0]);
spin_unlock(&adapter->tx_queue_lock);
}
adapter->clean_rx(adapter, &adapter->rx_ring[0],
&work_done, budget);
- /* If no Tx and not enough Rx work done, exit the polling mode */
- if ((!tx_cleaned && (work_done == 0)) ||
- !netif_running(poll_dev)) {
-quit_polling:
+ if (tx_cleaned)
+ work_done = budget;
+
+ /* If budget not fully consumed, exit the polling mode */
+ if (work_done < budget) {
if (likely(adapter->itr_setting & 3))
e1000_set_itr(adapter);
netif_rx_complete(poll_dev, napi);
/* Must NOT use netdev_priv macro here. */
adapter = poll_dev->priv;
- /* Keep link state information with original netdev */
- if (!netif_carrier_ok(poll_dev))
- goto quit_polling;
-
/* e1000_clean is called per-cpu. This lock protects
* tx_ring from being cleaned by multiple cpus
* simultaneously. A failure obtaining the lock means
adapter->clean_rx(adapter, &work_done, budget);
- /* If no Tx and not enough Rx work done, exit the polling mode */
- if ((!tx_cleaned && (work_done < budget)) ||
- !netif_running(poll_dev)) {
-quit_polling:
+ if (tx_cleaned)
+ work_done = budget;
+
+ /* If budget not fully consumed, exit the polling mode */
+ if (work_done < budget) {
if (adapter->itr_setting & 3)
e1000_set_itr(adapter);
netif_rx_complete(poll_dev, napi);
msleep(10);
napi_disable(&adapter->napi);
+ atomic_set(&adapter->irq_sem, 0);
e1000_irq_disable(adapter);
del_timer_sync(&adapter->watchdog_timer);
epic_rx_err(dev, ep);
- if (netif_running(dev) && (work_done < budget)) {
+ if (work_done < budget) {
unsigned long flags;
int more;
__u16 pkt_len, sc;
int curidx;
- if (fpi->use_napi) {
- if (!netif_running(dev))
- return 0;
- }
-
/*
* First, grab all of the stats for the incoming packet.
* These get messed up if we get called due to a busy condition.
struct mpc52xx_fec __iomem *fec = priv->fec;
out_be32(&fec->mib_control, FEC_MIB_DISABLE);
- memset_io(&fec->rmon_t_drop, 0, (__force u32)&fec->reserved10 -
- (__force u32)&fec->rmon_t_drop);
+ memset_io(&fec->rmon_t_drop, 0,
+ offsetof(struct mpc52xx_fec, reserved10) -
+ offsetof(struct mpc52xx_fec, rmon_t_drop));
out_be32(&fec->mib_control, 0);
memset(&dev->stats, 0, sizeof(dev->stats));
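
The removed lines computed the length by casting `__iomem` pointers to u32, which needs `__force` to silence sparse and would truncate on a 64-bit build; offsetof() derives the same span from the struct layout alone. A standalone sketch with a trimmed stand-in for the register block:

    #include <stddef.h>
    #include <stdio.h>

    struct fec_regs {                   /* stand-in layout, not the real block */
        unsigned int mib_control;
        unsigned int rmon_t_drop;
        unsigned int counters[32];
        unsigned int reserved10;
    };

    int main(void)
    {
        size_t span = offsetof(struct fec_regs, reserved10) -
                      offsetof(struct fec_regs, rmon_t_drop);

        printf("bytes to clear: %zu\n", span);
        return 0;
    }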
dev->dev_addr[3] = (np->orig_mac[0] >> 16) & 0xff;
dev->dev_addr[4] = (np->orig_mac[0] >> 8) & 0xff;
dev->dev_addr[5] = (np->orig_mac[0] >> 0) & 0xff;
- /* set permanent address to be correct aswell */
- np->orig_mac[0] = (dev->dev_addr[0] << 0) + (dev->dev_addr[1] << 8) +
- (dev->dev_addr[2] << 16) + (dev->dev_addr[3] << 24);
- np->orig_mac[1] = (dev->dev_addr[4] << 0) + (dev->dev_addr[5] << 8);
writel(txreg|NVREG_TRANSMITPOLL_MAC_ADDR_REV, base + NvRegTransmitPoll);
}
memcpy(dev->perm_addr, dev->dev_addr, dev->addr_len);
*/
writel(np->orig_mac[0], base + NvRegMacAddrA);
writel(np->orig_mac[1], base + NvRegMacAddrB);
+ writel(readl(base + NvRegTransmitPoll) & ~NVREG_TRANSMITPOLL_MAC_ADDR_REV,
+ base + NvRegTransmitPoll);
/* free all structures */
free_rings(dev);
u16 pkt_len, sc;
int curidx;
- if (!netif_running(dev))
- return 0;
-
/*
* First, grab all of the stats for the incoming packet.
* These get messed up if we get called due to a busy condition.
static int fs_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct fs_enet_private *fep = netdev_priv(dev);
+
+ if (!fep->phydev)
+ return -ENODEV;
+
return phy_ethtool_gset(fep->phydev, cmd);
}
static int fs_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct fs_enet_private *fep = netdev_priv(dev);
- phy_ethtool_sset(fep->phydev, cmd);
- return 0;
+
+ if (!fep->phydev)
+ return -ENODEV;
+
+ return phy_ethtool_sset(fep->phydev, cmd);
}
static int fs_nway_reset(struct net_device *dev)
static void ipg_nic_txfree(struct net_device *dev)
{
struct ipg_nic_private *sp = netdev_priv(dev);
- void __iomem *ioaddr = sp->ioaddr;
- unsigned int curr;
- u64 txd_map;
- unsigned int released, pending;
-
- txd_map = (u64)sp->txd_map;
- curr = ipg_r32(TFD_LIST_PTR_0) -
- do_div(txd_map, sizeof(struct ipg_tx)) - 1;
+ unsigned int released, pending, dirty;
IPG_DEBUG_MSG("_nic_txfree\n");
pending = sp->tx_current - sp->tx_dirty;
+ dirty = sp->tx_dirty % IPG_TFDLIST_LENGTH;
for (released = 0; released < pending; released++) {
- unsigned int dirty = sp->tx_dirty % IPG_TFDLIST_LENGTH;
struct sk_buff *skb = sp->TxBuff[dirty];
struct ipg_tx *txfd = sp->txd + dirty;
* If the TFDDone bit is set, free the associated
* buffer.
*/
- if (dirty == curr)
- break;
-
- /* Setup TFDDONE for compatible issue. */
- txfd->tfc |= cpu_to_le64(IPG_TFC_TFDDONE);
+ if (!(txfd->tfc & cpu_to_le64(IPG_TFC_TFDDONE)))
+ break;
/* Free the transmit buffer. */
if (skb) {
sp->TxBuff[dirty] = NULL;
}
+ dirty = (dirty + 1) % IPG_TFDLIST_LENGTH;
}
sp->tx_dirty += released;
#ifdef JUMBO_FRAME
ipg_nic_rxrestore(dev);
#endif
+ spin_lock(&sp->lock);
+
/* Get interrupt source information, and acknowledge
* some (i.e. TxDMAComplete, RxDMAComplete, RxEarly,
* IntRequested, MacControlFrame, LinkEvent) interrupts
handled = 1;
if (unlikely(!netif_running(dev)))
- goto out;
-
- spin_lock(&sp->lock);
+ goto out_unlock;
/* If RFDListEnd interrupt, restore all used RFDs. */
if (status & IPG_IS_RFD_LIST_END) {
ipg_w16(IPG_IE_TX_DMA_COMPLETE | IPG_IE_RX_DMA_COMPLETE |
IPG_IE_HOST_ERROR | IPG_IE_INT_REQUESTED | IPG_IE_TX_COMPLETE |
IPG_IE_LINK_EVENT | IPG_IE_UPDATE_STATS, INT_ENABLE);
-
+out_unlock:
spin_unlock(&sp->lock);
-out:
+
return IRQ_RETVAL(handled);
}
*/
if (sp->tenmbpsmode)
txfd->tfc |= cpu_to_le64(IPG_TFC_TXINDICATE);
- else if (!((sp->tx_current - sp->tx_dirty + 1) >
- IPG_FRAMESBETWEENTXDMACOMPLETES)) {
- txfd->tfc |= cpu_to_le64(IPG_TFC_TXDMAINDICATE);
- }
+ txfd->tfc |= cpu_to_le64(IPG_TFC_TXDMAINDICATE);
/* Based on compilation option, determine if FCS is to be
* appended to transmit frame by IPG.
*/
ipg_w32(IPG_DC_TX_DMA_POLL_NOW, DMA_CTRL);
if (sp->tx_current == (sp->tx_dirty + IPG_TFDLIST_LENGTH))
- netif_wake_queue(dev);
+ netif_stop_queue(dev);
spin_unlock_irqrestore(&sp->lock, flags);
{
struct net_device *netdev = adapter->netdev;
+#ifdef CONFIG_IXGB_NAPI
+ napi_disable(&adapter->napi);
+ atomic_set(&adapter->irq_sem, 0);
+#endif
+
ixgb_irq_disable(adapter);
free_irq(adapter->pdev->irq, netdev);
if(kill_watchdog)
del_timer_sync(&adapter->watchdog_timer);
-#ifdef CONFIG_IXGB_NAPI
- napi_disable(&adapter->napi);
-#endif
+
adapter->link_speed = 0;
adapter->link_duplex = 0;
netif_carrier_off(netdev);
{
struct ixgb_adapter *adapter = container_of(napi, struct ixgb_adapter, napi);
struct net_device *netdev = adapter->netdev;
- int tx_cleaned;
int work_done = 0;
- tx_cleaned = ixgb_clean_tx_irq(adapter);
+ ixgb_clean_tx_irq(adapter);
ixgb_clean_rx_irq(adapter, &work_done, budget);
- /* if no Tx and not enough Rx work done, exit the polling mode */
- if((!tx_cleaned && (work_done == 0)) || !netif_running(netdev)) {
+ /* If budget not fully consumed, exit the polling mode */
+ if (work_done < budget) {
netif_rx_complete(netdev, napi);
ixgb_irq_enable(adapter);
}
IXGBE_WRITE_FLUSH(&adapter->hw);
msleep(10);
+ napi_disable(&adapter->napi);
+ atomic_set(&adapter->irq_sem, 0);
+
ixgbe_irq_disable(adapter);
- napi_disable(&adapter->napi);
del_timer_sync(&adapter->watchdog_timer);
netif_carrier_off(netdev);
struct net_device *netdev = adapter->netdev;
int tx_cleaned = 0, work_done = 0;
- /* Keep link state information with original netdev */
- if (!netif_carrier_ok(adapter->netdev))
- goto quit_polling;
-
/* In non-MSIX case, there is no multi-Tx/Rx queue */
tx_cleaned = ixgbe_clean_tx_irq(adapter, adapter->tx_ring);
ixgbe_clean_rx_irq(adapter, &adapter->rx_ring[0], &work_done,
budget);
- /* If no Tx and not enough Rx work done, exit the polling mode */
- if ((!tx_cleaned && (work_done < budget)) ||
- !netif_running(adapter->netdev)) {
-quit_polling:
+ if (tx_cleaned)
+ work_done = budget;
+
+ /* If budget not fully consumed, exit the polling mode */
+ if (work_done < budget) {
netif_rx_complete(netdev, napi);
ixgbe_irq_enable(adapter);
}
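
The tx_cleaned handling above is subtler than the pure-RX case: TX cleanup is not budgeted, so the only honest way to report "there is still work" is to pretend the whole budget was spent. A sketch of the tail of such a poll method, with placeholder names:

    static int foo_poll_tail(struct foo_adapter *adapter,   /* foo_* = placeholder */
                             struct napi_struct *napi,
                             int tx_cleaned, int work_done, int budget)
    {
        /* Any TX work means "poll me again soon": report a full budget. */
        if (tx_cleaned)
            work_done = budget;

        /* A partly used budget is the only exit from polling mode. */
        if (work_done < budget) {
            netif_rx_complete(adapter->netdev, napi);
            foo_irq_enable(adapter);    /* placeholder helper */
        }
        return work_done;
    }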
struct net_device *dev = ip->dev;
int rx;
- /* @@@ Have to stop polling when nds[0] is administratively
- * downed while we are polling. */
rx = 0;
do {
ixp2000_reg_write(IXP2000_IRQ_THD_RAW_STATUS_A_0, 0x00ff);
| NETIF_F_NO_CSUM
| NETIF_F_HIGHDMA
| NETIF_F_LLTX
- | NETIF_F_NETNS_LOCAL,
+ | NETIF_F_NETNS_LOCAL;
dev->ethtool_ops = &loopback_ethtool_ops;
dev->header_ops = ð_header_ops;
dev->init = loopback_dev_init;
(unsigned long)status);
if (status & MACB_BIT(UND)) {
+ int i;
printk(KERN_ERR "%s: TX underrun, resetting buffers\n",
- bp->dev->name);
+ bp->dev->name);
+
+ head = bp->tx_head;
+
+ /* Mark all the buffers as used to avoid sending a lost buffer */
+ for (i = 0; i < TX_RING_SIZE; i++)
+ bp->tx_ring[i].ctrl = MACB_BIT(TX_USED);
+
+ /* Free the in-flight transmit buffers */
+ for (tail = bp->tx_tail; tail != head; tail = NEXT_TX(tail)) {
+ struct ring_info *rp = &bp->tx_skb[tail];
+ struct sk_buff *skb = rp->skb;
+
+ BUG_ON(skb == NULL);
+
+ rmb();
+
+ dma_unmap_single(&bp->pdev->dev, rp->mapping, skb->len,
+ DMA_TO_DEVICE);
+ rp->skb = NULL;
+ dev_kfree_skb_irq(skb);
+ }
+
bp->tx_head = bp->tx_tail = 0;
}
if (lowerdev == NULL)
return -ENODEV;
+ /* Don't allow macvlans on top of other macvlans - it's not really
+ * wrong, but lockdep can't handle it and it's not useful for anything
+ * you couldn't do directly on top of the real device.
+ */
+ if (lowerdev->rtnl_link_ops == dev->rtnl_link_ops)
+ return -ENODEV;
+
if (!tb[IFLA_MTU])
dev->mtu = lowerdev->mtu;
else if (dev->mtu > lowerdev->mtu)
{
int i;
DECLARE_MAC_BUF(mac);
+ u64 macaddr;
- for (i = 0; i < 6; i++)
- dev->dev_addr[i] = o2meth_eaddr[i];
DPRINTK("Loading MAC Address: %s\n", print_mac(mac, dev->dev_addr));
- mace->eth.mac_addr = (*(unsigned long*)o2meth_eaddr) >> 16;
+ macaddr = 0;
+ for (i = 0; i < 6; i++)
+ macaddr |= (u64)dev->dev_addr[i] << ((5 - i) * 8);
+
+ mace->eth.mac_addr = macaddr;
}
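
The loop above assembles the 48-bit station address into a u64 one byte at a time. Note that dev_addr[i] is only promoted to int, so the cast to a 64-bit type must happen before the shift count reaches 32, as in the hunk above. A standalone demonstration with an illustrative address:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t addr[6] = { 0x08, 0x00, 0x69, 0x12, 0x34, 0x56 };
        uint64_t mac = 0;
        int i;

        for (i = 0; i < 6; i++)         /* cast before shifting past bit 31 */
            mac |= (uint64_t)addr[i] << ((5 - i) * 8);

        printf("%012llx\n", (unsigned long long)mac);   /* 080069123456 */
        return 0;
    }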
/*
#endif
dev->irq = MACE_ETHERNET_IRQ;
dev->base_addr = (unsigned long)&mace->eth;
+ memcpy(dev->dev_addr, o2meth_eaddr, 6);
priv = netdev_priv(dev);
spin_lock_init(&priv->meth_lock);
/* process as many rx events as NAPI will allow */
work_done = myri10ge_clean_rx_done(mgp, budget);
- if (work_done < budget || !netif_running(netdev)) {
+ if (work_done < budget) {
netif_rx_complete(netdev, napi);
put_be32(htonl(3), mgp->irq_claim);
}
/* Reenable interrupts providing nothing is trying to shut
* the chip down. */
spin_lock(&np->lock);
- if (!np->hands_off && netif_running(dev))
+ if (!np->hands_off)
natsemi_irq_enable(dev);
spin_unlock(&np->lock);
ndev->last_rx = jiffies;
skb->protocol = eth_type_trans(skb, ndev);
netif_rx(skb);
- dev->stats.rx_packets++;
- dev->stats.rx_bytes += len;
+ ndev->stats.rx_packets++;
+ ndev->stats.rx_bytes += len;
}
static irqreturn_t
#define _NETXEN_NIC_LINUX_MAJOR 3
#define _NETXEN_NIC_LINUX_MINOR 4
-#define _NETXEN_NIC_LINUX_SUBVERSION 2
-#define NETXEN_NIC_LINUX_VERSIONID "3.4.2"
+#define _NETXEN_NIC_LINUX_SUBVERSION 18
+#define NETXEN_NIC_LINUX_VERSIONID "3.4.18"
#define NETXEN_NUM_FLASH_SECTORS (64)
#define NETXEN_FLASH_SECTOR_SIZE (64 * 1024)
((cmd_desc)->port_ctxid |= ((var) & 0xF0))
#define netxen_set_cmd_desc_flags(cmd_desc, val) \
- ((cmd_desc)->flags_opcode &= ~cpu_to_le16(0x7f), \
- (cmd_desc)->flags_opcode |= cpu_to_le16((val) & 0x7f))
+ (cmd_desc)->flags_opcode = ((cmd_desc)->flags_opcode & \
+ ~cpu_to_le16(0x7f)) | cpu_to_le16((val) & 0x7f)
#define netxen_set_cmd_desc_opcode(cmd_desc, val) \
- ((cmd_desc)->flags_opcode &= ~cpu_to_le16(0x3f<<7), \
- (cmd_desc)->flags_opcode |= cpu_to_le16(((val & 0x3f)<<7)))
+ (cmd_desc)->flags_opcode = ((cmd_desc)->flags_opcode & \
+ ~cpu_to_le16((u16)0x3f << 7)) | cpu_to_le16(((val) & 0x3f) << 7)
#define netxen_set_cmd_desc_num_of_buff(cmd_desc, val) \
- ((cmd_desc)->num_of_buffers_total_length &= ~cpu_to_le32(0xff), \
- (cmd_desc)->num_of_buffers_total_length |= cpu_to_le32((val) & 0xff))
+ (cmd_desc)->num_of_buffers_total_length = \
+ ((cmd_desc)->num_of_buffers_total_length & \
+ ~cpu_to_le32(0xff)) | cpu_to_le32((val) & 0xff)
#define netxen_set_cmd_desc_totallength(cmd_desc, val) \
- ((cmd_desc)->num_of_buffers_total_length &= ~cpu_to_le32(0xffffff00), \
- (cmd_desc)->num_of_buffers_total_length |= cpu_to_le32(val << 8))
+ (cmd_desc)->num_of_buffers_total_length = \
+ ((cmd_desc)->num_of_buffers_total_length & \
+ ~cpu_to_le32((u32)0xffffff << 8)) | \
+ cpu_to_le32(((val) & 0xffffff) << 8)
#define netxen_get_cmd_desc_opcode(cmd_desc) \
- ((le16_to_cpu((cmd_desc)->flags_opcode) >> 7) & 0x003F)
+ ((le16_to_cpu((cmd_desc)->flags_opcode) >> 7) & 0x003f)
#define netxen_get_cmd_desc_totallength(cmd_desc) \
- (le32_to_cpu((cmd_desc)->num_of_buffers_total_length) >> 8)
+ ((le32_to_cpu((cmd_desc)->num_of_buffers_total_length) >> 8) & 0xffffff)
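
The rewritten macros fold the clear and the merge into one assignment instead of the old comma expression of `&=` then `|=`: a single store means the descriptor field is never observable half-updated, and the explicit `(u32)` cast keeps the 24-bit mask shift from overflowing a signed int. A minimal host-side sketch of the same read-modify-write, with htole32() standing in for cpu_to_le32():

    #include <endian.h>
    #include <stdint.h>
    #include <stdio.h>

    static void set_field(uint32_t *le_word, uint32_t mask, int shift,
                          uint32_t val)
    {
        /* One expression: clear the field, then merge the new value. */
        *le_word = (*le_word & ~htole32(mask << shift)) |
                   htole32((val & mask) << shift);
    }

    int main(void)
    {
        uint32_t desc = htole32(0xdeadbeef);    /* illustrative word */

        set_field(&desc, 0xff, 0, 0x42);        /* low byte := 0x42 */
        printf("%08x\n", le32toh(desc));        /* deadbe42 */
        return 0;
    }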
struct cmd_desc_type0 {
u8 tcp_hdr_offset; /* For LSO only */
#define netxen_get_sts_desc_lro_last_frag(status_desc) \
(((status_desc)->lro & 0x80) >> 7)
-#define netxen_get_sts_port(status_desc) \
- (le64_to_cpu((status_desc)->status_desc_data) & 0x0F)
-#define netxen_get_sts_status(status_desc) \
- ((le64_to_cpu((status_desc)->status_desc_data) >> 4) & 0x0F)
-#define netxen_get_sts_type(status_desc) \
- ((le64_to_cpu((status_desc)->status_desc_data) >> 8) & 0x0F)
-#define netxen_get_sts_totallength(status_desc) \
- ((le64_to_cpu((status_desc)->status_desc_data) >> 12) & 0xFFFF)
-#define netxen_get_sts_refhandle(status_desc) \
- ((le64_to_cpu((status_desc)->status_desc_data) >> 28) & 0xFFFF)
-#define netxen_get_sts_prot(status_desc) \
- ((le64_to_cpu((status_desc)->status_desc_data) >> 44) & 0x0F)
+#define netxen_get_sts_port(sts_data) \
+ ((sts_data) & 0x0F)
+#define netxen_get_sts_status(sts_data) \
+ (((sts_data) >> 4) & 0x0F)
+#define netxen_get_sts_type(sts_data) \
+ (((sts_data) >> 8) & 0x0F)
+#define netxen_get_sts_totallength(sts_data) \
+ (((sts_data) >> 12) & 0xFFFF)
+#define netxen_get_sts_refhandle(sts_data) \
+ (((sts_data) >> 28) & 0xFFFF)
+#define netxen_get_sts_prot(sts_data) \
+ (((sts_data) >> 44) & 0x0F)
+#define netxen_get_sts_opcode(sts_data) \
+ (((sts_data) >> 58) & 0x03F)
+
#define netxen_get_sts_owner(status_desc) \
((le64_to_cpu((status_desc)->status_desc_data) >> 56) & 0x03)
-#define netxen_get_sts_opcode(status_desc) \
- ((le64_to_cpu((status_desc)->status_desc_data) >> 58) & 0x03F)
-
-#define netxen_clear_sts_owner(status_desc) \
- ((status_desc)->status_desc_data &= \
- ~cpu_to_le64(((unsigned long long)3) << 56 ))
-#define netxen_set_sts_owner(status_desc, val) \
- ((status_desc)->status_desc_data |= \
- cpu_to_le64(((unsigned long long)((val) & 0x3)) << 56 ))
+#define netxen_set_sts_owner(status_desc, val) { \
+ (status_desc)->status_desc_data = \
+ ((status_desc)->status_desc_data & \
+ ~cpu_to_le64(0x3ULL << 56)) | \
+ cpu_to_le64((u64)((val) & 0x3) << 56); \
+}
struct status_desc {
/* Bit pattern: 0-3 port, 4-7 status, 8-11 type, 12-27 total_length
{
struct pci_dev *pdev = adapter->pdev;
struct net_device *netdev = adapter->netdev;
- int index = netxen_get_sts_refhandle(desc);
+ u64 sts_data = le64_to_cpu(desc->status_desc_data);
+ int index = netxen_get_sts_refhandle(sts_data);
struct netxen_recv_context *recv_ctx = &(adapter->recv_ctx[ctxid]);
struct netxen_rx_buffer *buffer;
struct sk_buff *skb;
- u32 length = netxen_get_sts_totallength(desc);
+ u32 length = netxen_get_sts_totallength(sts_data);
u32 desc_ctx;
struct netxen_rcv_desc_ctx *rcv_desc;
int ret;
- desc_ctx = netxen_get_sts_type(desc);
+ desc_ctx = netxen_get_sts_type(sts_data);
if (unlikely(desc_ctx >= NUM_RCV_DESC_RINGS)) {
printk("%s: %s Bad Rcv descriptor ring\n",
netxen_nic_driver_name, netdev->name);
skb = (struct sk_buff *)buffer->skb;
if (likely(adapter->rx_csum &&
- netxen_get_sts_status(desc) == STATUS_CKSUM_OK)) {
+ netxen_get_sts_status(sts_data) == STATUS_CKSUM_OK)) {
adapter->stats.csummed++;
skb->ip_summed = CHECKSUM_UNNECESSARY;
} else
break;
}
netxen_process_rcv(adapter, ctxid, desc);
- netxen_clear_sts_owner(desc);
netxen_set_sts_owner(desc, STATUS_OWNER_PHANTOM);
consumer = (consumer + 1) & (adapter->max_rx_desc_count - 1);
count++;
struct pci_dev *pdev;
struct netxen_skb_frag *frag;
u32 i;
- struct sk_buff *skb = NULL;
int done;
spin_lock(&adapter->tx_lock);
while ((last_consumer != consumer) && (count1 < MAX_STATUS_HANDLE)) {
buffer = &adapter->cmd_buf_arr[last_consumer];
pdev = adapter->pdev;
- frag = &buffer->frag_array[0];
- skb = buffer->skb;
- if (skb && (cmpxchg(&buffer->skb, skb, 0) == skb)) {
+ if (buffer->skb) {
+ frag = &buffer->frag_array[0];
pci_unmap_single(pdev, frag->dma, frag->length,
PCI_DMA_TODEVICE);
frag->dma = 0ULL;
}
adapter->stats.skbfreed++;
- dev_kfree_skb_any(skb);
- skb = NULL;
+ dev_kfree_skb_any(buffer->skb);
+ buffer->skb = NULL;
} else if (adapter->proc_cmd_buf_counter == 1) {
adapter->stats.txnullskb++;
}
unregister_netdev(netdev);
- if (adapter->stop_port)
- adapter->stop_port(adapter);
-
- netxen_nic_disable_int(adapter);
-
if (adapter->is_up == NETXEN_ADAPTER_UP_MAGIC) {
init_firmware_done++;
netxen_free_hw_resources(adapter);
netif_stop_queue(netdev);
napi_disable(&adapter->napi);
+ if (adapter->stop_port)
+ adapter->stop_port(adapter);
+
netxen_nic_disable_int(adapter);
cmd_buff = adapter->cmd_buf_arr;
return NETDEV_TX_OK;
}
- /*
- * Everything is set up. Now, we just need to transmit it out.
- * Note that we have to copy the contents of buffer over to
- * right place. Later on, this can be optimized out by de-coupling the
- * producer index from the buffer index.
- */
- retry_getting_window:
- spin_lock_bh(&adapter->tx_lock);
- if (adapter->total_threads >= MAX_XMIT_PRODUCERS) {
- spin_unlock_bh(&adapter->tx_lock);
- /*
- * Yield CPU
- */
- if (!in_atomic())
- schedule();
- else {
- for (i = 0; i < 20; i++)
- cpu_relax(); /*This a nop instr on i386 */
- }
- goto retry_getting_window;
- }
- local_producer = adapter->cmd_producer;
/* There 4 fragments per descriptor */
no_of_desc = (frag_count + 3) >> 2;
if (netdev->features & NETIF_F_TSO) {
}
}
}
+
+ spin_lock_bh(&adapter->tx_lock);
+ if (adapter->total_threads >= MAX_XMIT_PRODUCERS)
+ goto out_requeue;
+ local_producer = adapter->cmd_producer;
k = adapter->cmd_producer;
max_tx_desc_count = adapter->max_tx_desc_count;
last_cmd_consumer = adapter->last_cmd_consumer;
if ((k + no_of_desc) >=
((last_cmd_consumer <= k) ? last_cmd_consumer + max_tx_desc_count :
last_cmd_consumer)) {
- netif_stop_queue(netdev);
- adapter->flags |= NETXEN_NETDEV_STATUS;
- spin_unlock_bh(&adapter->tx_lock);
- return NETDEV_TX_BUSY;
+ goto out_requeue;
}
k = get_index_range(k, max_tx_desc_count, no_of_desc);
adapter->cmd_producer = k;
adapter->max_tx_desc_count);
hwdesc = &hw->cmd_desc_head[producer];
memset(hwdesc, 0, sizeof(struct cmd_desc_type0));
+ pbuf = &adapter->cmd_buf_arr[producer];
+ pbuf->skb = NULL;
}
frag = &skb_shinfo(skb)->frags[i - 1];
len = frag->size;
}
/* copy the MAC/IP/TCP headers to the cmd descriptor list */
hwdesc = &hw->cmd_desc_head[producer];
+ pbuf = &adapter->cmd_buf_arr[producer];
+ pbuf->skb = NULL;
/* copy the first 64 bytes */
memcpy(((void *)hwdesc) + 2,
if (more_hdr) {
hwdesc = &hw->cmd_desc_head[producer];
+ pbuf = &adapter->cmd_buf_arr[producer];
+ pbuf->skb = NULL;
/* copy the next 64 bytes - should be enough except
* for pathological case
*/
}
}
- i = netxen_get_cmd_desc_totallength(&hw->cmd_desc_head[saved_producer]);
-
- hw->cmd_desc_head[saved_producer].flags_opcode =
- cpu_to_le16(hw->cmd_desc_head[saved_producer].flags_opcode);
- hw->cmd_desc_head[saved_producer].num_of_buffers_total_length =
- cpu_to_le32(hw->cmd_desc_head[saved_producer].
- num_of_buffers_total_length);
-
spin_lock_bh(&adapter->tx_lock);
- adapter->stats.txbytes += i;
+ adapter->stats.txbytes += skb->len;
/* Code to update the adapter considering how many producer threads
are currently working */
}
adapter->stats.xmitfinished++;
- spin_unlock_bh(&adapter->tx_lock);
-
netdev->trans_start = jiffies;
- DPRINTK(INFO, "wrote CMD producer %x to phantom\n", producer);
-
- DPRINTK(INFO, "Done. Send\n");
+ spin_unlock_bh(&adapter->tx_lock);
return NETDEV_TX_OK;
+
+out_requeue:
+ netif_stop_queue(netdev);
+ adapter->flags |= NETXEN_NETDEV_STATUS;
+
+ spin_unlock_bh(&adapter->tx_lock);
+ return NETDEV_TX_BUSY;
}
static void netxen_watchdog(unsigned long v)
budget / MAX_RCV_CTX);
}
- if (work_done >= budget && netxen_nic_rx_has_work(adapter) != 0)
+ if (work_done >= budget)
done = 0;
if (netxen_process_cmd_ring((unsigned long)adapter) == 0)
__u32 mac_cfg;
u32 port = physical_port[adapter->portnum];
- if (port != 0)
+ if (port > NETXEN_NIU_MAX_XG_PORTS)
return -EINVAL;
+
mac_cfg = 0;
- netxen_xg_soft_reset(mac_cfg);
- if (netxen_nic_hw_write_wx(adapter, NETXEN_NIU_XGE_CONFIG_0,
- &mac_cfg, 4))
+ if (netxen_nic_hw_write_wx(adapter,
+ NETXEN_NIU_XGE_CONFIG_0 + (0x10000 * port), &mac_cfg, 4))
return -EIO;
return 0;
}
#define DRV_MODULE_NAME "niu"
#define PFX DRV_MODULE_NAME ": "
-#define DRV_MODULE_VERSION "0.5"
-#define DRV_MODULE_RELDATE "October 5, 2007"
+#define DRV_MODULE_VERSION "0.6"
+#define DRV_MODULE_RELDATE "January 5, 2008"
static char version[] __devinitdata =
DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
return 0;
}
-static int xcvr_init_10g(struct niu *np)
+static int mrvl88x2011_act_led(struct niu *np, int val)
+{
+ int err;
+
+ err = mdio_read(np, np->phy_addr, MRVL88X2011_USER_DEV2_ADDR,
+ MRVL88X2011_LED_8_TO_11_CTL);
+ if (err < 0)
+ return err;
+
+ err &= ~MRVL88X2011_LED(MRVL88X2011_LED_ACT, MRVL88X2011_LED_CTL_MASK);
+ err |= MRVL88X2011_LED(MRVL88X2011_LED_ACT, val);
+
+ return mdio_write(np, np->phy_addr, MRVL88X2011_USER_DEV2_ADDR,
+ MRVL88X2011_LED_8_TO_11_CTL, err);
+}
+
+static int mrvl88x2011_led_blink_rate(struct niu *np, int rate)
+{
+ int err;
+
+ err = mdio_read(np, np->phy_addr, MRVL88X2011_USER_DEV2_ADDR,
+ MRVL88X2011_LED_BLINK_CTL);
+ if (err >= 0) {
+ err &= ~MRVL88X2011_LED_BLKRATE_MASK;
+ err |= (rate << 4);
+
+ err = mdio_write(np, np->phy_addr, MRVL88X2011_USER_DEV2_ADDR,
+ MRVL88X2011_LED_BLINK_CTL, err);
+ }
+
+ return err;
+}
+
+static int xcvr_init_10g_mrvl88x2011(struct niu *np)
+{
+ int err;
+
+ /* Set LED functions */
+ err = mrvl88x2011_led_blink_rate(np, MRVL88X2011_LED_BLKRATE_134MS);
+ if (err)
+ return err;
+
+ /* led activity */
+ err = mrvl88x2011_act_led(np, MRVL88X2011_LED_CTL_OFF);
+ if (err)
+ return err;
+
+ err = mdio_read(np, np->phy_addr, MRVL88X2011_USER_DEV3_ADDR,
+ MRVL88X2011_GENERAL_CTL);
+ if (err < 0)
+ return err;
+
+ err |= MRVL88X2011_ENA_XFPREFCLK;
+
+ err = mdio_write(np, np->phy_addr, MRVL88X2011_USER_DEV3_ADDR,
+ MRVL88X2011_GENERAL_CTL, err);
+ if (err < 0)
+ return err;
+
+ err = mdio_read(np, np->phy_addr, MRVL88X2011_USER_DEV1_ADDR,
+ MRVL88X2011_PMA_PMD_CTL_1);
+ if (err < 0)
+ return err;
+
+ if (np->link_config.loopback_mode == LOOPBACK_MAC)
+ err |= MRVL88X2011_LOOPBACK;
+ else
+ err &= ~MRVL88X2011_LOOPBACK;
+
+ err = mdio_write(np, np->phy_addr, MRVL88X2011_USER_DEV1_ADDR,
+ MRVL88X2011_PMA_PMD_CTL_1, err);
+ if (err < 0)
+ return err;
+
+ /* Enable PMD */
+ return mdio_write(np, np->phy_addr, MRVL88X2011_USER_DEV1_ADDR,
+ MRVL88X2011_10G_PMD_TX_DIS, MRVL88X2011_ENA_PMDTX);
+}
+
+static int xcvr_init_10g_bcm8704(struct niu *np)
{
struct niu_link_config *lp = &np->link_config;
u16 analog_stat0, tx_alarm_status;
int err;
- u64 val;
-
- val = nr64_mac(XMAC_CONFIG);
- val &= ~XMAC_CONFIG_LED_POLARITY;
- val |= XMAC_CONFIG_FORCE_LED_ON;
- nw64_mac(XMAC_CONFIG, val);
-
- /* XXX shared resource, lock parent XXX */
- val = nr64(MIF_CONFIG);
- val |= MIF_CONFIG_INDIRECT_MODE;
- nw64(MIF_CONFIG, val);
err = bcm8704_reset(np);
if (err)
return 0;
}
+static int xcvr_init_10g(struct niu *np)
+{
+ int phy_id, err;
+ u64 val;
+
+ val = nr64_mac(XMAC_CONFIG);
+ val &= ~XMAC_CONFIG_LED_POLARITY;
+ val |= XMAC_CONFIG_FORCE_LED_ON;
+ nw64_mac(XMAC_CONFIG, val);
+
+ /* XXX shared resource, lock parent XXX */
+ val = nr64(MIF_CONFIG);
+ val |= MIF_CONFIG_INDIRECT_MODE;
+ nw64(MIF_CONFIG, val);
+
+ phy_id = phy_decode(np->parent->port_phy, np->port);
+ phy_id = np->parent->phy_probe_info.phy_id[phy_id][np->port];
+
+ /* handle different phy types */
+ switch (phy_id & NIU_PHY_ID_MASK) {
+ case NIU_PHY_ID_MRVL88X2011:
+ err = xcvr_init_10g_mrvl88x2011(np);
+ break;
+
+ default: /* bcom 8704 */
+ err = xcvr_init_10g_bcm8704(np);
+ break;
+ }
+
+ return err;
+}
+
static int mii_reset(struct niu *np)
{
int limit, err;
return 0;
}
-static int link_status_10g(struct niu *np, int *link_up_p)
+static int link_status_10g_mrvl(struct niu *np, int *link_up_p)
{
- unsigned long flags;
- int err, link_up;
+ int err, link_up, pma_status, pcs_status;
link_up = 0;
- spin_lock_irqsave(&np->lock, flags);
+ err = mdio_read(np, np->phy_addr, MRVL88X2011_USER_DEV1_ADDR,
+ MRVL88X2011_10G_PMD_STATUS_2);
+ if (err < 0)
+ goto out;
- err = -EINVAL;
- if (np->link_config.loopback_mode != LOOPBACK_DISABLED)
+ /* Check PMA/PMD Register: 1.0001.2 == 1 */
+ err = mdio_read(np, np->phy_addr, MRVL88X2011_USER_DEV1_ADDR,
+ MRVL88X2011_PMA_PMD_STATUS_1);
+ if (err < 0)
+ goto out;
+
+ pma_status = ((err & MRVL88X2011_LNK_STATUS_OK) ? 1 : 0);
+
+ /* Check PMC Register : 3.0001.2 == 1: read twice */
+ err = mdio_read(np, np->phy_addr, MRVL88X2011_USER_DEV3_ADDR,
+ MRVL88X2011_PMA_PMD_STATUS_1);
+ if (err < 0)
+ goto out;
+
+ err = mdio_read(np, np->phy_addr, MRVL88X2011_USER_DEV3_ADDR,
+ MRVL88X2011_PMA_PMD_STATUS_1);
+ if (err < 0)
goto out;
+ pcs_status = ((err & MRVL88X2011_LNK_STATUS_OK) ? 1 : 0);
+
+ /* Check XGXS Register : 4.0018.[0-3,12] */
+ err = mdio_read(np, np->phy_addr, MRVL88X2011_USER_DEV4_ADDR,
+ MRVL88X2011_10G_XGXS_LANE_STAT);
+ if (err < 0)
+ goto out;
+
+ if (err == (PHYXS_XGXS_LANE_STAT_ALINGED | PHYXS_XGXS_LANE_STAT_LANE3 |
+ PHYXS_XGXS_LANE_STAT_LANE2 | PHYXS_XGXS_LANE_STAT_LANE1 |
+ PHYXS_XGXS_LANE_STAT_LANE0 | PHYXS_XGXS_LANE_STAT_MAGIC |
+ 0x800))
+ link_up = (pma_status && pcs_status) ? 1 : 0;
+
+ np->link_config.active_speed = SPEED_10000;
+ np->link_config.active_duplex = DUPLEX_FULL;
+ err = 0;
+out:
+ mrvl88x2011_act_led(np, (link_up ?
+ MRVL88X2011_LED_CTL_PCS_ACT :
+ MRVL88X2011_LED_CTL_OFF));
+
+ *link_up_p = link_up;
+ return err;
+}
+
+static int link_status_10g_bcom(struct niu *np, int *link_up_p)
+{
+ int err, link_up;
+
+ link_up = 0;
+
err = mdio_read(np, np->phy_addr, BCM8704_PMA_PMD_DEV_ADDR,
BCM8704_PMD_RCV_SIGDET);
if (err < 0)
err = 0;
out:
+ *link_up_p = link_up;
+ return err;
+}
+
+static int link_status_10g(struct niu *np, int *link_up_p)
+{
+ unsigned long flags;
+ int err = -EINVAL;
+
+ spin_lock_irqsave(&np->lock, flags);
+
+ if (np->link_config.loopback_mode == LOOPBACK_DISABLED) {
+ int phy_id;
+
+ phy_id = phy_decode(np->parent->port_phy, np->port);
+ phy_id = np->parent->phy_probe_info.phy_id[phy_id][np->port];
+
+ /* handle different phy types */
+ switch (phy_id & NIU_PHY_ID_MASK) {
+ case NIU_PHY_ID_MRVL88X2011:
+ err = link_status_10g_mrvl(np, link_up_p);
+ break;
+
+ default: /* bcom 8704 */
+ err = link_status_10g_bcom(np, link_up_p);
+ break;
+ }
+ }
+
spin_unlock_irqrestore(&np->lock, flags);
- *link_up_p = link_up;
return err;
}
static int link_status_1g(struct niu *np, int *link_up_p)
{
+ struct niu_link_config *lp = &np->link_config;
u16 current_speed, bmsr;
unsigned long flags;
u8 current_duplex;
link_up = 0;
}
}
+ lp->active_speed = current_speed;
+ lp->active_duplex = current_duplex;
err = 0;
out:
skb->protocol = eth_type_trans(skb, np->dev);
netif_receive_skb(skb);
+ np->dev->last_rx = jiffies;
+
return num_rcr;
}
u64 stat = nr64(RX_DMA_CTL_STAT(rp->rx_channel));
int err = 0;
- dev_err(np->device, PFX "%s: RX channel %u error, stat[%llx]\n",
- np->dev->name, rp->rx_channel, (unsigned long long) stat);
-
- niu_log_rxchan_errors(np, rp, stat);
if (stat & (RX_DMA_CTL_STAT_CHAN_FATAL |
RX_DMA_CTL_STAT_PORT_FATAL))
err = -EINVAL;
+ if (err) {
+ dev_err(np->device, PFX "%s: RX channel %u error, stat[%llx]\n",
+ np->dev->name, rp->rx_channel,
+ (unsigned long long) stat);
+
+ niu_log_rxchan_errors(np, rp, stat);
+ }
+
nw64(RX_DMA_CTL_STAT(rp->rx_channel),
stat & RX_DMA_CTL_WRITE_CLEAR_ERRS);
return -ENODEV;
}
-static int niu_slowpath_interrupt(struct niu *np, struct niu_ldg *lp)
+static int niu_slowpath_interrupt(struct niu *np, struct niu_ldg *lp,
+ u64 v0, u64 v1, u64 v2)
{
- u64 v0 = lp->v0;
- u64 v1 = lp->v1;
- u64 v2 = lp->v2;
+
int i, err = 0;
+ lp->v0 = v0;
+ lp->v1 = v1;
+ lp->v2 = v2;
+
if (v1 & 0x00000000ffffffffULL) {
u32 rx_vec = (v1 & 0xffffffff);
if (rx_vec & (1 << rp->rx_channel)) {
int r = niu_rx_error(np, rp);
- if (r)
+ if (r) {
err = r;
+ } else {
+ if (!v0)
+ nw64(RX_DMA_CTL_STAT(rp->rx_channel),
+ RX_DMA_CTL_STAT_MEX);
+ }
}
}
}
if (err)
niu_enable_interrupts(np, 0);
- return -EINVAL;
+ return err;
}
static void niu_rxchan_intr(struct niu *np, struct rx_ring_info *rp,
}
if (unlikely((v0 & ((u64)1 << LDN_MIF)) || v1 || v2)) {
- int err = niu_slowpath_interrupt(np, lp);
+ int err = niu_slowpath_interrupt(np, lp, v0, v1, v2);
if (err)
goto out;
}
}
kfree_skb(skb);
skb = skb_new;
- }
+ } else
+ skb_orphan(skb);
align = ((unsigned long) skb->data & (16 - 1));
headroom = align + sizeof(struct tx_pkt_hdr);
if (dev_id_1 < 0 || dev_id_2 < 0)
return 0;
if (type == PHY_TYPE_PMA_PMD || type == PHY_TYPE_PCS) {
- if ((id & NIU_PHY_ID_MASK) != NIU_PHY_ID_BCM8704)
+ if (((id & NIU_PHY_ID_MASK) != NIU_PHY_ID_BCM8704) &&
+ ((id & NIU_PHY_ID_MASK) != NIU_PHY_ID_MRVL88X2011))
return 0;
} else {
if ((id & NIU_PHY_ID_MASK) != NIU_PHY_ID_BCM5464R)
#define NIU_PHY_ID_MASK 0xfffff0f0
#define NIU_PHY_ID_BCM8704 0x00206030
#define NIU_PHY_ID_BCM5464R 0x002060b0
+#define NIU_PHY_ID_MRVL88X2011 0x01410020
+
+/* MRVL88X2011 register addresses */
+#define MRVL88X2011_USER_DEV1_ADDR 1
+#define MRVL88X2011_USER_DEV2_ADDR 2
+#define MRVL88X2011_USER_DEV3_ADDR 3
+#define MRVL88X2011_USER_DEV4_ADDR 4
+#define MRVL88X2011_PMA_PMD_CTL_1 0x0000
+#define MRVL88X2011_PMA_PMD_STATUS_1 0x0001
+#define MRVL88X2011_10G_PMD_STATUS_2 0x0008
+#define MRVL88X2011_10G_PMD_TX_DIS 0x0009
+#define MRVL88X2011_10G_XGXS_LANE_STAT 0x0018
+#define MRVL88X2011_GENERAL_CTL 0x8300
+#define MRVL88X2011_LED_BLINK_CTL 0x8303
+#define MRVL88X2011_LED_8_TO_11_CTL 0x8306
+
+/* MRVL88X2011 register control */
+#define MRVL88X2011_ENA_XFPREFCLK 0x0001
+#define MRVL88X2011_ENA_PMDTX 0x0000
+#define MRVL88X2011_LOOPBACK 0x1
+#define MRVL88X2011_LED_ACT 0x1
+#define MRVL88X2011_LNK_STATUS_OK 0x4
+#define MRVL88X2011_LED_BLKRATE_MASK 0x70
+#define MRVL88X2011_LED_BLKRATE_034MS 0x0
+#define MRVL88X2011_LED_BLKRATE_067MS 0x1
+#define MRVL88X2011_LED_BLKRATE_134MS 0x2
+#define MRVL88X2011_LED_BLKRATE_269MS 0x3
+#define MRVL88X2011_LED_BLKRATE_538MS 0x4
+#define MRVL88X2011_LED_CTL_OFF 0x0
+#define MRVL88X2011_LED_CTL_PCS_ACT 0x5
+#define MRVL88X2011_LED_CTL_MASK 0x7
+#define MRVL88X2011_LED(n,v) ((v)<<((n)*4))
+#define MRVL88X2011_LED_STAT(n,v) ((v)>>((n)*4))
#define BCM8704_PMA_PMD_DEV_ADDR 1
#define BCM8704_PCS_DEV_ADDR 2
enum Window3 { /* Window 3: MAC/config bits. */
Wn3_Config=0, Wn3_MAC_Ctrl=6, Wn3_Options=8,
};
-union wn3_config {
- int i;
- struct w3_config_fields {
- unsigned int ram_size:3, ram_width:1, ram_speed:2, rom_size:2;
- int pad8:8;
- unsigned int ram_split:2, pad18:2, xcvr:3, pad21:1, autoselect:1;
- int pad24:7;
- } u;
+enum wn3_config {
+ Ram_size = 7,
+ Ram_width = 8,
+ Ram_speed = 0x30,
+ Rom_size = 0xc0,
+ Ram_split_shift = 16,
+ Ram_split = 3 << Ram_split_shift,
+ Xcvr_shift = 20,
+ Xcvr = 7 << Xcvr_shift,
+ Autoselect = 0x1000000,
};
enum Window4 { /* Window 4: Xcvr/media bits. */
struct net_device *dev = link->priv;
struct el3_private *lp = netdev_priv(dev);
tuple_t tuple;
- unsigned short buf[32];
+ __le16 buf[32];
int last_fn, last_ret, i, j;
kio_addr_t ioaddr;
- u16 *phys_addr;
+ __be16 *phys_addr;
char *cardname;
- union wn3_config config;
+ __u32 config;
DECLARE_MAC_BUF(mac);
- phys_addr = (u16 *)dev->dev_addr;
+ phys_addr = (__be16 *)dev->dev_addr;
DEBUG(0, "3c574_config(0x%p)\n", link);
if (pcmcia_get_first_tuple(link, &tuple) == CS_SUCCESS) {
pcmcia_get_tuple_data(link, &tuple);
for (i = 0; i < 3; i++)
- phys_addr[i] = htons(buf[i]);
+ phys_addr[i] = htons(le16_to_cpu(buf[i]));
} else {
EL3WINDOW(0);
for (i = 0; i < 3; i++)
phys_addr[i] = htons(read_eeprom(ioaddr, i + 10));
- if (phys_addr[0] == 0x6060) {
+ if (phys_addr[0] == htons(0x6060)) {
printk(KERN_NOTICE "3c574_cs: IO port conflict at 0x%03lx"
"-0x%03lx\n", dev->base_addr, dev->base_addr+15);
goto failed;
outw(0<<11, ioaddr + RunnerRdCtrl);
printk(KERN_INFO " ASIC rev %d,", mcr>>3);
EL3WINDOW(3);
- config.i = inl(ioaddr + Wn3_Config);
- lp->default_media = config.u.xcvr;
- lp->autoselect = config.u.autoselect;
+ config = inl(ioaddr + Wn3_Config);
+ lp->default_media = (config & Xcvr) >> Xcvr_shift;
+ lp->autoselect = config & Autoselect ? 1 : 0;
}
init_timer(&lp->media);
dev->name, cardname, dev->base_addr, dev->irq,
print_mac(mac, dev->dev_addr));
printk(" %dK FIFO split %s Rx:Tx, %sMII interface.\n",
- 8 << config.u.ram_size, ram_split[config.u.ram_split],
- config.u.autoselect ? "autoselect " : "");
+ 8 << (config & Ram_size),
+ ram_split[(config & Ram_split) >> Ram_split_shift],
+ config & Autoselect ? "autoselect " : "");
return 0;
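
One precedence trap in this union-to-mask conversion deserves a note: `<<` binds tighter than `&`, so the FIFO-size expression needs parentheses around the mask, `8 << (config & Ram_size)`, or the mask is applied to the shifted result instead of to the field. Quick standalone check with an invented config value:

    #include <stdio.h>

    int main(void)
    {
        unsigned config = 0x5;      /* hypothetical low bits of Wn3_Config */

        printf("%u vs %u\n", 8 << config & 7, 8 << (config & 7));
        /* prints "0 vs 256" */
        return 0;
    }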
struct net_device *dev = link->priv;
struct el3_private *lp = netdev_priv(dev);
tuple_t tuple;
- u16 buf[32], *phys_addr;
+ __le16 buf[32];
+ __be16 *phys_addr;
int last_fn, last_ret, i, j, multi = 0, fifo;
kio_addr_t ioaddr;
char *ram_split[] = {"5:3", "3:1", "1:1", "3:5"};
DEBUG(0, "3c589_config(0x%p)\n", link);
- phys_addr = (u16 *)dev->dev_addr;
+ phys_addr = (__be16 *)dev->dev_addr;
tuple.Attributes = 0;
tuple.TupleData = (cisdata_t *)buf;
tuple.TupleDataMax = sizeof(buf);
if (pcmcia_get_first_tuple(link, &tuple) == CS_SUCCESS) {
pcmcia_get_tuple_data(link, &tuple);
for (i = 0; i < 3; i++)
- phys_addr[i] = htons(buf[i]);
+ phys_addr[i] = htons(le16_to_cpu(buf[i]));
} else {
for (i = 0; i < 3; i++)
phys_addr[i] = htons(read_eeprom(ioaddr, i));
- if (phys_addr[0] == 0x6060) {
+ if (phys_addr[0] == htons(0x6060)) {
printk(KERN_ERR "3c589_cs: IO port conflict at 0x%03lx"
"-0x%03lx\n", dev->base_addr, dev->base_addr+15);
goto failed;
{
#ifdef CONFIG_PCNET32_NAPI
struct pcnet32_private *lp = netdev_priv(dev);
+ ulong ioaddr = dev->base_addr;
+ u16 val;
#endif
netif_wake_queue(dev);
#ifdef CONFIG_PCNET32_NAPI
+ val = lp->a.read_csr(ioaddr, CSR3);
+ val &= 0x00ff;
+ lp->a.write_csr(ioaddr, CSR3, val);
napi_enable(&lp->napi);
#endif
}
unsigned long hw_flags;
struct ql3xxx_port_registers __iomem *port_regs = qdev->mem_map_registers;
- if (!netif_carrier_ok(ndev))
- goto quit_polling;
-
ql_tx_rx_clean(qdev, &tx_cleaned, &rx_cleaned, budget);
- if (tx_cleaned + rx_cleaned != budget ||
- !netif_running(ndev)) {
-quit_polling:
+ if (tx_cleaned + rx_cleaned != budget) {
spin_lock_irqsave(&qdev->hw_lock, hw_flags);
__netif_rx_complete(ndev, napi);
ql_update_small_bufq_prod_index(qdev);
u32 clk;
clk = RTL_R8(Config2) & PCI_Clock_66MHz;
- for (i = 0; i < ARRAY_SIZE(cfg2_info); i++) {
+ for (i = 0; i < ARRAY_SIZE(cfg2_info); i++, p++) {
if ((p->mac_version == mac_version) && (p->clk == clk)) {
RTL_W32(0x7c, p->val);
break;
static inline void rtl8169_make_unusable_by_asic(struct RxDesc *desc)
{
- desc->addr = 0x0badbadbadbadbadull;
+ desc->addr = cpu_to_le64(0x0badbadbadbadbadull);
desc->opts1 &= ~cpu_to_le32(DescOwn | RsvdMask);
}
rtl8169_irq_mask_and_ack(ioaddr);
#ifdef CONFIG_R8169_NAPI
+ tp->intr_mask = 0xffff;
+ RTL_W16(IntrMask, tp->intr_event);
napi_enable(&tp->napi);
#endif
}
}
/* Work around for AMD plateform. */
- if ((desc->opts2 & 0xfffe000) &&
+ if ((desc->opts2 & cpu_to_le32(0xfffe000)) &&
(tp->mac_version == RTL_GIGA_MAC_VER_05)) {
desc->opts2 = 0;
cur_rx++;
{
struct rr_private *rrpriv;
struct rr_regs __iomem *regs;
- struct eeprom *hw = NULL;
u32 start_pc;
int i;
writel(RBURST_64|WBURST_64, ®s->PciState);
wmb();
- start_pc = rr_read_eeprom_word(rrpriv, &hw->rncd_info.FwStart);
+ start_pc = rr_read_eeprom_word(rrpriv,
+ offsetof(struct eeprom, rncd_info.FwStart));
#if (DEBUG > 1)
printk("%s: Executing firmware at address 0x%06x\n",
* it to our CPU byte-order.
*/
static u32 rr_read_eeprom_word(struct rr_private *rrpriv,
- void * offset)
+ size_t offset)
{
- u32 word;
+ __be32 word;
- if ((rr_read_eeprom(rrpriv, (unsigned long)offset,
- (char *)&word, 4) == 4))
+ if ((rr_read_eeprom(rrpriv, offset,
+ (unsigned char *)&word, 4) == 4))
return be32_to_cpu(word);
return 0;
}
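
The removed `struct eeprom *hw = NULL` trick computed EEPROM offsets by taking member addresses off a null pointer, which is undefined behavior and the reason the old prototype had to take `void *`. offsetof() yields the same byte offsets portably. A sketch with a trimmed stand-in layout (the real struct eeprom has more members):

    #include <stddef.h>
    #include <stdio.h>

    struct eeprom_manf {            /* trimmed stand-in, not the real layout */
        unsigned int HeaderFmt;
        unsigned char BoardULA[8];
    };

    struct eeprom {
        struct eeprom_manf manf;
    };

    int main(void)
    {
        printf("HeaderFmt at %zu, BoardULA[4] at %zu\n",
               offsetof(struct eeprom, manf.HeaderFmt),
               offsetof(struct eeprom, manf.BoardULA[4]));
        return 0;
    }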
{
struct rr_private *rrpriv;
struct rr_regs __iomem *regs;
- struct eeprom *hw = NULL;
u32 sram_size, rev;
DECLARE_MAC_BUF(mac);
* other method I've seen. -VAL
*/
- *(u16 *)(dev->dev_addr) =
- htons(rr_read_eeprom_word(rrpriv, &hw->manf.BoardULA));
- *(u32 *)(dev->dev_addr+2) =
- htonl(rr_read_eeprom_word(rrpriv, &hw->manf.BoardULA[4]));
+ *(__be16 *)(dev->dev_addr) =
+ htons(rr_read_eeprom_word(rrpriv, offsetof(struct eeprom, manf.BoardULA)));
+ *(__be32 *)(dev->dev_addr+2) =
+ htonl(rr_read_eeprom_word(rrpriv, offsetof(struct eeprom, manf.BoardULA[4])));
printk(" MAC: %s\n", print_mac(mac, dev->dev_addr));
- sram_size = rr_read_eeprom_word(rrpriv, (void *)8);
+ sram_size = rr_read_eeprom_word(rrpriv, 8);
printk(" SRAM size 0x%06x\n", sram_size);
return 0;
{
struct rr_private *rrpriv;
struct rr_regs __iomem *regs;
- unsigned long eptr, segptr;
+ size_t eptr, segptr;
int i, j;
u32 localctrl, sptr, len, tmp;
u32 p2len, p2size, nr_seg, revision, io, sram_size;
- struct eeprom *hw = NULL;
rrpriv = netdev_priv(dev);
regs = rrpriv->regs;
*/
io = readl(®s->ExtIo);
writel(0, ®s->ExtIo);
- sram_size = rr_read_eeprom_word(rrpriv, (void *)8);
+ sram_size = rr_read_eeprom_word(rrpriv, 8);
for (i = 200; i < sram_size / 4; i++){
writel(i * 4, ®s->WinBase);
writel(io, ®s->ExtIo);
mb();
- eptr = (unsigned long)rr_read_eeprom_word(rrpriv,
- &hw->rncd_info.AddrRunCodeSegs);
+ eptr = rr_read_eeprom_word(rrpriv,
+ offsetof(struct eeprom, rncd_info.AddrRunCodeSegs));
eptr = ((eptr & 0x1fffff) >> 3);
- p2len = rr_read_eeprom_word(rrpriv, (void *)(0x83*4));
+ p2len = rr_read_eeprom_word(rrpriv, 0x83*4);
p2len = (p2len << 2);
- p2size = rr_read_eeprom_word(rrpriv, (void *)(0x84*4));
+ p2size = rr_read_eeprom_word(rrpriv, 0x84*4);
p2size = ((p2size & 0x1fffff) >> 3);
if ((eptr < p2size) || (eptr > (p2size + p2len))){
goto out;
}
- revision = rr_read_eeprom_word(rrpriv, &hw->manf.HeaderFmt);
+ revision = rr_read_eeprom_word(rrpriv,
+ offsetof(struct eeprom, manf.HeaderFmt));
if (revision != 1){
printk("%s: invalid firmware format (%i)\n",
goto out;
}
- nr_seg = rr_read_eeprom_word(rrpriv, (void *)eptr);
+ nr_seg = rr_read_eeprom_word(rrpriv, eptr);
eptr +=4;
#if (DEBUG > 1)
printk("%s: nr_seg %i\n", dev->name, nr_seg);
#endif
for (i = 0; i < nr_seg; i++){
- sptr = rr_read_eeprom_word(rrpriv, (void *)eptr);
+ sptr = rr_read_eeprom_word(rrpriv, eptr);
eptr += 4;
- len = rr_read_eeprom_word(rrpriv, (void *)eptr);
+ len = rr_read_eeprom_word(rrpriv, eptr);
eptr += 4;
- segptr = (unsigned long)rr_read_eeprom_word(rrpriv, (void *)eptr);
+ segptr = rr_read_eeprom_word(rrpriv, eptr);
segptr = ((segptr & 0x1fffff) >> 3);
eptr += 4;
#if (DEBUG > 1)
dev->name, i, sptr, len, segptr);
#endif
for (j = 0; j < len; j++){
- tmp = rr_read_eeprom_word(rrpriv, (void *)segptr);
+ tmp = rr_read_eeprom_word(rrpriv, segptr);
writel(sptr, ®s->WinBase);
mb();
writel(tmp, ®s->WinData);
unsigned long offset,
unsigned char *buf,
unsigned long length);
-static u32 rr_read_eeprom_word(struct rr_private *rrpriv, void * offset);
+static u32 rr_read_eeprom_word(struct rr_private *rrpriv, size_t offset);
static int rr_load_firmware(struct net_device *dev);
static inline void rr_raz_tx(struct rr_private *, struct net_device *);
static inline void rr_raz_rx(struct rr_private *, struct net_device *);
#include "s2io.h"
#include "s2io-regs.h"
-#define DRV_VERSION "2.0.26.10"
+#define DRV_VERSION "2.0.26.17"
/* S2io Driver name & version. */
static char s2io_driver_name[] = "Neterion";
struct XENA_dev_config __iomem *bar0 = nic->bar0;
int i;
- if (!is_s2io_card_up(nic))
- return 0;
-
mac_control = &nic->mac_control;
config = &nic->config;
netif_carrier_off(dev);
sp->last_link_state = 0;
- napi_enable(&sp->napi);
-
if (sp->config.intr_type == MSI_X) {
int ret = s2io_enable_msi_x(sp);
return 0;
hw_init_failed:
- napi_disable(&sp->napi);
if (sp->config.intr_type == MSI_X) {
if (sp->entries) {
kfree(sp->entries);
return 0;
netif_stop_queue(dev);
- napi_disable(&sp->napi);
/* Reset card, kill tasklet and free Tx and Rx buffers. */
s2io_card_down(sp);
struct XENA_dev_config __iomem *bar0 = sp->bar0;
unsigned long flags;
register u64 val64 = 0;
+ struct config_param *config;
+ config = &sp->config;
if (!is_s2io_card_up(sp))
return;
}
clear_bit(__S2IO_STATE_CARD_UP, &sp->state);
+ /* Disable napi */
+ if (config->napi)
+ napi_disable(&sp->napi);
+
/* disable Tx and Rx traffic on the NIC */
if (do_io)
stop_nic(sp);
DBG_PRINT(INFO_DBG, "Buf in ring:%d is %d:\n", i,
atomic_read(&sp->rx_bufs_left[i]));
}
+
+ /* Initialise napi */
+ if (config->napi)
+ napi_enable(&sp->napi);
+
/* Maintain the state prior to the open */
if (sp->promisc_flg)
sp->promisc_flg = 0;
le = get_tx_le(sky2);
le->addr = 0;
le->opcode = OP_ADDR64 | HW_OWNER;
- sky2->tx_addr64 = 0;
}
static inline struct tx_ring_info *tx_le_re(struct sky2_port *sky2,
dma_addr_t map, unsigned len)
{
struct sky2_rx_le *le;
- u32 hi = upper_32_bits(map);
- if (sky2->rx_addr64 != hi) {
+ if (sizeof(dma_addr_t) > sizeof(u32)) {
le = sky2_next_rx(sky2);
- le->addr = cpu_to_le32(hi);
+ le->addr = cpu_to_le32(upper_32_bits(map));
le->opcode = OP_ADDR64 | HW_OWNER;
- sky2->rx_addr64 = upper_32_bits(map + len);
}
le = sky2_next_rx(sky2);
TX_VLAN_TAG_OFF);
}
+ sky2_read32(hw, B0_Y2_SP_LISR);
napi_enable(&hw->napi);
netif_tx_unlock_bh(dev);
}
struct tx_ring_info *re;
unsigned i, len;
dma_addr_t mapping;
- u32 addr64;
u16 mss;
u8 ctrl;
len = skb_headlen(skb);
mapping = pci_map_single(hw->pdev, skb->data, len, PCI_DMA_TODEVICE);
- addr64 = upper_32_bits(mapping);
- /* Send high bits if changed or crosses boundary */
- if (addr64 != sky2->tx_addr64 ||
- upper_32_bits(mapping + len) != sky2->tx_addr64) {
+ /* Send high bits if needed */
+ if (sizeof(dma_addr_t) > sizeof(u32)) {
le = get_tx_le(sky2);
- le->addr = cpu_to_le32(addr64);
+ le->addr = cpu_to_le32(upper_32_bits(mapping));
le->opcode = OP_ADDR64 | HW_OWNER;
- sky2->tx_addr64 = upper_32_bits(mapping + len);
}
/* Check for TCP Segmentation Offload */
mapping = pci_map_page(hw->pdev, frag->page, frag->page_offset,
frag->size, PCI_DMA_TODEVICE);
- addr64 = upper_32_bits(mapping);
- if (addr64 != sky2->tx_addr64) {
+
+ if (sizeof(dma_addr_t) > sizeof(u32)) {
le = get_tx_le(sky2);
- le->addr = cpu_to_le32(addr64);
+ le->addr = cpu_to_le32(upper_32_bits(mapping));
le->ctrl = 0;
le->opcode = OP_ADDR64 | HW_OWNER;
- sky2->tx_addr64 = addr64;
}
le = get_tx_le(sky2);
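
The sky2 hunks replace the cached tx_addr64/rx_addr64 state with `sizeof(dma_addr_t) > sizeof(u32)`: a constant the compiler folds, so 32-bit builds lose the OP_ADDR64 branch entirely and 64-bit builds always emit the high word rather than tracking when it last changed (the removed comment shows the old cache also had to special-case mappings crossing a 4 GB boundary). A standalone illustration of the constant-folded guard, assuming a 64-bit dma_addr_t:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t dma_addr_t;    /* assumed 64-bit DMA address build */

    static uint32_t upper_32_bits(uint64_t n)   /* mirrors the kernel helper */
    {
        return (uint32_t)(n >> 32);
    }

    int main(void)
    {
        dma_addr_t mapping = 0x0000000123456000ULL;

        /* Constant condition: dead code on a 32-bit dma_addr_t. */
        if (sizeof(dma_addr_t) > sizeof(uint32_t))
            printf("emit OP_ADDR64, high word %08x\n",
                   upper_32_bits(mapping));
        return 0;
    }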
err = sky2_rx_start(sky2);
sky2_write32(hw, B0_IMSK, imask);
+ sky2_read32(hw, B0_Y2_SP_LISR);
napi_enable(&hw->napi);
if (err)
last = sky2_read16(hw, Y2_QADDR(rxqaddr[port], PREF_UNIT_PUT_IDX)),
sky2_read16(hw, Y2_QADDR(rxqaddr[port], PREF_UNIT_LAST_IDX)));
+ sky2_read32(hw, B0_Y2_SP_LISR);
napi_enable(&hw->napi);
return 0;
}
u16 tx_cons; /* next le to check */
u16 tx_prod; /* next le to use */
u16 tx_next; /* debug only */
- u32 tx_addr64;
+
u16 tx_pending;
u16 tx_last_mss;
u32 tx_tcpsum;
struct rx_ring_info *rx_ring ____cacheline_aligned_in_smp;
struct sky2_rx_le *rx_le;
- u32 rx_addr64;
+
u16 rx_next; /* next re to check */
u16 rx_put; /* next le index to use */
u16 rx_pending;
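The sky2 hunks above delete the cached tx_addr64/rx_addr64 high-word state: when dma_addr_t is wider than 32 bits, an OP_ADDR64 list element is now emitted unconditionally before each buffer element, and on 32-bit-DMA builds the sizeof() test is compile-time false so the whole block drops out. A minimal sketch of the address split, assuming the kernel's upper_32_bits() helper from linux/kernel.h (its double shift stays well-defined even when dma_addr_t is only 32 bits wide):

	if (sizeof(dma_addr_t) > sizeof(u32)) {	/* constant-folded */
		le = get_tx_le(sky2);
		le->addr = cpu_to_le32(upper_32_bits(mapping));
		le->opcode = OP_ADDR64 | HW_OWNER;
	}
	le = get_tx_le(sky2);
	le->addr = cpu_to_le32((u32)mapping);	/* low 32 bits follow */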
{
struct platform_device *plat_dev = to_platform_device(dev);
struct pci_dev *pci_dev = data;
- unsigned int id = (pci_dev->bus->number << 8) | pci_dev->devfn;
+ unsigned int id = pci_dev->irq;
return !strcmp(plat_dev->name, "tc35815-mac") && plat_dev->id == id;
}
}
static int tg3_nvram_read(struct tg3 *tp, u32 offset, u32 *val);
+static int tg3_nvram_read_le(struct tg3 *tp, u32 offset, __le32 *val);
static int tg3_nvram_read_swab(struct tg3 *tp, u32 offset, u32 *val);
static int tg3_get_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom, u8 *data)
struct tg3 *tp = netdev_priv(dev);
int ret;
u8 *pd;
- u32 i, offset, len, val, b_offset, b_count;
+ u32 i, offset, len, b_offset, b_count;
+ __le32 val;
if (tp->link_config.phy_is_low_power)
return -EAGAIN;
/* i.e. offset=1 len=2 */
b_count = len;
}
- ret = tg3_nvram_read(tp, offset-b_offset, &val);
+ ret = tg3_nvram_read_le(tp, offset-b_offset, &val);
if (ret)
return ret;
- val = cpu_to_le32(val);
memcpy(data, ((char*)&val) + b_offset, b_count);
len -= b_count;
offset += b_count;
/* read bytes up to the last 4-byte boundary */
pd = &data[eeprom->len];
for (i = 0; i < (len - (len & 3)); i += 4) {
- ret = tg3_nvram_read(tp, offset + i, &val);
+ ret = tg3_nvram_read_le(tp, offset + i, &val);
if (ret) {
eeprom->len += i;
return ret;
}
- val = cpu_to_le32(val);
memcpy(pd + i, &val, 4);
}
eeprom->len += i;
pd = &data[eeprom->len];
b_count = len & 3;
b_offset = offset + len - b_count;
- ret = tg3_nvram_read(tp, b_offset, &val);
+ ret = tg3_nvram_read_le(tp, b_offset, &val);
if (ret)
return ret;
- val = cpu_to_le32(val);
- memcpy(pd, ((char*)&val), b_count);
+ memcpy(pd, &val, b_count);
eeprom->len += b_count;
}
return 0;
{
struct tg3 *tp = netdev_priv(dev);
int ret;
- u32 offset, len, b_offset, odd_len, start, end;
+ u32 offset, len, b_offset, odd_len;
u8 *buf;
+ __le32 start, end;
if (tp->link_config.phy_is_low_power)
return -EAGAIN;
if ((b_offset = (offset & 3))) {
/* adjustments to start on required 4 byte boundary */
- ret = tg3_nvram_read(tp, offset-b_offset, &start);
+ ret = tg3_nvram_read_le(tp, offset-b_offset, &start);
if (ret)
return ret;
- start = cpu_to_le32(start);
len += b_offset;
offset &= ~3;
if (len < 4)
/* adjustments to end on required 4 byte boundary */
odd_len = 1;
len = (len + 3) & ~3;
- ret = tg3_nvram_read(tp, offset+len-4, &end);
+ ret = tg3_nvram_read_le(tp, offset+len-4, &end);
if (ret)
return ret;
- end = cpu_to_le32(end);
}
buf = data;
static int tg3_test_nvram(struct tg3 *tp)
{
- u32 *buf, csum, magic;
+ u32 csum, magic;
+ __le32 *buf;
int i, j, k, err = 0, size;
if (tg3_nvram_read_swab(tp, 0, &magic) != 0)
err = -EIO;
for (i = 0, j = 0; i < size; i += 4, j++) {
- u32 val;
-
- if ((err = tg3_nvram_read(tp, i, &val)) != 0)
+ if ((err = tg3_nvram_read_le(tp, i, &buf[j])) != 0)
break;
- buf[j] = cpu_to_le32(val);
}
if (i < size)
goto out;
/* Selfboot format */
- if ((cpu_to_be32(buf[0]) & TG3_EEPROM_MAGIC_FW_MSK) ==
+ magic = swab32(le32_to_cpu(buf[0]));
+ if ((magic & TG3_EEPROM_MAGIC_FW_MSK) ==
TG3_EEPROM_MAGIC_FW) {
u8 *buf8 = (u8 *) buf, csum8 = 0;
- if ((cpu_to_be32(buf[0]) & TG3_EEPROM_SB_REVISION_MASK) ==
+ if ((magic & TG3_EEPROM_SB_REVISION_MASK) ==
TG3_EEPROM_SB_REVISION_2) {
/* For rev 2, the csum doesn't include the MBA. */
for (i = 0; i < TG3_EEPROM_SB_F1R2_MBA_OFF; i++)
goto out;
}
- if ((cpu_to_be32(buf[0]) & TG3_EEPROM_MAGIC_HW_MSK) ==
+ if ((magic & TG3_EEPROM_MAGIC_HW_MSK) ==
TG3_EEPROM_MAGIC_HW) {
u8 data[NVRAM_SELFBOOT_DATA_SIZE];
u8 parity[NVRAM_SELFBOOT_DATA_SIZE];
/* Bootstrap checksum at offset 0x10 */
csum = calc_crc((unsigned char *) buf, 0x10);
- if(csum != cpu_to_le32(buf[0x10/4]))
+ if(csum != le32_to_cpu(buf[0x10/4]))
goto out;
/* Manufacturing block starts at offset 0x74, checksum at 0xfc */
csum = calc_crc((unsigned char *) &buf[0x74/4], 0x88);
- if (csum != cpu_to_le32(buf[0xfc/4]))
+ if (csum != le32_to_cpu(buf[0xfc/4]))
goto out;
err = 0;
return ret;
}
+static int tg3_nvram_read_le(struct tg3 *tp, u32 offset, __le32 *val)
+{
+ u32 v;
+ int res = tg3_nvram_read(tp, offset, &v);
+ if (!res)
+ *val = cpu_to_le32(v);
+ return res;
+}
+
static int tg3_nvram_read_swab(struct tg3 *tp, u32 offset, u32 *val)
{
int err;
u32 val;
for (i = 0; i < len; i += 4) {
- u32 addr, data;
+ u32 addr;
+ __le32 data;
addr = offset + i;
memcpy(&data, buf + i, 4);
- tw32(GRC_EEPROM_DATA, cpu_to_le32(data));
+ tw32(GRC_EEPROM_DATA, le32_to_cpu(data));
val = tr32(GRC_EEPROM_ADDR);
tw32(GRC_EEPROM_ADDR, val | EEPROM_ADDR_COMPLETE);
phy_addr = offset & ~pagemask;
for (j = 0; j < pagesize; j += 4) {
- if ((ret = tg3_nvram_read(tp, phy_addr + j,
- (u32 *) (tmp + j))))
+ if ((ret = tg3_nvram_read_le(tp, phy_addr + j,
+ (__le32 *) (tmp + j))))
break;
}
if (ret)
break;
for (j = 0; j < pagesize; j += 4) {
- u32 data;
+ __be32 data;
- data = *((u32 *) (tmp + j));
- tw32(NVRAM_WRDATA, cpu_to_be32(data));
+ data = *((__be32 *) (tmp + j));
+ /* swab32(le32_to_cpu(data)), actually */
+ tw32(NVRAM_WRDATA, be32_to_cpu(data));
tw32(NVRAM_ADDR, phy_addr + j);
int i, ret = 0;
for (i = 0; i < len; i += 4, offset += 4) {
- u32 data, page_off, phy_addr, nvram_cmd;
+ u32 page_off, phy_addr, nvram_cmd;
+ __be32 data;
memcpy(&data, buf + i, 4);
- tw32(NVRAM_WRDATA, cpu_to_be32(data));
+ tw32(NVRAM_WRDATA, be32_to_cpu(data));
page_off = offset % tp->nvram_pagesize;
vpd_cap = pci_find_capability(tp->pdev, PCI_CAP_ID_VPD);
for (i = 0; i < 256; i += 4) {
u32 tmp, j = 0;
+ __le32 v;
u16 tmp16;
pci_write_config_word(tp->pdev, vpd_cap + PCI_VPD_ADDR,
pci_read_config_dword(tp->pdev, vpd_cap + PCI_VPD_DATA,
&tmp);
- tmp = cpu_to_le32(tmp);
- memcpy(&vpd_data[i], &tmp, 4);
+ v = cpu_to_le32(tmp);
+ memcpy(&vpd_data[i], &v, 4);
}
}
offset = offset + ver_offset - start;
for (i = 0; i < 16; i += 4) {
- if (tg3_nvram_read(tp, offset + i, &val))
+ __le32 v;
+ if (tg3_nvram_read_le(tp, offset + i, &v))
return;
- val = le32_to_cpu(val);
- memcpy(tp->fw_ver + i, &val, 4);
+ memcpy(tp->fw_ver + i, &v, 4);
}
if (!(tp->tg3_flags & TG3_FLAG_ENABLE_ASF) ||
tp->fw_ver[bcnt++] = ' ';
for (i = 0; i < 4; i++) {
- if (tg3_nvram_read(tp, offset, &val))
+ __le32 v;
+ if (tg3_nvram_read_le(tp, offset, &v))
return;
- val = le32_to_cpu(val);
- offset += sizeof(val);
+ offset += sizeof(v);
- if (bcnt > TG3_VER_SIZE - sizeof(val)) {
- memcpy(&tp->fw_ver[bcnt], &val, TG3_VER_SIZE - bcnt);
+ if (bcnt > TG3_VER_SIZE - sizeof(v)) {
+ memcpy(&tp->fw_ver[bcnt], &v, TG3_VER_SIZE - bcnt);
break;
}
- memcpy(&tp->fw_ver[bcnt], &val, sizeof(val));
- bcnt += sizeof(val);
+ memcpy(&tp->fw_ver[bcnt], &v, sizeof(v));
+ bcnt += sizeof(v);
}
tp->fw_ver[TG3_VER_SIZE - 1] = 0;
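A recurring pattern in the tg3 hunks: values destined only for memcpy() into byte buffers are read through the new tg3_nvram_read_le() and kept in __le32, so sparse (make C=1) can flag any arithmetic done on them in the wrong byte order. A small illustration of what the annotation buys, assuming a sparse-checked build:

	__le32 wire = cpu_to_le32(0x12345678);	/* NVRAM/on-wire byte order */
	u32 host;

	host = le32_to_cpu(wire);	/* fine: explicit conversion */
	host = wire + 1;		/* sparse warns: restricted type */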
struct xl_private *xl_priv=netdev_priv(dev);
u8 __iomem *xl_mmio = xl_priv->xl_mmio ;
u8 i ;
- u16 hwaddr[3] ; /* Should be u8[6] but we get word return values */
+ __le16 hwaddr[3] ; /* Should be u8[6] but we get word return values */
int open_err ;
u16 switchsettings, switchsettings_eeprom ;
}
/*
- * Read the information from the EEPROM that we need. I know we
- * should use ntohs, but the word gets stored reversed in the 16
- * bit field anyway and it all works its self out when we memcpy
- * it into dev->dev_addr.
+ * Read the information from the EEPROM that we need.
*/
- hwaddr[0] = xl_ee_read(dev,0x10) ;
- hwaddr[1] = xl_ee_read(dev,0x11) ;
- hwaddr[2] = xl_ee_read(dev,0x12) ;
+ hwaddr[0] = cpu_to_le16(xl_ee_read(dev,0x10));
+ hwaddr[1] = cpu_to_le16(xl_ee_read(dev,0x11));
+ hwaddr[2] = cpu_to_le16(xl_ee_read(dev,0x12));
/* Ring speed */
break ;
skb->dev = dev ;
- xl_priv->xl_rx_ring[i].upfragaddr = pci_map_single(xl_priv->pdev, skb->data,xl_priv->pkt_buf_sz, PCI_DMA_FROMDEVICE) ;
- xl_priv->xl_rx_ring[i].upfraglen = xl_priv->pkt_buf_sz | RXUPLASTFRAG;
+ xl_priv->xl_rx_ring[i].upfragaddr = cpu_to_le32(pci_map_single(xl_priv->pdev, skb->data,xl_priv->pkt_buf_sz, PCI_DMA_FROMDEVICE));
+ xl_priv->xl_rx_ring[i].upfraglen = cpu_to_le32(xl_priv->pkt_buf_sz) | RXUPLASTFRAG;
xl_priv->rx_ring_skb[i] = skb ;
}
xl_priv->rx_ring_tail = 0 ;
xl_priv->rx_ring_dma_addr = pci_map_single(xl_priv->pdev,xl_priv->xl_rx_ring, sizeof(struct xl_rx_desc) * XL_RX_RING_SIZE, PCI_DMA_TODEVICE) ;
for (i=0;i<(xl_priv->rx_ring_no-1);i++) {
- xl_priv->xl_rx_ring[i].upnextptr = xl_priv->rx_ring_dma_addr + (sizeof (struct xl_rx_desc) * (i+1)) ;
+ xl_priv->xl_rx_ring[i].upnextptr = cpu_to_le32(xl_priv->rx_ring_dma_addr + (sizeof (struct xl_rx_desc) * (i+1)));
}
xl_priv->xl_rx_ring[i].upnextptr = 0 ;
* Setup the first dummy DPD entry for polling to start working.
*/
- xl_priv->xl_tx_ring[0].framestartheader = TXDPDEMPTY ;
+ xl_priv->xl_tx_ring[0].framestartheader = TXDPDEMPTY;
xl_priv->xl_tx_ring[0].buffer = 0 ;
xl_priv->xl_tx_ring[0].buffer_length = 0 ;
xl_priv->xl_tx_ring[0].dnnextptr = 0 ;
return open_err ;
} else {
writel( (MEM_WORD_READ | 0xD0000 | xl_priv->srb) + 8, xl_mmio + MMIO_MAC_ACCESS_CMD) ;
- xl_priv->asb = ntohs(readw(xl_mmio + MMIO_MACDATA)) ;
+ xl_priv->asb = swab16(readw(xl_mmio + MMIO_MACDATA)) ;
printk(KERN_INFO "%s: Adapter Opened Details: ",dev->name) ;
printk("ASB: %04x",xl_priv->asb ) ;
writel( (MEM_WORD_READ | 0xD0000 | xl_priv->srb) + 10, xl_mmio + MMIO_MAC_ACCESS_CMD) ;
- printk(", SRB: %04x",ntohs(readw(xl_mmio + MMIO_MACDATA)) ) ;
+ printk(", SRB: %04x",swab16(readw(xl_mmio + MMIO_MACDATA)) ) ;
writel( (MEM_WORD_READ | 0xD0000 | xl_priv->srb) + 12, xl_mmio + MMIO_MAC_ACCESS_CMD) ;
- xl_priv->arb = ntohs(readw(xl_mmio + MMIO_MACDATA)) ;
+ xl_priv->arb = swab16(readw(xl_mmio + MMIO_MACDATA)) ;
printk(", ARB: %04x \n",xl_priv->arb ) ;
writel( (MEM_WORD_READ | 0xD0000 | xl_priv->srb) + 14, xl_mmio + MMIO_MAC_ACCESS_CMD) ;
- vsoff = ntohs(readw(xl_mmio + MMIO_MACDATA)) ;
+ vsoff = swab16(readw(xl_mmio + MMIO_MACDATA)) ;
/*
* Interesting, sending the individual characters directly to printk was causing klogd to use
static void adv_rx_ring(struct net_device *dev) /* Advance rx_ring, cut down on bloat in xl_rx */
{
struct xl_private *xl_priv=netdev_priv(dev);
- int prev_ring_loc ;
-
- prev_ring_loc = (xl_priv->rx_ring_tail + XL_RX_RING_SIZE - 1) & (XL_RX_RING_SIZE - 1);
- xl_priv->xl_rx_ring[prev_ring_loc].upnextptr = xl_priv->rx_ring_dma_addr + (sizeof (struct xl_rx_desc) * xl_priv->rx_ring_tail) ;
- xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].framestatus = 0 ;
- xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upnextptr = 0 ;
- xl_priv->rx_ring_tail++ ;
- xl_priv->rx_ring_tail &= (XL_RX_RING_SIZE-1) ;
-
- return ;
+ int n = xl_priv->rx_ring_tail;
+ int prev_ring_loc;
+
+ prev_ring_loc = (n + XL_RX_RING_SIZE - 1) & (XL_RX_RING_SIZE - 1);
+ xl_priv->xl_rx_ring[prev_ring_loc].upnextptr = cpu_to_le32(xl_priv->rx_ring_dma_addr + (sizeof (struct xl_rx_desc) * n));
+ xl_priv->xl_rx_ring[n].framestatus = 0;
+ xl_priv->xl_rx_ring[n].upnextptr = 0;
+ xl_priv->rx_ring_tail++;
+ xl_priv->rx_ring_tail &= (XL_RX_RING_SIZE-1);
}
static void xl_rx(struct net_device *dev)
temp_ring_loc &= (XL_RX_RING_SIZE-1) ;
}
- frame_length = xl_priv->xl_rx_ring[temp_ring_loc].framestatus & 0x7FFF ;
+ frame_length = le32_to_cpu(xl_priv->xl_rx_ring[temp_ring_loc].framestatus) & 0x7FFF;
skb = dev_alloc_skb(frame_length) ;
}
while (xl_priv->rx_ring_tail != temp_ring_loc) {
- copy_len = xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfraglen & 0x7FFF ;
+ copy_len = le32_to_cpu(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfraglen) & 0x7FFF;
frame_length -= copy_len ;
- pci_dma_sync_single_for_cpu(xl_priv->pdev,xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr,xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ;
+ pci_dma_sync_single_for_cpu(xl_priv->pdev,le32_to_cpu(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr),xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE);
skb_copy_from_linear_data(xl_priv->rx_ring_skb[xl_priv->rx_ring_tail],
skb_put(skb, copy_len),
copy_len);
- pci_dma_sync_single_for_device(xl_priv->pdev,xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr,xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ;
+ pci_dma_sync_single_for_device(xl_priv->pdev,le32_to_cpu(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr),xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE);
adv_rx_ring(dev) ;
}
/* Now we have found the last fragment */
- pci_dma_sync_single_for_cpu(xl_priv->pdev,xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr,xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ;
+ pci_dma_sync_single_for_cpu(xl_priv->pdev,le32_to_cpu(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr),xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE);
skb_copy_from_linear_data(xl_priv->rx_ring_skb[xl_priv->rx_ring_tail],
skb_put(skb,copy_len), frame_length);
/* memcpy(skb_put(skb,frame_length), bus_to_virt(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr), frame_length) ; */
- pci_dma_sync_single_for_device(xl_priv->pdev,xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr,xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ;
+ pci_dma_sync_single_for_device(xl_priv->pdev,le32_to_cpu(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr),xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE);
adv_rx_ring(dev) ;
skb->protocol = tr_type_trans(skb,dev) ;
netif_rx(skb) ;
} else { /* Single Descriptor Used, simply swap buffers over, fast path */
- frame_length = xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].framestatus & 0x7FFF ;
+ frame_length = le32_to_cpu(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].framestatus) & 0x7FFF;
skb = dev_alloc_skb(xl_priv->pkt_buf_sz) ;
}
skb2 = xl_priv->rx_ring_skb[xl_priv->rx_ring_tail] ;
- pci_unmap_single(xl_priv->pdev, xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr, xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ;
+ pci_unmap_single(xl_priv->pdev, le32_to_cpu(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr), xl_priv->pkt_buf_sz,PCI_DMA_FROMDEVICE) ;
skb_put(skb2, frame_length) ;
skb2->protocol = tr_type_trans(skb2,dev) ;
xl_priv->rx_ring_skb[xl_priv->rx_ring_tail] = skb ;
- xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr = pci_map_single(xl_priv->pdev,skb->data,xl_priv->pkt_buf_sz, PCI_DMA_FROMDEVICE) ;
- xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfraglen = xl_priv->pkt_buf_sz | RXUPLASTFRAG ;
+ xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr = cpu_to_le32(pci_map_single(xl_priv->pdev,skb->data,xl_priv->pkt_buf_sz, PCI_DMA_FROMDEVICE));
+ xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfraglen = cpu_to_le32(xl_priv->pkt_buf_sz) | RXUPLASTFRAG;
adv_rx_ring(dev) ;
xl_priv->xl_stats.rx_packets++ ;
xl_priv->xl_stats.rx_bytes += frame_length ;
for (i=0;i<XL_RX_RING_SIZE;i++) {
dev_kfree_skb_irq(xl_priv->rx_ring_skb[xl_priv->rx_ring_tail]) ;
- pci_unmap_single(xl_priv->pdev,xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr,xl_priv->pkt_buf_sz, PCI_DMA_FROMDEVICE) ;
+ pci_unmap_single(xl_priv->pdev,le32_to_cpu(xl_priv->xl_rx_ring[xl_priv->rx_ring_tail].upfragaddr),xl_priv->pkt_buf_sz, PCI_DMA_FROMDEVICE);
xl_priv->rx_ring_tail++ ;
xl_priv->rx_ring_tail &= XL_RX_RING_SIZE-1;
}
txd = &(xl_priv->xl_tx_ring[tx_head]) ;
txd->dnnextptr = 0 ;
- txd->framestartheader = skb->len | TXDNINDICATE ;
- txd->buffer = pci_map_single(xl_priv->pdev, skb->data, skb->len, PCI_DMA_TODEVICE) ;
- txd->buffer_length = skb->len | TXDNFRAGLAST ;
+ txd->framestartheader = cpu_to_le32(skb->len) | TXDNINDICATE;
+ txd->buffer = cpu_to_le32(pci_map_single(xl_priv->pdev, skb->data, skb->len, PCI_DMA_TODEVICE));
+ txd->buffer_length = cpu_to_le32(skb->len) | TXDNFRAGLAST;
xl_priv->tx_ring_skb[tx_head] = skb ;
xl_priv->xl_stats.tx_packets++ ;
xl_priv->xl_stats.tx_bytes += skb->len ;
xl_priv->tx_ring_head &= (XL_TX_RING_SIZE - 1) ;
xl_priv->free_ring_entries-- ;
- xl_priv->xl_tx_ring[tx_prev].dnnextptr = xl_priv->tx_ring_dma_addr + (sizeof (struct xl_tx_desc) * tx_head) ;
+ xl_priv->xl_tx_ring[tx_prev].dnnextptr = cpu_to_le32(xl_priv->tx_ring_dma_addr + (sizeof (struct xl_tx_desc) * tx_head));
/* Sneaky, by doing a read on DnListPtr we can force the card to poll on the DnNextPtr */
/* readl(xl_mmio + MMIO_DNLISTPTR) ; */
while (xl_priv->xl_tx_ring[xl_priv->tx_ring_tail].framestartheader & TXDNCOMPLETE ) {
txd = &(xl_priv->xl_tx_ring[xl_priv->tx_ring_tail]) ;
- pci_unmap_single(xl_priv->pdev,txd->buffer, xl_priv->tx_ring_skb[xl_priv->tx_ring_tail]->len, PCI_DMA_TODEVICE) ;
+ pci_unmap_single(xl_priv->pdev, le32_to_cpu(txd->buffer), xl_priv->tx_ring_skb[xl_priv->tx_ring_tail]->len, PCI_DMA_TODEVICE);
txd->framestartheader = 0 ;
- txd->buffer = 0xdeadbeef ;
+ txd->buffer = cpu_to_le32(0xdeadbeef);
txd->buffer_length = 0 ;
dev_kfree_skb_irq(xl_priv->tx_ring_skb[xl_priv->tx_ring_tail]) ;
xl_priv->tx_ring_tail++ ;
if (arb_cmd == RING_STATUS_CHANGE) { /* Ring.Status.Change */
writel( ( (MEM_WORD_READ | 0xD0000 | xl_priv->arb) + 6), xl_mmio + MMIO_MAC_ACCESS_CMD) ;
- printk(KERN_INFO "%s: Ring Status Change: New Status = %04x\n", dev->name, ntohs(readw(xl_mmio + MMIO_MACDATA) )) ;
+ printk(KERN_INFO "%s: Ring Status Change: New Status = %04x\n", dev->name, swab16(readw(xl_mmio + MMIO_MACDATA) )) ;
- lan_status = ntohs(readw(xl_mmio + MMIO_MACDATA));
+ lan_status = swab16(readw(xl_mmio + MMIO_MACDATA));
/* Acknowledge interrupt, this tells nic we are done with the arb */
writel(ACK_INTERRUPT | ARBCACK | LATCH_ACK, xl_mmio + MMIO_COMMAND) ;
printk(KERN_INFO "Received.Data \n") ;
#endif
writel( ((MEM_WORD_READ | 0xD0000 | xl_priv->arb) + 6), xl_mmio + MMIO_MAC_ACCESS_CMD) ;
- xl_priv->mac_buffer = ntohs(readw(xl_mmio + MMIO_MACDATA)) ;
+ xl_priv->mac_buffer = swab16(readw(xl_mmio + MMIO_MACDATA)) ;
/* Now we are going to be really basic here and not do anything
* with the data at all. The tech docs do not give me enough
writeb(0x81, xl_mmio + MMIO_MACDATA) ;
writel(MEM_WORD_WRITE | 0xd0000 | xl_priv->asb | 6, xl_mmio + MMIO_MAC_ACCESS_CMD) ;
- writew(ntohs(xl_priv->mac_buffer), xl_mmio + MMIO_MACDATA) ;
+ writew(swab16(xl_priv->mac_buffer), xl_mmio + MMIO_MACDATA) ;
xl_wait_misr_flags(dev) ;
#define HOSTERRINT (1<<1)
/* Receive descriptor bits */
-#define RXOVERRUN (1<<19)
-#define RXFC (1<<21)
-#define RXAR (1<<22)
-#define RXUPDCOMPLETE (1<<23)
-#define RXUPDFULL (1<<24)
-#define RXUPLASTFRAG (1<<31)
+#define RXOVERRUN cpu_to_le32(1<<19)
+#define RXFC cpu_to_le32(1<<21)
+#define RXAR cpu_to_le32(1<<22)
+#define RXUPDCOMPLETE cpu_to_le32(1<<23)
+#define RXUPDFULL cpu_to_le32(1<<24)
+#define RXUPLASTFRAG cpu_to_le32(1<<31)
/* Transmit descriptor bits */
-#define TXDNCOMPLETE (1<<16)
-#define TXTXINDICATE (1<<27)
-#define TXDPDEMPTY (1<<29)
-#define TXDNINDICATE (1<<31)
-#define TXDNFRAGLAST (1<<31)
+#define TXDNCOMPLETE cpu_to_le32(1<<16)
+#define TXTXINDICATE cpu_to_le32(1<<27)
+#define TXDPDEMPTY cpu_to_le32(1<<29)
+#define TXDNINDICATE cpu_to_le32(1<<31)
+#define TXDNFRAGLAST cpu_to_le32(1<<31)
/* Interrupts to Acknowledge */
#define LATCH_ACK 1
/* 3c359 data structures */
struct xl_tx_desc {
- u32 dnnextptr ;
- u32 framestartheader ;
- u32 buffer ;
- u32 buffer_length ;
+ __le32 dnnextptr;
+ __le32 framestartheader;
+ __le32 buffer;
+ __le32 buffer_length;
};
struct xl_rx_desc {
- u32 upnextptr ;
- u32 framestatus ;
- u32 upfragaddr ;
- u32 upfraglen ;
+ __le32 upnextptr;
+ __le32 framestatus;
+ __le32 upfragaddr;
+ __le32 upfraglen;
};
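Folding cpu_to_le32() into the 3c359 flag constants is what keeps the fast paths conversion-free: the swap is a compile-time no-op on little-endian hosts and happens once, in the constant, on big-endian ones, so expressions such as upfraglen | RXUPLASTFRAG and framestartheader & TXDNCOMPLETE operate entirely in descriptor byte order. The idiom in miniature (the handler name is illustrative only):

	#define OWN_BIT cpu_to_le32(1 << 31)	/* pre-swapped constant */

	desc->status |= OWN_BIT;		/* no per-access swap */
	if (desc->status & OWN_BIT)		/* compare in LE domain */
		handle_busy_descriptor();	/* hypothetical handler */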
struct xl_private {
static int de4x5_sw_reset(struct net_device *dev);
static int de4x5_rx(struct net_device *dev);
static int de4x5_tx(struct net_device *dev);
-static int de4x5_ast(struct net_device *dev);
+static void de4x5_ast(struct net_device *dev);
static int de4x5_txur(struct net_device *dev);
static int de4x5_rx_ovfc(struct net_device *dev);
static int an_exception(struct de4x5_private *lp);
static char *build_setup_frame(struct net_device *dev, int mode);
static void disable_ast(struct net_device *dev);
-static void enable_ast(struct net_device *dev, u32 time_out);
static long de4x5_switch_mac_port(struct net_device *dev);
static int gep_rd(struct net_device *dev);
static void gep_wr(s32 data, struct net_device *dev);
-static void timeout(struct net_device *dev, void (*fn)(u_long data), u_long data, u_long msec);
static void yawn(struct net_device *dev, int state);
static void de4x5_parse_params(struct net_device *dev);
static void de4x5_dbg_open(struct net_device *dev);
lp->gendev = gendev;
spin_lock_init(&lp->lock);
init_timer(&lp->timer);
+ lp->timer.function = (void (*)(unsigned long))de4x5_ast;
+ lp->timer.data = (unsigned long)dev;
de4x5_parse_params(dev);
/*
lp->state = OPEN;
de4x5_dbg_open(dev);
- if (request_irq(dev->irq, (void *)de4x5_interrupt, IRQF_SHARED,
+ if (request_irq(dev->irq, de4x5_interrupt, IRQF_SHARED,
lp->adapter_name, dev)) {
printk("de4x5_open(): Requested IRQ%d is busy - attempting FAST/SHARE...", dev->irq);
if (request_irq(dev->irq, de4x5_interrupt, IRQF_DISABLED | IRQF_SHARED,
return 0;
}
-static int
+static void
de4x5_ast(struct net_device *dev)
{
- struct de4x5_private *lp = netdev_priv(dev);
- int next_tick = DE4X5_AUTOSENSE_MS;
+ struct de4x5_private *lp = netdev_priv(dev);
+ int next_tick = DE4X5_AUTOSENSE_MS;
+ int dt;
- disable_ast(dev);
+ if (lp->useSROM)
+ next_tick = srom_autoconf(dev);
+ else if (lp->chipset == DC21140)
+ next_tick = dc21140m_autoconf(dev);
+ else if (lp->chipset == DC21041)
+ next_tick = dc21041_autoconf(dev);
+ else if (lp->chipset == DC21040)
+ next_tick = dc21040_autoconf(dev);
+ lp->linkOK = 0;
- if (lp->useSROM) {
- next_tick = srom_autoconf(dev);
- } else if (lp->chipset == DC21140) {
- next_tick = dc21140m_autoconf(dev);
- } else if (lp->chipset == DC21041) {
- next_tick = dc21041_autoconf(dev);
- } else if (lp->chipset == DC21040) {
- next_tick = dc21040_autoconf(dev);
- }
- lp->linkOK = 0;
- enable_ast(dev, next_tick);
+ dt = (next_tick * HZ) / 1000;
- return 0;
+ if (!dt)
+ dt = 1;
+
+ mod_timer(&lp->timer, jiffies + dt);
}
static int
for (j=0, i=0; i<ETH_ALEN; i++) {
j += (u_char) *((u_char *)&lp->srom + SROM_HWADD + i);
}
- if ((j != 0) && (j != 0x5fa)) {
+ if (j != 0 && j != 6 * 0xff) {
last.chipset = device;
last.bus = pb;
last.irq = irq;
static int
autoconf_media(struct net_device *dev)
{
- struct de4x5_private *lp = netdev_priv(dev);
- u_long iobase = dev->base_addr;
- int next_tick = DE4X5_AUTOSENSE_MS;
+ struct de4x5_private *lp = netdev_priv(dev);
+ u_long iobase = dev->base_addr;
- lp->linkOK = 0;
- lp->c_media = AUTO; /* Bogus last media */
- disable_ast(dev);
- inl(DE4X5_MFC); /* Zero the lost frames counter */
- lp->media = INIT;
- lp->tcount = 0;
+ disable_ast(dev);
- if (lp->useSROM) {
- next_tick = srom_autoconf(dev);
- } else if (lp->chipset == DC21040) {
- next_tick = dc21040_autoconf(dev);
- } else if (lp->chipset == DC21041) {
- next_tick = dc21041_autoconf(dev);
- } else if (lp->chipset == DC21140) {
- next_tick = dc21140m_autoconf(dev);
- }
+ lp->c_media = AUTO; /* Bogus last media */
+ inl(DE4X5_MFC); /* Zero the lost frames counter */
+ lp->media = INIT;
+ lp->tcount = 0;
- enable_ast(dev, next_tick);
+ de4x5_ast(dev);
- return (lp->media);
+ return lp->media;
}
/*
outl(0, aprom_addr); /* Reset Ethernet Address ROM Pointer */
}
} else { /* Read new srom */
- u_short tmp, *p = (short *)((char *)&lp->srom + SROM_HWADD);
+ u_short tmp;
+ __le16 *p = (__le16 *)((char *)&lp->srom + SROM_HWADD);
for (i=0; i<(ETH_ALEN>>1); i++) {
tmp = srom_rd(aprom_addr, (SROM_HWADD>>1) + i);
- *p = le16_to_cpu(tmp);
- j += *p++;
+ j += tmp; /* for check for 0:0:0:0:0:0 or ff:ff:ff:ff:ff:ff */
+ *p = cpu_to_le16(tmp);
}
- if ((j == 0) || (j == 0x2fffd)) {
- return;
+ if (j == 0 || j == 3 * 0xffff) {
+ /* could get 0 only from all-0 and 3 * 0xffff only from all-1 */
+ return;
}
- p=(short *)&lp->srom;
+ p = (__le16 *)&lp->srom;
for (i=0; i<(sizeof(struct de4x5_srom)>>1); i++) {
tmp = srom_rd(aprom_addr, i);
- *p++ = le16_to_cpu(tmp);
+ *p++ = cpu_to_le16(tmp);
}
de4x5_dbg_srom((struct de4x5_srom *)&lp->srom);
}
}
static void
-enable_ast(struct net_device *dev, u32 time_out)
-{
- timeout(dev, (void *)&de4x5_ast, (u_long)dev, time_out);
-
- return;
-}
-
-static void
disable_ast(struct net_device *dev)
{
- struct de4x5_private *lp = netdev_priv(dev);
-
- del_timer(&lp->timer);
-
- return;
+ struct de4x5_private *lp = netdev_priv(dev);
+ del_timer_sync(&lp->timer);
}
static long
}
static void
-timeout(struct net_device *dev, void (*fn)(u_long data), u_long data, u_long msec)
-{
- struct de4x5_private *lp = netdev_priv(dev);
- int dt;
-
- /* First, cancel any pending timer events */
- del_timer(&lp->timer);
-
- /* Convert msec to ticks */
- dt = (msec * HZ) / 1000;
- if (dt==0) dt=1;
-
- /* Set up timer */
- init_timer(&lp->timer);
- lp->timer.expires = jiffies + dt;
- lp->timer.function = fn;
- lp->timer.data = data;
- add_timer(&lp->timer);
-
- return;
-}
-
-static void
yawn(struct net_device *dev, int state)
{
struct de4x5_private *lp = netdev_priv(dev);
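The de4x5 rework retires the driver-private timeout() helper: the timer is configured once at probe time, de4x5_ast() re-arms itself with mod_timer(), and disable_ast() collapses to del_timer_sync(). A condensed sketch of the self-rearming pattern against the 2.6-era timer API this patch targets (my_ast is an illustrative name):

	static void my_ast(struct net_device *dev)
	{
		struct de4x5_private *lp = netdev_priv(dev);
		int dt = (DE4X5_AUTOSENSE_MS * HZ) / 1000;

		if (!dt)
			dt = 1;		/* re-arm by at least one jiffy */
		mod_timer(&lp->timer, jiffies + dt);
	}

	/* once, at probe time: */
	init_timer(&lp->timer);
	lp->timer.function = (void (*)(unsigned long))my_ast;
	lp->timer.data = (unsigned long)dev;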
int received = 0;
#endif
- if (!netif_running(dev))
- goto done;
-
#ifdef CONFIG_TULIP_NAPI_HW_MITIGATION
/* that one buffer is needed for mit activation; or might be a
if (tulip_debug > 5)
printk(KERN_DEBUG "%s: In tulip_rx(), entry %d %8.8x.\n",
dev->name, entry, status);
- if (work_done++ >= budget)
+
+ if (++work_done >= budget)
goto not_done;
if ((status & 0x38008300) != 0x0300) {
* finally: amount of IO did not increase at all. */
} while ((ioread32(tp->base_addr + CSR5) & RxIntr));
-done:
-
#ifdef CONFIG_TULIP_NAPI_HW_MITIGATION
/* We use this simplistic scheme for IM. It's proven by
tp->rx_ring[i].status = 0; /* Not owned by Tulip chip. */
tp->rx_ring[i].length = 0;
- tp->rx_ring[i].buffer1 = 0xBADF00D0; /* An invalid address. */
+ /* An invalid address. */
+ tp->rx_ring[i].buffer1 = cpu_to_le32(0xBADF00D0);
if (skb) {
pci_unmap_single(tp->pdev, mapping, PKT_BUF_SZ,
PCI_DMA_FROMDEVICE);
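The tulip budget test above cures an off-by-one: with the post-increment form the loop could exit having counted budget + 1 packets, and the NAPI core warns when a poll routine reports more work than its budget. Pre-incrementing bails exactly when work_done reaches budget:

	if (++work_done >= budget)	/* never report more than budget */
		goto not_done;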
struct xircom_private {
/* Send and receive buffers, kernel-addressable and dma addressable forms */
- unsigned int *rx_buffer;
- unsigned int *tx_buffer;
+ __le32 *rx_buffer;
+ __le32 *tx_buffer;
dma_addr_t rx_dma_handle;
dma_addr_t tx_dma_handle;
/* FIXME: The specification tells us that the length we send HAS to be a multiple of
4 bytes. */
- card->tx_buffer[4*desc+1] = skb->len;
- if (desc == NUMDESCRIPTORS-1)
- card->tx_buffer[4*desc+1] |= (1<<25); /* bit 25: last descriptor of the ring */
+ card->tx_buffer[4*desc+1] = cpu_to_le32(skb->len);
+ if (desc == NUMDESCRIPTORS - 1) /* bit 25: last descriptor of the ring */
+ card->tx_buffer[4*desc+1] |= cpu_to_le32(1<<25);
- card->tx_buffer[4*desc+1] |= 0xF0000000;
+ card->tx_buffer[4*desc+1] |= cpu_to_le32(0xF0000000);
/* 0xF0... means we want interrupts */
card->tx_skb[desc] = skb;
wmb();
/* This gives the descriptor to the card */
- card->tx_buffer[4*desc] = 0x80000000;
+ card->tx_buffer[4*desc] = cpu_to_le32(0x80000000);
trigger_transmit(card);
- if (((int)card->tx_buffer[nextdescriptor*4])<0) { /* next descriptor is occupied... */
+ if (card->tx_buffer[nextdescriptor*4] & cpu_to_le32(0x80000000)) {
+ /* next descriptor is occupied... */
netif_stop_queue(dev);
}
card->transmit_used = nextdescriptor;
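The occupancy test rewritten above replaces a sign-bit check, ((int)word) < 0, which is only meaningful while the descriptor word is in host byte order; with the ring annotated __le32, the ownership bit (bit 31) must be tested against a pre-swapped mask. As a sketch:

	static inline int desc_owned_by_card(__le32 status)
	{
		return (status & cpu_to_le32(0x80000000)) != 0;
	}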
*/
static void setup_descriptors(struct xircom_private *card)
{
- unsigned int val;
- unsigned int address;
+ u32 address;
int i;
enter("setup_descriptors");
for (i=0;i<NUMDESCRIPTORS;i++ ) {
/* Rx Descr0: It's empty, let the card own it, no errors -> 0x80000000 */
- card->rx_buffer[i*4 + 0] = 0x80000000;
+ card->rx_buffer[i*4 + 0] = cpu_to_le32(0x80000000);
/* Rx Descr1: buffer 1 is 1536 bytes, buffer 2 is 0 bytes */
- card->rx_buffer[i*4 + 1] = 1536;
- if (i==NUMDESCRIPTORS-1)
- card->rx_buffer[i*4 + 1] |= (1 << 25); /* bit 25 is "last descriptor" */
+ card->rx_buffer[i*4 + 1] = cpu_to_le32(1536);
+ if (i == NUMDESCRIPTORS - 1) /* bit 25 is "last descriptor" */
+ card->rx_buffer[i*4 + 1] |= cpu_to_le32(1 << 25);
/* Rx Descr2: address of the buffer
we store the buffer at the 2nd half of the page */
- address = (unsigned long) card->rx_dma_handle;
+ address = card->rx_dma_handle;
card->rx_buffer[i*4 + 2] = cpu_to_le32(address + bufferoffsets[i]);
/* Rx Desc3: address of 2nd buffer -> 0 */
card->rx_buffer[i*4 + 3] = 0;
wmb();
/* Write the receive descriptor ring address to the card */
- address = (unsigned long) card->rx_dma_handle;
- val = cpu_to_le32(address);
- outl(val, card->io_port + CSR3); /* Receive descr list address */
+ address = card->rx_dma_handle;
+ outl(address, card->io_port + CSR3); /* Receive descr list address */
/* transmit descriptors */
/* Tx Descr0: Empty, we own it, no errors -> 0x00000000 */
card->tx_buffer[i*4 + 0] = 0x00000000;
/* Tx Descr1: buffer 1 is 1536 bytes, buffer 2 is 0 bytes */
- card->tx_buffer[i*4 + 1] = 1536;
- if (i==NUMDESCRIPTORS-1)
- card->tx_buffer[i*4 + 1] |= (1 << 25); /* bit 25 is "last descriptor" */
+ card->tx_buffer[i*4 + 1] = cpu_to_le32(1536);
+ if (i == NUMDESCRIPTORS - 1) /* bit 25 is "last descriptor" */
+ card->tx_buffer[i*4 + 1] |= cpu_to_le32(1 << 25);
/* Tx Descr2: address of the buffer
we store the buffer at the 2nd half of the page */
- address = (unsigned long) card->tx_dma_handle;
+ address = card->tx_dma_handle;
card->tx_buffer[i*4 + 2] = cpu_to_le32(address + bufferoffsets[i]);
/* Tx Desc3: address of 2nd buffer -> 0 */
card->tx_buffer[i*4 + 3] = 0;
wmb();
/* write the transmit descriptor ring address to the card */
- address = (unsigned long) card->tx_dma_handle;
- val =cpu_to_le32(address);
- outl(val, card->io_port + CSR4); /* xmit descr list address */
+ address = card->tx_dma_handle;
+ outl(address, card->io_port + CSR4); /* xmit descr list address */
leave("setup_descriptors");
}
int status;
enter("investigate_read_descriptor");
- status = card->rx_buffer[4*descnr];
+ status = le32_to_cpu(card->rx_buffer[4*descnr]);
if ((status > 0)) { /* packet received */
out:
/* give the buffer back to the card */
- card->rx_buffer[4*descnr] = 0x80000000;
+ card->rx_buffer[4*descnr] = cpu_to_le32(0x80000000);
trigger_receive(card);
}
enter("investigate_write_descriptor");
- status = card->tx_buffer[4*descnr];
+ status = le32_to_cpu(card->tx_buffer[4*descnr]);
#if 0
if (status & 0x8000) { /* Major error */
printk(KERN_ERR "Major transmit error status %x \n", status);
tun->flags &= ~TUN_PERSIST;
DBG(KERN_INFO "%s: persist %s\n",
- tun->dev->name, arg ? "disabled" : "enabled");
+ tun->dev->name, arg ? "enabled" : "disabled");
break;
case TUNSETOWNER:
first_txd->flags = TYPHOON_TX_DESC | TYPHOON_DESC_VALID;
first_txd->numDesc = 0;
first_txd->len = 0;
- first_txd->addr = (u64)((unsigned long) skb) & 0xffffffff;
- first_txd->addrHi = (u64)((unsigned long) skb) >> 32;
+ first_txd->tx_addr = (u64)((unsigned long) skb);
first_txd->processFlags = 0;
if(skb->ip_summed == CHECKSUM_PARTIAL) {
PCI_DMA_TODEVICE);
txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
txd->len = cpu_to_le16(skb->len);
- txd->addr = cpu_to_le32(skb_dma);
- txd->addrHi = 0;
+ txd->frag.addr = cpu_to_le32(skb_dma);
+ txd->frag.addrHi = 0;
first_txd->numDesc++;
} else {
int i, len;
PCI_DMA_TODEVICE);
txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
txd->len = cpu_to_le16(len);
- txd->addr = cpu_to_le32(skb_dma);
- txd->addrHi = 0;
+ txd->frag.addr = cpu_to_le32(skb_dma);
+ txd->frag.addrHi = 0;
first_txd->numDesc++;
for(i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
PCI_DMA_TODEVICE);
txd->flags = TYPHOON_FRAG_DESC | TYPHOON_DESC_VALID;
txd->len = cpu_to_le16(len);
- txd->addr = cpu_to_le32(skb_dma);
- txd->addrHi = 0;
+ txd->frag.addr = cpu_to_le32(skb_dma);
+ txd->frag.addrHi = 0;
first_txd->numDesc++;
}
}
* ethtool_ops->get_{strings,stats}()
*/
stats->tx_packets = le32_to_cpu(s->txPackets);
- stats->tx_bytes = le32_to_cpu(s->txBytes);
+ stats->tx_bytes = le64_to_cpu(s->txBytes);
stats->tx_errors = le32_to_cpu(s->txCarrierLost);
stats->tx_carrier_errors = le32_to_cpu(s->txCarrierLost);
stats->collisions = le32_to_cpu(s->txMultipleCollisions);
stats->rx_packets = le32_to_cpu(s->rxPacketsGood);
- stats->rx_bytes = le32_to_cpu(s->rxBytesGood);
+ stats->rx_bytes = le64_to_cpu(s->rxBytesGood);
stats->rx_fifo_errors = le32_to_cpu(s->rxFifoOverruns);
stats->rx_errors = le32_to_cpu(s->rxFifoOverruns) +
le32_to_cpu(s->BadSSD) + le32_to_cpu(s->rxCrcErrors);
if(typhoon_issue_command(tp, 1, &xp_cmd, 3, xp_resp) < 0) {
strcpy(info->fw_version, "Unknown runtime");
} else {
- u32 sleep_ver = xp_resp[0].parm2;
+ u32 sleep_ver = le32_to_cpu(xp_resp[0].parm2);
snprintf(info->fw_version, 32, "%02x.%03x.%03x",
sleep_ver >> 24, (sleep_ver >> 12) & 0xfff,
sleep_ver & 0xfff);
}
INIT_COMMAND_NO_RESPONSE(&xp_cmd, TYPHOON_CMD_XCVR_SELECT);
- xp_cmd.parm1 = cpu_to_le16(xcvr);
+ xp_cmd.parm1 = xcvr;
err = typhoon_issue_command(tp, 1, &xp_cmd, 0, NULL);
if(err < 0)
goto out;
tp->txLoRing.writeRegister = TYPHOON_REG_TX_LO_READY;
tp->txHiRing.writeRegister = TYPHOON_REG_TX_HI_READY;
- tp->txlo_dma_addr = iface->txLoAddr;
+ tp->txlo_dma_addr = le32_to_cpu(iface->txLoAddr);
tp->card_state = Sleeping;
smp_wmb();
u8 *image_data;
void *dpage;
dma_addr_t dpage_dma;
- unsigned int csum;
+ __sum16 csum;
u32 irqEnabled;
u32 irqMasked;
u32 numSections;
* summing. Fortunately, due to the properties of
* the checksum, we can do this once, at the end.
*/
- csum = csum_partial_copy_nocheck(image_data, dpage,
- len, 0);
- csum = csum_fold(csum);
- csum = le16_to_cpu(csum);
+ csum = csum_fold(csum_partial_copy_nocheck(image_data,
+ dpage, len,
+ 0));
iowrite32(len, ioaddr + TYPHOON_REG_BOOT_LENGTH);
- iowrite32(csum, ioaddr + TYPHOON_REG_BOOT_CHECKSUM);
+ iowrite32(le16_to_cpu((__force __le16)csum),
+ ioaddr + TYPHOON_REG_BOOT_CHECKSUM);
iowrite32(load_addr,
ioaddr + TYPHOON_REG_BOOT_DEST_ADDR);
iowrite32(0, ioaddr + TYPHOON_REG_BOOT_DATA_HI);
if(type == TYPHOON_TX_DESC) {
/* This tx_desc describes a packet.
*/
- unsigned long ptr = tx->addr | ((u64)tx->addrHi << 32);
+ unsigned long ptr = tx->tx_addr;
struct sk_buff *skb = (struct sk_buff *) ptr;
dev_kfree_skb_irq(skb);
} else if(type == TYPHOON_FRAG_DESC) {
/* This tx_desc describes a memory mapping. Free it.
*/
- skb_dma = (dma_addr_t) le32_to_cpu(tx->addr);
+ skb_dma = (dma_addr_t) le32_to_cpu(tx->frag.addr);
dma_len = le16_to_cpu(tx->len);
pci_unmap_single(tp->pdev, skb_dma, dma_len,
PCI_DMA_TODEVICE);
struct rx_free *r;
if((ring->lastWrite + sizeof(*r)) % (RXFREE_ENTRIES * sizeof(*r)) ==
- indexes->rxBuffCleared) {
+ le32_to_cpu(indexes->rxBuffCleared)) {
/* no room in ring, just drop the skb
*/
dev_kfree_skb_any(rxb->skb);
rxb->skb = NULL;
if((ring->lastWrite + sizeof(*r)) % (RXFREE_ENTRIES * sizeof(*r)) ==
- indexes->rxBuffCleared)
+ le32_to_cpu(indexes->rxBuffCleared))
return -ENOMEM;
skb = dev_alloc_skb(PKT_BUF_SZ);
volatile __le32 txLoCleared;
volatile __le32 txHiCleared;
volatile __le32 rxLoReady;
- volatile __u32 rxBuffCleared; /* AV: really? */
+ volatile __le32 rxBuffCleared;
volatile __le32 cmdCleared;
volatile __le32 respReady;
volatile __le32 rxHiReady;
#define TYPHOON_DESC_VALID 0x80
u8 numDesc;
__le16 len;
- u32 addr;
- u32 addrHi;
+ union {
+ struct {
+ __le32 addr;
+ __le32 addrHi;
+ } frag;
+ u64 tx_addr; /* opaque for hardware, for TX_DESC */
+ };
__le32 processFlags;
#define TYPHOON_TX_PF_NO_CRC __constant_cpu_to_le32(0x00000001)
#define TYPHOON_TX_PF_IP_CHKSUM __constant_cpu_to_le32(0x00000002)
u8 flags;
u8 numDesc;
__le16 frameLen;
- u32 addr;
- u32 addrHi;
+ u32 addr; /* opaque, comes from virtAddr */
+ u32 addrHi; /* opaque, comes from virtAddrHi */
__le32 rxStatus;
#define TYPHOON_RX_ERR_INTERNAL __constant_cpu_to_le32(0x00000000)
#define TYPHOON_RX_ERR_FIFO_UNDERRUN __constant_cpu_to_le32(0x00000001)
};
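The union added to the typhoon TX descriptor formalizes what the old code did by hand: for TYPHOON_TX_DESC entries the hardware never reads the address field, so the driver parks the skb pointer there as an opaque 64-bit cookie, while TYPHOON_FRAG_DESC entries carry a genuine little-endian DMA address. Both uses, side by side:

	/* TX descriptor: stash the pointer ... */
	first_txd->tx_addr = (u64)(unsigned long)skb;
	/* ... and recover it in the completion path */
	skb = (struct sk_buff *)(unsigned long)tx->tx_addr;

	/* fragment descriptor: a real bus address, kept little-endian */
	txd->frag.addr = cpu_to_le32(skb_dma);
	txd->frag.addrHi = 0;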
struct ax88172_int_data {
- u16 res1;
+ __le16 res1;
u8 link;
- u16 res2;
+ __le16 res2;
u8 status;
- u16 res3;
+ __le16 res3;
} __attribute__ ((packed));
static int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
u16 size, void *data)
{
+ void *buf;
+ int err = -ENOMEM;
+
devdbg(dev,"asix_read_cmd() cmd=0x%02x value=0x%04x index=0x%04x size=%d",
cmd, value, index, size);
- return usb_control_msg(
+
+ buf = kmalloc(size, GFP_KERNEL);
+ if (!buf)
+ goto out;
+
+ err = usb_control_msg(
dev->udev,
usb_rcvctrlpipe(dev->udev, 0),
cmd,
USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
value,
index,
- data,
+ buf,
size,
USB_CTRL_GET_TIMEOUT);
+ if (err == size)
+ memcpy(data, buf, size);
+ else if (err >= 0)
+ err = -EINVAL;
+ kfree(buf);
+
+out:
+ return err;
}
static int asix_write_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
u16 size, void *data)
{
+ void *buf = NULL;
+ int err = -ENOMEM;
+
devdbg(dev,"asix_write_cmd() cmd=0x%02x value=0x%04x index=0x%04x size=%d",
cmd, value, index, size);
- return usb_control_msg(
+
+ if (data) {
+ buf = kmalloc(size, GFP_KERNEL);
+ if (!buf)
+ goto out;
+ memcpy(buf, data, size);
+ }
+
+ err = usb_control_msg(
dev->udev,
usb_sndctrlpipe(dev->udev, 0),
cmd,
USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
value,
index,
- data,
+ buf,
size,
USB_CTRL_SET_TIMEOUT);
+ kfree(buf);
+
+out:
+ return err;
}
static void asix_async_cmd_callback(struct urb *urb)
static inline int asix_get_phy_addr(struct usbnet *dev)
{
- int ret = 0;
- void *buf;
+ u8 buf[2];
+ int ret = asix_read_cmd(dev, AX_CMD_READ_PHY_ID, 0, 0, 2, buf);
devdbg(dev, "asix_get_phy_addr()");
- buf = kmalloc(2, GFP_KERNEL);
- if (!buf)
- goto out1;
-
- if ((ret = asix_read_cmd(dev, AX_CMD_READ_PHY_ID,
- 0, 0, 2, buf)) < 2) {
+ if (ret < 0) {
deverr(dev, "Error reading PHYID register: %02x", ret);
- goto out2;
+ goto out;
}
- devdbg(dev, "asix_get_phy_addr() returning 0x%04x", *((u16 *)buf));
- ret = *((u8 *)buf + 1);
-out2:
- kfree(buf);
-out1:
+ devdbg(dev, "asix_get_phy_addr() returning 0x%04x", *((__le16 *)buf));
+ ret = buf[1];
+
+out:
return ret;
}
static u16 asix_read_rx_ctl(struct usbnet *dev)
{
- u16 ret = 0;
- void *buf;
-
- buf = kmalloc(2, GFP_KERNEL);
- if (!buf)
- goto out1;
+ __le16 v;
+ int ret = asix_read_cmd(dev, AX_CMD_READ_RX_CTL, 0, 0, 2, &v);
- if ((ret = asix_read_cmd(dev, AX_CMD_READ_RX_CTL,
- 0, 0, 2, buf)) < 2) {
+ if (ret < 0) {
deverr(dev, "Error reading RX_CTL register: %02x", ret);
- goto out2;
+ goto out;
}
- ret = le16_to_cpu(*((u16 *)buf));
-out2:
- kfree(buf);
-out1:
+ ret = le16_to_cpu(v);
+out:
return ret;
}
static u16 asix_read_medium_status(struct usbnet *dev)
{
- u16 ret = 0;
- void *buf;
+ __le16 v;
+ int ret = asix_read_cmd(dev, AX_CMD_READ_MEDIUM_STATUS, 0, 0, 2, &v);
- buf = kmalloc(2, GFP_KERNEL);
- if (!buf)
- goto out1;
-
- if ((ret = asix_read_cmd(dev, AX_CMD_READ_MEDIUM_STATUS,
- 0, 0, 2, buf)) < 2) {
+ if (ret < 0) {
deverr(dev, "Error reading Medium Status register: %02x", ret);
- goto out2;
+ goto out;
}
- ret = le16_to_cpu(*((u16 *)buf));
-out2:
- kfree(buf);
-out1:
+ ret = le16_to_cpu(v);
+out:
return ret;
}
static int asix_mdio_read(struct net_device *netdev, int phy_id, int loc)
{
struct usbnet *dev = netdev_priv(netdev);
- u16 res;
+ __le16 res;
mutex_lock(&dev->phy_mutex);
asix_set_sw_mii(dev);
asix_read_cmd(dev, AX_CMD_READ_MII_REG, phy_id,
- (__u16)loc, 2, (u16 *)&res);
+ (__u16)loc, 2, &res);
asix_set_hw_mii(dev);
mutex_unlock(&dev->phy_mutex);
- devdbg(dev, "asix_mdio_read() phy_id=0x%02x, loc=0x%02x, returns=0x%04x", phy_id, loc, le16_to_cpu(res & 0xffff));
+ devdbg(dev, "asix_mdio_read() phy_id=0x%02x, loc=0x%02x, returns=0x%04x", phy_id, loc, le16_to_cpu(res));
- return le16_to_cpu(res & 0xffff);
+ return le16_to_cpu(res);
}
static void
asix_mdio_write(struct net_device *netdev, int phy_id, int loc, int val)
{
struct usbnet *dev = netdev_priv(netdev);
- u16 res = cpu_to_le16(val);
+ __le16 res = cpu_to_le16(val);
devdbg(dev, "asix_mdio_write() phy_id=0x%02x, loc=0x%02x, val=0x%04x", phy_id, loc, val);
mutex_lock(&dev->phy_mutex);
asix_set_sw_mii(dev);
- asix_write_cmd(dev, AX_CMD_WRITE_MII_REG, phy_id,
- (__u16)loc, 2, (u16 *)&res);
+ asix_write_cmd(dev, AX_CMD_WRITE_MII_REG, phy_id, (__u16)loc, 2, &res);
asix_set_hw_mii(dev);
mutex_unlock(&dev->phy_mutex);
}
{
struct usbnet *dev = netdev_priv(net);
u8 opt = 0;
- u8 buf[1];
if (wolinfo->wolopts & WAKE_PHY)
opt |= AX_MONITOR_LINK;
opt |= AX_MONITOR_MODE;
if (asix_write_cmd(dev, AX_CMD_WRITE_MONITOR_MODE,
- opt, 0, 0, &buf) < 0)
+ opt, 0, 0, NULL) < 0)
return -EINVAL;
return 0;
struct ethtool_eeprom *eeprom, u8 *data)
{
struct usbnet *dev = netdev_priv(net);
- u16 *ebuf = (u16 *)data;
+ __le16 *ebuf = (__le16 *)data;
int i;
/* Crude hack to ensure that we don't overwrite memory
static int ax88172_bind(struct usbnet *dev, struct usb_interface *intf)
{
int ret = 0;
- void *buf;
+ u8 buf[ETH_ALEN];
int i;
unsigned long gpio_bits = dev->driver_info->data;
struct asix_data *data = (struct asix_data *)&dev->data;
usbnet_get_endpoints(dev,intf);
- buf = kmalloc(ETH_ALEN, GFP_KERNEL);
- if(!buf) {
- ret = -ENOMEM;
- goto out1;
- }
-
/* Toggle the GPIOs in a manufacturer/model specific way */
for (i = 2; i >= 0; i--) {
if ((ret = asix_write_cmd(dev, AX_CMD_WRITE_GPIOS,
(gpio_bits >> (i * 8)) & 0xff, 0, 0,
- buf)) < 0)
- goto out2;
+ NULL)) < 0)
+ goto out;
msleep(5);
}
if ((ret = asix_write_rx_ctl(dev, 0x80)) < 0)
- goto out2;
+ goto out;
/* Get the MAC address */
- memset(buf, 0, ETH_ALEN);
if ((ret = asix_read_cmd(dev, AX88172_CMD_READ_NODE_ID,
- 0, 0, 6, buf)) < 0) {
+ 0, 0, ETH_ALEN, buf)) < 0) {
dbg("read AX_CMD_READ_NODE_ID failed: %d", ret);
- goto out2;
+ goto out;
}
memcpy(dev->net->dev_addr, buf, ETH_ALEN);
mii_nway_restart(&dev->mii);
return 0;
-out2:
- kfree(buf);
-out1:
+
+out:
return ret;
}
static int ax88772_bind(struct usbnet *dev, struct usb_interface *intf)
{
int ret, embd_phy;
- void *buf;
u16 rx_ctl;
struct asix_data *data = (struct asix_data *)&dev->data;
+ u8 buf[ETH_ALEN];
u32 phyid;
data->eeprom_len = AX88772_EEPROM_LEN;
usbnet_get_endpoints(dev,intf);
- buf = kmalloc(6, GFP_KERNEL);
- if(!buf) {
- dbg ("Cannot allocate memory for buffer");
- ret = -ENOMEM;
- goto out1;
- }
-
if ((ret = asix_write_gpio(dev,
AX_GPIO_RSE | AX_GPIO_GPO_2 | AX_GPIO_GPO2EN, 5)) < 0)
- goto out2;
+ goto out;
/* 0x10 is the phy id of the embedded 10/100 ethernet phy */
embd_phy = ((asix_get_phy_addr(dev) & 0x1f) == 0x10 ? 1 : 0);
if ((ret = asix_write_cmd(dev, AX_CMD_SW_PHY_SELECT,
- embd_phy, 0, 0, buf)) < 0) {
+ embd_phy, 0, 0, NULL)) < 0) {
dbg("Select PHY #1 failed: %d", ret);
- goto out2;
+ goto out;
}
if ((ret = asix_sw_reset(dev, AX_SWRESET_IPPD | AX_SWRESET_PRL)) < 0)
- goto out2;
+ goto out;
msleep(150);
if ((ret = asix_sw_reset(dev, AX_SWRESET_CLEAR)) < 0)
- goto out2;
+ goto out;
msleep(150);
if (embd_phy) {
if ((ret = asix_sw_reset(dev, AX_SWRESET_IPRL)) < 0)
- goto out2;
+ goto out;
}
else {
if ((ret = asix_sw_reset(dev, AX_SWRESET_PRTE)) < 0)
- goto out2;
+ goto out;
}
msleep(150);
rx_ctl = asix_read_rx_ctl(dev);
dbg("RX_CTL is 0x%04x after software reset", rx_ctl);
if ((ret = asix_write_rx_ctl(dev, 0x0000)) < 0)
- goto out2;
+ goto out;
rx_ctl = asix_read_rx_ctl(dev);
dbg("RX_CTL is 0x%04x setting to 0x0000", rx_ctl);
/* Get the MAC address */
- memset(buf, 0, ETH_ALEN);
if ((ret = asix_read_cmd(dev, AX_CMD_READ_NODE_ID,
0, 0, ETH_ALEN, buf)) < 0) {
dbg("Failed to read MAC address: %d", ret);
- goto out2;
+ goto out;
}
memcpy(dev->net->dev_addr, buf, ETH_ALEN);
dbg("PHYID=0x%08x", phyid);
if ((ret = asix_sw_reset(dev, AX_SWRESET_PRL)) < 0)
- goto out2;
+ goto out;
msleep(150);
if ((ret = asix_sw_reset(dev, AX_SWRESET_IPRL | AX_SWRESET_PRL)) < 0)
- goto out2;
+ goto out;
msleep(150);
mii_nway_restart(&dev->mii);
if ((ret = asix_write_medium_mode(dev, AX88772_MEDIUM_DEFAULT)) < 0)
- goto out2;
+ goto out;
if ((ret = asix_write_cmd(dev, AX_CMD_WRITE_IPG0,
AX88772_IPG0_DEFAULT | AX88772_IPG1_DEFAULT,
- AX88772_IPG2_DEFAULT, 0, buf)) < 0) {
+ AX88772_IPG2_DEFAULT, 0, NULL)) < 0) {
dbg("Write IPG,IPG1,IPG2 failed: %d", ret);
- goto out2;
+ goto out;
}
/* Set RX_CTL to default values with 2k buffer, and enable cactus */
if ((ret = asix_write_rx_ctl(dev, AX_DEFAULT_RX_CTL)) < 0)
- goto out2;
+ goto out;
rx_ctl = asix_read_rx_ctl(dev);
dbg("RX_CTL is 0x%04x after all initializations", rx_ctl);
rx_ctl = asix_read_medium_status(dev);
dbg("Medium Status is 0x%04x after all initializations", rx_ctl);
- kfree(buf);
-
/* Asix framing packs multiple eth frames into a 2K usb bulk transfer */
if (dev->driver_info->flags & FLAG_FRAMING_AX) {
/* hard_mtu is still the default - the device does not support
jumbo eth frames */
dev->rx_urb_size = 2048;
}
-
return 0;
-out2:
- kfree(buf);
-out1:
+out:
return ret;
}
{
struct asix_data *data = (struct asix_data *)&dev->data;
int ret;
- void *buf;
- u16 eeprom;
+ u8 buf[ETH_ALEN];
+ __le16 eeprom;
+ u8 status;
int gpio0 = 0;
u32 phyid;
usbnet_get_endpoints(dev,intf);
- buf = kmalloc(6, GFP_KERNEL);
- if(!buf) {
- dbg ("Cannot allocate memory for buffer");
- ret = -ENOMEM;
- goto out1;
- }
-
- eeprom = 0;
- asix_read_cmd(dev, AX_CMD_READ_GPIOS, 0, 0, 1, &eeprom);
- dbg("GPIO Status: 0x%04x", eeprom);
+ asix_read_cmd(dev, AX_CMD_READ_GPIOS, 0, 0, 1, &status);
+ dbg("GPIO Status: 0x%04x", status);
asix_write_cmd(dev, AX_CMD_WRITE_ENABLE, 0, 0, 0, NULL);
asix_read_cmd(dev, AX_CMD_READ_EEPROM, 0x0017, 0, 2, &eeprom);
dbg("EEPROM index 0x17 is 0x%04x", eeprom);
- if (eeprom == 0xffff) {
+ if (eeprom == cpu_to_le16(0xffff)) {
data->phymode = PHY_MODE_MARVELL;
data->ledmode = 0;
gpio0 = 1;
} else {
- data->phymode = eeprom & 7;
- data->ledmode = eeprom >> 8;
- gpio0 = (eeprom & 0x80) ? 0 : 1;
+ data->phymode = le16_to_cpu(eeprom) & 7;
+ data->ledmode = le16_to_cpu(eeprom) >> 8;
+ gpio0 = (le16_to_cpu(eeprom) & 0x80) ? 0 : 1;
}
dbg("GPIO0: %d, PhyMode: %d", gpio0, data->phymode);
asix_write_gpio(dev, AX_GPIO_RSE | AX_GPIO_GPO_1 | AX_GPIO_GPO1EN, 40);
- if ((eeprom >> 8) != 1) {
+ if ((le16_to_cpu(eeprom) >> 8) != 1) {
asix_write_gpio(dev, 0x003c, 30);
asix_write_gpio(dev, 0x001c, 300);
asix_write_gpio(dev, 0x003c, 30);
asix_write_rx_ctl(dev, 0);
/* Get the MAC address */
- memset(buf, 0, ETH_ALEN);
if ((ret = asix_read_cmd(dev, AX_CMD_READ_NODE_ID,
0, 0, ETH_ALEN, buf)) < 0) {
dbg("Failed to read MAC address: %d", ret);
- goto out2;
+ goto out;
}
memcpy(dev->net->dev_addr, buf, ETH_ALEN);
mii_nway_restart(&dev->mii);
if ((ret = asix_write_medium_mode(dev, AX88178_MEDIUM_DEFAULT)) < 0)
- goto out2;
+ goto out;
if ((ret = asix_write_rx_ctl(dev, AX_DEFAULT_RX_CTL)) < 0)
- goto out2;
-
- kfree(buf);
+ goto out;
/* Asix framing packs multiple eth frames into a 2K usb bulk transfer */
if (dev->driver_info->flags & FLAG_FRAMING_AX) {
jumbo eth frames */
dev->rx_urb_size = 2048;
}
-
return 0;
-out2:
- kfree(buf);
-out1:
+out:
return ret;
}
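All the asix churn above follows from one rule: usb_control_msg() hands its buffer to the USB core for DMA, so it must be kmalloc()ed memory, never an on-stack array. With the bounce buffer moved inside asix_read_cmd()/asix_write_cmd(), callers may pass automatic variables (or NULL for zero-length writes), which is why the per-caller kmalloc()/kfree() unwinding disappears. The read side, condensed:

	void *buf = kmalloc(size, GFP_KERNEL);	/* DMA-able bounce buffer */

	if (!buf)
		return -ENOMEM;
	err = usb_control_msg(dev->udev, usb_rcvctrlpipe(dev->udev, 0),
			      cmd, USB_DIR_IN | USB_TYPE_VENDOR |
			      USB_RECIP_DEVICE, value, index,
			      buf, size, USB_CTRL_GET_TIMEOUT);
	if (err == size)
		memcpy(data, buf, size);	/* copy out to the caller */
	else if (err >= 0)
		err = -EINVAL;			/* short transfer */
	kfree(buf);
	return err;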
#define KAWETH_TX_TIMEOUT (5 * HZ)
#define KAWETH_SCRATCH_SIZE 32
#define KAWETH_FIRMWARE_BUF_SIZE 4096
-#define KAWETH_CONTROL_TIMEOUT (30 * HZ)
+#define KAWETH_CONTROL_TIMEOUT (30000)
#define KAWETH_STATUS_BROKEN 0x0000001
#define KAWETH_STATUS_CLOSING 0x0000002
ret = usb_control_msg(xdev, usb_rcvctrlpipe(xdev, 0), MCS7830_RD_BREQ,
MCS7830_RD_BMREQ, 0x0000, index, data,
- size, msecs_to_jiffies(MCS7830_CTRL_TIMEOUT));
+ size, MCS7830_CTRL_TIMEOUT);
return ret;
}
ret = usb_control_msg(xdev, usb_sndctrlpipe(xdev, 0), MCS7830_WR_BREQ,
MCS7830_WR_BMREQ, 0x0000, index, data,
- size, msecs_to_jiffies(MCS7830_CTRL_TIMEOUT));
+ size, MCS7830_CTRL_TIMEOUT);
return ret;
}
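Both timeout fixes above rest on the same point: usb_control_msg() takes its timeout in milliseconds, not jiffies, so wrapping a millisecond constant in HZ arithmetic or msecs_to_jiffies() double-converts it and can shrink a 30-second timeout to a fraction of a second. The raw value is passed straight through, as in the mcs7830 call above:

	/* timeout argument is plain milliseconds */
	usb_control_msg(xdev, usb_rcvctrlpipe(xdev, 0), MCS7830_RD_BREQ,
			MCS7830_RD_BMREQ, 0x0000, index, data, size,
			MCS7830_CTRL_TIMEOUT);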
#include <net/dst.h>
#include <net/xfrm.h>
-#include <net/veth.h>
+#include <linux/veth.h>
#define DRV_NAME "veth"
#define DRV_VERSION "1.0"
static __exit void veth_exit(void)
{
- struct veth_priv *priv, *next;
-
- rtnl_lock();
- /*
- * cannot trust __rtnl_link_unregister() to unregister all
- * devices, as each ->dellink call will remove two devices
- * from the list at once.
- */
- list_for_each_entry_safe(priv, next, &veth_list, list)
- veth_dellink(priv->dev);
-
- __rtnl_link_unregister(&veth_link_ops);
- rtnl_unlock();
+ rtnl_link_unregister(&veth_link_ops);
}
module_init(veth_init);
dev->addr_len = 0; /* hardware address length */
if (!chan->svc)
- *(u16*)dev->dev_addr = htons(chan->lcn);
+ *(__be16*)dev->dev_addr = htons(chan->lcn);
/* Initialize hardware parameters (just for reference) */
dev->irq = wandev->irq;
const void *daddr, const void *saddr,
unsigned len)
{
- skb->protocol = type;
+ skb->protocol = htons(type);
return dev->hard_header_len;
}
struct cycx_device *card = chan->card;
if (!chan->svc)
- chan->protocol = skb->protocol;
+ chan->protocol = ntohs(skb->protocol);
if (card->wandev.state != WAN_CONNECTED)
++chan->ifstats.tx_dropped;
else if (chan->svc && chan->protocol &&
- chan->protocol != skb->protocol) {
+ chan->protocol != ntohs(skb->protocol)) {
printk(KERN_INFO
"%s: unsupported Ethertype 0x%04X on interface %s!\n",
- card->devname, skb->protocol, dev->name);
+ card->devname, ntohs(skb->protocol), dev->name);
++chan->ifstats.tx_errors;
} else if (chan->protocol == ETH_P_IP) {
switch (chan->state) {
switch (state) {
case WAN_CONNECTED:
string_state = "connected!";
- *(u16*)dev->dev_addr = htons(chan->lcn);
+ *(__be16*)dev->dev_addr = htons(chan->lcn);
netif_wake_queue(dev);
reset_timer(dev);
};
struct TxFD {
- u32 state;
- u32 next;
- u32 data;
- u32 complete;
+ __le32 state;
+ __le32 next;
+ __le32 data;
+ __le32 complete;
u32 jiffies; /* Allows sizeof(TxFD) == sizeof(RxFD) + extra hack */
+ /* FWIW, the datasheet calls this "dummy" and says the card
+ * never looks at it; neither does the driver */
};
struct RxFD {
- u32 state1;
- u32 next;
- u32 data;
- u32 state2;
- u32 end;
+ __le32 state1;
+ __le32 next;
+ __le32 data;
+ __le32 state2;
+ __le32 end;
};
#define DUMMY_SKB_SIZE 64
#define SCC_REG_START(dpriv) (SCC_START+(dpriv->dev_id)*SCC_OFFSET)
struct dscc4_pci_priv {
- u32 *iqcfg;
+ __le32 *iqcfg;
int cfg_cur;
spinlock_t lock;
struct pci_dev *pdev;
struct RxFD *rx_fd;
struct TxFD *tx_fd;
- u32 *iqrx;
- u32 *iqtx;
+ __le32 *iqrx;
+ __le32 *iqtx;
/* FIXME: check all the volatile are required */
volatile u32 tx_current;
#define BrrExpMask 0x00000f00
#define BrrMultMask 0x0000003f
#define EncodingMask 0x00700000
-#define Hold 0x40000000
+#define Hold cpu_to_le32(0x40000000)
#define SccBusy 0x10000000
#define PowerUp 0x80000000
#define Vis 0x00001000
#define FrameRdo 0x40
#define FrameCrc 0x20
#define FrameRab 0x10
-#define FrameAborted 0x00000200
-#define FrameEnd 0x80000000
-#define DataComplete 0x40000000
+#define FrameAborted cpu_to_le32(0x00000200)
+#define FrameEnd cpu_to_le32(0x80000000)
+#define DataComplete cpu_to_le32(0x40000000)
#define LengthCheck 0x00008000
#define SccEvt 0x02000000
#define NoAck 0x00000200
#define Action 0x00000001
-#define HiDesc 0x20000000
+#define HiDesc cpu_to_le32(0x20000000)
/* SCC events */
#define RxEvt 0xf0000000
skbuff = dpriv->tx_skbuff;
for (i = 0; i < TX_RING_SIZE; i++) {
if (*skbuff) {
- pci_unmap_single(pdev, tx_fd->data, (*skbuff)->len,
- PCI_DMA_TODEVICE);
+ pci_unmap_single(pdev, le32_to_cpu(tx_fd->data),
+ (*skbuff)->len, PCI_DMA_TODEVICE);
dev_kfree_skb(*skbuff);
}
skbuff++;
skbuff = dpriv->rx_skbuff;
for (i = 0; i < RX_RING_SIZE; i++) {
if (*skbuff) {
- pci_unmap_single(pdev, rx_fd->data,
+ pci_unmap_single(pdev, le32_to_cpu(rx_fd->data),
RX_MAX(HDLC_MAX_MRU), PCI_DMA_FROMDEVICE);
dev_kfree_skb(*skbuff);
}
dpriv->rx_skbuff[dirty] = skb;
if (skb) {
skb->protocol = hdlc_type_trans(skb, dev);
- rx_fd->data = pci_map_single(dpriv->pci_priv->pdev, skb->data,
- len, PCI_DMA_FROMDEVICE);
+ rx_fd->data = cpu_to_le32(pci_map_single(dpriv->pci_priv->pdev,
+ skb->data, len, PCI_DMA_FROMDEVICE));
} else {
- rx_fd->data = (u32) NULL;
+ rx_fd->data = 0;
ret = -1;
}
return ret;
do {
if (!(dpriv->flags & (NeedIDR | NeedIDT)) ||
- (dpriv->iqtx[cur] & Xpr))
+ (dpriv->iqtx[cur] & cpu_to_le32(Xpr)))
break;
smp_rmb();
schedule_timeout_uninterruptible(10);
printk(KERN_DEBUG "%s: skb=0 (%s)\n", dev->name, __FUNCTION__);
goto refill;
}
- pkt_len = TO_SIZE(rx_fd->state2);
- pci_unmap_single(pdev, rx_fd->data, RX_MAX(HDLC_MAX_MRU), PCI_DMA_FROMDEVICE);
+ pkt_len = TO_SIZE(le32_to_cpu(rx_fd->state2));
+ pci_unmap_single(pdev, le32_to_cpu(rx_fd->data),
+ RX_MAX(HDLC_MAX_MRU), PCI_DMA_FROMDEVICE);
if ((skb->data[--pkt_len] & FrameOk) == FrameOk) {
stats->rx_packets++;
stats->rx_bytes += pkt_len;
}
dscc4_rx_update(dpriv, dev);
rx_fd->state2 = 0x00000000;
- rx_fd->end = 0xbabeface;
+ rx_fd->end = cpu_to_le32(0xbabeface);
}
static void dscc4_free1(struct pci_dev *pdev)
}
/* Global interrupt queue */
writel((u32)(((IRQ_RING_SIZE >> 5) - 1) << 20), ioaddr + IQLENR1);
- priv->iqcfg = (u32 *) pci_alloc_consistent(pdev,
- IRQ_RING_SIZE*sizeof(u32), &priv->iqcfg_dma);
+ priv->iqcfg = (__le32 *) pci_alloc_consistent(pdev,
+ IRQ_RING_SIZE*sizeof(__le32), &priv->iqcfg_dma);
if (!priv->iqcfg)
goto err_free_irq_5;
writel(priv->iqcfg_dma, ioaddr + IQCFG);
*/
for (i = 0; i < dev_per_card; i++) {
dpriv = priv->root + i;
- dpriv->iqtx = (u32 *) pci_alloc_consistent(pdev,
+ dpriv->iqtx = (__le32 *) pci_alloc_consistent(pdev,
IRQ_RING_SIZE*sizeof(u32), &dpriv->iqtx_dma);
if (!dpriv->iqtx)
goto err_free_iqtx_6;
}
for (i = 0; i < dev_per_card; i++) {
dpriv = priv->root + i;
- dpriv->iqrx = (u32 *) pci_alloc_consistent(pdev,
+ dpriv->iqrx = (__le32 *) pci_alloc_consistent(pdev,
IRQ_RING_SIZE*sizeof(u32), &dpriv->iqrx_dma);
if (!dpriv->iqrx)
goto err_free_iqrx_7;
dpriv->tx_skbuff[next] = skb;
tx_fd = dpriv->tx_fd + next;
tx_fd->state = FrameEnd | TO_STATE_TX(skb->len);
- tx_fd->data = pci_map_single(ppriv->pdev, skb->data, skb->len,
- PCI_DMA_TODEVICE);
+ tx_fd->data = cpu_to_le32(pci_map_single(ppriv->pdev, skb->data, skb->len,
+ PCI_DMA_TODEVICE));
tx_fd->complete = 0x00000000;
tx_fd->jiffies = jiffies;
mb();
if (state & Cfg) {
if (debug > 0)
printk(KERN_DEBUG "%s: CfgIV\n", DRV_NAME);
- if (priv->iqcfg[priv->cfg_cur++%IRQ_RING_SIZE] & Arf)
+ if (priv->iqcfg[priv->cfg_cur++%IRQ_RING_SIZE] & cpu_to_le32(Arf))
printk(KERN_ERR "%s: %s failed\n", dev->name, "CFG");
if (!(state &= ~Cfg))
goto out;
try:
cur = dpriv->iqtx_current%IRQ_RING_SIZE;
- state = dpriv->iqtx[cur];
+ state = le32_to_cpu(dpriv->iqtx[cur]);
if (!state) {
if (debug > 4)
printk(KERN_DEBUG "%s: Tx ISR = 0x%08x\n", dev->name,
tx_fd = dpriv->tx_fd + cur;
skb = dpriv->tx_skbuff[cur];
if (skb) {
- pci_unmap_single(ppriv->pdev, tx_fd->data,
+ pci_unmap_single(ppriv->pdev, le32_to_cpu(tx_fd->data),
skb->len, PCI_DMA_TODEVICE);
if (tx_fd->state & FrameEnd) {
stats->tx_packets++;
try:
cur = dpriv->iqrx_current%IRQ_RING_SIZE;
- state = dpriv->iqrx[cur];
+ state = le32_to_cpu(dpriv->iqrx[cur]);
if (!state)
return;
dpriv->iqrx[cur] = 0;
goto try;
rx_fd->state1 &= ~Hold;
rx_fd->state2 = 0x00000000;
- rx_fd->end = 0xbabeface;
+ rx_fd->end = cpu_to_le32(0xbabeface);
//}
goto try;
}
hdlc_stats(dev)->rx_over_errors++;
rx_fd->state1 |= Hold;
rx_fd->state2 = 0x00000000;
- rx_fd->end = 0xbabeface;
+ rx_fd->end = cpu_to_le32(0xbabeface);
} else
dscc4_rx_skb(dpriv, dev);
} while (1);
skb_copy_to_linear_data(skb, version,
strlen(version) % DUMMY_SKB_SIZE);
tx_fd->state = FrameEnd | TO_STATE_TX(DUMMY_SKB_SIZE);
- tx_fd->data = pci_map_single(dpriv->pci_priv->pdev, skb->data,
- DUMMY_SKB_SIZE, PCI_DMA_TODEVICE);
+ tx_fd->data = cpu_to_le32(pci_map_single(dpriv->pci_priv->pdev,
+ skb->data, DUMMY_SKB_SIZE,
+ PCI_DMA_TODEVICE));
dpriv->tx_skbuff[last] = skb;
}
return skb;
tx_fd->state = FrameEnd | TO_STATE_TX(2*DUMMY_SKB_SIZE);
tx_fd->complete = 0x00000000;
/* FIXME: NULL should be ok - to be tried */
- tx_fd->data = dpriv->tx_fd_dma;
- (tx_fd++)->next = (u32)(dpriv->tx_fd_dma +
+ tx_fd->data = cpu_to_le32(dpriv->tx_fd_dma);
+ (tx_fd++)->next = cpu_to_le32(dpriv->tx_fd_dma +
(++i%TX_RING_SIZE)*sizeof(*tx_fd));
} while (i < TX_RING_SIZE);
/* size set by the host. Multiple of 4 bytes please */
rx_fd->state1 = HiDesc;
rx_fd->state2 = 0x00000000;
- rx_fd->end = 0xbabeface;
+ rx_fd->end = cpu_to_le32(0xbabeface);
rx_fd->state1 |= TO_STATE_RX(HDLC_MAX_MRU);
// FIXME: return value checked, but the handling looks suspect
if (try_get_rx_skb(dpriv, dev) >= 0)
dpriv->rx_dirty++;
- (rx_fd++)->next = (u32)(dpriv->rx_fd_dma +
+ (rx_fd++)->next = cpu_to_le32(dpriv->rx_fd_dma +
(++i%RX_RING_SIZE)*sizeof(*rx_fd));
} while (i < RX_RING_SIZE);
static void
lmc_ssi_watchdog (lmc_softc_t * const sc)
{
- u_int16_t mii17;
- struct ssicsr2
- {
- unsigned short dtr:1, dsr:1, rts:1, cable:3, crc:1, led0:1, led1:1,
- led2:1, led3:1, fifo:1, ll:1, rl:1, tm:1, loop:1;
- };
- struct ssicsr2 *ssicsr;
- mii17 = lmc_mii_readreg (sc, 0, 17);
- ssicsr = (struct ssicsr2 *) &mii17;
- if (ssicsr->cable == 7)
+ u_int16_t mii17 = lmc_mii_readreg (sc, 0, 17);
+ if (((mii17 >> 3) & 7) == 7)
{
lmc_led_off (sc, LMC_MII16_LED2);
}
#define PR_RES 0x80
struct sbni_csr1 {
- unsigned rxl : 5;
- unsigned rate : 2;
- unsigned : 1;
+#ifdef __LITTLE_ENDIAN_BITFIELD
+ u8 rxl : 5;
+ u8 rate : 2;
+ u8 : 1;
+#else
+ u8 : 1;
+ u8 rate : 2;
+ u8 rxl : 5;
+#endif
};
/* fields in frame header */
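C leaves bit-field allocation order to the ABI, so a register overlay like sbni_csr1 silently reverses on big-endian targets unless both orders are spelled out; the kernel's byteorder headers define exactly one of __LITTLE_ENDIAN_BITFIELD / __BIG_ENDIAN_BITFIELD for this purpose. The usual guard, in miniature:

	#include <asm/byteorder.h>

	struct csr_overlay {
	#if defined(__LITTLE_ENDIAN_BITFIELD)
		u8 rxl : 5, rate : 2, unused : 1;
	#elif defined(__BIG_ENDIAN_BITFIELD)
		u8 unused : 1, rate : 2, rxl : 5;
	#else
	#error "Adjust <asm/byteorder.h> defines"
	#endif
	};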
<http://www.tldp.org/docs.html#howto>. Some more specific
information is contained in
<file:Documentation/networking/wavelan.txt> and in the source code
- <file:drivers/net/wavelan.p.h>.
+ <file:drivers/net/wireless/wavelan.p.h>.
You will also need the wireless tools package available from
<http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/Tools.html>.
config P54_COMMON
tristate "Softmac Prism54 support"
depends on MAC80211 && WLAN_80211 && FW_LOADER && EXPERIMENTAL
+ ---help---
+ This is common code for isl38xx based cards.
+ This module does nothing by itself - the USB/PCI frontends
+ also need to be enabled in order to support any devices.
+
+ These devices require softmac firmware which can be found at
+ http://prism54.org/
+
+ If you choose to build a module, it'll be called p54common.
config P54_USB
tristate "Prism54 USB support"
depends on P54_COMMON && USB
select CRC32
+ ---help---
+ This driver is for USB isl38xx based wireless cards.
+ These are USB based adapters found in devices such as:
+
+ 3COM 3CRWE254G72
+ SMC 2862W-G
+ Accton 802.11g WN4501 USB
+ Siemens Gigaset USB
+ Netgear WG121
+ Netgear WG111
+ Medion 40900, Roper Europe
+ Shuttle PN15, Airvast WM168g, IOGear GWU513
+ Linksys WUSB54G
+ Linksys WUSB54G Portable
+ DLink DWL-G120 Spinnaker
+ DLink DWL-G122
+ Belkin F5D7050 ver 1000
+ Cohiba Proto board
+ SMC 2862W-G version 2
+ U.S. Robotics U5 802.11g Adapter
+ FUJITSU E-5400 USB D1700
+ Sagem XG703A
+ DLink DWL-G120 Cohiba
+ Spinnaker Proto board
+ Linksys WUSB54AG
+ Inventel UR054G
+ Spinnaker DUT
+
+ These devices require softmac firmware which can be found at
+ http://prism54.org/
+
+ If you choose to build a module, it'll be called p54usb.
config P54_PCI
tristate "Prism54 PCI support"
depends on P54_COMMON && PCI
+ ---help---
+ This driver is for PCI isl38xx based wireless cards.
+ This driver supports most devices that are supported by the
+ fullmac prism54 driver plus many devices which are not
+ supported by the fullmac driver/firmware.
+
+ This driver requires softmac firmware which can be found at
+ http://prism54.org/
+
+ If you choose to build a module, it'll be called p54pci.
source "drivers/net/wireless/iwlwifi/Kconfig"
source "drivers/net/wireless/hostap/Kconfig"
#define B43_PHYTYPE_A 0x00
#define B43_PHYTYPE_B 0x01
#define B43_PHYTYPE_G 0x02
+#define B43_PHYTYPE_N 0x04
+#define B43_PHYTYPE_LP 0x05
/* PHYRegisters */
#define B43_PHY_ILT_A_CTRL 0x0072
#define PAD_BYTES(nr_bytes) P4D_BYTES( __LINE__ , (nr_bytes))
/* Lightweight function to convert a frequency (in MHz) to a channel number. */
-static inline u8 b43_freq_to_channel_a(int freq)
+static inline u8 b43_freq_to_channel_5ghz(int freq)
{
return ((freq - 5000) / 5);
}
-static inline u8 b43_freq_to_channel_bg(int freq)
+static inline u8 b43_freq_to_channel_2ghz(int freq)
{
u8 channel;
return channel;
}
-static inline u8 b43_freq_to_channel(struct b43_wldev *dev, int freq)
-{
- if (dev->phy.type == B43_PHYTYPE_A)
- return b43_freq_to_channel_a(freq);
- return b43_freq_to_channel_bg(freq);
-}
/* Lightweight function to convert a channel number to a frequency (in MHz). */
-static inline int b43_channel_to_freq_a(u8 channel)
+static inline int b43_channel_to_freq_5ghz(u8 channel)
{
return (5000 + (5 * channel));
}
-static inline int b43_channel_to_freq_bg(u8 channel)
+static inline int b43_channel_to_freq_2ghz(u8 channel)
{
int freq;
return freq;
}
-static inline int b43_channel_to_freq(struct b43_wldev *dev, u8 channel)
-{
- if (dev->phy.type == B43_PHYTYPE_A)
- return b43_channel_to_freq_a(channel);
- return b43_channel_to_freq_bg(channel);
-}
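As a quick sanity check of the renamed 5 GHz helpers: channel 36 maps to 5000 + 5 * 36 = 5180 MHz, and converting back gives (5180 - 5000) / 5 = 36, so the two functions are exact inverses for any frequency on the 5 MHz grid.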
static inline int b43_is_cck_rate(int rate)
{
rfk->rfkill->user_claim_unsupported = 1;
rfk->poll_dev = input_allocate_polled_device();
- if (!rfk->poll_dev)
- goto err_free_rfk;
+ if (!rfk->poll_dev) {
+ rfkill_free(rfk->rfkill);
+ goto err_freed_rfk;
+ }
+
rfk->poll_dev->private = dev;
rfk->poll_dev->poll = b43_rfkill_poll;
rfk->poll_dev->poll_interval = 1000; /* msecs */
err_free_polldev:
input_free_polled_device(rfk->poll_dev);
rfk->poll_dev = NULL;
-err_free_rfk:
- rfkill_free(rfk->rfkill);
+err_freed_rfk:
rfk->rfkill = NULL;
out_error:
rfk->registered = 0;
rfkill_unregister(rfk->rfkill);
input_free_polled_device(rfk->poll_dev);
rfk->poll_dev = NULL;
- rfkill_free(rfk->rfkill);
rfk->rfkill = NULL;
}
switch (chanstat & B43_RX_CHAN_PHYTYPE) {
case B43_PHYTYPE_A:
status.phymode = MODE_IEEE80211A;
- status.freq = chanid;
- status.channel = b43_freq_to_channel_a(chanid);
- break;
- case B43_PHYTYPE_B:
- status.phymode = MODE_IEEE80211B;
- status.freq = chanid + 2400;
- status.channel = b43_freq_to_channel_bg(chanid + 2400);
+ B43_WARN_ON(1);
+ /* FIXME: We don't really know which value the "chanid" contains.
+ * So the following assignment might be wrong. */
+ status.channel = chanid;
+ status.freq = b43_channel_to_freq_5ghz(status.channel);
break;
case B43_PHYTYPE_G:
status.phymode = MODE_IEEE80211G;
+ /* chanid is the radio channel cookie value as used
+ * to tune the radio. */
status.freq = chanid + 2400;
- status.channel = b43_freq_to_channel_bg(chanid + 2400);
+ status.channel = b43_freq_to_channel_2ghz(status.freq);
+ break;
+ case B43_PHYTYPE_N:
+ status.phymode = 0xDEAD /*FIXME MODE_IEEE80211N*/;
+ /* chanid is the SHM channel cookie, which is the plain
+ * channel number in b43. */
+ status.channel = chanid;
+ if (chanstat & B43_RX_CHAN_5GHZ)
+ status.freq = b43_channel_to_freq_5ghz(status.channel);
+ else
+ status.freq = b43_channel_to_freq_2ghz(status.channel);
break;
default:
B43_WARN_ON(1);
+ goto drop;
}
dev->stats.last_rx = jiffies;
} __attribute__ ((__packed__));
/* PHY RX Status 0 */
-#define B43_RX_PHYST0_GAINCTL 0x4000 /* Gain Control */
-#define B43_RX_PHYST0_PLCPHCF 0x0200
-#define B43_RX_PHYST0_PLCPFV 0x0100
-#define B43_RX_PHYST0_SHORTPRMBL 0x0080 /* Received with Short Preamble */
+#define B43_RX_PHYST0_GAINCTL 0x4000 /* Gain Control */
+#define B43_RX_PHYST0_PLCPHCF 0x0200
+#define B43_RX_PHYST0_PLCPFV 0x0100
+#define B43_RX_PHYST0_SHORTPRMBL 0x0080 /* Received with Short Preamble */
#define B43_RX_PHYST0_LCRS 0x0040
-#define B43_RX_PHYST0_ANT 0x0020 /* Antenna */
-#define B43_RX_PHYST0_UNSRATE 0x0010
+#define B43_RX_PHYST0_ANT 0x0020 /* Antenna */
+#define B43_RX_PHYST0_UNSRATE 0x0010
#define B43_RX_PHYST0_CLIP 0x000C
#define B43_RX_PHYST0_CLIP_SHIFT 2
-#define B43_RX_PHYST0_FTYPE 0x0003 /* Frame type */
-#define B43_RX_PHYST0_CCK 0x0000 /* Frame type: CCK */
-#define B43_RX_PHYST0_OFDM 0x0001 /* Frame type: OFDM */
-#define B43_RX_PHYST0_PRE_N 0x0002 /* Pre-standard N-PHY frame */
-#define B43_RX_PHYST0_STD_N 0x0003 /* Standard N-PHY frame */
+#define B43_RX_PHYST0_FTYPE 0x0003 /* Frame type */
+#define B43_RX_PHYST0_CCK 0x0000 /* Frame type: CCK */
+#define B43_RX_PHYST0_OFDM 0x0001 /* Frame type: OFDM */
+#define B43_RX_PHYST0_PRE_N 0x0002 /* Pre-standard N-PHY frame */
+#define B43_RX_PHYST0_STD_N 0x0003 /* Standard N-PHY frame */
/* PHY RX Status 2 */
-#define B43_RX_PHYST2_LNAG 0xC000 /* LNA Gain */
+#define B43_RX_PHYST2_LNAG 0xC000 /* LNA Gain */
#define B43_RX_PHYST2_LNAG_SHIFT 14
-#define B43_RX_PHYST2_PNAG 0x3C00 /* PNA Gain */
+#define B43_RX_PHYST2_PNAG 0x3C00 /* PNA Gain */
#define B43_RX_PHYST2_PNAG_SHIFT 10
-#define B43_RX_PHYST2_FOFF 0x03FF /* F offset */
+#define B43_RX_PHYST2_FOFF 0x03FF /* F offset */
/* PHY RX Status 3 */
-#define B43_RX_PHYST3_DIGG 0x1800 /* DIG Gain */
+#define B43_RX_PHYST3_DIGG 0x1800 /* DIG Gain */
#define B43_RX_PHYST3_DIGG_SHIFT 11
-#define B43_RX_PHYST3_TRSTATE 0x0400 /* TR state */
+#define B43_RX_PHYST3_TRSTATE 0x0400 /* TR state */
/* MAC RX Status */
-#define B43_RX_MAC_BEACONSENT 0x00008000 /* Beacon send flag */
-#define B43_RX_MAC_KEYIDX 0x000007E0 /* Key index */
-#define B43_RX_MAC_KEYIDX_SHIFT 5
-#define B43_RX_MAC_DECERR 0x00000010 /* Decrypt error */
-#define B43_RX_MAC_DEC 0x00000008 /* Decryption attempted */
-#define B43_RX_MAC_PADDING 0x00000004 /* Pad bytes present */
-#define B43_RX_MAC_RESP 0x00000002 /* Response frame transmitted */
-#define B43_RX_MAC_FCSERR 0x00000001 /* FCS error */
+#define B43_RX_MAC_RXST_VALID 0x01000000 /* PHY RXST valid */
+#define B43_RX_MAC_TKIP_MICERR 0x00100000 /* TKIP MIC error */
+#define B43_RX_MAC_TKIP_MICATT 0x00080000 /* TKIP MIC attempted */
+#define B43_RX_MAC_AGGTYPE 0x00060000 /* Aggregation type */
+#define B43_RX_MAC_AGGTYPE_SHIFT 17
+#define B43_RX_MAC_AMSDU 0x00010000 /* A-MSDU mask */
+#define B43_RX_MAC_BEACONSENT 0x00008000 /* Beacon sent flag */
+#define B43_RX_MAC_KEYIDX 0x000007E0 /* Key index */
+#define B43_RX_MAC_KEYIDX_SHIFT 5
+#define B43_RX_MAC_DECERR 0x00000010 /* Decrypt error */
+#define B43_RX_MAC_DEC 0x00000008 /* Decryption attempted */
+#define B43_RX_MAC_PADDING 0x00000004 /* Pad bytes present */
+#define B43_RX_MAC_RESP 0x00000002 /* Response frame transmitted */
+#define B43_RX_MAC_FCSERR 0x00000001 /* FCS error */
/* RX channel */
-#define B43_RX_CHAN_GAIN 0xFC00 /* Gain */
-#define B43_RX_CHAN_GAIN_SHIFT 10
-#define B43_RX_CHAN_ID 0x03FC /* Channel ID */
-#define B43_RX_CHAN_ID_SHIFT 2
-#define B43_RX_CHAN_PHYTYPE 0x0003 /* PHY type */
+#define B43_RX_CHAN_40MHZ 0x1000 /* 40 MHz channel width */
+#define B43_RX_CHAN_5GHZ 0x0800 /* 5 GHz band */
+#define B43_RX_CHAN_ID 0x07F8 /* Channel ID */
+#define B43_RX_CHAN_ID_SHIFT 3
+#define B43_RX_CHAN_PHYTYPE 0x0007 /* PHY type */
+
u8 b43_plcp_get_ratecode_cck(const u8 bitrate);
u8 b43_plcp_get_ratecode_ofdm(const u8 bitrate);
MODULE_DEVICE_TABLE(pci, prism2_plx_id_table);
-static struct pci_driver prism2_plx_drv_id = {
+static struct pci_driver prism2_plx_driver = {
.name = "hostap_plx",
.id_table = prism2_plx_id_table,
.probe = prism2_plx_probe,
static int __init init_prism2_plx(void)
{
- return pci_register_driver(&prism2_plx_drv_id);
+ return pci_register_driver(&prism2_plx_driver);
}
static void __exit exit_prism2_plx(void)
{
- pci_unregister_driver(&prism2_plx_drv_id);
+ pci_unregister_driver(&prism2_plx_driver);
}
{
struct ipw_priv *priv = dev_get_drvdata(d);
u32 log_len = ipw_get_event_log_len(priv);
- struct ipw_event log[log_len];
+ u32 log_size;
+ struct ipw_event *log;
u32 len = 0, i;
+ /* not using min() because of its strict type checking */
+ log_size = PAGE_SIZE / sizeof(*log) > log_len ?
+ sizeof(*log) * log_len : PAGE_SIZE;
+ log = kzalloc(log_size, GFP_KERNEL);
+ if (!log) {
+ IPW_ERROR("Unable to allocate memory for log\n");
+ return 0;
+ }
+ log_len = log_size / sizeof(*log);
ipw_capture_event_log(priv, log_len, log);
len += snprintf(buf + len, PAGE_SIZE - len, "%08X", log_len);
"\n%08X%08X%08X",
log[i].time, log[i].event, log[i].data);
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
+ kfree(log);
return len;
}
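The open-coded ternary above exists because min()'s strict type check rejects mixed operand types. min_t() from <linux/kernel.h> performs the same clamp with an explicit cast; a hypothetical equivalent, not what the driver actually uses:

	/* cast both sides to u32 so min()'s type check is satisfied */
	log_size = min_t(u32, PAGE_SIZE, sizeof(*log) * log_len);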
/**
* Reclaim Tx queue entries no more used by NIC.
*
- * When FW adwances 'R' index, all entries between old and
+ * When FW advances 'R' index, all entries between old and
* new 'R' index need to be reclaimed. As result, some free space
* forms. If there is enough free space (> low mark), wake Tx queue.
*
/* Unblock any waiting calls */
wake_up_interruptible_all(&priv->wait_command_queue);
- iwl_cancel_deferred_work(priv);
-
/* Wipe out the EXIT_PENDING status bit if we are not actually
* exiting the module */
if (!exit_pending)
mutex_lock(&priv->mutex);
__iwl_down(priv);
mutex_unlock(&priv->mutex);
+
+ iwl_cancel_deferred_work(priv);
}
#define MAX_HW_RESTARTS 5
IWL_DEBUG_INFO("*** UNLOAD DRIVER ***\n");
- mutex_lock(&priv->mutex);
set_bit(STATUS_EXIT_PENDING, &priv->status);
- __iwl_down(priv);
- mutex_unlock(&priv->mutex);
+
+ iwl_down(priv);
/* Free MAC hash list for ADHOC */
for (i = 0; i < IWL_IBSS_MAC_HASH_SIZE; i++) {
{
struct iwl_priv *priv = pci_get_drvdata(pdev);
- mutex_lock(&priv->mutex);
-
set_bit(STATUS_IN_SUSPEND, &priv->status);
/* Take down the device; powers it off, etc. */
- __iwl_down(priv);
+ iwl_down(priv);
if (priv->mac80211_registered)
ieee80211_stop_queues(priv->hw);
pci_disable_device(pdev);
pci_set_power_state(pdev, PCI_D3hot);
- mutex_unlock(&priv->mutex);
-
return 0;
}
printk(KERN_INFO "Coming out of suspend...\n");
- mutex_lock(&priv->mutex);
-
pci_set_power_state(pdev, PCI_D0);
err = pci_enable_device(pdev);
pci_restore_state(pdev);
pci_write_config_byte(pdev, 0x41, 0x00);
iwl_resume(priv);
- mutex_unlock(&priv->mutex);
return 0;
}
/* Unblock any waiting calls */
wake_up_interruptible_all(&priv->wait_command_queue);
- iwl_cancel_deferred_work(priv);
-
/* Wipe out the EXIT_PENDING status bit if we are not actually
* exiting the module */
if (!exit_pending)
mutex_lock(&priv->mutex);
__iwl_down(priv);
mutex_unlock(&priv->mutex);
+
+ iwl_cancel_deferred_work(priv);
}
#define MAX_HW_RESTARTS 5
IWL_DEBUG_INFO("*** UNLOAD DRIVER ***\n");
- mutex_lock(&priv->mutex);
set_bit(STATUS_EXIT_PENDING, &priv->status);
- __iwl_down(priv);
- mutex_unlock(&priv->mutex);
+
+ iwl_down(priv);
/* Free MAC hash list for ADHOC */
for (i = 0; i < IWL_IBSS_MAC_HASH_SIZE; i++) {
{
struct iwl_priv *priv = pci_get_drvdata(pdev);
- mutex_lock(&priv->mutex);
-
set_bit(STATUS_IN_SUSPEND, &priv->status);
/* Take down the device; powers it off, etc. */
- __iwl_down(priv);
+ iwl_down(priv);
if (priv->mac80211_registered)
ieee80211_stop_queues(priv->hw);
pci_disable_device(pdev);
pci_set_power_state(pdev, PCI_D3hot);
- mutex_unlock(&priv->mutex);
-
return 0;
}
printk(KERN_INFO "Coming out of suspend...\n");
- mutex_lock(&priv->mutex);
-
pci_set_power_state(pdev, PCI_D0);
err = pci_enable_device(pdev);
pci_restore_state(pdev);
pci_write_config_byte(pdev, 0x41, 0x00);
iwl_resume(priv);
- mutex_unlock(&priv->mutex);
return 0;
}
if (sscanf(func->card->info[i],
"ID: %x", &model) == 1)
break;
+ if (!strcmp(func->card->info[i], "IBIS Wireless SDIO Card")) {
+ model = 4;
+ break;
+ }
}
if (i == func->card->num_info) {
static void rt2500usb_config_mac_addr(struct rt2x00_dev *rt2x00dev,
__le32 *mac)
{
- rt2500usb_register_multiwrite(rt2x00dev, MAC_CSR2, &mac,
+ rt2500usb_register_multiwrite(rt2x00dev, MAC_CSR2, mac,
(3 * sizeof(__le16)));
}
struct data_entry *entry;
struct data_desc *rxd;
struct sk_buff *skb;
+ struct ieee80211_hdr *hdr;
struct rxdata_entry_desc desc;
+ int header_size;
+ int align;
u32 word;
while (1) {
memset(&desc, 0x00, sizeof(desc));
rt2x00dev->ops->lib->fill_rxdone(entry, &desc);
+ hdr = (struct ieee80211_hdr *)entry->data_addr;
+ header_size =
+ ieee80211_get_hdrlen(le16_to_cpu(hdr->frame_control));
+
+ /*
+ * The data behind the ieee80211 header must be
+ * aligned on a 4 byte boundary.
+ */
+ align = header_size % 4;
+
/*
* Allocate the sk_buffer, initialize it and copy
* all data into it.
*/
- skb = dev_alloc_skb(desc.size + NET_IP_ALIGN);
+ skb = dev_alloc_skb(desc.size + align);
if (!skb)
return;
- skb_reserve(skb, NET_IP_ALIGN);
- skb_put(skb, desc.size);
- memcpy(skb->data, entry->data_addr, desc.size);
+ skb_reserve(skb, align);
+ memcpy(skb_put(skb, desc.size), entry->data_addr, desc.size);
/*
* Send the frame to rt2x00lib for further processing.
struct data_ring *ring = entry->ring;
struct rt2x00_dev *rt2x00dev = ring->rt2x00dev;
struct sk_buff *skb;
+ struct ieee80211_hdr *hdr;
struct rxdata_entry_desc desc;
+ int header_size;
int frame_size;
if (!test_bit(DEVICE_ENABLED_RADIO, &rt2x00dev->flags) ||
* Allocate a new sk buffer to replace the current one.
* If allocation fails, we should drop the current frame
* so we can recycle the existing sk buffer for the new frame.
+ * As alignment we use 2 and not NET_IP_ALIGN because we need
+ * to be sure we have 2 bytes of room in the head (NET_IP_ALIGN
+ * can be 0 on some hardware). We use these 2 bytes for frame
+ * alignment later; we assume that the chance that
+ * header_size % 4 == 2 is bigger than the chance that
+ * header_size % 4 == 0, and thus optimize alignment by
+ * reserving the 2 bytes in advance.
*/
frame_size = entry->ring->data_size + entry->ring->desc_size;
- skb = dev_alloc_skb(frame_size + NET_IP_ALIGN);
+ skb = dev_alloc_skb(frame_size + 2);
if (!skb)
goto skip_entry;
- skb_reserve(skb, NET_IP_ALIGN);
+ skb_reserve(skb, 2);
skb_put(skb, frame_size);
/*
- * Trim the skb_buffer to only contain the valid
- * frame data (so ignore the device's descriptor).
+ * The data behind the ieee80211 header must be
+ * aligned on a 4 byte boundary.
+ * After that trim the entire buffer down to only
+ * contain the valid frame data excluding the device
+ * descriptor.
*/
+ hdr = (struct ieee80211_hdr *)entry->skb->data;
+ header_size =
+ ieee80211_get_hdrlen(le16_to_cpu(hdr->frame_control));
+
+ if (header_size % 4 == 0) {
+ skb_push(entry->skb, 2);
+ memmove(entry->skb->data, entry->skb->data + 2, skb->len - 2);
+ }
skb_trim(entry->skb, desc.size);
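To see why the 2-byte headroom plus the conditional skb_push() always aligns the payload: a QoS data header is 26 bytes (26 % 4 == 2), so with the frame starting at offset 2 the payload begins at 2 + 26 = 28, a multiple of 4; a plain data header is 24 bytes (24 % 4 == 0), so the frame is shifted back to offset 0 and the payload begins at 24, again a multiple of 4.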
/*
{
struct data_ring *ring;
struct data_entry *entry;
+ struct data_entry *entry_done;
struct data_desc *txd;
u32 word;
u32 reg;
!rt2x00_get_field32(word, TXD_W0_VALID))
return;
+ entry_done = rt2x00_get_data_entry_done(ring);
+ while (entry != entry_done) {
+ /* Catch up. Just report any entries we missed as
+ * failed. */
+ WARNING(rt2x00dev,
+ "TX status report missed for entry %p\n",
+ entry_done);
+ rt2x00lib_txdone(entry_done, TX_FAIL_OTHER, 0);
+ entry_done = rt2x00_get_data_entry_done(ring);
+ }
+
/*
* Obtain the status about this packet.
*/
{USB_DEVICE(0x0846, 0x6a00)},
/* HP */
{USB_DEVICE(0x03f0, 0xca02)},
+ /* Sitecom */
+ {USB_DEVICE(0x0df6, 0x000d)},
{}
};
spin_lock(&np->rx_lock);
- if (unlikely(!netif_carrier_ok(dev))) {
- spin_unlock(&np->rx_lock);
- return 0;
- }
-
skb_queue_head_init(&rxq);
skb_queue_head_init(&errq);
skb_queue_head_init(&tmpq);
/* The Yellowfin Rx and Tx buffer descriptors.
Elements are written as 32 bit for endian portability. */
struct yellowfin_desc {
- u32 dbdma_cmd;
- u32 addr;
- u32 branch_addr;
- u32 result_status;
+ __le32 dbdma_cmd;
+ __le32 addr;
+ __le32 branch_addr;
+ __le32 result_status;
};
struct tx_status_words {
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
/* Free the original skb. */
- pci_unmap_single(yp->pci_dev, yp->tx_ring[entry].addr,
+ pci_unmap_single(yp->pci_dev, le32_to_cpu(yp->tx_ring[entry].addr),
skb->len, PCI_DMA_TODEVICE);
dev_kfree_skb_irq(skb);
yp->tx_skbuff[entry] = NULL;
if(!desc->result_status)
break;
- pci_dma_sync_single_for_cpu(yp->pci_dev, desc->addr,
+ pci_dma_sync_single_for_cpu(yp->pci_dev, le32_to_cpu(desc->addr),
yp->rx_buf_sz, PCI_DMA_FROMDEVICE);
desc_status = le32_to_cpu(desc->result_status) >> 16;
buf_addr = rx_skb->data;
data_size = (le32_to_cpu(desc->dbdma_cmd) -
le32_to_cpu(desc->result_status)) & 0xffff;
- frame_status = le16_to_cpu(get_unaligned((s16*)&(buf_addr[data_size - 2])));
+ frame_status = le16_to_cpu(get_unaligned((__le16*)&(buf_addr[data_size - 2])));
if (yellowfin_debug > 4)
printk(KERN_DEBUG " yellowfin_rx() status was %4.4x.\n",
frame_status);
if (pkt_len > rx_copybreak) {
skb_put(skb = rx_skb, pkt_len);
pci_unmap_single(yp->pci_dev,
- yp->rx_ring[entry].addr,
+ le32_to_cpu(yp->rx_ring[entry].addr),
yp->rx_buf_sz,
PCI_DMA_FROMDEVICE);
yp->rx_skbuff[entry] = NULL;
skb_reserve(skb, 2); /* 16 byte align the IP header */
skb_copy_to_linear_data(skb, rx_skb->data, pkt_len);
skb_put(skb, pkt_len);
- pci_dma_sync_single_for_device(yp->pci_dev, desc->addr,
- yp->rx_buf_sz,
- PCI_DMA_FROMDEVICE);
+ pci_dma_sync_single_for_device(yp->pci_dev,
+ le32_to_cpu(desc->addr),
+ yp->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
}
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb);
/* Free all the skbuffs in the Rx queue. */
for (i = 0; i < RX_RING_SIZE; i++) {
yp->rx_ring[i].dbdma_cmd = cpu_to_le32(CMD_STOP);
- yp->rx_ring[i].addr = 0xBADF00D0; /* An invalid address. */
+ yp->rx_ring[i].addr = cpu_to_le32(0xBADF00D0); /* An invalid address. */
if (yp->rx_skbuff[i]) {
dev_kfree_skb(yp->rx_skbuff[i]);
}
return child;
}
-static void pci_enable_crs(struct pci_dev *dev)
-{
- u16 cap, rpctl;
- int rpcap = pci_find_capability(dev, PCI_CAP_ID_EXP);
- if (!rpcap)
- return;
-
- pci_read_config_word(dev, rpcap + PCI_CAP_FLAGS, &cap);
- if (((cap & PCI_EXP_FLAGS_TYPE) >> 4) != PCI_EXP_TYPE_ROOT_PORT)
- return;
-
- pci_read_config_word(dev, rpcap + PCI_EXP_RTCTL, &rpctl);
- rpctl |= PCI_EXP_RTCTL_CRSSVE;
- pci_write_config_word(dev, rpcap + PCI_EXP_RTCTL, rpctl);
-}
-
static void pci_fixup_parent_subordinate_busnr(struct pci_bus *child, int max)
{
struct pci_bus *parent = child->parent;
pci_write_config_word(dev, PCI_BRIDGE_CONTROL,
bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
- pci_enable_crs(dev);
-
if ((buses & 0xffff00) && !pcibios_assign_all_busses() && !is_cardbus) {
unsigned int cmax, busnr;
/*
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_0, quirk_ich6_lpc_acpi );
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_2, quirk_ich6_lpc_acpi );
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_3, quirk_ich6_lpc_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_1, quirk_ich6_lpc_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH8_4, quirk_ich6_lpc_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_2, quirk_ich6_lpc_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_4, quirk_ich6_lpc_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_7, quirk_ich6_lpc_acpi );
+DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ICH9_8, quirk_ich6_lpc_acpi );
/*
* VIA ACPI: One IO region pointed to by longword at
#include "pxa2xx_base.h"
-int __init pcmcia_lubbock_init(struct sa1111_dev *sadev)
+int pcmcia_lubbock_init(struct sa1111_dev *sadev)
{
int ret = -ENODEV;
int i = 0;
int irq;
int p, t;
+ static unsigned char warned;
if (!valid_IRQ(gsi))
return;
while (!(res->irq_resource[i].flags & IORESOURCE_UNSET) &&
i < PNP_MAX_IRQ)
i++;
- if (i >= PNP_MAX_IRQ) {
+ if (i >= PNP_MAX_IRQ && !warned) {
printk(KERN_ERR "pnpacpi: exceeded the max number of IRQ "
"resources: %d \n", PNP_MAX_IRQ);
+ warned = 1;
return;
}
/*
int bus_master, int transfer)
{
int i = 0;
+ static unsigned char warned;
while (i < PNP_MAX_DMA &&
!(res->dma_resource[i].flags & IORESOURCE_UNSET))
}
res->dma_resource[i].start = dma;
res->dma_resource[i].end = dma;
- } else {
+ } else if (!warned) {
printk(KERN_ERR "pnpacpi: exceeded the max number of DMA "
"resources: %d \n", PNP_MAX_DMA);
+ warned = 1;
}
}
u64 io, u64 len, int io_decode)
{
int i = 0;
+ static unsigned char warned;
while (!(res->port_resource[i].flags & IORESOURCE_UNSET) &&
i < PNP_MAX_PORT)
}
res->port_resource[i].start = io;
res->port_resource[i].end = io + len - 1;
- } else {
+ } else if (!warned) {
printk(KERN_ERR "pnpacpi: exceeded the max number of IO "
"resources: %d \n", PNP_MAX_PORT);
+ warned = 1;
}
}
int write_protect)
{
int i = 0;
+ static unsigned char warned;
while (!(res->mem_resource[i].flags & IORESOURCE_UNSET) &&
(i < PNP_MAX_MEM))
res->mem_resource[i].start = mem;
res->mem_resource[i].end = mem + len - 1;
- } else {
+ } else if (!warned) {
printk(KERN_ERR "pnpacpi: exceeded the max number of mem "
"resources: %d\n", PNP_MAX_MEM);
+ warned = 1;
}
}
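The same warn-once pattern appears in all four resource parsers above: a function-local static flag suppresses every report after the first. A sketch of the idiom:

	static unsigned char warned;	/* persists across calls */

	if (overflowed && !warned) {
		printk(KERN_ERR "pnpacpi: exceeded the max number of resources\n");
		warned = 1;	/* report only once per parser */
	}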
if (result) {
dev_dbg(&dev->core, "%s:%d: drv->probe failed\n",
__func__, __LINE__);
- down(&vuart_bus_priv.probe_mutex);
goto fail_probe;
}
This is a driver for RAID/SCSI Disk Array Controllers (EISA/ISA/PCI)
manufactured by Intel Corporation/ICP vortex GmbH. It is documented
in the kernel source in <file:drivers/scsi/gdth.c> and
- <file:drivers/scsi/gdth.h.>
+ <file:drivers/scsi/gdth.h>.
To compile this driver as a module, choose M here: the
module will be called gdth.
#define ASC_IOADR_TABLE_MAX_IX 11
-static PortAddr _asc_def_iop_base[ASC_IOADR_TABLE_MAX_IX] __devinitdata = {
+static PortAddr _asc_def_iop_base[ASC_IOADR_TABLE_MAX_IX] = {
0x100, 0x0110, 0x120, 0x0130, 0x140, 0x0150, 0x0190,
0x0210, 0x0230, 0x0250, 0x0330
};
int cnt;
int req_cnt;
int seg_cnt;
- dma_addr_t dma_handle;
u8 dir;
ENTER("qla1280_32bit_start_scsi");
cmd->cmnd[0]);
/* Calculate number of entries and segments required. */
+ req_cnt = 1;
seg_cnt = scsi_dma_map(cmd);
if (seg_cnt) {
/*
return ret;
}
-static void __devexit
+static void
qla2x00_remove_one(struct pci_dev *pdev)
{
scsi_qla_host_t *ha;
},
.id_table = qla2xxx_pci_tbl,
.probe = qla2x00_probe_one,
- .remove = __devexit_p(qla2x00_remove_one),
+ .remove = qla2x00_remove_one,
.err_handler = &qla2xxx_err_handler,
};
}
EXPORT_SYMBOL(scsi_prep_return);
-static int scsi_prep_fn(struct request_queue *q, struct request *req)
+int scsi_prep_fn(struct request_queue *q, struct request *req)
{
struct scsi_device *sdev = q->queuedata;
int ret = BLKPREP_KILL;
extern void scsi_free_queue(struct request_queue *q);
extern int scsi_init_queue(void);
extern void scsi_exit_queue(void);
+struct request_queue;
+struct request;
+extern int scsi_prep_fn(struct request_queue *, struct request *);
/* scsi_proc.c */
#ifdef CONFIG_SCSI_PROC_FS
return err;
}
+static int scsi_bus_remove(struct device *dev)
+{
+ struct device_driver *drv = dev->driver;
+ struct scsi_device *sdev = to_scsi_device(dev);
+ int err = 0;
+
+ /* reset the prep_fn back to the default since the
+ * driver may have altered it and it's being removed */
+ blk_queue_prep_rq(sdev->request_queue, scsi_prep_fn);
+
+ if (drv && drv->remove)
+ err = drv->remove(dev);
+
+ return 0;
+}
+
struct bus_type scsi_bus_type = {
.name = "scsi",
.match = scsi_bus_match,
.uevent = scsi_bus_uevent,
.suspend = scsi_bus_suspend,
.resume = scsi_bus_resume,
+ .remove = scsi_bus_remove,
};
int scsi_sysfs_register(void)
static int do_srp_rport_del(struct device *dev, void *data)
{
- srp_rport_del(dev_to_rport(dev));
+ if (scsi_is_srp_rport(dev))
+ srp_rport_del(dev_to_rport(dev));
return 0;
}
}
EXPORT_SYMBOL(sunserial_unregister_minors);
-int __init sunserial_console_match(struct console *con, struct device_node *dp,
+int sunserial_console_match(struct console *con, struct device_node *dp,
struct uart_driver *drv, int line)
{
int off;
struct spi_bitbang_cs *cs = spi->controller_state;
struct spi_bitbang *bitbang;
int retval;
+ unsigned long flags;
bitbang = spi_master_get_devdata(spi->master);
*/
/* deselect chip (low or high) */
- spin_lock(&bitbang->lock);
+ spin_lock_irqsave(&bitbang->lock, flags);
if (!bitbang->busy) {
bitbang->chipselect(spi, BITBANG_CS_INACTIVE);
ndelay(cs->nsecs);
}
- spin_unlock(&bitbang->lock);
+ spin_unlock_irqrestore(&bitbang->lock, flags);
return 0;
}
case SSB_DEV_PCI:
case SSB_DEV_PCIE:
#ifdef CONFIG_SSB_DRIVER_PCICORE
+ if (bus->bustype == SSB_BUSTYPE_PCI) {
+ /* Ignore PCI cores on PCI-E cards.
+ * Ignore PCI-E cores on PCI cards. */
+ if (dev->id.coreid == SSB_DEV_PCI) {
+ if (bus->host_pci->is_pcie)
+ continue;
+ } else {
+ if (!bus->host_pci->is_pcie)
+ continue;
+ }
+ }
if (bus->pcicore.dev) {
ssb_printk(KERN_WARNING PFX
"WARNING: Multiple PCI(E) cores found\n");
| USB_TYPE_STANDARD)) {
/* Note: The driver has not include OTG support yet.
* This will be set when OTG support is added */
- if (!gadget_is_otg(udc->gadget))
+ if (!gadget_is_otg(&udc->gadget))
break;
else if (setup->bRequest == USB_DEVICE_B_HNP_ENABLE)
udc->gadget.b_hnp_enable = 1;
static struct usb_device_id id_table [] = {
{ USB_DEVICE(0x08e6, 0x5501) }, /* Gemalto Prox-PU/CU contactless smartcard reader */
{ USB_DEVICE(0x0FCF, 0x1003) }, /* Dynastream ANT development board */
+ { USB_DEVICE(0x0FCF, 0x1004) }, /* Dynastream ANT2USB */
{ USB_DEVICE(0x10A6, 0xAA26) }, /* Knock-off DCU-11 cable */
{ USB_DEVICE(0x10AB, 0x10C5) }, /* Siemens MC60 Cable */
{ USB_DEVICE(0x10B5, 0xAC70) }, /* Nokia CA-42 USB */
port = (struct usb_serial_port *) urb->context;
tty = port->tty;
- if (urb->actual_length) {
+ if (tty && urb->actual_length) {
/* 0x80 bit is error flag */
if ((data[0] & 0x80) == 0) {
/* no errors on individual bytes, only possible overrun err*/
}
spin_unlock_irqrestore(&priv->lock, flags);
+ /* The PL2303 is reported to lose bytes if you change
+ * serial settings even to the same values as before. Thus
+ * we actually need to filter in this specific case. */
+
+ if (!tty_termios_hw_change(port->tty->termios, old_termios))
+ return;
+
cflag = port->tty->termios->c_cflag;
buf = kzalloc(7, GFP_KERNEL);
{ USB_DEVICE(0x1199, 0x6804) }, /* Sierra Wireless MC8755 */
{ USB_DEVICE(0x1199, 0x6803) }, /* Sierra Wireless MC8765 */
{ USB_DEVICE(0x1199, 0x6812) }, /* Sierra Wireless MC8775 & AC 875U */
+ { USB_DEVICE(0x1199, 0x6813) }, /* Sierra Wireless MC8775 (Thinkpad internal) */
{ USB_DEVICE(0x1199, 0x6820) }, /* Sierra Wireless AirCard 875 */
{ USB_DEVICE(0x1199, 0x6832) }, /* Sierra Wireless MC8780*/
{ USB_DEVICE(0x1199, 0x6833) }, /* Sierra Wireless MC8781*/
{ USB_DEVICE(0x1199, 0x6804) }, /* Sierra Wireless MC8755 */
{ USB_DEVICE(0x1199, 0x6803) }, /* Sierra Wireless MC8765 */
{ USB_DEVICE(0x1199, 0x6812) }, /* Sierra Wireless MC8775 & AC 875U */
+ { USB_DEVICE(0x1199, 0x6813) }, /* Sierra Wireless MC8775 (Thinkpad internal) */
{ USB_DEVICE(0x1199, 0x6820) }, /* Sierra Wireless AirCard 875 */
{ USB_DEVICE(0x1199, 0x6832) }, /* Sierra Wireless MC8780*/
{ USB_DEVICE(0x1199, 0x6833) }, /* Sierra Wireless MC8781*/
module_exit(atmel_lcdfb_exit);
MODULE_DESCRIPTION("AT91/AT32 LCD Controller framebuffer driver");
-MODULE_AUTHOR("Nicolas Ferre <nicolas.ferre@rfo.atmel.com>");
+MODULE_AUTHOR("Nicolas Ferre <nicolas.ferre@atmel.com>");
MODULE_LICENSE("GPL");
/* 1366x768, 60 Hz, 47.403 kHz hsync, WXGA 16:9 aspect ratio */
NULL, 60, 1366, 768, 13806, 120, 10, 14, 3, 32, 5,
0, FB_VMODE_NONINTERLACED
+ }, {
+ /* 1280x800, 60 Hz, 47.403 kHz hsync, WXGA 16:10 aspect ratio */
+ NULL, 60, 1280, 800, 12048, 200, 64, 24, 1, 136, 3,
+ 0, FB_VMODE_NONINTERLACED
},
};
u32 ddr_line_length, xdr_line_length;
u64 ddr_base, xdr_base;
- acquire_console_sem();
-
if (frame > par->num_frames - 1) {
dev_dbg(info->device, "%s: invalid frame number (%u)\n",
__func__, frame);
xdr_line_length);
out:
- release_console_sem();
return error;
}
if (atomic_dec_and_test(&ps3fb.f_count)) {
if (atomic_read(&ps3fb.ext_flip)) {
atomic_set(&ps3fb.ext_flip, 0);
- ps3fb_sync(info, 0); /* single buffer */
+ if (!try_acquire_console_sem()) {
+ ps3fb_sync(info, 0); /* single buffer */
+ release_console_sem();
+ }
}
}
return 0;
break;
dev_dbg(info->device, "PS3FB_IOCTL_FSEL:%d\n", val);
+ acquire_console_sem();
retval = ps3fb_sync(info, val);
+ release_console_sem();
break;
default:
set_current_state(TASK_INTERRUPTIBLE);
if (ps3fb.is_kicked) {
ps3fb.is_kicked = 0;
+ acquire_console_sem();
ps3fb_sync(info, 0); /* single buffer */
+ release_console_sem();
}
schedule();
}
ps3fb_flip_ctl(0, &ps3fb); /* flip off */
ps3fb.dinfo->irq.mask = 0;
- if (info) {
- unregister_framebuffer(info);
- fb_dealloc_cmap(&info->cmap);
- framebuffer_release(info);
- }
-
ps3av_register_flip_ctl(NULL, NULL);
if (ps3fb.task) {
struct task_struct *task = ps3fb.task;
free_irq(ps3fb.irq_no, &dev->core);
ps3_irq_plug_destroy(ps3fb.irq_no);
}
+ if (info) {
+ unregister_framebuffer(info);
+ fb_dealloc_cmap(&info->cmap);
+ framebuffer_release(info);
+ info = dev->core.driver_data = NULL;
+ }
iounmap((u8 __iomem *)ps3fb.dinfo);
status = lv1_gpu_context_free(ps3fb.context_handle);
break;
}
- info->fix.line_length = (var->width * var->bits_per_pixel) / 8;
+ info->fix.line_length = (var->xres_virtual * var->bits_per_pixel) / 8;
/* activate this new configuration */
clk_enable(info->clk);
msleep(1);
- s3c2410fb_init_registers(info);
+ s3c2410fb_init_registers(fbinfo);
return 0;
}
};
static int mtrr __devinitdata = 3; /* enable mtrr by default */
-static int blank __devinitdata = 1; /* enable blanking by default */
+static int blank = 1; /* enable blanking by default */
static int ypan __devinitdata = 1; /* 0: scroll, 1: ypan, 2: ywrap */
static int pmi_setpal __devinitdata = 1; /* use PMI for palette changes */
static int nocrtc __devinitdata; /* ignore CRTC settings */
info->fbops->fb_pan_display = NULL;
}
-static void uvesafb_init_mtrr(struct fb_info *info)
+static void __devinit uvesafb_init_mtrr(struct fb_info *info)
{
#ifdef CONFIG_MTRR
if (mtrr && !(info->fix.smem_start & (PAGE_SIZE - 1))) {
static inline int w1_DS18B20_convert_temp(u8 rom[9])
{
- int t = (rom[1] << 8) | rom[0];
+ s16 t = (rom[1] << 8) | rom[0];
t /= 16;
return t;
}
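The change from int to s16 matters for temperatures below zero: (rom[1] << 8) | rom[0] never sign-extends (rom[] is u8), so a raw reading of 0xFF5E, which is -162 in 16-bit two's complement and should yield -162 / 16 = -10 degrees, came out as 65374 / 16 = 4085 with the old int.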
crc = w1_calc_crc8(rom, 8);
- if (rom[8] == crc && rom[0])
+ if (rom[8] == crc)
verdict = 1;
}
}
w1_search_devices(dev, search_type, w1_slave_found);
list_for_each_entry_safe(sl, sln, &dev->slist, w1_slave_entry) {
- if (!test_bit(W1_SLAVE_ACTIVE, (unsigned long *)&sl->flags) && !--sl->ttl) {
+ if (!test_bit(W1_SLAVE_ACTIVE, (unsigned long *)&sl->flags) && !--sl->ttl)
w1_slave_detach(sl);
-
- dev->slave_count--;
- } else if (test_bit(W1_SLAVE_ACTIVE, (unsigned long *)&sl->flags))
+ else if (test_bit(W1_SLAVE_ACTIVE, (unsigned long *)&sl->flags))
sl->ttl = dev->slave_ttl;
}
/* we will autodetect the W83697HF/HG watchdog */
for (i = 0; ((!found) && (w83697hf_ioports[i] != 0)); i++) {
wdt_io = w83697hf_ioports[i];
- if (!w83697hf_check_wdt()) {
+ if (!w83697hf_check_wdt())
found++;
- break;
- }
}
} else {
if (!w83697hf_check_wdt())
help
If you say Y here, you will be able to mount Macintosh-formatted
floppy disks and hard drive partitions with full read-write access.
- Please read <file:fs/hfs/HFS.txt> to learn about the available mount
- options.
+ Please read <file:Documentation/filesystems/hfs.txt> to learn about
+ the available mount options.
To compile this file system support as a module, choose M here: the
module will be called hfs.
prstatus->pr_sigpend = p->pending.signal.sig[0];
prstatus->pr_sighold = p->blocked.sig[0];
prstatus->pr_pid = task_pid_vnr(p);
- prstatus->pr_ppid = task_pid_vnr(p->parent);
+ prstatus->pr_ppid = task_pid_vnr(p->real_parent);
prstatus->pr_pgrp = task_pgrp_vnr(p);
prstatus->pr_sid = task_session_vnr(p);
if (thread_group_leader(p)) {
psinfo->pr_psargs[len] = 0;
psinfo->pr_pid = task_pid_vnr(p);
- psinfo->pr_ppid = task_pid_vnr(p->parent);
+ psinfo->pr_ppid = task_pid_vnr(p->real_parent);
psinfo->pr_pgrp = task_pgrp_vnr(p);
psinfo->pr_sid = task_session_vnr(p);
* ioctls.
*/
+#include <linux/joystick.h>
+
#include <linux/types.h>
#include <linux/compat.h>
#include <linux/kernel.h>
COMPATIBLE_IOCTL(VIDEO_GET_SIZE)
COMPATIBLE_IOCTL(VIDEO_GET_FRAME_RATE)
+/* joystick */
+COMPATIBLE_IOCTL(JSIOCGVERSION)
+COMPATIBLE_IOCTL(JSIOCGAXES)
+COMPATIBLE_IOCTL(JSIOCGBUTTONS)
+COMPATIBLE_IOCTL(JSIOCGNAME(0))
+
/* now things that need handlers */
HANDLE_IOCTL(MEMREADOOB32, mtd_rw_oob)
HANDLE_IOCTL(MEMWRITEOOB32, mtd_rw_oob)
clear_bit(DQ_BLKS_B, &dquot->dq_flags);
}
+static int warning_issued(struct dquot *dquot, const int warntype)
+{
+ int flag = (warntype == QUOTA_NL_BHARDWARN ||
+ warntype == QUOTA_NL_BSOFTLONGWARN) ? DQ_BLKS_B :
+ ((warntype == QUOTA_NL_IHARDWARN ||
+ warntype == QUOTA_NL_ISOFTLONGWARN) ? DQ_INODES_B : 0);
+
+ if (!flag)
+ return 0;
+ return test_and_set_bit(flag, &dquot->dq_flags);
+}
+
#ifdef CONFIG_PRINT_QUOTA_WARNING
static int flag_print_warnings = 1;
}
/* Print warning to user which exceeded quota */
-static void print_warning(struct dquot *dquot, const char warntype)
+static void print_warning(struct dquot *dquot, const int warntype)
{
char *msg = NULL;
struct tty_struct *tty;
- int flag = (warntype == QUOTA_NL_BHARDWARN ||
- warntype == QUOTA_NL_BSOFTLONGWARN) ? DQ_BLKS_B :
- ((warntype == QUOTA_NL_IHARDWARN ||
- warntype == QUOTA_NL_ISOFTLONGWARN) ? DQ_INODES_B : 0);
- if (!need_print_warning(dquot) || (flag && test_and_set_bit(flag, &dquot->dq_flags)))
+ if (!need_print_warning(dquot))
return;
mutex_lock(&tty_mutex);
#ifdef CONFIG_QUOTA_NETLINK_INTERFACE
-/* Size of quota netlink message - actually an upperbound for buffer size */
-#define QUOTA_NL_MSG_SIZE 32
-
/* Netlink family structure for quota */
static struct genl_family quota_genl_family = {
.id = GENL_ID_GENERATE,
struct sk_buff *skb;
void *msg_head;
int ret;
+ int msg_size = 4 * nla_total_size(sizeof(u32)) +
+ 2 * nla_total_size(sizeof(u64));
/* We have to allocate using GFP_NOFS as we are called from a
* filesystem performing write and thus further recursion into
* the fs to free some data could cause deadlocks. */
- skb = genlmsg_new(QUOTA_NL_MSG_SIZE, GFP_NOFS);
+ skb = genlmsg_new(msg_size, GFP_NOFS);
if (!skb) {
printk(KERN_ERR
"VFS: Not enough memory to send quota warning.\n");
"VFS: Failed to send notification message: %d\n", ret);
return;
attr_err_out:
- printk(KERN_ERR "VFS: Failed to compose quota message: %d\n", ret);
+ printk(KERN_ERR "VFS: Not enough space to compose quota message!\n");
err_out:
kfree_skb(skb);
}
int i;
for (i = 0; i < MAXQUOTAS; i++)
- if (dquots[i] != NODQUOT && warntype[i] != QUOTA_NL_NOWARN) {
+ if (dquots[i] != NODQUOT && warntype[i] != QUOTA_NL_NOWARN &&
+ !warning_issued(dquots[i], warntype[i])) {
#ifdef CONFIG_PRINT_QUOTA_WARNING
print_warning(dquots[i], warntype[i]);
#endif
rc = ecryptfs_crypto_api_algify_cipher_name(&full_alg_name,
crypt_stat->cipher, "cbc");
if (rc)
- goto out;
+ goto out_unlock;
crypt_stat->tfm = crypto_alloc_blkcipher(full_alg_name, 0,
CRYPTO_ALG_ASYNC);
kfree(full_alg_name);
ecryptfs_printk(KERN_ERR, "cryptfs: init_crypt_ctx(): "
"Error initializing cipher [%s]\n",
crypt_stat->cipher);
- mutex_unlock(&crypt_stat->cs_tfm_mutex);
- goto out;
+ goto out_unlock;
}
crypto_blkcipher_set_flags(crypt_stat->tfm, CRYPTO_TFM_REQ_WEAK_KEY);
- mutex_unlock(&crypt_stat->cs_tfm_mutex);
rc = 0;
+out_unlock:
+ mutex_unlock(&crypt_stat->cs_tfm_mutex);
out:
return rc;
}
mutex_init(&tmp_tfm->key_tfm_mutex);
strncpy(tmp_tfm->cipher_name, cipher_name,
ECRYPTFS_MAX_CIPHER_NAME_SIZE);
+ tmp_tfm->cipher_name[ECRYPTFS_MAX_CIPHER_NAME_SIZE] = '\0';
tmp_tfm->key_size = key_size;
rc = ecryptfs_process_key_cipher(&tmp_tfm->key_tfm,
tmp_tfm->cipher_name,
rc = ecryptfs_create_underlying_file(lower_dir_dentry->d_inode,
ecryptfs_dentry, mode, nd);
if (rc) {
- struct inode *ecryptfs_inode = ecryptfs_dentry->d_inode;
- struct ecryptfs_inode_info *inode_info =
- ecryptfs_inode_to_private(ecryptfs_inode);
-
- printk(KERN_WARNING "%s: Error creating underlying file; "
- "rc = [%d]; checking for existing\n", __FUNCTION__, rc);
- if (inode_info) {
- mutex_lock(&inode_info->lower_file_mutex);
- if (!inode_info->lower_file) {
- mutex_unlock(&inode_info->lower_file_mutex);
- printk(KERN_ERR "%s: Failure to set underlying "
- "file; rc = [%d]\n", __FUNCTION__, rc);
- goto out_lock;
- }
- mutex_unlock(&inode_info->lower_file_mutex);
- }
+ printk(KERN_ERR "%s: Failure to create dentry in lower fs; "
+ "rc = [%d]\n", __FUNCTION__, rc);
+ goto out_lock;
}
rc = ecryptfs_interpose(lower_dentry, ecryptfs_dentry,
directory_inode->i_sb, 0);
dentry->d_inode->i_nlink =
ecryptfs_inode_to_lower(dentry->d_inode)->i_nlink;
dentry->d_inode->i_ctime = dir->i_ctime;
+ d_drop(dentry);
out_unlock:
unlock_parent(lower_dentry);
return rc;
inode_info->lower_file = dentry_open(lower_dentry,
lower_mnt,
(O_RDWR | O_LARGEFILE));
- if (IS_ERR(inode_info->lower_file))
+ if (IS_ERR(inode_info->lower_file)) {
+ dget(lower_dentry);
+ mntget(lower_mnt);
inode_info->lower_file = dentry_open(lower_dentry,
lower_mnt,
(O_RDONLY
| O_LARGEFILE));
+ }
if (IS_ERR(inode_info->lower_file)) {
printk(KERN_ERR "Error opening lower persistent file "
"for lower_dentry [0x%p] and lower_mnt [0x%p]\n",
if (!ecryptfs_daemon_id_hash) {
rc = -ENOMEM;
ecryptfs_printk(KERN_ERR, "Failed to allocate memory\n");
+ mutex_unlock(&ecryptfs_daemon_id_hash_mux);
goto out;
}
for (i = 0; i < ecryptfs_hash_buckets; i++)
fput(inode_info->lower_file);
inode_info->lower_file = NULL;
d_drop(lower_dentry);
- d_delete(lower_dentry);
}
}
mutex_unlock(&inode_info->lower_file_mutex);
EXPORT_SYMBOL_GPL(fat_free_clusters);
+/* 128kb is big enough to hold the whole FAT for FAT12 and FAT16 */
+#define FAT_READA_SIZE (128 * 1024)
+
+static void fat_ent_reada(struct super_block *sb, struct fat_entry *fatent,
+ unsigned long reada_blocks)
+{
+ struct fatent_operations *ops = MSDOS_SB(sb)->fatent_ops;
+ sector_t blocknr;
+ int i, offset;
+
+ ops->ent_blocknr(sb, fatent->entry, &offset, &blocknr);
+
+ for (i = 0; i < reada_blocks; i++)
+ sb_breadahead(sb, blocknr + i);
+}
+
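With the common 512-byte FAT block size this works out to reada_blocks = (128 * 1024) / 512 = 256 and reada_mask = 255, so the scan below issues one readahead burst of up to 256 blocks for every 256 FAT blocks it walks, clamped to the blocks remaining in the FAT.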
int fat_count_free_clusters(struct super_block *sb)
{
struct msdos_sb_info *sbi = MSDOS_SB(sb);
struct fatent_operations *ops = sbi->fatent_ops;
struct fat_entry fatent;
+ unsigned long reada_blocks, reada_mask, cur_block;
int err = 0, free;
lock_fat(sbi);
if (sbi->free_clusters != -1)
goto out;
+ reada_blocks = FAT_READA_SIZE >> sb->s_blocksize_bits;
+ reada_mask = reada_blocks - 1;
+ cur_block = 0;
+
free = 0;
fatent_init(&fatent);
fatent_set_entry(&fatent, FAT_START_ENT);
while (fatent.entry < sbi->max_cluster) {
+ /* readahead of fat blocks */
+ if ((cur_block & reada_mask) == 0) {
+ unsigned long rest = sbi->fat_length - cur_block;
+ fat_ent_reada(sb, &fatent, min(reada_blocks, rest));
+ }
+ cur_block++;
+
err = fat_ent_read_block(sb, &fatent);
if (err)
goto out;
if (wbc->nr_to_write <= 0)
break;
}
- if (!list_empty(&sb->s_more_io))
- wbc->more_io = 1;
return; /* Leave any unwritten inodes on s_io */
}
rec = (e + b) / 2;
len = hfs_brec_lenoff(bnode, rec, &off);
keylen = hfs_brec_keylen(bnode, rec);
+ if (keylen == HFS_BAD_KEYLEN) {
+ res = -EINVAL;
+ goto done;
+ }
hfs_bnode_read(bnode, fd->key, off, keylen);
cmpval = bnode->tree->keycmp(fd->key, fd->search_key);
if (!cmpval) {
if (rec != e && e >= 0) {
len = hfs_brec_lenoff(bnode, e, &off);
keylen = hfs_brec_keylen(bnode, e);
+ if (keylen == HFS_BAD_KEYLEN) {
+ res = -EINVAL;
+ goto done;
+ }
hfs_bnode_read(bnode, fd->key, off, keylen);
}
done:
len = hfs_brec_lenoff(bnode, fd->record, &off);
keylen = hfs_brec_keylen(bnode, fd->record);
+ if (keylen == HFS_BAD_KEYLEN) {
+ res = -EINVAL;
+ goto out;
+ }
fd->keyoffset = off;
fd->keylength = keylen;
fd->entryoffset = off + keylen;
recoff = hfs_bnode_read_u16(node, node->tree->node_size - (rec + 1) * 2);
if (!recoff)
return 0;
- if (node->tree->attributes & HFS_TREE_BIGKEYS)
+ if (node->tree->attributes & HFS_TREE_BIGKEYS) {
retval = hfs_bnode_read_u16(node, recoff) + 2;
- else
+ if (retval > node->tree->max_key_len + 2) {
+ printk(KERN_ERR "hfs: keylen %d too large\n",
+ retval);
+ retval = HFS_BAD_KEYLEN;
+ }
+ } else {
retval = (hfs_bnode_read_u8(node, recoff) | 1) + 1;
+ if (retval > node->tree->max_key_len + 1) {
+ printk(KERN_ERR "hfs: keylen %d too large\n",
+ retval);
+ retval = HFS_BAD_KEYLEN;
+ }
+ }
}
return retval;
}
mapping = tree->inode->i_mapping;
page = read_mapping_page(mapping, 0, NULL);
if (IS_ERR(page))
- goto free_tree;
+ goto free_inode;
/* Load the header */
head = (struct hfs_btree_header_rec *)(kmap(page) + sizeof(struct hfs_bnode_desc));
goto fail_page;
if (!tree->node_count)
goto fail_page;
+ if ((id == HFS_EXT_CNID) && (tree->max_key_len != HFS_MAX_EXT_KEYLEN)) {
+ printk(KERN_ERR "hfs: invalid extent max_key_len %d\n",
+ tree->max_key_len);
+ goto fail_page;
+ }
+ if ((id == HFS_CAT_CNID) && (tree->max_key_len != HFS_MAX_CAT_KEYLEN)) {
+ printk(KERN_ERR "hfs: invalid catalog max_key_len %d\n",
+ tree->max_key_len);
+ goto fail_page;
+ }
+
tree->node_size_shift = ffs(size) - 1;
tree->pages_per_bnode = (tree->node_size + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
page_cache_release(page);
return tree;
- fail_page:
- tree->inode->i_mapping->a_ops = &hfs_aops;
+fail_page:
page_cache_release(page);
- free_tree:
+free_inode:
+ tree->inode->i_mapping->a_ops = &hfs_aops;
iput(tree->inode);
+free_tree:
kfree(tree);
return NULL;
}
#define HFS_MAX_NAMELEN 128
#define HFS_MAX_VALENCE 32767U
+#define HFS_BAD_KEYLEN 0xFF
+
/* Meanings of the drAtrb field of the MDB,
* Reference: _Inside Macintosh: Files_ p. 2-61
*/
struct hfs_ext_key ext;
} hfs_btree_key;
+#define HFS_MAX_CAT_KEYLEN (sizeof(struct hfs_cat_key) - sizeof(u8))
+#define HFS_MAX_EXT_KEYLEN (sizeof(struct hfs_ext_key) - sizeof(u8))
+
typedef union hfs_btree_key btree_key;
struct hfs_extent {
jbd_free_handle(handle);
current->journal_info = NULL;
handle = ERR_PTR(err);
+ goto out;
}
lock_acquire(&handle->h_lockdep_map, 0, 0, 0, 2, _THIS_IP_);
+out:
return handle;
}
if (S_ISLNK(inode->i_mode))
return -ELOOP;
- if (S_ISDIR(inode->i_mode) && (flag & FMODE_WRITE))
+ if (S_ISDIR(inode->i_mode) && (acc_mode & MAY_WRITE))
return -EISDIR;
/*
return -EACCES;
flag &= ~O_TRUNC;
- } else if (IS_RDONLY(inode) && (flag & FMODE_WRITE))
+ } else if (IS_RDONLY(inode) && (acc_mode & MAY_WRITE))
return -EROFS;
error = vfs_permission(nd, acc_mode);
#define NFS_LOCK_INITIALIZED 1
int ls_flags;
struct nfs_seqid_counter ls_seqid;
+ struct rpc_sequence ls_sequence;
struct nfs_unique_id ls_id;
nfs4_stateid ls_stateid;
atomic_t ls_count;
if (data->rpc_status == 0) {
memcpy(data->o_res.stateid.data, data->c_res.stateid.data,
sizeof(data->o_res.stateid.data));
+ nfs_confirm_seqid(&data->owner->so_seqid, 0);
renew_lease(data->o_res.server, data->timestamp);
data->rpc_done = 1;
}
- nfs_confirm_seqid(&data->owner->so_seqid, data->rpc_status);
nfs_increment_open_seqid(data->rpc_status, data->c_arg.seqid);
}
/* In case of error, no cleanup! */
if (!data->rpc_done)
goto out_free;
- nfs_confirm_seqid(&data->owner->so_seqid, 0);
state = nfs4_opendata_to_nfs4_state(data);
if (!IS_ERR(state))
nfs4_close_state(&data->path, state, data->o_arg.open_flags);
/* In case we need an open_confirm, no cleanup! */
if (data->o_res.rflags & NFS4_OPEN_RESULT_CONFIRM)
goto out_free;
- nfs_confirm_seqid(&data->owner->so_seqid, 0);
state = nfs4_opendata_to_nfs4_state(data);
if (!IS_ERR(state))
nfs4_close_state(&data->path, state, data->o_arg.open_flags);
p->arg.fh = NFS_FH(inode);
p->arg.fl = &p->fl;
+ if (!(lsp->ls_seqid.flags & NFS_SEQID_CONFIRMED)) {
+ p->arg.open_seqid = nfs_alloc_seqid(&lsp->ls_state->owner->so_seqid);
+ if (p->arg.open_seqid == NULL)
+ goto out_free;
+
+ }
p->arg.lock_seqid = nfs_alloc_seqid(&lsp->ls_seqid);
if (p->arg.lock_seqid == NULL)
goto out_free;
memcpy(&p->fl, fl, sizeof(p->fl));
return p;
out_free:
+ if (p->arg.open_seqid != NULL)
+ nfs_free_seqid(p->arg.open_seqid);
kfree(p);
return NULL;
}
.rpc_cred = sp->so_cred,
};
- if (nfs_wait_on_sequence(data->arg.lock_seqid, task) != 0)
- return;
dprintk("%s: begin!\n", __FUNCTION__);
/* Do we need to do an open_to_lock_owner? */
if (!(data->arg.lock_seqid->sequence->flags & NFS_SEQID_CONFIRMED)) {
- data->arg.open_seqid = nfs_alloc_seqid(&sp->so_seqid);
- if (data->arg.open_seqid == NULL) {
- data->rpc_status = -ENOMEM;
- task->tk_action = NULL;
- goto out;
- }
+ if (nfs_wait_on_sequence(data->arg.open_seqid, task) != 0)
+ return;
data->arg.open_stateid = &state->stateid;
data->arg.new_lock_owner = 1;
+ /* Retest in case we raced... */
+ if (!(data->arg.lock_seqid->sequence->flags & NFS_SEQID_CONFIRMED))
+ goto do_rpc;
}
+ if (nfs_wait_on_sequence(data->arg.lock_seqid, task) != 0)
+ return;
+ data->arg.new_lock_owner = 0;
+do_rpc:
data->timestamp = jiffies;
rpc_call_setup(task, &msg, 0);
-out:
dprintk("%s: done!, ret = %d\n", __FUNCTION__, data->rpc_status);
}
struct nfs4_lockdata *data = calldata;
dprintk("%s: begin!\n", __FUNCTION__);
- if (data->arg.open_seqid != NULL)
- nfs_free_seqid(data->arg.open_seqid);
if (data->cancelled != 0) {
struct rpc_task *task;
task = nfs4_do_unlck(&data->fl, data->ctx, data->lsp,
dprintk("%s: cancelling lock!\n", __FUNCTION__);
} else
nfs_free_seqid(data->arg.lock_seqid);
+ if (data->arg.open_seqid != NULL)
+ nfs_free_seqid(data->arg.open_seqid);
nfs4_put_lock_state(data->lsp);
put_nfs_open_context(data->ctx);
kfree(data);
void
nfs4_kill_renewd(struct nfs_client *clp)
{
- down_read(&clp->cl_sem);
cancel_delayed_work_sync(&clp->cl_renewd);
- up_read(&clp->cl_sem);
}
/*
lsp = kzalloc(sizeof(*lsp), GFP_KERNEL);
if (lsp == NULL)
return NULL;
- lsp->ls_seqid.sequence = &state->owner->so_sequence;
+ rpc_init_wait_queue(&lsp->ls_sequence.wait, "lock_seqid_waitqueue");
+ spin_lock_init(&lsp->ls_sequence.lock);
+ INIT_LIST_HEAD(&lsp->ls_sequence.list);
+ lsp->ls_seqid.sequence = &lsp->ls_sequence;
atomic_set(&lsp->ls_count, 1);
lsp->ls_owner = fl_owner;
spin_lock(&clp->cl_lock);
error = PTR_ERR(mntroot);
goto error_splat_super;
}
- if (mntroot->d_inode->i_op != server->nfs_client->rpc_ops->dir_inode_ops) {
+ if (mntroot->d_inode->i_op != NFS_SB(s)->nfs_client->rpc_ops->dir_inode_ops) {
dput(mntroot);
error = -ESTALE;
goto error_splat_super;
error = PTR_ERR(mntroot);
goto error_splat_super;
}
+ if (mntroot->d_inode->i_op != NFS_SB(s)->nfs_client->rpc_ops->dir_inode_ops) {
+ dput(mntroot);
+ error = -ESTALE;
+ goto error_splat_super;
+ }
s->s_flags |= MS_ACTIVE;
mnt->mnt_sb = s;
error = PTR_ERR(mntroot);
goto error_splat_super;
}
+ if (mntroot->d_inode->i_op != NFS_SB(s)->nfs_client->rpc_ops->dir_inode_ops) {
+ dput(mntroot);
+ error = -ESTALE;
+ goto error_splat_super;
+ }
s->s_flags |= MS_ACTIVE;
mnt->mnt_sb = s;
* Round the length of the data which was specified up to
* the next multiple of XDR units and then compare that
* against the length which was actually received.
+ * Note that when RPCSEC/GSS (for example) is used, the
+ * data buffer can be padded so dlen might be larger
+ * than required. It must never be smaller.
*/
- if (dlen != XDR_QUADLEN(len)*4)
+ if (dlen < XDR_QUADLEN(len)*4)
return 0;
if (args->count > max_blocksize) {
* Round the length of the data which was specified up to
* the next multiple of XDR units and then compare that
* against the length which was actually received.
+ * Note that when RPCSEC/GSS (for example) is used, the
+ * data buffer can be padded so dlen might be larger
+ * than required. It must never be smaller.
*/
- if (dlen != XDR_QUADLEN(len)*4)
+ if (dlen < XDR_QUADLEN(len)*4)
return 0;
rqstp->rq_vec[0].iov_base = (void*)p;
ppid = pid_alive(p) ?
task_tgid_nr_ns(rcu_dereference(p->real_parent), ns) : 0;
tpid = pid_alive(p) && p->ptrace ?
- task_ppid_nr_ns(rcu_dereference(p->parent), ns) : 0;
+ task_pid_nr_ns(rcu_dereference(p->parent), ns) : 0;
buffer += sprintf(buffer,
"State:\t%s\n"
"Tgid:\t%d\n"
}
sid = task_session_nr_ns(task, ns);
+ ppid = task_tgid_nr_ns(task->real_parent, ns);
pgid = task_pgrp_nr_ns(task, ns);
- ppid = task_ppid_nr_ns(task, ns);
unlock_task_sighand(task, &flags);
}
(task->state == TASK_STOPPED || task->state == TASK_TRACED) && \
security_ptrace(current,task) == 0))
+struct mm_struct *mm_for_maps(struct task_struct *task)
+{
+ struct mm_struct *mm = get_task_mm(task);
+ if (!mm)
+ return NULL;
+ down_read(&mm->mmap_sem);
+ task_lock(task);
+ if (task->mm != mm)
+ goto out;
+ if (task->mm != current->mm && __ptrace_may_attach(task) < 0)
+ goto out;
+ task_unlock(task);
+ return mm;
+out:
+ task_unlock(task);
+ up_read(&mm->mmap_sem);
+ mmput(mm);
+ return NULL;
+}
+
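mm_for_maps() returns with mmap_sem held for read, which is what lets the /proc readers below walk the VMA list safely; every successful call must therefore be paired with up_read() and mmput(). A hypothetical caller sketch:

	struct mm_struct *mm = mm_for_maps(task);

	if (mm) {
		/* ... walk mm->mmap under the read lock ... */
		up_read(&mm->mmap_sem);
		mmput(mm);
	}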
static int proc_pid_cmdline(struct task_struct *task, char * buffer)
{
int res = 0;
unsigned long largest_chunk;
};
+extern struct mm_struct *mm_for_maps(struct task_struct *);
+
#ifdef CONFIG_MMU
#define VMALLOC_TOTAL (VMALLOC_END - VMALLOC_START)
extern void get_vmalloc_info(struct vmalloc_info *vmi);
};
#endif
-#ifdef CONFIG_SLAB
+#ifdef CONFIG_SLABINFO
static int slabinfo_open(struct inode *inode, struct file *file)
{
return seq_open(file, &slabinfo_op);
#endif
create_seq_entry("stat", 0, &proc_stat_operations);
create_seq_entry("interrupts", 0, &proc_interrupts_operations);
-#ifdef CONFIG_SLAB
+#ifdef CONFIG_SLABINFO
create_seq_entry("slabinfo",S_IWUSR|S_IRUGO,&proc_slabinfo_operations);
#ifdef CONFIG_DEBUG_SLAB_LEAK
create_seq_entry("slab_allocators", 0 ,&proc_slabstats_operations);
if (!priv->task)
return NULL;
- mm = get_task_mm(priv->task);
+ mm = mm_for_maps(priv->task);
if (!mm)
return NULL;
priv->tail_vma = tail_vma = get_gate_vma(priv->task);
- down_read(&mm->mmap_sem);
/* Start with last addr hint */
if (last_addr && (vma = find_vma(mm, last_addr))) {
if (!priv->task)
return NULL;
- mm = get_task_mm(priv->task);
+ mm = mm_for_maps(priv->task);
if (!mm) {
put_task_struct(priv->task);
priv->task = NULL;
return NULL;
}
- down_read(&mm->mmap_sem);
-
/* start from the Nth VMA */
for (vml = mm->context.vmlist; vml; vml = vml->next)
if (n-- == 0)
sd = sysfs_find_dirent(parent_sd, dentry->d_name.name);
/* no such entry */
- if (!sd)
+ if (!sd) {
+ ret = ERR_PTR(-ENOENT);
goto out_unlock;
+ }
/* attach dentry and inode */
inode = sysfs_get_inode(sd);
old_dentry = sysfs_get_dentry(sd);
if (IS_ERR(old_dentry)) {
error = PTR_ERR(old_dentry);
+ old_dentry = NULL;
goto out;
}
old_dentry = sysfs_get_dentry(sd);
if (IS_ERR(old_dentry)) {
error = PTR_ERR(old_dentry);
+ old_dentry = NULL;
goto out;
}
old_parent = old_dentry->d_parent;
new_parent = sysfs_get_dentry(new_parent_sd);
if (IS_ERR(new_parent)) {
error = PTR_ERR(new_parent);
+ new_parent = NULL;
goto out;
}
error = 0;
d_add(new_dentry, NULL);
d_move(old_dentry, new_dentry);
- dput(new_dentry);
/* Remove from old parent's list and insert into new parent's list. */
sysfs_unlink_sibling(sd);
#else
struct hack_dirent {
- int namlen;
- loff_t offset;
u64 ino;
+ loff_t offset;
+ int namlen;
unsigned int d_type;
char name[];
};
{
struct hack_callback *buf = __buf;
struct hack_dirent *de = (struct hack_dirent *)(buf->dirent + buf->used);
+ unsigned int reclen;
- if (buf->used + sizeof(struct hack_dirent) + namlen > buf->len)
+ reclen = ALIGN(sizeof(struct hack_dirent) + namlen, sizeof(u64));
+ if (buf->used + reclen > buf->len)
return -EINVAL;
de->namlen = namlen;
de->ino = ino;
de->d_type = d_type;
memcpy(de->name, name, namlen);
- buf->used += sizeof(struct hack_dirent) + namlen;
+ buf->used += reclen;
return 0;
}
offset = filp->f_pos;
while (!eof) {
- int reclen;
+ unsigned int reclen;
+
start_offset = offset;
buf.used = 0;
goto done;
}
- reclen = sizeof(struct hack_dirent) + de->namlen;
+ reclen = ALIGN(sizeof(struct hack_dirent) + de->namlen,
+ sizeof(u64));
size -= reclen;
de = (struct hack_dirent *)((char *)de + reclen);
curr_offset = de->offset /* & 0x7fffffff */;
#define cpu_is_pxa21x() \
({ \
- unsigned int id = read_cpuid(CPUID_ID); \
- __cpu_is_pxa21x(id); \
+ __cpu_is_pxa21x(read_cpuid_id()); \
})
#define cpu_is_pxa25x() \
({ \
- unsigned int id = read_cpuid(CPUID_ID); \
- __cpu_is_pxa25x(id); \
+ __cpu_is_pxa25x(read_cpuid_id()); \
})
#define cpu_is_pxa27x() \
({ \
- unsigned int id = read_cpuid(CPUID_ID); \
- __cpu_is_pxa27x(id); \
+ __cpu_is_pxa27x(read_cpuid_id()); \
})
#define cpu_is_pxa300() \
({ \
- unsigned int id = read_cpuid(CPUID_ID); \
- __cpu_is_pxa300(id); \
+ __cpu_is_pxa300(read_cpuid_id()); \
})
#define cpu_is_pxa310() \
({ \
- unsigned int id = read_cpuid(CPUID_ID); \
- __cpu_is_pxa310(id); \
+ __cpu_is_pxa310(read_cpuid_id()); \
})
#define cpu_is_pxa320() \
({ \
- unsigned int id = read_cpuid(CPUID_ID); \
- __cpu_is_pxa320(id); \
+ __cpu_is_pxa320(read_cpuid_id()); \
})
/*
#define cpu_is_pxa2xx() \
({ \
- unsigned int id = read_cpuid(CPUID_ID); \
- __cpu_is_pxa2xx(id); \
+ __cpu_is_pxa2xx(read_cpuid_id()); \
})
#define cpu_is_pxa3xx() \
({ \
- unsigned int id = read_cpuid(CPUID_ID); \
- __cpu_is_pxa3xx(id); \
+ __cpu_is_pxa3xx(read_cpuid_id()); \
})
/*
#ifndef __ASSEMBLY__
#include <linux/linkage.h>
+#include <linux/stringify.h>
#include <linux/irqflags.h>
+/*
+ * The CPU ID never changes at run time, so we might as well tell the
+ * compiler that it's constant. Use this function to read the CPU ID,
+ * rather than reading processor_id or calling read_cpuid() directly.
+ */
+static inline unsigned int read_cpuid_id(void) __attribute_const__;
+
+static inline unsigned int read_cpuid_id(void)
+{
+ return read_cpuid(CPUID_ID);
+}
+
#define __exception __attribute__((section(".exception.text")))
struct thread_info;
#ifdef __KERNEL__
#include <asm/arch/page.h>
+#include <linux/const.h>
/* PAGE_SHIFT determines the page size */
#define PAGE_SHIFT 13
-#ifndef __ASSEMBLY__
-#define PAGE_SIZE (1UL << PAGE_SHIFT)
-#else
-#define PAGE_SIZE (1 << PAGE_SHIFT)
-#endif
+#define PAGE_SIZE (_AC(1, UL) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE-1))
#define clear_page(page) memset((void *)(page), 0, PAGE_SIZE)
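The _AC() macro from <linux/const.h> exists for exactly this dual use: in C it pastes the type suffix onto the literal, in assembly it drops it. Simplified from the real header:

	#ifdef __ASSEMBLY__
	#define _AC(X, Y)	X		/* asm sees plain 1 */
	#else
	#define __AC(X, Y)	(X##Y)
	#define _AC(X, Y)	__AC(X, Y)	/* C sees 1UL */
	#endif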
#define __ARCH_WANT_SYS_SIGPENDING
#define __ARCH_WANT_SYS_SIGPROCMASK
#define __ARCH_WANT_SYS_RT_SIGACTION
+#define __ARCH_WANT_SYS_RT_SIGSUSPEND
/*
* "Conditional" syscalls
static inline void
tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end)
{
-#ifdef CONFIG_QUICKLIST
- tlb->need_flush += &__get_cpu_var(quicklist)[0].nr_pages != 0;
-#endif
tlb_flush_mmu(tlb, start, end);
/* keep the page table cache within bounds */
static inline enum xpc_retval
xpc_map_bte_errors(bte_result_t error)
{
+ if (error == BTE_SUCCESS)
+ return xpcSuccess;
+
if (is_shub2()) {
if (BTE_VALID_SH2_ERROR(error))
return xpcBteSh2Start + error;
- else
- return xpcBteUnmappedError;
+ return xpcBteUnmappedError;
}
switch (error) {
case BTE_SUCCESS: return xpcSuccess;
#define Page_Invalidate_T 0x16
/*
- * R1000-specific cacheops
+ * R10000-specific cacheops
*
* Cacheops 0x02, 0x06, 0x0a, 0x0c-0x0e, 0x16, 0x1a and 0x1e are unused.
* Most of the _S cacheops are identical to the R4000SC _SD cacheops.
static inline void smtc_ipi_nq(struct smtc_ipi_q *q, struct smtc_ipi *p)
{
- long flags;
+ unsigned long flags;
spin_lock_irqsave(&q->lock, flags);
if (q->head == NULL)
static inline void smtc_ipi_req(struct smtc_ipi_q *q, struct smtc_ipi *p)
{
- long flags;
+ unsigned long flags;
spin_lock_irqsave(&q->lock, flags);
if (q->head == NULL) {
static inline int smtc_ipi_qdepth(struct smtc_ipi_q *q)
{
- long flags;
+ unsigned long flags;
int retval;
spin_lock_irqsave(&q->lock, flags);
__u32 __user *ustatus);
int (*coredump_extra_notes_size)(void);
int (*coredump_extra_notes_write)(struct file *file, loff_t *foffset);
+ void (*notify_spus_active)(void);
struct module *owner;
};
int spu_switch_event_register(struct notifier_block * n);
int spu_switch_event_unregister(struct notifier_block * n);
+extern void notify_spus_active(void);
+extern void do_notify_spus_active(void);
+
/*
* This defines the Local Store, Problem Area and Privilege Area of an SPU.
*/
extern void __flush_invalidate_region(void *start, int size);
#endif
+#define ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE
+static inline void flush_kernel_dcache_page(struct page *page)
+{
+ flush_dcache_page(page);
+}
+
#if defined(CONFIG_CPU_SH4) && !defined(CONFIG_CACHE_OFF)
extern void copy_to_user_page(struct vm_area_struct *vma,
struct page *page, unsigned long vaddr, void *dst, const void *src,
/*
* __access_ok: Check if address with size is OK or not.
*
- * We do three checks:
- * (1) is it user space?
- * (2) addr + size --> carry?
- * (3) addr + size >= 0x80000000 (PAGE_OFFSET)
+ * Uhhuh, this needs 33-bit arithmetic. We have a carry..
*
- * (1) (2) (3) | RESULT
- * 0 0 0 | ok
- * 0 0 1 | ok
- * 0 1 0 | bad
- * 0 1 1 | bad
- * 1 0 0 | ok
- * 1 0 1 | bad
- * 1 1 0 | bad
- * 1 1 1 | bad
+ * sum := addr + size; carry? --> flag = true;
+ * if (sum >= addr_limit) flag = true;
*/
static inline int __access_ok(unsigned long addr, unsigned long size)
{
- unsigned long flag, tmp;
-
- __asm__("stc r7_bank, %0\n\t"
- "mov.l @(8,%0), %0\n\t"
- "clrt\n\t"
- "addc %2, %1\n\t"
- "and %1, %0\n\t"
- "rotcl %0\n\t"
- "rotcl %0\n\t"
- "and #3, %0"
- : "=&z" (flag), "=r" (tmp)
- : "r" (addr), "1" (size)
- : "t");
-
+ unsigned long flag, sum;
+
+ __asm__("clrt\n\t"
+ "addc %3, %1\n\t"
+ "movt %0\n\t"
+ "cmp/hi %4, %1\n\t"
+ "rotcl %0"
+ :"=&r" (flag), "=r" (sum)
+ :"1" (addr), "r" (size),
+ "r" (current_thread_info()->addr_limit.seg)
+ :"t");
return flag == 0;
+
}
#endif /* CONFIG_MMU */
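For readers who do not speak SH assembly, the rewritten check is equivalent to the following portable sketch; access_ok_sketch() and its addr_limit parameter are illustrative stand-ins, not kernel code:

#include <stdio.h>

/* The 33-bit range check: the access is OK only if addr + size
 * neither wraps (carry out of the add) nor exceeds the limit. */
static int access_ok_sketch(unsigned long addr, unsigned long size,
			    unsigned long addr_limit)
{
	unsigned long sum = addr + size;

	/* sum < addr can only happen if the addition carried */
	return !(sum < addr || sum > addr_limit);
}

int main(void)
{
	unsigned long limit = 0x80000000UL;

	printf("%d\n", access_ok_sketch(0x1000, 0x100, limit));       /* 1 */
	printf("%d\n", access_ok_sketch(0x7fffff00UL, 0x200, limit)); /* 0 */
	printf("%d\n", access_ok_sketch(~0UL, 2, limit));             /* 0 */
	return 0;
}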
void (*sync_single_for_cpu)(struct device *dev,
dma_addr_t dma_handle, size_t size,
enum dma_data_direction direction);
- void (*sync_single_for_device)(struct device *dev,
- dma_addr_t dma_handle, size_t size,
- enum dma_data_direction direction);
void (*sync_sg_for_cpu)(struct device *dev, struct scatterlist *sg,
int nelems,
enum dma_data_direction direction);
- void (*sync_sg_for_device)(struct device *dev, struct scatterlist *sg,
- int nelems,
- enum dma_data_direction direction);
};
extern const struct dma_ops *dma_ops;
size_t size,
enum dma_data_direction direction)
{
- dma_ops->sync_single_for_device(dev, dma_handle, size, direction);
+ /* No flushing needed to sync cpu writes to the device. */
}
static inline void dma_sync_single_range_for_cpu(struct device *dev,
size_t size,
enum dma_data_direction direction)
{
- dma_sync_single_for_device(dev, dma_handle+offset, size, direction);
+ /* No flushing needed to sync cpu writes to the device. */
}
struct scatterlist *sg, int nelems,
enum dma_data_direction direction)
{
- dma_ops->sync_sg_for_device(dev, sg, nelems, direction);
+ /* No flushing needed to sync cpu writes to the device. */
}
static inline int dma_mapping_error(dma_addr_t dma_addr)
struct device_node;
extern struct device_node *pci_device_to_OF_node(struct pci_dev *pdev);
+#define HAVE_ARCH_PCI_RESOURCE_TO_USER
+extern void pci_resource_to_user(const struct pci_dev *dev, int bar,
+ const struct resource *rsrc,
+ resource_size_t *start, resource_size_t *end);
#endif /* __KERNEL__ */
#endif /* __SPARC64_PCI_H */
} v;
v.u = val;
#ifdef CONFIG_X86_BSWAP
- asm("bswapl %0 ; bswapl %1 ; xchgl %0,%1"
+ __asm__("bswapl %0 ; bswapl %1 ; xchgl %0,%1"
: "=r" (v.s.a), "=r" (v.s.b)
: "0" (v.s.a), "1" (v.s.b));
#else
v.s.a = ___arch__swab32(v.s.a);
v.s.b = ___arch__swab32(v.s.b);
- asm("xchgl %0,%1" : "=r" (v.s.a), "=r" (v.s.b) : "0" (v.s.a), "1" (v.s.b));
+ __asm__("xchgl %0,%1" : "=r" (v.s.a), "=r" (v.s.b) : "0" (v.s.a), "1" (v.s.b));
#endif
return v.u;
}
#include <asm/msr-index.h>
+#ifndef __ASSEMBLY__
+# include <linux/types.h>
+#endif
+
#ifdef __i386__
#ifdef __KERNEL__
#define wrmsrl(msr,val) wrmsr(msr,(__u32)((__u64)(val)),((__u64)(val))>>32)
-/* wrmsr with exception handling */
-#define wrmsr_safe(msr,a,b) ({ int ret__; \
- asm volatile("2: wrmsr ; xorl %0,%0\n" \
- "1:\n\t" \
- ".section .fixup,\"ax\"\n\t" \
- "3: movl %4,%0 ; jmp 1b\n\t" \
- ".previous\n\t" \
- ".section __ex_table,\"a\"\n" \
- " .align 8\n\t" \
- " .quad 2b,3b\n\t" \
- ".previous" \
- : "=a" (ret__) \
- : "c" (msr), "0" (a), "d" (b), "i" (-EFAULT)); \
- ret__; })
-
-#define checking_wrmsrl(msr,val) wrmsr_safe(msr,(u32)(val),(u32)((val)>>32))
-
-#define rdmsr_safe(msr,a,b) \
- ({ int ret__; \
- asm volatile ("1: rdmsr\n" \
- "2:\n" \
- ".section .fixup,\"ax\"\n" \
- "3: movl %4,%0\n" \
- " jmp 2b\n" \
- ".previous\n" \
- ".section __ex_table,\"a\"\n" \
- " .align 8\n" \
- " .quad 1b,3b\n" \
- ".previous":"=&bDS" (ret__), "=a"(*(a)), "=d"(*(b)) \
- :"c"(msr), "i"(-EIO), "0"(0)); \
- ret__; })
-
#define rdtsc(low,high) \
__asm__ __volatile__("rdtsc" : "=a" (low), "=d" (high))
__asm__ __volatile__ ("rdtsc" : "=a" (low) : : "edx")
#define rdtscp(low,high,aux) \
- asm volatile (".byte 0x0f,0x01,0xf9" : "=a" (low), "=d" (high), "=c" (aux))
+ __asm__ __volatile__ (".byte 0x0f,0x01,0xf9" : "=a" (low), "=d" (high), "=c" (aux))
#define rdtscll(val) do { \
unsigned int __a,__d; \
- asm volatile("rdtsc" : "=a" (__a), "=d" (__d)); \
+ __asm__ __volatile__("rdtsc" : "=a" (__a), "=d" (__d)); \
(val) = ((unsigned long)__a) | (((unsigned long)__d)<<32); \
} while(0)
#define rdtscpll(val, aux) do { \
unsigned long __a, __d; \
- asm volatile (".byte 0x0f,0x01,0xf9" : "=a" (__a), "=d" (__d), "=c" (aux)); \
+ __asm__ __volatile__ (".byte 0x0f,0x01,0xf9" : "=a" (__a), "=d" (__d), "=c" (aux)); \
(val) = (__d << 32) | __a; \
} while (0)
: "=a" (low), "=d" (high) \
: "c" (counter))
+
static inline void cpuid(int op, unsigned int *eax, unsigned int *ebx,
unsigned int *ecx, unsigned int *edx)
{
return edx;
}
+#ifdef __KERNEL__
+
+/* wrmsr with exception handling */
+#define wrmsr_safe(msr,a,b) ({ int ret__; \
+ asm volatile("2: wrmsr ; xorl %0,%0\n" \
+ "1:\n\t" \
+ ".section .fixup,\"ax\"\n\t" \
+ "3: movl %4,%0 ; jmp 1b\n\t" \
+ ".previous\n\t" \
+ ".section __ex_table,\"a\"\n" \
+ " .align 8\n\t" \
+ " .quad 2b,3b\n\t" \
+ ".previous" \
+ : "=a" (ret__) \
+ : "c" (msr), "0" (a), "d" (b), "i" (-EFAULT)); \
+ ret__; })
+
+#define checking_wrmsrl(msr,val) wrmsr_safe(msr,(u32)(val),(u32)((val)>>32))
+
+#define rdmsr_safe(msr,a,b) \
+ ({ int ret__; \
+ asm volatile ("1: rdmsr\n" \
+ "2:\n" \
+ ".section .fixup,\"ax\"\n" \
+ "3: movl %4,%0\n" \
+ " jmp 2b\n" \
+ ".previous\n" \
+ ".section __ex_table,\"a\"\n" \
+ " .align 8\n" \
+ " .quad 1b,3b\n" \
+ ".previous":"=&bDS" (ret__), "=a"(*(a)), "=d"(*(b)) \
+ :"c"(msr), "i"(-EIO), "0"(0)); \
+ ret__; })
+
#ifdef CONFIG_SMP
void rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h);
void wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h);
return wrmsr_safe(msr_no, l, h);
}
#endif /* CONFIG_SMP */
+#endif /* __KERNEL__ */
#endif /* __ASSEMBLY__ */
#endif /* !__i386__ */
header-y += ultrasound.h
header-y += un.h
header-y += utime.h
+header-y += veth.h
header-y += video_decoder.h
header-y += video_encoder.h
header-y += videotext.h
static inline int ata_drive_40wire_relaxed(const u16 *dev_id)
{
- if (ata_id_is_sata(dev_id))
- return 0; /* SATA */
if ((dev_id[93] & 0x2000) == 0x2000)
return 0; /* 80 wire */
return 1;
#define register_hotcpu_notifier(nb) register_cpu_notifier(nb)
#define unregister_hotcpu_notifier(nb) unregister_cpu_notifier(nb)
int cpu_down(unsigned int cpu);
-#define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
#else /* CONFIG_HOTPLUG_CPU */
/* These aren't inline functions due to a GCC bug. */
#define register_hotcpu_notifier(nb) ({ (void)(nb); 0; })
#define unregister_hotcpu_notifier(nb) ({ (void)(nb); })
-
-/* CPUs don't go offline once they're online w/o CONFIG_HOTPLUG_CPU */
-static inline int cpu_is_offline(int cpu) { return 0; }
#endif /* CONFIG_HOTPLUG_CPU */
#ifdef CONFIG_PM_SLEEP_SMP
#define cpu_present(cpu) ((cpu) == 0)
#endif
+#define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
+
#ifdef CONFIG_SMP
extern int nr_cpu_ids;
#define any_online_cpu(mask) __any_online_cpu(&(mask))
#ifndef LINUX_I2C_ID_H
#define LINUX_I2C_ID_H
+/* Please note that I2C driver IDs are optional. They are only needed if a
+ legacy chip driver needs to identify a bus or a bus driver needs to
+ identify a legacy client. If you don't need them, just don't set them. */
+
/*
* ---- Driver types -----------------------------------------------------
*/
#define key_get(k) ({ NULL; })
#define key_put(k) do { } while(0)
#define key_ref_put(k) do { } while(0)
-#define make_key_ref(k) ({ NULL; })
+#define make_key_ref(k, p) ({ NULL; })
#define key_ref_to_ptr(k) ({ NULL; })
#define is_key_possessed(k) 0
#define alloc_uid_keyring(u,c) 0
enum
{
NAPI_STATE_SCHED, /* Poll is scheduled */
+ NAPI_STATE_DISABLE, /* Disable pending */
};
extern void FASTCALL(__napi_schedule(struct napi_struct *n));
+static inline int napi_disable_pending(struct napi_struct *n)
+{
+ return test_bit(NAPI_STATE_DISABLE, &n->state);
+}
+
/**
* napi_schedule_prep - check if napi can be scheduled
* @n: napi context
*
* Test if NAPI routine is already running, and if not mark
* it as running. This is used as a condition variable
- * insure only one NAPI poll instance runs
+ * to ensure only one NAPI poll instance runs. We also make
+ * sure there is no pending NAPI disable.
*/
static inline int napi_schedule_prep(struct napi_struct *n)
{
- return !test_and_set_bit(NAPI_STATE_SCHED, &n->state);
+ return !napi_disable_pending(n) &&
+ !test_and_set_bit(NAPI_STATE_SCHED, &n->state);
}
/**
*/
static inline void napi_disable(struct napi_struct *n)
{
+ set_bit(NAPI_STATE_DISABLE, &n->state);
while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
msleep(1);
+ clear_bit(NAPI_STATE_DISABLE, &n->state);
}
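The point of the new bit is ordering: napi_disable() publishes its intent before spinning on SCHED, so a concurrent napi_schedule_prep() backs off instead of re-arming polling indefinitely. A single-threaded sketch of the state machine, with plain bit twiddling standing in for the kernel's atomic ops:

#include <stdio.h>

#define NAPI_STATE_SCHED	0
#define NAPI_STATE_DISABLE	1

static unsigned long state;

static int test_bit_sketch(int nr) { return (state >> nr) & 1; }
static void set_bit_sketch(int nr) { state |= 1UL << nr; }

static int napi_schedule_prep_sketch(void)
{
	/* refuse to schedule once a disable is pending */
	if (test_bit_sketch(NAPI_STATE_DISABLE))
		return 0;
	if (test_bit_sketch(NAPI_STATE_SCHED))
		return 0;
	set_bit_sketch(NAPI_STATE_SCHED);
	return 1;
}

int main(void)
{
	printf("idle: prep=%d\n", napi_schedule_prep_sketch());      /* 1 */
	state = 0;
	set_bit_sketch(NAPI_STATE_DISABLE);   /* napi_disable() begins */
	printf("disabling: prep=%d\n", napi_schedule_prep_sketch()); /* 0 */
	return 0;
}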
/**
static inline int netif_rx_schedule_prep(struct net_device *dev,
struct napi_struct *napi)
{
- return netif_running(dev) && napi_schedule_prep(napi);
+ return napi_schedule_prep(napi);
}
/* Add interface to tail of rx poll list. This assumes that _prep has
static inline void __netif_rx_schedule(struct net_device *dev,
struct napi_struct *napi)
{
- dev_hold(dev);
__napi_schedule(napi);
}
struct napi_struct *napi)
{
__napi_complete(napi);
- dev_put(dev);
}
/* Remove interface from poll list: it must be in the poll list
#define PCI_DEVICE_ID_INTEL_ICH9_4 0x2914
#define PCI_DEVICE_ID_INTEL_ICH9_5 0x2919
#define PCI_DEVICE_ID_INTEL_ICH9_6 0x2930
+#define PCI_DEVICE_ID_INTEL_ICH9_7 0x2916
+#define PCI_DEVICE_ID_INTEL_ICH9_8 0x2918
#define PCI_DEVICE_ID_INTEL_82855PM_HB 0x3340
#define PCI_DEVICE_ID_INTEL_82830_HB 0x3575
#define PCI_DEVICE_ID_INTEL_82830_CGC 0x3577
device_set_wakeup_enable(dev,val); \
} while(0)
+/*
+ * Global Power Management flags
+ * Used to keep APM and ACPI from both being active
+ */
+extern unsigned int pm_flags;
+
+#define PM_APM 1
+#define PM_ACPI 2
+
#endif /* __KERNEL__ */
#endif /* _LINUX_PM_H */
#ifdef CONFIG_PM_LEGACY
-extern int pm_active;
-
-#define PM_IS_ACTIVE() (pm_active != 0)
-
/*
* Register a device with power management
*/
#else /* CONFIG_PM_LEGACY */
-#define PM_IS_ACTIVE() 0
-
static inline struct pm_dev *pm_register(pm_dev_t type,
unsigned long id,
pm_callback callback)
#include <linux/errno.h>
#include <linux/mod_devicetable.h>
-#define PNP_MAX_PORT 24
+#define PNP_MAX_PORT 40
#define PNP_MAX_MEM 12
#define PNP_MAX_IRQ 2
#define PNP_MAX_DMA 2
extern void __ptrace_unlink(struct task_struct *child);
extern void ptrace_untrace(struct task_struct *child);
extern int ptrace_may_attach(struct task_struct *task);
+extern int __ptrace_may_attach(struct task_struct *task);
static inline void ptrace_link(struct task_struct *child,
struct task_struct *new_parent)
struct page *page)
{
struct quicklist *q;
- int nid = page_to_nid(page);
-
- if (unlikely(nid != numa_node_id())) {
- if (dtor)
- dtor(p);
- __free_page(page);
- return;
- }
q = &get_cpu_var(quicklist)[nr];
*(void **)p = q->page;
/*
* offset and length are unused for chain entry. Clear them.
*/
- prv->offset = 0;
- prv->length = 0;
+ prv[prv_nents - 1].offset = 0;
+ prv[prv_nents - 1].length = 0;
/*
* Set lowest bit to indicate a link pointer, and make sure to clear
*
* set_task_vxid() : assigns a virtual id to a task;
*
- * task_ppid_nr_ns() : the parent's id as seen from the namespace specified.
- * the result depends on the namespace and whether the
- * task in question is the namespace's init. e.g. for the
- * namespace's init this will return 0 when called from
- * the namespace of this init, or appropriate id otherwise.
- *
- *
* see also pid_nr() etc in include/linux/pid.h
*/
}
-static inline pid_t task_ppid_nr_ns(struct task_struct *tsk,
- struct pid_namespace *ns)
-{
- return pid_nr_ns(task_pid(rcu_dereference(tsk->real_parent)), ns);
-}
-
/**
* pid_alive - check that a task structure is not stale
* @p: Task structure to be checked.
return kmalloc(size, flags | __GFP_ZERO);
}
+#ifdef CONFIG_SLABINFO
+extern const struct seq_operations slabinfo_op;
+ssize_t slabinfo_write(struct file *, const char __user *, size_t, loff_t *);
+#endif
+
#endif /* __KERNEL__ */
#endif /* _LINUX_SLAB_H */
#endif /* CONFIG_NUMA */
-extern const struct seq_operations slabinfo_op;
-ssize_t slabinfo_write(struct file *, const char __user *, size_t, loff_t *);
-
#endif /* _LINUX_SLAB_DEF_H */
header-y += tc_ipt.h
header-y += tc_mirred.h
header-y += tc_pedit.h
+header-y += tc_nat.h
extern void tty_termios_encode_baud_rate(struct ktermios *termios, speed_t ibaud, speed_t obaud);
extern void tty_encode_baud_rate(struct tty_struct *tty, speed_t ibaud, speed_t obaud);
extern void tty_termios_copy_hw(struct ktermios *new, struct ktermios *old);
+extern int tty_termios_hw_change(struct ktermios *a, struct ktermios *b);
extern struct tty_ldisc *tty_ldisc_ref(struct tty_struct *);
extern void tty_ldisc_deref(struct tty_ldisc *);
extern struct workqueue_struct *
__create_workqueue_key(const char *name, int singlethread,
- int freezeable, struct lock_class_key *key);
+ int freezeable, struct lock_class_key *key,
+ const char *lock_name);
#ifdef CONFIG_LOCKDEP
#define __create_workqueue(name, singlethread, freezeable) \
({ \
static struct lock_class_key __key; \
+ const char *__lock_name; \
+ \
+ if (__builtin_constant_p(name)) \
+ __lock_name = (name); \
+ else \
+ __lock_name = #name; \
\
__create_workqueue_key((name), (singlethread), \
- (freezeable), &__key); \
+ (freezeable), &__key, \
+ __lock_name); \
})
#else
#define __create_workqueue(name, singlethread, freezeable) \
- __create_workqueue_key((name), (singlethread), (freezeable), NULL)
+ __create_workqueue_key((name), (singlethread), (freezeable), NULL, NULL)
#endif
#define create_workqueue(name) __create_workqueue((name), 0, 0)
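The __builtin_constant_p() test above exists because lockdep wants a name with static storage duration: a string literal qualifies, while a runtime name pointer may not outlive the caller, so the stringified expression is used instead. A hedged userspace sketch of the selection; LOCK_NAME is a made-up macro name:

#include <stdio.h>

/* Pick the literal itself when the argument is a compile-time
 * constant, otherwise fall back to the stringified expression,
 * which is always a literal with static storage. */
#define LOCK_NAME(n) (__builtin_constant_p(n) ? (n) : #n)

int main(void)
{
	const char *dyn = "runtime-name";

	printf("%s\n", LOCK_NAME("events"));	/* prints "events" */
	printf("%s\n", LOCK_NAME(dyn));		/* prints "dyn" */
	return 0;
}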
unsigned for_reclaim:1; /* Invoked from the page allocator */
unsigned for_writepages:1; /* This is a writepages() call */
unsigned range_cyclic:1; /* range_start is cyclic */
- unsigned more_io:1; /* more io to be dispatched */
};
/*
#define AX25_P_ATALK 0xca /* Appletalk */
#define AX25_P_ATALK_ARP 0xcb /* Appletalk ARP */
#define AX25_P_IP 0xcc /* ARPA Internet Protocol */
-#define AX25_P_ARP 0xcd /* ARPA Adress Resolution */
+#define AX25_P_ARP 0xcd /* ARPA Address Resolution */
#define AX25_P_FLEXNET 0xce /* FlexNet */
#define AX25_P_NETROM 0xcf /* NET/ROM */
#define AX25_P_TEXT 0xF0 /* No layer 3 protocol impl. */
struct net_device *dev; /* virtual device associated with tunnel */
struct net_device_stats stat; /* statistics for tunnel device */
int recursion; /* depth of hard_start_xmit recursion */
- struct ip6_tnl_parm parms; /* tunnel configuration paramters */
+ struct ip6_tnl_parm parms; /* tunnel configuration parameters */
struct flowi fl; /* flowi template for xmit */
struct dst_entry *dst_cache; /* cached dst */
u32 dst_cookie;
irda_queue_t q; /* Must be first! */
discinfo_t data; /* Basic discovery information */
- int name_len; /* Lenght of nickname */
+ int name_len; /* Length of nickname */
LAP_REASON condition; /* More info about the discovery */
int gen_addr_bit; /* Need to generate a new device
return (skb->nfct == &nf_conntrack_untracked.ct_general);
}
+extern int nf_conntrack_set_hashsize(const char *val, struct kernel_param *kp);
extern unsigned int nf_conntrack_htable_size;
extern int nf_conntrack_checksum;
extern atomic_t nf_conntrack_count;
n->tc_verd = SET_TC_VERD(n->tc_verd, 0);
n->tc_verd = CLR_TC_OK2MUNGE(n->tc_verd);
n->tc_verd = CLR_TC_MUNGED(n->tc_verd);
- n->iif = skb->iif;
}
return n;
}
/* The default SACK delay timeout for new associations. */
__u32 sackdelay;
- /* Flags controling Heartbeat, SACK delay, and Path MTU Discovery. */
+ /* Flags controlling Heartbeat, SACK delay, and Path MTU Discovery. */
__u32 param_flags;
struct sctp_initmsg initmsg;
/* PMTU : The current known path MTU. */
__u32 pathmtu;
- /* Flags controling Heartbeat, SACK delay, and Path MTU Discovery. */
+ /* Flags controlling Heartbeat, SACK delay, and Path MTU Discovery. */
__u32 param_flags;
/* The number of times INIT has been sent on this transport. */
*/
__u32 pathmtu;
- /* Flags controling Heartbeat, SACK delay, and Path MTU Discovery. */
+ /* Flags controlling Heartbeat, SACK delay, and Path MTU Discovery. */
__u32 param_flags;
/* SACK delay timeout */
SCTP_SHUTDOWN_EVENT,
SCTP_PARTIAL_DELIVERY_EVENT,
SCTP_ADAPTATION_INDICATION,
- SCTP_AUTHENTICATION_EVENT,
+ SCTP_AUTHENTICATION_INDICATION,
};
/* Notification error codes used to fill up the error fields in some
return err;
rcu_read_lock_bh();
- filter = sk->sk_filter;
+ filter = rcu_dereference(sk->sk_filter);
if (filter) {
unsigned int pkt_len = sk_run_filter(skb, filter->insns,
filter->len);
return ret;
}
+static inline int xfrm_alg_len(struct xfrm_algo *alg)
+{
+ return sizeof(*alg) + ((alg->alg_key_len + 7) / 8);
+}
+
#ifdef CONFIG_XFRM_MIGRATE
static inline struct xfrm_algo *xfrm_algo_clone(struct xfrm_algo *orig)
{
- return (struct xfrm_algo *)kmemdup(orig, sizeof(*orig) + orig->alg_key_len, GFP_KERNEL);
+ return kmemdup(orig, xfrm_alg_len(orig), GFP_KERNEL);
}
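The helper matters because alg_key_len counts bits while kmemdup() copies bytes; the old sizeof(*orig) + orig->alg_key_len treated the bit count as a byte count and over-allocated. A quick standalone check of the rounding:

#include <stdio.h>

/* Round a key length in bits up to whole bytes, as in xfrm_alg_len(). */
static int key_bytes(int bits)
{
	return (bits + 7) / 8;
}

int main(void)
{
	printf("128 bits -> %d bytes\n", key_bytes(128));	/* 16 */
	printf("129 bits -> %d bytes\n", key_bytes(129));	/* 17 */
	printf("192 bits -> %d bytes\n", key_bytes(192));	/* 24 */
	return 0;
}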
static inline void xfrm_states_put(struct xfrm_state **states, int n)
endmenu # General setup
+config SLABINFO
+ bool
+ depends on PROC_FS
+ depends on SLAB || SLUB
+ default y
+
config RT_MUTEXES
boolean
select PLIST
#endif
#if ACCT_VERSION==3
ac.ac_pid = current->tgid;
- ac.ac_ppid = current->parent->tgid;
+ ac.ac_ppid = current->real_parent->tgid;
#endif
spin_lock_irq(¤t->sighand->siglock);
}
/*
- * Fixup the pi_state owner with current.
+ * Fixup the pi_state owner with the new owner.
*
* Must be called with hash bucket lock held and mm->sem held for non
* private futexes.
*/
static int fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
- struct task_struct *curr)
+ struct task_struct *newowner)
{
- u32 newtid = task_pid_vnr(curr) | FUTEX_WAITERS;
+ u32 newtid = task_pid_vnr(newowner) | FUTEX_WAITERS;
struct futex_pi_state *pi_state = q->pi_state;
u32 uval, curval, newval;
int ret;
} else
newtid |= FUTEX_OWNER_DIED;
- pi_state->owner = curr;
+ pi_state->owner = newowner;
- spin_lock_irq(&curr->pi_lock);
+ spin_lock_irq(&newowner->pi_lock);
WARN_ON(!list_empty(&pi_state->list));
- list_add(&pi_state->list, &curr->pi_state_list);
- spin_unlock_irq(&curr->pi_lock);
+ list_add(&pi_state->list, &newowner->pi_state_list);
+ spin_unlock_irq(&newowner->pi_lock);
/*
* We own it, so we have to replace the pending owner
* when we were on the way back before we locked the
* hash bucket.
*/
- if (q.pi_state->owner == curr &&
- rt_mutex_trylock(&q.pi_state->pi_mutex)) {
- ret = 0;
+ if (q.pi_state->owner == curr) {
+ /*
+ * Try to get the rt_mutex now. This might
+ * fail as some other task acquired the
+ * rt_mutex after we removed ourself from the
+ * rt_mutex waiters list.
+ */
+ if (rt_mutex_trylock(&q.pi_state->pi_mutex))
+ ret = 0;
+ else {
+ /*
+ * pi_state is incorrect, some other
+ * task did a lock steal and we
+ * returned due to timeout or signal
+ * without taking the rt_mutex. Too
+ * late. We can access the
+ * rt_mutex_owner without locking, as
+ * the other task is now blocked on
+ * the hash bucket lock. Fix the state
+ * up.
+ */
+ struct task_struct *owner;
+ int res;
+
+ owner = rt_mutex_owner(&q.pi_state->pi_mutex);
+ res = fixup_pi_state_owner(uaddr, &q, owner);
+
+ WARN_ON(rt_mutex_owner(&q.pi_state->pi_mutex) !=
+ owner);
+
+ /* propagate -EFAULT, if the fixup failed */
+ if (res)
+ ret = res;
+ }
} else {
/*
* Paranoia check. If we did not take the lock
/*
* Functions related to boot-time initialization:
*/
-static void __devinit init_hrtimers_cpu(int cpu)
+static void __cpuinit init_hrtimers_cpu(int cpu)
{
struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
int i;
VMCOREINFO_OFFSET(list_head, next);
VMCOREINFO_OFFSET(list_head, prev);
VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER);
+ VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES);
VMCOREINFO_NUMBER(NR_FREE_PAGES);
arch_crash_save_vmcoreinfo();
enum umh_wait wait)
{
DECLARE_COMPLETION_ONSTACK(done);
- int retval;
+ int retval = 0;
helper_lock();
- if (sub_info->path[0] == '\0') {
- retval = 0;
+ if (sub_info->path[0] == '\0')
goto out;
- }
if (!khelper_wq || usermodehelper_disabled) {
retval = -EBUSY;
sub_info->wait = wait;
queue_work(khelper_wq, &sub_info->work);
- if (wait == UMH_NO_WAIT) /* task has freed sub_info */
- return 0;
+ if (wait == UMH_NO_WAIT) /* task has freed sub_info */
+ goto unlock;
wait_for_completion(&done);
retval = sub_info->retval;
- out:
+out:
call_usermodehelper_freeinfo(sub_info);
+unlock:
helper_unlock();
return retval;
}
struct list_head *head;
unsigned long flags;
int i;
+ int locked;
raw_local_irq_save(flags);
- graph_lock();
+ locked = graph_lock();
/*
* Unhash all classes that were created by this module:
zap_class(class);
}
- graph_unlock();
+ if (locked)
+ graph_unlock();
raw_local_irq_restore(flags);
}
struct list_head *head;
unsigned long flags;
int i, j;
+ int locked;
raw_local_irq_save(flags);
* Debug check: in the end all mapped classes should
* be gone.
*/
- graph_lock();
+ locked = graph_lock();
for (i = 0; i < CLASSHASH_SIZE; i++) {
head = classhash_table + i;
if (list_empty(head))
}
}
}
- graph_unlock();
+ if (locked)
+ graph_unlock();
out_restore:
raw_local_irq_restore(flags);
/* For kallsyms to ask for address resolution. NULL means not found.
We don't lock, as this is used for oops resolution and races are a
lesser concern. */
+/* FIXME: Risky: returns a pointer into a module w/o lock */
const char *module_address_lookup(unsigned long addr,
unsigned long *size,
unsigned long *offset,
char **modname)
{
struct module *mod;
+ const char *ret = NULL;
+ preempt_disable();
list_for_each_entry(mod, &modules, list) {
if (within(addr, mod->module_init, mod->init_size)
|| within(addr, mod->module_core, mod->core_size)) {
if (modname)
*modname = mod->name;
- return get_ksymbol(mod, addr, size, offset);
+ ret = get_ksymbol(mod, addr, size, offset);
+ break;
}
}
- return NULL;
+ preempt_enable();
+ return ret;
}
int lookup_module_symbol_name(unsigned long addr, char *symname)
{
struct module *mod;
- mutex_lock(&module_mutex);
+ preempt_disable();
list_for_each_entry(mod, &modules, list) {
if (within(addr, mod->module_init, mod->init_size) ||
within(addr, mod->module_core, mod->core_size)) {
if (!sym)
goto out;
strlcpy(symname, sym, KSYM_NAME_LEN);
- mutex_unlock(&module_mutex);
+ preempt_enable();
return 0;
}
}
out:
- mutex_unlock(&module_mutex);
+ preempt_enable();
return -ERANGE;
}
{
struct module *mod;
- mutex_lock(&module_mutex);
+ preempt_disable();
list_for_each_entry(mod, &modules, list) {
if (within(addr, mod->module_init, mod->init_size) ||
within(addr, mod->module_core, mod->core_size)) {
strlcpy(modname, mod->name, MODULE_NAME_LEN);
if (name)
strlcpy(name, sym, KSYM_NAME_LEN);
- mutex_unlock(&module_mutex);
+ preempt_enable();
return 0;
}
}
out:
- mutex_unlock(&module_mutex);
+ preempt_enable();
return -ERANGE;
}
{
struct module *mod;
- mutex_lock(&module_mutex);
+ preempt_disable();
list_for_each_entry(mod, &modules, list) {
if (symnum < mod->num_symtab) {
*value = mod->symtab[symnum].st_value;
KSYM_NAME_LEN);
strlcpy(module_name, mod->name, MODULE_NAME_LEN);
*exported = is_exported(name, mod);
- mutex_unlock(&module_mutex);
+ preempt_enable();
return 0;
}
symnum -= mod->num_symtab;
}
- mutex_unlock(&module_mutex);
+ preempt_enable();
return -ERANGE;
}
unsigned long ret = 0;
/* Don't lock: we're in enough trouble already. */
+ preempt_disable();
if ((colon = strchr(name, ':')) != NULL) {
*colon = '\0';
if ((mod = find_module(name)) != NULL)
if ((ret = mod_find_symname(mod, name)) != 0)
break;
}
+ preempt_enable();
return ret;
}
#endif /* CONFIG_KALLSYMS */
decl_subsys(module, &module_ktype, &module_uevent_ops);
int module_sysfs_initialized;
+static void module_release(struct kobject *kobj)
+{
+ /*
+ * Stupid empty release function to allow the memory for the kobject to
+ * be properly cleaned up. This will not need to be present for 2.6.25
+ * with the upcoming kobject core rework.
+ */
+}
+
static struct kobj_type module_ktype = {
.sysfs_ops = &module_sysfs_ops,
+ .release = module_release,
};
/*
DEFINE_MUTEX(pm_mutex);
+unsigned int pm_flags;
+EXPORT_SYMBOL(pm_flags);
+
#ifdef CONFIG_SUSPEND
/* This is just an arbitrary number */
#include <linux/interrupt.h>
#include <linux/mutex.h>
-int pm_active;
-
/*
* Locking notes:
* pm_devs_lock can be a semaphore providing pm ops are not called
EXPORT_SYMBOL(pm_register);
EXPORT_SYMBOL(pm_send_all);
-EXPORT_SYMBOL(pm_active);
-
* commonly to provide a default console (ie from PROM variables) when
* the user has not supplied one.
*/
-int __init add_preferred_console(char *name, int idx, char *options)
+int add_preferred_console(char *name, int idx, char *options)
{
struct console_cmdline *c;
int i;
return ret;
}
-static int may_attach(struct task_struct *task)
+int __ptrace_may_attach(struct task_struct *task)
{
/* May we inspect the given task?
* This check is used both for attaching with ptrace
{
int err;
task_lock(task);
- err = may_attach(task);
+ err = __ptrace_may_attach(task);
task_unlock(task);
return !err;
}
/* the same process cannot be attached many times */
if (task->ptrace & PT_PTRACED)
goto bad;
- retval = may_attach(task);
+ retval = __ptrace_may_attach(task);
if (retval)
goto bad;
rdp->blimit = blimit;
}
-static void __devinit rcu_online_cpu(int cpu)
+static void __cpuinit rcu_online_cpu(int cpu)
{
struct rcu_data *rdp = &per_cpu(rcu_data, cpu);
struct rcu_data *bh_rdp = &per_cpu(rcu_bh_data, cpu);
}
#endif
printk(KERN_CONT "%5lu %5d %6d\n", free,
- task_pid_nr(p), task_pid_nr(p->parent));
+ task_pid_nr(p), task_pid_nr(p->real_parent));
if (state != TASK_RUNNING)
show_stack(p, NULL);
{
int i;
+ /*
+ * A weight of 0 or 1 can cause arithmetic problems.
+ * (The default weight is 1024 - so there's no practical
+ * limitation from this.)
+ */
+ if (shares < 2)
+ shares = 2;
+
spin_lock(&tg->lock);
if (tg->shares == shares)
goto done;
/*
* Ease the printing of nsec fields:
*/
-static long long nsec_high(long long nsec)
+static long long nsec_high(unsigned long long nsec)
{
- if (nsec < 0) {
+ if ((long long)nsec < 0) {
nsec = -nsec;
do_div(nsec, 1000000);
return -nsec;
return nsec;
}
-static unsigned long nsec_low(long long nsec)
+static unsigned long nsec_low(unsigned long long nsec)
{
- if (nsec < 0)
+ if ((long long)nsec < 0)
nsec = -nsec;
return do_div(nsec, 1000000);
int pid;
rcu_read_lock();
- pid = task_ppid_nr_ns(current, current->nsproxy->pid_ns);
+ pid = task_tgid_nr_ns(current->real_parent, current->nsproxy->pid_ns);
rcu_read_unlock();
return pid;
}
}
-static void __devinit migrate_timers(int cpu)
+static void __cpuinit migrate_timers(int cpu)
{
tvec_base_t *old_base;
tvec_base_t *new_base;
struct workqueue_struct *__create_workqueue_key(const char *name,
int singlethread,
int freezeable,
- struct lock_class_key *key)
+ struct lock_class_key *key,
+ const char *lock_name)
{
struct workqueue_struct *wq;
struct cpu_workqueue_struct *cwq;
}
wq->name = name;
- lockdep_init_map(&wq->lockdep_map, name, key, 0);
+ lockdep_init_map(&wq->lockdep_map, lock_name, key, 0);
wq->singlethread = singlethread;
wq->freezeable = freezeable;
INIT_LIST_HEAD(&wq->list);
* PERCPU
*/
+#define PROP_BATCH (8*(1+ilog2(nr_cpu_ids)))
+
int prop_local_init_percpu(struct prop_local_percpu *pl)
{
spin_lock_init(&pl->lock);
spin_lock_irqsave(&pl->lock, flags);
prop_adjust_shift(&pl->shift, &pl->period, pg->shift);
+
/*
* For each missed period, we half the local counter.
* basically:
* pl->events >> (global_period - pl->period);
- *
- * but since the distributed nature of percpu counters make division
- * rather hard, use a regular subtraction loop. This is safe, because
- * the events will only every be incremented, hence the subtraction
- * can never result in a negative number.
*/
- while (pl->period != global_period) {
- unsigned long val = percpu_counter_read(&pl->events);
- unsigned long half = (val + 1) >> 1;
-
- /*
- * Half of zero won't be much less, break out.
- * This limits the loop to shift iterations, even
- * if we missed a million.
- */
- if (!val)
- break;
-
- percpu_counter_add(&pl->events, -half);
- pl->period += period;
- }
+ period = (global_period - pl->period) >> (pg->shift - 1);
+ if (period < BITS_PER_LONG) {
+ s64 val = percpu_counter_read(&pl->events);
+
+ if (val < (nr_cpu_ids * PROP_BATCH))
+ val = percpu_counter_sum(&pl->events);
+
+ __percpu_counter_add(&pl->events, -val + (val >> period),
+ PROP_BATCH);
+ } else
+ percpu_counter_set(&pl->events, 0);
+
pl->period = global_period;
spin_unlock_irqrestore(&pl->lock, flags);
}
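The replacement leans on the identity that halving once per missed period equals a single right shift by the number of missed periods (modulo rounding in the old subtraction loop). A standalone sketch comparing the two, with the pg->shift bookkeeping omitted:

#include <stdio.h>

int main(void)
{
	unsigned long val = 1000, loop = 1000;
	int missed = 3;		/* periods that were never aged */
	int i;

	for (i = 0; i < missed; i++)	/* old approach: halve per period */
		loop >>= 1;

	/* new approach: one shift covers all missed periods */
	printf("loop=%lu shift=%lu\n", loop, val >> missed);	/* 125 125 */
	return 0;
}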
struct prop_global *pg = prop_get_global(pd);
prop_norm_percpu(pg, pl);
- percpu_counter_add(&pl->events, 1);
+ __percpu_counter_add(&pl->events, 1, PROP_BATCH);
percpu_counter_add(&pg->events, 1);
prop_put_global(pd, pg);
}
static struct page *xip_sparse_page(void)
{
if (!__xip_sparse_page) {
- unsigned long zeroes = get_zeroed_page(GFP_HIGHUSER);
- if (zeroes) {
+ struct page *page = alloc_page(GFP_HIGHUSER | __GFP_ZERO);
+
+ if (page) {
static DEFINE_SPINLOCK(xip_alloc_lock);
spin_lock(&xip_alloc_lock);
if (!__xip_sparse_page)
- __xip_sparse_page = virt_to_page(zeroes);
+ __xip_sparse_page = page;
else
- free_page(zeroes);
+ __free_page(page);
spin_unlock(&xip_alloc_lock);
}
}
if (free_huge_pages > resv_huge_pages)
page = dequeue_huge_page(vma, addr);
spin_unlock(&hugetlb_lock);
- if (!page)
+ if (!page) {
page = alloc_buddy_huge_page(vma, addr);
- return page ? page : ERR_PTR(-VM_FAULT_OOM);
+ if (!page) {
+ hugetlb_put_quota(vma->vm_file->f_mapping, 1);
+ return ERR_PTR(-VM_FAULT_OOM);
+ }
+ }
+ return page;
}
static struct page *alloc_huge_page(struct vm_area_struct *vma,
if (hugetlb_get_quota(inode->i_mapping, chg))
return -ENOSPC;
ret = hugetlb_acct_memory(chg);
- if (ret < 0)
+ if (ret < 0) {
+ hugetlb_put_quota(inode->i_mapping, chg);
return ret;
+ }
region_add(&inode->i_mapping->private_list, from, to);
return 0;
}
return NULL;
}
+#ifdef CONFIG_DEBUG_VM
/*
* Add some anal sanity checks for now. Eventually,
* we should just do "return pfn_to_page(pfn)", but
print_bad_pte(vma, pte, addr);
return NULL;
}
+#endif
/*
* NOTE! We still have PageReserved() pages in the page
unlock:
pte_unmap_unlock(page_table, ptl);
if (dirty_page) {
+ if (vma->vm_file)
+ file_update_time(vma->vm_file);
+
/*
* Yes, Virginia, this is actually required to prevent a race
* with clear_page_dirty_for_io() from clearing the page dirty
if (anon)
page_cache_release(vmf.page);
else if (dirty_page) {
+ if (vma->vm_file)
+ file_update_time(vma->vm_file);
+
set_page_dirty_balance(dirty_page, page_mkwrite);
put_page(dirty_page);
}
global_page_state(NR_UNSTABLE_NFS) < background_thresh
&& min_pages <= 0)
break;
- wbc.more_io = 0;
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
wbc.pages_skipped = 0;
min_pages -= MAX_WRITEBACK_PAGES - wbc.nr_to_write;
if (wbc.nr_to_write > 0 || wbc.pages_skipped > 0) {
/* Wrote less than expected */
- if (wbc.encountered_congestion || wbc.more_io)
- congestion_wait(WRITE, HZ/10);
- else
+ congestion_wait(WRITE, HZ/10);
+ if (!wbc.encountered_congestion)
break;
}
}
global_page_state(NR_UNSTABLE_NFS) +
(inodes_stat.nr_inodes - inodes_stat.nr_unused);
while (nr_to_write > 0) {
- wbc.more_io = 0;
wbc.encountered_congestion = 0;
wbc.nr_to_write = MAX_WRITEBACK_PAGES;
writeback_inodes(&wbc);
if (wbc.nr_to_write > 0) {
- if (wbc.encountered_congestion || wbc.more_io)
+ if (wbc.encountered_congestion)
congestion_wait(WRITE, HZ/10);
else
break; /* All the old data is written */
memmap_init_zone((size), (nid), (zone), (start_pfn), MEMMAP_EARLY)
#endif
-static int __devinit zone_batchsize(struct zone *zone)
+static int zone_batchsize(struct zone *zone)
{
int batch;
mem_map = NODE_DATA(0)->node_mem_map;
#ifdef CONFIG_ARCH_POPULATES_NODE_MAP
if (page_to_pfn(mem_map) != pgdat->node_start_pfn)
- mem_map -= pgdat->node_start_pfn;
+ mem_map -= (pgdat->node_start_pfn - ARCH_PFN_OFFSET);
#endif /* CONFIG_ARCH_POPULATES_NODE_MAP */
}
#endif
static unsigned long max_pages(unsigned long min_pages)
{
unsigned long node_free_pages, max;
+ struct zone *zones = NODE_DATA(numa_node_id())->node_zones;
+
+ node_free_pages =
+#ifdef CONFIG_ZONE_DMA
+ zone_page_state(&zones[ZONE_DMA], NR_FREE_PAGES) +
+#endif
+#ifdef CONFIG_ZONE_DMA32
+ zone_page_state(&zones[ZONE_DMA32], NR_FREE_PAGES) +
+#endif
+ zone_page_state(&zones[ZONE_NORMAL], NR_FREE_PAGES);
- node_free_pages = node_page_state(numa_node_id(),
- NR_FREE_PAGES);
max = node_free_pages / FRACTION_OF_NODE_MEM;
return max(max, min_pages);
}
schedule_delayed_work(work, round_jiffies_relative(REAPTIMEOUT_CPUC));
}
-#ifdef CONFIG_PROC_FS
+#ifdef CONFIG_SLABINFO
static void print_slabinfo_header(struct seq_file *m)
{
 * Minimum number of partial slabs. These will be left on the partial
* lists even if they are empty. kmem_cache_shrink may reclaim them.
*/
-#define MIN_PARTIAL 2
+#define MIN_PARTIAL 5
/*
* Maximum number of desirable partial slabs.
* then add it.
*/
if (unlikely(!prior))
- add_partial(get_node(s, page_to_nid(page)), page);
+ add_partial_tail(get_node(s, page_to_nid(page)), page);
out_unlock:
slab_unlock(page);
return slab_alloc(s, gfpflags, node, caller);
}
+static unsigned long count_partial(struct kmem_cache_node *n)
+{
+ unsigned long flags;
+ unsigned long x = 0;
+ struct page *page;
+
+ spin_lock_irqsave(&n->list_lock, flags);
+ list_for_each_entry(page, &n->partial, lru)
+ x += page->inuse;
+ spin_unlock_irqrestore(&n->list_lock, flags);
+ return x;
+}
+
#if defined(CONFIG_SYSFS) && defined(CONFIG_SLUB_DEBUG)
static int validate_slab(struct kmem_cache *s, struct page *page,
unsigned long *map)
return n;
}
-static unsigned long count_partial(struct kmem_cache_node *n)
-{
- unsigned long flags;
- unsigned long x = 0;
- struct page *page;
-
- spin_lock_irqsave(&n->list_lock, flags);
- list_for_each_entry(page, &n->partial, lru)
- x += page->inuse;
- spin_unlock_irqrestore(&n->list_lock, flags);
- return x;
-}
-
enum slab_stat_type {
SL_FULL,
SL_PARTIAL,
__initcall(slab_sysfs_init);
#endif
+
+/*
+ * The /proc/slabinfo ABI
+ */
+#ifdef CONFIG_SLABINFO
+
+ssize_t slabinfo_write(struct file *file, const char __user * buffer,
+ size_t count, loff_t *ppos)
+{
+ return -EINVAL;
+}
+
+
+static void print_slabinfo_header(struct seq_file *m)
+{
+ seq_puts(m, "slabinfo - version: 2.1\n");
+ seq_puts(m, "# name <active_objs> <num_objs> <objsize> "
+ "<objperslab> <pagesperslab>");
+ seq_puts(m, " : tunables <limit> <batchcount> <sharedfactor>");
+ seq_puts(m, " : slabdata <active_slabs> <num_slabs> <sharedavail>");
+ seq_putc(m, '\n');
+}
+
+static void *s_start(struct seq_file *m, loff_t *pos)
+{
+ loff_t n = *pos;
+
+ down_read(&slub_lock);
+ if (!n)
+ print_slabinfo_header(m);
+
+ return seq_list_start(&slab_caches, *pos);
+}
+
+static void *s_next(struct seq_file *m, void *p, loff_t *pos)
+{
+ return seq_list_next(p, &slab_caches, pos);
+}
+
+static void s_stop(struct seq_file *m, void *p)
+{
+ up_read(&slub_lock);
+}
+
+static int s_show(struct seq_file *m, void *p)
+{
+ unsigned long nr_partials = 0;
+ unsigned long nr_slabs = 0;
+ unsigned long nr_inuse = 0;
+ unsigned long nr_objs;
+ struct kmem_cache *s;
+ int node;
+
+ s = list_entry(p, struct kmem_cache, list);
+
+ for_each_online_node(node) {
+ struct kmem_cache_node *n = get_node(s, node);
+
+ if (!n)
+ continue;
+
+ nr_partials += n->nr_partial;
+ nr_slabs += atomic_long_read(&n->nr_slabs);
+ nr_inuse += count_partial(n);
+ }
+
+ nr_objs = nr_slabs * s->objects;
+ nr_inuse += (nr_slabs - nr_partials) * s->objects;
+
+ seq_printf(m, "%-17s %6lu %6lu %6u %4u %4d", s->name, nr_inuse,
+ nr_objs, s->size, s->objects, (1 << s->order));
+ seq_printf(m, " : tunables %4u %4u %4u", 0, 0, 0);
+ seq_printf(m, " : slabdata %6lu %6lu %6lu", nr_slabs, nr_slabs,
+ 0UL);
+ seq_putc(m, '\n');
+ return 0;
+}
+
+const struct seq_operations slabinfo_op = {
+ .start = s_start,
+ .next = s_next,
+ .stop = s_stop,
+ .show = s_show,
+};
+
+#endif /* CONFIG_SLABINFO */
static int __init rif_init(void)
{
init_timer(&rif_timer);
- rif_timer.expires = sysctl_tr_rif_timeout;
+ rif_timer.expires = jiffies + sysctl_tr_rif_timeout;
rif_timer.data = 0L;
rif_timer.function = rif_check_expire;
add_timer(&rif_timer);
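The one-liner fixes a classic timer mistake: expires holds an absolute jiffies value, so a relative timeout must be rebased on the current time or the timer fires almost immediately. A trivial sketch with a pretend jiffies counter:

#include <stdio.h>

int main(void)
{
	unsigned long jiffies = 100000;	/* pretend current tick count */
	unsigned long timeout = 6000;	/* relative delay in ticks */

	/* Wrong: an absolute expiry of 6000 is already in the past. */
	printf("wrong expires: %lu\n", timeout);
	/* Right: rebase the relative delay on the current time. */
	printf("right expires: %lu\n", jiffies + timeout);
	return 0;
}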
static int vlan_dev_init(struct net_device *dev)
{
struct net_device *real_dev = VLAN_DEV_INFO(dev)->real_dev;
+ int subclass = 0;
/* IFF_BROADCAST|IFF_MULTICAST; ??? */
dev->flags = real_dev->flags & ~IFF_UP;
dev->hard_start_xmit = vlan_dev_hard_start_xmit;
}
- lockdep_set_class(&dev->_xmit_lock, &vlan_netdev_xmit_lock_key);
+ if (real_dev->priv_flags & IFF_802_1Q_VLAN)
+ subclass = 1;
+
+ lockdep_set_class_and_subclass(&dev->_xmit_lock,
+ &vlan_netdev_xmit_lock_key, subclass);
return 0;
}
if (eth->h_proto != htons(ETH_P_IP))
goto non_ip; /* Multi-Protocol Over ATM :-) */
+ /* Weed out funny packets (e.g., AF_PACKET or raw). */
+ if (skb->len < ETH_HLEN + sizeof(struct iphdr))
+ goto non_ip;
+ skb_set_network_header(skb, ETH_HLEN);
+ if (skb->len < ETH_HLEN + ip_hdr(skb)->ihl * 4 || ip_hdr(skb)->ihl < 5)
+ goto non_ip;
+
while (i < mpc->number_of_mps_macs) {
if (!compare_ether_addr(eth->h_dest, (mpc->mps_macs + i*ETH_ALEN)))
if ( send_via_shortcut(skb, mpc) == 0 ) /* try shortcut */
return;
spin_lock_bh(&ax25_list_lock);
+again:
ax25_for_each(s, node, &ax25_list) {
if (s->ax25_dev == ax25_dev) {
s->ax25_dev = NULL;
+ spin_unlock_bh(&ax25_list_lock);
ax25_disconnect(s, ENETUNREACH);
+ spin_lock_bh(&ax25_list_lock);
+
+ /* The entry could have been deleted from the
+ * list meanwhile and thus the next pointer is
+ * no longer valid. Play it safe and restart
+ * the scan. Forward progress is ensured
+ * because we set s->ax25_dev to NULL and we
+ * are never passed a NULL 'dev' argument.
+ */
+ goto again;
}
}
spin_unlock_bh(&ax25_list_lock);
* some sanity checks. code further down depends on this
*/
- if (addr_len == sizeof(struct sockaddr_ax25)) {
- /* support for this will go away in early 2.5.x */
- printk(KERN_WARNING "ax25_connect(): %s uses obsolete socket structure\n",
- current->comm);
- }
- else if (addr_len != sizeof(struct full_sockaddr_ax25)) {
- /* support for old structure may go away some time */
+ if (addr_len == sizeof(struct sockaddr_ax25))
+ /* support for this will go away in early 2.5.x
+ * ax25_connect(): uses obsolete socket structure
+ */
+ ;
+ else if (addr_len != sizeof(struct full_sockaddr_ax25))
+ /* support for old structure may go away some time
+ * ax25_connect(): uses old (6 digipeater) socket structure.
+ */
if ((addr_len < sizeof(struct sockaddr_ax25) + sizeof(ax25_address) * 6) ||
- (addr_len > sizeof(struct full_sockaddr_ax25))) {
+ (addr_len > sizeof(struct full_sockaddr_ax25)))
return -EINVAL;
- }
- printk(KERN_WARNING "ax25_connect(): %s uses old (6 digipeater) socket structure.\n",
- current->comm);
- }
if (fsa->fsa_ax25.sax25_family != AF_AX25)
return -EINVAL;
goto out;
}
- if (addr_len == sizeof(struct sockaddr_ax25)) {
- printk(KERN_WARNING "ax25_sendmsg(): %s uses obsolete socket structure\n",
- current->comm);
- }
- else if (addr_len != sizeof(struct full_sockaddr_ax25)) {
- /* support for old structure may go away some time */
+ if (addr_len == sizeof(struct sockaddr_ax25))
+ /* ax25_sendmsg(): uses obsolete socket structure */
+ ;
+ else if (addr_len != sizeof(struct full_sockaddr_ax25))
+ /* support for old structure may go away some time
+ * ax25_sendmsg(): uses old (6 digipeater)
+ * socket structure.
+ */
if ((addr_len < sizeof(struct sockaddr_ax25) + sizeof(ax25_address) * 6) ||
(addr_len > sizeof(struct full_sockaddr_ax25))) {
err = -EINVAL;
goto out;
}
- printk(KERN_WARNING "ax25_sendmsg(): %s uses old (6 digipeater) socket structure.\n",
- current->comm);
- }
if (addr_len > sizeof(struct sockaddr_ax25) && usax->sax25_ndigis != 0) {
int ct = 0;
}
skb_pull(skb, 1); /* Remove PID */
- skb_reset_mac_header(skb);
+ skb->mac_header = skb->network_header;
skb_reset_network_header(skb);
skb->dev = ax25->ax25_dev->dev;
skb->pkt_type = PACKET_HOST;
}
tasklet_disable(&hdev->tx_task);
-
- hci_conn_del_sysfs(conn);
-
hci_conn_hash_del(hdev, conn);
if (hdev->notify)
hdev->notify(hdev, HCI_NOTIFY_CONN_DEL);
-
tasklet_enable(&hdev->tx_task);
-
skb_queue_purge(&conn->data_q);
-
+ hci_conn_del_sysfs(conn);
hci_dev_put(hdev);
- /* will free via device release */
- put_device(&conn->dev);
-
return 0;
}
{
struct hci_conn *conn = container_of(work, struct hci_conn, work);
device_del(&conn->dev);
+ put_device(&conn->dev);
}
void hci_conn_del_sysfs(struct hci_conn *conn)
BT_DBG("dev %p dlc %p", dev, dlc);
- write_lock_bh(&rfcomm_dev_lock);
- list_del_init(&dev->list);
- write_unlock_bh(&rfcomm_dev_lock);
+ /* Refcount should only hit zero when called from rfcomm_dev_del()
which will have taken us off the list. Anything else is a
refcounting bug. */
+ BUG_ON(!list_empty(&dev->list));
rfcomm_dlc_lock(dlc);
/* Detach DLC if it's owned by this dev */
tty_unregister_device(rfcomm_tty_driver, dev->id);
- /* Refcount should only hit zero when called from rfcomm_dev_del()
- which will have taken us off the list. Everything else are
- refcounting bugs. */
- BUG_ON(!list_empty(&dev->list));
-
kfree(dev);
/* It's safe to call module_put() here because socket still
{
BT_DBG("dev %p", dev);
- set_bit(RFCOMM_TTY_RELEASED, &dev->flags);
+ if (test_bit(RFCOMM_TTY_RELEASED, &dev->flags))
+ BUG_ON(1);
+ else
+ set_bit(RFCOMM_TTY_RELEASED, &dev->flags);
+
+ write_lock_bh(&rfcomm_dev_lock);
+ list_del_init(&dev->list);
+ write_unlock_bh(&rfcomm_dev_lock);
+
rfcomm_dev_put(dev);
}
return skb->nf_bridge;
}
+static inline struct nf_bridge_info *nf_bridge_unshare(struct sk_buff *skb)
+{
+ struct nf_bridge_info *nf_bridge = skb->nf_bridge;
+
+ if (atomic_read(&nf_bridge->use) > 1) {
+ struct nf_bridge_info *tmp = nf_bridge_alloc(skb);
+
+ if (tmp) {
+ memcpy(tmp, nf_bridge, sizeof(struct nf_bridge_info));
+ atomic_set(&tmp->use, 1);
+ nf_bridge_put(nf_bridge);
+ }
+ nf_bridge = tmp;
+ }
+ return nf_bridge;
+}
+
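nf_bridge_unshare() is a copy-on-write pattern: copy only when another holder still references the structure, otherwise mutate in place. A generic standalone sketch of the same shape; struct info and unshare() are illustrative, with a plain int standing in for the atomic refcount:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct info {
	int use;	/* stand-in for the atomic refcount */
	int data;
};

static struct info *unshare(struct info *shared)
{
	if (shared->use > 1) {
		struct info *copy = malloc(sizeof(*copy));

		if (!copy)
			return NULL;	/* caller drops the packet */
		memcpy(copy, shared, sizeof(*copy));
		copy->use = 1;
		shared->use--;		/* the nf_bridge_put() step */
		return copy;
	}
	return shared;			/* sole owner, nothing to do */
}

int main(void)
{
	struct info a = { .use = 2, .data = 7 };
	struct info *mine = unshare(&a);

	printf("copied=%d data=%d\n", mine != &a, mine->data);
	if (mine && mine != &a)
		free(mine);
	return 0;
}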
static inline void nf_bridge_push_encap_header(struct sk_buff *skb)
{
unsigned int len = nf_bridge_encap_header_len(skb);
* Let us first consider the case that ip_route_input() succeeds:
*
* If skb->dst->dev equals the logical bridge device the packet
- * came in on, we can consider this bridging. We then call
- * skb->dst->output() which will make the packet enter br_nf_local_out()
+ * came in on, we can consider this bridging. The packet is passed
+ * through the neighbour output function to build a new destination
+ * MAC address, which will make the packet enter br_nf_local_out()
* not much later. In that function it is assured that the iptables
* FORWARD chain is traversed for the packet.
*
skb->nf_bridge->mask ^= BRNF_NF_BRIDGE_PREROUTING;
skb->dev = bridge_parent(skb->dev);
- if (!skb->dev)
- kfree_skb(skb);
- else {
+ if (skb->dev) {
+ struct dst_entry *dst = skb->dst;
+
nf_bridge_pull_encap_header(skb);
- skb->dst->output(skb);
+
+ if (dst->hh)
+ return neigh_hh_output(dst->hh, skb);
+ else if (dst->neighbour)
+ return dst->neighbour->output(skb);
}
+ kfree_skb(skb);
return 0;
}
if (!skb->nf_bridge)
return NF_ACCEPT;
+ /* Need exclusive nf_bridge_info since we might have multiple
+ * different physoutdevs. */
+ if (!nf_bridge_unshare(skb))
+ return NF_DROP;
+
parent = bridge_parent(out);
if (!parent)
return NF_DROP;
if (!skb->nf_bridge)
return NF_ACCEPT;
+ /* Need exclusive nf_bridge_info since we might have multiple
+ * different physoutdevs. */
+ if (!nf_bridge_unshare(skb))
+ return NF_DROP;
+
nf_bridge = skb->nf_bridge;
if (!(nf_bridge->mask & BRNF_BRIDGED_DNAT))
return NF_ACCEPT;
if (copy_to_user(CMSG_COMPAT_DATA(cm), data, cmlen - sizeof(struct compat_cmsghdr)))
return -EFAULT;
cmlen = CMSG_COMPAT_SPACE(len);
+ if (kmsg->msg_controllen < cmlen)
+ cmlen = kmsg->msg_controllen;
kmsg->msg_control += cmlen;
kmsg->msg_controllen -= cmlen;
return 0;
* still "owns" the NAPI instance and therefore can
* move the instance around on the list at-will.
*/
- if (unlikely(work == weight))
- list_move_tail(&n->poll_list, list);
+ if (unlikely(work == weight)) {
+ if (unlikely(napi_disable_pending(n)))
+ __napi_complete(n);
+ else
+ list_move_tail(&n->poll_list, list);
+ }
netpoll_poll_unlock(have);
}
/*
* Upload unicast and multicast address lists to device and
* configure RX filtering. When the device doesn't support unicast
- * filtering it is put in promiscous mode while unicast addresses
+ * filtering it is put in promiscuous mode while unicast addresses
* are present.
*/
void __dev_set_rx_mode(struct net_device *dev)
struct net *net;
for_each_net(net) {
+restart:
for_each_netdev_safe(net, dev, n) {
- if (dev->rtnl_link_ops == ops)
+ if (dev->rtnl_link_ops == ops) {
ops->dellink(dev);
+ goto restart;
+ }
}
}
list_del(&ops->list);
if (copy_to_user(CMSG_DATA(cm), data, cmlen - sizeof(struct cmsghdr)))
goto out;
cmlen = CMSG_SPACE(len);
+ if (msg->msg_controllen < cmlen)
+ cmlen = msg->msg_controllen;
msg->msg_control += cmlen;
msg->msg_controllen -= cmlen;
err = 0;
C(len);
C(data_len);
C(mac_len);
- n->cloned = 1;
n->hdr_len = skb->nohdr ? skb_headroom(skb) : skb->hdr_len;
+ n->cloned = 1;
n->nohdr = 0;
n->destructor = NULL;
- C(truesize);
- atomic_set(&n->users, 1);
- C(head);
- C(data);
+ C(iif);
C(tail);
C(end);
+ C(head);
+ C(data);
+ C(truesize);
+ atomic_set(&n->users, 1);
atomic_inc(&(skb_shinfo(skb)->dataref));
skb->cloned = 1;
* @dccpavr_ack_ackno - sequence number being acknowledged
* @dccpavr_ack_ptr - pointer into dccpav_buf where this record starts
* @dccpavr_ack_nonce - dccpav_ack_nonce at the time this record was sent
- * @dccpavr_sent_len - lenght of the record in dccpav_buf
+ * @dccpavr_sent_len - length of the record in dccpav_buf
*/
struct dccp_ackvec_record {
struct list_head dccpavr_node;
ccid3_tx_state_name(hctx->ccid3hctx_state),
(unsigned)(hctx->ccid3hctx_x >> 6));
/* The value of R is still undefined and so we can not recompute
- * the timout value. Keep initial value as per [RFC 4342, 5]. */
+ * the timeout value. Keep initial value as per [RFC 4342, 5]. */
t_nfb = TFRC_INITIAL_TIMEOUT;
ccid3_update_send_interval(hctx);
break;
break;
rcu_read_unlock_bh();
}
- return rt;
+ return rcu_dereference(rt);
}
static struct dn_route *dn_rt_cache_get_next(struct seq_file *seq, struct dn_route *rt)
{
- struct dn_rt_cache_iter_state *s = rcu_dereference(seq->private);
+ struct dn_rt_cache_iter_state *s = seq->private;
rt = rt->u.dst.dn_next;
while(!rt) {
rcu_read_lock_bh();
rt = dn_rt_hash_table[s->bucket].chain;
}
- return rt;
+ return rcu_dereference(rt);
}
static void *dn_rt_cache_seq_start(struct seq_file *seq, loff_t *pos)
struct arphdr *arp;
unsigned char *arp_ptr;
struct rtable *rt;
- unsigned char *sha, *tha;
+ unsigned char *sha;
__be32 sip, tip;
u16 dev_type = dev->type;
int addr_type;
arp_ptr += dev->addr_len;
memcpy(&sip, arp_ptr, 4);
arp_ptr += 4;
- tha = arp_ptr;
arp_ptr += dev->addr_len;
memcpy(&tip, arp_ptr, 4);
/*
memcpy(ifa->ifa_label, dev->name, IFNAMSIZ);
if (named++ == 0)
continue;
- dot = strchr(ifa->ifa_label, ':');
+ dot = strchr(old, ':');
if (dot == NULL) {
sprintf(old, ":%d", named);
dot = old;
nlh = nlmsg_hdr(skb);
if (skb->len < NLMSG_SPACE(0) || skb->len < nlh->nlmsg_len ||
- nlh->nlmsg_len < NLMSG_LENGTH(sizeof(*frn))) {
- kfree_skb(skb);
+ nlh->nlmsg_len < NLMSG_LENGTH(sizeof(*frn)))
return;
- }
+
+ skb = skb_clone(skb, GFP_KERNEL);
+ if (skb == NULL)
+ return;
+ nlh = nlmsg_hdr(skb);
frn = (struct fib_result_nl *) NLMSG_DATA(nlh);
tb = fib_get_table(frn->tb_id_in);
struct fib_info *fi_drop;
u8 state;
+ if (fi->fib_treeref > 1)
+ goto out;
+
write_lock_bh(&fib_hash_lock);
fi_drop = fa->fa_info;
fa->fa_info = fi;
{
int h, s_h;
+ if (fz->fz_hash == NULL)
+ return skb->len;
s_h = cb->args[3];
- for (h=0; h < fz->fz_divisor; h++) {
- if (h < s_h) continue;
- if (h > s_h)
- memset(&cb->args[4], 0,
- sizeof(cb->args) - 4*sizeof(cb->args[0]));
- if (fz->fz_hash == NULL ||
- hlist_empty(&fz->fz_hash[h]))
+ for (h = s_h; h < fz->fz_divisor; h++) {
+ if (hlist_empty(&fz->fz_hash[h]))
continue;
- if (fn_hash_dump_bucket(skb, cb, tb, fz, &fz->fz_hash[h])<0) {
+ if (fn_hash_dump_bucket(skb, cb, tb, fz, &fz->fz_hash[h]) < 0) {
cb->args[3] = h;
return -1;
}
+ memset(&cb->args[4], 0,
+ sizeof(cb->args) - 4*sizeof(cb->args[0]));
}
cb->args[3] = h;
return skb->len;
read_lock(&fib_hash_lock);
for (fz = table->fn_zone_list, m=0; fz; fz = fz->fz_next, m++) {
if (m < s_m) continue;
- if (m > s_m)
- memset(&cb->args[3], 0,
- sizeof(cb->args) - 3*sizeof(cb->args[0]));
if (fn_hash_dump_zone(skb, cb, tb, fz) < 0) {
cb->args[2] = m;
read_unlock(&fib_hash_lock);
return -1;
}
+ memset(&cb->args[3], 0,
+ sizeof(cb->args) - 3*sizeof(cb->args[0]));
}
read_unlock(&fib_hash_lock);
cb->args[2] = m;
struct fib_info *fi_drop;
u8 state;
+ if (fi->fib_treeref > 1)
+ goto out;
+
err = -ENOBUFS;
new_fa = kmem_cache_alloc(fn_alias_kmem, GFP_KERNEL);
if (new_fa == NULL)
icmp_param.data.icmph.checksum = 0;
icmp_param.skb = skb_in;
icmp_param.offset = skb_network_offset(skb_in);
- icmp_out_count(icmp_param.data.icmph.type);
inet_sk(icmp_socket->sk)->tos = tos;
ipc.addr = iph->saddr;
ipc.opt = &icmp_param.replyopts;
skb_shinfo(lro_desc->parent)->gso_size = lro_desc->mss;
if (lro_desc->vgrp) {
- if (test_bit(LRO_F_NAPI, &lro_mgr->features))
+ if (lro_mgr->features & LRO_F_NAPI)
vlan_hwaccel_receive_skb(lro_desc->parent,
lro_desc->vgrp,
lro_desc->vlan_tag);
lro_desc->vlan_tag);
} else {
- if (test_bit(LRO_F_NAPI, &lro_mgr->features))
+ if (lro_mgr->features & LRO_F_NAPI)
netif_receive_skb(lro_desc->parent);
else
netif_rx(lro_desc->parent);
goto out;
if ((skb->protocol == htons(ETH_P_8021Q))
- && !test_bit(LRO_F_EXTRACT_VLAN_ID, &lro_mgr->features))
+ && !(lro_mgr->features & LRO_F_EXTRACT_VLAN_ID))
vlan_hdr_len = VLAN_HLEN;
if (!lro_desc->active) { /* start new lro session */
goto out;
if ((skb->protocol == htons(ETH_P_8021Q))
- && !test_bit(LRO_F_EXTRACT_VLAN_ID, &lro_mgr->features))
+ && !(lro_mgr->features & LRO_F_EXTRACT_VLAN_ID))
vlan_hdr_len = VLAN_HLEN;
iph = (void *)(skb->data + vlan_hdr_len);
void *priv)
{
if (__lro_proc_skb(lro_mgr, skb, NULL, 0, priv)) {
- if (test_bit(LRO_F_NAPI, &lro_mgr->features))
+ if (lro_mgr->features & LRO_F_NAPI)
netif_receive_skb(skb);
else
netif_rx(skb);
void *priv)
{
if (__lro_proc_skb(lro_mgr, skb, vgrp, vlan_tag, priv)) {
- if (test_bit(LRO_F_NAPI, &lro_mgr->features))
+ if (lro_mgr->features & LRO_F_NAPI)
vlan_hwaccel_receive_skb(skb, vgrp, vlan_tag);
else
vlan_hwaccel_rx(skb, vgrp, vlan_tag);
if (!skb)
return;
- if (test_bit(LRO_F_NAPI, &lro_mgr->features))
+ if (lro_mgr->features & LRO_F_NAPI)
netif_receive_skb(skb);
else
netif_rx(skb);
if (!skb)
return;
- if (test_bit(LRO_F_NAPI, &lro_mgr->features))
+ if (lro_mgr->features & LRO_F_NAPI)
vlan_hwaccel_receive_skb(skb, vgrp, vlan_tag);
else
vlan_hwaccel_rx(skb, vgrp, vlan_tag);
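The conversion away from test_bit() is not cosmetic: lro_mgr->features is a plain flag word whose LRO_F_* values are masks, while test_bit() interprets its first argument as a bit number into an unsigned long bitmap. A sketch of the difference, using the mask values from inet_lro.h:

#include <stdio.h>

#define LRO_F_NAPI		1	/* mask 0x1 */
#define LRO_F_EXTRACT_VLAN_ID	2	/* mask 0x2 */

int main(void)
{
	unsigned long features = LRO_F_EXTRACT_VLAN_ID;	/* NAPI off */

	/* Correct: treat LRO_F_NAPI as a mask. */
	printf("mask test: %d\n", !!(features & LRO_F_NAPI));	/* 0 */

	/* Wrong: as a bit number, LRO_F_NAPI selects bit 1, which is
	 * the 0x2 mask, so NAPI is wrongly reported as enabled. */
	printf("bit test:  %d\n", !!(features & (1UL << LRO_F_NAPI))); /* 1 */
	return 0;
}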
offset += 4;
}
- skb_reset_mac_header(skb);
+ skb->mac_header = skb->network_header;
__pskb_pull(skb, offset);
skb_reset_network_header(skb);
skb_postpull_rcsum(skb, skb_transport_header(skb), offset);
if (!strcmp(name, "on") || !strcmp(name, "any")) {
return 1;
}
+ if (!strcmp(name, "off") || !strcmp(name, "none")) {
+ return 0;
+ }
#ifdef CONFIG_IP_PNP_DHCP
else if (!strcmp(name, "dhcp")) {
ic_proto_enabled &= ~IC_RARP;
int num = 0;
ic_set_manually = 1;
+ ic_enable = 1;
- ic_enable = (*addrs &&
- (strcmp(addrs, "off") != 0) &&
- (strcmp(addrs, "none") != 0));
- if (!ic_enable)
+ /*
+ * If any dhcp, bootp etc options are set, leave autoconfig on
+ * and skip the static IP processing below.
+ */
+ if (ic_proto_name(addrs))
return 1;
- if (ic_proto_name(addrs))
+ /* If no static IP is given, turn off autoconfig and bail. */
+ if (*addrs == 0 ||
+ strcmp(addrs, "off") == 0 ||
+ strcmp(addrs, "none") == 0) {
+ ic_enable = 0;
return 1;
+ }
- /* Parse the whole string */
+ /* Parse string for static IP assignment. */
ip = addrs;
while (ip && *ip) {
if ((cp = strchr(ip, ':')))
strlcpy(user_dev_name, ip, sizeof(user_dev_name));
break;
case 6:
- ic_proto_name(ip);
+ if (ic_proto_name(ip) == 0 &&
+ ic_myaddr == NONE) {
+ ic_enable = 0;
+ }
break;
}
}
.me = THIS_MODULE,
};
+module_param_call(hashsize, nf_conntrack_set_hashsize, param_get_uint,
+ &nf_conntrack_htable_size, 0600);
+
MODULE_ALIAS("nf_conntrack-" __stringify(AF_INET));
MODULE_ALIAS("ip_conntrack");
MODULE_LICENSE("GPL");
dataoff = ip_hdrlen(skb) + sizeof(struct udphdr);
- /* Get actual SDP lenght */
+ /* Get actual SDP length */
if (ct_sip_get_info(ct, dptr, skb->len - dataoff, &matchoff,
&matchlen, POS_SDP_HEADER) > 0) {
int hh_len;
struct iphdr *iph;
struct sk_buff *skb;
+ unsigned int iphlen;
int err;
if (length > rt->u.dst.dev->mtu) {
goto error_fault;
/* We don't modify invalid header */
- if (length >= sizeof(*iph) && iph->ihl * 4U <= length) {
+ iphlen = iph->ihl * 4;
+ if (iphlen >= sizeof(*iph) && iphlen <= length) {
if (!iph->saddr)
iph->saddr = rt->rt_src;
iph->check = 0;
break;
rcu_read_unlock_bh();
}
- return r;
+ return rcu_dereference(r);
}
static struct rtable *rt_cache_get_next(struct seq_file *seq, struct rtable *r)
{
- struct rt_cache_iter_state *st = rcu_dereference(seq->private);
+ struct rt_cache_iter_state *st = seq->private;
r = r->u.dst.rt_next;
while (!r) {
rcu_read_lock_bh();
r = rt_hash_table[st->bucket].chain;
}
- return r;
+ return rcu_dereference(r);
}
static struct rtable *rt_cache_get_idx(struct seq_file *seq, loff_t pos)
int idx, s_idx;
s_h = cb->args[0];
+ if (s_h < 0)
+ s_h = 0;
s_idx = idx = cb->args[1];
- for (h = 0; h <= rt_hash_mask; h++) {
- if (h < s_h) continue;
- if (h > s_h)
- s_idx = 0;
+ for (h = s_h; h <= rt_hash_mask; h++) {
rcu_read_lock_bh();
for (rt = rcu_dereference(rt_hash_table[h].chain), idx = 0; rt;
rt = rcu_dereference(rt->u.dst.rt_next), idx++) {
dst_release(xchg(&skb->dst, NULL));
}
rcu_read_unlock_bh();
+ s_idx = 0;
}
done:
u32 cnt = 0;
u32 reord = tp->packets_out;
s32 seq_rtt = -1;
+ s32 ca_seq_rtt = -1;
ktime_t last_ackt = net_invalid_timestamp();
while ((skb = tcp_write_queue_head(sk)) && skb != tcp_send_head(sk)) {
u32 packets_acked;
u8 sacked = scb->sacked;
+ /* Determine how many packets and what bytes were acked (TSO or otherwise) */
if (after(scb->end_seq, tp->snd_una)) {
if (tcp_skb_pcount(skb) == 1 ||
!after(tp->snd_una, scb->seq))
if (sacked & TCPCB_SACKED_RETRANS)
tp->retrans_out -= packets_acked;
flag |= FLAG_RETRANS_DATA_ACKED;
+ ca_seq_rtt = -1;
seq_rtt = -1;
if ((flag & FLAG_DATA_ACKED) ||
(packets_acked > 1))
flag |= FLAG_NONHEAD_RETRANS_ACKED;
} else {
+ ca_seq_rtt = now - scb->when;
+ last_ackt = skb->tstamp;
if (seq_rtt < 0) {
- seq_rtt = now - scb->when;
- if (fully_acked)
- last_ackt = skb->tstamp;
+ seq_rtt = ca_seq_rtt;
}
if (!(sacked & TCPCB_SACKED_ACKED))
reord = min(cnt, reord);
!before(end_seq, tp->snd_up))
tp->urg_mode = 0;
} else {
+ ca_seq_rtt = now - scb->when;
+ last_ackt = skb->tstamp;
if (seq_rtt < 0) {
- seq_rtt = now - scb->when;
- if (fully_acked)
- last_ackt = skb->tstamp;
+ seq_rtt = ca_seq_rtt;
}
reord = min(cnt, reord);
}
net_invalid_timestamp()))
rtt_us = ktime_us_delta(ktime_get_real(),
last_ackt);
- else if (seq_rtt > 0)
- rtt_us = jiffies_to_usecs(seq_rtt);
+ else if (ca_seq_rtt > 0)
+ rtt_us = jiffies_to_usecs(ca_seq_rtt);
}
ca_ops->pkts_acked(sk, pkts_acked, rtt_us);
goto out;
}
sk->sk_bound_dev_if = usin->sin6_scope_id;
- if (!sk->sk_bound_dev_if &&
- (addr_type & IPV6_ADDR_MULTICAST))
- fl.oif = np->mcast_oif;
}
+ if (!sk->sk_bound_dev_if && (addr_type & IPV6_ADDR_MULTICAST))
+ sk->sk_bound_dev_if = np->mcast_oif;
+
/* Connect to link-local address requires an interface */
if (!sk->sk_bound_dev_if) {
err = -EINVAL;
}
err = icmpv6_push_pending_frames(sk, &fl, &tmp_hdr, len + sizeof(struct icmp6hdr));
- ICMP6_INC_STATS_BH(idev, ICMP6_MIB_OUTMSGS);
-
out_put:
if (likely(idev != NULL))
in6_dev_put(idev);
sk2->sk_family == PF_INET6 &&
ipv6_addr_equal(&tw6->tw_v6_daddr, saddr) &&
ipv6_addr_equal(&tw6->tw_v6_rcv_saddr, daddr) &&
- sk2->sk_bound_dev_if == sk->sk_bound_dev_if) {
+ (!sk2->sk_bound_dev_if || sk2->sk_bound_dev_if == dif)) {
if (twsk_unique(sk, sk2, twp))
goto unique;
else
* optimistic addresses, but we may send the solicitation
* if we don't include the sllao. So here we check
* if our address is optimistic, and if so, we
- * supress the inclusion of the sllao.
+ * suppress the inclusion of the sllao.
*/
if (send_sllao) {
struct inet6_ifaddr *ifp = ipv6_get_ifaddr(saddr, dev, 1);
memcpy(eui64 + 5, eth_hdr(skb)->h_source + 3, 3);
eui64[3] = 0xff;
eui64[4] = 0xfe;
- eui64[0] |= 0x02;
+ eui64[0] ^= 0x02;
i = 0;
while (ipv6_hdr(skb)->saddr.s6_addr[8 + i] == eui64[i]
[ICMPV6_PKT_TOOBIG] = "PktTooBigs",
[ICMPV6_TIME_EXCEED] = "TimeExcds",
[ICMPV6_PARAMPROB] = "ParmProblems",
- [ICMPV6_ECHO_REQUEST] = "EchoRequest",
+ [ICMPV6_ECHO_REQUEST] = "Echos",
[ICMPV6_ECHO_REPLY] = "EchoReplies",
[ICMPV6_MGM_QUERY] = "GroupMembQueries",
[ICMPV6_MGM_REPORT] = "GroupMembResponses",
[NDISC_ROUTER_SOLICITATION] = "RouterSolicits",
[NDISC_NEIGHBOUR_ADVERTISEMENT] = "NeighborAdvertisements",
[NDISC_NEIGHBOUR_SOLICITATION] = "NeighborSolicits",
- [NDISC_REDIRECT] = "NeighborRedirects",
+ [NDISC_REDIRECT] = "Redirects",
};
static inline int rt6_check_neigh(struct rt6_info *rt)
{
struct neighbour *neigh = rt->rt6i_nexthop;
- int m = 0;
+ int m;
if (rt->rt6i_flags & RTF_NONEXTHOP ||
!(rt->rt6i_flags & RTF_GATEWAY))
m = 1;
read_lock_bh(&neigh->lock);
if (neigh->nud_state & NUD_VALID)
m = 2;
- else if (!(neigh->nud_state & NUD_FAILED))
+#ifdef CONFIG_IPV6_ROUTER_PREF
+ else if (neigh->nud_state & NUD_FAILED)
+ m = 0;
+#endif
+ else
m = 1;
read_unlock_bh(&neigh->lock);
- }
+ } else
+ m = 0;
return m;
}
}
#endif /* CONFIG_IRDA_ULTRA */
+ self->ias_obj = irias_new_object(addr->sir_name, jiffies);
+ if (self->ias_obj == NULL)
+ return -ENOMEM;
+
err = irda_open_tsap(self, addr->sir_lsap_sel, addr->sir_name);
- if (err < 0)
+ if (err < 0) {
+ kfree(self->ias_obj->name);
+ kfree(self->ias_obj);
return err;
+ }
/* Register with LM-IAS */
- self->ias_obj = irias_new_object(addr->sir_name, jiffies);
irias_add_integer_attrib(self->ias_obj, "IrDA:TinyTP:LsapSel",
self->stsap_sel, IAS_KERNEL_ATTR);
irias_insert_object(self->ias_obj);
self->max_sdu_size_rx = TTP_SAR_UNBOUND;
break;
default:
- IRDA_ERROR("%s: protocol not supported!\n",
- __FUNCTION__);
return -ESOCKTNOSUPPORT;
}
break;
struct irda_ias_set *ias_opt;
struct ias_object *ias_obj;
struct ias_attrib * ias_attr; /* Attribute in IAS object */
- int opt;
+ int opt, free_ias = 0;
IRDA_DEBUG(2, "%s(%p)\n", __FUNCTION__, self);
/* Create a new object */
ias_obj = irias_new_object(ias_opt->irda_class_name,
jiffies);
+ if (ias_obj == NULL) {
+ kfree(ias_opt);
+ return -ENOMEM;
+ }
+ free_ias = 1;
}
/* Do we have the attribute already ? */
if(irias_find_attrib(ias_obj, ias_opt->irda_attrib_name)) {
kfree(ias_opt);
+ if (free_ias) {
+ kfree(ias_obj->name);
+ kfree(ias_obj);
+ }
return -EINVAL;
}
if(ias_opt->attribute.irda_attrib_octet_seq.len >
IAS_MAX_OCTET_STRING) {
kfree(ias_opt);
+ if (free_ias) {
+ kfree(ias_obj->name);
+ kfree(ias_obj);
+ }
+
return -EINVAL;
}
/* Add an octet sequence attribute */
break;
default :
kfree(ias_opt);
+ if (free_ias) {
+ kfree(ias_obj->name);
+ kfree(ias_obj);
+ }
return -EINVAL;
}
irias_insert_object(ias_obj);
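The IrDA hunks above hand-free a partially constructed IAS object on every error exit, guarded by free_ias so that a pre-existing object found in the registry is never freed. When the error exits multiply like this, the conventional kernel alternative is a single goto-based unwind; a hedged sketch, with illustrative labels, of how the same cleanup could be centralized:

	/* Sketch only: equivalent unwind for the error paths above.  The
	 * object owns a duplicated name string, so both must be freed,
	 * and only a freshly created object may be freed at all. */
	ias_obj = irias_new_object(name, jiffies);
	if (!ias_obj) {
		err = -ENOMEM;
		goto out_free_opt;
	}
	if (irias_find_attrib(ias_obj, attrib_name)) {
		err = -EINVAL;
		goto out_free_obj;
	}
	/* ... add the attribute and insert the object ... */
	return 0;

out_free_obj:
	kfree(ias_obj->name);
	kfree(ias_obj);
out_free_opt:
	kfree(ias_opt);
	return err;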
IRDA_ASSERT(self != NULL, return -1;);
IRDA_ASSERT(self->magic == IRCOMM_TTY_MAGIC, return -1;);
- /* Poll parameters are always of lenght 0 (just a signal) */
+ /* Poll parameters are always of length 0 (just a signal) */
if (!get) {
/* Respond with DTE line settings */
ircomm_param_request(self, IRCOMM_DTE, TRUE);
if (dev->flags & IFF_PROMISC) {
/* Enable promiscuous mode */
- IRDA_WARNING("Promiscous mode not implemented by IrLAN!\n");
+ IRDA_WARNING("Promiscuous mode not implemented by IrLAN!\n");
}
else if ((dev->flags & IFF_ALLMULTI) || dev->mc_count > HW_MAX_ADDRS) {
/* Disable promiscuous mode, use normal mode. */
frame->control = SNRM_CMD | PF_BIT;
/*
- * If we are establishing a connection then insert QoS paramerters
+ * If we are establishing a connection then insert QoS parameters
*/
if (qos) {
skb_put(tx_skb, 9); /* 25 left */
int err;
p.pi = pi; /* In case handler needs to know */
- p.pl = type & PV_MASK; /* The integer type codes the lenght as well */
+ p.pl = type & PV_MASK; /* The integer type codes the length as well */
p.pv.i = 0; /* Clear value */
/* Call handler for this parameter */
return err;
/*
- * If parameter lenght is still 0, then (1) this is an any length
+ * If parameter length is still 0, then (1) this is an any length
* integer, and (2) the handler function does not care which length
* we choose to use, so we pick the one that gives the fewest bytes.
*/
{
irda_param_t p;
int n = 0;
- int extract_len; /* Real lenght we extract */
+ int extract_len; /* Real length we extract */
int err;
p.pi = pi; /* In case handler needs to know */
- p.pl = buf[1]; /* Extract lenght of value */
+ p.pl = buf[1]; /* Extract length of value */
p.pv.i = 0; /* Clear value */
extract_len = p.pl; /* Default : extract all */
IRDA_DEBUG(2, "%s()\n", __FUNCTION__);
p.pi = pi; /* In case handler needs to know */
- p.pl = buf[1]; /* Extract lenght of value */
+ p.pl = buf[1]; /* Extract length of value */
IRDA_DEBUG(2, "%s(), pi=%#x, pl=%d\n", __FUNCTION__,
p.pi, p.pl);
irda_param_t p;
p.pi = pi; /* In case handler needs to know */
- p.pl = buf[1]; /* Extract lenght of value */
+ p.pl = buf[1]; /* Extract length of value */
/* Check if buffer is long enough for parsing */
if (len < (2+p.pl)) {
skb_reserve(newskb, 1);
if(docopy) {
- /* Copy data without CRC (lenght already checked) */
+ /* Copy data without CRC (length already checked) */
skb_copy_to_linear_data(newskb, rx_buff->data,
rx_buff->len - 2);
/* Deliver this skb */
static inline int aalg_tmpl_set(struct xfrm_tmpl *t, struct xfrm_algo_desc *d)
{
- return t->aalgos & (1 << d->desc.sadb_alg_id);
+ unsigned int id = d->desc.sadb_alg_id;
+
+ if (id >= sizeof(t->aalgos) * 8)
+ return 0;
+
+ return (t->aalgos >> id) & 1;
}
static inline int ealg_tmpl_set(struct xfrm_tmpl *t, struct xfrm_algo_desc *d)
{
- return t->ealgos & (1 << d->desc.sadb_alg_id);
+ unsigned int id = d->desc.sadb_alg_id;
+
+ if (id >= sizeof(t->ealgos) * 8)
+ return 0;
+
+ return (t->ealgos >> id) & 1;
}
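Both template helpers above now reject algorithm IDs that are as wide as, or wider than, the bitmask before shifting: in C, a shift count greater than or equal to the width of the operand's type is undefined behavior, so the unguarded `1 << id` could return anything for a hostile sadb_alg_id. The pattern in isolation, as a self-contained program:

	#include <stdio.h>

	/* Restatement of the bit-test guard above: a shift count that is
	 * >= the width of the mask type is undefined behavior in C, so
	 * it must be rejected before shifting. */
	static int bit_is_set(unsigned int mask, unsigned int id)
	{
		if (id >= sizeof(mask) * 8)	/* would be UB: treat as "not set" */
			return 0;
		return (mask >> id) & 1;
	}

	int main(void)
	{
		printf("%d\n", bit_is_set(0x5, 2));	/* 1 */
		printf("%d\n", bit_is_set(0x5, 40));	/* 0, instead of UB */
		return 0;
	}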
static int count_ah_combs(struct xfrm_tmpl *t)
/* old ipsecrequest */
int mode = pfkey_mode_from_xfrm(mp->mode);
if (mode < 0)
- return -EINVAL;
+ goto err;
if (set_ipsecrequest(skb, mp->proto, mode,
(mp->reqid ? IPSEC_LEVEL_UNIQUE : IPSEC_LEVEL_REQUIRE),
mp->reqid, mp->old_family,
- &mp->old_saddr, &mp->old_daddr) < 0) {
- return -EINVAL;
- }
+ &mp->old_saddr, &mp->old_daddr) < 0)
+ goto err;
/* new ipsecrequest */
if (set_ipsecrequest(skb, mp->proto, mode,
(mp->reqid ? IPSEC_LEVEL_UNIQUE : IPSEC_LEVEL_REQUIRE),
mp->reqid, mp->new_family,
- &mp->new_saddr, &mp->new_daddr) < 0) {
- return -EINVAL;
- }
+ &mp->new_saddr, &mp->new_daddr) < 0)
+ goto err;
}
/* broadcast migrate message to sockets */
pfkey_broadcast(skb, GFP_ATOMIC, BROADCAST_ALL, NULL);
return 0;
+
+err:
+ kfree_skb(skb);
+ return -EINVAL;
}
#else
static int pfkey_send_migrate(struct xfrm_selector *sel, u8 dir, u8 type,
void ieee80211_if_setup(struct net_device *dev)
{
ether_setup(dev);
- dev->header_ops = &ieee80211_header_ops;
dev->hard_start_xmit = ieee80211_subif_start_xmit;
dev->wireless_handlers = &ieee80211_iw_handler_def;
dev->set_multicast_list = ieee80211_set_multicast_list;
sdata->bss->force_unicast_rateidx = -1;
if (rate->value < 0)
return 0;
- for (i=0; i< mode->num_rates; i++) {
+ for (i=0; i < mode->num_rates; i++) {
struct ieee80211_rate *rates = &mode->rates[i];
int this_rate = rates->rate;
sdata->bss->max_ratectrl_rateidx = i;
if (rate->fixed)
sdata->bss->force_unicast_rateidx = i;
- break;
+ return 0;
}
}
- return 0;
+ return -EINVAL;
}
static int ieee80211_ioctl_giwrate(struct net_device *dev,
list_for_each_entry(alg, &rate_ctrl_algs, list) {
if (alg->ops == ops) {
list_del(&alg->list);
+ kfree(alg);
break;
}
}
mutex_unlock(&rate_ctrl_mutex);
- kfree(alg);
}
EXPORT_SYMBOL(ieee80211_rate_control_unregister);
sta_info_put(sta);
}
if (disassoc) {
- union iwreq_data wrqu;
- memset(wrqu.ap_addr.sa_data, 0, ETH_ALEN);
- wrqu.ap_addr.sa_family = ARPHRD_ETHER;
- wireless_send_event(dev, SIOCGIWAP, &wrqu, NULL);
- mod_timer(&ifsta->timer, jiffies +
- IEEE80211_MONITORING_INTERVAL + 30 * HZ);
+ ifsta->state = IEEE80211_DISABLED;
+ ieee80211_set_associated(dev, ifsta, 0);
} else {
mod_timer(&ifsta->timer, jiffies +
IEEE80211_MONITORING_INTERVAL);
struct ieee80211_sub_if_data *prev = NULL;
struct sk_buff *skb_new;
u8 *bssid;
+ int hdrlen;
/*
* key references and virtual interfaces are protected using RCU
rx.fc = le16_to_cpu(hdr->frame_control);
type = rx.fc & IEEE80211_FCTL_FTYPE;
+ /*
+ * Drivers are required to align the payload data to a four-byte
+ * boundary, so the last two bits of the address where it starts
+ * must not be set. The header is required to be directly before
+ * the payload data; padding between the 802.11 header and the
+ * payload, such as some Atheros hardware inserts, is not
+ * supported, and the driver is required to move the 802.11
+ * header further back in that case.
+ */
+ hdrlen = ieee80211_get_hdrlen(rx.fc);
+ WARN_ON_ONCE(((unsigned long)(skb->data + hdrlen)) & 3);
+
if (type == IEEE80211_FTYPE_DATA || type == IEEE80211_FTYPE_MGMT)
local->dot11ReceivedFragmentCount++;
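Given the alignment contract spelled out in the comment above, a driver whose hardware deposits pad bytes between the 802.11 header and the payload must close that gap before passing the frame up. A hedged sketch of the usual fixup; the pad computation is hardware-specific and only illustrative here:

	/* Sketch only: 'pad' is however many bytes the hardware inserted
	 * between the 802.11 header and the payload. */
	int hdrlen = ieee80211_get_hdrlen(fc);

	if (pad) {
		memmove(skb->data + pad, skb->data, hdrlen);	/* slide header up to the payload */
		skb_pull(skb, pad);				/* drop the now-leading pad bytes */
	}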
#include <linux/slab.h>
#include <linux/skbuff.h>
#include <linux/if_arp.h>
+#include <linux/timer.h>
#include <net/mac80211.h>
#include "ieee80211_i.h"
}
read_unlock_bh(&local->sta_lock);
- local->sta_cleanup.expires = jiffies + STA_INFO_CLEANUP_INTERVAL;
+ local->sta_cleanup.expires =
+ round_jiffies(jiffies + STA_INFO_CLEANUP_INTERVAL);
add_timer(&local->sta_cleanup);
}
INIT_LIST_HEAD(&local->sta_list);
init_timer(&local->sta_cleanup);
- local->sta_cleanup.expires = jiffies + STA_INFO_CLEANUP_INTERVAL;
+ local->sta_cleanup.expires =
+ round_jiffies(jiffies + STA_INFO_CLEANUP_INTERVAL);
local->sta_cleanup.data = (unsigned long) local;
local->sta_cleanup.function = sta_info_cleanup;
}
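round_jiffies() nudges an expiry time to a whole-second boundary so that unrelated once-a-second housekeeping timers tend to fire together and the CPU spends longer stretches idle; as the two hunks show, both the initial arming and every re-arm need the rounding. The complete arming sequence, consolidated for illustration:

	/* Sketch only: rounded periodic cleanup timer as set up above. */
	init_timer(&local->sta_cleanup);
	local->sta_cleanup.data     = (unsigned long) local;
	local->sta_cleanup.function = sta_info_cleanup;
	local->sta_cleanup.expires  =
		round_jiffies(jiffies + STA_INFO_CLEANUP_INTERVAL);
	add_timer(&local->sta_cleanup);

	/* ...and in the callback, re-arm with the same rounding: */
	mod_timer(&local->sta_cleanup,
		  round_jiffies(jiffies + STA_INFO_CLEANUP_INTERVAL));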
EXPORT_SYMBOL_GPL(nf_ct_alloc_hashtable);
-int set_hashsize(const char *val, struct kernel_param *kp)
+int nf_conntrack_set_hashsize(const char *val, struct kernel_param *kp)
{
int i, bucket, hashsize, vmalloced;
int old_vmalloced, old_size;
nf_ct_free_hashtable(old_hash, old_vmalloced, old_size);
return 0;
}
+EXPORT_SYMBOL_GPL(nf_conntrack_set_hashsize);
-module_param_call(hashsize, set_hashsize, param_get_uint,
+module_param_call(hashsize, nf_conntrack_set_hashsize, param_get_uint,
&nf_conntrack_htable_size, 0600);
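module_param_call() binds the hashsize parameter to the custom setter, so a write to the parameter file under /sys/module/.../parameters resizes the table at runtime, while reads go through the stock param_get_uint. A hedged sketch of the same wiring for a generic parameter, with illustrative names:

	/* Sketch only: module parameter with a custom setter, matching the
	 * 2.6-era set/get prototypes used above. */
	static unsigned int my_size = 1024;

	static int my_set_size(const char *val, struct kernel_param *kp)
	{
		int ret = param_set_uint(val, kp);	/* parse into *kp->arg */

		if (ret < 0)
			return ret;
		/* ... rebuild whatever depends on the new value here ... */
		return 0;
	}

	module_param_call(size, my_set_size, param_get_uint, &my_size, 0600);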
int __init nf_conntrack_init(void)
}
};
-/* get line lenght until first CR or LF seen. */
+/* get line length until first CR or LF seen. */
int ct_sip_lnlen(const char *line, const char *limit)
{
const char *k = line;
return len;
}
-/* get digits lenght, skiping blank spaces. */
+/* get digits length, skipping blank spaces. */
static int skp_digits_len(struct nf_conn *ct, const char *dptr,
const char *limit, int *shift)
{
if (info->name[0] == '\0')
ret = !ret;
else
- ret ^= !strncmp(master_help->helper->name, info->name,
- strlen(master_help->helper->name));
+ ret ^= !strncmp(helper->name, info->name,
+ strlen(helper->name));
return ret;
}
};
/*
- * NetLabel Misc Managment Functions
+ * NetLabel Misc Management Functions
*/
/**
/* Spoof incoming device */
skb->dev = dev;
- skb_reset_mac_header(skb);
+ skb->mac_header = skb->network_header;
skb_reset_network_header(skb);
skb->pkt_type = PACKET_HOST;
rfkill_led_trigger_register(rfkill);
error = rfkill_add_switch(rfkill);
- if (error)
+ if (error) {
+ rfkill_led_trigger_unregister(rfkill);
return error;
+ }
error = device_add(dev);
if (error) {
+ rfkill_led_trigger_unregister(rfkill);
rfkill_remove_switch(rfkill);
return error;
}
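The rfkill fix above follows the general registration rule: each failure exit must undo, in reverse order, exactly the steps that have already succeeded, so the LED trigger registered first is torn down last. A hedged goto-unwind sketch of the same three steps, with illustrative labels:

	rfkill_led_trigger_register(rfkill);

	error = rfkill_add_switch(rfkill);
	if (error)
		goto out_led;

	error = device_add(dev);
	if (error)
		goto out_switch;

	return 0;

out_switch:
	rfkill_remove_switch(rfkill);
out_led:
	rfkill_led_trigger_unregister(rfkill);
	return error;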
u64 cl_vtoff; /* inter-period cumulative vt offset */
u64 cl_cvtmax; /* max child's vt in the last period */
u64 cl_cvtoff; /* cumulative cvtmax of all periods */
- u64 cl_pcvtoff; /* parent's cvtoff at initalization
+ u64 cl_pcvtoff; /* parent's cvtoff at initialization
time */
struct internal_sc cl_rsc; /* internal real-time service curve */
chunksize = sizeof(init) + addrs_len + SCTP_SAT_LEN(num_types);
chunksize += sizeof(ecap_param);
+ if (sctp_prsctp_enable)
+ chunksize += sizeof(prsctp_param);
+
/* ADDIP: Section 4.2.7:
* An implementation supporting this extension [ADDIP] MUST list
* the ASCONF,the ASCONF-ACK, and the AUTH chunks in its INIT and
sctp_addto_chunk(retval, sizeof(ecap_param), &ecap_param);
- /* Add the supported extensions paramter. Be nice and add this
+ /* Add the supported extensions parameter. Be nice and add this
* first before adding the parameters for the extensions themselves
*/
if (num_ext) {
if (asoc->peer.ecn_capable)
chunksize += sizeof(ecap_param);
+ if (sctp_prsctp_enable)
+ chunksize += sizeof(prsctp_param);
+
if (sctp_addip_enable) {
extensions[num_ext] = SCTP_CID_ASCONF;
extensions[num_ext+1] = SCTP_CID_ASCONF_ACK;
chunk_len -= length;
/* Skip the address parameter and store a pointer to the first
- * asconf paramter.
+ * asconf parameter.
*/
length = ntohs(addr_param->v4.param_hdr.length);
asconf_param = (sctp_addip_param_t *)((void *)addr_param + length);
/* create an ASCONF_ACK chunk.
* Based on the definitions of parameters, we know that the size of
* ASCONF_ACK parameters is at most twice that of the ASCONF
- * paramters.
+ * parameters.
*/
asconf_ack = sctp_make_asconf_ack(asoc, serial, chunk_len * 2);
if (!asconf_ack)
asconf_len -= length;
/* Skip the address parameter in the last asconf sent and store a
- * pointer to the first asconf paramter.
+ * pointer to the first asconf parameter.
*/
length = ntohs(addr_param->v4.param_hdr.length);
asconf_param = (sctp_addip_param_t *)((void *)addr_param + length);
new_asoc->c.initial_tsn = asoc->c.initial_tsn;
}
-static void sctp_auth_params_populate(struct sctp_association *new_asoc,
- const struct sctp_association *asoc)
-{
- /* Only perform this if AUTH extension is enabled */
- if (!sctp_auth_enable)
- return;
-
- /* We need to provide the same parameter information as
- * was in the original INIT. This means that we need to copy
- * the HMACS, CHUNKS, and RANDOM parameter from the original
- * assocaition.
- */
- memcpy(new_asoc->c.auth_random, asoc->c.auth_random,
- sizeof(asoc->c.auth_random));
- memcpy(new_asoc->c.auth_hmacs, asoc->c.auth_hmacs,
- sizeof(asoc->c.auth_hmacs));
- memcpy(new_asoc->c.auth_chunks, asoc->c.auth_chunks,
- sizeof(asoc->c.auth_chunks));
-}
-
/*
* Compare vtag/tietag values to determine unexpected COOKIE-ECHO
* handling action.
sctp_tietags_populate(new_asoc, asoc);
- sctp_auth_params_populate(new_asoc, asoc);
-
/* B) "Z" shall respond immediately with an INIT ACK chunk. */
/* If there are errors need to be reported for unknown parameters,
ak = (struct sctp_authkey_event *)
skb_put(skb, sizeof(struct sctp_authkey_event));
- ak->auth_type = SCTP_AUTHENTICATION_EVENT;
+ ak->auth_type = SCTP_AUTHENTICATION_INDICATION;
ak->auth_flags = 0;
ak->auth_length = sizeof(struct sctp_authkey_event);
err = -EINVAL;
gss_auth->mech = gss_mech_get_by_pseudoflavor(flavor);
if (!gss_auth->mech) {
- printk(KERN_WARNING "%s: Pseudoflavor %d not found!",
+ printk(KERN_WARNING "%s: Pseudoflavor %d not found!\n",
__FUNCTION__, flavor);
goto err_free;
}
goto out;
if ( (skbn = pskb_copy(skb, GFP_ATOMIC)) == NULL){
- goto out;
+ goto output;
}
x25_transmit_link(skbn, nb);
- x25_neigh_put(nb);
rc = 1;
+output:
+ x25_neigh_put(nb);
out:
return rc;
}
if (audit_enabled == 0)
return;
- audit_buf = xfrm_audit_start(sid, auid);
+ audit_buf = xfrm_audit_start(auid, sid);
if (audit_buf == NULL)
return;
audit_log_format(audit_buf, " op=SPD-add res=%u", result);
if (audit_enabled == 0)
return;
- audit_buf = xfrm_audit_start(sid, auid);
+ audit_buf = xfrm_audit_start(auid, sid);
if (audit_buf == NULL)
return;
audit_log_format(audit_buf, " op=SPD-delete res=%u", result);
}
EXPORT_SYMBOL(km_policy_expired);
+#ifdef CONFIG_XFRM_MIGRATE
int km_migrate(struct xfrm_selector *sel, u8 dir, u8 type,
struct xfrm_migrate *m, int num_migrate)
{
return err;
}
EXPORT_SYMBOL(km_migrate);
+#endif
int km_report(u8 proto, struct xfrm_selector *sel, xfrm_address_t *addr)
{
if (audit_enabled == 0)
return;
- audit_buf = xfrm_audit_start(sid, auid);
+ audit_buf = xfrm_audit_start(auid, sid);
if (audit_buf == NULL)
return;
audit_log_format(audit_buf, " op=SAD-add res=%u",result);
if (audit_enabled == 0)
return;
- audit_buf = xfrm_audit_start(sid, auid);
+ audit_buf = xfrm_audit_start(auid, sid);
if (audit_buf == NULL)
return;
audit_log_format(audit_buf, " op=SAD-delete res=%u",result);
#include <linux/in6.h>
#endif
-static inline int alg_len(struct xfrm_algo *alg)
-{
- return sizeof(*alg) + ((alg->alg_key_len + 7) / 8);
-}
-
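The removed local alg_len() computed the attribute size as the struct header plus the key length rounded up from bits to bytes; presumably the shared xfrm_alg_len() that replaces it encodes the same formula. The rounding in isolation, as a self-contained program:

	#include <stdio.h>

	/* (bits + 7) / 8 rounds a bit count up to whole bytes, as in the
	 * removed alg_len() helper. */
	static unsigned int key_bytes(unsigned int key_bits)
	{
		return (key_bits + 7) / 8;
	}

	int main(void)
	{
		printf("%u\n", key_bytes(160));	/* 20 bytes, e.g. an HMAC-SHA1 key */
		printf("%u\n", key_bytes(7));	/* 1 byte: partial bytes round up */
		return 0;
	}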
static int verify_one_alg(struct nlattr **attrs, enum xfrm_attr_type_t type)
{
struct nlattr *rt = attrs[type];
return 0;
algp = nla_data(rt);
- if (nla_len(rt) < alg_len(algp))
+ if (nla_len(rt) < xfrm_alg_len(algp))
return -EINVAL;
switch (type) {
return -ENOSYS;
*props = algo->desc.sadb_alg_id;
- p = kmemdup(ualg, alg_len(ualg), GFP_KERNEL);
+ p = kmemdup(ualg, xfrm_alg_len(ualg), GFP_KERNEL);
if (!p)
return -ENOMEM;
NLA_PUT_U64(skb, XFRMA_LASTUSED, x->lastused);
if (x->aalg)
- NLA_PUT(skb, XFRMA_ALG_AUTH, alg_len(x->aalg), x->aalg);
+ NLA_PUT(skb, XFRMA_ALG_AUTH, xfrm_alg_len(x->aalg), x->aalg);
if (x->ealg)
- NLA_PUT(skb, XFRMA_ALG_CRYPT, alg_len(x->ealg), x->ealg);
+ NLA_PUT(skb, XFRMA_ALG_CRYPT, xfrm_alg_len(x->ealg), x->ealg);
if (x->calg)
NLA_PUT(skb, XFRMA_ALG_COMP, sizeof(*(x->calg)), x->calg);
{
size_t l = 0;
if (x->aalg)
- l += nla_total_size(alg_len(x->aalg));
+ l += nla_total_size(xfrm_alg_len(x->aalg));
if (x->ealg)
- l += nla_total_size(alg_len(x->ealg));
+ l += nla_total_size(xfrm_alg_len(x->ealg));
if (x->calg)
l += nla_total_size(sizeof(*x->calg));
if (x->encap)
continue;
break;
case set_random:
- def = (random() % cnt) + 1;
+ if (is_new)
+ def = (random() % cnt) + 1;
case set_default:
case set_yes:
case set_mod:
EXPORT_SYMBOL(cap_netlink_recv);
+/*
+ * NOTE WELL: cap_capable() cannot be used like the kernel's capable()
+ * function. That is, it has the reverse semantics: cap_capable()
+ * returns 0 when a task has a capability, but the kernel's capable()
+ * returns 1 for this case.
+ */
int cap_capable (struct task_struct *tsk, int cap)
{
/* Derived from include/linux/sched.h:capable. */
static inline int cap_inh_is_capped(void)
{
/*
- * return 1 if changes to the inheritable set are limited
- * to the old permitted set.
+ * Return 1 if changes to the inheritable set are limited
+ * to the old permitted set. That is, if the current task
+ * does *not* possess the CAP_SETPCAP capability.
*/
- return !cap_capable(current, CAP_SETPCAP);
+ return (cap_capable(current, CAP_SETPCAP) != 0);
}
#else /* ie., ndef CONFIG_SECURITY_FILE_CAPABILITIES */
struct sk_security_struct *sksec = sk->sk_security;
struct netlbl_lsm_secattr secattr;
+ netlbl_secattr_init(&secattr);
+
rc = security_netlbl_sid_to_secattr(sid, &secattr);
if (rc != 0)
- return rc;
-
+ goto sock_setsid_return;
rc = netlbl_sock_setattr(sk, &secattr);
if (rc == 0) {
spin_lock_bh(&sksec->nlbl_lock);
spin_unlock_bh(&sksec->nlbl_lock);
}
+sock_setsid_return:
+ netlbl_secattr_destroy(&secattr);
return rc;
}
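The selinux hunk above funnels every exit through one label so that netlbl_secattr_destroy() always runs once netlbl_secattr_init() has: the pair must bracket any use of the secattr, because even a failed conversion can leave allocations behind in it. A hedged sketch of the bracketing on its own:

	/* Sketch only: init/destroy must pair on every path, success or not. */
	struct netlbl_lsm_secattr secattr;
	int rc;

	netlbl_secattr_init(&secattr);

	rc = security_netlbl_sid_to_secattr(sid, &secattr);
	if (rc != 0)
		goto out;	/* still destroy: the secattr may hold allocations */
	rc = netlbl_sock_setattr(sk, &secattr);
out:
	netlbl_secattr_destroy(&secattr);
	return rc;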
int rc = -ENOENT;
struct context *ctx;
- netlbl_secattr_init(secattr);
-
if (!ss_initialized)
return 0;
rslot->number = idx;
}
+/* In a separate function to keep gcc 3.2 happy - do NOT merge this into
+ snd_mixer_oss_build_input! */
+static int snd_mixer_oss_build_test_all(struct snd_mixer_oss *mixer,
+ struct snd_mixer_oss_assign_table *ptr,
+ struct slot *slot)
+{
+ char str[64];
+ int err;
+
+ err = snd_mixer_oss_build_test(mixer, slot, ptr->name, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_GLOBAL);
+ if (err)
+ return err;
+ sprintf(str, "%s Switch", ptr->name);
+ err = snd_mixer_oss_build_test(mixer, slot, str, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_GSWITCH);
+ if (err)
+ return err;
+ sprintf(str, "%s Route", ptr->name);
+ err = snd_mixer_oss_build_test(mixer, slot, str, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_GROUTE);
+ if (err)
+ return err;
+ sprintf(str, "%s Volume", ptr->name);
+ err = snd_mixer_oss_build_test(mixer, slot, str, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_GVOLUME);
+ if (err)
+ return err;
+ sprintf(str, "%s Playback Switch", ptr->name);
+ err = snd_mixer_oss_build_test(mixer, slot, str, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_PSWITCH);
+ if (err)
+ return err;
+ sprintf(str, "%s Playback Route", ptr->name);
+ err = snd_mixer_oss_build_test(mixer, slot, str, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_PROUTE);
+ if (err)
+ return err;
+ sprintf(str, "%s Playback Volume", ptr->name);
+ err = snd_mixer_oss_build_test(mixer, slot, str, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_PVOLUME);
+ if (err)
+ return err;
+ sprintf(str, "%s Capture Switch", ptr->name);
+ err = snd_mixer_oss_build_test(mixer, slot, str, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_CSWITCH);
+ if (err)
+ return err;
+ sprintf(str, "%s Capture Route", ptr->name);
+ err = snd_mixer_oss_build_test(mixer, slot, str, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_CROUTE);
+ if (err)
+ return err;
+ sprintf(str, "%s Capture Volume", ptr->name);
+ err = snd_mixer_oss_build_test(mixer, slot, str, ptr->index,
+ SNDRV_MIXER_OSS_ITEM_CVOLUME);
+ if (err)
+ return err;
+
+ return 0;
+}
+
/*
* build an OSS mixer element.
* ptr_allocated means the entry is dynamically allocated (change via proc file).
memset(&slot, 0, sizeof(slot));
memset(slot.numid, 0xff, sizeof(slot.numid)); /* ID_UNKNOWN */
- if (snd_mixer_oss_build_test(mixer, &slot, ptr->name, ptr->index,
- SNDRV_MIXER_OSS_ITEM_GLOBAL))
- return 0;
- sprintf(str, "%s Switch", ptr->name);
- if (snd_mixer_oss_build_test(mixer, &slot, str, ptr->index,
- SNDRV_MIXER_OSS_ITEM_GSWITCH))
- return 0;
- sprintf(str, "%s Route", ptr->name);
- if (snd_mixer_oss_build_test(mixer, &slot, str, ptr->index,
- SNDRV_MIXER_OSS_ITEM_GROUTE))
- return 0;
- sprintf(str, "%s Volume", ptr->name);
- if (snd_mixer_oss_build_test(mixer, &slot, str, ptr->index,
- SNDRV_MIXER_OSS_ITEM_GVOLUME))
- return 0;
- sprintf(str, "%s Playback Switch", ptr->name);
- if (snd_mixer_oss_build_test(mixer, &slot, str, ptr->index,
- SNDRV_MIXER_OSS_ITEM_PSWITCH))
- return 0;
- sprintf(str, "%s Playback Route", ptr->name);
- if (snd_mixer_oss_build_test(mixer, &slot, str, ptr->index,
- SNDRV_MIXER_OSS_ITEM_PROUTE))
- return 0;
- sprintf(str, "%s Playback Volume", ptr->name);
- if (snd_mixer_oss_build_test(mixer, &slot, str, ptr->index,
- SNDRV_MIXER_OSS_ITEM_PVOLUME))
- return 0;
- sprintf(str, "%s Capture Switch", ptr->name);
- if (snd_mixer_oss_build_test(mixer, &slot, str, ptr->index,
- SNDRV_MIXER_OSS_ITEM_CSWITCH))
- return 0;
- sprintf(str, "%s Capture Route", ptr->name);
- if (snd_mixer_oss_build_test(mixer, &slot, str, ptr->index,
- SNDRV_MIXER_OSS_ITEM_CROUTE))
- return 0;
- sprintf(str, "%s Capture Volume", ptr->name);
- if (snd_mixer_oss_build_test(mixer, &slot, str, ptr->index,
- SNDRV_MIXER_OSS_ITEM_CVOLUME))
+ if (snd_mixer_oss_build_test_all(mixer, ptr, &slot))
return 0;
down_read(&mixer->card->controls_rwsem);
if (ptr->index == 0 && (kctl = snd_mixer_oss_test_id(mixer, "Capture Source", 0)) != NULL) {
This driver differs slightly from OSS/Free, so PLEASE READ the
- comments at the top of <file:drivers/sound/trident.c>.
+ comments at the top of <file:sound/oss/trident.c>.
config SOUND_MSNDCLAS
tristate "Support for Turtle Beach MultiSound Classic, Tahiti, Monterey"
questions.
Read the <file:Documentation/sound/oss/README.OSS> file and the head of
- <file:drivers/sound/aedsp16.c> as well as
+ <file:sound/oss/aedsp16.c> as well as
<file:Documentation/sound/oss/AudioExcelDSP16> to get more information
about this driver and its configuration.
spinlock_t lock;
int nresets;
unsigned long recsrc;
- int left_levels[16];
- int right_levels[16];
+ int left_levels[32];
+ int right_levels[32];
int mixer_mod_count;
int calibrate_signal;
int play_sample_size, play_sample_rate, play_channels;