MIPS: conglomerate of multiple MIPS bugfixes and improvements
author     Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
           Sat, 22 Nov 2014 01:42:05 +0000 (17:42 -0800)
committer  Raghu Gandham <raghu.gandham@imgtec.com>
           Tue, 2 Dec 2014 00:19:30 +0000 (16:19 -0800)
MIPS R6 support is built on top of 3.10 for real HW, and this patch
combines the previously missed patches. It is needed to produce a clean
and working MIPS R6 kernel.

Squashed patches:

MIPS: Expose missing pci_io{map,unmap} declarations
MIPS: kernel: mcount.S: Drop FRAME_POINTER codepath
MIPS: malta: Move defines of reset registers and values.
MIPS: malta: Remove software reset defines from generic header.
MIPS: Boot: Compressed: Remove -fstack-protector from CFLAGS
MIPS: GIC: Fix gic_set_affinity infinite loop
MIPS: Only set cpu_has_mmips if SYS_SUPPORTS_MICROMIPS
MIPS: Don't try to decode microMIPS branch instructions where they cannot exist.
MIPS: APSP: Remove <asm/kspd.h>
Revert "MIPS: make CAC_ADDR and UNCAC_ADDR account for PHYS_OFFSET"
MIPS: Malta: Update GCMP detection.
MIPS: Fix multiple definitions of UNCAC_BASE.
MIPS: R4k clock source initialization bug fix
MIPS: use generic-y where possible
MIPS: Kconfig: Drop obsolete NR_CPUS_DEFAULT_{1,2} options
MIPS: add <dt-bindings/> symlink
MIPS: Refactor boot and boot/compressed rules
MIPS: Refactor load/entry address calculations
MIPS: Add uImage build target
MIPS: Export copy_from_user_page() (needed by lustre)
MIPS: kdump: Skip walking indirection page for crashkernels
MIPS: kexec: Fix random crashes while loading crashkernel
MIPS: Fix SMP core calculations when using MT support.
MIPS: Fix accessing to per-cpu data when flushing the cache
MIPS: Fix VGA_MAP_MEM macro.
MIPS: 74K/1074K: Correct erratum workaround.
MIPS: cpu-features.h: s/MIPS53/MIPS64/
MIPS: Kconfig: CMP support needs to select SMP as well
MIPS: Remove bogus BUG_ON()
MIPS: Always register R4K clock when selected.
MIPS: bugfix of stack trace dump.
MIPS: Add printing of ES bit when cache error occurs.
MIPS: rearrange PTE bits into fixed positions for MIPS32 R2.
MIPS: removal of X bit in page tables for HEAP/BSS.
MIPS: Add -mfp64 support to FPU emulator.
MIPS: -mfp64 support for abi=o32 ELF binaries.
MIPS: FPU2 IEEE754-2008 SNaN support
MIPS: Revert fixrange_init() limiting to the FIXMAP region.
MIPS: 64bit address support on MIPS64 R2
MIPS: Add proAPTIV CPU support.
MIPS: MIPS32R2 Segment/EVA support up to 3GB
MIPS: Cache flush functions are reworked.
MIPS: Fix bug in using flush_cache_vunmap
MIPS: EVA CACHEE instruction implementation in kernel
MIPS: bugfix of mips_flush_data_cache_range
MIPS: Add interAptiv CPU support.
MIPS: EVA SMP support for Malta board
MIPS: Malta new memory map support
MIPS: Malta: check memory map type - legacy or new
MIPS: bugfix of Malta PCI bridges loop.
MIPS: BEV overlay segment location and size verification.
MIPS: MIPS32 R2 SYNC optimization
MIPS: configs: Add Malta EVA defconfig
MIPS: GIC: Send IPIs using the GIC.
MIPS: Malta: Remove ttyS2 serial.
MIPS/Perf-events: Fix 74K cache map
MIPS/Perf-events: Support proAptiv/interAptiv cores
MIPS: Fix more section mismatch warnings
MIPS: EVA: Fix encoding for the UUSK AM bits on the SegCtl registers.
MIPS: Malta: Enable DEVTMPFS
MIPS: Clean up MIPS MT platform configuration options.
MIPS: Fix forgotten preempt_enable() when CPU has inclusive pcaches
MIPS: malta: Fix GIC interrupt offsets
Malta Android default config is generated with ARCH=mips scripts/kconfig/merge_config.sh from the base Malta defconfig plus the Android base and recommended config fragments; the full invocation is shown after this list.
Input: i8042-io - Exclude mips platforms when allocating/deallocating IO regions.
MIPS: Malta buildfix of 8042 keyboard controller
Modify Malta config with the required options for booting Android
MIPS: bugfix - missed hazard barrier in xcontext setup
MIPS: proAptiv tlb exception handler improvement - remove an excessive EHB.
MIPS: printk more cpuinfo stuff
MIPS: sead3: Remove command line from SEAD-3 device tree file.
MIPS: Added Virtuoso basic support
MIPS: Cleanup of TMR and PERFCOUNT IRQ handling
MIPS: bugfix of "core" and "vpe_id" setup for MT ASE w/out GCMP
MIPS: bugfix of force-bev-location parsing
MIPS: bugfix of ebase WG bit mask
MIPS: bugfix of coherentio variable default setup
MIPS: Accelerate LDC1/SDC1 unaligned x4 emulation
MIPS: unaligned FPU failures fix
MIPS: asm: uaccess: fix EVA support for str*_user operation
MIPS: Exclude mips platforms when allocating/deallocating IO regions by i8042
MIPS: Malta emulation: avoid using BFC00000 for L2-SYNC only feature.
MIPS: futex: Use LLE and SCE for EVA
MIPS: Bugfix of address conversion between bus and kernel virtual address
MIPS: Remove a temporary hack for debugging cache flushes in SMTC configuration
MIPS: Malta: bugfix of CPU0 status masks setup for timer and perf interrupts
MIPS: bugfix of CP0 timer/GIC clockevent driver mix
MIPS: Malta: bugfix of GIC availability check.
MIPS: Added missed GIC registers definitions
MIPS: GIC registers can now be obtained via /sys FS
MIPS: bugfix of L2-SYNC only support for CM1 cores
MIPS: CM support cleanup
MIPS: Added GCR missed register definitions
MIPS: GCR registers can now be obtained via /sys FS
MIPS: Basic CPC support
MIPS: bugfix of instruction PREFX in FPU emulator
MIPS: bugfix of CONFIG_CPU_MIPSR2 usage
MIPS: bugfix of CONFIG_CPU_MIPS64 usage
MIPS: bugfix of local atomic ops in arch/mips/include/asm/local.h
MIPS: buildfix of local atomic operations
MIPS: QEMU generic CPU support is added
MIPS: Added P5600 CPU support
MIPS: bugfix - remove unconditional R6 on P5600
MIPS: bugfix of printing 64bit address in /proc/cpuinfo
MIPS: bugfix of branch-likely emulation in branch.c
MIPS: Malta: universal memory map initialisation
MIPS: bugfix: remove a double call of decode_configs for MIPS/IMG CPUs
MIPS: bugfix of FPU save/restore for MIPS32 context on MIPS64
MIPS: msc: Prevent out-of-bounds writes to MIPS SC ioremap'd region
MIPS: bugfix: missed cache flush of TLB refill handler
MIPS: PTE bit positions slightly changed to prepare a more simple swap/file presentation
MIPS: bugfix of PTE formats for swap and file entries
MIPS: bugfix of -mfp64 in signal/signal32 context save
MIPS: yet another bugfix of -mfp64 in signal/signal32 context restore

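For reference, the merge that produces malta_android_defconfig amounts to
the following (a sketch, run from the top of the kernel tree):

    $ ARCH=mips scripts/kconfig/merge_config.sh \
          arch/mips/configs/malta_defconfig \
          android/configs/android-base.cfg \
          android/configs/android-recommended.cfg
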
Signed-off-by: Leonid Yegoshin <Leonid.Yegoshin@imgtec.com>
149 files changed:
arch/mips/Kconfig
arch/mips/Makefile
arch/mips/boot/.gitignore
arch/mips/boot/Makefile
arch/mips/boot/compressed/Makefile
arch/mips/boot/dts/include/dt-bindings [new symlink]
arch/mips/configs/malta_android_defconfig [new file with mode: 0644]
arch/mips/configs/malta_defconfig
arch/mips/configs/malta_eva_defconfig [new file with mode: 0644]
arch/mips/configs/maltaaprp_defconfig
arch/mips/configs/maltasmtc_defconfig
arch/mips/configs/maltasmvp_defconfig
arch/mips/configs/maltaup_defconfig
arch/mips/include/asm/Kbuild
arch/mips/include/asm/addrspace.h
arch/mips/include/asm/asm.h
arch/mips/include/asm/asmmacro-32.h
arch/mips/include/asm/asmmacro-64.h
arch/mips/include/asm/bitops.h
arch/mips/include/asm/cacheflush.h
arch/mips/include/asm/checksum.h
arch/mips/include/asm/compat.h
arch/mips/include/asm/cpcregs.h [new file with mode: 0644]
arch/mips/include/asm/cpu-features.h
arch/mips/include/asm/cpu-info.h
arch/mips/include/asm/cpu.h
arch/mips/include/asm/cputime.h [deleted file]
arch/mips/include/asm/current.h [deleted file]
arch/mips/include/asm/elf.h
arch/mips/include/asm/emergency-restart.h [deleted file]
arch/mips/include/asm/fixmap.h
arch/mips/include/asm/fpu.h
arch/mips/include/asm/futex.h
arch/mips/include/asm/fw/fw.h
arch/mips/include/asm/gcmpregs.h
arch/mips/include/asm/gic.h
arch/mips/include/asm/io.h
arch/mips/include/asm/irq_cpu.h
arch/mips/include/asm/kspd.h [deleted file]
arch/mips/include/asm/local.h
arch/mips/include/asm/local64.h [deleted file]
arch/mips/include/asm/mach-ar7/spaces.h
arch/mips/include/asm/mach-generic/dma-coherence.h
arch/mips/include/asm/mach-generic/spaces.h
arch/mips/include/asm/mach-ip28/spaces.h
arch/mips/include/asm/mach-malta/kernel-entry-init.h
arch/mips/include/asm/mach-malta/spaces.h [new file with mode: 0644]
arch/mips/include/asm/mips-boards/generic.h
arch/mips/include/asm/mips-boards/malta.h
arch/mips/include/asm/mipsregs.h
arch/mips/include/asm/mmu_context.h
arch/mips/include/asm/mutex.h [deleted file]
arch/mips/include/asm/page.h
arch/mips/include/asm/parport.h [deleted file]
arch/mips/include/asm/percpu.h [deleted file]
arch/mips/include/asm/pgtable-32.h
arch/mips/include/asm/pgtable-64.h
arch/mips/include/asm/pgtable-bits.h
arch/mips/include/asm/r4kcache.h
arch/mips/include/asm/scatterlist.h [deleted file]
arch/mips/include/asm/sections.h [deleted file]
arch/mips/include/asm/segment.h [deleted file]
arch/mips/include/asm/serial.h [deleted file]
arch/mips/include/asm/smp.h
arch/mips/include/asm/time.h
arch/mips/include/asm/topology.h
arch/mips/include/asm/uaccess.h
arch/mips/include/asm/ucontext.h [deleted file]
arch/mips/include/asm/vga.h
arch/mips/include/asm/xor.h [deleted file]
arch/mips/include/uapi/asm/Kbuild
arch/mips/include/uapi/asm/auxvec.h [deleted file]
arch/mips/include/uapi/asm/inst.h
arch/mips/include/uapi/asm/ipcbuf.h [deleted file]
arch/mips/kernel/Makefile
arch/mips/kernel/branch.c
arch/mips/kernel/cevt-gic.c
arch/mips/kernel/cevt-r4k.c
arch/mips/kernel/cpc.c [new file with mode: 0644]
arch/mips/kernel/cpu-probe.c
arch/mips/kernel/entry.S
arch/mips/kernel/ftrace.c
arch/mips/kernel/genex.S
arch/mips/kernel/head.S
arch/mips/kernel/idle.c
arch/mips/kernel/irq-gic.c
arch/mips/kernel/irq-msc01.c
arch/mips/kernel/irq_cpu.c
arch/mips/kernel/kgdb.c
arch/mips/kernel/mcount.S
arch/mips/kernel/mips_ksyms.c
arch/mips/kernel/perf_event_mipsxx.c
arch/mips/kernel/proc.c
arch/mips/kernel/process.c
arch/mips/kernel/ptrace.c
arch/mips/kernel/r4k_fpu.S
arch/mips/kernel/r4k_switch.S
arch/mips/kernel/relocate_kernel.S
arch/mips/kernel/scall32-o32.S
arch/mips/kernel/segment.c [new file with mode: 0644]
arch/mips/kernel/setup.c
arch/mips/kernel/signal.c
arch/mips/kernel/signal32.c
arch/mips/kernel/smp-cmp.c
arch/mips/kernel/smp-mt.c
arch/mips/kernel/smp.c
arch/mips/kernel/spram.c
arch/mips/kernel/time.c
arch/mips/kernel/traps.c
arch/mips/kernel/unaligned.c
arch/mips/kernel/vpe.c
arch/mips/lantiq/irq.c
arch/mips/lasat/image/Makefile
arch/mips/lib/csum_partial.S
arch/mips/lib/dump_tlb.c
arch/mips/lib/memcpy.S
arch/mips/lib/memset.S
arch/mips/lib/r3k_dump_tlb.c
arch/mips/lib/strlen_user.S
arch/mips/lib/strncpy_user.S
arch/mips/lib/strnlen_user.S
arch/mips/math-emu/cp1emu.c
arch/mips/math-emu/ieee754.h
arch/mips/math-emu/ieee754dp.c
arch/mips/math-emu/ieee754int.h
arch/mips/math-emu/ieee754sp.c
arch/mips/math-emu/kernel_linkage.c
arch/mips/mm/c-octeon.c
arch/mips/mm/c-r3k.c
arch/mips/mm/c-r4k.c
arch/mips/mm/c-tx39.c
arch/mips/mm/cache.c
arch/mips/mm/dma-default.c
arch/mips/mm/init.c
arch/mips/mm/pgtable-32.c
arch/mips/mm/pgtable-64.c
arch/mips/mm/sc-mips.c
arch/mips/mm/tlb-r4k.c
arch/mips/mm/tlbex.c
arch/mips/mti-malta/malta-init.c
arch/mips/mti-malta/malta-int.c
arch/mips/mti-malta/malta-memory.c
arch/mips/mti-malta/malta-pci.c
arch/mips/mti-malta/malta-platform.c
arch/mips/mti-malta/malta-reset.c
arch/mips/mti-malta/malta-setup.c
arch/mips/mti-sead3/sead3.dts
arch/mips/oprofile/common.c
arch/mips/oprofile/op_model_mipsxx.c

index 8e566a1905ebd3a609af2c1df56d557c78b442d0..207881b632dc3780e565260e6b744cbb11c79e31 100644 (file)
@@ -27,6 +27,7 @@ config MIPS
        select HAVE_GENERIC_HARDIRQS
        select GENERIC_IRQ_PROBE
        select GENERIC_IRQ_SHOW
+       select GENERIC_PCI_IOMAP
        select HAVE_ARCH_JUMP_LABEL
        select ARCH_WANT_IPC_PARSE_VERSION
        select IRQ_FORCED_THREADING
@@ -311,6 +312,7 @@ config MIPS_MALTA
        select SWAP_IO_SPACE
        select SYS_HAS_CPU_MIPS32_R1
        select SYS_HAS_CPU_MIPS32_R2
+       select SYS_HAS_CPU_MIPS32_R2_EVA
        select SYS_HAS_CPU_MIPS64_R1
        select SYS_HAS_CPU_MIPS64_R2
        select SYS_HAS_CPU_NEVADA
@@ -325,6 +327,7 @@ config MIPS_MALTA
        select SYS_SUPPORTS_SMARTMIPS
        select SYS_SUPPORTS_ZBOOT
        select SYS_SUPPORTS_HIGHMEM
+       select PM_SLEEP
        help
          This enables support for the MIPS Technologies Malta evaluation
          board.
@@ -608,7 +611,6 @@ config SIBYTE_SWARM
        select BOOT_ELF32
        select DMA_COHERENT
        select HAVE_PATA_PLATFORM
-       select NR_CPUS_DEFAULT_2
        select SIBYTE_SB1250
        select SWAP_IO_SPACE
        select SYS_HAS_CPU_SB1
@@ -622,7 +624,6 @@ config SIBYTE_LITTLESUR
        select BOOT_ELF32
        select DMA_COHERENT
        select HAVE_PATA_PLATFORM
-       select NR_CPUS_DEFAULT_2
        select SIBYTE_SB1250
        select SWAP_IO_SPACE
        select SYS_HAS_CPU_SB1
@@ -634,7 +635,6 @@ config SIBYTE_SENTOSA
        bool "Sibyte BCM91250E-Sentosa"
        select BOOT_ELF32
        select DMA_COHERENT
-       select NR_CPUS_DEFAULT_2
        select SIBYTE_SB1250
        select SWAP_IO_SPACE
        select SYS_HAS_CPU_SB1
@@ -1495,6 +1495,47 @@ config CPU_XLP
          Netlogic Microsystems XLP processors.
 endchoice
 
+config CPU_MIPS32_R2_EVA
+       bool "MIPS32 Release 2 with EVA support"
+       depends on SYS_HAS_CPU_MIPS32_R2_EVA
+       depends on CPU_MIPS32_R2
+       select EVA
+       help
+         Choose this option to build a kernel for release 2 or later of the
+         MIPS32 architecture running in EVA mode. EVA stands for Extended
+         Virtual Addressing, but in practice it allows extended direct
+         physical memory addressing in the kernel (more than 512MB - up to
+         2GB or 3GB). If you know the specific type of processor in your
+         system, choose that one; otherwise CPU_MIPS32_R1 is a safe bet
+         for any MIPS32 system.
+         If unsure, select just CPU_MIPS32_R2 or even CPU_MIPS32_R1.
+
+config EVA_OLD_MALTA_MAP
+       bool "Old memory map on Malta (sys controller 1.418)"
+       depends on EVA
+       help
+         Choose this option to build an EVA kernel for the old Malta
+         memory map. All memory is located above 0x80000000 and the first
+         256MB is mirrored into the first 0x80000000 of the address space.
+         The IOCU does not work with this option. It is designed for
+         systems with a RocIt system controller 1.418/1.424 and is kept
+         mainly for MTI testing purposes (1.424 can also be used with the
+         new memory map). It may or may not work with SMP - the address
+         aliasing is problematic for YAMON.
+
+config EVA_3GB
+       bool "EVA support for 3GB memory"
+       depends on EVA
+       depends on EVA_OLD_MALTA_MAP
+       help
+         Choose this option to build an EVA kernel supporting up to 3GB of
+         physical memory. This option shifts uncacheable I/O registers from
+         KSEG1 to KSEG3, which becomes uncacheable, so that KSEG1 (plus
+         KSEG0) can be used for an additional 1GB of physical memory. To
+         minimize changes in drivers and code, the same name (KSEG1) will
+         still be used, but its address will change. The physical I/O
+         address remains the same. On a Malta board with the old memory map
+         this does not actually give you 3GB (because of the PCI bridge
+         loop), but it can be used as a starting point for development.
+
 if CPU_LOONGSON2F
 config CPU_NOP_WORKAROUNDS
        bool
@@ -1575,6 +1616,9 @@ config SYS_HAS_CPU_MIPS32_R1
 config SYS_HAS_CPU_MIPS32_R2
        bool
 
+config SYS_HAS_CPU_MIPS32_R2_EVA
+       bool
+
 config SYS_HAS_CPU_MIPS64_R1
        bool
 
@@ -1684,6 +1728,9 @@ config CPU_MIPSR2
        bool
        default y if CPU_MIPS32_R2 || CPU_MIPS64_R2 || CPU_CAVIUM_OCTEON
 
+config EVA
+       bool
+
 config SYS_SUPPORTS_32BIT_KERNEL
        bool
 config SYS_SUPPORTS_64BIT_KERNEL
@@ -1882,60 +1929,49 @@ choice
        prompt "MIPS MT options"
 
 config MIPS_MT_DISABLED
-       bool "Disable multithreading support."
+       bool "Disable multithreading support"
        help
-         Use this option if your workload can't take advantage of
-         MIPS hardware multithreading support.  On systems that don't have
-         the option of an MT-enabled processor this option will be the only
-         option in this menu.
+         Use this option if your platform does not support the MT ASE,
+         i.e. hardware multithreading. On systems without an MT-enabled
+         processor, this will be the only option available in this menu.
 
 config MIPS_MT_SMP
        bool "Use 1 TC on each available VPE for SMP"
        depends on SYS_SUPPORTS_MULTITHREADING
        select CPU_MIPSR2_IRQ_VI
        select CPU_MIPSR2_IRQ_EI
+       select SYNC_R4K
        select MIPS_MT
-       select NR_CPUS_DEFAULT_2
        select SMP
-       select SYS_SUPPORTS_SCHED_SMT if SMP
-       select SYS_SUPPORTS_SMP
        select SMP_UP
+       select SYS_SUPPORTS_SMP
+       select SYS_SUPPORTS_SCHED_SMT
        select MIPS_PERF_SHARED_TC_COUNTERS
        help
-         This is a kernel model which is known a VSMP but lately has been
-         marketesed into SMVP.
-         Virtual SMP uses the processor's VPEs  to implement virtual
-         processors. In currently available configuration of the 34K processor
-         this allows for a dual processor. Both processors will share the same
-         primary caches; each will obtain the half of the TLB for it's own
-         exclusive use. For a layman this model can be described as similar to
-         what Intel calls Hyperthreading.
-
-         For further information see http://www.linux-mips.org/wiki/34K#VSMP
+         This is a kernel model known as SMVP. It is supported on cores
+         with the MT ASE and uses the available VPEs to implement virtual
+         processors that support SMP. This is equivalent to Intel's
+         Hyper-Threading feature. For further information go to
+         <http://www.imgtec.com/mips/mips-multithreading.asp>.
 
 config MIPS_MT_SMTC
-       bool "SMTC: Use all TCs on all VPEs for SMP"
+       bool "Use all TCs on all VPEs for SMP (DEPRECATED)"
        depends on CPU_MIPS32_R2
        #depends on CPU_MIPS64_R2               # once there is hardware ...
        depends on SYS_SUPPORTS_MULTITHREADING
        select CPU_MIPSR2_IRQ_VI
        select CPU_MIPSR2_IRQ_EI
        select MIPS_MT
-       select NR_CPUS_DEFAULT_8
        select SMP
-       select SYS_SUPPORTS_SMP
        select SMP_UP
+       select SYS_SUPPORTS_SMP
+       select NR_CPUS_DEFAULT_8
        help
-         This is a kernel model which is known a SMTC or lately has been
-         marketesed into SMVP.
-         is presenting the available TC's of the core as processors to Linux.
-         On currently available 34K processors this means a Linux system will
-         see up to 5 processors. The implementation of the SMTC kernel differs
-         significantly from VSMP and cannot efficiently coexist in the same
-         kernel binary so the choice between VSMP and SMTC is a compile time
-         decision.
-
-         For further information see http://www.linux-mips.org/wiki/34K#SMTC
+         This is a kernel model which is known as SMTC. This is
+         supported on cores with the MT ASE and presents all TCs
+         available on all VPEs to support SMP. For further
+         information see <http://www.linux-mips.org/wiki/34K#SMTC>.
 
 endchoice
 
@@ -1962,6 +1998,16 @@ config MIPS_MT_FPAFF
        default y
        depends on MIPS_MT_SMP || MIPS_MT_SMTC
 
+config MIPS_INCOMPATIBLE_FPU_EMULATION
+       bool "Emulation of incompatible FPU"
+       default n
+       depends on !CPU_MIPS32_R2 && !CPU_MIPS64_R1 && !CPU_MIPS64_R2
+       help
+         Emulation of 32x32-bit or 32x64-bit FPU ELF binaries on an
+         incompatible FPU. The CP0_Status.FR bit controls the switch
+         between the two models, but some CPUs may not have this
+         capability.
+         If unsure, say N here.
+
 config MIPS_VPE_LOADER
        bool "VPE loader support."
        depends on SYS_SUPPORTS_MULTITHREADING
@@ -2012,16 +2058,13 @@ config MIPS_VPE_APSP_API
        help
 
 config MIPS_CMP
-       bool "MIPS CMP framework support"
-       depends on SYS_SUPPORTS_MIPS_CMP
+       bool "MIPS CMP support"
+       depends on SYS_SUPPORTS_MIPS_CMP && !MIPS_MT_SMTC
        select SYNC_R4K
-       select SYS_SUPPORTS_SMP
-       select SYS_SUPPORTS_SCHED_SMT if SMP
        select WEAK_ORDERING
        default n
        help
-         This is a placeholder option for the GCMP work. It will need to
-         be handled differently...
+         Enable Coherency Manager processor (CMP) support.
 
 config SB1_PASS_1_WORKAROUNDS
        bool
@@ -2117,6 +2160,7 @@ config HIGHMEM
        bool "High Memory Support"
        depends on 32BIT && CPU_SUPPORTS_HIGHMEM && SYS_SUPPORTS_HIGHMEM
        depends on ( !SMP || NR_CPUS = 1 || NR_CPUS = 2 || NR_CPUS = 3 || NR_CPUS = 4 || NR_CPUS = 5 || NR_CPUS = 6 || NR_CPUS = 7 || NR_CPUS = 8 )
+       depends on !CPU_MIPS32_R2_EVA
 
 config CPU_SUPPORTS_HIGHMEM
        bool
@@ -2207,12 +2251,6 @@ config SYS_SUPPORTS_MIPS_CMP
 config SYS_SUPPORTS_SMP
        bool
 
-config NR_CPUS_DEFAULT_1
-       bool
-
-config NR_CPUS_DEFAULT_2
-       bool
-
 config NR_CPUS_DEFAULT_4
        bool
 
@@ -2230,10 +2268,8 @@ config NR_CPUS_DEFAULT_64
 
 config NR_CPUS
        int "Maximum number of CPUs (2-64)"
-       range 1 64 if NR_CPUS_DEFAULT_1
+       range 2 64
        depends on SMP
-       default "1" if NR_CPUS_DEFAULT_1
-       default "2" if NR_CPUS_DEFAULT_2
        default "4" if NR_CPUS_DEFAULT_4
        default "8" if NR_CPUS_DEFAULT_8
        default "16" if NR_CPUS_DEFAULT_16
@@ -2414,7 +2450,6 @@ config PCI
        bool "Support for PCI controller"
        depends on HW_HAS_PCI
        select PCI_DOMAINS
-       select GENERIC_PCI_IOMAP
        select NO_GENERIC_PCI_IOPORT_MAP
        help
          Find out whether you have a PCI motherboard. PCI is the name of a
@@ -2509,6 +2544,7 @@ config TRAD_SIGNALS
 config MIPS32_COMPAT
        bool "Kernel support for Linux/MIPS 32-bit binary compatibility"
        depends on 64BIT
+       default y if CPU_SUPPORTS_32BIT_KERNEL && SYS_SUPPORTS_32BIT_KERNEL
        help
          Select this option if you want Linux/MIPS 32-bit binary
          compatibility. Since all software available for Linux/MIPS is
@@ -2528,6 +2564,7 @@ config SYSVIPC_COMPAT
 config MIPS32_O32
        bool "Kernel support for o32 binaries"
        depends on MIPS32_COMPAT
+       default y if CPU_SUPPORTS_32BIT_KERNEL && SYS_SUPPORTS_32BIT_KERNEL
        help
          Select this option if you want to run o32 binaries.  These are pure
          32-bit binaries as used by the 32-bit Linux/MIPS port.  Most of
@@ -2546,6 +2583,10 @@ config MIPS32_N32
 
          If unsure, say N.
 
+comment "64-bit kernel, but support for 32-bit applications is disabled!"
+       depends on 64BIT && !MIPS32_O32 && !MIPS32_N32
+       depends on CPU_SUPPORTS_32BIT_KERNEL && SYS_SUPPORTS_32BIT_KERNEL
+
 config BINFMT_ELF32
        bool
        default y if MIPS32_O32 || MIPS32_N32
index dd58a04ef4bca5dc4a6604d283722aeb428e861c..d2acf38fab04908f6fb9e2bc025b25d4d503cef7 100644 (file)
@@ -194,6 +194,8 @@ include $(srctree)/arch/mips/Kbuild.platforms
 ifdef CONFIG_PHYSICAL_START
 load-y                                 = $(CONFIG_PHYSICAL_START)
 endif
+entry-y                                = 0x$(shell $(NM) vmlinux 2>/dev/null \
+                                       | grep "\bkernel_entry\b" | cut -f1 -d \ )
 
 cflags-y                       += -I$(srctree)/arch/mips/include/asm/mach-generic
 drivers-$(CONFIG_PCI)          += arch/mips/pci/
@@ -225,6 +227,9 @@ KBUILD_CFLAGS       += $(cflags-y)
 KBUILD_CPPFLAGS += -DVMLINUX_LOAD_ADDRESS=$(load-y)
 KBUILD_CPPFLAGS += -DDATAOFFSET=$(if $(dataoffset-y),$(dataoffset-y),0)
 
+bootvars-y     = VMLINUX_LOAD_ADDRESS=$(load-y) \
+                 VMLINUX_ENTRY_ADDRESS=$(entry-y)
+
 LDFLAGS                        += -m $(ld-emul)
 
 ifdef CONFIG_MIPS
@@ -250,9 +255,25 @@ drivers-$(CONFIG_OPROFILE) += arch/mips/oprofile/
 # suspend and hibernation support
 drivers-$(CONFIG_PM)   += arch/mips/power/
 
+# boot image targets (arch/mips/boot/)
+boot-y                 := vmlinux.bin
+boot-y                 += vmlinux.ecoff
+boot-y                 += vmlinux.srec
+ifeq ($(shell expr $(load-y) \< 0xffffffff80000000 2> /dev/null), 0)
+boot-y                 += uImage
+boot-y                 += uImage.gz
+endif
+
+# compressed boot image targets (arch/mips/boot/compressed/)
+bootz-y                        := vmlinuz
+bootz-y                        += vmlinuz.bin
+bootz-y                        += vmlinuz.ecoff
+bootz-y                        += vmlinuz.srec
+
 ifdef CONFIG_LASAT
 rom.bin rom.sw: vmlinux
-       $(Q)$(MAKE) $(build)=arch/mips/lasat/image $@
+       $(Q)$(MAKE) $(build)=arch/mips/lasat/image \
+               $(bootvars-y) $@
 endif
 
 #
@@ -276,13 +297,14 @@ vmlinux.64: vmlinux
 all:   $(all-y)
 
 # boot
-vmlinux.bin vmlinux.ecoff vmlinux.srec: $(vmlinux-32) FORCE
-       $(Q)$(MAKE) $(build)=arch/mips/boot VMLINUX=$(vmlinux-32) arch/mips/boot/$@
+$(boot-y): $(vmlinux-32) FORCE
+       $(Q)$(MAKE) $(build)=arch/mips/boot VMLINUX=$(vmlinux-32) \
+               $(bootvars-y) arch/mips/boot/$@
 
 # boot/compressed
-vmlinuz vmlinuz.bin vmlinuz.ecoff vmlinuz.srec: $(vmlinux-32) FORCE
+$(bootz-y): $(vmlinux-32) FORCE
        $(Q)$(MAKE) $(build)=arch/mips/boot/compressed \
-          VMLINUX_LOAD_ADDRESS=$(load-y) 32bit-bfd=$(32bit-bfd) $@
+               $(bootvars-y) 32bit-bfd=$(32bit-bfd) $@
 
 
 CLEAN_FILES += vmlinux.32 vmlinux.64
@@ -319,6 +341,8 @@ define archhelp
        echo '  vmlinuz.ecoff        - ECOFF zboot image'
        echo '  vmlinuz.bin          - Raw binary zboot image'
        echo '  vmlinuz.srec         - SREC zboot image'
+       echo '  uImage               - U-Boot image'
+       echo '  uImage.gz            - U-Boot image (gzip)'
        echo
        echo '  These will be default as appropriate for a configured platform.'
 endef
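
With the uImage targets above wired into archhelp, building a U-Boot image
looks roughly like this (a sketch; assumes a configured Malta tree, a
mips-linux-gnu cross-toolchain, and U-Boot's mkimage on PATH):

    $ make ARCH=mips CROSS_COMPILE=mips-linux-gnu- malta_defconfig
    $ make ARCH=mips CROSS_COMPILE=mips-linux-gnu- uImage
    # entry-y above is derived from the kernel_entry symbol; it can be
    # checked by hand with:
    $ mips-linux-gnu-nm vmlinux | grep '\bkernel_entry\b'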
index f210b09ececcf0fe9ab0ee282da0f9332c050394..a73d6e2c4f64fe55e033c05e0ec4aa1eed8eed4e 100644 (file)
@@ -4,3 +4,4 @@ vmlinux.*
 zImage
 zImage.tmp
 calc_vmlinuz_load_addr
+uImage
index 851261e9fdc0a948bb1bfbb9e1f930a0ff8beec1..1466c00260936c7e387877c8c9e296d430cd6240 100644 (file)
@@ -40,3 +40,18 @@ quiet_cmd_srec = OBJCOPY $@
       cmd_srec = $(OBJCOPY) -S -O srec $(strip-flags) $(VMLINUX) $@
 $(obj)/vmlinux.srec: $(VMLINUX) FORCE
        $(call if_changed,srec)
+
+UIMAGE_LOADADDR  = $(VMLINUX_LOAD_ADDRESS)
+UIMAGE_ENTRYADDR = $(VMLINUX_ENTRY_ADDRESS)
+
+$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
+       $(call if_changed,gzip)
+
+targets += uImage.gz
+$(obj)/uImage.gz: $(obj)/vmlinux.bin.gz FORCE
+       $(call if_changed,uimage,gzip)
+
+targets += uImage
+$(obj)/uImage: $(obj)/uImage.gz FORCE
+       @ln -sf $(notdir $<) $@
+       @echo '  Image $@ is ready'
index bbaa1d4beb6df4661b11c661ec7f066c5dc36652..0048c08978965428a32a03bf211c3f86909c12fb 100644 (file)
@@ -18,12 +18,14 @@ BOOT_HEAP_SIZE := 0x400000
 # Disable Function Tracer
 KBUILD_CFLAGS := $(shell echo $(KBUILD_CFLAGS) | sed -e "s/-pg//")
 
+KBUILD_CFLAGS := $(filter-out -fstack-protector, $(KBUILD_CFLAGS))
+
 KBUILD_CFLAGS := $(LINUXINCLUDE) $(KBUILD_CFLAGS) -D__KERNEL__ \
        -DBOOT_HEAP_SIZE=$(BOOT_HEAP_SIZE) -D"VMLINUX_LOAD_ADDRESS_ULL=$(VMLINUX_LOAD_ADDRESS)ull"
 
 KBUILD_AFLAGS := $(LINUXINCLUDE) $(KBUILD_AFLAGS) -D__ASSEMBLY__ \
        -DBOOT_HEAP_SIZE=$(BOOT_HEAP_SIZE) \
-       -DKERNEL_ENTRY=0x$(shell $(NM) $(objtree)/$(KBUILD_IMAGE) 2>/dev/null | grep " kernel_entry" | cut -f1 -d \ )
+       -DKERNEL_ENTRY=$(VMLINUX_ENTRY_ADDRESS)
 
 targets := head.o decompress.o dbg.o uart-16550.o uart-alchemy.o
 
diff --git a/arch/mips/boot/dts/include/dt-bindings b/arch/mips/boot/dts/include/dt-bindings
new file mode 120000 (symlink)
index 0000000..08c00e4
--- /dev/null
@@ -0,0 +1 @@
+../../../../../include/dt-bindings
\ No newline at end of file
diff --git a/arch/mips/configs/malta_android_defconfig b/arch/mips/configs/malta_android_defconfig
new file mode 100644 (file)
index 0000000..ec3635d
--- /dev/null
@@ -0,0 +1,2914 @@
+#
+# Automatically generated file; DO NOT EDIT.
+# Linux/mips 3.10.14 Kernel Configuration
+#
+CONFIG_MIPS=y
+
+#
+# Machine selection
+#
+CONFIG_ZONE_DMA=y
+# CONFIG_MIPS_ALCHEMY is not set
+# CONFIG_AR7 is not set
+# CONFIG_ATH79 is not set
+# CONFIG_BCM47XX is not set
+# CONFIG_BCM63XX is not set
+# CONFIG_MIPS_COBALT is not set
+# CONFIG_MACH_DECSTATION is not set
+# CONFIG_MACH_JAZZ is not set
+# CONFIG_MACH_JZ4740 is not set
+# CONFIG_LANTIQ is not set
+# CONFIG_LASAT is not set
+# CONFIG_MACH_LOONGSON is not set
+# CONFIG_MACH_LOONGSON1 is not set
+CONFIG_MIPS_MALTA=y
+# CONFIG_MIPS_SEAD3 is not set
+# CONFIG_NEC_MARKEINS is not set
+# CONFIG_MACH_VR41XX is not set
+# CONFIG_NXP_STB220 is not set
+# CONFIG_NXP_STB225 is not set
+# CONFIG_PMC_MSP is not set
+# CONFIG_POWERTV is not set
+# CONFIG_RALINK is not set
+# CONFIG_SGI_IP22 is not set
+# CONFIG_SGI_IP27 is not set
+# CONFIG_SGI_IP28 is not set
+# CONFIG_SGI_IP32 is not set
+# CONFIG_SIBYTE_CRHINE is not set
+# CONFIG_SIBYTE_CARMEL is not set
+# CONFIG_SIBYTE_CRHONE is not set
+# CONFIG_SIBYTE_RHONE is not set
+# CONFIG_SIBYTE_SWARM is not set
+# CONFIG_SIBYTE_LITTLESUR is not set
+# CONFIG_SIBYTE_SENTOSA is not set
+# CONFIG_SIBYTE_BIGSUR is not set
+# CONFIG_SNI_RM is not set
+# CONFIG_MACH_TX39XX is not set
+# CONFIG_MACH_TX49XX is not set
+# CONFIG_MIKROTIK_RB532 is not set
+# CONFIG_WR_PPMC is not set
+# CONFIG_CAVIUM_OCTEON_SIMULATOR is not set
+# CONFIG_CAVIUM_OCTEON_REFERENCE_BOARD is not set
+# CONFIG_NLM_XLR_BOARD is not set
+# CONFIG_NLM_XLP_BOARD is not set
+# CONFIG_ALCHEMY_GPIO_INDIRECT is not set
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+# CONFIG_ARCH_HAS_ILOG2_U32 is not set
+# CONFIG_ARCH_HAS_ILOG2_U64 is not set
+CONFIG_GENERIC_HWEIGHT=y
+CONFIG_GENERIC_CALIBRATE_DELAY=y
+CONFIG_SCHED_OMIT_FRAME_POINTER=y
+CONFIG_ARCH_MAY_HAVE_PC_FDC=y
+CONFIG_BOOT_RAW=y
+CONFIG_CEVT_R4K=y
+# CONFIG_CEVT_GIC is not set
+CONFIG_CSRC_R4K=y
+CONFIG_CSRC_GIC=y
+# CONFIG_ARCH_DMA_ADDR_T_64BIT is not set
+CONFIG_DMA_NONCOHERENT=y
+CONFIG_NEED_DMA_MAP_STATE=y
+CONFIG_SYS_HAS_EARLY_PRINTK=y
+CONFIG_I8259=y
+CONFIG_MIPS_BONITO64=y
+CONFIG_MIPS_MSC=y
+CONFIG_SYNC_R4K=y
+# CONFIG_MIPS_MACHINE is not set
+# CONFIG_NO_IOPORT is not set
+CONFIG_GENERIC_ISA_DMA=y
+CONFIG_ISA_DMA_API=y
+# CONFIG_CPU_BIG_ENDIAN is not set
+CONFIG_CPU_LITTLE_ENDIAN=y
+CONFIG_SYS_SUPPORTS_BIG_ENDIAN=y
+CONFIG_SYS_SUPPORTS_LITTLE_ENDIAN=y
+# CONFIG_MIPS_HUGE_TLB_SUPPORT is not set
+CONFIG_IRQ_CPU=y
+CONFIG_IRQ_GIC=y
+CONFIG_PCI_GT64XXX_PCI0=y
+CONFIG_SWAP_IO_SPACE=y
+CONFIG_BOOT_ELF32=y
+CONFIG_MIPS_L1_CACHE_SHIFT=6
+
+#
+# CPU selection
+#
+# CONFIG_CPU_MIPS32_R1 is not set
+CONFIG_CPU_MIPS32_R2=y
+# CONFIG_CPU_MIPS64_R1 is not set
+# CONFIG_CPU_MIPS64_R2 is not set
+# CONFIG_CPU_NEVADA is not set
+# CONFIG_CPU_RM7000 is not set
+# CONFIG_CPU_MIPS32_R2_EVA is not set
+CONFIG_SYS_SUPPORTS_ZBOOT=y
+CONFIG_SYS_HAS_CPU_MIPS32_R1=y
+CONFIG_SYS_HAS_CPU_MIPS32_R2=y
+CONFIG_SYS_HAS_CPU_MIPS32_R2_EVA=y
+CONFIG_SYS_HAS_CPU_MIPS64_R1=y
+CONFIG_SYS_HAS_CPU_MIPS64_R2=y
+CONFIG_SYS_HAS_CPU_NEVADA=y
+CONFIG_SYS_HAS_CPU_RM7000=y
+CONFIG_WEAK_ORDERING=y
+CONFIG_CPU_MIPS32=y
+CONFIG_CPU_MIPSR2=y
+CONFIG_SYS_SUPPORTS_32BIT_KERNEL=y
+CONFIG_SYS_SUPPORTS_64BIT_KERNEL=y
+CONFIG_CPU_SUPPORTS_32BIT_KERNEL=y
+CONFIG_HARDWARE_WATCHPOINTS=y
+
+#
+# Kernel type
+#
+CONFIG_32BIT=y
+CONFIG_PAGE_SIZE_4KB=y
+# CONFIG_PAGE_SIZE_16KB is not set
+# CONFIG_PAGE_SIZE_64KB is not set
+CONFIG_FORCE_MAX_ZONEORDER=11
+CONFIG_BOARD_SCACHE=y
+CONFIG_MIPS_CPU_SCACHE=y
+CONFIG_CPU_HAS_PREFETCH=y
+CONFIG_CPU_GENERIC_DUMP_TLB=y
+CONFIG_CPU_R4K_FPU=y
+CONFIG_CPU_R4K_CACHE_TLB=y
+# CONFIG_MIPS_MT_DISABLED is not set
+CONFIG_MIPS_MT_SMP=y
+# CONFIG_MIPS_MT_SMTC is not set
+CONFIG_MIPS_MT=y
+# CONFIG_SCHED_SMT is not set
+CONFIG_SYS_SUPPORTS_SCHED_SMT=y
+CONFIG_SYS_SUPPORTS_MULTITHREADING=y
+CONFIG_MIPS_MT_FPAFF=y
+CONFIG_MIPS_CMP=y
+# CONFIG_ARCH_PHYS_ADDR_T_64BIT is not set
+# CONFIG_CPU_HAS_SMARTMIPS is not set
+CONFIG_CPU_MIPSR2_IRQ_VI=y
+CONFIG_CPU_MIPSR2_IRQ_EI=y
+CONFIG_CPU_HAS_SYNC=y
+# CONFIG_HIGHMEM is not set
+CONFIG_CPU_SUPPORTS_HIGHMEM=y
+CONFIG_SYS_SUPPORTS_HIGHMEM=y
+CONFIG_SYS_SUPPORTS_SMARTMIPS=y
+CONFIG_ARCH_FLATMEM_ENABLE=y
+CONFIG_HW_PERF_EVENTS=y
+CONFIG_FLATMEM=y
+CONFIG_FLAT_NODE_MEM_MAP=y
+CONFIG_HAVE_MEMBLOCK=y
+CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
+CONFIG_ARCH_DISCARD_MEMBLOCK=y
+# CONFIG_HAVE_BOOTMEM_INFO_NODE is not set
+CONFIG_PAGEFLAGS_EXTENDED=y
+CONFIG_SPLIT_PTLOCK_CPUS=4
+CONFIG_COMPACTION=y
+CONFIG_MIGRATION=y
+# CONFIG_PHYS_ADDR_T_64BIT is not set
+CONFIG_ZONE_DMA_FLAG=1
+CONFIG_BOUNCE=y
+CONFIG_VIRT_TO_BUS=y
+CONFIG_KSM=y
+CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
+CONFIG_CROSS_MEMORY_ATTACH=y
+# CONFIG_CLEANCACHE is not set
+# CONFIG_FRONTSWAP is not set
+CONFIG_SMP=y
+CONFIG_SMP_UP=y
+CONFIG_SYS_SUPPORTS_MIPS_CMP=y
+CONFIG_SYS_SUPPORTS_SMP=y
+CONFIG_NR_CPUS=8
+CONFIG_MIPS_PERF_SHARED_TC_COUNTERS=y
+# CONFIG_HZ_48 is not set
+CONFIG_HZ_100=y
+# CONFIG_HZ_128 is not set
+# CONFIG_HZ_250 is not set
+# CONFIG_HZ_256 is not set
+# CONFIG_HZ_1000 is not set
+# CONFIG_HZ_1024 is not set
+CONFIG_SYS_SUPPORTS_ARBIT_HZ=y
+CONFIG_HZ=100
+# CONFIG_PREEMPT_NONE is not set
+# CONFIG_PREEMPT_VOLUNTARY is not set
+CONFIG_PREEMPT=y
+CONFIG_PREEMPT_COUNT=y
+# CONFIG_KEXEC is not set
+# CONFIG_CRASH_DUMP is not set
+CONFIG_SECCOMP=y
+CONFIG_LOCKDEP_SUPPORT=y
+CONFIG_STACKTRACE_SUPPORT=y
+CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
+CONFIG_IRQ_WORK=y
+CONFIG_BUILDTIME_EXTABLE_SORT=y
+
+#
+# General setup
+#
+CONFIG_INIT_ENV_ARG_LIMIT=32
+CONFIG_CROSS_COMPILE=""
+CONFIG_LOCALVERSION=""
+CONFIG_LOCALVERSION_AUTO=y
+CONFIG_HAVE_KERNEL_GZIP=y
+CONFIG_HAVE_KERNEL_BZIP2=y
+CONFIG_HAVE_KERNEL_LZMA=y
+CONFIG_HAVE_KERNEL_LZO=y
+CONFIG_KERNEL_GZIP=y
+# CONFIG_KERNEL_BZIP2 is not set
+# CONFIG_KERNEL_LZMA is not set
+# CONFIG_KERNEL_LZO is not set
+CONFIG_DEFAULT_HOSTNAME="(none)"
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+CONFIG_SYSVIPC_SYSCTL=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_FHANDLE is not set
+# CONFIG_AUDIT is not set
+CONFIG_HAVE_GENERIC_HARDIRQS=y
+
+#
+# IRQ subsystem
+#
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
+CONFIG_GENERIC_IRQ_SHOW=y
+CONFIG_IRQ_FORCED_THREADING=y
+CONFIG_GENERIC_CLOCKEVENTS=y
+CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
+CONFIG_GENERIC_CMOS_UPDATE=y
+
+#
+# Timers subsystem
+#
+CONFIG_TICK_ONESHOT=y
+CONFIG_NO_HZ_COMMON=y
+# CONFIG_HZ_PERIODIC is not set
+CONFIG_NO_HZ_IDLE=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+
+#
+# CPU/Task time and stats accounting
+#
+CONFIG_TICK_CPU_ACCOUNTING=y
+# CONFIG_BSD_PROCESS_ACCT is not set
+# CONFIG_TASKSTATS is not set
+
+#
+# RCU Subsystem
+#
+CONFIG_TREE_PREEMPT_RCU=y
+CONFIG_PREEMPT_RCU=y
+CONFIG_RCU_STALL_COMMON=y
+CONFIG_RCU_FANOUT=32
+CONFIG_RCU_FANOUT_LEAF=16
+# CONFIG_RCU_FANOUT_EXACT is not set
+# CONFIG_RCU_FAST_NO_HZ is not set
+# CONFIG_TREE_RCU_TRACE is not set
+# CONFIG_RCU_BOOST is not set
+# CONFIG_RCU_NOCB_CPU is not set
+# CONFIG_IKCONFIG is not set
+CONFIG_LOG_BUF_SHIFT=15
+CONFIG_CGROUPS=y
+CONFIG_CGROUP_DEBUG=y
+CONFIG_CGROUP_FREEZER=y
+# CONFIG_CGROUP_DEVICE is not set
+# CONFIG_CPUSETS is not set
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_RESOURCE_COUNTERS=y
+# CONFIG_MEMCG is not set
+# CONFIG_CGROUP_PERF is not set
+CONFIG_CGROUP_SCHED=y
+CONFIG_FAIR_GROUP_SCHED=y
+# CONFIG_CFS_BANDWIDTH is not set
+CONFIG_RT_GROUP_SCHED=y
+# CONFIG_BLK_CGROUP is not set
+# CONFIG_CHECKPOINT_RESTORE is not set
+CONFIG_NAMESPACES=y
+CONFIG_UTS_NS=y
+CONFIG_IPC_NS=y
+CONFIG_PID_NS=y
+CONFIG_NET_NS=y
+# CONFIG_SCHED_AUTOGROUP is not set
+# CONFIG_SYSFS_DEPRECATED is not set
+CONFIG_RELAY=y
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_INITRAMFS_SOURCE=""
+CONFIG_RD_GZIP=y
+# CONFIG_RD_BZIP2 is not set
+# CONFIG_RD_LZMA is not set
+# CONFIG_RD_XZ is not set
+# CONFIG_RD_LZO is not set
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SYSCTL=y
+CONFIG_ANON_INODES=y
+CONFIG_HOTPLUG=y
+CONFIG_HAVE_PCSPKR_PLATFORM=y
+CONFIG_PANIC_TIMEOUT=5
+CONFIG_EXPERT=y
+# CONFIG_SYSCTL_SYSCALL is not set
+CONFIG_KALLSYMS=y
+CONFIG_KALLSYMS_ALL=y
+CONFIG_PRINTK=y
+CONFIG_BUG=y
+CONFIG_ELF_CORE=y
+CONFIG_PCSPKR_PLATFORM=y
+CONFIG_BASE_FULL=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_SIGNALFD=y
+CONFIG_TIMERFD=y
+CONFIG_EVENTFD=y
+CONFIG_SHMEM=y
+CONFIG_AIO=y
+CONFIG_PCI_QUIRKS=y
+CONFIG_EMBEDDED=y
+CONFIG_HAVE_PERF_EVENTS=y
+CONFIG_PERF_USE_VMALLOC=y
+
+#
+# Kernel Performance Events And Counters
+#
+CONFIG_PERF_EVENTS=y
+# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
+CONFIG_VM_EVENT_COUNTERS=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_SLAB=y
+# CONFIG_SLUB is not set
+# CONFIG_SLOB is not set
+# CONFIG_PROFILING is not set
+CONFIG_HAVE_OPROFILE=y
+# CONFIG_JUMP_LABEL is not set
+# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
+CONFIG_HAVE_KPROBES=y
+CONFIG_HAVE_KRETPROBES=y
+CONFIG_HAVE_DMA_ATTRS=y
+CONFIG_USE_GENERIC_SMP_HELPERS=y
+CONFIG_GENERIC_SMP_IDLE_THREAD=y
+CONFIG_HAVE_DMA_API_DEBUG=y
+CONFIG_HAVE_ARCH_JUMP_LABEL=y
+CONFIG_ARCH_WANT_IPC_PARSE_VERSION=y
+CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
+CONFIG_CLONE_BACKWARDS=y
+
+#
+# GCOV-based kernel profiling
+#
+CONFIG_HAVE_GENERIC_DMA_COHERENT=y
+CONFIG_SLABINFO=y
+CONFIG_RT_MUTEXES=y
+CONFIG_BASE_SMALL=0
+# CONFIG_MODULES is not set
+CONFIG_BLOCK=y
+CONFIG_LBDAF=y
+CONFIG_BLK_DEV_BSG=y
+CONFIG_BLK_DEV_BSGLIB=y
+# CONFIG_BLK_DEV_INTEGRITY is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_AMIGA_PARTITION=y
+CONFIG_MSDOS_PARTITION=y
+CONFIG_EFI_PARTITION=y
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_DEFAULT_DEADLINE is not set
+CONFIG_DEFAULT_CFQ=y
+# CONFIG_DEFAULT_NOOP is not set
+CONFIG_DEFAULT_IOSCHED="cfq"
+CONFIG_UNINLINE_SPIN_UNLOCK=y
+CONFIG_MUTEX_SPIN_ON_OWNER=y
+CONFIG_FREEZER=y
+
+#
+# Bus options (PCI, PCMCIA, EISA, ISA, TC)
+#
+CONFIG_HW_HAS_PCI=y
+CONFIG_PCI=y
+CONFIG_PCI_DOMAINS=y
+# CONFIG_PCI_DEBUG is not set
+# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
+# CONFIG_PCI_STUB is not set
+# CONFIG_PCI_IOV is not set
+# CONFIG_PCI_PRI is not set
+# CONFIG_PCI_PASID is not set
+# CONFIG_PCIEPORTBUS is not set
+CONFIG_MMU=y
+CONFIG_I8253=y
+# CONFIG_PCCARD is not set
+# CONFIG_HOTPLUG_PCI is not set
+# CONFIG_RAPIDIO is not set
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_BINFMT_SCRIPT=y
+# CONFIG_HAVE_AOUT is not set
+# CONFIG_BINFMT_MISC is not set
+CONFIG_COREDUMP=y
+CONFIG_TRAD_SIGNALS=y
+
+#
+# Power management options
+#
+CONFIG_HAS_WAKELOCK=y
+CONFIG_WAKELOCK=y
+CONFIG_PM_SLEEP=y
+# CONFIG_PM_AUTOSLEEP is not set
+# CONFIG_PM_WAKELOCKS is not set
+CONFIG_PM_RUNTIME=y
+CONFIG_PM=y
+CONFIG_PM_DEBUG=y
+# CONFIG_PM_ADVANCED_DEBUG is not set
+CONFIG_PM_SLEEP_DEBUG=y
+CONFIG_SUSPEND_TIME=y
+CONFIG_MIPS_EXTERNAL_TIMER=y
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_DIAG is not set
+CONFIG_UNIX=y
+# CONFIG_UNIX_DIAG is not set
+CONFIG_XFRM=y
+CONFIG_XFRM_ALGO=y
+CONFIG_XFRM_USER=y
+# CONFIG_XFRM_SUB_POLICY is not set
+CONFIG_XFRM_MIGRATE=y
+# CONFIG_XFRM_STATISTICS is not set
+CONFIG_XFRM_IPCOMP=y
+CONFIG_NET_KEY=y
+CONFIG_NET_KEY_MIGRATE=y
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+# CONFIG_IP_FIB_TRIE_STATS is not set
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_MULTIPATH=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_ROUTE_CLASSID=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+CONFIG_NET_IPIP=y
+# CONFIG_NET_IPGRE_DEMUX is not set
+CONFIG_NET_IP_TUNNEL=y
+CONFIG_IP_MROUTE=y
+# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
+CONFIG_IP_PIMSM_V1=y
+CONFIG_IP_PIMSM_V2=y
+# CONFIG_ARPD is not set
+CONFIG_SYN_COOKIES=y
+# CONFIG_NET_IPVTI is not set
+CONFIG_INET_AH=y
+CONFIG_INET_ESP=y
+CONFIG_INET_IPCOMP=y
+CONFIG_INET_XFRM_TUNNEL=y
+CONFIG_INET_TUNNEL=y
+CONFIG_INET_XFRM_MODE_TRANSPORT=y
+CONFIG_INET_XFRM_MODE_TUNNEL=y
+CONFIG_INET_XFRM_MODE_BEET=y
+# CONFIG_INET_LRO is not set
+CONFIG_INET_DIAG=y
+CONFIG_INET_TCP_DIAG=y
+# CONFIG_INET_UDP_DIAG is not set
+# CONFIG_TCP_CONG_ADVANCED is not set
+CONFIG_TCP_CONG_CUBIC=y
+CONFIG_DEFAULT_TCP_CONG="cubic"
+CONFIG_TCP_MD5SIG=y
+CONFIG_IPV6=y
+CONFIG_IPV6_PRIVACY=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+CONFIG_IPV6_OPTIMISTIC_DAD=y
+CONFIG_INET6_AH=y
+CONFIG_INET6_ESP=y
+CONFIG_INET6_IPCOMP=y
+CONFIG_IPV6_MIP6=y
+CONFIG_INET6_XFRM_TUNNEL=y
+CONFIG_INET6_TUNNEL=y
+CONFIG_INET6_XFRM_MODE_TRANSPORT=y
+CONFIG_INET6_XFRM_MODE_TUNNEL=y
+CONFIG_INET6_XFRM_MODE_BEET=y
+# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
+CONFIG_IPV6_SIT=y
+# CONFIG_IPV6_SIT_6RD is not set
+CONFIG_IPV6_NDISC_NODETYPE=y
+CONFIG_IPV6_TUNNEL=y
+# CONFIG_IPV6_GRE is not set
+CONFIG_IPV6_MULTIPLE_TABLES=y
+# CONFIG_IPV6_SUBTREES is not set
+CONFIG_IPV6_MROUTE=y
+# CONFIG_IPV6_MROUTE_MULTIPLE_TABLES is not set
+CONFIG_IPV6_PIMSM_V2=y
+CONFIG_ANDROID_PARANOID_NETWORK=y
+CONFIG_NET_ACTIVITY_STATS=y
+CONFIG_NETWORK_SECMARK=y
+# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
+CONFIG_NETFILTER=y
+# CONFIG_NETFILTER_DEBUG is not set
+CONFIG_NETFILTER_ADVANCED=y
+CONFIG_BRIDGE_NETFILTER=y
+
+#
+# Core Netfilter Configuration
+#
+CONFIG_NETFILTER_NETLINK=y
+# CONFIG_NETFILTER_NETLINK_ACCT is not set
+CONFIG_NETFILTER_NETLINK_QUEUE=y
+CONFIG_NETFILTER_NETLINK_LOG=y
+CONFIG_NF_CONNTRACK=y
+CONFIG_NF_CONNTRACK_MARK=y
+CONFIG_NF_CONNTRACK_SECMARK=y
+CONFIG_NF_CONNTRACK_PROCFS=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+# CONFIG_NF_CONNTRACK_TIMEOUT is not set
+# CONFIG_NF_CONNTRACK_TIMESTAMP is not set
+CONFIG_NF_CT_PROTO_DCCP=y
+CONFIG_NF_CT_PROTO_GRE=y
+CONFIG_NF_CT_PROTO_SCTP=y
+CONFIG_NF_CT_PROTO_UDPLITE=y
+CONFIG_NF_CONNTRACK_AMANDA=y
+CONFIG_NF_CONNTRACK_FTP=y
+CONFIG_NF_CONNTRACK_H323=y
+CONFIG_NF_CONNTRACK_IRC=y
+CONFIG_NF_CONNTRACK_BROADCAST=y
+CONFIG_NF_CONNTRACK_NETBIOS_NS=y
+# CONFIG_NF_CONNTRACK_SNMP is not set
+CONFIG_NF_CONNTRACK_PPTP=y
+CONFIG_NF_CONNTRACK_SANE=y
+# CONFIG_NF_CONNTRACK_SIP is not set
+CONFIG_NF_CONNTRACK_TFTP=y
+CONFIG_NF_CT_NETLINK=y
+# CONFIG_NF_CT_NETLINK_TIMEOUT is not set
+# CONFIG_NETFILTER_NETLINK_QUEUE_CT is not set
+CONFIG_NETFILTER_TPROXY=y
+CONFIG_NETFILTER_XTABLES=y
+
+#
+# Xtables combined modules
+#
+CONFIG_NETFILTER_XT_MARK=y
+CONFIG_NETFILTER_XT_CONNMARK=y
+
+#
+# Xtables targets
+#
+# CONFIG_NETFILTER_XT_TARGET_CHECKSUM is not set
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=y
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=y
+# CONFIG_NETFILTER_XT_TARGET_CONNSECMARK is not set
+# CONFIG_NETFILTER_XT_TARGET_CT is not set
+# CONFIG_NETFILTER_XT_TARGET_DSCP is not set
+CONFIG_NETFILTER_XT_TARGET_HL=y
+# CONFIG_NETFILTER_XT_TARGET_HMARK is not set
+CONFIG_NETFILTER_XT_TARGET_IDLETIMER=y
+# CONFIG_NETFILTER_XT_TARGET_LOG is not set
+CONFIG_NETFILTER_XT_TARGET_MARK=y
+CONFIG_NETFILTER_XT_TARGET_NFLOG=y
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=y
+# CONFIG_NETFILTER_XT_TARGET_NOTRACK is not set
+CONFIG_NETFILTER_XT_TARGET_RATEEST=y
+# CONFIG_NETFILTER_XT_TARGET_TEE is not set
+CONFIG_NETFILTER_XT_TARGET_TPROXY=y
+CONFIG_NETFILTER_XT_TARGET_TRACE=y
+CONFIG_NETFILTER_XT_TARGET_SECMARK=y
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=y
+CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=y
+
+#
+# Xtables matches
+#
+# CONFIG_NETFILTER_XT_MATCH_ADDRTYPE is not set
+# CONFIG_NETFILTER_XT_MATCH_BPF is not set
+# CONFIG_NETFILTER_XT_MATCH_CLUSTER is not set
+CONFIG_NETFILTER_XT_MATCH_COMMENT=y
+CONFIG_NETFILTER_XT_MATCH_CONNBYTES=y
+# CONFIG_NETFILTER_XT_MATCH_CONNLABEL is not set
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=y
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
+# CONFIG_NETFILTER_XT_MATCH_CPU is not set
+CONFIG_NETFILTER_XT_MATCH_DCCP=y
+# CONFIG_NETFILTER_XT_MATCH_DEVGROUP is not set
+# CONFIG_NETFILTER_XT_MATCH_DSCP is not set
+CONFIG_NETFILTER_XT_MATCH_ECN=y
+CONFIG_NETFILTER_XT_MATCH_ESP=y
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=y
+CONFIG_NETFILTER_XT_MATCH_HELPER=y
+CONFIG_NETFILTER_XT_MATCH_HL=y
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=y
+# CONFIG_NETFILTER_XT_MATCH_IPVS is not set
+CONFIG_NETFILTER_XT_MATCH_LENGTH=y
+CONFIG_NETFILTER_XT_MATCH_LIMIT=y
+CONFIG_NETFILTER_XT_MATCH_MAC=y
+CONFIG_NETFILTER_XT_MATCH_MARK=y
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=y
+# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set
+# CONFIG_NETFILTER_XT_MATCH_OSF is not set
+CONFIG_NETFILTER_XT_MATCH_OWNER=y
+CONFIG_NETFILTER_XT_MATCH_POLICY=y
+# CONFIG_NETFILTER_XT_MATCH_PHYSDEV is not set
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA=y
+CONFIG_NETFILTER_XT_MATCH_QUOTA2=y
+CONFIG_NETFILTER_XT_MATCH_RATEEST=y
+CONFIG_NETFILTER_XT_MATCH_REALM=y
+CONFIG_NETFILTER_XT_MATCH_RECENT=y
+CONFIG_NETFILTER_XT_MATCH_SCTP=y
+CONFIG_NETFILTER_XT_MATCH_SOCKET=y
+CONFIG_NETFILTER_XT_MATCH_STATE=y
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=y
+CONFIG_NETFILTER_XT_MATCH_STRING=y
+CONFIG_NETFILTER_XT_MATCH_TCPMSS=y
+CONFIG_NETFILTER_XT_MATCH_TIME=y
+CONFIG_NETFILTER_XT_MATCH_U32=y
+# CONFIG_IP_SET is not set
+CONFIG_IP_VS=y
+CONFIG_IP_VS_IPV6=y
+# CONFIG_IP_VS_DEBUG is not set
+CONFIG_IP_VS_TAB_BITS=12
+
+#
+# IPVS transport protocol load balancing support
+#
+CONFIG_IP_VS_PROTO_TCP=y
+CONFIG_IP_VS_PROTO_UDP=y
+CONFIG_IP_VS_PROTO_AH_ESP=y
+CONFIG_IP_VS_PROTO_ESP=y
+CONFIG_IP_VS_PROTO_AH=y
+# CONFIG_IP_VS_PROTO_SCTP is not set
+
+#
+# IPVS scheduler
+#
+CONFIG_IP_VS_RR=y
+CONFIG_IP_VS_WRR=y
+CONFIG_IP_VS_LC=y
+CONFIG_IP_VS_WLC=y
+CONFIG_IP_VS_LBLC=y
+CONFIG_IP_VS_LBLCR=y
+CONFIG_IP_VS_DH=y
+CONFIG_IP_VS_SH=y
+CONFIG_IP_VS_SED=y
+CONFIG_IP_VS_NQ=y
+
+#
+# IPVS SH scheduler
+#
+CONFIG_IP_VS_SH_TAB_BITS=8
+
+#
+# IPVS application helper
+#
+# CONFIG_IP_VS_NFCT is not set
+
+#
+# IP: Netfilter Configuration
+#
+CONFIG_NF_DEFRAG_IPV4=y
+CONFIG_NF_CONNTRACK_IPV4=y
+CONFIG_NF_CONNTRACK_PROC_COMPAT=y
+CONFIG_IP_NF_IPTABLES=y
+CONFIG_IP_NF_MATCH_AH=y
+CONFIG_IP_NF_MATCH_ECN=y
+# CONFIG_IP_NF_MATCH_RPFILTER is not set
+CONFIG_IP_NF_MATCH_TTL=y
+CONFIG_IP_NF_FILTER=y
+CONFIG_IP_NF_TARGET_REJECT=y
+CONFIG_IP_NF_TARGET_REJECT_SKERR=y
+CONFIG_IP_NF_TARGET_ULOG=y
+# CONFIG_NF_NAT_IPV4 is not set
+CONFIG_IP_NF_MANGLE=y
+CONFIG_IP_NF_TARGET_CLUSTERIP=y
+CONFIG_IP_NF_TARGET_ECN=y
+CONFIG_IP_NF_TARGET_TTL=y
+CONFIG_IP_NF_RAW=y
+CONFIG_IP_NF_ARPTABLES=y
+CONFIG_IP_NF_ARPFILTER=y
+CONFIG_IP_NF_ARP_MANGLE=y
+
+#
+# IPv6: Netfilter Configuration
+#
+CONFIG_NF_DEFRAG_IPV6=y
+CONFIG_NF_CONNTRACK_IPV6=y
+CONFIG_IP6_NF_IPTABLES=y
+CONFIG_IP6_NF_MATCH_AH=y
+CONFIG_IP6_NF_MATCH_EUI64=y
+CONFIG_IP6_NF_MATCH_FRAG=y
+CONFIG_IP6_NF_MATCH_OPTS=y
+CONFIG_IP6_NF_MATCH_HL=y
+CONFIG_IP6_NF_MATCH_IPV6HEADER=y
+CONFIG_IP6_NF_MATCH_MH=y
+# CONFIG_IP6_NF_MATCH_RPFILTER is not set
+CONFIG_IP6_NF_MATCH_RT=y
+CONFIG_IP6_NF_TARGET_HL=y
+CONFIG_IP6_NF_FILTER=y
+CONFIG_IP6_NF_TARGET_REJECT=y
+CONFIG_IP6_NF_TARGET_REJECT_SKERR=y
+CONFIG_IP6_NF_MANGLE=y
+CONFIG_IP6_NF_RAW=y
+# CONFIG_NF_NAT_IPV6 is not set
+CONFIG_BRIDGE_NF_EBTABLES=y
+CONFIG_BRIDGE_EBT_BROUTE=y
+CONFIG_BRIDGE_EBT_T_FILTER=y
+CONFIG_BRIDGE_EBT_T_NAT=y
+CONFIG_BRIDGE_EBT_802_3=y
+CONFIG_BRIDGE_EBT_AMONG=y
+CONFIG_BRIDGE_EBT_ARP=y
+CONFIG_BRIDGE_EBT_IP=y
+CONFIG_BRIDGE_EBT_IP6=y
+CONFIG_BRIDGE_EBT_LIMIT=y
+CONFIG_BRIDGE_EBT_MARK=y
+CONFIG_BRIDGE_EBT_PKTTYPE=y
+CONFIG_BRIDGE_EBT_STP=y
+CONFIG_BRIDGE_EBT_VLAN=y
+CONFIG_BRIDGE_EBT_ARPREPLY=y
+CONFIG_BRIDGE_EBT_DNAT=y
+CONFIG_BRIDGE_EBT_MARK_T=y
+CONFIG_BRIDGE_EBT_REDIRECT=y
+CONFIG_BRIDGE_EBT_SNAT=y
+CONFIG_BRIDGE_EBT_LOG=y
+CONFIG_BRIDGE_EBT_ULOG=y
+CONFIG_BRIDGE_EBT_NFLOG=y
+# CONFIG_IP_DCCP is not set
+CONFIG_IP_SCTP=y
+# CONFIG_SCTP_DBG_MSG is not set
+# CONFIG_SCTP_DBG_OBJCNT is not set
+CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y
+# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set
+# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
+CONFIG_SCTP_COOKIE_HMAC_MD5=y
+# CONFIG_SCTP_COOKIE_HMAC_SHA1 is not set
+# CONFIG_RDS is not set
+# CONFIG_TIPC is not set
+# CONFIG_ATM is not set
+# CONFIG_L2TP is not set
+CONFIG_STP=y
+CONFIG_GARP=y
+CONFIG_BRIDGE=y
+CONFIG_BRIDGE_IGMP_SNOOPING=y
+# CONFIG_BRIDGE_VLAN_FILTERING is not set
+CONFIG_HAVE_NET_DSA=y
+CONFIG_VLAN_8021Q=y
+CONFIG_VLAN_8021Q_GVRP=y
+# CONFIG_VLAN_8021Q_MVRP is not set
+# CONFIG_DECNET is not set
+CONFIG_LLC=y
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+CONFIG_ATALK=y
+CONFIG_DEV_APPLETALK=y
+CONFIG_IPDDP=y
+CONFIG_IPDDP_ENCAP=y
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+CONFIG_PHONET=y
+# CONFIG_IEEE802154 is not set
+CONFIG_NET_SCHED=y
+
+#
+# Queueing/Scheduling
+#
+CONFIG_NET_SCH_CBQ=y
+CONFIG_NET_SCH_HTB=y
+CONFIG_NET_SCH_HFSC=y
+CONFIG_NET_SCH_PRIO=y
+# CONFIG_NET_SCH_MULTIQ is not set
+CONFIG_NET_SCH_RED=y
+# CONFIG_NET_SCH_SFB is not set
+CONFIG_NET_SCH_SFQ=y
+CONFIG_NET_SCH_TEQL=y
+CONFIG_NET_SCH_TBF=y
+CONFIG_NET_SCH_GRED=y
+CONFIG_NET_SCH_DSMARK=y
+CONFIG_NET_SCH_NETEM=y
+# CONFIG_NET_SCH_DRR is not set
+# CONFIG_NET_SCH_MQPRIO is not set
+# CONFIG_NET_SCH_CHOKE is not set
+# CONFIG_NET_SCH_QFQ is not set
+# CONFIG_NET_SCH_CODEL is not set
+# CONFIG_NET_SCH_FQ_CODEL is not set
+CONFIG_NET_SCH_INGRESS=y
+# CONFIG_NET_SCH_PLUG is not set
+
+#
+# Classification
+#
+CONFIG_NET_CLS=y
+CONFIG_NET_CLS_BASIC=y
+CONFIG_NET_CLS_TCINDEX=y
+CONFIG_NET_CLS_ROUTE4=y
+CONFIG_NET_CLS_FW=y
+CONFIG_NET_CLS_U32=y
+# CONFIG_CLS_U32_PERF is not set
+# CONFIG_CLS_U32_MARK is not set
+CONFIG_NET_CLS_RSVP=y
+CONFIG_NET_CLS_RSVP6=y
+CONFIG_NET_CLS_FLOW=y
+# CONFIG_NET_CLS_CGROUP is not set
+CONFIG_NET_EMATCH=y
+CONFIG_NET_EMATCH_STACK=32
+# CONFIG_NET_EMATCH_CMP is not set
+# CONFIG_NET_EMATCH_NBYTE is not set
+CONFIG_NET_EMATCH_U32=y
+# CONFIG_NET_EMATCH_META is not set
+# CONFIG_NET_EMATCH_TEXT is not set
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_POLICE=y
+CONFIG_NET_ACT_GACT=y
+CONFIG_GACT_PROB=y
+CONFIG_NET_ACT_MIRRED=y
+CONFIG_NET_ACT_IPT=y
+CONFIG_NET_ACT_NAT=y
+CONFIG_NET_ACT_PEDIT=y
+CONFIG_NET_ACT_SIMP=y
+CONFIG_NET_ACT_SKBEDIT=y
+# CONFIG_NET_ACT_CSUM is not set
+CONFIG_NET_CLS_IND=y
+CONFIG_NET_SCH_FIFO=y
+# CONFIG_DCB is not set
+# CONFIG_BATMAN_ADV is not set
+# CONFIG_OPENVSWITCH is not set
+# CONFIG_VSOCKETS is not set
+# CONFIG_NETLINK_MMAP is not set
+# CONFIG_NETLINK_DIAG is not set
+CONFIG_RPS=y
+CONFIG_RFS_ACCEL=y
+CONFIG_XPS=y
+# CONFIG_NETPRIO_CGROUP is not set
+CONFIG_BQL=y
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_CAN is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+# CONFIG_AF_RXRPC is not set
+CONFIG_FIB_RULES=y
+CONFIG_WIRELESS=y
+CONFIG_WIRELESS_EXT=y
+CONFIG_WEXT_CORE=y
+CONFIG_WEXT_PROC=y
+CONFIG_WEXT_SPY=y
+CONFIG_WEXT_PRIV=y
+CONFIG_CFG80211=y
+# CONFIG_NL80211_TESTMODE is not set
+# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
+# CONFIG_CFG80211_REG_DEBUG is not set
+# CONFIG_CFG80211_CERTIFICATION_ONUS is not set
+CONFIG_CFG80211_DEFAULT_PS=y
+# CONFIG_CFG80211_INTERNAL_REGDB is not set
+# CONFIG_CFG80211_WEXT is not set
+CONFIG_LIB80211=y
+CONFIG_LIB80211_CRYPT_WEP=y
+CONFIG_LIB80211_CRYPT_CCMP=y
+CONFIG_LIB80211_CRYPT_TKIP=y
+# CONFIG_LIB80211_DEBUG is not set
+# CONFIG_CFG80211_ALLOW_RECONNECT is not set
+CONFIG_MAC80211=y
+CONFIG_MAC80211_HAS_RC=y
+CONFIG_MAC80211_RC_PID=y
+CONFIG_MAC80211_RC_MINSTREL=y
+CONFIG_MAC80211_RC_MINSTREL_HT=y
+CONFIG_MAC80211_RC_DEFAULT_PID=y
+# CONFIG_MAC80211_RC_DEFAULT_MINSTREL is not set
+CONFIG_MAC80211_RC_DEFAULT="pid"
+CONFIG_MAC80211_MESH=y
+# CONFIG_MAC80211_LEDS is not set
+# CONFIG_MAC80211_MESSAGE_TRACING is not set
+# CONFIG_MAC80211_DEBUG_MENU is not set
+# CONFIG_WIMAX is not set
+CONFIG_RFKILL=y
+CONFIG_RFKILL_PM=y
+# CONFIG_RFKILL_INPUT is not set
+# CONFIG_NET_9P is not set
+# CONFIG_CAIF is not set
+# CONFIG_CEPH_LIB is not set
+# CONFIG_NFC is not set
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+# CONFIG_DEVTMPFS_MOUNT is not set
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+CONFIG_FW_LOADER=y
+CONFIG_FIRMWARE_IN_KERNEL=y
+CONFIG_EXTRA_FIRMWARE=""
+CONFIG_FW_LOADER_USER_HELPER=y
+# CONFIG_DEBUG_DRIVER is not set
+# CONFIG_DEBUG_DEVRES is not set
+# CONFIG_SYS_HYPERVISOR is not set
+# CONFIG_GENERIC_CPU_DEVICES is not set
+CONFIG_DMA_SHARED_BUFFER=y
+
+#
+# Bus devices
+#
+CONFIG_CONNECTOR=y
+CONFIG_PROC_EVENTS=y
+CONFIG_MTD=y
+# CONFIG_MTD_REDBOOT_PARTS is not set
+# CONFIG_MTD_CMDLINE_PARTS is not set
+# CONFIG_MTD_AR7_PARTS is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_BLKDEVS=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+# CONFIG_RFD_FTL is not set
+# CONFIG_SSFDC is not set
+# CONFIG_SM_FTL is not set
+CONFIG_MTD_OOPS=y
+# CONFIG_MTD_SWAP is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+# CONFIG_MTD_CFI_ADV_OPTIONS is not set
+CONFIG_MTD_MAP_BANK_WIDTH_1=y
+CONFIG_MTD_MAP_BANK_WIDTH_2=y
+CONFIG_MTD_MAP_BANK_WIDTH_4=y
+# CONFIG_MTD_MAP_BANK_WIDTH_8 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_16 is not set
+# CONFIG_MTD_MAP_BANK_WIDTH_32 is not set
+CONFIG_MTD_CFI_I1=y
+CONFIG_MTD_CFI_I2=y
+# CONFIG_MTD_CFI_I4 is not set
+# CONFIG_MTD_CFI_I8 is not set
+CONFIG_MTD_CFI_INTELEXT=y
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_CFI_STAA=y
+CONFIG_MTD_CFI_UTIL=y
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+
+#
+# Mapping drivers for chip access
+#
+# CONFIG_MTD_COMPLEX_MAPPINGS is not set
+CONFIG_MTD_PHYSMAP=y
+# CONFIG_MTD_PHYSMAP_COMPAT is not set
+# CONFIG_MTD_INTEL_VR_NOR is not set
+# CONFIG_MTD_PLATRAM is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_PMC551 is not set
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_PHRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLOCK2MTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOCG3 is not set
+# CONFIG_MTD_NAND_IDS is not set
+# CONFIG_MTD_NAND is not set
+# CONFIG_MTD_ONENAND is not set
+
+#
+# LPDDR flash memory drivers
+#
+# CONFIG_MTD_LPDDR is not set
+CONFIG_MTD_UBI=y
+CONFIG_MTD_UBI_WL_THRESHOLD=4096
+CONFIG_MTD_UBI_BEB_LIMIT=20
+# CONFIG_MTD_UBI_FASTMAP is not set
+CONFIG_MTD_UBI_GLUEBI=y
+# CONFIG_PARPORT is not set
+CONFIG_BLK_DEV=y
+CONFIG_BLK_DEV_FD=y
+# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+CONFIG_BLK_DEV_UMEM=y
+# CONFIG_BLK_DEV_COW_COMMON is not set
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
+CONFIG_BLK_DEV_CRYPTOLOOP=y
+# CONFIG_BLK_DEV_DRBD is not set
+CONFIG_BLK_DEV_NBD=y
+# CONFIG_BLK_DEV_NVME is not set
+# CONFIG_BLK_DEV_SX8 is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=16
+CONFIG_BLK_DEV_RAM_SIZE=8192
+# CONFIG_BLK_DEV_XIP is not set
+CONFIG_CDROM_PKTCDVD=y
+CONFIG_CDROM_PKTCDVD_BUFFERS=8
+# CONFIG_CDROM_PKTCDVD_WCACHE is not set
+CONFIG_ATA_OVER_ETH=y
+# CONFIG_BLK_DEV_HD is not set
+# CONFIG_BLK_DEV_RBD is not set
+# CONFIG_BLK_DEV_RSXX is not set
+
+#
+# Misc devices
+#
+# CONFIG_SENSORS_LIS3LV02D is not set
+# CONFIG_DUMMY_IRQ is not set
+# CONFIG_PHANTOM is not set
+# CONFIG_INTEL_MID_PTI is not set
+# CONFIG_SGI_IOC4 is not set
+# CONFIG_TIFM_CORE is not set
+# CONFIG_ATMEL_SSC is not set
+# CONFIG_ENCLOSURE_SERVICES is not set
+# CONFIG_HP_ILO is not set
+CONFIG_UID_STAT=y
+# CONFIG_PCH_PHUB is not set
+# CONFIG_SRAM is not set
+# CONFIG_C2PORT is not set
+
+#
+# EEPROM support
+#
+# CONFIG_EEPROM_93CX6 is not set
+# CONFIG_CB710_CORE is not set
+
+#
+# Texas Instruments shared transport line discipline
+#
+
+#
+# Altera FPGA firmware download module
+#
+CONFIG_HAVE_IDE=y
+CONFIG_IDE=y
+
+#
+# Please see Documentation/ide/ide.txt for help/info on IDE drives
+#
+CONFIG_IDE_XFER_MODE=y
+CONFIG_IDE_ATAPI=y
+# CONFIG_BLK_DEV_IDE_SATA is not set
+CONFIG_IDE_GD=y
+CONFIG_IDE_GD_ATA=y
+# CONFIG_IDE_GD_ATAPI is not set
+CONFIG_BLK_DEV_IDECD=y
+CONFIG_BLK_DEV_IDECD_VERBOSE_ERRORS=y
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_IDE_TASK_IOCTL is not set
+CONFIG_IDE_PROC_FS=y
+
+#
+# IDE chipset support/bugfixes
+#
+CONFIG_IDE_GENERIC=y
+# CONFIG_BLK_DEV_PLATFORM is not set
+CONFIG_BLK_DEV_IDEDMA_SFF=y
+
+#
+# PCI IDE chipsets support
+#
+CONFIG_BLK_DEV_IDEPCI=y
+CONFIG_IDEPCI_PCIBUS_ORDER=y
+# CONFIG_BLK_DEV_OFFBOARD is not set
+CONFIG_BLK_DEV_GENERIC=y
+# CONFIG_BLK_DEV_OPTI621 is not set
+CONFIG_BLK_DEV_IDEDMA_PCI=y
+# CONFIG_BLK_DEV_AEC62XX is not set
+# CONFIG_BLK_DEV_ALI15X3 is not set
+# CONFIG_BLK_DEV_AMD74XX is not set
+# CONFIG_BLK_DEV_CMD64X is not set
+# CONFIG_BLK_DEV_TRIFLEX is not set
+# CONFIG_BLK_DEV_CS5520 is not set
+# CONFIG_BLK_DEV_CS5530 is not set
+# CONFIG_BLK_DEV_HPT366 is not set
+# CONFIG_BLK_DEV_JMICRON is not set
+# CONFIG_BLK_DEV_SC1200 is not set
+CONFIG_BLK_DEV_PIIX=y
+# CONFIG_BLK_DEV_IT8172 is not set
+CONFIG_BLK_DEV_IT8213=y
+# CONFIG_BLK_DEV_IT821X is not set
+# CONFIG_BLK_DEV_NS87415 is not set
+# CONFIG_BLK_DEV_PDC202XX_OLD is not set
+# CONFIG_BLK_DEV_PDC202XX_NEW is not set
+# CONFIG_BLK_DEV_SVWKS is not set
+# CONFIG_BLK_DEV_SIIMAGE is not set
+# CONFIG_BLK_DEV_SLC90E66 is not set
+# CONFIG_BLK_DEV_TRM290 is not set
+# CONFIG_BLK_DEV_VIA82CXXX is not set
+CONFIG_BLK_DEV_TC86C001=y
+CONFIG_BLK_DEV_IDEDMA=y
+
+#
+# SCSI device support
+#
+CONFIG_SCSI_MOD=y
+CONFIG_RAID_ATTRS=y
+CONFIG_SCSI=y
+CONFIG_SCSI_DMA=y
+CONFIG_SCSI_TGT=y
+CONFIG_SCSI_NETLINK=y
+CONFIG_SCSI_PROC_FS=y
+
+#
+# SCSI support type (disk, tape, CD-ROM)
+#
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_ST=y
+CONFIG_CHR_DEV_OSST=y
+CONFIG_BLK_DEV_SR=y
+CONFIG_BLK_DEV_SR_VENDOR=y
+CONFIG_CHR_DEV_SG=y
+# CONFIG_CHR_DEV_SCH is not set
+CONFIG_SCSI_MULTI_LUN=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SCAN_ASYNC=y
+
+#
+# SCSI Transports
+#
+CONFIG_SCSI_SPI_ATTRS=y
+CONFIG_SCSI_FC_ATTRS=y
+# CONFIG_SCSI_FC_TGT_ATTRS is not set
+CONFIG_SCSI_ISCSI_ATTRS=y
+# CONFIG_SCSI_SAS_ATTRS is not set
+# CONFIG_SCSI_SAS_LIBSAS is not set
+# CONFIG_SCSI_SRP_ATTRS is not set
+CONFIG_SCSI_LOWLEVEL=y
+CONFIG_ISCSI_TCP=y
+# CONFIG_ISCSI_BOOT_SYSFS is not set
+# CONFIG_SCSI_CXGB3_ISCSI is not set
+# CONFIG_SCSI_CXGB4_ISCSI is not set
+# CONFIG_SCSI_BNX2_ISCSI is not set
+# CONFIG_SCSI_BNX2X_FCOE is not set
+# CONFIG_BE2ISCSI is not set
+CONFIG_BLK_DEV_3W_XXXX_RAID=y
+# CONFIG_SCSI_HPSA is not set
+CONFIG_SCSI_3W_9XXX=y
+# CONFIG_SCSI_3W_SAS is not set
+CONFIG_SCSI_ACARD=y
+CONFIG_SCSI_AACRAID=y
+CONFIG_SCSI_AIC7XXX=y
+CONFIG_AIC7XXX_CMDS_PER_DEVICE=32
+CONFIG_AIC7XXX_RESET_DELAY_MS=15000
+# CONFIG_AIC7XXX_DEBUG_ENABLE is not set
+CONFIG_AIC7XXX_DEBUG_MASK=0
+CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
+# CONFIG_SCSI_AIC7XXX_OLD is not set
+# CONFIG_SCSI_AIC79XX is not set
+# CONFIG_SCSI_AIC94XX is not set
+# CONFIG_SCSI_MVSAS is not set
+# CONFIG_SCSI_MVUMI is not set
+# CONFIG_SCSI_DPT_I2O is not set
+# CONFIG_SCSI_ADVANSYS is not set
+# CONFIG_SCSI_ARCMSR is not set
+# CONFIG_MEGARAID_NEWGEN is not set
+# CONFIG_MEGARAID_LEGACY is not set
+# CONFIG_MEGARAID_SAS is not set
+# CONFIG_SCSI_MPT2SAS is not set
+# CONFIG_SCSI_MPT3SAS is not set
+# CONFIG_SCSI_UFSHCD is not set
+# CONFIG_SCSI_HPTIOP is not set
+# CONFIG_SCSI_BUSLOGIC is not set
+# CONFIG_LIBFC is not set
+# CONFIG_LIBFCOE is not set
+# CONFIG_FCOE is not set
+# CONFIG_SCSI_DMX3191D is not set
+# CONFIG_SCSI_EATA is not set
+# CONFIG_SCSI_FUTURE_DOMAIN is not set
+# CONFIG_SCSI_GDTH is not set
+# CONFIG_SCSI_IPS is not set
+# CONFIG_SCSI_INITIO is not set
+# CONFIG_SCSI_INIA100 is not set
+# CONFIG_SCSI_STEX is not set
+# CONFIG_SCSI_SYM53C8XX_2 is not set
+# CONFIG_SCSI_QLOGIC_1280 is not set
+# CONFIG_SCSI_QLA_FC is not set
+# CONFIG_SCSI_QLA_ISCSI is not set
+# CONFIG_SCSI_LPFC is not set
+# CONFIG_SCSI_DC395x is not set
+# CONFIG_SCSI_DC390T is not set
+# CONFIG_SCSI_NSP32 is not set
+# CONFIG_SCSI_DEBUG is not set
+# CONFIG_SCSI_PMCRAID is not set
+# CONFIG_SCSI_PM8001 is not set
+# CONFIG_SCSI_SRP is not set
+# CONFIG_SCSI_BFA_FC is not set
+# CONFIG_SCSI_CHELSIO_FCOE is not set
+# CONFIG_SCSI_DH is not set
+# CONFIG_SCSI_OSD_INITIATOR is not set
+# CONFIG_ATA is not set
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_AUTODETECT=y
+CONFIG_MD_LINEAR=y
+CONFIG_MD_RAID0=y
+CONFIG_MD_RAID1=y
+CONFIG_MD_RAID10=y
+CONFIG_MD_RAID456=y
+CONFIG_MD_MULTIPATH=y
+CONFIG_MD_FAULTY=y
+# CONFIG_BCACHE is not set
+CONFIG_BLK_DEV_DM=y
+# CONFIG_DM_DEBUG is not set
+CONFIG_DM_CRYPT=y
+CONFIG_DM_SNAPSHOT=y
+# CONFIG_DM_THIN_PROVISIONING is not set
+# CONFIG_DM_CACHE is not set
+CONFIG_DM_MIRROR=y
+# CONFIG_DM_RAID is not set
+# CONFIG_DM_LOG_USERSPACE is not set
+CONFIG_DM_ZERO=y
+CONFIG_DM_MULTIPATH=y
+# CONFIG_DM_MULTIPATH_QL is not set
+# CONFIG_DM_MULTIPATH_ST is not set
+# CONFIG_DM_DELAY is not set
+CONFIG_DM_UEVENT=y
+# CONFIG_DM_FLAKEY is not set
+# CONFIG_DM_VERITY is not set
+# CONFIG_TARGET_CORE is not set
+# CONFIG_FUSION is not set
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_FIREWIRE is not set
+# CONFIG_FIREWIRE_NOSY is not set
+# CONFIG_I2O is not set
+CONFIG_NETDEVICES=y
+CONFIG_NET_CORE=y
+CONFIG_BONDING=y
+CONFIG_DUMMY=y
+CONFIG_EQUALIZER=y
+# CONFIG_NET_FC is not set
+CONFIG_MII=y
+CONFIG_IFB=y
+# CONFIG_NET_TEAM is not set
+CONFIG_MACVLAN=y
+# CONFIG_MACVTAP is not set
+# CONFIG_VXLAN is not set
+# CONFIG_NETCONSOLE is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+CONFIG_TUN=y
+CONFIG_VETH=y
+# CONFIG_ARCNET is not set
+
+#
+# CAIF transport drivers
+#
+
+#
+# Distributed Switch Architecture drivers
+#
+# CONFIG_NET_DSA_MV88E6XXX is not set
+# CONFIG_NET_DSA_MV88E6060 is not set
+# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
+# CONFIG_NET_DSA_MV88E6131 is not set
+# CONFIG_NET_DSA_MV88E6123_61_65 is not set
+CONFIG_ETHERNET=y
+CONFIG_MDIO=y
+# CONFIG_NET_VENDOR_3COM is not set
+CONFIG_NET_VENDOR_ADAPTEC=y
+# CONFIG_ADAPTEC_STARFIRE is not set
+CONFIG_NET_VENDOR_ALTEON=y
+# CONFIG_ACENIC is not set
+CONFIG_NET_VENDOR_AMD=y
+# CONFIG_AMD8111_ETH is not set
+CONFIG_PCNET32=y
+CONFIG_NET_VENDOR_ATHEROS=y
+# CONFIG_ATL2 is not set
+# CONFIG_ATL1 is not set
+# CONFIG_ATL1E is not set
+# CONFIG_ATL1C is not set
+# CONFIG_ALX is not set
+CONFIG_NET_CADENCE=y
+# CONFIG_ARM_AT91_ETHER is not set
+# CONFIG_MACB is not set
+CONFIG_NET_VENDOR_BROADCOM=y
+# CONFIG_B44 is not set
+# CONFIG_BNX2 is not set
+# CONFIG_CNIC is not set
+# CONFIG_TIGON3 is not set
+# CONFIG_BNX2X is not set
+CONFIG_NET_VENDOR_BROCADE=y
+# CONFIG_BNA is not set
+# CONFIG_NET_CALXEDA_XGMAC is not set
+CONFIG_NET_VENDOR_CHELSIO=y
+# CONFIG_CHELSIO_T1 is not set
+CONFIG_CHELSIO_T3=y
+# CONFIG_CHELSIO_T4 is not set
+# CONFIG_CHELSIO_T4VF is not set
+CONFIG_NET_VENDOR_CISCO=y
+# CONFIG_ENIC is not set
+# CONFIG_DM9000 is not set
+# CONFIG_DNET is not set
+CONFIG_NET_VENDOR_DEC=y
+# CONFIG_NET_TULIP is not set
+CONFIG_NET_VENDOR_DLINK=y
+# CONFIG_DL2K is not set
+# CONFIG_SUNDANCE is not set
+CONFIG_NET_VENDOR_EMULEX=y
+# CONFIG_BE2NET is not set
+CONFIG_NET_VENDOR_EXAR=y
+# CONFIG_S2IO is not set
+# CONFIG_VXGE is not set
+CONFIG_NET_VENDOR_HP=y
+# CONFIG_HP100 is not set
+CONFIG_NET_VENDOR_INTEL=y
+# CONFIG_E100 is not set
+# CONFIG_E1000 is not set
+# CONFIG_E1000E is not set
+# CONFIG_IGB is not set
+# CONFIG_IGBVF is not set
+# CONFIG_IXGB is not set
+# CONFIG_IXGBE is not set
+CONFIG_NET_VENDOR_I825XX=y
+# CONFIG_IP1000 is not set
+# CONFIG_JME is not set
+CONFIG_NET_VENDOR_MARVELL=y
+# CONFIG_MVMDIO is not set
+# CONFIG_SKGE is not set
+# CONFIG_SKY2 is not set
+CONFIG_NET_VENDOR_MELLANOX=y
+# CONFIG_MLX4_EN is not set
+# CONFIG_MLX4_CORE is not set
+CONFIG_NET_VENDOR_MICREL=y
+# CONFIG_KS8851_MLL is not set
+# CONFIG_KSZ884X_PCI is not set
+CONFIG_NET_VENDOR_MYRI=y
+# CONFIG_MYRI10GE is not set
+# CONFIG_FEALNX is not set
+CONFIG_NET_VENDOR_NATSEMI=y
+# CONFIG_NATSEMI is not set
+# CONFIG_NS83820 is not set
+CONFIG_NET_VENDOR_8390=y
+CONFIG_AX88796=y
+# CONFIG_AX88796_93CX6 is not set
+# CONFIG_NE2K_PCI is not set
+CONFIG_NET_VENDOR_NVIDIA=y
+# CONFIG_FORCEDETH is not set
+CONFIG_NET_VENDOR_OKI=y
+# CONFIG_PCH_GBE is not set
+# CONFIG_ETHOC is not set
+CONFIG_NET_PACKET_ENGINE=y
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+CONFIG_NET_VENDOR_QLOGIC=y
+# CONFIG_QLA3XXX is not set
+# CONFIG_QLCNIC is not set
+# CONFIG_QLGE is not set
+CONFIG_NETXEN_NIC=y
+CONFIG_NET_VENDOR_REALTEK=y
+# CONFIG_8139CP is not set
+# CONFIG_8139TOO is not set
+# CONFIG_R8169 is not set
+CONFIG_NET_VENDOR_RDC=y
+# CONFIG_R6040 is not set
+CONFIG_NET_VENDOR_SEEQ=y
+CONFIG_NET_VENDOR_SILAN=y
+# CONFIG_SC92031 is not set
+CONFIG_NET_VENDOR_SIS=y
+# CONFIG_SIS900 is not set
+# CONFIG_SIS190 is not set
+# CONFIG_SFC is not set
+CONFIG_NET_VENDOR_SMSC=y
+# CONFIG_SMC91X is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SMSC911X is not set
+# CONFIG_SMSC9420 is not set
+CONFIG_NET_VENDOR_STMICRO=y
+# CONFIG_STMMAC_ETH is not set
+CONFIG_NET_VENDOR_SUN=y
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_CASSINI is not set
+# CONFIG_NIU is not set
+CONFIG_NET_VENDOR_TEHUTI=y
+# CONFIG_TEHUTI is not set
+CONFIG_NET_VENDOR_TI=y
+# CONFIG_TLAN is not set
+CONFIG_NET_VENDOR_TOSHIBA=y
+CONFIG_TC35815=y
+CONFIG_NET_VENDOR_VIA=y
+# CONFIG_VIA_RHINE is not set
+# CONFIG_VIA_VELOCITY is not set
+CONFIG_NET_VENDOR_WIZNET=y
+# CONFIG_WIZNET_W5100 is not set
+# CONFIG_WIZNET_W5300 is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+CONFIG_PHYLIB=y
+
+#
+# MII PHY device drivers
+#
+# CONFIG_AT803X_PHY is not set
+# CONFIG_AMD_PHY is not set
+CONFIG_MARVELL_PHY=y
+CONFIG_DAVICOM_PHY=y
+CONFIG_QSEMI_PHY=y
+CONFIG_LXT_PHY=y
+CONFIG_CICADA_PHY=y
+CONFIG_VITESSE_PHY=y
+CONFIG_SMSC_PHY=y
+CONFIG_BROADCOM_PHY=y
+# CONFIG_BCM87XX_PHY is not set
+CONFIG_ICPLUS_PHY=y
+CONFIG_REALTEK_PHY=y
+# CONFIG_NATIONAL_PHY is not set
+# CONFIG_STE10XP is not set
+# CONFIG_LSI_ET1011C_PHY is not set
+# CONFIG_MICREL_PHY is not set
+# CONFIG_FIXED_PHY is not set
+CONFIG_MDIO_BITBANG=y
+CONFIG_PPP=y
+CONFIG_PPP_BSDCOMP=y
+CONFIG_PPP_DEFLATE=y
+# CONFIG_PPP_FILTER is not set
+CONFIG_PPP_MPPE=y
+# CONFIG_PPP_MULTILINK is not set
+# CONFIG_PPPOE is not set
+CONFIG_PPPOLAC=y
+CONFIG_PPPOPNS=y
+# CONFIG_PPP_ASYNC is not set
+# CONFIG_PPP_SYNC_TTY is not set
+# CONFIG_SLIP is not set
+CONFIG_SLHC=y
+
+#
+# USB Network Adapters
+#
+# CONFIG_USB_CATC is not set
+# CONFIG_USB_KAWETH is not set
+# CONFIG_USB_PEGASUS is not set
+# CONFIG_USB_RTL8150 is not set
+# CONFIG_USB_RTL8152 is not set
+CONFIG_USB_USBNET=y
+CONFIG_USB_NET_AX8817X=y
+CONFIG_USB_NET_AX88179_178A=y
+CONFIG_USB_NET_CDCETHER=y
+# CONFIG_USB_NET_CDC_EEM is not set
+CONFIG_USB_NET_CDC_NCM=y
+# CONFIG_USB_NET_CDC_MBIM is not set
+# CONFIG_USB_NET_DM9601 is not set
+# CONFIG_USB_NET_SMSC75XX is not set
+# CONFIG_USB_NET_SMSC95XX is not set
+# CONFIG_USB_NET_GL620A is not set
+CONFIG_USB_NET_NET1080=y
+# CONFIG_USB_NET_PLUSB is not set
+# CONFIG_USB_NET_MCS7830 is not set
+# CONFIG_USB_NET_RNDIS_HOST is not set
+CONFIG_USB_NET_CDC_SUBSET=y
+# CONFIG_USB_ALI_M5632 is not set
+# CONFIG_USB_AN2720 is not set
+CONFIG_USB_BELKIN=y
+CONFIG_USB_ARMLINUX=y
+# CONFIG_USB_EPSON2888 is not set
+# CONFIG_USB_KC2190 is not set
+CONFIG_USB_NET_ZAURUS=y
+# CONFIG_USB_NET_CX82310_ETH is not set
+# CONFIG_USB_NET_KALMIA is not set
+# CONFIG_USB_NET_QMI_WWAN is not set
+# CONFIG_USB_HSO is not set
+# CONFIG_USB_NET_INT51X1 is not set
+# CONFIG_USB_CDC_PHONET is not set
+# CONFIG_USB_IPHETH is not set
+# CONFIG_USB_SIERRA_NET is not set
+# CONFIG_USB_VL600 is not set
+CONFIG_WLAN=y
+# CONFIG_LIBERTAS_THINFIRM is not set
+# CONFIG_AIRO is not set
+CONFIG_ATMEL=y
+CONFIG_PCI_ATMEL=y
+# CONFIG_AT76C50X_USB is not set
+CONFIG_PRISM54=y
+# CONFIG_USB_ZD1201 is not set
+# CONFIG_USB_NET_RNDIS_WLAN is not set
+# CONFIG_RTL8180 is not set
+# CONFIG_RTL8187 is not set
+# CONFIG_ADM8211 is not set
+# CONFIG_MAC80211_HWSIM is not set
+# CONFIG_MWL8K is not set
+# CONFIG_WIFI_CONTROL_FUNC is not set
+# CONFIG_ATH_CARDS is not set
+# CONFIG_B43 is not set
+# CONFIG_B43LEGACY is not set
+# CONFIG_BRCMFMAC is not set
+CONFIG_HOSTAP=y
+CONFIG_HOSTAP_FIRMWARE=y
+CONFIG_HOSTAP_FIRMWARE_NVRAM=y
+CONFIG_HOSTAP_PLX=y
+CONFIG_HOSTAP_PCI=y
+CONFIG_IPW2100=y
+CONFIG_IPW2100_MONITOR=y
+# CONFIG_IPW2100_DEBUG is not set
+CONFIG_LIBIPW=y
+# CONFIG_LIBIPW_DEBUG is not set
+# CONFIG_IWLWIFI is not set
+# CONFIG_IWL4965 is not set
+# CONFIG_IWL3945 is not set
+CONFIG_LIBERTAS=y
+# CONFIG_LIBERTAS_USB is not set
+# CONFIG_LIBERTAS_DEBUG is not set
+# CONFIG_LIBERTAS_MESH is not set
+# CONFIG_P54_COMMON is not set
+# CONFIG_RT2X00 is not set
+# CONFIG_RTLWIFI is not set
+# CONFIG_WL_TI is not set
+# CONFIG_ZD1211RW is not set
+# CONFIG_MWIFIEX is not set
+
+#
+# Enable WiMAX (Networking options) to see the WiMAX drivers
+#
+# CONFIG_WAN is not set
+# CONFIG_VMXNET3 is not set
+# CONFIG_ISDN is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+CONFIG_INPUT_FF_MEMLESS=y
+# CONFIG_INPUT_POLLDEV is not set
+# CONFIG_INPUT_SPARSEKMAP is not set
+# CONFIG_INPUT_MATRIXKMAP is not set
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+CONFIG_INPUT_EVDEV=y
+# CONFIG_INPUT_EVBUG is not set
+CONFIG_INPUT_KEYRESET=y
+
+#
+# Input Device Drivers
+#
+CONFIG_INPUT_KEYBOARD=y
+CONFIG_KEYBOARD_ATKBD=y
+# CONFIG_KEYBOARD_LKKBD is not set
+# CONFIG_KEYBOARD_NEWTON is not set
+# CONFIG_KEYBOARD_OPENCORES is not set
+# CONFIG_KEYBOARD_STOWAWAY is not set
+# CONFIG_KEYBOARD_SUNKBD is not set
+# CONFIG_KEYBOARD_XTKBD is not set
+CONFIG_INPUT_MOUSE=y
+CONFIG_MOUSE_PS2=y
+CONFIG_MOUSE_PS2_ALPS=y
+CONFIG_MOUSE_PS2_LOGIPS2PP=y
+CONFIG_MOUSE_PS2_SYNAPTICS=y
+CONFIG_MOUSE_PS2_CYPRESS=y
+CONFIG_MOUSE_PS2_TRACKPOINT=y
+# CONFIG_MOUSE_PS2_ELANTECH is not set
+# CONFIG_MOUSE_PS2_SENTELIC is not set
+# CONFIG_MOUSE_PS2_TOUCHKIT is not set
+# CONFIG_MOUSE_SERIAL is not set
+# CONFIG_MOUSE_APPLETOUCH is not set
+# CONFIG_MOUSE_BCM5974 is not set
+# CONFIG_MOUSE_VSXXXAA is not set
+# CONFIG_MOUSE_SYNAPTICS_USB is not set
+CONFIG_INPUT_JOYSTICK=y
+# CONFIG_JOYSTICK_ANALOG is not set
+# CONFIG_JOYSTICK_A3D is not set
+# CONFIG_JOYSTICK_ADI is not set
+# CONFIG_JOYSTICK_COBRA is not set
+# CONFIG_JOYSTICK_GF2K is not set
+# CONFIG_JOYSTICK_GRIP is not set
+# CONFIG_JOYSTICK_GRIP_MP is not set
+# CONFIG_JOYSTICK_GUILLEMOT is not set
+# CONFIG_JOYSTICK_INTERACT is not set
+# CONFIG_JOYSTICK_SIDEWINDER is not set
+# CONFIG_JOYSTICK_TMDC is not set
+# CONFIG_JOYSTICK_IFORCE is not set
+# CONFIG_JOYSTICK_WARRIOR is not set
+# CONFIG_JOYSTICK_MAGELLAN is not set
+# CONFIG_JOYSTICK_SPACEORB is not set
+# CONFIG_JOYSTICK_SPACEBALL is not set
+# CONFIG_JOYSTICK_STINGER is not set
+# CONFIG_JOYSTICK_TWIDJOY is not set
+# CONFIG_JOYSTICK_ZHENHUA is not set
+# CONFIG_JOYSTICK_JOYDUMP is not set
+CONFIG_JOYSTICK_XPAD=y
+CONFIG_JOYSTICK_XPAD_FF=y
+CONFIG_JOYSTICK_XPAD_LEDS=y
+CONFIG_INPUT_TABLET=y
+CONFIG_TABLET_USB_ACECAD=y
+CONFIG_TABLET_USB_AIPTEK=y
+CONFIG_TABLET_USB_GTCO=y
+CONFIG_TABLET_USB_HANWANG=y
+CONFIG_TABLET_USB_KBTAB=y
+CONFIG_TABLET_USB_WACOM=y
+CONFIG_INPUT_TOUCHSCREEN=y
+# CONFIG_TOUCHSCREEN_AD7879 is not set
+# CONFIG_TOUCHSCREEN_CYTTSP_CORE is not set
+# CONFIG_TOUCHSCREEN_DYNAPRO is not set
+# CONFIG_TOUCHSCREEN_HAMPSHIRE is not set
+# CONFIG_TOUCHSCREEN_FUJITSU is not set
+# CONFIG_TOUCHSCREEN_GUNZE is not set
+# CONFIG_TOUCHSCREEN_ELO is not set
+# CONFIG_TOUCHSCREEN_WACOM_W8001 is not set
+# CONFIG_TOUCHSCREEN_MTOUCH is not set
+# CONFIG_TOUCHSCREEN_INEXIO is not set
+# CONFIG_TOUCHSCREEN_MK712 is not set
+# CONFIG_TOUCHSCREEN_PENMOUNT is not set
+# CONFIG_TOUCHSCREEN_TOUCHRIGHT is not set
+# CONFIG_TOUCHSCREEN_TOUCHWIN is not set
+# CONFIG_TOUCHSCREEN_USB_COMPOSITE is not set
+# CONFIG_TOUCHSCREEN_TOUCHIT213 is not set
+# CONFIG_TOUCHSCREEN_TSC_SERIO is not set
+CONFIG_INPUT_MISC=y
+# CONFIG_INPUT_AD714X is not set
+# CONFIG_INPUT_PCSPKR is not set
+# CONFIG_INPUT_ATI_REMOTE2 is not set
+CONFIG_INPUT_KEYCHORD=y
+# CONFIG_INPUT_KEYSPAN_REMOTE is not set
+# CONFIG_INPUT_POWERMATE is not set
+# CONFIG_INPUT_YEALINK is not set
+# CONFIG_INPUT_CM109 is not set
+CONFIG_INPUT_UINPUT=y
+CONFIG_INPUT_GPIO=y
+# CONFIG_INPUT_ADXL34X is not set
+# CONFIG_INPUT_IMS_PCU is not set
+# CONFIG_INPUT_CMA3000 is not set
+
+#
+# Hardware I/O ports
+#
+CONFIG_SERIO=y
+CONFIG_SERIO_I8042=y
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_PCIPS2 is not set
+CONFIG_SERIO_LIBPS2=y
+# CONFIG_SERIO_RAW is not set
+# CONFIG_SERIO_ALTERA_PS2 is not set
+# CONFIG_SERIO_PS2MULT is not set
+# CONFIG_SERIO_ARC_PS2 is not set
+# CONFIG_GAMEPORT is not set
+
+#
+# Character devices
+#
+CONFIG_TTY=y
+# CONFIG_VT is not set
+CONFIG_UNIX98_PTYS=y
+# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+# CONFIG_NOZOMI is not set
+# CONFIG_N_GSM is not set
+# CONFIG_TRACE_SINK is not set
+CONFIG_DEVMEM=y
+CONFIG_DEVKMEM=y
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_PCI=y
+CONFIG_SERIAL_8250_NR_UARTS=4
+CONFIG_SERIAL_8250_RUNTIME_UARTS=4
+# CONFIG_SERIAL_8250_EXTENDED is not set
+# CONFIG_SERIAL_8250_DW is not set
+
+#
+# Non-8250 serial port support
+#
+# CONFIG_SERIAL_MFD_HSU is not set
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+# CONFIG_SERIAL_JSM is not set
+# CONFIG_SERIAL_SCCNXP is not set
+# CONFIG_SERIAL_TIMBERDALE is not set
+# CONFIG_SERIAL_ALTERA_JTAGUART is not set
+# CONFIG_SERIAL_ALTERA_UART is not set
+# CONFIG_SERIAL_PCH_UART is not set
+# CONFIG_SERIAL_ARC is not set
+# CONFIG_SERIAL_RP2 is not set
+# CONFIG_TTY_PRINTK is not set
+# CONFIG_IPMI_HANDLER is not set
+CONFIG_HW_RANDOM=y
+# CONFIG_HW_RANDOM_TIMERIOMEM is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+# CONFIG_RAW_DRIVER is not set
+# CONFIG_TCG_TPM is not set
+CONFIG_DEVPORT=y
+# CONFIG_I2C is not set
+# CONFIG_SPI is not set
+
+#
+# Qualcomm MSM SSBI bus support
+#
+# CONFIG_SSBI is not set
+# CONFIG_HSI is not set
+
+#
+# PPS support
+#
+# CONFIG_PPS is not set
+
+#
+# PPS generators support
+#
+
+#
+# PTP clock support
+#
+# CONFIG_PTP_1588_CLOCK is not set
+
+#
+# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
+#
+# CONFIG_PTP_1588_CLOCK_PCH is not set
+CONFIG_ARCH_HAVE_CUSTOM_GPIO_H=y
+CONFIG_GPIO_DEVRES=y
+# CONFIG_W1 is not set
+CONFIG_POWER_SUPPLY=y
+# CONFIG_POWER_SUPPLY_DEBUG is not set
+# CONFIG_PDA_POWER is not set
+# CONFIG_TEST_POWER is not set
+# CONFIG_BATTERY_DS2780 is not set
+# CONFIG_BATTERY_DS2781 is not set
+# CONFIG_BATTERY_BQ27x00 is not set
+# CONFIG_CHARGER_MAX8903 is not set
+# CONFIG_BATTERY_GOLDFISH is not set
+# CONFIG_POWER_RESET is not set
+# CONFIG_POWER_AVS is not set
+# CONFIG_HWMON is not set
+# CONFIG_THERMAL is not set
+# CONFIG_WATCHDOG is not set
+CONFIG_SSB_POSSIBLE=y
+
+#
+# Sonics Silicon Backplane
+#
+# CONFIG_SSB is not set
+CONFIG_BCMA_POSSIBLE=y
+
+#
+# Broadcom specific AMBA
+#
+# CONFIG_BCMA is not set
+
+#
+# Multifunction device drivers
+#
+# CONFIG_MFD_CORE is not set
+# CONFIG_MFD_CROS_EC is not set
+# CONFIG_HTC_PASIC3 is not set
+# CONFIG_LPC_ICH is not set
+# CONFIG_LPC_SCH is not set
+# CONFIG_MFD_JANZ_CMODIO is not set
+# CONFIG_MFD_VIPERBOARD is not set
+# CONFIG_MFD_RDC321X is not set
+# CONFIG_MFD_RTSX_PCI is not set
+# CONFIG_MFD_SM501 is not set
+# CONFIG_ABX500_CORE is not set
+# CONFIG_MFD_SYSCON is not set
+# CONFIG_MFD_TI_AM335X_TSCADC is not set
+# CONFIG_MFD_TMIO is not set
+# CONFIG_MFD_VX855 is not set
+# CONFIG_REGULATOR is not set
+CONFIG_MEDIA_SUPPORT=y
+
+#
+# Multimedia core support
+#
+# CONFIG_MEDIA_CAMERA_SUPPORT is not set
+# CONFIG_MEDIA_ANALOG_TV_SUPPORT is not set
+# CONFIG_MEDIA_DIGITAL_TV_SUPPORT is not set
+# CONFIG_MEDIA_RADIO_SUPPORT is not set
+# CONFIG_MEDIA_RC_SUPPORT is not set
+# CONFIG_VIDEO_ADV_DEBUG is not set
+# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
+
+#
+# Media drivers
+#
+# CONFIG_MEDIA_USB_SUPPORT is not set
+# CONFIG_MEDIA_PCI_SUPPORT is not set
+
+#
+# Supported MMC/SDIO adapters
+#
+# CONFIG_CYPRESS_FIRMWARE is not set
+
+#
+# Media ancillary drivers (tuners, sensors, i2c, frontends)
+#
+
+#
+# Customise DVB Frontends
+#
+
+#
+# Tools to develop new frontends
+#
+# CONFIG_DVB_DUMMY_FE is not set
+
+#
+# Graphics support
+#
+CONFIG_VGA_ARB=y
+CONFIG_VGA_ARB_MAX_GPUS=16
+# CONFIG_DRM is not set
+# CONFIG_VGASTATE is not set
+# CONFIG_VIDEO_OUTPUT_CONTROL is not set
+CONFIG_FB=y
+# CONFIG_FIRMWARE_EDID is not set
+# CONFIG_FB_DDC is not set
+# CONFIG_FB_BOOT_VESA_SUPPORT is not set
+CONFIG_FB_CFB_FILLRECT=y
+CONFIG_FB_CFB_COPYAREA=y
+CONFIG_FB_CFB_IMAGEBLIT=y
+# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
+# CONFIG_FB_SYS_FILLRECT is not set
+# CONFIG_FB_SYS_COPYAREA is not set
+# CONFIG_FB_SYS_IMAGEBLIT is not set
+# CONFIG_FB_FOREIGN_ENDIAN is not set
+# CONFIG_FB_SYS_FOPS is not set
+# CONFIG_FB_SVGALIB is not set
+# CONFIG_FB_MACMODES is not set
+# CONFIG_FB_BACKLIGHT is not set
+# CONFIG_FB_MODE_HELPERS is not set
+CONFIG_FB_TILEBLITTING=y
+
+#
+# Frame buffer hardware drivers
+#
+CONFIG_FB_CIRRUS=y
+# CONFIG_FB_PM2 is not set
+# CONFIG_FB_CYBER2000 is not set
+# CONFIG_FB_ASILIANT is not set
+# CONFIG_FB_IMSTT is not set
+# CONFIG_FB_UVESA is not set
+# CONFIG_FB_S1D13XXX is not set
+# CONFIG_FB_NVIDIA is not set
+# CONFIG_FB_RIVA is not set
+# CONFIG_FB_I740 is not set
+CONFIG_FB_MATROX=y
+# CONFIG_FB_MATROX_MILLENIUM is not set
+# CONFIG_FB_MATROX_MYSTIQUE is not set
+CONFIG_FB_MATROX_G=y
+# CONFIG_FB_MATROX_I2C is not set
+# CONFIG_FB_RADEON is not set
+# CONFIG_FB_ATY128 is not set
+# CONFIG_FB_ATY is not set
+# CONFIG_FB_S3 is not set
+# CONFIG_FB_SAVAGE is not set
+# CONFIG_FB_SIS is not set
+# CONFIG_FB_NEOMAGIC is not set
+# CONFIG_FB_KYRO is not set
+# CONFIG_FB_3DFX is not set
+# CONFIG_FB_VOODOO1 is not set
+# CONFIG_FB_VT8623 is not set
+# CONFIG_FB_TRIDENT is not set
+# CONFIG_FB_ARK is not set
+# CONFIG_FB_PM3 is not set
+# CONFIG_FB_CARMINE is not set
+# CONFIG_FB_SMSCUFX is not set
+# CONFIG_FB_UDL is not set
+# CONFIG_FB_GOLDFISH is not set
+# CONFIG_FB_VIRTUAL is not set
+# CONFIG_FB_METRONOME is not set
+# CONFIG_FB_MB862XX is not set
+# CONFIG_FB_BROADSHEET is not set
+# CONFIG_FB_AUO_K190X is not set
+# CONFIG_EXYNOS_VIDEO is not set
+CONFIG_BACKLIGHT_LCD_SUPPORT=y
+CONFIG_LCD_CLASS_DEVICE=y
+# CONFIG_LCD_PLATFORM is not set
+CONFIG_BACKLIGHT_CLASS_DEVICE=y
+CONFIG_BACKLIGHT_GENERIC=y
+# CONFIG_ADF is not set
+# CONFIG_LOGO is not set
+CONFIG_SOUND=y
+# CONFIG_SOUND_OSS_CORE is not set
+CONFIG_SND=y
+CONFIG_SND_TIMER=y
+CONFIG_SND_PCM=y
+CONFIG_SND_RAWMIDI=y
+CONFIG_SND_COMPRESS_OFFLOAD=y
+CONFIG_SND_JACK=y
+# CONFIG_SND_SEQUENCER is not set
+# CONFIG_SND_MIXER_OSS is not set
+# CONFIG_SND_PCM_OSS is not set
+# CONFIG_SND_HRTIMER is not set
+# CONFIG_SND_DYNAMIC_MINORS is not set
+CONFIG_SND_SUPPORT_OLD_API=y
+CONFIG_SND_VERBOSE_PROCFS=y
+# CONFIG_SND_VERBOSE_PRINTK is not set
+# CONFIG_SND_DEBUG is not set
+# CONFIG_SND_RAWMIDI_SEQ is not set
+# CONFIG_SND_OPL3_LIB_SEQ is not set
+# CONFIG_SND_OPL4_LIB_SEQ is not set
+# CONFIG_SND_SBAWE_SEQ is not set
+# CONFIG_SND_EMU10K1_SEQ is not set
+CONFIG_SND_DRIVERS=y
+# CONFIG_SND_DUMMY is not set
+# CONFIG_SND_ALOOP is not set
+# CONFIG_SND_MTPAV is not set
+# CONFIG_SND_SERIAL_U16550 is not set
+# CONFIG_SND_MPU401 is not set
+CONFIG_SND_PCI=y
+# CONFIG_SND_AD1889 is not set
+# CONFIG_SND_ALS300 is not set
+# CONFIG_SND_ALS4000 is not set
+# CONFIG_SND_ALI5451 is not set
+# CONFIG_SND_ATIIXP is not set
+# CONFIG_SND_ATIIXP_MODEM is not set
+# CONFIG_SND_AU8810 is not set
+# CONFIG_SND_AU8820 is not set
+# CONFIG_SND_AU8830 is not set
+# CONFIG_SND_AW2 is not set
+# CONFIG_SND_AZT3328 is not set
+# CONFIG_SND_BT87X is not set
+# CONFIG_SND_CA0106 is not set
+# CONFIG_SND_CMIPCI is not set
+# CONFIG_SND_OXYGEN is not set
+# CONFIG_SND_CS4281 is not set
+# CONFIG_SND_CS46XX is not set
+# CONFIG_SND_CS5530 is not set
+# CONFIG_SND_CS5535AUDIO is not set
+# CONFIG_SND_CTXFI is not set
+# CONFIG_SND_DARLA20 is not set
+# CONFIG_SND_GINA20 is not set
+# CONFIG_SND_LAYLA20 is not set
+# CONFIG_SND_DARLA24 is not set
+# CONFIG_SND_GINA24 is not set
+# CONFIG_SND_LAYLA24 is not set
+# CONFIG_SND_MONA is not set
+# CONFIG_SND_MIA is not set
+# CONFIG_SND_ECHO3G is not set
+# CONFIG_SND_INDIGO is not set
+# CONFIG_SND_INDIGOIO is not set
+# CONFIG_SND_INDIGODJ is not set
+# CONFIG_SND_INDIGOIOX is not set
+# CONFIG_SND_INDIGODJX is not set
+# CONFIG_SND_EMU10K1 is not set
+# CONFIG_SND_EMU10K1X is not set
+# CONFIG_SND_ENS1370 is not set
+# CONFIG_SND_ENS1371 is not set
+# CONFIG_SND_ES1938 is not set
+# CONFIG_SND_ES1968 is not set
+# CONFIG_SND_FM801 is not set
+# CONFIG_SND_HDA_INTEL is not set
+# CONFIG_SND_HDSP is not set
+# CONFIG_SND_HDSPM is not set
+# CONFIG_SND_ICE1712 is not set
+# CONFIG_SND_ICE1724 is not set
+# CONFIG_SND_INTEL8X0 is not set
+# CONFIG_SND_INTEL8X0M is not set
+# CONFIG_SND_KORG1212 is not set
+# CONFIG_SND_LOLA is not set
+# CONFIG_SND_LX6464ES is not set
+# CONFIG_SND_MAESTRO3 is not set
+# CONFIG_SND_MIXART is not set
+# CONFIG_SND_NM256 is not set
+# CONFIG_SND_PCXHR is not set
+# CONFIG_SND_RIPTIDE is not set
+# CONFIG_SND_RME32 is not set
+# CONFIG_SND_RME96 is not set
+# CONFIG_SND_RME9652 is not set
+# CONFIG_SND_SONICVIBES is not set
+# CONFIG_SND_TRIDENT is not set
+# CONFIG_SND_VIA82XX is not set
+# CONFIG_SND_VIA82XX_MODEM is not set
+# CONFIG_SND_VIRTUOSO is not set
+# CONFIG_SND_VX222 is not set
+# CONFIG_SND_YMFPCI is not set
+CONFIG_SND_MIPS=y
+CONFIG_SND_USB=y
+# CONFIG_SND_USB_AUDIO is not set
+# CONFIG_SND_USB_UA101 is not set
+# CONFIG_SND_USB_CAIAQ is not set
+# CONFIG_SND_USB_6FIRE is not set
+CONFIG_SND_SOC=y
+# CONFIG_SND_ATMEL_SOC is not set
+# CONFIG_SND_SOC_ALL_CODECS is not set
+# CONFIG_SND_SIMPLE_CARD is not set
+# CONFIG_SOUND_PRIME is not set
+
+#
+# HID support
+#
+CONFIG_HID=y
+# CONFIG_HID_BATTERY_STRENGTH is not set
+CONFIG_HIDRAW=y
+CONFIG_UHID=y
+CONFIG_HID_GENERIC=y
+
+#
+# Special HID drivers
+#
+CONFIG_HID_A4TECH=y
+CONFIG_HID_ACRUX=y
+CONFIG_HID_ACRUX_FF=y
+CONFIG_HID_APPLE=y
+# CONFIG_HID_APPLEIR is not set
+# CONFIG_HID_AUREAL is not set
+CONFIG_HID_BELKIN=y
+CONFIG_HID_CHERRY=y
+CONFIG_HID_CHICONY=y
+CONFIG_HID_PRODIKEYS=y
+CONFIG_HID_CYPRESS=y
+CONFIG_HID_DRAGONRISE=y
+CONFIG_DRAGONRISE_FF=y
+CONFIG_HID_EMS_FF=y
+CONFIG_HID_ELECOM=y
+CONFIG_HID_EZKEY=y
+CONFIG_HID_HOLTEK=y
+# CONFIG_HOLTEK_FF is not set
+CONFIG_HID_KEYTOUCH=y
+CONFIG_HID_KYE=y
+CONFIG_HID_UCLOGIC=y
+CONFIG_HID_WALTOP=y
+CONFIG_HID_GYRATION=y
+# CONFIG_HID_ICADE is not set
+CONFIG_HID_TWINHAN=y
+CONFIG_HID_KENSINGTON=y
+CONFIG_HID_LCPOWER=y
+# CONFIG_HID_LENOVO_TPKBD is not set
+CONFIG_HID_LOGITECH=y
+CONFIG_HID_LOGITECH_DJ=y
+CONFIG_LOGITECH_FF=y
+CONFIG_LOGIRUMBLEPAD2_FF=y
+CONFIG_LOGIG940_FF=y
+CONFIG_LOGIWHEELS_FF=y
+CONFIG_HID_MAGICMOUSE=y
+CONFIG_HID_MICROSOFT=y
+CONFIG_HID_MONTEREY=y
+CONFIG_HID_MULTITOUCH=y
+CONFIG_HID_NTRIG=y
+CONFIG_HID_ORTEK=y
+CONFIG_HID_PANTHERLORD=y
+CONFIG_PANTHERLORD_FF=y
+CONFIG_HID_PETALYNX=y
+CONFIG_HID_PICOLCD=y
+# CONFIG_HID_PICOLCD_FB is not set
+# CONFIG_HID_PICOLCD_BACKLIGHT is not set
+# CONFIG_HID_PICOLCD_LCD is not set
+# CONFIG_HID_PICOLCD_LEDS is not set
+CONFIG_HID_PRIMAX=y
+# CONFIG_HID_PS3REMOTE is not set
+CONFIG_HID_ROCCAT=y
+CONFIG_HID_SAITEK=y
+CONFIG_HID_SAMSUNG=y
+CONFIG_HID_SONY=y
+CONFIG_HID_SPEEDLINK=y
+# CONFIG_HID_STEELSERIES is not set
+CONFIG_HID_SUNPLUS=y
+CONFIG_HID_GREENASIA=y
+CONFIG_GREENASIA_FF=y
+CONFIG_HID_SMARTJOYPLUS=y
+CONFIG_SMARTJOYPLUS_FF=y
+CONFIG_HID_TIVO=y
+CONFIG_HID_TOPSEED=y
+# CONFIG_HID_THINGM is not set
+CONFIG_HID_THRUSTMASTER=y
+# CONFIG_THRUSTMASTER_FF is not set
+CONFIG_HID_WACOM=y
+CONFIG_HID_WIIMOTE=y
+CONFIG_HID_WIIMOTE_EXT=y
+CONFIG_HID_ZEROPLUS=y
+# CONFIG_ZEROPLUS_FF is not set
+CONFIG_HID_ZYDACRON=y
+# CONFIG_HID_SENSOR_HUB is not set
+
+#
+# USB HID support
+#
+CONFIG_USB_HID=y
+# CONFIG_HID_PID is not set
+CONFIG_USB_HIDDEV=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+CONFIG_USB_ARCH_HAS_EHCI=y
+CONFIG_USB_ARCH_HAS_XHCI=y
+CONFIG_USB_SUPPORT=y
+CONFIG_USB_COMMON=y
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB=y
+# CONFIG_USB_DEBUG is not set
+CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
+
+#
+# Miscellaneous USB options
+#
+CONFIG_USB_DEFAULT_PERSIST=y
+# CONFIG_USB_DYNAMIC_MINORS is not set
+# CONFIG_USB_OTG is not set
+# CONFIG_USB_OTG_WHITELIST is not set
+# CONFIG_USB_OTG_BLACKLIST_HUB is not set
+# CONFIG_USB_MON is not set
+# CONFIG_USB_WUSB_CBAF is not set
+
+#
+# USB Host Controller Drivers
+#
+# CONFIG_USB_C67X00_HCD is not set
+# CONFIG_USB_XHCI_HCD is not set
+CONFIG_USB_EHCI_HCD=y
+# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
+CONFIG_USB_EHCI_TT_NEWSCHED=y
+CONFIG_USB_EHCI_PCI=y
+# CONFIG_USB_EHCI_HCD_PLATFORM is not set
+# CONFIG_USB_OXU210HP_HCD is not set
+# CONFIG_USB_ISP116X_HCD is not set
+# CONFIG_USB_ISP1760_HCD is not set
+# CONFIG_USB_ISP1362_HCD is not set
+# CONFIG_USB_OHCI_HCD is not set
+# CONFIG_USB_UHCI_HCD is not set
+# CONFIG_USB_SL811_HCD is not set
+# CONFIG_USB_R8A66597_HCD is not set
+# CONFIG_USB_MUSB_HDRC is not set
+# CONFIG_USB_RENESAS_USBHS is not set
+
+#
+# USB Device Class drivers
+#
+# CONFIG_USB_ACM is not set
+# CONFIG_USB_PRINTER is not set
+# CONFIG_USB_WDM is not set
+# CONFIG_USB_TMC is not set
+
+#
+# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
+#
+
+#
+# also be needed; see USB_STORAGE Help for more info
+#
+# CONFIG_USB_STORAGE is not set
+
+#
+# USB Imaging devices
+#
+# CONFIG_USB_MDC800 is not set
+# CONFIG_USB_MICROTEK is not set
+# CONFIG_USB_DWC3 is not set
+# CONFIG_USB_CHIPIDEA is not set
+
+#
+# USB port drivers
+#
+# CONFIG_USB_SERIAL is not set
+
+#
+# USB Miscellaneous drivers
+#
+# CONFIG_USB_EMI62 is not set
+# CONFIG_USB_EMI26 is not set
+# CONFIG_USB_ADUTUX is not set
+# CONFIG_USB_SEVSEG is not set
+# CONFIG_USB_RIO500 is not set
+# CONFIG_USB_LEGOTOWER is not set
+# CONFIG_USB_LCD is not set
+# CONFIG_USB_LED is not set
+# CONFIG_USB_CYPRESS_CY7C63 is not set
+# CONFIG_USB_CYTHERM is not set
+# CONFIG_USB_IDMOUSE is not set
+# CONFIG_USB_FTDI_ELAN is not set
+# CONFIG_USB_APPLEDISPLAY is not set
+# CONFIG_USB_SISUSBVGA is not set
+# CONFIG_USB_LD is not set
+# CONFIG_USB_TRANCEVIBRATOR is not set
+# CONFIG_USB_IOWARRIOR is not set
+# CONFIG_USB_TEST is not set
+# CONFIG_USB_ISIGHTFW is not set
+# CONFIG_USB_YUREX is not set
+# CONFIG_USB_EZUSB_FX2 is not set
+# CONFIG_USB_PHY is not set
+CONFIG_USB_OTG_WAKELOCK=y
+CONFIG_USB_GADGET=y
+# CONFIG_USB_GADGET_DEBUG is not set
+# CONFIG_USB_GADGET_DEBUG_FILES is not set
+CONFIG_USB_GADGET_VBUS_DRAW=2
+CONFIG_USB_GADGET_STORAGE_NUM_BUFFERS=2
+
+#
+# USB Peripheral Controller
+#
+# CONFIG_USB_FUSB300 is not set
+# CONFIG_USB_R8A66597 is not set
+# CONFIG_USB_PXA27X is not set
+# CONFIG_USB_MV_UDC is not set
+# CONFIG_USB_MV_U3D is not set
+# CONFIG_USB_M66592 is not set
+# CONFIG_USB_AMD5536UDC is not set
+# CONFIG_USB_NET2272 is not set
+# CONFIG_USB_NET2280 is not set
+# CONFIG_USB_GOKU is not set
+# CONFIG_USB_EG20T is not set
+# CONFIG_USB_DUMMY_HCD is not set
+CONFIG_USB_LIBCOMPOSITE=y
+CONFIG_USB_F_ACM=y
+CONFIG_USB_U_SERIAL=y
+# CONFIG_USB_ZERO is not set
+# CONFIG_USB_AUDIO is not set
+# CONFIG_USB_ETH is not set
+# CONFIG_USB_G_NCM is not set
+# CONFIG_USB_GADGETFS is not set
+# CONFIG_USB_FUNCTIONFS is not set
+# CONFIG_USB_MASS_STORAGE is not set
+# CONFIG_USB_G_SERIAL is not set
+# CONFIG_USB_MIDI_GADGET is not set
+# CONFIG_USB_G_PRINTER is not set
+CONFIG_USB_G_ANDROID=y
+# CONFIG_USB_ANDROID_RNDIS_DWORD_ALIGNED is not set
+# CONFIG_USB_CDC_COMPOSITE is not set
+# CONFIG_USB_G_NOKIA is not set
+# CONFIG_USB_G_ACM_MS is not set
+# CONFIG_USB_G_MULTI is not set
+# CONFIG_USB_G_HID is not set
+# CONFIG_USB_G_DBGP is not set
+# CONFIG_UWB is not set
+# CONFIG_MMC is not set
+# CONFIG_MEMSTICK is not set
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+
+#
+# LED drivers
+#
+# CONFIG_LEDS_OT200 is not set
+
+#
+# LED Triggers
+#
+# CONFIG_LEDS_TRIGGERS is not set
+CONFIG_SWITCH=y
+# CONFIG_ACCESSIBILITY is not set
+# CONFIG_INFINIBAND is not set
+CONFIG_RTC_LIB=y
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_HCTOSYS=y
+CONFIG_RTC_SYSTOHC=y
+CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
+# CONFIG_RTC_DEBUG is not set
+
+#
+# RTC interfaces
+#
+CONFIG_RTC_INTF_SYSFS=y
+CONFIG_RTC_INTF_PROC=y
+CONFIG_RTC_INTF_DEV=y
+# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
+# CONFIG_RTC_DRV_TEST is not set
+
+#
+# SPI RTC drivers
+#
+
+#
+# Platform RTC drivers
+#
+CONFIG_RTC_DRV_CMOS=y
+# CONFIG_RTC_DRV_DS1286 is not set
+# CONFIG_RTC_DRV_DS1511 is not set
+# CONFIG_RTC_DRV_DS1553 is not set
+# CONFIG_RTC_DRV_DS1742 is not set
+# CONFIG_RTC_DRV_STK17TA8 is not set
+# CONFIG_RTC_DRV_M48T86 is not set
+# CONFIG_RTC_DRV_M48T35 is not set
+# CONFIG_RTC_DRV_M48T59 is not set
+# CONFIG_RTC_DRV_MSM6242 is not set
+# CONFIG_RTC_DRV_BQ4802 is not set
+# CONFIG_RTC_DRV_RP5C01 is not set
+# CONFIG_RTC_DRV_V3020 is not set
+# CONFIG_RTC_DRV_DS2404 is not set
+
+#
+# on-CPU RTC drivers
+#
+
+#
+# HID Sensor RTC drivers
+#
+# CONFIG_RTC_DRV_HID_SENSOR_TIME is not set
+# CONFIG_DMADEVICES is not set
+# CONFIG_AUXDISPLAY is not set
+CONFIG_UIO=y
+CONFIG_UIO_CIF=y
+# CONFIG_UIO_PDRV is not set
+# CONFIG_UIO_PDRV_GENIRQ is not set
+# CONFIG_UIO_DMEM_GENIRQ is not set
+# CONFIG_UIO_AEC is not set
+# CONFIG_UIO_SERCOS3 is not set
+# CONFIG_UIO_PCI_GENERIC is not set
+# CONFIG_UIO_NETX is not set
+# CONFIG_VIRT_DRIVERS is not set
+
+#
+# Virtio drivers
+#
+# CONFIG_VIRTIO_PCI is not set
+# CONFIG_VIRTIO_MMIO is not set
+
+#
+# Microsoft Hyper-V guest support
+#
+CONFIG_STAGING=y
+# CONFIG_ET131X is not set
+# CONFIG_USBIP_CORE is not set
+# CONFIG_W35UND is not set
+# CONFIG_PRISM2_USB is not set
+# CONFIG_ECHO is not set
+# CONFIG_ASUS_OLED is not set
+# CONFIG_R8712U is not set
+# CONFIG_RTS5139 is not set
+# CONFIG_TRANZPORT is not set
+# CONFIG_LINE6_USB is not set
+# CONFIG_DX_SEP is not set
+# CONFIG_ZSMALLOC is not set
+# CONFIG_FB_SM7XX is not set
+# CONFIG_CRYSTALHD is not set
+# CONFIG_FB_XGI is not set
+# CONFIG_BCM_WIMAX is not set
+# CONFIG_FT1000 is not set
+
+#
+# Speakup console speech
+#
+# CONFIG_STAGING_MEDIA is not set
+
+#
+# Android
+#
+CONFIG_ANDROID=y
+CONFIG_ANDROID_BINDER_IPC=y
+CONFIG_ASHMEM=y
+CONFIG_ANDROID_LOGGER=y
+CONFIG_ANDROID_TIMED_OUTPUT=y
+CONFIG_ANDROID_LOW_MEMORY_KILLER=y
+CONFIG_ANDROID_LOW_MEMORY_KILLER_AUTODETECT_OOM_ADJ_VALUES=y
+CONFIG_ANDROID_INTF_ALARM_DEV=y
+CONFIG_SYNC=y
+# CONFIG_SW_SYNC is not set
+CONFIG_ION=y
+# CONFIG_ION_TEST is not set
+# CONFIG_USB_WPAN_HCD is not set
+# CONFIG_WIMAX_GDM72XX is not set
+CONFIG_NET_VENDOR_SILICOM=y
+# CONFIG_CED1401 is not set
+# CONFIG_DGRP is not set
+# CONFIG_USB_DWC2 is not set
+
+#
+# Hardware Spinlock drivers
+#
+CONFIG_CLKSRC_I8253=y
+CONFIG_CLKEVT_I8253=y
+CONFIG_I8253_LOCK=y
+CONFIG_CLKBLD_I8253=y
+# CONFIG_MAILBOX is not set
+CONFIG_IOMMU_SUPPORT=y
+
+#
+# Remoteproc drivers
+#
+# CONFIG_STE_MODEM_RPROC is not set
+
+#
+# Rpmsg drivers
+#
+# CONFIG_PM_DEVFREQ is not set
+# CONFIG_EXTCON is not set
+# CONFIG_MEMORY is not set
+# CONFIG_IIO is not set
+# CONFIG_VME_BUS is not set
+# CONFIG_PWM is not set
+# CONFIG_IPACK_BUS is not set
+# CONFIG_RESET_CONTROLLER is not set
+
+#
+# Firmware Drivers
+#
+# CONFIG_FIRMWARE_MEMMAP is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT2_FS_XIP is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_DEFAULTS_TO_ORDERED=y
+CONFIG_EXT3_FS_XATTR=y
+# CONFIG_EXT3_FS_POSIX_ACL is not set
+# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_EXT4_FS=y
+# CONFIG_EXT4_FS_POSIX_ACL is not set
+CONFIG_EXT4_FS_SECURITY=y
+# CONFIG_EXT4_DEBUG is not set
+CONFIG_JBD=y
+CONFIG_JBD2=y
+CONFIG_FS_MBCACHE=y
+CONFIG_REISERFS_FS=y
+# CONFIG_REISERFS_CHECK is not set
+CONFIG_REISERFS_PROC_INFO=y
+CONFIG_REISERFS_FS_XATTR=y
+CONFIG_REISERFS_FS_POSIX_ACL=y
+CONFIG_REISERFS_FS_SECURITY=y
+CONFIG_JFS_FS=y
+CONFIG_JFS_POSIX_ACL=y
+CONFIG_JFS_SECURITY=y
+# CONFIG_JFS_DEBUG is not set
+# CONFIG_JFS_STATISTICS is not set
+CONFIG_XFS_FS=y
+CONFIG_XFS_QUOTA=y
+CONFIG_XFS_POSIX_ACL=y
+# CONFIG_XFS_RT is not set
+# CONFIG_XFS_WARN is not set
+# CONFIG_XFS_DEBUG is not set
+# CONFIG_GFS2_FS is not set
+# CONFIG_OCFS2_FS is not set
+# CONFIG_BTRFS_FS is not set
+# CONFIG_NILFS2_FS is not set
+CONFIG_FS_POSIX_ACL=y
+CONFIG_EXPORTFS=y
+CONFIG_FILE_LOCKING=y
+CONFIG_FSNOTIFY=y
+CONFIG_DNOTIFY=y
+CONFIG_INOTIFY_USER=y
+# CONFIG_FANOTIFY is not set
+CONFIG_QUOTA=y
+# CONFIG_QUOTA_NETLINK_INTERFACE is not set
+CONFIG_PRINT_QUOTA_WARNING=y
+# CONFIG_QUOTA_DEBUG is not set
+CONFIG_QUOTA_TREE=y
+# CONFIG_QFMT_V1 is not set
+CONFIG_QFMT_V2=y
+CONFIG_QUOTACTL=y
+# CONFIG_AUTOFS4_FS is not set
+CONFIG_FUSE_FS=y
+# CONFIG_CUSE is not set
+CONFIG_GENERIC_ACL=y
+
+#
+# Caches
+#
+# CONFIG_FSCACHE is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+CONFIG_ISO9660_FS=y
+CONFIG_JOLIET=y
+CONFIG_ZISOFS=y
+CONFIG_UDF_FS=y
+CONFIG_UDF_NLS=y
+
+#
+# DOS/FAT/NT Filesystems
+#
+CONFIG_FAT_FS=y
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_FAT_DEFAULT_CODEPAGE=437
+CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_PROC_SYSCTL=y
+CONFIG_PROC_PAGE_MONITOR=y
+CONFIG_SYSFS=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_TMPFS_XATTR=y
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_CONFIGFS_FS=y
+CONFIG_MISC_FILESYSTEMS=y
+# CONFIG_ADFS_FS is not set
+CONFIG_AFFS_FS=y
+CONFIG_HFS_FS=y
+CONFIG_HFSPLUS_FS=y
+CONFIG_BEFS_FS=y
+# CONFIG_BEFS_DEBUG is not set
+CONFIG_BFS_FS=y
+CONFIG_EFS_FS=y
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_FS_DEBUG=0
+CONFIG_JFFS2_FS_WRITEBUFFER=y
+# CONFIG_JFFS2_FS_WBUF_VERIFY is not set
+# CONFIG_JFFS2_SUMMARY is not set
+CONFIG_JFFS2_FS_XATTR=y
+CONFIG_JFFS2_FS_POSIX_ACL=y
+CONFIG_JFFS2_FS_SECURITY=y
+CONFIG_JFFS2_COMPRESSION_OPTIONS=y
+CONFIG_JFFS2_ZLIB=y
+# CONFIG_JFFS2_LZO is not set
+CONFIG_JFFS2_RTIME=y
+CONFIG_JFFS2_RUBIN=y
+# CONFIG_JFFS2_CMODE_NONE is not set
+CONFIG_JFFS2_CMODE_PRIORITY=y
+# CONFIG_JFFS2_CMODE_SIZE is not set
+# CONFIG_JFFS2_CMODE_FAVOURLZO is not set
+# CONFIG_UBIFS_FS is not set
+# CONFIG_LOGFS is not set
+CONFIG_CRAMFS=y
+# CONFIG_SQUASHFS is not set
+CONFIG_VXFS_FS=y
+CONFIG_MINIX_FS=y
+CONFIG_MINIX_FS_NATIVE_ENDIAN=y
+# CONFIG_OMFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_QNX6FS_FS is not set
+CONFIG_ROMFS_FS=y
+CONFIG_ROMFS_BACKED_BY_BLOCK=y
+# CONFIG_ROMFS_BACKED_BY_MTD is not set
+# CONFIG_ROMFS_BACKED_BY_BOTH is not set
+CONFIG_ROMFS_ON_BLOCK=y
+CONFIG_PSTORE=y
+CONFIG_PSTORE_CONSOLE=y
+CONFIG_PSTORE_RAM=y
+CONFIG_SYSV_FS=y
+CONFIG_UFS_FS=y
+# CONFIG_UFS_FS_WRITE is not set
+# CONFIG_UFS_DEBUG is not set
+# CONFIG_F2FS_FS is not set
+CONFIG_NETWORK_FILESYSTEMS=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V2=y
+CONFIG_NFS_V3=y
+# CONFIG_NFS_V3_ACL is not set
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_SWAP is not set
+CONFIG_ROOT_NFS=y
+CONFIG_NFSD=y
+CONFIG_NFSD_V3=y
+# CONFIG_NFSD_V3_ACL is not set
+# CONFIG_NFSD_V4 is not set
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+CONFIG_NFS_COMMON=y
+CONFIG_SUNRPC=y
+# CONFIG_SUNRPC_DEBUG is not set
+# CONFIG_CEPH_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="iso8859-1"
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_CODEPAGE_737=y
+CONFIG_NLS_CODEPAGE_775=y
+CONFIG_NLS_CODEPAGE_850=y
+CONFIG_NLS_CODEPAGE_852=y
+CONFIG_NLS_CODEPAGE_855=y
+CONFIG_NLS_CODEPAGE_857=y
+CONFIG_NLS_CODEPAGE_860=y
+CONFIG_NLS_CODEPAGE_861=y
+CONFIG_NLS_CODEPAGE_862=y
+CONFIG_NLS_CODEPAGE_863=y
+CONFIG_NLS_CODEPAGE_864=y
+CONFIG_NLS_CODEPAGE_865=y
+CONFIG_NLS_CODEPAGE_866=y
+CONFIG_NLS_CODEPAGE_869=y
+CONFIG_NLS_CODEPAGE_936=y
+CONFIG_NLS_CODEPAGE_950=y
+CONFIG_NLS_CODEPAGE_932=y
+CONFIG_NLS_CODEPAGE_949=y
+CONFIG_NLS_CODEPAGE_874=y
+CONFIG_NLS_ISO8859_8=y
+CONFIG_NLS_CODEPAGE_1250=y
+CONFIG_NLS_CODEPAGE_1251=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_NLS_ISO8859_2=y
+CONFIG_NLS_ISO8859_3=y
+CONFIG_NLS_ISO8859_4=y
+CONFIG_NLS_ISO8859_5=y
+CONFIG_NLS_ISO8859_6=y
+CONFIG_NLS_ISO8859_7=y
+CONFIG_NLS_ISO8859_9=y
+CONFIG_NLS_ISO8859_13=y
+CONFIG_NLS_ISO8859_14=y
+CONFIG_NLS_ISO8859_15=y
+CONFIG_NLS_KOI8_R=y
+CONFIG_NLS_KOI8_U=y
+# CONFIG_NLS_MAC_ROMAN is not set
+# CONFIG_NLS_MAC_CELTIC is not set
+# CONFIG_NLS_MAC_CENTEURO is not set
+# CONFIG_NLS_MAC_CROATIAN is not set
+# CONFIG_NLS_MAC_CYRILLIC is not set
+# CONFIG_NLS_MAC_GAELIC is not set
+# CONFIG_NLS_MAC_GREEK is not set
+# CONFIG_NLS_MAC_ICELAND is not set
+# CONFIG_NLS_MAC_INUIT is not set
+# CONFIG_NLS_MAC_ROMANIAN is not set
+# CONFIG_NLS_MAC_TURKISH is not set
+CONFIG_NLS_UTF8=y
+# CONFIG_DLM is not set
+
+#
+# Kernel hacking
+#
+CONFIG_TRACE_IRQFLAGS_SUPPORT=y
+# CONFIG_PRINTK_TIME is not set
+CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
+CONFIG_ENABLE_WARN_DEPRECATED=y
+CONFIG_ENABLE_MUST_CHECK=y
+CONFIG_FRAME_WARN=1024
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_STRIP_ASM_SYMS is not set
+# CONFIG_READABLE_ASM is not set
+# CONFIG_UNUSED_SYMBOLS is not set
+# CONFIG_DEBUG_FS is not set
+# CONFIG_HEADERS_CHECK is not set
+# CONFIG_DEBUG_SECTION_MISMATCH is not set
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SHIRQ is not set
+# CONFIG_LOCKUP_DETECTOR is not set
+# CONFIG_PANIC_ON_OOPS is not set
+CONFIG_PANIC_ON_OOPS_VALUE=0
+# CONFIG_DETECT_HUNG_TASK is not set
+CONFIG_SCHED_DEBUG=y
+CONFIG_SCHEDSTATS=y
+CONFIG_TIMER_STATS=y
+# CONFIG_DEBUG_OBJECTS is not set
+# CONFIG_DEBUG_SLAB is not set
+CONFIG_HAVE_DEBUG_KMEMLEAK=y
+# CONFIG_DEBUG_KMEMLEAK is not set
+CONFIG_DEBUG_PREEMPT=y
+# CONFIG_DEBUG_RT_MUTEXES is not set
+# CONFIG_RT_MUTEX_TESTER is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_MUTEXES is not set
+# CONFIG_DEBUG_LOCK_ALLOC is not set
+# CONFIG_PROVE_LOCKING is not set
+# CONFIG_LOCK_STAT is not set
+# CONFIG_DEBUG_ATOMIC_SLEEP is not set
+# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
+CONFIG_STACKTRACE=y
+# CONFIG_DEBUG_STACK_USAGE is not set
+# CONFIG_DEBUG_KOBJECT is not set
+# CONFIG_DEBUG_INFO is not set
+# CONFIG_DEBUG_VM is not set
+# CONFIG_DEBUG_WRITECOUNT is not set
+# CONFIG_DEBUG_MEMORY_INIT is not set
+# CONFIG_DEBUG_LIST is not set
+# CONFIG_TEST_LIST_SORT is not set
+# CONFIG_DEBUG_SG is not set
+# CONFIG_DEBUG_NOTIFIERS is not set
+# CONFIG_DEBUG_CREDENTIALS is not set
+# CONFIG_BOOT_PRINTK_DELAY is not set
+
+#
+# RCU Debugging
+#
+# CONFIG_PROVE_RCU_DELAY is not set
+# CONFIG_SPARSE_RCU_POINTER is not set
+# CONFIG_RCU_TORTURE_TEST is not set
+CONFIG_RCU_CPU_STALL_TIMEOUT=21
+CONFIG_RCU_CPU_STALL_VERBOSE=y
+# CONFIG_RCU_CPU_STALL_INFO is not set
+# CONFIG_RCU_TRACE is not set
+# CONFIG_BACKTRACE_SELF_TEST is not set
+# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
+# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
+# CONFIG_DEBUG_PER_CPU_MAPS is not set
+# CONFIG_NOTIFIER_ERROR_INJECTION is not set
+# CONFIG_FAULT_INJECTION is not set
+# CONFIG_DEBUG_PAGEALLOC is not set
+CONFIG_HAVE_FUNCTION_TRACER=y
+CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
+CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
+CONFIG_HAVE_DYNAMIC_FTRACE=y
+CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
+CONFIG_HAVE_C_RECORDMCOUNT=y
+CONFIG_TRACING_SUPPORT=y
+CONFIG_FTRACE=y
+# CONFIG_FUNCTION_TRACER is not set
+# CONFIG_IRQSOFF_TRACER is not set
+# CONFIG_PREEMPT_TRACER is not set
+# CONFIG_SCHED_TRACER is not set
+# CONFIG_ENABLE_DEFAULT_TRACERS is not set
+# CONFIG_TRACER_SNAPSHOT is not set
+CONFIG_BRANCH_PROFILE_NONE=y
+# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
+# CONFIG_PROFILE_ALL_BRANCHES is not set
+# CONFIG_STACK_TRACER is not set
+# CONFIG_BLK_DEV_IO_TRACE is not set
+# CONFIG_PROBE_EVENTS is not set
+# CONFIG_DMA_API_DEBUG is not set
+# CONFIG_ATOMIC64_SELFTEST is not set
+# CONFIG_ASYNC_RAID6_TEST is not set
+# CONFIG_SAMPLES is not set
+CONFIG_HAVE_ARCH_KGDB=y
+# CONFIG_KGDB is not set
+# CONFIG_TEST_STRING_HELPERS is not set
+# CONFIG_TEST_KSTRTOX is not set
+CONFIG_EARLY_PRINTK=y
+# CONFIG_CMDLINE_BOOL is not set
+# CONFIG_DEBUG_STACKOVERFLOW is not set
+# CONFIG_RUNTIME_DEBUG is not set
+# CONFIG_DEBUG_ZBOOT is not set
+
+#
+# Security options
+#
+# CONFIG_KEYS is not set
+# CONFIG_SECURITY_DMESG_RESTRICT is not set
+# CONFIG_SECURITY is not set
+# CONFIG_SECURITYFS is not set
+CONFIG_DEFAULT_SECURITY_DAC=y
+CONFIG_DEFAULT_SECURITY=""
+CONFIG_XOR_BLOCKS=y
+CONFIG_ASYNC_CORE=y
+CONFIG_ASYNC_MEMCPY=y
+CONFIG_ASYNC_XOR=y
+CONFIG_ASYNC_PQ=y
+CONFIG_ASYNC_RAID6_RECOV=y
+CONFIG_CRYPTO=y
+
+#
+# Crypto core or helper
+#
+CONFIG_CRYPTO_ALGAPI=y
+CONFIG_CRYPTO_ALGAPI2=y
+CONFIG_CRYPTO_AEAD=y
+CONFIG_CRYPTO_AEAD2=y
+CONFIG_CRYPTO_BLKCIPHER=y
+CONFIG_CRYPTO_BLKCIPHER2=y
+CONFIG_CRYPTO_HASH=y
+CONFIG_CRYPTO_HASH2=y
+CONFIG_CRYPTO_RNG2=y
+CONFIG_CRYPTO_PCOMP2=y
+CONFIG_CRYPTO_MANAGER=y
+CONFIG_CRYPTO_MANAGER2=y
+# CONFIG_CRYPTO_USER is not set
+CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
+CONFIG_CRYPTO_GF128MUL=y
+CONFIG_CRYPTO_NULL=y
+# CONFIG_CRYPTO_PCRYPT is not set
+CONFIG_CRYPTO_WORKQUEUE=y
+CONFIG_CRYPTO_CRYPTD=y
+CONFIG_CRYPTO_AUTHENC=y
+
+#
+# Authenticated Encryption with Associated Data
+#
+# CONFIG_CRYPTO_CCM is not set
+# CONFIG_CRYPTO_GCM is not set
+# CONFIG_CRYPTO_SEQIV is not set
+
+#
+# Block modes
+#
+CONFIG_CRYPTO_CBC=y
+# CONFIG_CRYPTO_CTR is not set
+# CONFIG_CRYPTO_CTS is not set
+CONFIG_CRYPTO_ECB=y
+CONFIG_CRYPTO_LRW=y
+CONFIG_CRYPTO_PCBC=y
+# CONFIG_CRYPTO_XTS is not set
+
+#
+# Hash modes
+#
+# CONFIG_CRYPTO_CMAC is not set
+CONFIG_CRYPTO_HMAC=y
+CONFIG_CRYPTO_XCBC=y
+# CONFIG_CRYPTO_VMAC is not set
+
+#
+# Digest
+#
+CONFIG_CRYPTO_CRC32C=y
+# CONFIG_CRYPTO_CRC32 is not set
+# CONFIG_CRYPTO_GHASH is not set
+CONFIG_CRYPTO_MD4=y
+CONFIG_CRYPTO_MD5=y
+CONFIG_CRYPTO_MICHAEL_MIC=y
+# CONFIG_CRYPTO_RMD128 is not set
+# CONFIG_CRYPTO_RMD160 is not set
+# CONFIG_CRYPTO_RMD256 is not set
+# CONFIG_CRYPTO_RMD320 is not set
+CONFIG_CRYPTO_SHA1=y
+CONFIG_CRYPTO_SHA256=y
+CONFIG_CRYPTO_SHA512=y
+CONFIG_CRYPTO_TGR192=y
+CONFIG_CRYPTO_WP512=y
+
+#
+# Ciphers
+#
+CONFIG_CRYPTO_AES=y
+CONFIG_CRYPTO_ANUBIS=y
+CONFIG_CRYPTO_ARC4=y
+CONFIG_CRYPTO_BLOWFISH=y
+CONFIG_CRYPTO_BLOWFISH_COMMON=y
+CONFIG_CRYPTO_CAMELLIA=y
+CONFIG_CRYPTO_CAST_COMMON=y
+CONFIG_CRYPTO_CAST5=y
+CONFIG_CRYPTO_CAST6=y
+CONFIG_CRYPTO_DES=y
+CONFIG_CRYPTO_FCRYPT=y
+CONFIG_CRYPTO_KHAZAD=y
+# CONFIG_CRYPTO_SALSA20 is not set
+# CONFIG_CRYPTO_SEED is not set
+CONFIG_CRYPTO_SERPENT=y
+CONFIG_CRYPTO_TEA=y
+CONFIG_CRYPTO_TWOFISH=y
+CONFIG_CRYPTO_TWOFISH_COMMON=y
+
+#
+# Compression
+#
+CONFIG_CRYPTO_DEFLATE=y
+# CONFIG_CRYPTO_ZLIB is not set
+# CONFIG_CRYPTO_LZO is not set
+
+#
+# Random Number Generation
+#
+# CONFIG_CRYPTO_ANSI_CPRNG is not set
+# CONFIG_CRYPTO_USER_API_HASH is not set
+# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
+CONFIG_CRYPTO_HW=y
+# CONFIG_CRYPTO_DEV_HIFN_795X is not set
+# CONFIG_BINARY_PRINTF is not set
+
+#
+# Library routines
+#
+CONFIG_RAID6_PQ=y
+CONFIG_BITREVERSE=y
+CONFIG_NO_GENERIC_PCI_IOPORT_MAP=y
+CONFIG_GENERIC_PCI_IOMAP=y
+CONFIG_GENERIC_IO=y
+# CONFIG_CRC_CCITT is not set
+CONFIG_CRC16=y
+# CONFIG_CRC_T10DIF is not set
+CONFIG_CRC_ITU_T=y
+CONFIG_CRC32=y
+# CONFIG_CRC32_SELFTEST is not set
+CONFIG_CRC32_SLICEBY8=y
+# CONFIG_CRC32_SLICEBY4 is not set
+# CONFIG_CRC32_SARWATE is not set
+# CONFIG_CRC32_BIT is not set
+# CONFIG_CRC7 is not set
+CONFIG_LIBCRC32C=y
+# CONFIG_CRC8 is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
+# CONFIG_XZ_DEC is not set
+# CONFIG_XZ_DEC_BCJ is not set
+CONFIG_DECOMPRESS_GZIP=y
+CONFIG_GENERIC_ALLOCATOR=y
+CONFIG_REED_SOLOMON=y
+CONFIG_REED_SOLOMON_ENC8=y
+CONFIG_REED_SOLOMON_DEC8=y
+CONFIG_TEXTSEARCH=y
+CONFIG_TEXTSEARCH_KMP=y
+CONFIG_TEXTSEARCH_BM=y
+CONFIG_TEXTSEARCH_FSM=y
+CONFIG_HAS_IOMEM=y
+CONFIG_HAS_IOPORT=y
+CONFIG_HAS_DMA=y
+CONFIG_CPU_RMAP=y
+CONFIG_DQL=y
+CONFIG_NLATTR=y
+CONFIG_GENERIC_ATOMIC64=y
+CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
+CONFIG_AVERAGE=y
+# CONFIG_CORDIC is not set
+# CONFIG_DDR is not set
+CONFIG_HAVE_KVM=y
+# CONFIG_VIRTUALIZATION is not set
index ce1d3eeeb7373373fa0f07056ff3ca39e0f0c496..58c98bee9d2a4cd9797d5c826751c66c7ec8dd0b 100644
@@ -226,6 +226,7 @@ CONFIG_MAC80211_RC_DEFAULT_PID=y
 CONFIG_MAC80211_MESH=y
 CONFIG_RFKILL=m
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
 CONFIG_CONNECTOR=m
 CONFIG_MTD=y
 CONFIG_MTD_CHAR=y
diff --git a/arch/mips/configs/malta_eva_defconfig b/arch/mips/configs/malta_eva_defconfig
new file mode 100644
index 0000000..0284888
--- /dev/null
@@ -0,0 +1,447 @@
+CONFIG_MIPS_MALTA=y
+CONFIG_CPU_LITTLE_ENDIAN=y
+CONFIG_CPU_MIPS32_R2=y
+CONFIG_CPU_MIPS32_R2_EVA=y
+CONFIG_MIPS_MT_SMP=y
+CONFIG_NR_CPUS=2
+CONFIG_HZ_100=y
+CONFIG_SYSVIPC=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_LOG_BUF_SHIFT=15
+CONFIG_NAMESPACES=y
+CONFIG_RELAY=y
+CONFIG_EXPERT=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_SLAB=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_MODULE_SRCVERSION_ALL=y
+CONFIG_PCI=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_XFRM_USER=m
+CONFIG_NET_KEY=y
+CONFIG_NET_KEY_MIGRATE=y
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_MULTIPATH=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+CONFIG_NET_IPIP=m
+CONFIG_IP_MROUTE=y
+CONFIG_IP_PIMSM_V1=y
+CONFIG_IP_PIMSM_V2=y
+CONFIG_SYN_COOKIES=y
+CONFIG_INET_AH=m
+CONFIG_INET_ESP=m
+CONFIG_INET_IPCOMP=m
+CONFIG_INET_XFRM_MODE_TRANSPORT=m
+CONFIG_INET_XFRM_MODE_TUNNEL=m
+CONFIG_TCP_MD5SIG=y
+CONFIG_IPV6_PRIVACY=y
+CONFIG_IPV6_ROUTER_PREF=y
+CONFIG_IPV6_ROUTE_INFO=y
+CONFIG_IPV6_OPTIMISTIC_DAD=y
+CONFIG_INET6_AH=m
+CONFIG_INET6_ESP=m
+CONFIG_INET6_IPCOMP=m
+CONFIG_IPV6_TUNNEL=m
+CONFIG_IPV6_MROUTE=y
+CONFIG_IPV6_PIMSM_V2=y
+CONFIG_NETWORK_SECMARK=y
+CONFIG_NETFILTER=y
+CONFIG_NF_CONNTRACK=m
+CONFIG_NF_CONNTRACK_SECMARK=y
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CT_PROTO_DCCP=m
+CONFIG_NF_CT_PROTO_UDPLITE=m
+CONFIG_NF_CONNTRACK_AMANDA=m
+CONFIG_NF_CONNTRACK_FTP=m
+CONFIG_NF_CONNTRACK_H323=m
+CONFIG_NF_CONNTRACK_IRC=m
+CONFIG_NF_CONNTRACK_PPTP=m
+CONFIG_NF_CONNTRACK_SANE=m
+CONFIG_NF_CONNTRACK_SIP=m
+CONFIG_NF_CONNTRACK_TFTP=m
+CONFIG_NF_CT_NETLINK=m
+CONFIG_NETFILTER_TPROXY=m
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
+CONFIG_NETFILTER_XT_TARGET_MARK=m
+CONFIG_NETFILTER_XT_TARGET_NFLOG=m
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
+CONFIG_NETFILTER_XT_TARGET_TPROXY=m
+CONFIG_NETFILTER_XT_TARGET_TRACE=m
+CONFIG_NETFILTER_XT_TARGET_SECMARK=m
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
+CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m
+CONFIG_NETFILTER_XT_MATCH_COMMENT=m
+CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
+CONFIG_NETFILTER_XT_MATCH_DCCP=m
+CONFIG_NETFILTER_XT_MATCH_ESP=m
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_HELPER=m
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
+CONFIG_NETFILTER_XT_MATCH_LENGTH=m
+CONFIG_NETFILTER_XT_MATCH_LIMIT=m
+CONFIG_NETFILTER_XT_MATCH_MAC=m
+CONFIG_NETFILTER_XT_MATCH_MARK=m
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
+CONFIG_NETFILTER_XT_MATCH_OWNER=m
+CONFIG_NETFILTER_XT_MATCH_POLICY=m
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
+CONFIG_NETFILTER_XT_MATCH_QUOTA=m
+CONFIG_NETFILTER_XT_MATCH_RATEEST=m
+CONFIG_NETFILTER_XT_MATCH_REALM=m
+CONFIG_NETFILTER_XT_MATCH_RECENT=m
+CONFIG_NETFILTER_XT_MATCH_SOCKET=m
+CONFIG_NETFILTER_XT_MATCH_STATE=m
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
+CONFIG_NETFILTER_XT_MATCH_STRING=m
+CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
+CONFIG_NETFILTER_XT_MATCH_TIME=m
+CONFIG_NETFILTER_XT_MATCH_U32=m
+CONFIG_IP_VS=m
+CONFIG_IP_VS_IPV6=y
+CONFIG_IP_VS_PROTO_TCP=y
+CONFIG_IP_VS_PROTO_UDP=y
+CONFIG_IP_VS_PROTO_ESP=y
+CONFIG_IP_VS_PROTO_AH=y
+CONFIG_IP_VS_RR=m
+CONFIG_IP_VS_WRR=m
+CONFIG_IP_VS_LC=m
+CONFIG_IP_VS_WLC=m
+CONFIG_IP_VS_LBLC=m
+CONFIG_IP_VS_LBLCR=m
+CONFIG_IP_VS_DH=m
+CONFIG_IP_VS_SH=m
+CONFIG_IP_VS_SED=m
+CONFIG_IP_VS_NQ=m
+CONFIG_NF_CONNTRACK_IPV4=m
+CONFIG_IP_NF_IPTABLES=m
+CONFIG_IP_NF_MATCH_AH=m
+CONFIG_IP_NF_MATCH_ECN=m
+CONFIG_IP_NF_MATCH_TTL=m
+CONFIG_IP_NF_FILTER=m
+CONFIG_IP_NF_TARGET_REJECT=m
+CONFIG_IP_NF_TARGET_ULOG=m
+CONFIG_IP_NF_MANGLE=m
+CONFIG_IP_NF_TARGET_CLUSTERIP=m
+CONFIG_IP_NF_TARGET_ECN=m
+CONFIG_IP_NF_TARGET_TTL=m
+CONFIG_IP_NF_RAW=m
+CONFIG_IP_NF_ARPTABLES=m
+CONFIG_IP_NF_ARPFILTER=m
+CONFIG_IP_NF_ARP_MANGLE=m
+CONFIG_NF_CONNTRACK_IPV6=m
+CONFIG_IP6_NF_MATCH_AH=m
+CONFIG_IP6_NF_MATCH_EUI64=m
+CONFIG_IP6_NF_MATCH_FRAG=m
+CONFIG_IP6_NF_MATCH_OPTS=m
+CONFIG_IP6_NF_MATCH_HL=m
+CONFIG_IP6_NF_MATCH_IPV6HEADER=m
+CONFIG_IP6_NF_MATCH_MH=m
+CONFIG_IP6_NF_MATCH_RT=m
+CONFIG_IP6_NF_TARGET_HL=m
+CONFIG_IP6_NF_FILTER=m
+CONFIG_IP6_NF_TARGET_REJECT=m
+CONFIG_IP6_NF_MANGLE=m
+CONFIG_IP6_NF_RAW=m
+CONFIG_BRIDGE_NF_EBTABLES=m
+CONFIG_BRIDGE_EBT_BROUTE=m
+CONFIG_BRIDGE_EBT_T_FILTER=m
+CONFIG_BRIDGE_EBT_T_NAT=m
+CONFIG_BRIDGE_EBT_802_3=m
+CONFIG_BRIDGE_EBT_AMONG=m
+CONFIG_BRIDGE_EBT_ARP=m
+CONFIG_BRIDGE_EBT_IP=m
+CONFIG_BRIDGE_EBT_IP6=m
+CONFIG_BRIDGE_EBT_LIMIT=m
+CONFIG_BRIDGE_EBT_MARK=m
+CONFIG_BRIDGE_EBT_PKTTYPE=m
+CONFIG_BRIDGE_EBT_STP=m
+CONFIG_BRIDGE_EBT_VLAN=m
+CONFIG_BRIDGE_EBT_ARPREPLY=m
+CONFIG_BRIDGE_EBT_DNAT=m
+CONFIG_BRIDGE_EBT_MARK_T=m
+CONFIG_BRIDGE_EBT_REDIRECT=m
+CONFIG_BRIDGE_EBT_SNAT=m
+CONFIG_BRIDGE_EBT_LOG=m
+CONFIG_BRIDGE_EBT_ULOG=m
+CONFIG_BRIDGE_EBT_NFLOG=m
+CONFIG_IP_SCTP=m
+CONFIG_BRIDGE=m
+CONFIG_VLAN_8021Q=m
+CONFIG_VLAN_8021Q_GVRP=y
+CONFIG_ATALK=m
+CONFIG_DEV_APPLETALK=m
+CONFIG_IPDDP=m
+CONFIG_IPDDP_ENCAP=y
+CONFIG_PHONET=m
+CONFIG_NET_SCHED=y
+CONFIG_NET_SCH_CBQ=m
+CONFIG_NET_SCH_HTB=m
+CONFIG_NET_SCH_HFSC=m
+CONFIG_NET_SCH_PRIO=m
+CONFIG_NET_SCH_RED=m
+CONFIG_NET_SCH_SFQ=m
+CONFIG_NET_SCH_TEQL=m
+CONFIG_NET_SCH_TBF=m
+CONFIG_NET_SCH_GRED=m
+CONFIG_NET_SCH_DSMARK=m
+CONFIG_NET_SCH_NETEM=m
+CONFIG_NET_SCH_INGRESS=m
+CONFIG_NET_CLS_BASIC=m
+CONFIG_NET_CLS_TCINDEX=m
+CONFIG_NET_CLS_ROUTE4=m
+CONFIG_NET_CLS_FW=m
+CONFIG_NET_CLS_U32=m
+CONFIG_NET_CLS_RSVP=m
+CONFIG_NET_CLS_RSVP6=m
+CONFIG_NET_CLS_FLOW=m
+CONFIG_NET_CLS_ACT=y
+CONFIG_NET_ACT_POLICE=y
+CONFIG_NET_ACT_GACT=m
+CONFIG_GACT_PROB=y
+CONFIG_NET_ACT_MIRRED=m
+CONFIG_NET_ACT_IPT=m
+CONFIG_NET_ACT_NAT=m
+CONFIG_NET_ACT_PEDIT=m
+CONFIG_NET_ACT_SIMP=m
+CONFIG_NET_ACT_SKBEDIT=m
+CONFIG_NET_CLS_IND=y
+CONFIG_CFG80211=m
+CONFIG_MAC80211=m
+CONFIG_MAC80211_RC_PID=y
+CONFIG_MAC80211_RC_DEFAULT_PID=y
+CONFIG_MAC80211_MESH=y
+CONFIG_RFKILL=m
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_CONNECTOR=m
+CONFIG_MTD=y
+CONFIG_MTD_BLOCK=y
+CONFIG_MTD_OOPS=m
+CONFIG_MTD_CFI=y
+CONFIG_MTD_CFI_INTELEXT=y
+CONFIG_MTD_CFI_AMDSTD=y
+CONFIG_MTD_CFI_STAA=y
+CONFIG_MTD_PHYSMAP=y
+CONFIG_MTD_UBI=m
+CONFIG_MTD_UBI_GLUEBI=m
+CONFIG_BLK_DEV_FD=m
+CONFIG_BLK_DEV_UMEM=m
+CONFIG_BLK_DEV_LOOP=m
+CONFIG_BLK_DEV_CRYPTOLOOP=m
+CONFIG_BLK_DEV_NBD=m
+CONFIG_BLK_DEV_RAM=y
+CONFIG_CDROM_PKTCDVD=m
+CONFIG_ATA_OVER_ETH=m
+CONFIG_IDE=y
+CONFIG_BLK_DEV_IDECD=y
+CONFIG_IDE_GENERIC=y
+CONFIG_BLK_DEV_GENERIC=y
+CONFIG_BLK_DEV_PIIX=y
+CONFIG_BLK_DEV_IT8213=m
+CONFIG_BLK_DEV_TC86C001=m
+CONFIG_RAID_ATTRS=m
+CONFIG_SCSI=m
+CONFIG_SCSI_TGT=m
+CONFIG_BLK_DEV_SD=m
+CONFIG_CHR_DEV_ST=m
+CONFIG_CHR_DEV_OSST=m
+CONFIG_BLK_DEV_SR=m
+CONFIG_BLK_DEV_SR_VENDOR=y
+CONFIG_CHR_DEV_SG=m
+CONFIG_SCSI_MULTI_LUN=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_LOGGING=y
+CONFIG_SCSI_SCAN_ASYNC=y
+CONFIG_SCSI_FC_ATTRS=m
+CONFIG_ISCSI_TCP=m
+CONFIG_BLK_DEV_3W_XXXX_RAID=m
+CONFIG_SCSI_3W_9XXX=m
+CONFIG_SCSI_ACARD=m
+CONFIG_SCSI_AACRAID=m
+CONFIG_SCSI_AIC7XXX=m
+CONFIG_AIC7XXX_RESET_DELAY_MS=15000
+# CONFIG_AIC7XXX_DEBUG_ENABLE is not set
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=m
+CONFIG_MD_LINEAR=m
+CONFIG_MD_RAID0=m
+CONFIG_MD_RAID1=m
+CONFIG_MD_RAID10=m
+CONFIG_MD_RAID456=m
+CONFIG_MD_MULTIPATH=m
+CONFIG_MD_FAULTY=m
+CONFIG_BLK_DEV_DM=m
+CONFIG_DM_CRYPT=m
+CONFIG_DM_SNAPSHOT=m
+CONFIG_DM_MIRROR=m
+CONFIG_DM_ZERO=m
+CONFIG_DM_MULTIPATH=m
+CONFIG_NETDEVICES=y
+CONFIG_BONDING=m
+CONFIG_DUMMY=m
+CONFIG_EQUALIZER=m
+CONFIG_IFB=m
+CONFIG_MACVLAN=m
+CONFIG_TUN=m
+CONFIG_VETH=m
+# CONFIG_NET_VENDOR_3COM is not set
+CONFIG_PCNET32=y
+CONFIG_CHELSIO_T3=m
+CONFIG_AX88796=m
+CONFIG_NETXEN_NIC=m
+CONFIG_TC35815=m
+CONFIG_MARVELL_PHY=m
+CONFIG_DAVICOM_PHY=m
+CONFIG_QSEMI_PHY=m
+CONFIG_LXT_PHY=m
+CONFIG_CICADA_PHY=m
+CONFIG_VITESSE_PHY=m
+CONFIG_SMSC_PHY=m
+CONFIG_BROADCOM_PHY=m
+CONFIG_ICPLUS_PHY=m
+CONFIG_REALTEK_PHY=m
+CONFIG_ATMEL=m
+CONFIG_PCI_ATMEL=m
+CONFIG_PRISM54=m
+CONFIG_HOSTAP=m
+CONFIG_HOSTAP_FIRMWARE=y
+CONFIG_HOSTAP_FIRMWARE_NVRAM=y
+CONFIG_HOSTAP_PLX=m
+CONFIG_HOSTAP_PCI=m
+CONFIG_IPW2100=m
+CONFIG_IPW2100_MONITOR=y
+CONFIG_LIBERTAS=m
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_SERIO_I8042 is not set
+CONFIG_VT_HW_CONSOLE_BINDING=y
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+# CONFIG_HWMON is not set
+CONFIG_FB=y
+CONFIG_FB_CIRRUS=y
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_HID=m
+CONFIG_RTC_CLASS=y
+CONFIG_RTC_DRV_CMOS=y
+CONFIG_UIO=m
+CONFIG_UIO_CIF=m
+CONFIG_EXT2_FS=y
+CONFIG_EXT3_FS=y
+CONFIG_REISERFS_FS=m
+CONFIG_REISERFS_PROC_INFO=y
+CONFIG_REISERFS_FS_XATTR=y
+CONFIG_REISERFS_FS_POSIX_ACL=y
+CONFIG_REISERFS_FS_SECURITY=y
+CONFIG_JFS_FS=m
+CONFIG_JFS_POSIX_ACL=y
+CONFIG_JFS_SECURITY=y
+CONFIG_XFS_FS=m
+CONFIG_XFS_QUOTA=y
+CONFIG_XFS_POSIX_ACL=y
+CONFIG_QUOTA=y
+CONFIG_QFMT_V2=y
+CONFIG_FUSE_FS=m
+CONFIG_ISO9660_FS=m
+CONFIG_JOLIET=y
+CONFIG_ZISOFS=y
+CONFIG_UDF_FS=m
+CONFIG_MSDOS_FS=m
+CONFIG_VFAT_FS=m
+CONFIG_PROC_KCORE=y
+CONFIG_TMPFS=y
+CONFIG_AFFS_FS=m
+CONFIG_HFS_FS=m
+CONFIG_HFSPLUS_FS=m
+CONFIG_BEFS_FS=m
+CONFIG_BFS_FS=m
+CONFIG_EFS_FS=m
+CONFIG_JFFS2_FS=m
+CONFIG_JFFS2_FS_XATTR=y
+CONFIG_JFFS2_COMPRESSION_OPTIONS=y
+CONFIG_JFFS2_RUBIN=y
+CONFIG_CRAMFS=m
+CONFIG_VXFS_FS=m
+CONFIG_MINIX_FS=m
+CONFIG_ROMFS_FS=m
+CONFIG_SYSV_FS=m
+CONFIG_UFS_FS=m
+CONFIG_NFS_FS=y
+CONFIG_ROOT_NFS=y
+CONFIG_NFSD=y
+CONFIG_NFSD_V3=y
+CONFIG_NLS_CODEPAGE_437=m
+CONFIG_NLS_CODEPAGE_737=m
+CONFIG_NLS_CODEPAGE_775=m
+CONFIG_NLS_CODEPAGE_850=m
+CONFIG_NLS_CODEPAGE_852=m
+CONFIG_NLS_CODEPAGE_855=m
+CONFIG_NLS_CODEPAGE_857=m
+CONFIG_NLS_CODEPAGE_860=m
+CONFIG_NLS_CODEPAGE_861=m
+CONFIG_NLS_CODEPAGE_862=m
+CONFIG_NLS_CODEPAGE_863=m
+CONFIG_NLS_CODEPAGE_864=m
+CONFIG_NLS_CODEPAGE_865=m
+CONFIG_NLS_CODEPAGE_866=m
+CONFIG_NLS_CODEPAGE_869=m
+CONFIG_NLS_CODEPAGE_936=m
+CONFIG_NLS_CODEPAGE_950=m
+CONFIG_NLS_CODEPAGE_932=m
+CONFIG_NLS_CODEPAGE_949=m
+CONFIG_NLS_CODEPAGE_874=m
+CONFIG_NLS_ISO8859_8=m
+CONFIG_NLS_CODEPAGE_1250=m
+CONFIG_NLS_CODEPAGE_1251=m
+CONFIG_NLS_ASCII=m
+CONFIG_NLS_ISO8859_1=m
+CONFIG_NLS_ISO8859_2=m
+CONFIG_NLS_ISO8859_3=m
+CONFIG_NLS_ISO8859_4=m
+CONFIG_NLS_ISO8859_5=m
+CONFIG_NLS_ISO8859_6=m
+CONFIG_NLS_ISO8859_7=m
+CONFIG_NLS_ISO8859_9=m
+CONFIG_NLS_ISO8859_13=m
+CONFIG_NLS_ISO8859_14=m
+CONFIG_NLS_ISO8859_15=m
+CONFIG_NLS_KOI8_R=m
+CONFIG_NLS_KOI8_U=m
+CONFIG_CRYPTO_NULL=m
+CONFIG_CRYPTO_CRYPTD=m
+CONFIG_CRYPTO_LRW=m
+CONFIG_CRYPTO_PCBC=m
+CONFIG_CRYPTO_HMAC=y
+CONFIG_CRYPTO_XCBC=m
+CONFIG_CRYPTO_MD4=m
+CONFIG_CRYPTO_SHA256=m
+CONFIG_CRYPTO_SHA512=m
+CONFIG_CRYPTO_TGR192=m
+CONFIG_CRYPTO_WP512=m
+CONFIG_CRYPTO_ANUBIS=m
+CONFIG_CRYPTO_BLOWFISH=m
+CONFIG_CRYPTO_CAMELLIA=m
+CONFIG_CRYPTO_CAST5=m
+CONFIG_CRYPTO_CAST6=m
+CONFIG_CRYPTO_FCRYPT=m
+CONFIG_CRYPTO_KHAZAD=m
+CONFIG_CRYPTO_SERPENT=m
+CONFIG_CRYPTO_TEA=m
+CONFIG_CRYPTO_TWOFISH=m
+# CONFIG_CRYPTO_ANSI_CPRNG is not set
+CONFIG_CRC16=m
index 93057a760dfa09bbe997a5fb55f8eba4212da17a..f31d04794b0e30c1456cce82d384aa0209cb7556 100644 (file)
@@ -80,6 +80,7 @@ CONFIG_NET_CLS_ACT=y
 CONFIG_NET_ACT_POLICE=y
 CONFIG_NET_CLS_IND=y
 # CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_IDE=y
index 4e54b75d89be33af5f991e2bc162f45d649a6d6c..86dd3b3ec3a32b52d68a5c033c34d7e49c9c8564 100644 (file)
@@ -81,6 +81,7 @@ CONFIG_NET_CLS_ACT=y
 CONFIG_NET_ACT_POLICE=y
 CONFIG_NET_CLS_IND=y
 # CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_IDE=y
index 8a666021b870cac9f3ac958d6455e06f4f12199b..063676ef315fef075ee8062aa474836f33c12668 100644 (file)
@@ -58,7 +58,6 @@ CONFIG_ATALK=m
 CONFIG_DEV_APPLETALK=m
 CONFIG_IPDDP=m
 CONFIG_IPDDP_ENCAP=y
-CONFIG_IPDDP_DECAP=y
 CONFIG_NET_SCHED=y
 CONFIG_NET_SCH_CBQ=m
 CONFIG_NET_SCH_HTB=m
@@ -83,6 +82,7 @@ CONFIG_NET_CLS_ACT=y
 CONFIG_NET_ACT_POLICE=y
 CONFIG_NET_CLS_IND=y
 # CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_IDE=y
index 9868fc9c11338746180614e3967cacab72927b8e..49121d349aba5eb0082ed80b5a40d902492047fd 100644 (file)
@@ -79,6 +79,7 @@ CONFIG_NET_CLS_ACT=y
 CONFIG_NET_ACT_POLICE=y
 CONFIG_NET_CLS_IND=y
 # CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_BLK_DEV_CRYPTOLOOP=m
 CONFIG_IDE=y
index 9b54b7a403d446b59073fe39fec03e0db7a432e8..454ddf9bb76f8a5fc5660bc6ab02c4d5380f7593 100644 (file)
@@ -1,2 +1,15 @@
 # MIPS headers
+generic-y += cputime.h
+generic-y += current.h
+generic-y += emergency-restart.h
+generic-y += local64.h
+generic-y += mutex.h
+generic-y += parport.h
+generic-y += percpu.h
+generic-y += scatterlist.h
+generic-y += sections.h
+generic-y += segment.h
+generic-y += serial.h
 generic-y += trace_clock.h
+generic-y += ucontext.h
+generic-y += xor.h
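
Note: each generic-y entry above makes Kbuild emit a one-line wrapper header at build time, which is why the hand-written stubs (cputime.h, current.h, emergency-restart.h, local64.h) are deleted further down in this patch. For example, the generated wrapper for current.h is equivalent to:

	/* arch/mips/include/generated/asm/current.h, emitted by Kbuild */
	#include <asm-generic/current.h>
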
index 13d61c002e4fc329ec6de191c31d43afb9020885..f1b440f6f69e1423f95ecc654532b1a36918eeef 100644 (file)
@@ -94,6 +94,7 @@
  * Memory segments (32bit kernel mode addresses)
  * These are the traditional names used in the 32-bit universe.
  */
+#ifndef KSEG
 #define KUSEG                  0x00000000
 #define KSEG0                  0x80000000
 #define KSEG1                  0xa0000000
 #define CKSEG1                 0xa0000000
 #define CKSEG2                 0xc0000000
 #define CKSEG3                 0xe0000000
+#endif
 
 #endif
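
Note: guarding the classic segment bases with #ifndef KSEG lets a platform header that is parsed earlier supply its own layout, as the EVA/new-memory-map work later in this series does. A hypothetical override would look like:

	/* Hypothetical platform header, included before <asm/addrspace.h>: */
	#define KSEG			/* any definition suppresses the defaults */
	#define KUSEG		0x00000000
	#define KSEG0		0x20000000	/* example value only */
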
 
index 879691d194af426f5532d47c4e467491bf785038..c054d37b9cf190b7ac06025b67ef6bd4000e8d5d 100644 (file)
@@ -149,7 +149,16 @@ symbol             =       value
                pref    hint, addr;                     \
                .set    pop
 
-#define PREFX(hint,addr)                               \
+#ifdef CONFIG_EVA
+#define PREFE(hint,addr)                                \
+               .set    push;                           \
+               .set    mips4;                          \
+               .set    eva;                            \
+               prefe   hint, addr;                     \
+               .set    pop
+#endif
+
+#define PREFX(hint,addr)                                \
                .set    push;                           \
                .set    mips4;                          \
                prefx   hint, addr;                     \
@@ -158,6 +167,7 @@ symbol              =       value
 #else /* !CONFIG_CPU_HAS_PREFETCH */
 
 #define PREF(hint, addr)
+#define PREFE(hint, addr)
 #define PREFX(hint, addr)
 
 #endif /* !CONFIG_CPU_HAS_PREFETCH */
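
Note: PREFE wraps the EVA prefe instruction, which issues a prefetch through the user-mode address mapping while the CPU runs in kernel mode; when the CPU lacks prefetch support it expands to nothing, like PREF/PREFX. A C-level sketch of the same operation (hypothetical helper, not part of this patch):

	static inline void prefetch_user(const void __user *addr)
	{
		__asm__ __volatile__(
		"	.set	push		\n"
		"	.set	eva		\n"
		"	prefe	0, 0(%0)	\n"	/* hint 0: prefetch for load */
		"	.set	pop		\n"
		: : "r" (addr));
	}
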
index 2413afe21b3369f4b5bc1145fcb8c1082c8ceb72..c0d49e66ce130203f3a2e8a29093abc6434b0f59 100644 (file)
 #include <asm/fpregdef.h>
 #include <asm/mipsregs.h>
 
-       .macro  fpu_save_double thread status tmp1=t0
+#ifdef CONFIG_CPU_MIPS32_R2
+
+       /* 64-bit FPR save/restore, adapted from asmmacro-64.h */
+
+       .macro  fpu_save_16even thread tmp=t0
+       cfc1    \tmp, fcr31
+       sdc1    $f0,  THREAD_FPR0(\thread)
+       sdc1    $f2,  THREAD_FPR2(\thread)
+       sdc1    $f4,  THREAD_FPR4(\thread)
+       sdc1    $f6,  THREAD_FPR6(\thread)
+       sdc1    $f8,  THREAD_FPR8(\thread)
+       sdc1    $f10, THREAD_FPR10(\thread)
+       sdc1    $f12, THREAD_FPR12(\thread)
+       sdc1    $f14, THREAD_FPR14(\thread)
+       sdc1    $f16, THREAD_FPR16(\thread)
+       sdc1    $f18, THREAD_FPR18(\thread)
+       sdc1    $f20, THREAD_FPR20(\thread)
+       sdc1    $f22, THREAD_FPR22(\thread)
+       sdc1    $f24, THREAD_FPR24(\thread)
+       sdc1    $f26, THREAD_FPR26(\thread)
+       sdc1    $f28, THREAD_FPR28(\thread)
+       sdc1    $f30, THREAD_FPR30(\thread)
+       sw  \tmp, THREAD_FCR31(\thread)
+       .endm
+
+       .macro  fpu_save_16odd thread
+       .set    push
+       .set    mips64r2
+       sdc1    $f1,  THREAD_FPR1(\thread)
+       sdc1    $f3,  THREAD_FPR3(\thread)
+       sdc1    $f5,  THREAD_FPR5(\thread)
+       sdc1    $f7,  THREAD_FPR7(\thread)
+       sdc1    $f9,  THREAD_FPR9(\thread)
+       sdc1    $f11, THREAD_FPR11(\thread)
+       sdc1    $f13, THREAD_FPR13(\thread)
+       sdc1    $f15, THREAD_FPR15(\thread)
+       sdc1    $f17, THREAD_FPR17(\thread)
+       sdc1    $f19, THREAD_FPR19(\thread)
+       sdc1    $f21, THREAD_FPR21(\thread)
+       sdc1    $f23, THREAD_FPR23(\thread)
+       sdc1    $f25, THREAD_FPR25(\thread)
+       sdc1    $f27, THREAD_FPR27(\thread)
+       sdc1    $f29, THREAD_FPR29(\thread)
+       sdc1    $f31, THREAD_FPR31(\thread)
+       .set    pop
+       .endm
+
+       .macro  fpu_save_double thread status tmp
+       .set    push
+       .set    noreorder
+       sll     \tmp, \status, 31 - _ST0_FR
+       bgez    \tmp, 2f
+        nop
+       fpu_save_16odd \thread
+2:
+       fpu_save_16even \thread \tmp
+       .set    pop
+       .endm
+
+       .macro  fpu_restore_16even thread tmp=t0
+       lw  \tmp, THREAD_FCR31(\thread)
+       ldc1    $f0,  THREAD_FPR0(\thread)
+       ldc1    $f2,  THREAD_FPR2(\thread)
+       ldc1    $f4,  THREAD_FPR4(\thread)
+       ldc1    $f6,  THREAD_FPR6(\thread)
+       ldc1    $f8,  THREAD_FPR8(\thread)
+       ldc1    $f10, THREAD_FPR10(\thread)
+       ldc1    $f12, THREAD_FPR12(\thread)
+       ldc1    $f14, THREAD_FPR14(\thread)
+       ldc1    $f16, THREAD_FPR16(\thread)
+       ldc1    $f18, THREAD_FPR18(\thread)
+       ldc1    $f20, THREAD_FPR20(\thread)
+       ldc1    $f22, THREAD_FPR22(\thread)
+       ldc1    $f24, THREAD_FPR24(\thread)
+       ldc1    $f26, THREAD_FPR26(\thread)
+       ldc1    $f28, THREAD_FPR28(\thread)
+       ldc1    $f30, THREAD_FPR30(\thread)
+       ctc1    \tmp, fcr31
+       .endm
+
+       .macro  fpu_restore_16odd thread
+       .set    push
+       .set    mips64r2
+       ldc1    $f1,  THREAD_FPR1(\thread)
+       ldc1    $f3,  THREAD_FPR3(\thread)
+       ldc1    $f5,  THREAD_FPR5(\thread)
+       ldc1    $f7,  THREAD_FPR7(\thread)
+       ldc1    $f9,  THREAD_FPR9(\thread)
+       ldc1    $f11, THREAD_FPR11(\thread)
+       ldc1    $f13, THREAD_FPR13(\thread)
+       ldc1    $f15, THREAD_FPR15(\thread)
+       ldc1    $f17, THREAD_FPR17(\thread)
+       ldc1    $f19, THREAD_FPR19(\thread)
+       ldc1    $f21, THREAD_FPR21(\thread)
+       ldc1    $f23, THREAD_FPR23(\thread)
+       ldc1    $f25, THREAD_FPR25(\thread)
+       ldc1    $f27, THREAD_FPR27(\thread)
+       ldc1    $f29, THREAD_FPR29(\thread)
+       ldc1    $f31, THREAD_FPR31(\thread)
+       .set    pop
+       .endm
+
+       .macro  fpu_restore_double thread status tmp
+       .set    push
+       .set    noreorder
+       sll     \tmp, \status, 31 - _ST0_FR
+       bgez    \tmp, 1f                # 16 register mode?
+        nop
+
+       fpu_restore_16odd \thread
+1:      fpu_restore_16even \thread \tmp
+       .set    pop
+       .endm
+
+#else
+
+       .macro  fpu_save_double thread status tmp1=t0
        cfc1    \tmp1,  fcr31
        sdc1    $f0,  THREAD_FPR0(\thread)
        sdc1    $f2,  THREAD_FPR2(\thread)
        ctc1    \tmp, fcr31
        .endm
 
+#endif /* CONFIG_CPU_MIPS32_R2 */
+
        .macro  cpu_save_nonscratch thread
        LONG_S  s0, THREAD_REG16(\thread)
        LONG_S  s1, THREAD_REG17(\thread)
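
Note: both new save/restore paths test Status.FR by shifting it into the sign bit: sll \tmp, \status, 31 - _ST0_FR followed by bgez skips the odd-numbered FPR transfers when the task runs in 16-register mode (FR=0). The equivalent test in C (sketch; assumes _ST0_FR is the FR bit index, 26 on MIPS32/64):

	/* Non-zero when the saved Status word selects 32 x 64-bit FPRs. */
	static inline int status_fr_set(unsigned int status)
	{
		return (int)(status << (31 - 26)) < 0;	/* 26 == _ST0_FR, assumed */
	}
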
index 08a527dfe4a32df9ec954c28f6e99665adb1a18b..922209d6347026de49be8871a7879f83f6938dc7 100644 (file)
@@ -54,7 +54,7 @@
        .endm
 
        .macro  fpu_save_double thread status tmp
-       sll     \tmp, \status, 5
+       sll     \tmp, \status, 31 - _ST0_FR
        bgez    \tmp, 2f
        fpu_save_16odd \thread
 2:
        .endm
 
        .macro  fpu_restore_double thread status tmp
-       sll     \tmp, \status, 5
+       sll     \tmp, \status, 31 - _ST0_FR
        bgez    \tmp, 1f                                # 16 register mode?
 
        fpu_restore_16odd \thread
index 71305a8b3d78b8053b46a78d63f332cffc3a95db..a548a3374483dbbc4242ab304fdb5d6ba86c9b8f 100644 (file)
@@ -28,6 +28,8 @@
 #define __SC           "sc     "
 #define __INS          "ins    "
 #define __EXT          "ext    "
+#define __ADDU          "addu   "
+#define __SUBU          "subu   "
 #elif _MIPS_SZLONG == 64
 #define SZLONG_LOG 6
 #define SZLONG_MASK 63UL
@@ -35,6 +37,8 @@
 #define __SC           "scd    "
 #define __INS          "dins    "
 #define __EXT          "dext    "
+#define __ADDU          "daddu   "
+#define __SUBU          "dsubu   "
 #endif
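
Note: __ADDU/__SUBU complete the _MIPS_SZLONG string-macro family (__LL/__SC/__INS/__EXT) so LL/SC loops on long-sized values pick addu/subu or daddu/dsubu automatically; the local.h hunk below is the first user. A minimal sketch of the resulting pattern:

	/* Sketch, assuming the macros above are in scope: atomic add on a long. */
	static inline long ll_sc_add_return(volatile long *p, long i)
	{
		long result, temp;

		__asm__ __volatile__(
		"	.set	mips3				\n"
		"1:"	__LL	"%1, %2				\n"
			__ADDU	"%0, %1, %3			\n"
			__SC	"%0, %2				\n"
		"	beqz	%0, 1b				\n"
			__ADDU	"%0, %1, %3			\n"
		"	.set	mips0				\n"
		: "=&r" (result), "=&r" (temp), "=m" (*p)
		: "Ir" (i), "m" (*p)
		: "memory");

		return result;
	}
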
 
 /*
index 0cf73ff8849daaedb89cd9976dd6de1822fa20a8..04b87820de83d7a6aebf2e78bff525554662045a 100644 (file)
@@ -66,20 +66,20 @@ static inline void flush_icache_page(struct vm_area_struct *vma,
 extern void (*flush_icache_range)(unsigned long start, unsigned long end);
 extern void (*local_flush_icache_range)(unsigned long start, unsigned long end);
 
-extern void (*__flush_cache_vmap)(void);
+extern void (*__flush_cache_vmap)(unsigned long start, unsigned long end);
 
 static inline void flush_cache_vmap(unsigned long start, unsigned long end)
 {
        if (cpu_has_dc_aliases)
-               __flush_cache_vmap();
+               __flush_cache_vmap(start, end);
 }
 
-extern void (*__flush_cache_vunmap)(void);
+extern void (*__flush_cache_vunmap)(unsigned long start, unsigned long end);
 
 static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 {
        if (cpu_has_dc_aliases)
-               __flush_cache_vunmap();
+               __flush_cache_vunmap(start, end);
 }
 
 extern void copy_to_user_page(struct vm_area_struct *vma,
@@ -94,6 +94,9 @@ extern void (*flush_cache_sigtramp)(unsigned long addr);
 extern void (*flush_icache_all)(void);
 extern void (*local_flush_data_cache_page)(void * addr);
 extern void (*flush_data_cache_page)(unsigned long addr);
+extern void (*mips_flush_data_cache_range)(struct vm_area_struct *vma,
+       unsigned long vaddr, struct page *page, unsigned long addr,
+       unsigned long size);
 
 /*
  * This flag is used to indicate that the page pointed to by a pte
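
Note: the vmap/vunmap hooks now receive the affected virtual range, and mips_flush_data_cache_range is added, so cache backends can flush only the lines covering [start, end) instead of blasting the whole D-cache. A hypothetical backend registration (blast_dcache_range is assumed to be the usual r4kcache.h range primitive):

	static void ranged_flush_cache_vmap(unsigned long start, unsigned long end)
	{
		/* The caller has already checked cpu_has_dc_aliases. */
		blast_dcache_range(start, end);
	}

	/* e.g. from the cache probe code: */
	__flush_cache_vmap = ranged_flush_cache_vmap;
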
index ac3d2b8a20d4bfa483fcb55d57e92670c2c08dd2..19d4fc841daaf0a7c995e0640cf6a514578b36cf 100644 (file)
@@ -31,6 +31,12 @@ __wsum csum_partial(const void *buff, int len, __wsum sum);
 
 __wsum __csum_partial_copy_user(const void *src, void *dst,
                                int len, __wsum sum, int *err_ptr);
+#ifdef  CONFIG_EVA
+__wsum __csum_partial_copy_fromuser(const void *src, void *dst,
+                                   int len, __wsum sum, int *err_ptr);
+__wsum __csum_partial_copy_touser(const void *src, void *dst,
+                                 int len, __wsum sum, int *err_ptr);
+#endif
 
 /*
  * this is a new version of the above that records errors it finds in *errp,
@@ -40,9 +46,34 @@ static inline
 __wsum csum_partial_copy_from_user(const void __user *src, void *dst, int len,
                                   __wsum sum, int *err_ptr)
 {
+#ifndef CONFIG_EVA
        might_fault();
        return __csum_partial_copy_user((__force void *)src, dst,
-                                       len, sum, err_ptr);
+                                           len, sum, err_ptr);
+#else
+       if (segment_eq(get_fs(), KERNEL_DS))
+               return __csum_partial_copy_user((__force void *)src, dst,
+                                               len, sum, err_ptr);
+       else {
+               might_fault();
+               return __csum_partial_copy_fromuser((__force void *)src, dst,
+                                                   len, sum, err_ptr);
+       }
+#endif
+}
+
+#define _HAVE_ARCH_COPY_AND_CSUM_FROM_USER
+static inline
+__wsum csum_and_copy_from_user(const void __user *src, void *dst,
+                               int len, __wsum sum, int *err_ptr)
+{
+       if (access_ok(VERIFY_READ, src, len))
+               return csum_partial_copy_from_user(src, dst, len, sum, err_ptr);
+
+       if (len)
+               *err_ptr = -EFAULT;
+
+       return sum;
 }
 
 /*
@@ -53,10 +84,22 @@ static inline
 __wsum csum_and_copy_to_user(const void *src, void __user *dst, int len,
                             __wsum sum, int *err_ptr)
 {
-       might_fault();
-       if (access_ok(VERIFY_WRITE, dst, len))
+       if (access_ok(VERIFY_WRITE, dst, len)) {
+#ifndef CONFIG_EVA
+               might_fault();
                return __csum_partial_copy_user(src, (__force void *)dst,
-                                               len, sum, err_ptr);
+                                                 len, sum, err_ptr);
+#else
+               if (segment_eq(get_fs(), KERNEL_DS))
+                       return __csum_partial_copy_user(src, (__force void *)dst,
+                                                         len, sum, err_ptr);
+               else {
+                       might_fault();
+                       return __csum_partial_copy_touser(src, (__force void *)dst,
+                                                         len, sum, err_ptr);
+               }
+#endif
+       }
        if (len)
                *err_ptr = -EFAULT;
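
Note: under EVA the user and kernel address windows need different load/store instructions, so the copy-and-checksum helpers dispatch on get_fs(): KERNEL_DS callers keep the plain __csum_partial_copy_user(), everything else goes through the new _fromuser/_touser variants. Callers are unaffected; a usage sketch (ubuf, kbuf and len are placeholders):

	int err = 0;
	__wsum sum = csum_and_copy_from_user(ubuf, kbuf, len, 0, &err);
	if (err)
		return -EFAULT;		/* fault while reading the user buffer */
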
 
index c4bd54a7f5ce1675de61705a5b8779846e012589..b6e254a7640d1cb86dc44272156fbcbaa4e66e17 100644 (file)
@@ -298,7 +298,13 @@ typedef struct compat_sigaltstack {
 
 static inline int is_compat_task(void)
 {
+#ifdef CONFIG_MIPS32_O32
+       return test_thread_flag(TIF_32BIT_REGS);
+#elif defined(CONFIG_MIPS32_N32)
        return test_thread_flag(TIF_32BIT_ADDR);
+#else
+#error No MIPS32 compatibility mode defined
+#endif /* CONFIG_MIPS32_O32 */
 }
 
 #endif /* _ASM_COMPAT_H */
diff --git a/arch/mips/include/asm/cpcregs.h b/arch/mips/include/asm/cpcregs.h
new file mode 100644 (file)
index 0000000..8d40855
--- /dev/null
@@ -0,0 +1,72 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Cluster Power Controller Subsystem Register Definitions
+ *
+ * Copyright (C) 2013 Imagination Technologies Ltd
+ *    Leonid Yegoshin (Leonid.Yegoshin@imgtec.com)
+ *
+ */
+#ifndef _ASM_CPCREGS_H
+#define _ASM_CPCREGS_H
+
+
+/* Offsets to major blocks within CPC from CPC base */
+#define CPC_GCB_OFS             0x0000 /* Global Control Block */
+#define CPC_CLCB_OFS            0x2000 /* Core Local Control Block */
+#define CPC_COCB_OFS            0x4000 /* Core Other Control Block */
+
+#define CPCGCBOFS(x)            (CPC_##x##_OFS + CPC_GCB_OFS)
+#define CPCLCBOFS(x)            (CPC_##x##_OFS + CPC_CLCB_OFS)
+#define CPCOCBOFS(x)            (CPC_##x##_OFS + CPC_COCB_OFS)
+
+#define CPCGCB(x)               REGP(_cpc_base, CPCGCBOFS(x))
+#define CPCLCB(x)               REGP(_cpc_base, CPCLCBOFS(x))
+#define CPCOCB(x)               REGP(_cpc_base, CPCOCBOFS(x))
+
+/* Global section registers offsets */
+#define CPC_CSRAPR_OFS          0x000
+#define CPC_SEQDELAY_OFS        0x008
+#define CPC_RAILDELAY_OFS       0x010
+#define CPC_RESETWIDTH_OFS      0x018
+#define CPC_REVID_OFS           0x020
+
+/* Local and Other Local sections registers offsets */
+#define CPC_CMD_OFS             0x000
+#define CPC_STATUS_OFS          0x008
+#define CPC_OTHER_OFS           0x010
+
+/* Command and Status registers fields masks and offsets */
+
+#define CPCL_PWRUP_EVENT_MASK   0x00800000
+#define CPCL_PWRUP_EVENT_SH     23
+
+#define CPCL_STATUS_MASK        0x00780000
+#define CPCL_STATUS_SH          19
+#define CPCL_STATUS_U5          0x6
+#define CPCL_STATUS_U6          0x7
+
+
+#define CPCL_CLKGAT_IMPL_MASK   0x00020000
+#define CPCL_CLKGAT_IMPL_SH     17
+
+#define CPCL_PWRDN_IMPL_MASK    0x00010000
+#define CPCL_PWRDN_IMPL_SH      16
+
+#define CPCL_EJTAG_MASK         0x00008000
+#define CPCL_EJTAG_SH           15
+
+#define CPCL_IO_PWUP_POLICY_MASK    0x00000300
+#define CPCL_IO_PWUP_POLICY_SH      8
+
+#define CPCL_IO_TRFFC_EN_MASK   0x00000010
+#define CPCL_IO_TRFFC_EN_SH     4
+
+#define CPCL_CMD_MASK           0xf
+#define CPCL_CMD_SH             0
+
+extern int __init cpc_probe(unsigned long defaddr, unsigned long defsize);
+
+#endif /* _ASM_CPCREGS_H */
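
Note: the new accessors follow the gcmpregs.h pattern - CPCGCB/CPCLCB/CPCOCB turn a register name into an lvalue at _cpc_base plus block and register offsets. A usage sketch, assuming cpc_probe() succeeded and REGP() behaves as in gcmpregs.h:

	/* Read the local core's power-sequencer state. */
	unsigned int st = (CPCLCB(STATUS) & CPCL_STATUS_MASK) >> CPCL_STATUS_SH;

	if (st == CPCL_STATUS_U5 || st == CPCL_STATUS_U6)
		pr_info("CPC: core sequencer in U5/U6\n");
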
index fdcb80b731f849c94d133a15588d314138faa73b..f5e104899538d822c390c83302688ce3a44bae36 100644 (file)
@@ -24,6 +24,9 @@
 #ifndef cpu_has_tlb
 #define cpu_has_tlb            (cpu_data[0].options & MIPS_CPU_TLB)
 #endif
+#ifndef cpu_has_tlbinv
+#define cpu_has_tlbinv          (cpu_data[0].options & MIPS_CPU_TLBINV)
+#endif
 #ifndef cpu_has_4kex
 #define cpu_has_4kex           (cpu_data[0].options & MIPS_CPU_4KEX)
 #endif
 #define cpu_has_rixi           (cpu_data[0].options & MIPS_CPU_RIXI)
 #endif
 #ifndef cpu_has_mmips
-#define cpu_has_mmips          (cpu_data[0].options & MIPS_CPU_MICROMIPS)
+# ifdef CONFIG_SYS_SUPPORTS_MICROMIPS
+#  define cpu_has_mmips                (cpu_data[0].options & MIPS_CPU_MICROMIPS)
+# else
+#  define cpu_has_mmips                0
+# endif
 #endif
 #ifndef cpu_has_vtag_icache
 #define cpu_has_vtag_icache    (cpu_data[0].icache.flags & MIPS_CACHE_VTAG)
 #ifndef cpu_has_local_ebase
 #define cpu_has_local_ebase    1
 #endif
+#ifdef CONFIG_MIPS_CMP
+#ifndef cpu_has_cm2
+#define cpu_has_cm2             (cpu_data[0].options & MIPS_CPU_CM2)
+#endif
+#ifndef cpu_has_cm2_l2sync
+#define cpu_has_cm2_l2sync      (cpu_data[0].options & MIPS_CPU_CM2_L2SYNC)
+#endif
+#else
+#define cpu_has_cm2             (0)
+#define cpu_has_cm2_l2sync      (0)
+#endif
 
 /*
  * I-Cache snoops remote store.         This only matters on SMP.  Some multiprocessors
 
 /*
  * MIPS32, MIPS64, VR5500, IDT32332, IDT32334 and maybe a few other
- * pre-MIPS32/MIPS53 processors have CLO, CLZ. The IDT RC64574 is 64-bit and
+ * pre-MIPS32/MIPS64 processors have CLO, CLZ. The IDT RC64574 is 64-bit and
  * has CLO and CLZ but not DCLO nor DCLZ.  For 64-bit kernels
  * cpu_has_clo_clz also indicates the availability of DCLO and DCLZ.
  */
 #define cpu_has_userlocal      (cpu_data[0].options & MIPS_CPU_ULRI)
 #endif
 
+#ifndef cpu_has_segments
+#define cpu_has_segments       (cpu_data[0].options & MIPS_CPU_SEGMENTS)
+#endif
+
+#ifndef cpu_has_eva
+#define cpu_has_eva            (cpu_data[0].options & MIPS_CPU_EVA)
+#endif
+
 #ifdef CONFIG_32BIT
 # ifndef cpu_has_nofpuex
 # define cpu_has_nofpuex       (cpu_data[0].options & MIPS_CPU_NOFPUEX)
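
Note: each new cpu_has_* test falls back to a constant where the feature cannot exist (cpu_has_cm2 and cpu_has_cm2_l2sync are hardwired to 0 outside CONFIG_MIPS_CMP), so guarded code is discarded at compile time. Typical use:

	if (cpu_has_eva)
		pr_info("EVA: extended virtual addressing available\n");
	if (cpu_has_cm2_l2sync)
		pr_info("CM2: L2-only SYNC supported\n");
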
index 41401d8eb7d1a3ca13bfbe77c7d78575307e3531..aa7e22c6bd1407acdcfbba356e9a2d4d5b0ff8cd 100644 (file)
@@ -52,11 +52,14 @@ struct cpuinfo_mips {
        unsigned int            cputype;
        int                     isa_level;
        int                     tlbsize;
-       struct cache_desc       icache; /* Primary I-cache */
-       struct cache_desc       dcache; /* Primary D or combined I/D cache */
-       struct cache_desc       scache; /* Secondary cache */
-       struct cache_desc       tcache; /* Tertiary/split secondary cache */
-       int                     srsets; /* Shadow register sets */
+       int                     tlbsizevtlb;
+       int                     tlbsizeftlbsets;
+       int                     tlbsizeftlbways;
+       struct cache_desc       icache; /* Primary I-cache */
+       struct cache_desc       dcache; /* Primary D or combined I/D cache */
+       struct cache_desc       scache; /* Secondary cache */
+       struct cache_desc       tcache; /* Tertiary/split secondary cache */
+       int                     srsets; /* Shadow register sets */
        int                     core;   /* physical core number */
 #ifdef CONFIG_64BIT
        int                     vmbits; /* Virtual memory size in bits */
@@ -79,6 +82,9 @@ struct cpuinfo_mips {
 #define NUM_WATCH_REGS 4
        u16                     watch_reg_masks[NUM_WATCH_REGS];
        unsigned int            kscratch_mask; /* Usable KScratch mask. */
+       unsigned int            segctl0; /* Memory Segmentation Control 0 */
+       unsigned int            segctl1; /* Memory Segmentation Control 1 */
+       unsigned int            segctl2; /* Memory Segmentation Control 2 */
 } __attribute__((aligned(SMP_CACHE_BYTES)));
 
 extern struct cpuinfo_mips cpu_data[];
index dd86ab205483800c1b7b7579c22e21b970c41b4b..87c02db51dc28543d3ac429f38a45b092c03abb8 100644 (file)
@@ -79,6 +79,7 @@
  * These are the PRID's for when 23:16 == PRID_COMP_MIPS
  */
 
+#define PRID_IMP_QEMU           0x0000
 #define PRID_IMP_4KC           0x8000
 #define PRID_IMP_5KC           0x8100
 #define PRID_IMP_20KC          0x8200
 #define PRID_IMP_1074K         0x9a00
 #define PRID_IMP_M14KC         0x9c00
 #define PRID_IMP_M14KEC                0x9e00
+#define PRID_IMP_INTERAPTIV_UP 0xa000
+#define PRID_IMP_INTERAPTIV_MP 0xa100
+#define PRID_IMP_PROAPTIV_UP   0xa200
+#define PRID_IMP_PROAPTIV_MP   0xa300
+#define PRID_IMP_VIRTUOSO       0xa700
+#define PRID_IMP_P5600          0xa800
 
 /*
  * These are the PRID's for when 23:16 == PRID_COMP_SIBYTE
@@ -265,7 +272,8 @@ enum cpu_type_enum {
        CPU_4KC, CPU_4KEC, CPU_4KSC, CPU_24K, CPU_34K, CPU_1004K, CPU_74K,
        CPU_ALCHEMY, CPU_PR4450, CPU_BMIPS32, CPU_BMIPS3300, CPU_BMIPS4350,
        CPU_BMIPS4380, CPU_BMIPS5000, CPU_JZRISC, CPU_LOONGSON1, CPU_M14KC,
-       CPU_M14KEC,
+       CPU_M14KEC, CPU_PROAPTIV, CPU_INTERAPTIV, CPU_VIRTUOSO,
+       CPU_P5600,
 
        /*
         * MIPS64 class processors
@@ -274,9 +282,11 @@ enum cpu_type_enum {
        CPU_CAVIUM_OCTEON, CPU_CAVIUM_OCTEON_PLUS, CPU_CAVIUM_OCTEON2,
        CPU_XLR, CPU_XLP,
 
+       CPU_QEMU,
        CPU_LAST
 };
 
+#define MIPS_FTLB_CAPABLE       0x1
 
 /*
  * ISA Level encodings
@@ -325,6 +335,11 @@ enum cpu_type_enum {
 #define MIPS_CPU_PCI           0x00400000 /* CPU has Perf Ctr Int indicator */
 #define MIPS_CPU_RIXI          0x00800000 /* CPU has TLB Read/eXec Inhibit */
 #define MIPS_CPU_MICROMIPS     0x01000000 /* CPU has microMIPS capability */
+#define MIPS_CPU_SEGMENTS       0x02000000 /* CPU supports memory segmentation */
+#define MIPS_CPU_EVA            0x04000000 /* CPU supports EVA functionality */
+#define MIPS_CPU_TLBINV         0x08000000 /* CPU supports TLBINV/F */
+#define MIPS_CPU_CM2            0x10000000 /* CPU has CM2 */
+#define MIPS_CPU_CM2_L2SYNC     0x20000000 /* CPU has CM2 L2-only SYNC feature */
 
 /*
  * CPU ASE encodings
diff --git a/arch/mips/include/asm/cputime.h b/arch/mips/include/asm/cputime.h
deleted file mode 100644 (file)
index c00eacb..0000000
+++ /dev/null
@@ -1,6 +0,0 @@
-#ifndef __MIPS_CPUTIME_H
-#define __MIPS_CPUTIME_H
-
-#include <asm-generic/cputime.h>
-
-#endif /* __MIPS_CPUTIME_H */
diff --git a/arch/mips/include/asm/current.h b/arch/mips/include/asm/current.h
deleted file mode 100644 (file)
index 4c51401..0000000
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm-generic/current.h>
index cf3ae2480b1d25f5dfd77e8e328e5745b413e2be..38c9a7d379f4f3a062e223a33463c0aab4a752b6 100644 (file)
@@ -36,6 +36,7 @@
 #define EF_MIPS_ABI2           0x00000020
 #define EF_MIPS_OPTIONS_FIRST  0x00000080
 #define EF_MIPS_32BITMODE      0x00000100
+#define EF_MIPS_32BITMODE_FP64  0x00000200
 #define EF_MIPS_ABI            0x0000f000
 #define EF_MIPS_ARCH           0xf0000000
 
@@ -227,6 +228,32 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
 
 #endif /* CONFIG_64BIT */
 
+/*
+ * Ensure we don't load a library with an incompatible architecture via
+ * uselib() - verify the FPU model.
+ */
+#define elf_lib_check_arch(hdr)                                         \
+({                                                                     \
+       int __res = 1;                                                  \
+       struct elfhdr *__h = (hdr);                                     \
+                                                                       \
+       if (test_thread_flag(TIF_32BIT_REGS)) {                         \
+               if ((__h->e_flags & EF_MIPS_ABI2) != 0)                 \
+                       __res = 0;                                      \
+               if (((__h->e_flags & EF_MIPS_ABI) != 0) &&              \
+                   ((__h->e_flags & EF_MIPS_ABI) != EF_MIPS_ABI_O32))  \
+                       __res = 0;                                      \
+               if (__h->e_flags & EF_MIPS_32BITMODE_FP64)             \
+                       __res = 0;                                  \
+       } else {                                                        \
+               if (((__h->e_flags & EF_MIPS_ABI) == 0) ||              \
+                   ((__h->e_flags & EF_MIPS_ABI) == EF_MIPS_ABI_O32))  \
+                       if (!(__h->e_flags & EF_MIPS_32BITMODE_FP64))  \
+                               __res = 0;                          \
+       }                                                               \
+       __res;                                                          \
+})
+
 /*
  * These are used to set parameters in the core dumps.
  */
@@ -249,6 +276,11 @@ extern struct mips_abi mips_abi_n32;
 
 #define SET_PERSONALITY(ex)                                            \
 do {                                                                   \
+       if ((ex).e_flags & EF_MIPS_32BITMODE_FP64)                      \
+           clear_thread_flag(TIF_32BIT_REGS);                          \
+       else                                                            \
+           set_thread_flag(TIF_32BIT_REGS);                            \
+                                                                       \
        if (personality(current->personality) != PER_LINUX)             \
                set_personality(PER_LINUX);                             \
                                                                        \
@@ -262,6 +294,7 @@ do {                                                                        \
 #ifdef CONFIG_MIPS32_N32
 #define __SET_PERSONALITY32_N32()                                      \
        do {                                                            \
+               clear_thread_flag(TIF_32BIT_REGS);                      \
                set_thread_flag(TIF_32BIT_ADDR);                        \
                current->thread.abi = &mips_abi_n32;                    \
        } while (0)
@@ -271,14 +304,18 @@ do {                                                                      \
 #endif
 
 #ifdef CONFIG_MIPS32_O32
-#define __SET_PERSONALITY32_O32()                                      \
+#define __SET_PERSONALITY32_O32(ex)                                     \
        do {                                                            \
-               set_thread_flag(TIF_32BIT_REGS);                        \
+               if ((ex).e_flags & EF_MIPS_32BITMODE_FP64)              \
+                   clear_thread_flag(TIF_32BIT_REGS);                  \
+               else                                                    \
+                   set_thread_flag(TIF_32BIT_REGS);                    \
+                                                                       \
                set_thread_flag(TIF_32BIT_ADDR);                        \
                current->thread.abi = &mips_abi_32;                     \
        } while (0)
 #else
-#define __SET_PERSONALITY32_O32()                                      \
+#define __SET_PERSONALITY32_O32(ex)                                     \
        do { } while (0)
 #endif
 
@@ -289,7 +326,7 @@ do {                                                                        \
             ((ex).e_flags & EF_MIPS_ABI) == 0)                         \
                __SET_PERSONALITY32_N32();                              \
        else                                                            \
-               __SET_PERSONALITY32_O32();                              \
+               __SET_PERSONALITY32_O32(ex);                            \
 } while (0)
 #else
 #define __SET_PERSONALITY32(ex) do { } while (0)
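
Note: elf_lib_check_arch() keeps uselib() from mixing FP register models: a task running with TIF_32BIT_REGS (FR=0) rejects EF_MIPS_32BITMODE_FP64 objects, while an FP64 task rejects plain o32 objects. The FP part of the check, reduced to C (sketch; the ABI-flag tests are omitted):

	static int fp_model_compatible(const struct elfhdr *h, int task_fp32)
	{
		int lib_fp64 = !!(h->e_flags & EF_MIPS_32BITMODE_FP64);

		/* FP32 tasks may only load FP32 libs; FP64 tasks only FP64. */
		return task_fp32 ? !lib_fp64 : lib_fp64;
	}
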
diff --git a/arch/mips/include/asm/emergency-restart.h b/arch/mips/include/asm/emergency-restart.h
deleted file mode 100644 (file)
index 108d8c4..0000000
+++ /dev/null
@@ -1,6 +0,0 @@
-#ifndef _ASM_EMERGENCY_RESTART_H
-#define _ASM_EMERGENCY_RESTART_H
-
-#include <asm-generic/emergency-restart.h>
-
-#endif /* _ASM_EMERGENCY_RESTART_H */
index a690f05f2e73a173773827d521d63c0958284039..7d05241aff9b47f7064bc6d2add3ea85ed38c9ab 100644 (file)
@@ -20,6 +20,7 @@
 #include <asm/kmap_types.h>
 #endif
 
+#ifndef CONFIG_EVA_3GB
 /*
  * Here we define all the compile-time 'special' virtual
  * addresses. The point is to have a constant address at
@@ -127,3 +128,4 @@ extern void fixrange_init(unsigned long start, unsigned long end,
 
 
 #endif
+#endif
index d088e5db49032bf3d2ed0f050ac49eb889446e8f..fe42767ba47ec6a1549cdbff72d51856036a7bb1 100644 (file)
@@ -29,35 +29,20 @@ struct sigcontext;
 struct sigcontext32;
 
 extern void fpu_emulator_init_fpu(void);
-extern void _init_fpu(void);
+extern int _init_fpu(void);
 extern void _save_fp(struct task_struct *);
 extern void _restore_fp(struct task_struct *);
 
-#define __enable_fpu()                                                 \
+/*
+ * This macro is used only to obtain the FIR from the FPU; there appears
+ * to be a bug on 34K cores when a single FPU has affinity to VPE0.
+ */
+#define __enable_fpu()                                                  \
 do {                                                                   \
        set_c0_status(ST0_CU1);                                         \
        enable_fpu_hazard();                                            \
 } while (0)
 
-#define __disable_fpu()                                                        \
-do {                                                                   \
-       clear_c0_status(ST0_CU1);                                       \
-       disable_fpu_hazard();                                           \
-} while (0)
-
-#define enable_fpu()                                                   \
-do {                                                                   \
-       if (cpu_has_fpu)                                                \
-               __enable_fpu();                                         \
-} while (0)
-
-#define disable_fpu()                                                  \
-do {                                                                   \
-       if (cpu_has_fpu)                                                \
-               __disable_fpu();                                        \
-} while (0)
-
-
 #define clear_fpu_owner()      clear_thread_flag(TIF_USEDFPU)
 
 static inline int __is_fpu_owner(void)
@@ -70,27 +55,58 @@ static inline int is_fpu_owner(void)
        return cpu_has_fpu && __is_fpu_owner();
 }
 
-static inline void __own_fpu(void)
+static inline int __own_fpu(void)
 {
-       __enable_fpu();
+       int ret = 0;
+
+#if defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_CPU_MIPS64)
+       if (test_thread_flag(TIF_32BIT_REGS)) {
+               change_c0_status(ST0_CU1|ST0_FR,ST0_CU1);
+               KSTK_STATUS(current) |= ST0_CU1;
+               KSTK_STATUS(current) &= ~ST0_FR;
+               enable_fpu_hazard();
+               if (read_c0_status() & ST0_FR)
+                   ret = SIGFPE;
+       } else {
+               set_c0_status(ST0_CU1|ST0_FR);
+               KSTK_STATUS(current) |= ST0_CU1|ST0_FR;
+               enable_fpu_hazard();
+               if (!(read_c0_status() & ST0_FR))
+                   ret = SIGFPE;
+       }
+#else
+       if (!test_thread_flag(TIF_32BIT_REGS))
+               return SIGFPE;  /* core has no 64bit FPU, so ... */
+
+       set_c0_status(ST0_CU1);
        KSTK_STATUS(current) |= ST0_CU1;
+       enable_fpu_hazard();
+#endif
        set_thread_flag(TIF_USEDFPU);
+       return ret;
 }
 
-static inline void own_fpu_inatomic(int restore)
+static inline int own_fpu_inatomic(int restore)
 {
+       int ret = 0;
+
        if (cpu_has_fpu && !__is_fpu_owner()) {
-               __own_fpu();
-               if (restore)
+               ret = __own_fpu();
+               if (restore && !ret)
                        _restore_fp(current);
        }
+       return ret;
 }
 
-static inline void own_fpu(int restore)
+static inline int own_fpu(int restore)
 {
+       int ret;
+
        preempt_disable();
-       own_fpu_inatomic(restore);
+       ret = own_fpu_inatomic(restore);
        preempt_enable();
+
+       return ret;
 }
 
 static inline void lose_fpu(int save)
@@ -101,21 +117,25 @@ static inline void lose_fpu(int save)
                        _save_fp(current);
                KSTK_STATUS(current) &= ~ST0_CU1;
                clear_thread_flag(TIF_USEDFPU);
-               __disable_fpu();
+               clear_c0_status(ST0_CU1);
+               disable_fpu_hazard();
        }
        preempt_enable();
 }
 
-static inline void init_fpu(void)
+static inline int init_fpu(void)
 {
+       int ret = 0;
+
        preempt_disable();
-       if (cpu_has_fpu) {
-               __own_fpu();
+       if (cpu_has_fpu && !(ret = __own_fpu()))
                _init_fpu();
-       } else {
+       else
                fpu_emulator_init_fpu();
-       }
+
        preempt_enable();
+
+       return ret;
 }
 
 static inline void save_fp(struct task_struct *tsk)
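
Note: because a task's requested FP register model (Status.FR) may be unsupported by the hardware, __own_fpu(), own_fpu(), own_fpu_inatomic() and init_fpu() now return 0 or SIGFPE instead of void. Expected caller pattern (sketch):

	int err = own_fpu(1);		/* enable the FPU, restore the context */
	if (err)
		force_sig(err, current);	/* SIGFPE: FR mode unsupported */
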
index 6ea15815d3ee2f83456f665b5fdfcacf2a537d9a..f8c3a095871d7d3dc19eafdabdb6d7ee123346c1 100644 (file)
@@ -16,6 +16,7 @@
 #include <asm/errno.h>
 #include <asm/war.h>
 
+#ifndef CONFIG_EVA
 #define __futex_atomic_op(insn, ret, oldval, uaddr, oparg)             \
 {                                                                      \
        if (cpu_has_llsc && R10000_LLSC_WAR) {                          \
        } else                                                          \
                ret = -ENOSYS;                                          \
 }
+#else
+#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg)             \
+{                                                                      \
+       if (cpu_has_llsc && R10000_LLSC_WAR) {                          \
+               __asm__ __volatile__(                                   \
+               "       .set    push                            \n"     \
+               "       .set    noat                            \n"     \
+               "       .set    mips3                           \n"     \
+               "1:     ll      %1, %4  # __futex_atomic_op     \n"     \
+               "       .set    mips0                           \n"     \
+               "       " insn  "                               \n"     \
+               "       .set    mips3                           \n"     \
+               "2:     sc      $1, %2                          \n"     \
+               "       beqzl   $1, 1b                          \n"     \
+               __WEAK_LLSC_MB                                          \
+               "3:                                             \n"     \
+               "       .insn                                   \n"     \
+               "       .set    pop                             \n"     \
+               "       .set    mips0                           \n"     \
+               "       .section .fixup,\"ax\"                  \n"     \
+               "4:     li      %0, %6                          \n"     \
+               "       j       3b                              \n"     \
+               "       .previous                               \n"     \
+               "       .section __ex_table,\"a\"               \n"     \
+               "       "__UA_ADDR "\t1b, 4b                    \n"     \
+               "       "__UA_ADDR "\t2b, 4b                    \n"     \
+               "       .previous                               \n"     \
+               : "=r" (ret), "=&r" (oldval), "=R" (*uaddr)             \
+               : "0" (0), "R" (*uaddr), "Jr" (oparg), "i" (-EFAULT)    \
+               : "memory");                                            \
+       } else if (cpu_has_llsc) {                                      \
+               __asm__ __volatile__(                                   \
+               "       .set    push                            \n"     \
+               "       .set    noat                            \n"     \
+               "       .set    eva                             \n"     \
+               "1:     lle     %1, %4                          \n"     \
+               "       .set    mips0                           \n"     \
+               "       " insn  "                               \n"     \
+               "       .set    eva                             \n"     \
+               "2:     sce     $1, %2                          \n"     \
+               "       beqz    $1, 1b                          \n"     \
+               __WEAK_LLSC_MB                                          \
+               "3:                                             \n"     \
+               "       .insn                                   \n"     \
+               "       .set    pop                             \n"     \
+               "       .set    mips0                           \n"     \
+               "       .section .fixup,\"ax\"                  \n"     \
+               "4:     li      %0, %6                          \n"     \
+               "       j       3b                              \n"     \
+               "       .previous                               \n"     \
+               "       .section __ex_table,\"a\"               \n"     \
+               "       "__UA_ADDR "\t1b, 4b                    \n"     \
+               "       "__UA_ADDR "\t2b, 4b                    \n"     \
+               "       .previous                               \n"     \
+               : "=r" (ret), "=&r" (oldval), "=R" (*uaddr)             \
+               : "0" (0), "R" (*uaddr), "Jr" (oparg), "i" (-EFAULT)    \
+               : "memory");                                            \
+       } else                                                          \
+               ret = -ENOSYS;                                          \
+}
+#endif
 
 static inline int
 futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
@@ -131,6 +193,7 @@ futex_atomic_op_inuser(int encoded_op, u32 __user *uaddr)
        return ret;
 }
 
+#ifndef CONFIG_EVA
 static inline int
 futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
                              u32 oldval, u32 newval)
@@ -201,6 +264,80 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
        *uval = val;
        return ret;
 }
+#else
+static inline int
+futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+                             u32 oldval, u32 newval)
+{
+       int ret = 0;
+       u32 val;
+
+       if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+               return -EFAULT;
+
+       if (cpu_has_llsc && R10000_LLSC_WAR) {
+               __asm__ __volatile__(
+               "# futex_atomic_cmpxchg_inatomic                        \n"
+               "       .set    push                                    \n"
+               "       .set    noat                                    \n"
+               "       .set    mips3                                   \n"
+               "1:     ll      %1, %3                                  \n"
+               "       bne     %1, %z4, 3f                             \n"
+               "       .set    mips0                                   \n"
+               "       move    $1, %z5                                 \n"
+               "       .set    mips3                                   \n"
+               "2:     sc      $1, %2                                  \n"
+               "       beqzl   $1, 1b                                  \n"
+               __WEAK_LLSC_MB
+               "3:                                                     \n"
+               "       .insn                                           \n"
+               "       .set    pop                                     \n"
+               "       .section .fixup,\"ax\"                          \n"
+               "4:     li      %0, %6                                  \n"
+               "       j       3b                                      \n"
+               "       .previous                                       \n"
+               "       .section __ex_table,\"a\"                       \n"
+               "       "__UA_ADDR "\t1b, 4b                            \n"
+               "       "__UA_ADDR "\t2b, 4b                            \n"
+               "       .previous                                       \n"
+               : "+r" (ret), "=&r" (val), "=R" (*uaddr)
+               : "R" (*uaddr), "Jr" (oldval), "Jr" (newval), "i" (-EFAULT)
+               : "memory");
+       } else if (cpu_has_llsc) {
+               __asm__ __volatile__(
+               "# futex_atomic_cmpxchg_inatomic                        \n"
+               "       .set    push                                    \n"
+               "       .set    noat                                    \n"
+               "       .set    eva                                     \n"
+               "1:     lle     %1, %3                                  \n"
+               "       bne     %1, %z4, 3f                             \n"
+               "       .set    mips0                                   \n"
+               "       move    $1, %z5                                 \n"
+               "       .set    eva                                     \n"
+               "2:     sce     $1, %2                                  \n"
+               "       beqz    $1, 1b                                  \n"
+               __WEAK_LLSC_MB
+               "3:                                                     \n"
+               "       .insn                                           \n"
+               "       .set    pop                                     \n"
+               "       .section .fixup,\"ax\"                          \n"
+               "4:     li      %0, %6                                  \n"
+               "       j       3b                                      \n"
+               "       .previous                                       \n"
+               "       .section __ex_table,\"a\"                       \n"
+               "       "__UA_ADDR "\t1b, 4b                            \n"
+               "       "__UA_ADDR "\t2b, 4b                            \n"
+               "       .previous                                       \n"
+               : "+r" (ret), "=&r" (val), "=R" (*uaddr)
+               : "R" (*uaddr), "Jr" (oldval), "Jr" (newval), "i" (-EFAULT)
+               : "memory");
+       } else
+               return -ENOSYS;
+
+       *uval = val;
+       return ret;
+}
+#endif
 
 #endif
 #endif /* _ASM_FUTEX_H */
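
Note: the EVA futex paths replace ll/sc with lle/sce, which access the futex word through the user mapping while the CPU is in kernel mode; apart from the opcodes and the required .set eva, the loop, fixup and exception-table plumbing match the non-EVA version. The building block in isolation (hypothetical helper; real code needs an exception-table entry around the access, as above):

	static inline u32 eva_user_load(u32 __user *p)
	{
		u32 v;

		__asm__ __volatile__(
		"	.set	push		\n"
		"	.set	eva		\n"
		"	lwe	%0, 0(%1)	\n"
		"	.set	pop		\n"
		: "=r" (v) : "r" (p));
		return v;
	}
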
index d6c50a7e9edebf13e68338758b297667d87087f2..3b194c78e79e2a02c56c3d2812110d5f8d0cba9e 100644 (file)
@@ -18,7 +18,7 @@ enum fw_memtypes {
 
 typedef struct {
        unsigned long base;     /* Within KSEG0 */
-       unsigned int size;      /* bytes */
+       unsigned long size;     /* bytes */
        enum fw_memtypes type;  /* fw_memtypes */
 } fw_memblock_t;
 
index a7359f77a48eaa9f65e34c74b146d0ec7dc9e79a..3717da70e891fe649076f84e719ab9546b6963d6 100644 (file)
 #define GCMPGCBOFS(reg)                GCMPOFS(GCB, GCB, reg)
 #define GCMPGCBOFSn(reg, n)    GCMPOFSn(GCB, GCB, reg, n)
 #define GCMPCLCBOFS(reg)       GCMPOFS(CLCB, CCB, reg)
+#define GCMPCLCBOFSn(reg, n)     GCMPOFSn(CLCB, CCB, reg, n)
 #define GCMPCOCBOFS(reg)       GCMPOFS(COCB, CCB, reg)
+#define GCMPCOCBOFSn(reg, n)     GCMPOFSn(COCB, CCB, reg, n)
 #define GCMPGDBOFS(reg)                GCMPOFS(GDB, GDB, reg)
 
 /* GCMP register access */
 #define GCMPGCB(reg)                   REGP(_gcmp_base, GCMPGCBOFS(reg))
 #define GCMPGCBn(reg, n)              REGP(_gcmp_base, GCMPGCBOFSn(reg, n))
 #define GCMPCLCB(reg)                  REGP(_gcmp_base, GCMPCLCBOFS(reg))
+#define GCMPCLCBn(reg, n)               REGP(_gcmp_base, GCMPCLCBOFSn(reg, n))
 #define GCMPCOCB(reg)                  REGP(_gcmp_base, GCMPCOCBOFS(reg))
+#define GCMPCOCBn(reg, n)               REGP(_gcmp_base, GCMPCOCBOFSn(reg, n))
 #define GCMPGDB(reg)                   REGP(_gcmp_base, GCMPGDBOFS(reg))
 
 /* Mask generation */
 #define         GCMP_GCB_GCMPB_CMDEFTGT_MEM            1
 #define         GCMP_GCB_GCMPB_CMDEFTGT_IOCU1          2
 #define         GCMP_GCB_GCMPB_CMDEFTGT_IOCU2          3
-#define GCMP_GCB_CCMC_OFS              0x0010  /* Global CM Control */
+#define GCMP_GCB_GCMC_OFS               0x0010  /* Global CM Control */
+#define GCMP_GCB_GCMC2_OFS              0x0018  /* Global CM Control2 */
 #define GCMP_GCB_GCSRAP_OFS            0x0020  /* Global CSR Access Privilege */
 #define         GCMP_GCB_GCSRAP_CMACCESS_SHF   0
 #define         GCMP_GCB_GCSRAP_CMACCESS_MSK   GCMPGCBMSK(GCSRAP_CMACCESS, 8)
 #define GCMP_GCB_GCMPREV_OFS           0x0030  /* GCMP Revision Register */
+#define         GCMP_GCB_GCMPREV_MAJOR_SHF     8
+#define         GCMP_GCB_GCMPREV_MAJOR_MSK     GCMPGCBMSK(GCMPREV_MAJOR, 8)
+#define         GCMP_GCB_GCMPREV_MINOR_SHF     0
+#define         GCMP_GCB_GCMPREV_MINOR_MSK     GCMPGCBMSK(GCMPREV_MINOR, 8)
 #define GCMP_GCB_GCMEM_OFS             0x0040  /* Global CM Error Mask */
 #define GCMP_GCB_GCMEC_OFS             0x0048  /* Global CM Error Cause */
 #define         GCMP_GCB_GMEC_ERROR_TYPE_SHF   27
 #define GCMP_GCB_GCMEO_OFS             0x0058  /* Global CM Error Multiple */
 #define         GCMP_GCB_GMEO_ERROR_2ND_SHF    0
 #define         GCMP_GCB_GMEO_ERROR_2ND_MSK    GCMPGCBMSK(GMEO_ERROR_2ND, 5)
-#define GCMP_GCB_GICBA_OFS             0x0080  /* Global Interrupt Controller Base Address */
+#define GCMP_GCB_GCMCUS_OFS             0x0060  /* GCR Custom Base */
+#define GCMP_GCB_GCMCST_OFS             0x0068  /* GCR Custom Status */
+#define GCMP_GCB_GCML2S_OFS             0x0070  /* Global L2 only Sync Register */
+#define  GCMP_GCB_GCML2S_EN_SHF         0
+#define  GCMP_GCB_GCML2S_EN_MSK         GCMPGCBMSK(GCML2S_EN, 1)
+#define GCMP_GCB_GICBA_OFS              0x0080  /* Global Interrupt Controller Base Address */
 #define         GCMP_GCB_GICBA_BASE_SHF        17
 #define         GCMP_GCB_GICBA_BASE_MSK        GCMPGCBMSK(GICBA_BASE, 15)
 #define         GCMP_GCB_GICBA_EN_SHF          0
 #define         GCMP_GCB_GICBA_EN_MSK          GCMPGCBMSK(GICBA_EN, 1)
+#define GCMP_GCB_CPCBA_OFS              0x0088  /* CPC Base Address */
+#define  GCMP_GCB_CPCBA_SHF             15
+#define  GCMP_GCB_CPCBA_MSK             GCMPGCBMSK(CPCBA, 17)
+#define  GCMP_GCB_CPCBA_EN_SHF          0
+#define  GCMP_GCB_CPCBA_EN_MASK         GCMPGCBMSK(CPCBA_EN, 1)
+
+#define GCMP_GCB_GICST_OFS              0x00D0  /* Global Interrupt Controller Status */
+#define GCMP_GCB_GCSHREV_OFS            0x00E0  /* Cache Revision */
+#define GCMP_GCB_CPCST_OFS              0x00F0  /* CPC Status */
+#define  GCMP_GCB_CPCST_EN_SHF          0
+#define  GCMP_GCB_CPCST_EN_MASK         GCMPGCBMSK(CPCST_EN, 1)
 
 /* GCB Regions */
 #define GCMP_GCB_CMxBASE_OFS(n)                (0x0090+16*(n))         /* Global Region[0-3] Base Address */
 #define         GCMP_GCB_CMxMASK_CMREGTGT_IOCU1 2
 #define         GCMP_GCB_CMxMASK_CMREGTGT_IOCU2 3
 
+#define GCMP_GCB_GAOR0BA_OFS              0x0190  /* Attribute-Only Region0 Base Address */
+#define GCMP_GCB_GAOR0MASK_OFS            0x0198  /* Attribute-Only Region0 Mask  */
+#define GCMP_GCB_GAOR1BA_OFS              0x01A0  /* Attribute-Only Region1 Base Address */
+#define GCMP_GCB_GAOR1MASK_OFS            0x01A8  /* Attribute-Only Region1 Mask */
+
+#define GCMP_GCB_IOCUREV_OFS              0x0200  /* IOCU Revision */
+
+#define GCMP_GCB_GAOR2BA_OFS              0x0210  /* Attribute-Only Region2 Base Address */
+#define GCMP_GCB_GAOR2MASK_OFS            0x0218  /* Attribute-Only Region2 Mask */
+#define GCMP_GCB_GAOR3BA_OFS              0x0220  /* Attribute-Only Region3 Base Address */
+#define GCMP_GCB_GAOR3MASK_OFS            0x0228  /* Attribute-Only Region3 Mask */
 
 /* Core local/Core other control block registers */
 #define GCMP_CCB_RESETR_OFS            0x0000                  /* Reset Release */
 #define GCMP_CCB_COHCTL_OFS            0x0008                  /* Coherence Control */
 #define         GCMP_CCB_COHCTL_DOMAIN_SHF     0
 #define         GCMP_CCB_COHCTL_DOMAIN_MSK     GCMPCCBMSK(COHCTL_DOMAIN, 8)
-#define GCMP_CCB_CFG_OFS               0x0010                  /* Config */
+#define  GCMP_CCB_COHCTL_DOMAIN_ENABLE  (GCMP_CCB_COHCTL_DOMAIN_MSK)
+#define GCMP_CCB_CFG_OFS                0x0010                  /* Config */
 #define         GCMP_CCB_CFG_IOCUTYPE_SHF      10
 #define         GCMP_CCB_CFG_IOCUTYPE_MSK      GCMPCCBMSK(CFG_IOCUTYPE, 2)
 #define          GCMP_CCB_CFG_IOCUTYPE_CPU     0
 #define GCMP_CCB_RESETBASE_OFS         0x0020          /* Reset Exception Base */
 #define         GCMP_CCB_RESETBASE_BEV_SHF     12
 #define         GCMP_CCB_RESETBASE_BEV_MSK     GCMPCCBMSK(RESETBASE_BEV, 20)
+#define GCMP_CCB_RESETBASEEXT_OFS       0x0030          /* Reset Exception Base Extension */
+#define  GCMP_CCB_RESETEXTBASE_BEV_SHF      20
+#define  GCMP_CCB_RESETEXTBASE_BEV_MASK_MSK GCMPCCBMSK(RESETEXTBASE_BEV, 8)
+#define  GCMP_CCB_RESETEXTBASE_LOWBITS_SHF     0
+#define  GCMP_CCB_RESETEXTBASE_BEV_MASK_LOWBITS GCMPCCBMSK(RESETEXTBASE_LOWBITS, 20)
 #define GCMP_CCB_ID_OFS                        0x0028          /* Identification */
 #define GCMP_CCB_DINTGROUP_OFS         0x0030          /* DINT Group Participate */
 #define GCMP_CCB_DBGGROUP_OFS          0x0100          /* DebugBreak Group */
 
+#define GCMP_CCB_TCIDxPRI_OFS(n)        (0x0040+8*(n))  /* TCID x PRIORITY */
+
 extern int __init gcmp_probe(unsigned long, unsigned long);
-extern int __init gcmp_niocu(void);
 extern void __init gcmp_setregion(int, unsigned long, unsigned long, int);
+#ifdef CONFIG_MIPS_CMP
+extern int __init gcmp_niocu(void);
+extern int gcmp_present;
+#else
+#define gcmp_niocu(x)   (0)
+#define gcmp_present    (0)
+#endif
+extern unsigned long _gcmp_base;
+#define GCMP_L2SYNC_OFFSET              0x8100
+
 #endif /* _ASM_GCMPREGS_H */
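
Note: with the MAJOR/MINOR masks added to GCMPREV, code can key CM2-only behaviour (GCML2S, CPC presence) off the CM revision. Decode sketch, assuming gcmp_probe() has set _gcmp_base:

	unsigned int rev   = GCMPGCB(GCMPREV);
	unsigned int major = (rev & GCMP_GCB_GCMPREV_MAJOR_MSK) >>
				GCMP_GCB_GCMPREV_MAJOR_SHF;
	unsigned int minor = (rev & GCMP_GCB_GCMPREV_MINOR_MSK) >>
				GCMP_GCB_GCMPREV_MINOR_SHF;

	pr_info("CM rev %u.%u\n", major, minor);
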
index 7153b32de18e6692e2792be19a34b886d1121ba8..899c5f23254e32f43ae447d0b68c0f293169e87f 100644 (file)
        (GIC_SH_INTR_MAP_TO_VPE_BASE_OFS + (32 * (intr)) + (((vpe) / 32) * 4))
 #define GIC_SH_MAP_TO_VPE_REG_BIT(vpe) (1 << ((vpe) % 32))
 
+#define GIC_DINT_OFS                    0x6000
+
 /* Convert an interrupt number to a byte offset/bit for multi-word registers */
 #define GIC_INTR_OFS(intr) (((intr) / 32)*4)
 #define GIC_INTR_BIT(intr) ((intr) % 32)
 #define GIC_VPE_WD_MAP_OFS             0x0040
 #define GIC_VPE_COMPARE_MAP_OFS                0x0044
 #define GIC_VPE_TIMER_MAP_OFS          0x0048
+#define GIC_VPE_FDEBUG_MAP_OFS          0x004c
 #define GIC_VPE_PERFCTR_MAP_OFS                0x0050
 #define GIC_VPE_SWINT0_MAP_OFS         0x0054
 #define GIC_VPE_SWINT1_MAP_OFS         0x0058
 #define GIC_VPE_OTHER_ADDR_OFS         0x0080
+#define GIC_VPE_ID_OFS                  0x0088
 #define GIC_VPE_WD_CONFIG0_OFS         0x0090
 #define GIC_VPE_WD_COUNT0_OFS          0x0094
 #define GIC_VPE_WD_INITIAL0_OFS                0x0098
 #define GIC_VPE_TENABLE_INT_31_0_OFS   0x1080
 #define GIC_VPE_TENABLE_INT_63_32_OFS  0x1084
 
+#define GIC_VPE_DINT_OFS                0x3000
+#define GIC_VPE_DEBUG_BREAK_OFS         0x3080
+
 /* User Mode Visible Section Register Map */
 #define GIC_UMV_SH_COUNTER_31_00_OFS   0x0000
 #define GIC_UMV_SH_COUNTER_63_32_OFS   0x0004
@@ -351,7 +358,7 @@ struct gic_shared_intr_map {
 
 /* Local GIC interrupts. */
 #define GIC_INT_TMR            (GIC_CPU_INT5)
-#define GIC_INT_PERFCTR                (GIC_CPU_INT5)
+#define GIC_INT_PERFCTR         (GIC_CPU_INT4)
 
 /* Add 2 to convert non-EIC hardware interrupt to EIC vector number. */
 #define GIC_CPU_TO_VEC_OFFSET  (2)
index b7e59853fd33b05df930c3fb795aa03a9a6de23e..4bbea8d8218ec9ef2e0f202012272dbab9e3b639 100644 (file)
@@ -162,6 +162,9 @@ static inline void * isa_bus_to_virt(unsigned long address)
 #define virt_to_bus virt_to_phys
 #define bus_to_virt phys_to_virt
 
+#define phys_to_bus(x)  ((dma_addr_t)(x))
+#define bus_to_phys(x)  ((phys_t)(x))
+
 /*
  * Change "struct page" to physical address.
  */
@@ -170,6 +173,11 @@ static inline void * isa_bus_to_virt(unsigned long address)
 extern void __iomem * __ioremap(phys_t offset, phys_t size, unsigned long flags);
 extern void __iounmap(const volatile void __iomem *addr);
 
+#ifndef CONFIG_PCI
+struct pci_dev;
+static inline void pci_iounmap(struct pci_dev *dev, void __iomem *addr) {}
+#endif
+
 static inline void __iomem * __ioremap_mode(phys_t offset, unsigned long size,
        unsigned long flags)
 {
index 3f11fdb3ed8cf117346a58b6f5387c93cd26738c..dbcadabf157add3253488b09515d64a0d4d5ae5f 100644 (file)
@@ -22,5 +22,6 @@ struct device_node;
 extern int mips_cpu_intc_init(struct device_node *of_node,
                              struct device_node *parent);
 #endif
+extern unsigned mips_smp_c0_status_mask;
 
 #endif /* _ASM_IRQ_CPU_H */
diff --git a/arch/mips/include/asm/kspd.h b/arch/mips/include/asm/kspd.h
deleted file mode 100644 (file)
index ec68329..0000000
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * Copyright (C) 2005 MIPS Technologies, Inc.  All rights reserved.
- *
- *  This program is free software; you can distribute it and/or modify it
- *  under the terms of the GNU General Public License (Version 2) as
- *  published by the Free Software Foundation.
- *
- *  This program is distributed in the hope it will be useful, but WITHOUT
- *  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- *  FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
- *  for more details.
- *
- *  You should have received a copy of the GNU General Public License along
- *  with this program; if not, write to the Free Software Foundation, Inc.,
- *  59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
- *
- */
-
-#ifndef _ASM_KSPD_H
-#define _ASM_KSPD_H
-
-struct kspd_notifications {
-       void (*kspd_sp_exit)(int sp_id);
-
-       struct list_head list;
-};
-
-static inline void kspd_notify(struct kspd_notifications *notify)
-{
-}
-
-#endif
index d44622cd74becb52ab33e80d8f21ab0e7d68ea05..00c03451e757b267420b39a5d470b747be5860be 100644 (file)
@@ -35,10 +35,10 @@ static __inline__ long local_add_return(long i, local_t * l)
                __asm__ __volatile__(
                "       .set    mips3                                   \n"
                "1:"    __LL    "%1, %2         # local_add_return      \n"
-               "       addu    %0, %1, %3                              \n"
+                       __ADDU  "%0, %1, %3                             \n"
                        __SC    "%0, %2                                 \n"
                "       beqzl   %0, 1b                                  \n"
-               "       addu    %0, %1, %3                              \n"
+                       __ADDU  "%0, %1, %3                             \n"
                "       .set    mips0                                   \n"
                : "=&r" (result), "=&r" (temp), "=m" (l->a.counter)
                : "Ir" (i), "m" (l->a.counter)
@@ -49,10 +49,10 @@ static __inline__ long local_add_return(long i, local_t * l)
                __asm__ __volatile__(
                "       .set    mips3                                   \n"
                "1:"    __LL    "%1, %2         # local_add_return      \n"
-               "       addu    %0, %1, %3                              \n"
+                       __ADDU  "%0, %1, %3                             \n"
                        __SC    "%0, %2                                 \n"
                "       beqz    %0, 1b                                  \n"
-               "       addu    %0, %1, %3                              \n"
+                       __ADDU  "%0, %1, %3                             \n"
                "       .set    mips0                                   \n"
                : "=&r" (result), "=&r" (temp), "=m" (l->a.counter)
                : "Ir" (i), "m" (l->a.counter)
@@ -80,10 +80,10 @@ static __inline__ long local_sub_return(long i, local_t * l)
                __asm__ __volatile__(
                "       .set    mips3                                   \n"
                "1:"    __LL    "%1, %2         # local_sub_return      \n"
-               "       subu    %0, %1, %3                              \n"
+                       __SUBU  "%0, %1, %3                             \n"
                        __SC    "%0, %2                                 \n"
                "       beqzl   %0, 1b                                  \n"
-               "       subu    %0, %1, %3                              \n"
+                       __SUBU  "%0, %1, %3                             \n"
                "       .set    mips0                                   \n"
                : "=&r" (result), "=&r" (temp), "=m" (l->a.counter)
                : "Ir" (i), "m" (l->a.counter)
@@ -94,10 +94,10 @@ static __inline__ long local_sub_return(long i, local_t * l)
                __asm__ __volatile__(
                "       .set    mips3                                   \n"
                "1:"    __LL    "%1, %2         # local_sub_return      \n"
-               "       subu    %0, %1, %3                              \n"
+                       __SUBU  "%0, %1, %3                             \n"
                        __SC    "%0, %2                                 \n"
                "       beqz    %0, 1b                                  \n"
-               "       subu    %0, %1, %3                              \n"
+                       __SUBU  "%0, %1, %3                             \n"
                "       .set    mips0                                   \n"
                : "=&r" (result), "=&r" (temp), "=m" (l->a.counter)
                : "Ir" (i), "m" (l->a.counter)
diff --git a/arch/mips/include/asm/local64.h b/arch/mips/include/asm/local64.h
deleted file mode 100644 (file)
index 36c93b5..0000000
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm-generic/local64.h>
index ac28f273449cfdde16cb2456aa24ac8af537c376..660ab64c0fc9936030c3cfc399016a1397edf26e 100644 (file)
  * This handles the memory map.
  * We handle pages at KSEG0 for kernels with 32 bit address space.
  */
-#define PAGE_OFFSET            0x94000000UL
-#define PHYS_OFFSET            0x14000000UL
+#define PAGE_OFFSET    _AC(0x94000000, UL)
+#define PHYS_OFFSET    _AC(0x14000000, UL)
+
+#define UNCAC_BASE     _AC(0xb4000000, UL)     /* 0xa0000000 + PHYS_OFFSET */
+#define IO_BASE                UNCAC_BASE
 
 #include <asm/mach-generic/spaces.h>
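
The _AC() wrappers matter because this header is also pulled into assembly
sources; a sketch of the behaviour (based on <linux/const.h>) plus a sanity
check of the relationship stated in the comment above:

    /* In C,        _AC(0x94000000, UL) expands to 0x94000000UL;
     * in assembly, _AC(0x94000000, UL) expands to plain 0x94000000. */
    #if UNCAC_BASE != (_AC(0xa0000000, UL) + PHYS_OFFSET)
    # error "UNCAC_BASE should equal KSEG1 (0xa0000000) + PHYS_OFFSET"
    #endif
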
 
index fe23034aaf721497af158ee2deec840038478dcf..faa0144daef185aa35cc81dda103a2b7aed9533e 100644 (file)
@@ -14,19 +14,19 @@ struct device;
 static inline dma_addr_t plat_map_dma_mem(struct device *dev, void *addr,
        size_t size)
 {
-       return virt_to_phys(addr);
+       return virt_to_bus(addr);
 }
 
 static inline dma_addr_t plat_map_dma_mem_page(struct device *dev,
        struct page *page)
 {
-       return page_to_phys(page);
+       return phys_to_bus(page_to_phys(page));
 }
 
 static inline unsigned long plat_dma_addr_to_phys(struct device *dev,
        dma_addr_t dma_addr)
 {
-       return dma_addr;
+       return bus_to_phys(dma_addr);
 }
 
 static inline void plat_unmap_dma_mem(struct device *dev, dma_addr_t dma_addr,
@@ -62,7 +62,7 @@ static inline int plat_device_is_coherent(struct device *dev)
 #ifdef CONFIG_DMA_COHERENT
        return 1;
 #else
-       return coherentio;
+       return (coherentio > 0);
 #endif
 }
 
index 5b2f2e68e57f08210be7d4a98370cb3895111adf..fa7aa2b2b7dd4752f22453994c9b8522586f2a55 100644 (file)
 #else
 #define CAC_BASE               _AC(0x80000000, UL)
 #endif
+#ifndef IO_BASE
 #define IO_BASE                        _AC(0xa0000000, UL)
+#endif
+#ifndef UNCAC_BASE
 #define UNCAC_BASE             _AC(0xa0000000, UL)
+#endif
 
 #ifndef MAP_BASE
 #ifdef CONFIG_KVM_GUEST
 #define FIXADDR_TOP            ((unsigned long)(long)(int)0xfffe0000)
 #endif
 
+#ifndef in_module
+/*
+ * If the Instruction Pointer is in module space (0xc0000000), return true;
+ * otherwise, it is in kernel space (0x80000000), return false.
+ *
+ * FIXME: This will not work when the kernel space and module space are the
+ * same. If they are the same, we need to modify scripts/recordmcount.pl,
+ * ftrace_make_nop/call() and the other related parts to ensure the
+ * enabling/disabling of the calling site to _mcount is right for both kernel
+ * and module.
+ *
+ */
+#define in_module(ip)   (((unsigned long)ip) & 0x40000000)
+#endif
+
 #endif /* __ASM_MACH_GENERIC_SPACES_H */
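
A hypothetical caller of in_module(), in the spirit of the ftrace use the
comment describes: pick the _mcount call-site handling by address range.

    static int uses_module_encoding(unsigned long ip)
    {
            return in_module(ip) != 0;      /* module space at 0xc0000000 */
    }
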
index 5edf05d9dad8b648f100905c1b9b08dcdb6ca817..5d6a76434d00b12a7bb4e2078e03b86c2e8f8e49 100644 (file)
 #ifndef _ASM_MACH_IP28_SPACES_H
 #define _ASM_MACH_IP28_SPACES_H
 
-#define CAC_BASE               0xa800000000000000
+#define CAC_BASE       _AC(0xa800000000000000, UL)
 
-#define HIGHMEM_START          (~0UL)
+#define HIGHMEM_START  (~0UL)
 
-#define PHYS_OFFSET            _AC(0x20000000, UL)
+#define PHYS_OFFSET    _AC(0x20000000, UL)
+
+#define UNCAC_BASE     _AC(0xc0000000, UL)     /* 0xa0000000 + PHYS_OFFSET */
+#define IO_BASE                UNCAC_BASE
 
 #include <asm/mach-generic/spaces.h>
 
index 0b793e7bf67e4d77a960a419c32b73200f0d1506..1cfcf314c132cad395e22eab645f5d0be30ae01a 100644 (file)
  * for more details.
  *
  * Chris Dearman (chris@mips.com)
- * Copyright (C) 2007 Mips Technologies, Inc.
+ * Leonid Yegoshin (yegoshin@mips.com)
+ * Copyright (C) 2012 Mips Technologies, Inc.
  */
 #ifndef __ASM_MACH_MIPS_KERNEL_ENTRY_INIT_H
 #define __ASM_MACH_MIPS_KERNEL_ENTRY_INIT_H
 
+       .macro  eva_entry   t1  t2  t0
+       andi    \t1, 0x7    /* Config.K0 == CCA */
+       move    \t2, \t1
+       ins     \t2, \t1, 16, 3
+
+#ifdef CONFIG_EVA_OLD_MALTA_MAP
+
+#ifdef CONFIG_EVA_3GB
+       li      \t0, ((MIPS_SEGCFG_UK << MIPS_SEGCFG_AM_SHIFT) |            \
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (2 << MIPS_SEGCFG_C_SHIFT) |  \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |                \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+       ins     \t0, \t1, 16, 3
+       mtc0    \t0, $5, 2
+#ifdef CONFIG_SMP
+       li      \t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |         \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |             \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+#else
+       li      \t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |         \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |             \
+               (4 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+#endif /* CONFIG_SMP */
+       or      \t0, \t2
+       mtc0    \t0, $5, 3
+#else /* !CONFIG_EVA_3GB */
+       li      \t0, ((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |            \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |                \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+       or      \t0, \t2
+       mtc0    \t0, $5, 2
+#ifdef CONFIG_SMP
+       li      \t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |         \
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (2 << MIPS_SEGCFG_C_SHIFT) |  \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |             \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+#else
+       li      \t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |         \
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (2 << MIPS_SEGCFG_C_SHIFT) |  \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |             \
+               (4 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+#endif /* CONFIG_SMP */
+       ins     \t0, \t1, 16, 3
+       mtc0    \t0, $5, 3
+#endif /* CONFIG_EVA_3GB */
+       li      \t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |         \
+               (6 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |             \
+               (4 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+
+#else /* !CONFIG_EVA_OLD_MALTA_MAP */
+
+#ifdef CONFIG_EVA_3GB
+       li      \t0, ((MIPS_SEGCFG_UK << MIPS_SEGCFG_AM_SHIFT) |            \
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (2 << MIPS_SEGCFG_C_SHIFT) |  \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |                \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+       ins     \t0, \t1, 16, 3
+       mtc0    \t0, $5, 2
+       li      \t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |          \
+               (6 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |             \
+               (5 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+       or      \t0, \t2
+       mtc0    \t0, $5, 3
+       li      \t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |         \
+               (3 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |             \
+               (1 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+#else /* !CONFIG_EVA_3GB */
+       li      \t0, ((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |            \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |                \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+       or      \t0, \t2
+       mtc0    \t0, $5, 2
+       li      \t0, ((MIPS_SEGCFG_UK << MIPS_SEGCFG_AM_SHIFT) |            \
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (2 << MIPS_SEGCFG_C_SHIFT) |  \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_UK << MIPS_SEGCFG_AM_SHIFT) |                \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+       ins     \t0, \t1, 16, 3
+       mtc0    \t0, $5, 3
+       li      \t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |         \
+               (2 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) |                              \
+               (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |             \
+               (0 << MIPS_SEGCFG_PA_SHIFT) |                               \
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
+#endif /* CONFIG_EVA_3GB */
+
+#endif /* CONFIG_EVA_OLD_MALTA_MAP */
+
+       or      \t0, \t2
+       mtc0    \t0, $5, 4
+       jal     mips_ihb
+
+       mfc0    \t0, $16, 5
+       li      \t2, 0x40000000      /* K bit */
+       or      \t0, \t0, \t2
+       mtc0    \t0, $16, 5
+       sync
+       jal     mips_ihb
+       .endm
+
+
        .macro  kernel_entry_setup
 #ifdef CONFIG_MIPS_MT_SMTC
        mfc0    t0, CP0_CONFIG
@@ -40,13 +173,65 @@ nonmt_processor:
        .asciz  "SMTC kernel requires the MT ASE to run\n"
        __FINIT
 0:
-#endif
+#endif /* CONFIG_MIPS_MT_SMTC */
+
+#ifdef CONFIG_EVA
+       sync
+       ehb
+
+       mfc0    t1, CP0_CONFIG
+       bgez    t1, 9f
+       mfc0    t0, CP0_CONFIG, 1
+       bgez    t0, 9f
+       mfc0    t0, CP0_CONFIG, 2
+       bgez    t0, 9f
+       mfc0    t0, CP0_CONFIG, 3
+       sll     t0, t0, 6   /* SC bit */
+       bgez    t0, 9f
+
+       eva_entry t1 t2 t0
+       PTR_LA  t0, mips_cca
+       sw      t1, 0(t0)
+       b       0f
+
+9:
+       /* Assume we came from YAMON... */
+       PTR_LA  v0, 0x9fc00534  /* YAMON print */
+       lw      v0, (v0)
+       move    a0, zero
+       PTR_LA  a1, nonsc_processor
+       jal     v0
+
+       PTR_LA  v0, 0x9fc00520  /* YAMON exit */
+       lw      v0, (v0)
+       li      a0, 1
+       jal     v0
+
+1:     b       1b
+       nop
+
+       __INITDATA
+nonsc_processor:
+       .asciz  "Kernel requires Segment/EVA support to run\n"
+       __FINIT
+#endif /* CONFIG_EVA */
+
+0:
        .endm
 
 /*
  * Do SMP slave processor setup necessary before we can safely execute C code.
  */
        .macro  smp_slave_setup
+
+#ifdef CONFIG_EVA
+
+       sync
+       ehb
+       mfc0    t1, CP0_CONFIG
+       eva_entry   t1 t2 t0
+#endif /* CONFIG_EVA */
+
        .endm
 
 #endif /* __ASM_MACH_MIPS_KERNEL_ENTRY_INIT_H */
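
A C restatement (sketch only, helper name made up) of one step of the
eva_entry macro above: build a SegCtl value from two 16-bit segment
configurations, then merge the CCA read from Config.K0 into each C field,
which is what the "ins t2, t1, 16, 3" / "or t0, t2" pair does.

    static inline unsigned int segctl_pair(unsigned int cca)
    {
            unsigned int cfg = (MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |
                               (0 << MIPS_SEGCFG_PA_SHIFT) |
                               (1 << MIPS_SEGCFG_EU_SHIFT);

            cfg |= cfg << 16;               /* same config in both halves */
            return cfg | (cca & 0x7) | ((cca & 0x7) << 16);
    }
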
diff --git a/arch/mips/include/asm/mach-malta/spaces.h b/arch/mips/include/asm/mach-malta/spaces.h
new file mode 100644 (file)
index 0000000..ea3c928
--- /dev/null
@@ -0,0 +1,155 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Author: Leonid Yegoshin (yegoshin@mips.com)
+ * Copyright (C) 2012 MIPS Technologies, Inc.
+ */
+
+#ifndef _ASM_MALTA_SPACES_H
+#define _ASM_MALTA_SPACES_H
+
+#ifdef CONFIG_EVA
+
+#ifdef CONFIG_EVA_OLD_MALTA_MAP
+
+/* Classic (old) Malta board EVA memory map, with the RocIt2-1.418 system
+   controller.
+
+   This memory map is the traditional one, but the IOCU only works by
+   mirroring the first 256MB of memory, so it effectively can't be used
+   with EVA.
+
+   Phys memory - 80000000 to ffffffff - 2GB (the last 64KB is reserved to
+                keep the HIGHMEM macro arithmetic correct)
+   KV memory   - 0 - 7fffffff (2GB) or even higher
+   Kernel code stays in the same place (80000000) to keep YAMON and the
+                other infrastructure unchanged, so KSEG0 is used "illegally":
+                via the Malta mirroring of the first 256MB (see also
+                __pa_symbol below) for SMP kernels, and via a direct map to
+                80000000 for non-SMP.  SMP kernels require the mirror
+                because of the way YAMON starts secondary cores.
+   IO/UNCAC_ADDR ... may even be KSEG1 or KSEG3, but physaddr is 0UL
+                (256MB-512MB)
+   CAC_ADDR      ... reverts the effect of UNCAC_ADDR
+   VMALLOC is cut down to C0000000 - E0000000 (KSEG2)
+   PKMAP/kmap_coherent must not be used - no HIGHMEM
+
+   PCI bus:
+   PCI devices are located in 256MB-512MB of PCI space,
+   the phys memory window is located in 2GB-4GB of PCI space.
+
+   Note: CONFIG_EVA_3GB actually gives only 2GB, but it shifts the IO space
+        up into KSEG3.  This is done as preparation work for non-Malta
+        boards.
+ */
+
+#define PAGE_OFFSET             _AC(0x0, UL)
+#define PHYS_OFFSET             _AC(0x80000000, UL)
+#define HIGHMEM_START           _AC(0xffff0000, UL)
+
+/* Trick definition: use kernel symbols from KSEG0, but keep all dynamic
+   memory in EVA's MUSUK KUSEG segment (=0x80000000), rather than moving
+   the kernel code from 80000000 down to zero.
+   Don't copy this for other boards - the kernel is likely to live at a
+   different location there. */
+#define __pa_symbol(x)          (RELOC_HIDE((unsigned long)(x), 0))
+
+#define YAMON_BASE              _AC(0x80000000, UL)
+
+#else /* !CONFIG_EVA_OLD_MALTA_MAP */
+
+/* New Malta board EVA memory map basics:
+
+   This map is designed to work with the IOCU on Malta; the IOCU can't
+   support the mirroring of the first 256MB used by the old memory map.
+
+   Phys memory - 00000000 to ffffffff - up to 4GB
+                (the last 64KB is reserved to keep the HIGHMEM macro
+                 arithmetic correct; the 256M-512M memory hole is for I/O
+                 registers and PCI)
+                For EVA_3GB the first 512MB is unused, so use 4GB of memory.
+   KV memory   - 0 - 7fffffff (2GB) or even higher,
+   Kernel code stays in the same place (80000000) to keep YAMON and the
+                other infrastructure unchanged, at least for now.
+                It needs to be copied for 3GB configurations.
+   IO/UNCAC_ADDR ... may even be KSEG1 or KSEG3, but physaddr is 0UL
+                (256MB-512MB)
+   CAC_ADDR      ... reverts the effect of UNCAC_ADDR
+   VMALLOC is cut down to C0000000 - E0000000 (KSEG2)
+   PKMAP/kmap_coherent must not be used - no HIGHMEM
+
+   PCI bus:
+   PCI devices are located in 2GB + (256MB-512MB) of PCI space
+   (non-transparent); the phys memory window is located in 0GB-2GB of
+   PCI space.
+
+   Note: the 3GB configuration doesn't work until the PCI bridges loop
+        problem is fixed, and that code is not finished yet (the loop was
+        discovered after this work was done)
+ */
+
+#define PAGE_OFFSET             _AC(0x0, UL)
+
+#ifdef CONFIG_EVA_3GB
+/* skip first 512MB */
+#define PHYS_OFFSET             _AC(0x20000000, UL)
+#else
+#define PHYS_OFFSET             _AC(0x0, UL)
+#define YAMON_BASE              _AC(0x80000000, UL)
+#endif
+
+#define HIGHMEM_START           _AC(0xffff0000, UL)
+
+/* Trick definition: use kernel symbols from KSEG0, but keep all dynamic
+   memory in EVA's MUSUK KUSEG segment, rather than moving the kernel
+   code from 80000000 down to zero.
+   Don't copy this for other boards - the kernel is likely to live at a
+   different location there. */
+#define __pa_symbol(x)          __pa(CPHYSADDR(RELOC_HIDE((unsigned long)(x), 0)))
+
+#endif /* CONFIG_EVA_OLD_MALTA_MAP */
+
+/* INDEX_BASE is defined here to underline the fact that in EVA mode the
+   kernel may live somewhere other than CKSEG0, so CKSEG0 may map to a
+   "surprise" location and index-based CACHE ops may give unexpected
+   results. */
+#define INDEX_BASE      CKSEG0
+
+/*
+ * If the Instruction Pointer is in module space (0xc0000000), return true;
+ * otherwise, it is in kernel space (0x80000000), return false.
+ *
+ * FIXME: This will not work when the kernel space and module space are the
+ * same. If they are the same, we need to modify scripts/recordmcount.pl,
+ * ftrace_make_nop/call() and the other related parts to ensure the
+ * enabling/disabling of the calling site to _mcount is right for both kernel
+ * and module.
+ *
+ * It must be changed for 3.5GB memory map. LY22
+ */
+#define in_module(ip)   (((unsigned long)ip) & 0x40000000)
+
+#ifdef CONFIG_EVA_3GB
+
+#define UNCAC_BASE              _AC(0xe0000000, UL)
+#define IO_BASE                 UNCAC_BASE
+
+#define KSEG
+#define KUSEG                   0x00000000
+#define KSEG0                   0x80000000
+#define KSEG3                   0xa0000000
+#define KSEG2                   0xc0000000
+#define KSEG1                   0xe0000000
+
+#define CKUSEG                  0x00000000
+#define CKSEG0                  0x80000000
+#define CKSEG3                  0xa0000000
+#define CKSEG2                  _AC(0xc0000000, UL)
+#define CKSEG1                  0xe0000000
+
+#define MAP_BASE                CKSEG2
+#define VMALLOC_END             (MAP_BASE + _AC(0x20000000, UL) - 2*PAGE_SIZE)
+
+#endif  /* CONFIG_EVA_3GB */
+
+#define IO_SIZE                 _AC(0x10000000, UL)
+#define IO_SHIFT                _AC(0x10000000, UL)
+
+#endif  /* CONFIG_EVA */
+
+#include <asm/mach-generic/spaces.h>
+
+#endif /* __ASM_MALTA_SPACES_H */
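
Illustration of the old-map __pa_symbol() trick (hypothetical function): with
PHYS_OFFSET at 0x80000000 and kernel text left in KSEG0, the "physical"
address of a symbol is deliberately its unmodified virtual address.

    static unsigned long symbol_pa_demo(void)
    {
            extern char _text[];

            return __pa_symbol(_text);  /* == (unsigned long)_text on the old map */
    }
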
index bd9746fbe4af8a6b098df65c5f386cecec86f2e4..48616816bcbc9073b3184e83a6276dad687d6feb 100644 (file)
 #define ASCII_DISPLAY_WORD_BASE           0x1f000410
 #define ASCII_DISPLAY_POS_BASE    0x1f000418
 
-/*
- * Reset register.
- */
-#define SOFTRES_REG      0x1f000500
-#define GORESET                  0x42
-
 /*
  * Revision register.
  */
index 722bc889eab555dfc29d2c30dbb01642df8119bb..9750dce34c5e6015f4faa8dee3589210f51bbb4e 100644 (file)
@@ -54,8 +54,8 @@ static inline unsigned long get_msc_port_base(unsigned long reg)
 /*
  * GCMP Specific definitions
  */
-#define GCMP_BASE_ADDR                 0x1fbf8000
-#define GCMP_ADDRSPACE_SZ              (256 * 1024)
+#define GCMP_BASE_ADDR_MALTA            0x1fbf8000
+#define GCMP_ADDRSPACE_SZ_MALTA         (64 * 1024)
 
 /*
  * GIC Specific definitions
@@ -63,6 +63,12 @@ static inline unsigned long get_msc_port_base(unsigned long reg)
 #define GIC_BASE_ADDR                  0x1bdc0000
 #define GIC_ADDRSPACE_SZ               (128 * 1024)
 
+/*
+ * CPC Specific definitions
+ */
+#define CPC_BASE_ADDR_MALTA             0x1bde0000
+#define CPC_ADDRSPACE_SZ_MALTA          (32 * 1024)
+
 /*
  * MSC01 BIU Specific definitions
  * FIXME : These should be elsewhere ?
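
Hypothetical platform-setup sketch using the new Malta-specific constants:

    static void __iomem *map_malta_cpc(void)
    {
            return ioremap(CPC_BASE_ADDR_MALTA, CPC_ADDRSPACE_SZ_MALTA);
    }
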
index 8342d16b2066c89472b7832fb4e45ac0420b00d5..f4f3fe3bd81831685ffd1a76122ab387341d0627 100644 (file)
  * and should be written as zero.
  */
 #define FPU_CSR_RSVD   0x001c0000
+/* ... but FPU2 uses those bits */
+#define FPU_CSR_NAN2008 0x00040000
+#define FPU_CSR_ABS2008 0x00080000
+#define FPU_CSR_MAC2008 0x00100000
+
+#define FPU_CSR_DEFAULT 0x00000000
 
 /*
  * X the exception cause indicator
 #define ST0_BEV                        0x00400000
 #define ST0_RE                 0x02000000
 #define ST0_FR                 0x04000000
+#define _ST0_FR                 (26)    /* position of the FR bit */
 #define ST0_CU                 0xf0000000
 #define ST0_CU0                        0x10000000
 #define ST0_CU1                        0x20000000
 #define MIPS_CONF1_IA          (_ULCAST_(7) << 16)
 #define MIPS_CONF1_IL          (_ULCAST_(7) << 19)
 #define MIPS_CONF1_IS          (_ULCAST_(7) << 22)
-#define MIPS_CONF1_TLBS                (_ULCAST_(63)<< 25)
+#define MIPS_CONF1_TLBS_SHIFT   (25)
+#define MIPS_CONF1_TLBS_SIZE    (6)
+#define MIPS_CONF1_TLBS         (_ULCAST_(63)<< MIPS_CONF1_TLBS_SHIFT)
 
 #define MIPS_CONF2_SA          (_ULCAST_(15)<<  0)
 #define MIPS_CONF2_SL          (_ULCAST_(15)<<  4)
 #define MIPS_CONF3_TL          (_ULCAST_(1) <<  0)
 #define MIPS_CONF3_SM          (_ULCAST_(1) <<  1)
 #define MIPS_CONF3_MT          (_ULCAST_(1) <<  2)
+#define MIPS_CONF3_CDMM                (_ULCAST_(1) <<  3)
 #define MIPS_CONF3_SP          (_ULCAST_(1) <<  4)
 #define MIPS_CONF3_VINT                (_ULCAST_(1) <<  5)
 #define MIPS_CONF3_VEIC                (_ULCAST_(1) <<  6)
 #define MIPS_CONF3_LPA         (_ULCAST_(1) <<  7)
+#define MIPS_CONF3_ITL         (_ULCAST_(1) <<  8)
+#define MIPS_CONF3_CTXTC       (_ULCAST_(1) <<  9)
 #define MIPS_CONF3_DSP         (_ULCAST_(1) << 10)
 #define MIPS_CONF3_DSP2P       (_ULCAST_(1) << 11)
 #define MIPS_CONF3_RXI         (_ULCAST_(1) << 12)
 #define MIPS_CONF3_ULRI                (_ULCAST_(1) << 13)
 #define MIPS_CONF3_ISA         (_ULCAST_(3) << 14)
-#define MIPS_CONF3_ISA_OE      (_ULCAST_(3) << 16)
+#define MIPS_CONF3_ISA_OE      (_ULCAST_(1) << 16)
+#define MIPS_CONF3_MCU         (_ULCAST_(1) << 17)
+#define MIPS_CONF3_MMAR                (_ULCAST_(7) << 18)
+#define MIPS_CONF3_IPLW                (_ULCAST_(3) << 21)
 #define MIPS_CONF3_VZ          (_ULCAST_(1) << 23)
-
+#define MIPS_CONF3_PW          (_ULCAST_(1) << 24)
+#define MIPS_CONF3_SC          (_ULCAST_(1) << 25)
+#define MIPS_CONF3_BI          (_ULCAST_(1) << 26)
+#define MIPS_CONF3_BP          (_ULCAST_(1) << 27)
+#define MIPS_CONF3_MSA         (_ULCAST_(1) << 28)
+#define MIPS_CONF3_CMGCR       (_ULCAST_(1) << 29)
+#define MIPS_CONF3_BPG         (_ULCAST_(1) << 30)
+
+#define MIPS_CONF4_MMUSIZEEXT_SHIFT    (0)
 #define MIPS_CONF4_MMUSIZEEXT  (_ULCAST_(255) << 0)
+#define MIPS_CONF4_FTLBSETS_SHIFT      (0)
+#define MIPS_CONF4_FTLBSETS    (_ULCAST_(15) << MIPS_CONF4_FTLBSETS_SHIFT)
+#define MIPS_CONF4_FTLBWAYS_SHIFT      (4)
+#define MIPS_CONF4_FTLBWAYS    (_ULCAST_(15) << MIPS_CONF4_FTLBWAYS_SHIFT)
+#define MIPS_CONF4_FTLBPAGESIZE_SHIFT  (8)
+#define MIPS_CONF4_FTLBPAGESIZE        (_ULCAST_(31) << MIPS_CONF4_FTLBPAGESIZE_SHIFT)
 #define MIPS_CONF4_MMUEXTDEF   (_ULCAST_(3) << 14)
 #define MIPS_CONF4_MMUEXTDEF_MMUSIZEEXT (_ULCAST_(1) << 14)
 
+#define MIPS_CONF4_MMUEXTDEF_FTLBSIZEEXT       (_ULCAST_(2) << 14)
+#define MIPS_CONF4_MMUEXTDEF_VTLBSIZEEXT       (_ULCAST_(3) << 14)
+#define MIPS_CONF4_KSCREXIST   (_ULCAST_(255) << 16)
+#define MIPS_CONF4_VTLBSIZEEXT_SHIFT   (24)
+#define MIPS_CONF4_VTLBSIZEEXT (_ULCAST_(15) << MIPS_CONF4_VTLBSIZEEXT_SHIFT)
+#define MIPS_CONF4_AE          (_ULCAST_(1) << 28)
+#define MIPS_CONF4_IE          (_ULCAST_(3) << 29)
+#define MIPS_CONF4_TLBINV      (_ULCAST_(2) << 29)
+
+#define MIPS_CONF5_EVA         (_ULCAST_(1) << 28)
+#define MIPS_CONF5_CV          (_ULCAST_(1) << 29)
+#define MIPS_CONF5_K           (_ULCAST_(1) << 30)
+
+#define MIPS_CONF6_JRCD                (_ULCAST_(1) << 0)
+#define MIPS_CONF6_JRCP                (_ULCAST_(1) << 1)
 #define MIPS_CONF6_SYND                (_ULCAST_(1) << 13)
+#define MIPS_CONF6_SPCD                (_ULCAST_(1) << 14)
+#define MIPS_CONF6_FTLBEN       (_ULCAST_(1) << 15)
 
 #define MIPS_CONF7_WII         (_ULCAST_(1) << 31)
 #define MIPS_CONF7_AR          (_ULCAST_(1) << 16)
 #define MIPS_CONF7_IAR         (_ULCAST_(1) << 10)
 #define MIPS_CONF7_RPS         (_ULCAST_(1) << 2)
 
+/* EntryHi bit definitions */
+#define MIPS_EHINV             (_ULCAST_(1) << 10)
 
 /*
  * Bits in the MIPS32/64 coprocessor 1 (FPU) revision register.
 #define MIPS_FPIR_W            (_ULCAST_(1) << 20)
 #define MIPS_FPIR_L            (_ULCAST_(1) << 21)
 #define MIPS_FPIR_F64          (_ULCAST_(1) << 22)
+/* additional bits in the MIPS32/64 coprocessor 1 (FPU) revision register */
+#define MIPS_FPIR_HAS2008       (_ULCAST_(1) << 23)
+#define MIPS_FPIR_FC            (_ULCAST_(1) << 24)
+
+/*
+ * Bits in the MIPS32 Memory Segmentation registers.
+ */
+#define MIPS_SEGCFG_PA_SHIFT   9
+#define MIPS_SEGCFG_PA         (_ULCAST_(127) << MIPS_SEGCFG_PA_SHIFT)
+#define MIPS_SEGCFG_AM_SHIFT   4
+#define MIPS_SEGCFG_AM         (_ULCAST_(7) << MIPS_SEGCFG_AM_SHIFT)
+#define MIPS_SEGCFG_EU_SHIFT   3
+#define MIPS_SEGCFG_EU         (_ULCAST_(1) << MIPS_SEGCFG_EU_SHIFT)
+#define MIPS_SEGCFG_C_SHIFT    0
+#define MIPS_SEGCFG_C          (_ULCAST_(7) << MIPS_SEGCFG_C_SHIFT)
+
+#define MIPS_SEGCFG_UUSK       _ULCAST_(7)
+#define MIPS_SEGCFG_USK                _ULCAST_(5)
+#define MIPS_SEGCFG_MUSUK      _ULCAST_(4)
+#define MIPS_SEGCFG_MUSK       _ULCAST_(3)
+#define MIPS_SEGCFG_MSK                _ULCAST_(2)
+#define MIPS_SEGCFG_MK         _ULCAST_(1)
+#define MIPS_SEGCFG_UK         _ULCAST_(0)
+
+/* EBase register bit definitions */
+#define MIPS_EBASE_WG           (_ULCAST_(1) << 11)
 
 #ifndef __ASSEMBLY__
 
@@ -931,6 +1005,7 @@ do {                                                                       \
 #define write_c0_epc(val)      __write_ulong_c0_register($14, 0, val)
 
 #define read_c0_prid()         __read_32bit_c0_register($15, 0)
+#define read_c0_cmgcrbase()     __read_ulong_c0_register($15, 3)
 
 #define read_c0_config()       __read_32bit_c0_register($16, 0)
 #define read_c0_config1()      __read_32bit_c0_register($16, 1)
@@ -1096,6 +1171,15 @@ do {                                                                     \
 #define read_c0_ebase()                __read_32bit_c0_register($15, 1)
 #define write_c0_ebase(val)    __write_32bit_c0_register($15, 1, val)
 
+/* MIPSR3 */
+#define read_c0_segctl0()      __read_32bit_c0_register($5, 2)
+#define write_c0_segctl0(val)  __write_32bit_c0_register($5, 2, val)
+
+#define read_c0_segctl1()      __read_32bit_c0_register($5, 3)
+#define write_c0_segctl1(val)  __write_32bit_c0_register($5, 3, val)
+
+#define read_c0_segctl2()      __read_32bit_c0_register($5, 4)
+#define write_c0_segctl2(val)  __write_32bit_c0_register($5, 4, val)
 
 /* Cavium OCTEON (cnMIPS) */
 #define read_c0_cvmcount()     __read_ulong_c0_register($9, 6)
@@ -1180,6 +1264,22 @@ do {                                                                     \
        __res;                                                          \
 })
 
+#define write_32bit_cp1_register(dest,value)                            \
+({                                                                     \
+       __asm__ __volatile__(                                           \
+       "       .set    push                                    \n"     \
+       "       .set    reorder                                 \n"     \
+       "       # gas fails to assemble cfc1 for some archs,    \n"     \
+       "       # like Octeon.                                  \n"     \
+       "       .set    mips1                                   \n"     \
+       "       ctc1    %0,"STR(dest)"                          \n"     \
+       "       .set    pop                                     \n"     \
+       :: "r" (value));                                                \
+})
+
+/*
+ * Macros to access the DSP ASE registers
+ */
 #ifdef HAVE_AS_DSP
 #define rddsp(mask)                                                    \
 ({                                                                     \
@@ -1632,6 +1732,15 @@ static inline void tlb_write_random(void)
                ".set reorder");
 }
 
+/* Invalidate the whole TLB (TLBINVF).  Encoded as a raw .word because
+ * older assemblers do not know the instruction. */
+static inline void tlbinvf(void)
+{
+       __asm__ __volatile__(
+               ".set push\n\t"
+               ".set noreorder\n\t"
+               ".word 0x42000004\n\t"          /* tlbinvf */
+               ".set pop");
+}
+
 /*
  * Manipulate bits in a c0 register.
  */
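
Illustrative decode (hypothetical helper) using the new Config1 field
constants above: the VTLB entry count is the MMUSize field plus one.

    static inline unsigned int vtlb_entries(void)
    {
            return ((read_c0_config1() & MIPS_CONF1_TLBS) >>
                    MIPS_CONF1_TLBS_SHIFT) + 1;
    }
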
index 516e6e9a55940ec8abea80160278a744dc220c88..3389d030373011d4284bee833345fcc6e56acecb 100644 (file)
@@ -40,6 +40,7 @@ do {                                                                  \
        do {                                                            \
                TLBMISS_HANDLER_SETUP_PGD(swapper_pg_dir);              \
                write_c0_xcontext((unsigned long) smp_processor_id() << 51); \
+               back_to_back_c0_hazard();                               \
        } while (0)
 
 #else /* CONFIG_MIPS_PGD_C0_CONTEXT: using  pgd_current*/
diff --git a/arch/mips/include/asm/mutex.h b/arch/mips/include/asm/mutex.h
deleted file mode 100644 (file)
index 458c1f7..0000000
+++ /dev/null
@@ -1,9 +0,0 @@
-/*
- * Pull in the generic implementation for the mutex fastpath.
- *
- * TODO: implement optimized primitives instead, or leave the generic
- * implementation in place, or pick the atomic_xchg() based generic
- * implementation. (see asm-generic/mutex-xchg.h for details)
- */
-
-#include <asm-generic/mutex-dec.h>
index 023f3da9f5e395142b6c3568539a2216ab9547c5..dd49204a2cadf2e88be2001d1fff8ee36496fb7b 100644 (file)
@@ -33,6 +33,9 @@
 #define PAGE_SIZE      (_AC(1,UL) << PAGE_SHIFT)
 #define PAGE_MASK      (~((1 << PAGE_SHIFT) - 1))
 
+/* this is used for calculating real page sizes and should stay the same */
+#define BASIC_PAGE_SHIFT    12
+
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
 #define HPAGE_SHIFT    (PAGE_SHIFT + PAGE_SHIFT - 3)
 #define HPAGE_SIZE     (_AC(1,UL) << HPAGE_SHIFT)
@@ -168,7 +171,9 @@ typedef struct { unsigned long pgprot; } pgprot_t;
  * https://patchwork.linux-mips.org/patch/1541/
  */
 
-#define __pa_symbol(x) __pa(RELOC_HIDE((unsigned long)(x), 0))
+#ifndef __pa_symbol
+#define __pa_symbol(x)  __pa(RELOC_HIDE((unsigned long)(x), 0))
+#endif
 
 #define pfn_to_kaddr(pfn)      __va((pfn) << PAGE_SHIFT)
 
@@ -205,13 +210,13 @@ extern int __virt_addr_valid(const volatile void *kaddr);
 #define virt_addr_valid(kaddr)                                         \
        __virt_addr_valid((const volatile void *) (kaddr))
 
-#define VM_DATA_DEFAULT_FLAGS  (VM_READ | VM_WRITE | VM_EXEC | \
+#define VM_DATA_DEFAULT_FLAGS   (VM_READ | VM_WRITE | \
+                                VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
+#define VM_STACK_DEFAULT_FLAGS   (VM_READ | VM_WRITE | VM_EXEC | \
                                 VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC)
 
-#define UNCAC_ADDR(addr)       ((addr) - PAGE_OFFSET + UNCAC_BASE +    \
-                                                               PHYS_OFFSET)
-#define CAC_ADDR(addr)         ((addr) - UNCAC_BASE + PAGE_OFFSET -    \
-                                                               PHYS_OFFSET)
+#define UNCAC_ADDR(addr)       ((addr) - PAGE_OFFSET + UNCAC_BASE)
+#define CAC_ADDR(addr)         ((addr) - UNCAC_BASE + PAGE_OFFSET)
 
 #include <asm-generic/memory_model.h>
 #include <asm-generic/getorder.h>
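
Round-trip sketch (hypothetical function) for the reverted macros above:
CAC_ADDR() now exactly undoes UNCAC_ADDR(), with no PHYS_OFFSET term.

    static inline unsigned long uncached_alias(unsigned long vaddr)
    {
            unsigned long u = UNCAC_ADDR(vaddr);

            BUG_ON(CAC_ADDR(u) != vaddr);
            return u;
    }
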
diff --git a/arch/mips/include/asm/parport.h b/arch/mips/include/asm/parport.h
deleted file mode 100644 (file)
index cf252af..0000000
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm-generic/parport.h>
diff --git a/arch/mips/include/asm/percpu.h b/arch/mips/include/asm/percpu.h
deleted file mode 100644 (file)
index 844e763..0000000
+++ /dev/null
@@ -1,6 +0,0 @@
-#ifndef __ASM_PERCPU_H
-#define __ASM_PERCPU_H
-
-#include <asm-generic/percpu.h>
-
-#endif /* __ASM_PERCPU_H */
index b4204c179b979b5c984edb0fbf1a3181ff535bdb..50d184eb4ecbe5e8cd0325a1b1cf8b6e81084033 100644 (file)
 
 #define PKMAP_BASE             (0xfe000000UL)
 
+#ifndef VMALLOC_END
 #ifdef CONFIG_HIGHMEM
 # define VMALLOC_END   (PKMAP_BASE-2*PAGE_SIZE)
 #else
 # define VMALLOC_END   (FIXADDR_START-2*PAGE_SIZE)
 #endif
+#endif
 
 #ifdef CONFIG_64BIT_PHYS_ADDR
 #define pte_ERROR(e) \
@@ -140,76 +142,61 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
        ((pte_t *)page_address(pmd_page(*(dir))) + __pte_offset(address))
 #define pte_unmap(pte) ((void)(pte))
 
-#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
-
-/* Swap entries must have VALID bit cleared. */
-#define __swp_type(x)          (((x).val >> 10) & 0x1f)
-#define __swp_offset(x)                ((x).val >> 15)
-#define __swp_entry(type,offset)       \
-       ((swp_entry_t) { ((type) << 10) | ((offset) << 15) })
-
-/*
- * Bits 0, 4, 8, and 9 are taken, split up 28 bits of offset into this range:
- */
-#define PTE_FILE_MAX_BITS      28
-
-#define pte_to_pgoff(_pte)     ((((_pte).pte >> 1 ) & 0x07) | \
-                                (((_pte).pte >> 2 ) & 0x38) | \
-                                (((_pte).pte >> 10) <<  6 ))
-
-#define pgoff_to_pte(off)      ((pte_t) { (((off) & 0x07) << 1 ) | \
-                                          (((off) & 0x38) << 2 ) | \
-                                          (((off) >>  6 ) << 10) | \
-                                          _PAGE_FILE })
-
-#else
-
-/* Swap entries must have VALID and GLOBAL bits cleared. */
 #if defined(CONFIG_64BIT_PHYS_ADDR) && defined(CONFIG_CPU_MIPS32)
-#define __swp_type(x)          (((x).val >> 2) & 0x1f)
-#define __swp_offset(x)                 ((x).val >> 7)
-#define __swp_entry(type,offset)       \
-               ((swp_entry_t)  { ((type) << 2) | ((offset) << 7) })
-#else
-#define __swp_type(x)          (((x).val >> 8) & 0x1f)
-#define __swp_offset(x)                 ((x).val >> 13)
-#define __swp_entry(type,offset)       \
-               ((swp_entry_t)  { ((type) << 8) | ((offset) << 13) })
-#endif /* defined(CONFIG_64BIT_PHYS_ADDR) && defined(CONFIG_CPU_MIPS32) */
-
-#if defined(CONFIG_64BIT_PHYS_ADDR) && defined(CONFIG_CPU_MIPS32)
-/*
- * Bits 0 and 1 of pte_high are taken, use the rest for the page offset...
- */
-#define PTE_FILE_MAX_BITS      30
-
-#define pte_to_pgoff(_pte)     ((_pte).pte_high >> 2)
-#define pgoff_to_pte(off)      ((pte_t) { _PAGE_FILE, (off) << 2 })
 
-#else
 /*
- * Bits 0, 4, 6, and 7 are taken, split up 28 bits of offset into this range:
+ * Two-word PTE case:
+ * Bits 0 and 1 (V+G) of pte_high are taken; use the rest for the swap
+ * entry and page offset...
+ * Bits F and P are in pte_low.
+ *
+ * Note: swp_entry_t is one word today.
  */
-#define PTE_FILE_MAX_BITS      28
+#define __swp_type(x)       \
+               (((x).val >> __SWP_PTE_SKIP_BITS_NUM) & __SWP_TYPE_MASK)
+#define __swp_offset(x)     \
+               ((x).val >> (__SWP_PTE_SKIP_BITS_NUM + __SWP_TYPE_BITS_NUM))
+#define __swp_entry(type, offset)        \
+               ((swp_entry_t)  { ((type) << __SWP_PTE_SKIP_BITS_NUM) | \
+               ((offset) << (__SWP_TYPE_BITS_NUM + __SWP_PTE_SKIP_BITS_NUM)) })
+#define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_high })
+#define __swp_entry_to_pte(x)  ((pte_t) { 0, (x).val })
 
-#define pte_to_pgoff(_pte)     ((((_pte).pte >> 1) & 0x7) | \
-                                (((_pte).pte >> 2) & 0x8) | \
-                                (((_pte).pte >> 8) <<  4))
+#define PTE_FILE_MAX_BITS       (32 - __SWP_PTE_SKIP_BITS_NUM)
 
-#define pgoff_to_pte(off)      ((pte_t) { (((off) & 0x7) << 1) | \
-                                          (((off) & 0x8) << 2) | \
-                                          (((off) >>  4) << 8) | \
-                                          _PAGE_FILE })
-#endif
+#define pte_to_pgoff(_pte)      ((_pte).pte_high >> __SWP_PTE_SKIP_BITS_NUM)
+#define pgoff_to_pte(off)   \
+               ((pte_t) { _PAGE_FILE, (off) << __SWP_PTE_SKIP_BITS_NUM })
 
-#endif
+#else /* CONFIG_MIPS32 && !CONFIG_64BIT_PHYS_ADDR */
 
-#if defined(CONFIG_64BIT_PHYS_ADDR) && defined(CONFIG_CPU_MIPS32)
-#define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_high })
-#define __swp_entry_to_pte(x)  ((pte_t) { 0, (x).val })
-#else
+/* Swap entries must have the V, G, P and F bits cleared. */
+#define __swp_type(x)       (((x).val >> _PAGE_DIRTY_SHIFT) & __SWP_TYPE_MASK)
+#define __swp_offset(x)     \
+               ((x).val >> (_PAGE_DIRTY_SHIFT + __SWP_TYPE_BITS_NUM))
+#define __swp_entry(type, offset)        \
+               ((swp_entry_t) { ((type) << _PAGE_DIRTY_SHIFT) | \
+               ((offset) << (_PAGE_DIRTY_SHIFT + __SWP_TYPE_BITS_NUM)) })
 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
 #define __swp_entry_to_pte(x)  ((pte_t) { (x).val })
-#endif
 
+/*
+ * Bits V+G and F+P are taken; the 28 offset bits are split around them:
+ */
+#define PTE_FILE_MAX_BITS       (32 - __FILE_PTE_TOTAL_BITS_NUM)
+
+#define pte_to_pgoff(_pte)  \
+               ((((_pte).pte >> __FILE_PTE_TOTAL_BITS_NUM) & \
+                   ~(__FILE_PTE_LOW_MASK)) | \
+                (((_pte).pte >> __FILE_PTE_LOW_BITS_NUM) & \
+                   (__FILE_PTE_LOW_MASK)))
+
+#define pgoff_to_pte(off)   \
+               ((pte_t) { (((off) & __FILE_PTE_LOW_MASK) << \
+                     (__FILE_PTE_LOW_BITS_NUM)) | \
+                  (((off) & ~(__FILE_PTE_LOW_MASK)) << \
+                     (__FILE_PTE_TOTAL_BITS_NUM)) | \
+                  _PAGE_FILE })
+
+#endif /* CONFIG_64BIT_PHYS_ADDR && CONFIG_MIPS32 */
 #endif /* _ASM_PGTABLE_32_H */
index e1c49a96807d68bbeb61c7bd14af6273665c9a6c..6379aef43458645a5df424da3408d5be7cfad2fb 100644 (file)
@@ -283,21 +283,30 @@ extern void pmd_init(unsigned long page, unsigned long pagetable);
  * low 32 bits zero.
  */
 static inline pte_t mk_swap_pte(unsigned long type, unsigned long offset)
-{ pte_t pte; pte_val(pte) = (type << 32) | (offset << 40); return pte; }
+{
+       pte_t pte;
+
+       pte_val(pte) = (type << __SWP_PTE_SKIP_BITS_NUM) |
+               (offset << (__SWP_PTE_SKIP_BITS_NUM + __SWP_TYPE_BITS_NUM));
+       return pte;
+}
 
-#define __swp_type(x)          (((x).val >> 32) & 0xff)
-#define __swp_offset(x)                ((x).val >> 40)
+#define __swp_type(x)           \
+               (((x).val >> __SWP_PTE_SKIP_BITS_NUM) & __SWP_TYPE_MASK)
+#define __swp_offset(x)         \
+               ((x).val >> (__SWP_PTE_SKIP_BITS_NUM + __SWP_TYPE_BITS_NUM))
 #define __swp_entry(type, offset) ((swp_entry_t) { pte_val(mk_swap_pte((type), (offset))) })
 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
 #define __swp_entry_to_pte(x)  ((pte_t) { (x).val })
 
 /*
- * Bits 0, 4, 6, and 7 are taken. Let's leave bits 1, 2, 3, and 5 alone to
- * make things easier, and only use the upper 56 bits for the page offset...
+ * Take out all bits from V down to bit 0.  Strictly only V, G, F and P need
+ * to be taken out, but today the PTE layout is too complicated by HUGE page
+ * support etc. to be more precise.
  */
-#define PTE_FILE_MAX_BITS      56
+#define PTE_FILE_MAX_BITS       (64 - _PAGE_DIRTY_SHIFT)
 
-#define pte_to_pgoff(_pte)     ((_pte).pte >> 8)
-#define pgoff_to_pte(off)      ((pte_t) { ((off) << 8) | _PAGE_FILE })
+#define pte_to_pgoff(_pte)      ((_pte).pte >> _PAGE_DIRTY_SHIFT)
+#define pgoff_to_pte(off)       \
+               ((pte_t) { ((off) << _PAGE_DIRTY_SHIFT) | _PAGE_FILE })
 
 #endif /* _ASM_PGTABLE_64_H */
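
Worked example (illustrative only): with __SWP_PTE_SKIP_BITS_NUM == 32 and
5 type bits, the swap type lands in bits 36..32 and the offset starts at
bit 37, so all hardware bits in the low word stay clear.

    static inline void swap_encode_demo(void)
    {
            swp_entry_t e = __swp_entry(3, 0x1234);

            BUG_ON(__swp_type(e) != 3 || __swp_offset(e) != 0x1234);
    }
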
index 32aea4852fb0796045efcbc9fbd6414b67a4fe77..1a5cebabd0ae71a962958470641f795769955bb9 100644 (file)
  */
 #define _PAGE_PRESENT_SHIFT    6
 #define _PAGE_PRESENT          (1 << _PAGE_PRESENT_SHIFT)
-#define _PAGE_READ_SHIFT       7
+#define _PAGE_MODIFIED_SHIFT    7
+#define _PAGE_MODIFIED         (1 << _PAGE_MODIFIED_SHIFT)
+#define _PAGE_FILE              (1 << _PAGE_MODIFIED_SHIFT)
+#define _PAGE_READ_SHIFT        8
 #define _PAGE_READ             (1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT      8
+#define _PAGE_WRITE_SHIFT       9
 #define _PAGE_WRITE            (1 << _PAGE_WRITE_SHIFT)
-#define _PAGE_ACCESSED_SHIFT   9
+#define _PAGE_ACCESSED_SHIFT    10
 #define _PAGE_ACCESSED         (1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT   10
-#define _PAGE_MODIFIED         (1 << _PAGE_MODIFIED_SHIFT)
-
-#define _PAGE_FILE             (1 << 10)
 
 #elif defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
 
  */
 #define _PAGE_PRESENT_SHIFT    0
 #define _PAGE_PRESENT          (1 <<  _PAGE_PRESENT_SHIFT)
-#define _PAGE_READ_SHIFT       1
+#define _PAGE_MODIFIED_SHIFT    1
+#define _PAGE_MODIFIED         (1 <<  _PAGE_MODIFIED_SHIFT)
+#define _PAGE_FILE_SHIFT        1
+#define _PAGE_FILE             (1 <<  _PAGE_FILE_SHIFT)
+#define _PAGE_READ_SHIFT        2
 #define _PAGE_READ             (1 <<  _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT      2
+#define _PAGE_WRITE_SHIFT       3
 #define _PAGE_WRITE            (1 <<  _PAGE_WRITE_SHIFT)
-#define _PAGE_ACCESSED_SHIFT   3
+#define _PAGE_ACCESSED_SHIFT    4
 #define _PAGE_ACCESSED         (1 <<  _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT   4
-#define _PAGE_MODIFIED         (1 <<  _PAGE_MODIFIED_SHIFT)
-#define _PAGE_FILE_SHIFT       4
-#define _PAGE_FILE             (1 <<  _PAGE_FILE_SHIFT)
 
 /*
  * And these are the hardware TLB bits
 #define _CACHE_MASK            (1 << _CACHE_UNCACHED_SHIFT)
 
 #else /* 'Normal' r4K case */
+
+#ifndef CONFIG_CPU_MIPSR2
 /*
  * When using the RI/XI bit support, we have 13 bits of flags below
  * the physical address. The RI/XI bits are placed such that a SRL 5
  */
 #define _PAGE_PRESENT_SHIFT    (0)
 #define _PAGE_PRESENT          (1 << _PAGE_PRESENT_SHIFT)
-#define _PAGE_READ_SHIFT       (cpu_has_rixi ? _PAGE_PRESENT_SHIFT : _PAGE_PRESENT_SHIFT + 1)
+#define _PAGE_MODIFIED_SHIFT    (_PAGE_PRESENT_SHIFT + 1)
+#define _PAGE_MODIFIED         (1 << _PAGE_MODIFIED_SHIFT)
+#define _PAGE_FILE_SHIFT        (_PAGE_MODIFIED_SHIFT)
+#define _PAGE_FILE             (_PAGE_MODIFIED)
+#define _PAGE_READ_SHIFT        \
+               (cpu_has_rixi ? _PAGE_MODIFIED_SHIFT : _PAGE_MODIFIED_SHIFT + 1)
 #define _PAGE_READ ({BUG_ON(cpu_has_rixi); 1 << _PAGE_READ_SHIFT; })
 #define _PAGE_WRITE_SHIFT      (_PAGE_READ_SHIFT + 1)
 #define _PAGE_WRITE            (1 << _PAGE_WRITE_SHIFT)
 #define _PAGE_ACCESSED_SHIFT   (_PAGE_WRITE_SHIFT + 1)
 #define _PAGE_ACCESSED         (1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT   (_PAGE_ACCESSED_SHIFT + 1)
-#define _PAGE_MODIFIED         (1 << _PAGE_MODIFIED_SHIFT)
-#define _PAGE_FILE             (_PAGE_MODIFIED)
 
 #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
 /* huge tlb page */
-#define _PAGE_HUGE_SHIFT       (_PAGE_MODIFIED_SHIFT + 1)
+#define _PAGE_HUGE_SHIFT        (_PAGE_ACCESSED_SHIFT + 1)
 #define _PAGE_HUGE             (1 << _PAGE_HUGE_SHIFT)
 #else
-#define _PAGE_HUGE_SHIFT       (_PAGE_MODIFIED_SHIFT)
+#define _PAGE_HUGE_SHIFT        (_PAGE_ACCESSED_SHIFT)
 #define _PAGE_HUGE             ({BUG(); 1; })  /* Dummy value */
 #endif
 
 #define _PAGE_NO_READ_SHIFT    (cpu_has_rixi ? _PAGE_NO_EXEC_SHIFT + 1 : _PAGE_NO_EXEC_SHIFT)
 #define _PAGE_NO_READ          ({BUG_ON(!cpu_has_rixi); 1 << _PAGE_NO_READ_SHIFT; })
 
-#define _PAGE_GLOBAL_SHIFT     (_PAGE_NO_READ_SHIFT + 1)
+#else /* CONFIG_CPU_MIPSR2 */
+
+/* Static bit allocation for MIPS R2, in two variants -
+   with or without HUGE TLB support in 64BIT kernels.
+   RIXI is supported in both. */
+
+#ifdef CONFIG_64BIT
+
+/*
+ * Low bits are: CCC D V G RI XI [S H] A W R M(=F) P
+ * TLB refill will do a ROTR 7/9 (in case of cpu_has_rixi),
+ * or SRL/DSRL 7/9 to strip low bits.
+ * The PFN in the high bits is 49 or 51 bits wide --> 512TB or 4*512TB with 4KB pages
+ */
+
+#define _PAGE_PRESENT_SHIFT     (0)
+#define _PAGE_PRESENT          (1 << _PAGE_PRESENT_SHIFT)
+/* implemented in software */
+#define _PAGE_MODIFIED_SHIFT    (_PAGE_PRESENT_SHIFT + 1)
+#define _PAGE_MODIFIED         (1 << _PAGE_MODIFIED_SHIFT)
+/* set:pagecache unset:swap */
+#define _PAGE_FILE             (_PAGE_MODIFIED)
+/* implemented in software, should be unused if cpu_has_rixi. */
+#define _PAGE_READ_SHIFT        (_PAGE_MODIFIED_SHIFT + 1)
+#define _PAGE_READ              (1 << _PAGE_READ_SHIFT)
+/* implemented in software */
+#define _PAGE_WRITE_SHIFT      (_PAGE_READ_SHIFT + 1)
+#define _PAGE_WRITE            (1 << _PAGE_WRITE_SHIFT)
+/* implemented in software */
+#define _PAGE_ACCESSED_SHIFT   (_PAGE_WRITE_SHIFT + 1)
+#define _PAGE_ACCESSED         (1 << _PAGE_ACCESSED_SHIFT)
+
+#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
+/* huge tlb page */
+#define _PAGE_HUGE_SHIFT        (_PAGE_ACCESSED_SHIFT + 1)
+#define _PAGE_HUGE             (1 << _PAGE_HUGE_SHIFT)
+#define _PAGE_SPLITTING_SHIFT  (_PAGE_HUGE_SHIFT + 1)
+#define _PAGE_SPLITTING                (1 << _PAGE_SPLITTING_SHIFT)
+#else
+#define _PAGE_HUGE_SHIFT        (_PAGE_ACCESSED_SHIFT)
+#define _PAGE_HUGE             ({BUG(); 1; })  /* Dummy value */
+#define _PAGE_SPLITTING_SHIFT  (_PAGE_HUGE_SHIFT)
+#define _PAGE_SPLITTING                ({BUG(); 1; })  /* Dummy value */
+#endif /* CONFIG_MIPS_HUGE_TLB_SUPPORT */
+
+/* Page cannot be executed */
+#define _PAGE_NO_EXEC_SHIFT     (_PAGE_SPLITTING_SHIFT + 1)
+#define _PAGE_NO_EXEC           (1 << _PAGE_NO_EXEC_SHIFT)
+
+/* Page cannot be read */
+#define _PAGE_NO_READ_SHIFT     (_PAGE_NO_EXEC_SHIFT + 1)
+#define _PAGE_NO_READ           (1 << _PAGE_NO_READ_SHIFT)
+
+#else /* !CONFIG_64BIT */
+
+#ifndef CONFIG_MIPS_HUGE_TLB_SUPPORT
+
+/*
+ * No HUGE page support
+ * Low bits are: CCC D V G RI(=R) XI A W M(=F) P
+ * TLB refill will do a ROTR 6 (in case of cpu_has_rixi),
+ * or SRL 6 to strip low bits.
+ * All 20 PFN bits are preserved in the high bits (4GB with 4KB pages)
+ */
+
+#define _PAGE_PRESENT_SHIFT     (0)
+#define _PAGE_PRESENT          (1 << _PAGE_PRESENT_SHIFT)
+/* implemented in software */
+#define _PAGE_MODIFIED_SHIFT    (_PAGE_PRESENT_SHIFT + 1)
+#define _PAGE_MODIFIED         (1 << _PAGE_MODIFIED_SHIFT)
+/* set:pagecache unset:swap */
+#define _PAGE_FILE_SHIFT        (_PAGE_MODIFIED_SHIFT)
+#define _PAGE_FILE             (_PAGE_MODIFIED)
+/* implemented in software */
+#define _PAGE_WRITE_SHIFT       (_PAGE_MODIFIED_SHIFT + 1)
+#define _PAGE_WRITE            (1 << _PAGE_WRITE_SHIFT)
+/* implemented in software */
+#define _PAGE_ACCESSED_SHIFT   (_PAGE_WRITE_SHIFT + 1)
+#define _PAGE_ACCESSED         (1 << _PAGE_ACCESSED_SHIFT)
+
+/* huge tlb page dummies */
+#define _PAGE_HUGE_SHIFT        (_PAGE_ACCESSED_SHIFT)
+#define _PAGE_HUGE             ({BUG(); 1; })  /* Dummy value */
+#define _PAGE_SPLITTING_SHIFT  (_PAGE_HUGE_SHIFT)
+#define _PAGE_SPLITTING                ({BUG(); 1; })  /* Dummy value */
+
+/* Page cannot be executed */
+#define _PAGE_NO_EXEC_SHIFT     (_PAGE_SPLITTING_SHIFT + 1)
+#define _PAGE_NO_EXEC           (1 << _PAGE_NO_EXEC_SHIFT)
+
+/* Page cannot be read */
+#define _PAGE_NO_READ_SHIFT     (_PAGE_NO_EXEC_SHIFT + 1)
+#define _PAGE_NO_READ           (1 << _PAGE_NO_READ_SHIFT)
+
+/* implemented in software, should be unused if cpu_has_rixi. */
+#define _PAGE_READ_SHIFT        (_PAGE_NO_READ_SHIFT)
+#define _PAGE_READ              (1 << _PAGE_READ_SHIFT)
+
+#else /* CONFIG_MIPS_HUGE_TLB_SUPPORT */
+
+/*
+ * Low bits are: CCC D V G S H A W R M(=F) P
+ * No RIXI support is possible here.
+ * TLB refill will do an SRL 7;
+ * only 19 PFN bits are preserved in the high bits (2GB with 4KB pages)
+ */
+
+#define _PAGE_PRESENT_SHIFT     (0)
+#define _PAGE_PRESENT          (1 << _PAGE_PRESENT_SHIFT)
+/* implemented in software */
+#define _PAGE_MODIFIED_SHIFT    (_PAGE_PRESENT_SHIFT + 1)
+#define _PAGE_MODIFIED         (1 << _PAGE_MODIFIED_SHIFT)
+/* set:pagecache unset:swap */
+#define _PAGE_FILE_SHIFT        (_PAGE_MODIFIED_SHIFT)
+#define _PAGE_FILE             (_PAGE_MODIFIED)
+/* implemented in software */
+#define _PAGE_READ_SHIFT        (_PAGE_MODIFIED_SHIFT + 1)
+#define _PAGE_READ              (1 << _PAGE_READ_SHIFT)
+/* implemented in software */
+#define _PAGE_WRITE_SHIFT       (_PAGE_READ_SHIFT + 1)
+#define _PAGE_WRITE            (1 << _PAGE_WRITE_SHIFT)
+/* implemented in software */
+#define _PAGE_ACCESSED_SHIFT   (_PAGE_WRITE_SHIFT + 1)
+#define _PAGE_ACCESSED         (1 << _PAGE_ACCESSED_SHIFT)
+
+/* huge tlb page... but no HUGE page support in MIPS32 yet */
+#define _PAGE_HUGE_SHIFT        (_PAGE_ACCESSED_SHIFT + 1)
+#define _PAGE_HUGE             (1 << _PAGE_HUGE_SHIFT)
+#define _PAGE_SPLITTING_SHIFT  (_PAGE_HUGE_SHIFT + 1)
+#define _PAGE_SPLITTING                (1 << _PAGE_SPLITTING_SHIFT)
+
+/* Page cannot be executed */
+#define _PAGE_NO_EXEC_SHIFT     (_PAGE_SPLITTING_SHIFT)
+#define _PAGE_NO_EXEC           ({BUG(); 1; })  /* Dummy value */
+/* Page cannot be read */
+#define _PAGE_NO_READ_SHIFT     (_PAGE_NO_EXEC_SHIFT)
+#define _PAGE_NO_READ           ({BUG(); 1; })  /* Dummy value */
+
+#endif /* CONFIG_MIPS_HUGE_TLB_SUPPORT */
+
+#endif /* CONFIG_64BIT */
+
+#endif /* !CONFIG_CPU_MIPSR2 */
+
+
+#define _PAGE_GLOBAL_SHIFT      (_PAGE_NO_READ_SHIFT + 1)
 #define _PAGE_GLOBAL           (1 << _PAGE_GLOBAL_SHIFT)
 
 #define _PAGE_VALID_SHIFT      (_PAGE_GLOBAL_SHIFT + 1)
 #define _PAGE_GLOBAL_SHIFT ilog2(_PAGE_GLOBAL)
 #endif
 
+/*
+ * Swap and file entry format definitions in the PTE.
+ * These constant definitions are here because they are tied to the bit
+ * positions; the real macros are still in pgtable-32/64.h.
+ *
+ * There are 3 kinds of format - 64BIT, generic 32BIT, and 32BIT with 64BIT PA
+ */
+#define __SWP_TYPE_BITS_NUM     5
+#define __SWP_TYPE_MASK         ((1 << __SWP_TYPE_BITS_NUM) - 1)
+
+#if defined(CONFIG_64BIT_PHYS_ADDR) && defined(CONFIG_CPU_MIPS32)
+
+/*
+ * Two-word PTE case:
+ * Bits 0 and 1 (V+G) of pte_high are taken; use the rest for the swap
+ * entry and page offset...
+ * Bits F and P are in pte_low.
+ *
+ * Note: a swp_entry_t or file entry is one word today (pte_high)
+ */
+#define __SWP_PTE_SKIP_BITS_NUM         2
+
+#elif defined(CONFIG_64BIT)
+/*
+ * The swap entry is located in the high 32 bits of the PTE.
+ *
+ * A file entry starts right at the D bit.
+ */
+#define __SWP_PTE_SKIP_BITS_NUM         32
+
+#else /* CONFIG_32BIT && !CONFIG_64BIT_PHYS_ADDR */
+/*
+ * The swap entry is encoded starting right at the D bit.
+ *
+ * A file entry is encoded in all bits besides V, G, F and P, which are
+ * grouped in two fields with a variable gap, so additional location info
+ * is defined here.
+ */
+/* rightmost taken-out field - F and P */
+#define __FILE_PTE_LOW_BITS_NUM         2
+/* total number of taken-out bits - V, G, F and P */
+#define __FILE_PTE_TOTAL_BITS_NUM       4
+/* mask for the intermediate field used in the encoding */
+#define __FILE_PTE_LOW_MASK     ((_PAGE_GLOBAL - 1) >> (_PAGE_FILE_SHIFT + 1))
+
+#endif /* defined(CONFIG_64BIT_PHYS_ADDR) && defined(CONFIG_CPU_MIPS32) */
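
Quick illustrative check of the shared constants above: 5 swap-type bits
give a mask of 0x1f, i.e. at most 32 swap areas, with the remaining PTE
bits carrying the swap offset.

    #if __SWP_TYPE_MASK != 0x1f
    # error "__SWP_TYPE_MASK is expected to cover 5 bits"
    #endif
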
 
 #ifndef __ASSEMBLY__
 /*
index a0b2650516ac9a2837a362f4c5768ce609767216..19661115ec644f833b84ccddbca90f29b573a1e1 100644 (file)
@@ -16,6 +16,7 @@
 #include <asm/cacheops.h>
 #include <asm/cpu-features.h>
 #include <asm/mipsmtregs.h>
+#include <asm/uaccess.h>
 
 /*
  * This macro return a properly sign-extended address suitable as base address
@@ -28,7 +29,9 @@
  *  - We need a properly sign extended address for 64-bit code.         To get away
  *    without ifdefs we let the compiler do it by a type cast.
  */
-#define INDEX_BASE     CKSEG0
+#ifndef INDEX_BASE
+#define INDEX_BASE      CKSEG0
+#endif
 
 #define cache_op(op,addr)                                              \
        __asm__ __volatile__(                                           \
@@ -46,8 +49,6 @@
  * execution during I-cache flushes.
  */
 
-#define PROTECT_CACHE_FLUSHES 1
-
 #ifdef PROTECT_CACHE_FLUSHES
 
 extern int mt_protiflush;
@@ -203,12 +204,31 @@ static inline void flush_scache_line(unsigned long addr)
        :                                                       \
        : "i" (op), "r" (addr))
 
+#ifdef CONFIG_EVA
+#define protected_cachee_op(op,addr)                            \
+       __asm__ __volatile__(                                   \
+       "       .set    push                    \n"             \
+       "       .set    noreorder               \n"             \
+       "       .set    eva                     \n"             \
+       "1:     cachee  %0, (%1)                \n"             \
+       "2:     .set    pop                     \n"             \
+       "       .section __ex_table,\"a\"       \n"             \
+       "       "STR(PTR)" 1b, 2b               \n"             \
+       "       .previous"                                      \
+       :                                                       \
+       : "i" (op), "r" (addr))
+#endif
+
 /*
  * The next two are for badland addresses like signal trampolines.
  */
 static inline void protected_flush_icache_line(unsigned long addr)
 {
+#ifndef CONFIG_EVA
        protected_cache_op(Hit_Invalidate_I, addr);
+#else
+       protected_cachee_op(Hit_Invalidate_I, addr);
+#endif
 }
 
 /*
@@ -219,7 +239,11 @@ static inline void protected_flush_icache_line(unsigned long addr)
  */
 static inline void protected_writeback_dcache_line(unsigned long addr)
 {
+#ifndef CONFIG_EVA
        protected_cache_op(Hit_Writeback_Inv_D, addr);
+#else
+       protected_cachee_op(Hit_Writeback_Inv_D, addr);
+#endif
 }
 
 static inline void protected_writeback_scache_line(unsigned long addr)
@@ -339,6 +363,112 @@ static inline void invalidate_tcache_page(unsigned long addr)
                : "r" (base),                                           \
                  "i" (op));
 
+#ifdef CONFIG_EVA
+#define cache16_unroll32_user(base,op)                                  \
+       __asm__ __volatile__(                                           \
+       "       .set push                                       \n"     \
+       "       .set noreorder                                  \n"     \
+       "       .set eva                                        \n"     \
+       "       cachee %1, 0x000(%0); cachee %1, 0x010(%0)      \n"     \
+       "       cachee %1, 0x020(%0); cachee %1, 0x030(%0)      \n"     \
+       "       cachee %1, 0x040(%0); cachee %1, 0x050(%0)      \n"     \
+       "       cachee %1, 0x060(%0); cachee %1, 0x070(%0)      \n"     \
+       "       cachee %1, 0x080(%0); cachee %1, 0x090(%0)      \n"     \
+       "       cachee %1, 0x0a0(%0); cachee %1, 0x0b0(%0)      \n"     \
+       "       cachee %1, 0x0c0(%0); cachee %1, 0x0d0(%0)      \n"     \
+       "       cachee %1, 0x0e0(%0); cachee %1, 0x0f0(%0)      \n"     \
+       "       cachee %1, 0x100(%0); cachee %1, 0x110(%0)      \n"     \
+       "       cachee %1, 0x120(%0); cachee %1, 0x130(%0)      \n"     \
+       "       cachee %1, 0x140(%0); cachee %1, 0x150(%0)      \n"     \
+       "       cachee %1, 0x160(%0); cachee %1, 0x170(%0)      \n"     \
+       "       cachee %1, 0x180(%0); cachee %1, 0x190(%0)      \n"     \
+       "       cachee %1, 0x1a0(%0); cachee %1, 0x1b0(%0)      \n"     \
+       "       cachee %1, 0x1c0(%0); cachee %1, 0x1d0(%0)      \n"     \
+       "       cachee %1, 0x1e0(%0); cachee %1, 0x1f0(%0)      \n"     \
+       "       .set pop                                        \n"     \
+               :                                                       \
+               : "r" (base),                                           \
+                 "i" (op));
+
+#define cache32_unroll32_user(base,op)                                  \
+       __asm__ __volatile__(                                           \
+       "       .set push                                       \n"     \
+       "       .set noreorder                                  \n"     \
+       "       .set eva                                        \n"     \
+       "       cachee %1, 0x000(%0); cachee %1, 0x020(%0)      \n"     \
+       "       cachee %1, 0x040(%0); cachee %1, 0x060(%0)      \n"     \
+       "       cachee %1, 0x080(%0); cachee %1, 0x0a0(%0)      \n"     \
+       "       cachee %1, 0x0c0(%0); cachee %1, 0x0e0(%0)      \n"     \
+       "       cachee %1, 0x100(%0); cachee %1, 0x120(%0)      \n"     \
+       "       cachee %1, 0x140(%0); cachee %1, 0x160(%0)      \n"     \
+       "       cachee %1, 0x180(%0); cachee %1, 0x1a0(%0)      \n"     \
+       "       cachee %1, 0x1c0(%0); cachee %1, 0x1e0(%0)      \n"     \
+       "       cachee %1, 0x200(%0); cachee %1, 0x220(%0)      \n"     \
+       "       cachee %1, 0x240(%0); cachee %1, 0x260(%0)      \n"     \
+       "       cachee %1, 0x280(%0); cachee %1, 0x2a0(%0)      \n"     \
+       "       cachee %1, 0x2c0(%0); cachee %1, 0x2e0(%0)      \n"     \
+       "       cachee %1, 0x300(%0); cachee %1, 0x320(%0)      \n"     \
+       "       cachee %1, 0x340(%0); cachee %1, 0x360(%0)      \n"     \
+       "       cachee %1, 0x380(%0); cachee %1, 0x3a0(%0)      \n"     \
+       "       cachee %1, 0x3c0(%0); cachee %1, 0x3e0(%0)      \n"     \
+       "       .set pop                                        \n"     \
+               :                                                       \
+               : "r" (base),                                           \
+                 "i" (op));
+
+#define cache64_unroll32_user(base,op)                                  \
+       __asm__ __volatile__(                                           \
+       "       .set push                                       \n"     \
+       "       .set noreorder                                  \n"     \
+       "       .set eva                                        \n"     \
+       "       cachee %1, 0x000(%0); cachee %1, 0x040(%0)      \n"     \
+       "       cachee %1, 0x080(%0); cachee %1, 0x0c0(%0)      \n"     \
+       "       cachee %1, 0x100(%0); cachee %1, 0x140(%0)      \n"     \
+       "       cachee %1, 0x180(%0); cachee %1, 0x1c0(%0)      \n"     \
+       "       cachee %1, 0x200(%0); cachee %1, 0x240(%0)      \n"     \
+       "       cachee %1, 0x280(%0); cachee %1, 0x2c0(%0)      \n"     \
+       "       cachee %1, 0x300(%0); cachee %1, 0x340(%0)      \n"     \
+       "       cachee %1, 0x380(%0); cachee %1, 0x3c0(%0)      \n"     \
+       "       cachee %1, 0x400(%0); cachee %1, 0x440(%0)      \n"     \
+       "       cachee %1, 0x480(%0); cachee %1, 0x4c0(%0)      \n"     \
+       "       cachee %1, 0x500(%0); cachee %1, 0x540(%0)      \n"     \
+       "       cachee %1, 0x580(%0); cachee %1, 0x5c0(%0)      \n"     \
+       "       cachee %1, 0x600(%0); cachee %1, 0x640(%0)      \n"     \
+       "       cachee %1, 0x680(%0); cachee %1, 0x6c0(%0)      \n"     \
+       "       cachee %1, 0x700(%0); cachee %1, 0x740(%0)      \n"     \
+       "       cachee %1, 0x780(%0); cachee %1, 0x7c0(%0)      \n"     \
+       "       .set pop                                        \n"     \
+               :                                                       \
+               : "r" (base),                                           \
+                 "i" (op));
+
+#define cache128_unroll32_user(base,op)                                 \
+       __asm__ __volatile__(                                           \
+       "       .set push                                       \n"     \
+       "       .set noreorder                                  \n"     \
+       "       .set eva                                        \n"     \
+       "       cachee %1, 0x000(%0); cachee %1, 0x080(%0)      \n"     \
+       "       cachee %1, 0x100(%0); cachee %1, 0x180(%0)      \n"     \
+       "       cachee %1, 0x200(%0); cachee %1, 0x280(%0)      \n"     \
+       "       cachee %1, 0x300(%0); cachee %1, 0x380(%0)      \n"     \
+       "       cachee %1, 0x400(%0); cachee %1, 0x480(%0)      \n"     \
+       "       cachee %1, 0x500(%0); cachee %1, 0x580(%0)      \n"     \
+       "       cachee %1, 0x600(%0); cachee %1, 0x680(%0)      \n"     \
+       "       cachee %1, 0x700(%0); cachee %1, 0x780(%0)      \n"     \
+       "       cachee %1, 0x800(%0); cachee %1, 0x880(%0)      \n"     \
+       "       cachee %1, 0x900(%0); cachee %1, 0x980(%0)      \n"     \
+       "       cachee %1, 0xa00(%0); cachee %1, 0xa80(%0)      \n"     \
+       "       cachee %1, 0xb00(%0); cachee %1, 0xb80(%0)      \n"     \
+       "       cachee %1, 0xc00(%0); cachee %1, 0xc80(%0)      \n"     \
+       "       cachee %1, 0xd00(%0); cachee %1, 0xd80(%0)      \n"     \
+       "       cachee %1, 0xe00(%0); cachee %1, 0xe80(%0)      \n"     \
+       "       cachee %1, 0xf00(%0); cachee %1, 0xf80(%0)      \n"     \
+       "       .set pop                                        \n"     \
+               :                                                       \
+               : "r" (base),                                           \
+                 "i" (op));
+#endif
+
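Each of the four unrolled user variants above touches exactly 32 cache lines
per invocation, i.e. one block of lsize * 32 bytes, which matches the last
offset in each macro:

        lsize =  16:  32 * 16  = 0x200 bytes  (offsets 0x000 .. 0x1f0)
        lsize =  32:  32 * 32  = 0x400 bytes  (offsets 0x000 .. 0x3e0)
        lsize =  64:  32 * 64  = 0x800 bytes  (offsets 0x000 .. 0x7c0)
        lsize = 128:  32 * 128 = 0x1000 bytes (offsets 0x000 .. 0xf80)

The page loops built below advance by lsize * 32 per iteration accordingly.
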
 /* build blast_xxx, blast_xxx_page, blast_xxx_page_indexed */
 #define __BUILD_BLAST_CACHE(pfx, desc, indexop, hitop, lsize) \
 static inline void blast_##pfx##cache##lsize(void)                     \
@@ -411,6 +541,33 @@ __BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 32
 __BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 64)
 __BUILD_BLAST_CACHE(inv_s, scache, Index_Writeback_Inv_SD, Hit_Invalidate_SD, 128)
 
+#ifdef CONFIG_EVA
+
+#define __BUILD_BLAST_USER_CACHE(pfx, desc, indexop, hitop, lsize) \
+static inline void blast_##pfx##cache##lsize##_user_page(unsigned long page) \
+{                                                                      \
+       unsigned long start = page;                                     \
+       unsigned long end = page + PAGE_SIZE;                           \
+                                                                       \
+       __##pfx##flush_prologue                                         \
+                                                                       \
+       do {                                                            \
+               cache##lsize##_unroll32_user(start, hitop);             \
+               start += lsize * 32;                                    \
+       } while (start < end);                                          \
+                                                                       \
+       __##pfx##flush_epilogue                                         \
+}
+
+__BUILD_BLAST_USER_CACHE(d, dcache, Index_Writeback_Inv_D, Hit_Writeback_Inv_D, 16)
+__BUILD_BLAST_USER_CACHE(i, icache, Index_Invalidate_I, Hit_Invalidate_I, 16)
+__BUILD_BLAST_USER_CACHE(d, dcache, Index_Writeback_Inv_D, Hit_Writeback_Inv_D, 32)
+__BUILD_BLAST_USER_CACHE(i, icache, Index_Invalidate_I, Hit_Invalidate_I, 32)
+__BUILD_BLAST_USER_CACHE(d, dcache, Index_Writeback_Inv_D, Hit_Writeback_Inv_D, 64)
+__BUILD_BLAST_USER_CACHE(i, icache, Index_Invalidate_I, Hit_Invalidate_I, 64)
+
+#endif
+
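As a concrete illustration, the 32-byte D-cache instance generated above
expands (by hand) to roughly:

        static inline void blast_dcache32_user_page(unsigned long page)
        {
                unsigned long start = page;
                unsigned long end = page + PAGE_SIZE;

                __dflush_prologue

                do {
                        /* one CACHEE block of 32 lines = 0x400 bytes */
                        cache32_unroll32_user(start, Hit_Writeback_Inv_D);
                        start += 32 * 32;       /* lsize * 32 */
                } while (start < end);

                __dflush_epilogue
        }
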
 /* build blast_xxx_range, protected_blast_xxx_range */
 #define __BUILD_BLAST_CACHE_RANGE(pfx, desc, hitop, prot) \
 static inline void prot##blast_##pfx##cache##_range(unsigned long start, \
@@ -423,7 +580,7 @@ static inline void prot##blast_##pfx##cache##_range(unsigned long start, \
        __##pfx##flush_prologue                                         \
                                                                        \
        while (1) {                                                     \
-               prot##cache_op(hitop, addr);                            \
+               prot##cache_op(hitop, addr);                            \
                if (addr == aend)                                       \
                        break;                                          \
                addr += lsize;                                          \
@@ -432,10 +589,49 @@ static inline void prot##blast_##pfx##cache##_range(unsigned long start, \
        __##pfx##flush_epilogue                                         \
 }
 
+#ifndef CONFIG_EVA
+
 __BUILD_BLAST_CACHE_RANGE(d, dcache, Hit_Writeback_Inv_D, protected_)
-__BUILD_BLAST_CACHE_RANGE(s, scache, Hit_Writeback_Inv_SD, protected_)
 __BUILD_BLAST_CACHE_RANGE(i, icache, Hit_Invalidate_I, protected_)
+
+#else
+
+#define __BUILD_PROT_BLAST_CACHE_RANGE(pfx, desc, hitop)                \
+static inline void protected_blast_##pfx##cache##_range(unsigned long start, \
+                                                   unsigned long end)  \
+{                                                                      \
+       unsigned long lsize = cpu_##desc##_line_size();                 \
+       unsigned long addr = start & ~(lsize - 1);                      \
+       unsigned long aend = (end - 1) & ~(lsize - 1);                  \
+                                                                       \
+       __##pfx##flush_prologue                                         \
+                                                                       \
+       if (segment_eq(get_fs(), USER_DS))                              \
+               while (1) {                                             \
+                       protected_cachee_op(hitop, addr);               \
+                       if (addr == aend)                               \
+                               break;                                  \
+                       addr += lsize;                                  \
+               }                                                       \
+       else                                                            \
+               while (1) {                                             \
+                       protected_cache_op(hitop, addr);                \
+                       if (addr == aend)                               \
+                               break;                                  \
+                       addr += lsize;                                  \
+               }                                                       \
+                                                                       \
+       __##pfx##flush_epilogue                                         \
+}
+
+__BUILD_PROT_BLAST_CACHE_RANGE(d, dcache, Hit_Writeback_Inv_D)
+__BUILD_PROT_BLAST_CACHE_RANGE(i, icache, Hit_Invalidate_I)
+
+#endif
+
+__BUILD_BLAST_CACHE_RANGE(s, scache, Hit_Writeback_Inv_SD, protected_)
 __BUILD_BLAST_CACHE_RANGE(d, dcache, Hit_Writeback_Inv_D, )
+__BUILD_BLAST_CACHE_RANGE(i, icache, Hit_Invalidate_I, )
 __BUILD_BLAST_CACHE_RANGE(s, scache, Hit_Writeback_Inv_SD, )
 /* blast_inv_dcache_range */
 __BUILD_BLAST_CACHE_RANGE(inv_d, dcache, Hit_Invalidate_D, )
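
A hedged usage sketch (the caller name is illustrative, not from this patch):
a cacheflush(2)-style path that writes back data and invalidates instructions
over a user range. Under CONFIG_EVA the builders above emit CACHEE while
get_fs() is USER_DS and plain CACHE when a kernel segment is active:

        static void sync_icache_user_range(unsigned long start, unsigned long end)
        {
                /* write back dirty data so instruction fetches see it */
                protected_blast_dcache_range(start, end);
                /* then discard stale instructions */
                protected_blast_icache_range(start, end);
        }
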
diff --git a/arch/mips/include/asm/scatterlist.h b/arch/mips/include/asm/scatterlist.h
deleted file mode 100644
index 7ee0e64..0000000
--- a/arch/mips/include/asm/scatterlist.h
+++ /dev/null
@@ -1,6 +0,0 @@
-#ifndef __ASM_SCATTERLIST_H
-#define __ASM_SCATTERLIST_H
-
-#include <asm-generic/scatterlist.h>
-
-#endif /* __ASM_SCATTERLIST_H */
diff --git a/arch/mips/include/asm/sections.h b/arch/mips/include/asm/sections.h
deleted file mode 100644
index b7e3726..0000000
--- a/arch/mips/include/asm/sections.h
+++ /dev/null
@@ -1,6 +0,0 @@
-#ifndef _ASM_SECTIONS_H
-#define _ASM_SECTIONS_H
-
-#include <asm-generic/sections.h>
-
-#endif /* _ASM_SECTIONS_H */
diff --git a/arch/mips/include/asm/segment.h b/arch/mips/include/asm/segment.h
deleted file mode 100644
index 92ac001..0000000
--- a/arch/mips/include/asm/segment.h
+++ /dev/null
@@ -1,6 +0,0 @@
-#ifndef _ASM_SEGMENT_H
-#define _ASM_SEGMENT_H
-
-/* Only here because we have some old header files that expect it.. */
-
-#endif /* _ASM_SEGMENT_H */
diff --git a/arch/mips/include/asm/serial.h b/arch/mips/include/asm/serial.h
deleted file mode 100644
index a0cb0ca..0000000
--- a/arch/mips/include/asm/serial.h
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm-generic/serial.h>
diff --git a/arch/mips/include/asm/smp.h b/arch/mips/include/asm/smp.h
index eb60087584844381a717bf6dc1ffdf39508dcf1a..5a7283af46c2e41570bfa44af5fc6ce12912509b 100644
--- a/arch/mips/include/asm/smp.h
+++ b/arch/mips/include/asm/smp.h
 #include <linux/smp.h>
 #include <linux/threads.h>
 #include <linux/cpumask.h>
+#include <linux/cache.h>
 
 #include <linux/atomic.h>
 #include <asm/smp-ops.h>
+#include <asm/percpu.h>
 
 extern int smp_num_siblings;
-extern cpumask_t cpu_sibling_map[];
+DECLARE_PER_CPU_SHARED_ALIGNED(cpumask_t, cpu_sibling_map);
 
 #define raw_smp_processor_id() (current_thread_info()->cpu)
 
diff --git a/arch/mips/include/asm/time.h b/arch/mips/include/asm/time.h
index 2d7b9df4542dd478d53f53d8151d3381d1c58aee..7229562e3825647e74aa0906f0e2cb673e736519 100644
--- a/arch/mips/include/asm/time.h
+++ b/arch/mips/include/asm/time.h
@@ -18,6 +18,7 @@
 #include <linux/spinlock.h>
 #include <linux/clockchips.h>
 #include <linux/clocksource.h>
+#include <asm/gic.h>
 
 extern spinlock_t rtc_lock;
 
@@ -53,14 +54,16 @@ extern int (*perf_irq)(void);
 extern unsigned int __weak get_c0_compare_int(void);
 extern int r4k_clockevent_init(void);
 extern int smtc_clockevent_init(void);
-extern int gic_clockevent_init(void);
 
 static inline int mips_clockevent_init(void)
 {
 #ifdef CONFIG_MIPS_MT_SMTC
        return smtc_clockevent_init();
 #elif defined(CONFIG_CEVT_GIC)
-       return (gic_clockevent_init() | r4k_clockevent_init());
+       extern int gic_clockevent_init(void);
+
+       gic_clockevent_init();
+       return r4k_clockevent_init();
 #elif defined(CONFIG_CEVT_R4K)
        return r4k_clockevent_init();
 #else
@@ -75,7 +78,7 @@ extern int init_r4k_clocksource(void);
 
 static inline int init_mips_clocksource(void)
 {
-#if defined(CONFIG_CSRC_R4K) && !defined(CONFIG_CSRC_GIC)
+#ifdef CONFIG_CSRC_R4K
        return init_r4k_clocksource();
 #else
        return 0;
diff --git a/arch/mips/include/asm/topology.h b/arch/mips/include/asm/topology.h
index 12609a17dc8b5893faacf8df47b954b10a140664..8cd0efb2a7eeef316ceaa247510122c5dbd362ac 100644
--- a/arch/mips/include/asm/topology.h
+++ b/arch/mips/include/asm/topology.h
@@ -4,6 +4,7 @@
  * for more details.
  *
  * Copyright (C) 2007 by Ralf Baechle
+ * Copyright (C) 2012 by Leonid Yegoshin
  */
 #ifndef __ASM_TOPOLOGY_H
 #define __ASM_TOPOLOGY_H
 
 #ifdef CONFIG_SMP
 #define smt_capable()  (smp_num_siblings > 1)
+#define topology_thread_cpumask(cpu)    (&per_cpu(cpu_sibling_map, cpu))
+#define topology_core_id(cpu)           (cpu_data[cpu].core)
+#define topology_core_cpumask(cpu)      ((void)(cpu), cpu_online_mask)
+#define topology_physical_package_id(cpu)   ((void)cpu, 0)
 #endif
 
 #endif /* __ASM_TOPOLOGY_H */
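
These macros plug into the generic topology layer (sysfs and the scheduler).
A hedged sketch of walking a CPU's hardware threads through them (the
function name is illustrative):

        static void dump_thread_siblings(int cpu)
        {
                int sib;

                for_each_cpu(sib, topology_thread_cpumask(cpu))
                        pr_info("CPU%d: sibling CPU%d on core %d\n",
                                cpu, sib, topology_core_id(sib));
        }
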
diff --git a/arch/mips/include/asm/uaccess.h b/arch/mips/include/asm/uaccess.h
index f3fa3750f577c2414396943871a8f0bd6df6b928..48e3be26203338ae62a461bd9e56a4785c747874 100644
--- a/arch/mips/include/asm/uaccess.h
+++ b/arch/mips/include/asm/uaccess.h
@@ -223,47 +223,96 @@ struct __large_struct { unsigned long buf[100]; };
  * for 32 bit mode and old iron.
  */
 #ifdef CONFIG_32BIT
+#define __GET_KERNEL_DW(val, ptr) __get_kernel_asm_ll32(val, ptr)
 #define __GET_USER_DW(val, ptr) __get_user_asm_ll32(val, ptr)
 #endif
 #ifdef CONFIG_64BIT
-#define __GET_USER_DW(val, ptr) __get_user_asm(val, "ld", ptr)
+#define __GET_KERNEL_DW(val, ptr) __get_kernel_asm(val, "ld", ptr)
 #endif
 
+extern void __get_kernel_unknown(void);
 extern void __get_user_unknown(void);
 
-#define __get_user_common(val, size, ptr)                              \
+#define __get_kernel_common(val, size, ptr)                             \
 do {                                                                   \
+       __chk_user_ptr(ptr);                                            \
+       __gu_err = 0;                                                   \
+       switch (size) {                                                 \
+       case 1: __get_kernel_asm(val, "lb", ptr);  break;               \
+       case 2: __get_kernel_asm(val, "lh", ptr);  break;               \
+       case 4: __get_kernel_asm(val, "lw", ptr);  break;               \
+       case 8: __GET_KERNEL_DW(val, ptr); break;                       \
+       default: __get_kernel_unknown(); break;                         \
+       }                                                               \
+} while (0)
+
+#ifdef CONFIG_EVA
+#define __get_user_common(val, size, ptr)                               \
+do {                                                                   \
+       __gu_err = 0;                                                   \
        switch (size) {                                                 \
-       case 1: __get_user_asm(val, "lb", ptr); break;                  \
-       case 2: __get_user_asm(val, "lh", ptr); break;                  \
-       case 4: __get_user_asm(val, "lw", ptr); break;                  \
-       case 8: __GET_USER_DW(val, ptr); break;                         \
-       default: __get_user_unknown(); break;                           \
+       case 1: __get_user_asm(val, "lbe", ptr); break;                 \
+       case 2: __get_user_asm(val, "lhe", ptr); break;                 \
+       case 4: __get_user_asm(val, "lwe", ptr); break;                 \
+       case 8: __GET_USER_DW(val, ptr); break;                         \
+       default: __get_user_unknown(); break;                           \
        }                                                               \
 } while (0)
+#endif
 
-#define __get_user_nocheck(x, ptr, size)                               \
+#ifndef CONFIG_EVA
+#define __get_user_nocheck(x, ptr, size)                                \
 ({                                                                     \
-       int __gu_err;                                                   \
+       int __gu_err;                                                   \
+       __get_kernel_common((x), size, ptr);                            \
+       __gu_err;                                                       \
+})
+#else
+#define __get_user_nocheck(x, ptr, size)                                \
+({                                                                     \
+       int __gu_err;                                                   \
+       const __typeof__(*(ptr)) __user * __gu_ptr = (ptr);             \
                                                                        \
-       __chk_user_ptr(ptr);                                            \
-       __get_user_common((x), size, ptr);                              \
+       if (segment_eq(get_fs(), KERNEL_DS))                            \
+               __get_kernel_common((x), size, __gu_ptr);               \
+       else {                                                          \
+               __chk_user_ptr(ptr);                                    \
+               __get_user_common((x), size, __gu_ptr);                 \
+       }                                                               \
        __gu_err;                                                       \
 })
+#endif
 
-#define __get_user_check(x, ptr, size)                                 \
+#ifndef CONFIG_EVA
+#define __get_user_check(x, ptr, size)                                  \
 ({                                                                     \
        int __gu_err = -EFAULT;                                         \
        const __typeof__(*(ptr)) __user * __gu_ptr = (ptr);             \
                                                                        \
-       might_fault();                                                  \
-       if (likely(access_ok(VERIFY_READ,  __gu_ptr, size)))            \
-               __get_user_common((x), size, __gu_ptr);                 \
+       might_fault();                                                  \
+       if (likely(access_ok(VERIFY_READ,  __gu_ptr, size)))            \
+               __get_kernel_common((x), size, __gu_ptr);               \
                                                                        \
        __gu_err;                                                       \
 })
+#else
+#define __get_user_check(x, ptr, size)                                  \
+({                                                                     \
+       int __gu_err = -EFAULT;                                         \
+       const __typeof__(*(ptr)) __user * __gu_ptr = (ptr);             \
+                                                                       \
+       if (segment_eq(get_fs(), KERNEL_DS)) {                          \
+               __get_kernel_common((x), size, __gu_ptr);               \
+       } else {                                                        \
+               might_fault();                                          \
+               if (likely(access_ok(VERIFY_READ,  __gu_ptr, size)))    \
+                       __get_user_common((x), size, __gu_ptr);         \
+       }                                                               \
+       __gu_err;                                                       \
+})
+#endif
 
-#define __get_user_asm(val, insn, addr)                                        \
+#define __get_kernel_asm(val, insn, addr)                               \
 {                                                                      \
        long __gu_tmp;                                                  \
                                                                        \
@@ -284,10 +333,34 @@ do {                                                                      \
        (val) = (__typeof__(*(addr))) __gu_tmp;                         \
 }
 
+#ifdef CONFIG_EVA
+#define __get_user_asm(val, insn, addr)                                 \
+{                                                                      \
+       long __gu_tmp;                                                  \
+                                                                       \
+       __asm__ __volatile__(                                           \
+       "       .set    eva                                     \n"     \
+       "1:     " insn "        %1, 0(%3)                          \n"     \
+       "2:                                                     \n"     \
+       "       .insn                                           \n"     \
+       "       .section .fixup,\"ax\"                          \n"     \
+       "3:     li      %0, %4                                  \n"     \
+       "       j       2b                                      \n"     \
+       "       .previous                                       \n"     \
+       "       .section __ex_table,\"a\"                       \n"     \
+       "       "__UA_ADDR "\t1b, 3b                            \n"     \
+       "       .previous                                       \n"     \
+       : "=r" (__gu_err), "=r" (__gu_tmp)                              \
+       : "0" (0), "r" (addr), "i" (-EFAULT));                     \
+                                                                       \
+       (val) = (__typeof__(*(addr))) __gu_tmp;                         \
+}
+#endif
+
 /*
  * Get a long long 64 using 32 bit registers.
  */
-#define __get_user_asm_ll32(val, addr)                                 \
+#define __get_kernel_asm_ll32(val, addr)                                \
 {                                                                      \
        union {                                                         \
                unsigned long long      l;                              \
@@ -314,18 +387,113 @@ do {                                                                     \
                                                                        \
        (val) = __gu_tmp.t;                                             \
 }
+#ifdef CONFIG_EVA
+#define __get_user_asm_ll32(val, addr)                                 \
+{                                                                      \
+       union {                                                         \
+               unsigned long long      l;                              \
+               __typeof__(*(addr))     t;                              \
+       } __gu_tmp;                                                     \
+                                                                       \
+       __asm__ __volatile__(                                           \
+       "       .set    eva                                     \n"     \
+       "1:     lwe     %1, (%3)                                \n"     \
+       "2:     lwe     %D1, 4(%3)                              \n"     \
+       "3:                                                     \n"     \
+       "       .insn                                           \n"     \
+       "       .section        .fixup,\"ax\"                   \n"     \
+       "4:     li      %0, %4                                  \n"     \
+       "       move    %1, $0                                  \n"     \
+       "       move    %D1, $0                                 \n"     \
+       "       j       3b                                      \n"     \
+       "       .previous                                       \n"     \
+       "       .section        __ex_table,\"a\"                \n"     \
+       "       " __UA_ADDR "   1b, 4b                          \n"     \
+       "       " __UA_ADDR "   2b, 4b                          \n"     \
+       "       .previous                                       \n"     \
+       : "=r" (__gu_err), "=&r" (__gu_tmp.l)                           \
+       : "0" (0), "r" (addr), "i" (-EFAULT));                          \
+                                                                       \
+       (val) = __gu_tmp.t;                                             \
+}
+#endif
+
 
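To summarize the resulting get path under CONFIG_EVA: a KERNEL_DS segment
takes __get_kernel_common() and emits normal loads (LB/LH/LW), while USER_DS
takes __get_user_common() and emits the EVA forms (LBE/LHE/LWE), which are
translated through the user mapping even from kernel mode. A hedged caller
sketch (the wrapper is illustrative):

        static int read_user_u32(u32 __user *uptr, u32 *val)
        {
                /* emits LWE when get_fs() == USER_DS, LW for KERNEL_DS */
                return __get_user(*val, uptr);
        }
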
 /*
  * Yuck.  We need two variants, one for 64bit operation and one
  * for 32 bit mode and old iron.
  */
 #ifdef CONFIG_32BIT
+#define __PUT_KERNEL_DW(ptr) __put_kernel_asm_ll32(ptr)
 #define __PUT_USER_DW(ptr) __put_user_asm_ll32(ptr)
 #endif
 #ifdef CONFIG_64BIT
-#define __PUT_USER_DW(ptr) __put_user_asm("sd", ptr)
+#define __PUT_KERNEL_DW(ptr) __put_kernel_asm("sd", ptr)
 #endif
 
+extern void __put_kernel_unknown(void);
+
+#ifdef CONFIG_EVA
+extern void __put_user_unknown(void);
+
+#define __put_kernel_common(size, ptr)                                  \
+do {                                                                   \
+       switch (size) {                                                 \
+       case 1: __put_kernel_asm("sb", ptr);  break;                    \
+       case 2: __put_kernel_asm("sh", ptr);  break;                    \
+       case 4: __put_kernel_asm("sw", ptr);  break;                    \
+       case 8: __PUT_KERNEL_DW(ptr); break;                            \
+       default: __put_kernel_unknown(); break;                         \
+       }                                                               \
+} while (0)
+
+#define __put_user_common(size, ptr)                                    \
+do {                                                                   \
+       switch (size) {                                                 \
+       case 1: __put_user_asm("sbe", ptr);  break;                     \
+       case 2: __put_user_asm("she", ptr);  break;                     \
+       case 4: __put_user_asm("swe", ptr);  break;                     \
+       case 8: __PUT_USER_DW(ptr); break;                              \
+       default: __put_user_unknown(); break;                           \
+       }                                                               \
+} while (0)
+
+#define __put_user_nocheck(x, ptr, size)                                \
+({                                                                     \
+       __typeof__(*(ptr)) __pu_val;                                    \
+       int __pu_err = 0;                                               \
+       const __typeof__(*(ptr)) __user * __pu_ptr = (ptr);             \
+                                                                       \
+       if (segment_eq(get_fs(), KERNEL_DS)) {                          \
+               __chk_user_ptr(__pu_ptr);                               \
+               __pu_val = (x);                                         \
+               __put_kernel_common(size, __pu_ptr);                    \
+       } else {                                                        \
+               __chk_user_ptr(__pu_ptr);                               \
+               __pu_val = (x);                                         \
+               __put_user_common(size, __pu_ptr);                      \
+       }                                                               \
+       __pu_err;                                                       \
+})
+
+#define __put_user_check(x, ptr, size)                                  \
+({                                                                     \
+       __typeof__(*(ptr)) __pu_val = (x);                              \
+       int __pu_err = -EFAULT;                                         \
+       const __typeof__(*(ptr)) __user * __pu_ptr = (ptr);             \
+                                                                       \
+       if (segment_eq(get_fs(), KERNEL_DS))                            \
+               __put_kernel_common(size, __pu_ptr);                    \
+       else {                                                          \
+               might_fault();                                          \
+               if (likely(access_ok(VERIFY_WRITE,  __pu_ptr, size)))   \
+                       __put_user_common(size, __pu_ptr);              \
+       }                                                               \
+       __pu_err;                                                       \
+})
+
+#else
+
 #define __put_user_nocheck(x, ptr, size)                               \
 ({                                                                     \
        __typeof__(*(ptr)) __pu_val;                                    \
@@ -334,11 +502,11 @@ do {                                                                      \
        __chk_user_ptr(ptr);                                            \
        __pu_val = (x);                                                 \
        switch (size) {                                                 \
-       case 1: __put_user_asm("sb", ptr); break;                       \
-       case 2: __put_user_asm("sh", ptr); break;                       \
-       case 4: __put_user_asm("sw", ptr); break;                       \
-       case 8: __PUT_USER_DW(ptr); break;                              \
-       default: __put_user_unknown(); break;                           \
+       case 1: __put_kernel_asm("sb", ptr); break;                     \
+       case 2: __put_kernel_asm("sh", ptr); break;                     \
+       case 4: __put_kernel_asm("sw", ptr); break;                     \
+       case 8: __PUT_KERNEL_DW(ptr); break;                            \
+       default: __put_kernel_unknown(); break;                         \
        }                                                               \
        __pu_err;                                                       \
 })
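
The put side mirrors the get side: under CONFIG_EVA, __put_user_common()
selects the EVA stores (SBE/SHE/SWE) for user segments while
__put_kernel_common() keeps SB/SH/SW. A hedged caller sketch (the wrapper is
illustrative):

        static int write_user_u32(u32 __user *uptr, u32 val)
        {
                /* emits SWE when get_fs() == USER_DS, SW for KERNEL_DS */
                return __put_user(val, uptr);
        }
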
@@ -352,17 +520,19 @@ do {                                                                      \
        might_fault();                                                  \
        if (likely(access_ok(VERIFY_WRITE,  __pu_addr, size))) {        \
                switch (size) {                                         \
-               case 1: __put_user_asm("sb", __pu_addr); break;         \
-               case 2: __put_user_asm("sh", __pu_addr); break;         \
-               case 4: __put_user_asm("sw", __pu_addr); break;         \
-               case 8: __PUT_USER_DW(__pu_addr); break;                \
-               default: __put_user_unknown(); break;                   \
+               case 1: __put_kernel_asm("sb", __pu_addr); break;       \
+               case 2: __put_kernel_asm("sh", __pu_addr); break;       \
+               case 4: __put_kernel_asm("sw", __pu_addr); break;       \
+               case 8: __PUT_KERNEL_DW(__pu_addr); break;              \
+               default: __put_kernel_unknown(); break;                 \
                }                                                       \
        }                                                               \
        __pu_err;                                                       \
 })
+#endif /* CONFIG_EVA */
 
-#define __put_user_asm(insn, ptr)                                      \
+#ifndef CONFIG_EVA
+#define __put_kernel_asm(insn, ptr)                                     \
 {                                                                      \
        __asm__ __volatile__(                                           \
        "1:     " insn "        %z2, %3         # __put_user_asm\n"     \
@@ -379,8 +549,47 @@ do {                                                                       \
        : "0" (0), "Jr" (__pu_val), "o" (__m(ptr)),                     \
          "i" (-EFAULT));                                               \
 }
+#else
+#define __put_kernel_asm(insn, ptr)                                     \
+{                                                                      \
+       __asm__ __volatile__(                                           \
+       "1:     " insn "        %2, %3         # __put_kernel_asm\n"    \
+       "2:                                                     \n"     \
+       "       .insn                                           \n"     \
+       "       .section        .fixup,\"ax\"                   \n"     \
+       "3:     li      %0, %4                                  \n"     \
+       "       j       2b                                      \n"     \
+       "       .previous                                       \n"     \
+       "       .section        __ex_table,\"a\"                \n"     \
+       "       " __UA_ADDR "   1b, 3b                          \n"     \
+       "       .previous                                       \n"     \
+       : "=r" (__pu_err)                                               \
+       : "0" (0), "r" (__pu_val), "o" (__m(ptr)),                      \
+         "i" (-EFAULT));                                               \
+}
+
+#define __put_user_asm(insn, ptr)                                       \
+{                                                                      \
+       __asm__ __volatile__(                                           \
+       "       .set        eva                                 \n"     \
+       "1:     " insn "        %2, 0(%3)         # __put_user_asm\n"   \
+       "2:                                                     \n"     \
+       "       .insn                                           \n"     \
+       "       .section        .fixup,\"ax\"                   \n"     \
+       "3:     li      %0, %4                                  \n"     \
+       "       j       2b                                      \n"     \
+       "       .previous                                       \n"     \
+       "       .section        __ex_table,\"a\"                \n"     \
+       "       " __UA_ADDR "   1b, 3b                          \n"     \
+       "       .previous                                       \n"     \
+       : "=r" (__pu_err)                                               \
+       : "0" (0), "r" (__pu_val), "r" (ptr),                           \
+         "i" (-EFAULT));                                               \
+}
+#endif
 
-#define __put_user_asm_ll32(ptr)                                       \
+
+#define __put_kernel_asm_ll32(ptr)                                      \
 {                                                                      \
        __asm__ __volatile__(                                           \
        "1:     sw      %2, (%3)        # __put_user_asm_ll32   \n"     \
@@ -400,8 +609,30 @@ do {                                                                       \
          "i" (-EFAULT));                                               \
 }
 
-extern void __put_user_unknown(void);
+#ifdef CONFIG_EVA
+#define __put_user_asm_ll32(ptr)                                        \
+{                                                                      \
+       __asm__ __volatile__(                                           \
+       "       .set    eva                                     \n"     \
+       "1:     swe     %2, (%3)        # __put_user_asm_ll32   \n"     \
+       "2:     swe     %D2, 4(%3)                              \n"     \
+       "3:                                                     \n"     \
+       "       .insn                                           \n"     \
+       "       .section        .fixup,\"ax\"                   \n"     \
+       "4:     li      %0, %4                                  \n"     \
+       "       j       3b                                      \n"     \
+       "       .previous                                       \n"     \
+       "       .section        __ex_table,\"a\"                \n"     \
+       "       " __UA_ADDR "   1b, 4b                          \n"     \
+       "       " __UA_ADDR "   2b, 4b                          \n"     \
+       "       .previous"                                              \
+       : "=r" (__pu_err)                                               \
+       : "0" (0), "r" (__pu_val), "r" (ptr),                           \
+         "i" (-EFAULT));                                               \
+}
+#endif
 
+#ifndef CONFIG_EVA
 /*
  * put_user_unaligned: - Write a simple value into user space.
  * @x:  Value to copy to user space.
@@ -670,6 +901,8 @@ do {                                                                        \
 
 extern void __put_user_unaligned_unknown(void);
 
+#endif /* CONFIG_EVA */
+
 /*
  * We're generating jump to subroutines which will be outside the range of
  * jump instructions
@@ -692,8 +925,12 @@ extern void __put_user_unaligned_unknown(void);
 #endif
 
 extern size_t __copy_user(void *__to, const void *__from, size_t __n);
+#ifdef CONFIG_EVA
+extern size_t __copy_fromuser(void *__to, const void *__from, size_t __n);
+extern size_t __copy_touser(void *__to, const void *__from, size_t __n);
+#endif
 
-#define __invoke_copy_to_user(to, from, n)                             \
+#define __invoke_copy_to_kernel(to, from, n)                            \
 ({                                                                     \
        register void __user *__cu_to_r __asm__("$4");                  \
        register const void *__cu_from_r __asm__("$5");                 \
@@ -703,7 +940,7 @@ extern size_t __copy_user(void *__to, const void *__from, size_t __n);
        __cu_from_r = (from);                                           \
        __cu_len_r = (n);                                               \
        __asm__ __volatile__(                                           \
-       __MODULE_JAL(__copy_user)                                       \
+       __MODULE_JAL(__copy_user)                                       \
        : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r)       \
        :                                                               \
        : "$8", "$9", "$10", "$11", "$12", "$14", "$15", "$24", "$31",  \
@@ -711,6 +948,26 @@ extern size_t __copy_user(void *__to, const void *__from, size_t __n);
        __cu_len_r;                                                     \
 })
 
+#ifdef CONFIG_EVA
+#define __invoke_copy_to_user(to, from, n)                              \
+({                                                                     \
+       register void __user *__cu_to_r __asm__("$4");                  \
+       register const void *__cu_from_r __asm__("$5");                 \
+       register long __cu_len_r __asm__("$6");                         \
+                                                                       \
+       __cu_to_r = (to);                                               \
+       __cu_from_r = (from);                                           \
+       __cu_len_r = (n);                                               \
+       __asm__ __volatile__(                                           \
+       __MODULE_JAL(__copy_touser)                                     \
+       : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r)       \
+       :                                                               \
+       : "$8", "$9", "$10", "$11", "$12", "$15", "$24", "$31",         \
+         DADDI_SCRATCH, "memory");                                     \
+       __cu_len_r;                                                     \
+})
+#endif
+
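A note on the convention shared by all the __invoke_* macros: the to/from/len
values are pinned to $4/$5/$6 (the MIPS argument registers a0-a2) so that
__MODULE_JAL can jump straight into the assembler routine without a C call
frame, and the clobber list names the caller-saved registers the routine may
use. Each macro therefore behaves roughly like the plain call below (a
sketch, not the actual expansion):

        /* pseudo-equivalent of __invoke_copy_to_kernel(to, from, n) */
        size_t remaining = __copy_user((void *)to, from, n);  /* bytes NOT copied */
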
 /*
  * __copy_to_user: - Copy a block of data into user space, with less checking.
  * @to:          Destination address, in user space.
@@ -725,6 +982,7 @@ extern size_t __copy_user(void *__to, const void *__from, size_t __n);
  * Returns number of bytes that could not be copied.
  * On success, this will be zero.
  */
+#ifndef CONFIG_EVA
 #define __copy_to_user(to, from, n)                                    \
 ({                                                                     \
        void __user *__cu_to;                                           \
@@ -734,13 +992,58 @@ extern size_t __copy_user(void *__to, const void *__from, size_t __n);
        __cu_to = (to);                                                 \
        __cu_from = (from);                                             \
        __cu_len = (n);                                                 \
-       might_fault();                                                  \
-       __cu_len = __invoke_copy_to_user(__cu_to, __cu_from, __cu_len); \
+       might_fault();                                                  \
+       __cu_len = __invoke_copy_to_kernel(__cu_to, __cu_from, __cu_len); \
        __cu_len;                                                       \
 })
+#else
+#define __copy_to_user(to, from, n)                                    \
+({                                                                     \
+       void __user *__cu_to;                                           \
+       const void *__cu_from;                                          \
+       long __cu_len;                                                  \
+                                                                       \
+       __cu_to = (to);                                                 \
+       __cu_from = (from);                                             \
+       __cu_len = (n);                                                 \
+       if (segment_eq(get_fs(), KERNEL_DS))                            \
+               __cu_len = __invoke_copy_to_kernel(__cu_to, __cu_from, __cu_len); \
+       else {                                                          \
+               might_fault();                                                  \
+               __cu_len = __invoke_copy_to_user(__cu_to, __cu_from, __cu_len); \
+       }                                                               \
+       __cu_len;                                                       \
+})
+#endif
 
-extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
+#ifndef CONFIG_EVA
+#define __copy_to_user_inatomic(to, from, n)                            \
+({                                                                      \
+       void __user *__cu_to;                                           \
+       const void *__cu_from;                                          \
+       long __cu_len;                                                  \
+                                                                       \
+       __cu_to = (to);                                                 \
+       __cu_from = (from);                                             \
+       __cu_len = (n);                                                 \
+       __cu_len = __invoke_copy_to_kernel(__cu_to, __cu_from, __cu_len); \
+       __cu_len;                                                       \
+})
 
+#define __copy_from_user_inatomic(to, from, n)                          \
+({                                                                      \
+       void *__cu_to;                                                  \
+       const void __user *__cu_from;                                   \
+       long __cu_len;                                                  \
+                                                                       \
+       __cu_to = (to);                                                 \
+       __cu_from = (from);                                             \
+       __cu_len = (n);                                                 \
+       __cu_len = __invoke_copy_from_kernel_inatomic(__cu_to, __cu_from, \
+                                                   __cu_len);          \
+       __cu_len;                                                       \
+})
+#else
 #define __copy_to_user_inatomic(to, from, n)                           \
 ({                                                                     \
        void __user *__cu_to;                                           \
@@ -750,7 +1053,10 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        __cu_to = (to);                                                 \
        __cu_from = (from);                                             \
        __cu_len = (n);                                                 \
-       __cu_len = __invoke_copy_to_user(__cu_to, __cu_from, __cu_len); \
+       if (segment_eq(get_fs(), KERNEL_DS))                            \
+               __cu_len = __invoke_copy_to_kernel(__cu_to, __cu_from, __cu_len); \
+       else                                                            \
+               __cu_len = __invoke_copy_to_user(__cu_to, __cu_from, __cu_len); \
        __cu_len;                                                       \
 })
 
@@ -763,10 +1069,15 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        __cu_to = (to);                                                 \
        __cu_from = (from);                                             \
        __cu_len = (n);                                                 \
-       __cu_len = __invoke_copy_from_user_inatomic(__cu_to, __cu_from, \
-                                                   __cu_len);          \
+       if (segment_eq(get_fs(), KERNEL_DS))                            \
+               __cu_len = __invoke_copy_from_kernel_inatomic(__cu_to, __cu_from, \
+                                                   __cu_len);          \
+       else                                                            \
+               __cu_len = __invoke_copy_from_user_inatomic(__cu_to, __cu_from, \
+                                                   __cu_len);          \
        __cu_len;                                                       \
 })
+#endif
 
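Putting the copy plumbing together under CONFIG_EVA (routine names per the
declarations and __MODULE_JAL targets above; the from-user wrappers continue
below):

        /*
         * __copy_to_user():   KERNEL_DS -> __copy_user    (plain stores)
         *                     USER_DS   -> __copy_touser  (EVA stores)
         * __invoke_copy_from_kernel() -> __copy_user
         * __invoke_copy_from_user()   -> __copy_fromuser  (EVA loads)
         */
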
 /*
  * copy_to_user: - Copy a block of data into user space.
@@ -781,6 +1092,24 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
  * Returns number of bytes that could not be copied.
  * On success, this will be zero.
  */
+#ifndef CONFIG_EVA
+#define copy_to_user(to, from, n)                                       \
+({                                                                      \
+       void __user *__cu_to;                                           \
+       const void *__cu_from;                                          \
+       long __cu_len;                                                  \
+                                                                       \
+       __cu_to = (to);                                                 \
+       __cu_from = (from);                                             \
+       __cu_len = (n);                                                 \
+       if (access_ok(VERIFY_WRITE, __cu_to, __cu_len)) {               \
+               might_fault();                                          \
+               __cu_len = __invoke_copy_to_kernel(__cu_to, __cu_from,  \
+                                                __cu_len);             \
+       }                                                               \
+       __cu_len;                                                       \
+})
+#else
 #define copy_to_user(to, from, n)                                      \
 ({                                                                     \
        void __user *__cu_to;                                           \
@@ -790,15 +1119,44 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        __cu_to = (to);                                                 \
        __cu_from = (from);                                             \
        __cu_len = (n);                                                 \
-       if (access_ok(VERIFY_WRITE, __cu_to, __cu_len)) {               \
-               might_fault();                                          \
-               __cu_len = __invoke_copy_to_user(__cu_to, __cu_from,    \
-                                                __cu_len);             \
-       }                                                               \
+       if (segment_eq(get_fs(), KERNEL_DS))                            \
+               __cu_len = __invoke_copy_to_kernel(__cu_to, __cu_from,  \
+                                                __cu_len);             \
+       else                                                            \
+               if (access_ok(VERIFY_WRITE, __cu_to, __cu_len)) {       \
+                       might_fault();                                  \
+                       __cu_len = __invoke_copy_to_user(__cu_to, __cu_from, \
+                                                        __cu_len);     \
+               }                                                       \
        __cu_len;                                                       \
 })
+#endif
+
+#define __invoke_copy_from_kernel(to, from, n)                          \
+({                                                                     \
+       register void *__cu_to_r __asm__("$4");                         \
+       register const void __user *__cu_from_r __asm__("$5");          \
+       register long __cu_len_r __asm__("$6");                         \
+                                                                       \
+       __cu_to_r = (to);                                               \
+       __cu_from_r = (from);                                           \
+       __cu_len_r = (n);                                               \
+       __asm__ __volatile__(                                           \
+       ".set\tnoreorder\n\t"                                           \
+       __MODULE_JAL(__copy_user)                                       \
+       ".set\tnoat\n\t"                                                \
+       __UA_ADDU "\t$1, %1, %2\n\t"                                    \
+       ".set\tat\n\t"                                                  \
+       ".set\treorder"                                                 \
+       : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r)       \
+       :                                                               \
+       : "$8", "$9", "$10", "$11", "$12", "$14", "$15", "$24", "$31",  \
+         DADDI_SCRATCH, "memory");                                     \
+       __cu_len_r;                                                     \
+})
 
-#define __invoke_copy_from_user(to, from, n)                           \
+#ifdef CONFIG_EVA
+#define __invoke_copy_from_user(to, from, n)                            \
 ({                                                                     \
        register void *__cu_to_r __asm__("$4");                         \
        register const void __user *__cu_from_r __asm__("$5");          \
@@ -809,7 +1167,7 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        __cu_len_r = (n);                                               \
        __asm__ __volatile__(                                           \
        ".set\tnoreorder\n\t"                                           \
-       __MODULE_JAL(__copy_user)                                       \
+       __MODULE_JAL(__copy_fromuser)                                   \
        ".set\tnoat\n\t"                                                \
        __UA_ADDU "\t$1, %1, %2\n\t"                                    \
        ".set\tat\n\t"                                                  \
@@ -820,6 +1178,35 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
          DADDI_SCRATCH, "memory");                                     \
        __cu_len_r;                                                     \
 })
+#endif
+
+extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
+
+#define __invoke_copy_from_kernel_inatomic(to, from, n)                 \
+({                                                                     \
+       register void *__cu_to_r __asm__("$4");                         \
+       register const void __user *__cu_from_r __asm__("$5");          \
+       register long __cu_len_r __asm__("$6");                         \
+                                                                       \
+       __cu_to_r = (to);                                               \
+       __cu_from_r = (from);                                           \
+       __cu_len_r = (n);                                               \
+       __asm__ __volatile__(                                           \
+       ".set\tnoreorder\n\t"                                           \
+       __MODULE_JAL(__copy_user_inatomic)                              \
+       ".set\tnoat\n\t"                                                \
+       __UA_ADDU "\t$1, %1, %2\n\t"                                    \
+       ".set\tat\n\t"                                                  \
+       ".set\treorder"                                                 \
+       : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r)       \
+       :                                                               \
+       : "$8", "$9", "$10", "$11", "$12", "$14", "$15", "$24", "$31",  \
+         DADDI_SCRATCH, "memory");                                     \
+       __cu_len_r;                                                     \
+})
+
+#ifdef CONFIG_EVA
+extern size_t __copy_fromuser_inatomic(void *__to, const void *__from, size_t __n);
 
 #define __invoke_copy_from_user_inatomic(to, from, n)                  \
 ({                                                                     \
@@ -832,7 +1219,7 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        __cu_len_r = (n);                                               \
        __asm__ __volatile__(                                           \
        ".set\tnoreorder\n\t"                                           \
-       __MODULE_JAL(__copy_user_inatomic)                              \
+       __MODULE_JAL(__copy_fromuser_inatomic)                          \
        ".set\tnoat\n\t"                                                \
        __UA_ADDU "\t$1, %1, %2\n\t"                                    \
        ".set\tat\n\t"                                                  \
@@ -844,6 +1231,32 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        __cu_len_r;                                                     \
 })
 
+extern size_t __copy_inuser(void *__to, const void *__from, size_t __n);
+
+#define __invoke_copy_in_user(to, from, n)                              \
+({                                                                     \
+       register void *__cu_to_r __asm__("$4");                         \
+       register const void __user *__cu_from_r __asm__("$5");          \
+       register long __cu_len_r __asm__("$6");                         \
+                                                                       \
+       __cu_to_r = (to);                                               \
+       __cu_from_r = (from);                                           \
+       __cu_len_r = (n);                                               \
+       __asm__ __volatile__(                                           \
+       ".set\tnoreorder\n\t"                                           \
+       __MODULE_JAL(__copy_inuser)                                     \
+       ".set\tnoat\n\t"                                                \
+       __UA_ADDU "\t$1, %1, %2\n\t"                                    \
+       ".set\tat\n\t"                                                  \
+       ".set\treorder"                                                 \
+       : "+r" (__cu_to_r), "+r" (__cu_from_r), "+r" (__cu_len_r)       \
+       :                                                               \
+       : "$8", "$9", "$10", "$11", "$12", "$14", "$15", "$24", "$31",  \
+         DADDI_SCRATCH, "memory");                                     \
+       __cu_len_r;                                                     \
+})
+#endif
+
 /*
  * __copy_from_user: - Copy a block of data from user space, with less checking.
  * @to:          Destination address, in kernel space.
@@ -861,6 +1274,22 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
  * If some data could not be copied, this function will pad the copied
  * data to the requested size using zero bytes.
  */
+#ifndef CONFIG_EVA
+#define __copy_from_user(to, from, n)                                   \
+({                                                                     \
+       void *__cu_to;                                                  \
+       const void __user *__cu_from;                                   \
+       long __cu_len;                                                  \
+                                                                       \
+       __cu_to = (to);                                                 \
+       __cu_from = (from);                                             \
+       __cu_len = (n);                                                 \
+       might_fault();                                                  \
+       __cu_len = __invoke_copy_from_kernel(__cu_to, __cu_from,        \
+                                          __cu_len);                   \
+       __cu_len;                                                       \
+})
+#else
 #define __copy_from_user(to, from, n)                                  \
 ({                                                                     \
        void *__cu_to;                                                  \
@@ -872,9 +1301,10 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        __cu_len = (n);                                                 \
        might_fault();                                                  \
        __cu_len = __invoke_copy_from_user(__cu_to, __cu_from,          \
-                                          __cu_len);                   \
+                                          __cu_len);                   \
        __cu_len;                                                       \
 })
+#endif
 
 /*
  * copy_from_user: - Copy a block of data from user space.
@@ -892,7 +1322,25 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
  * If some data could not be copied, this function will pad the copied
  * data to the requested size using zero bytes.
  */
-#define copy_from_user(to, from, n)                                    \
+#ifndef CONFIG_EVA
+#define copy_from_user(to, from, n)                                    \
+({                                                                     \
+       void *__cu_to;                                                  \
+       const void __user *__cu_from;                                   \
+       long __cu_len;                                                  \
+                                                                       \
+       __cu_to = (to);                                                 \
+       __cu_from = (from);                                             \
+       __cu_len = (n);                                                 \
+       if (access_ok(VERIFY_READ, __cu_from, __cu_len)) {              \
+               might_fault();                                          \
+               __cu_len = __invoke_copy_from_kernel(__cu_to, __cu_from,  \
+                                                  __cu_len);           \
+       }                                                               \
+       __cu_len;                                                       \
+})
+#else
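+/*
+ * EVA: kernel and user mappings can overlap, so the segment selected by
+ * get_fs() decides which routine is safe: KERNEL_DS callers take the
+ * kernel copy, everything else the EVA user copy.
+ */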
+#define copy_from_user(to, from, n)                                    \
 ({                                                                     \
        void *__cu_to;                                                  \
        const void __user *__cu_from;                                   \
@@ -901,14 +1349,53 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        __cu_to = (to);                                                 \
        __cu_from = (from);                                             \
        __cu_len = (n);                                                 \
-       if (access_ok(VERIFY_READ, __cu_from, __cu_len)) {              \
-               might_fault();                                          \
-               __cu_len = __invoke_copy_from_user(__cu_to, __cu_from,  \
-                                                  __cu_len);           \
-       }                                                               \
+       if (segment_eq(get_fs(), KERNEL_DS))                            \
+               __cu_len = __invoke_copy_from_kernel(__cu_to, __cu_from,  \
+                                                  __cu_len);           \
+       else                                                            \
+               if (access_ok(VERIFY_READ, __cu_from, __cu_len)) {      \
+                       might_fault();                                  \
+                       __cu_len = __invoke_copy_from_user(__cu_to, __cu_from,  \
+                                                          __cu_len);   \
+               }                                                       \
        __cu_len;                                                       \
 })
+#endif
 
+#ifndef CONFIG_EVA
+#define __copy_in_user(to, from, n)                                    \
+({                                                                     \
+       void __user *__cu_to;                                           \
+       const void __user *__cu_from;                                   \
+       long __cu_len;                                                  \
+                                                                       \
+       __cu_to = (to);                                                 \
+       __cu_from = (from);                                             \
+       __cu_len = (n);                                                 \
+       might_fault();                                                  \
+       __cu_len = __invoke_copy_from_kernel(__cu_to, __cu_from,        \
+                                          __cu_len);                   \
+       __cu_len;                                                       \
+})
+
+#define copy_in_user(to, from, n)                                      \
+({                                                                     \
+       void __user *__cu_to;                                           \
+       const void __user *__cu_from;                                   \
+       long __cu_len;                                                  \
+                                                                       \
+       __cu_to = (to);                                                 \
+       __cu_from = (from);                                             \
+       __cu_len = (n);                                                 \
+       if (likely(access_ok(VERIFY_READ, __cu_from, __cu_len) &&       \
+                  access_ok(VERIFY_WRITE, __cu_to, __cu_len))) {       \
+               might_fault();                                          \
+               __cu_len = __invoke_copy_from_kernel(__cu_to, __cu_from,  \
+                                                  __cu_len);           \
+       }                                                               \
+       __cu_len;                                                       \
+})
+#else
 #define __copy_in_user(to, from, n)                                    \
 ({                                                                     \
        void __user *__cu_to;                                           \
@@ -919,8 +1406,8 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        __cu_from = (from);                                             \
        __cu_len = (n);                                                 \
        might_fault();                                                  \
-       __cu_len = __invoke_copy_from_user(__cu_to, __cu_from,          \
-                                          __cu_len);                   \
+       __cu_len = __invoke_copy_in_user(__cu_to, __cu_from,            \
+                                        __cu_len);                     \
        __cu_len;                                                       \
 })
 
@@ -936,11 +1423,12 @@ extern size_t __copy_user_inatomic(void *__to, const void *__from, size_t __n);
        if (likely(access_ok(VERIFY_READ, __cu_from, __cu_len) &&       \
                   access_ok(VERIFY_WRITE, __cu_to, __cu_len))) {       \
                might_fault();                                          \
-               __cu_len = __invoke_copy_from_user(__cu_to, __cu_from,  \
-                                                  __cu_len);           \
+               __cu_len = __invoke_copy_in_user(__cu_to, __cu_from,    \
+                                                __cu_len);             \
        }                                                               \
        __cu_len;                                                       \
 })
+#endif
 
 /*
  * __clear_user: - Zero a block of memory in user space, with less checking.
@@ -963,7 +1451,11 @@ __clear_user(void __user *addr, __kernel_size_t size)
                "move\t$4, %1\n\t"
                "move\t$5, $0\n\t"
                "move\t$6, %2\n\t"
+#ifndef CONFIG_EVA
                __MODULE_JAL(__bzero)
+#else
+               __MODULE_JAL(__bzero_user)
+#endif
                "move\t%0, $6"
                : "=r" (res)
                : "r" (addr), "r" (size)
@@ -1012,7 +1504,11 @@ __strncpy_from_user(char *__to, const char __user *__from, long __len)
                "move\t$4, %1\n\t"
                "move\t$5, %2\n\t"
                "move\t$6, %3\n\t"
+#ifndef CONFIG_EVA
+               __MODULE_JAL(__strncpy_from_kernel_nocheck_asm)
+#else
                __MODULE_JAL(__strncpy_from_user_nocheck_asm)
+#endif
                "move\t%0, $2"
                : "=r" (res)
                : "r" (__to), "r" (__from), "r" (__len)
@@ -1039,11 +1535,43 @@ __strncpy_from_user(char *__to, const char __user *__from, long __len)
  * If @count is smaller than the length of the string, copies @count bytes
  * and returns @count.
  */
+#ifndef CONFIG_EVA
 static inline long
 strncpy_from_user(char *__to, const char __user *__from, long __len)
 {
        long res;
 
+       might_fault();
+       __asm__ __volatile__(
+               "move\t$4, %1\n\t"
+               "move\t$5, %2\n\t"
+               "move\t$6, %3\n\t"
+               __MODULE_JAL(__strncpy_from_kernel_asm)
+               "move\t%0, $2"
+               : "=r" (res)
+               : "r" (__to), "r" (__from), "r" (__len)
+               : "$2", "$3", "$4", "$5", "$6", __UA_t0, "$31", "memory");
+
+       return res;
+}
+#else
+static inline long
+strncpy_from_user(char *__to, const char __user *__from, long __len)
+{
+       long res;
+
+       if (segment_eq(get_fs(), KERNEL_DS)) {
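+               /* EVA: kernel segment, go through the kernel string op. */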
+               __asm__ __volatile__(
+                       "move\t$4, %1\n\t"
+                       "move\t$5, %2\n\t"
+                       "move\t$6, %3\n\t"
+                       __MODULE_JAL(__strncpy_from_kernel_asm)
+                       "move\t%0, $2"
+                       : "=r" (res)
+                       : "r" (__to), "r" (__from), "r" (__len)
+                       : "$2", "$3", "$4", "$5", "$6", __UA_t0, "$31", "memory");
+               return res;
+       }
        might_fault();
        __asm__ __volatile__(
                "move\t$4, %1\n\t"
@@ -1057,6 +1585,7 @@ strncpy_from_user(char *__to, const char __user *__from, long __len)
 
        return res;
 }
+#endif
 
 /* Returns: 0 if bad, string length+1 (memory size) of string if ok */
 static inline long __strlen_user(const char __user *s)
@@ -1066,7 +1595,11 @@ static inline long __strlen_user(const char __user *s)
        might_fault();
        __asm__ __volatile__(
                "move\t$4, %1\n\t"
+#ifndef CONFIG_EVA
+               __MODULE_JAL(__strlen_kernel_nocheck_asm)
+#else
                __MODULE_JAL(__strlen_user_nocheck_asm)
+#endif
                "move\t%0, $2"
                : "=r" (res)
                : "r" (s)
@@ -1096,7 +1629,11 @@ static inline long strlen_user(const char __user *s)
        might_fault();
        __asm__ __volatile__(
                "move\t$4, %1\n\t"
+#ifndef CONFIG_EVA
+               __MODULE_JAL(__strlen_kernel_asm)
+#else
                __MODULE_JAL(__strlen_user_asm)
+#endif
                "move\t%0, $2"
                : "=r" (res)
                : "r" (s)
@@ -1114,7 +1651,11 @@ static inline long __strnlen_user(const char __user *s, long n)
        __asm__ __volatile__(
                "move\t$4, %1\n\t"
                "move\t$5, %2\n\t"
+#ifndef CONFIG_EVA
+               __MODULE_JAL(__strnlen_kernel_nocheck_asm)
+#else
                __MODULE_JAL(__strnlen_user_nocheck_asm)
+#endif
                "move\t%0, $2"
                : "=r" (res)
                : "r" (s), "r" (n)
@@ -1137,10 +1678,39 @@ static inline long __strnlen_user(const char __user *s, long n)
  * If there is a limit on the length of a valid string, you may wish to
  * consider using strnlen_user() instead.
  */
+#ifndef CONFIG_EVA
+static inline long strnlen_user(const char __user *s, long n)
+{
+       long res;
+
+       might_fault();
+       __asm__ __volatile__(
+               "move\t$4, %1\n\t"
+               "move\t$5, %2\n\t"
+               __MODULE_JAL(__strnlen_kernel_asm)
+               "move\t%0, $2"
+               : "=r" (res)
+               : "r" (s), "r" (n)
+               : "$2", "$4", "$5", __UA_t0, "$31");
+
+       return res;
+}
+#else
 static inline long strnlen_user(const char __user *s, long n)
 {
        long res;
 
+       if (segment_eq(get_fs(), KERNEL_DS)) {
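+               /* EVA: kernel segment, use the kernel strnlen helper. */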
+               __asm__ __volatile__(
+                       "move\t$4, %1\n\t"
+                       "move\t$5, %2\n\t"
+                       __MODULE_JAL(__strnlen_kernel_asm)
+                       "move\t%0, $2"
+                       : "=r" (res)
+                       : "r" (s), "r" (n)
+                       : "$2", "$4", "$5", __UA_t0, "$31");
+               return res;
+       }
        might_fault();
        __asm__ __volatile__(
                "move\t$4, %1\n\t"
@@ -1153,6 +1723,7 @@ static inline long strnlen_user(const char __user *s, long n)
 
        return res;
 }
+#endif
 
 struct exception_table_entry
 {
diff --git a/arch/mips/include/asm/ucontext.h b/arch/mips/include/asm/ucontext.h
deleted file mode 100644 (file)
index 9bc07b9..0000000
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm-generic/ucontext.h>
index f4cff7e4fa8a2a531a80e2374eb4056c46dab7aa..f82c83749a089bf26d5a06846ebd73edcc4130a8 100644 (file)
@@ -6,6 +6,7 @@
 #ifndef _ASM_VGA_H
 #define _ASM_VGA_H
 
+#include <asm/addrspace.h>
 #include <asm/byteorder.h>
 
 /*
@@ -13,7 +14,7 @@
  *     access the videoram directly without any black magic.
  */
 
-#define VGA_MAP_MEM(x, s)      (0xb0000000L + (unsigned long)(x))
+#define VGA_MAP_MEM(x, s)      CKSEG1ADDR(0x10000000L + (unsigned long)(x))
 
 #define vga_readb(x)   (*(x))
 #define vga_writeb(x, y)       (*(y) = (x))
diff --git a/arch/mips/include/asm/xor.h b/arch/mips/include/asm/xor.h
deleted file mode 100644 (file)
index c82eb12..0000000
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm-generic/xor.h>
index 350ccccadcb99e3696540d77a208989b93ac01a4..be7196eacb8890a1875123a6473ee620d65f5514 100644 (file)
@@ -1,7 +1,9 @@
 # UAPI Header export list
 include include/uapi/asm-generic/Kbuild.asm
 
-header-y += auxvec.h
+generic-y += auxvec.h
+generic-y += ipcbuf.h
+
 header-y += bitsperlong.h
 header-y += break.h
 header-y += byteorder.h
@@ -11,7 +13,6 @@ header-y += fcntl.h
 header-y += inst.h
 header-y += ioctl.h
 header-y += ioctls.h
-header-y += ipcbuf.h
 header-y += kvm_para.h
 header-y += mman.h
 header-y += msgbuf.h
diff --git a/arch/mips/include/uapi/asm/auxvec.h b/arch/mips/include/uapi/asm/auxvec.h
deleted file mode 100644 (file)
index 7cf7f2d..0000000
+++ /dev/null
@@ -1,4 +0,0 @@
-#ifndef _ASM_AUXVEC_H
-#define _ASM_AUXVEC_H
-
-#endif /* _ASM_AUXVEC_H */
index 0f4aec2ad1e6e1c9f7d8b7a1b54cf49ebb58c392..67b878a42759a0c73d9c2e6aa84e56de8d87f294 100644 (file)
@@ -74,8 +74,16 @@ enum spec3_op {
        ext_op, dextm_op, dextu_op, dext_op,
        ins_op, dinsm_op, dinsu_op, dins_op,
        lx_op = 0x0a,
-       bshfl_op = 0x20,
+       lwle_op = 0x19,
+       lwre_op = 0x1a, cachee_op = 0x1b,
+       sbe_op = 0x1c, she_op = 0x1d,
+       sce_op = 0x1e, swe_op = 0x1f,
+       bshfl_op = 0x20, swle_op = 0x21,
+       swre_op = 0x22, prefe_op = 0x23,
        dbshfl_op = 0x24,
+       lbue_op = 0x28, lhue_op = 0x29,
+       lbe_op = 0x2c, lhe_op = 0x2d,
+       lle_op = 0x2e, lwe_op = 0x2f,
        rdhwr_op = 0x3b
 };
 
@@ -98,8 +106,9 @@ enum rt_op {
  */
 enum cop_op {
        mfc_op        = 0x00, dmfc_op       = 0x01,
-       cfc_op        = 0x02, mtc_op        = 0x04,
-       dmtc_op       = 0x05, ctc_op        = 0x06,
+       cfc_op        = 0x02, mfhc_op       = 0x03,
+       mtc_op        = 0x04, dmtc_op       = 0x05,
+       ctc_op        = 0x06, mthc_op       = 0x07,
        bc_op         = 0x08, cop_op        = 0x10,
        copm_op       = 0x18
 };
@@ -162,8 +171,8 @@ enum cop1_sdw_func {
  */
 enum cop1x_func {
        lwxc1_op     =  0x00, ldxc1_op     =  0x01,
-       pfetch_op    =  0x07, swxc1_op     =  0x08,
-       sdxc1_op     =  0x09, madd_s_op    =  0x20,
+       swxc1_op     =  0x08, sdxc1_op     =  0x09,
+       pfetch_op    =  0x0f, madd_s_op    =  0x20,
        madd_d_op    =  0x21, madd_e_op    =  0x22,
        msub_s_op    =  0x28, msub_d_op    =  0x29,
        msub_e_op    =  0x2a, nmadd_s_op   =  0x30,
@@ -538,6 +547,15 @@ struct p_format {          /* Performance counter format (R10000) */
        ;))))))
 };
 
+struct spec3_format {   /* SPEC3 */
+       BITFIELD_FIELD(unsigned int opcode : 6,
+       BITFIELD_FIELD(unsigned int rs : 5,
+       BITFIELD_FIELD(unsigned int rt : 5,
+       BITFIELD_FIELD(signed int simmediate : 9,
+       BITFIELD_FIELD(unsigned int ls_func : 7,
+       ;)))))
+};
+
 struct f_format {                      /* FPU register format */
        BITFIELD_FIELD(unsigned int opcode : 6,
        BITFIELD_FIELD(unsigned int : 1,
@@ -854,6 +872,7 @@ union mips_instruction {
        struct c_format c_format;
        struct r_format r_format;
        struct p_format p_format;
+       struct spec3_format spec3_format;
        struct f_format f_format;
        struct ma_format ma_format;
        struct b_format b_format;
diff --git a/arch/mips/include/uapi/asm/ipcbuf.h b/arch/mips/include/uapi/asm/ipcbuf.h
deleted file mode 100644 (file)
index 84c7e51..0000000
+++ /dev/null
@@ -1 +0,0 @@
-#include <asm-generic/ipcbuf.h>
index 423d871a946ba15ae5b5ea70338530949fa8d166..d93277cb7ba306634e956caa5f8fdfd81170c8c2 100644 (file)
@@ -51,7 +51,7 @@ obj-$(CONFIG_MIPS_MT)         += mips-mt.o
 obj-$(CONFIG_MIPS_MT_FPAFF)    += mips-mt-fpaff.o
 obj-$(CONFIG_MIPS_MT_SMTC)     += smtc.o smtc-asm.o smtc-proc.o
 obj-$(CONFIG_MIPS_MT_SMP)      += smp-mt.o
-obj-$(CONFIG_MIPS_CMP)         += smp-cmp.o
+obj-$(CONFIG_MIPS_CMP)         += smp-cmp.o cpc.o
 obj-$(CONFIG_CPU_MIPSR2)       += spram.o
 
 obj-$(CONFIG_MIPS_VPE_LOADER)  += vpe.o
index 46c2ad0703a0b1040140b4a2041c179fdaa45b34..f1b5bd182509332bd0d3db9c302dd95c926544d6 100644 (file)
@@ -317,7 +317,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
                if (regs->regs[insn.i_format.rs] ==
                    regs->regs[insn.i_format.rt]) {
                        epc = epc + 4 + (insn.i_format.simmediate << 2);
-                       if (insn.i_format.rt == beql_op)
+                       if (insn.i_format.opcode == beql_op)
                                ret = BRANCH_LIKELY_TAKEN;
                } else
                        epc += 8;
@@ -329,7 +329,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
                if (regs->regs[insn.i_format.rs] !=
                    regs->regs[insn.i_format.rt]) {
                        epc = epc + 4 + (insn.i_format.simmediate << 2);
-                       if (insn.i_format.rt == bnel_op)
+                       if (insn.i_format.opcode == bnel_op)
                                ret = BRANCH_LIKELY_TAKEN;
                } else
                        epc += 8;
@@ -341,7 +341,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
                /* rt field assumed to be zero */
                if ((long)regs->regs[insn.i_format.rs] <= 0) {
                        epc = epc + 4 + (insn.i_format.simmediate << 2);
-                       if (insn.i_format.rt == bnel_op)
+                       if (insn.i_format.opcode == blezl_op)
                                ret = BRANCH_LIKELY_TAKEN;
                } else
                        epc += 8;
@@ -353,7 +353,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
                /* rt field assumed to be zero */
                if ((long)regs->regs[insn.i_format.rs] > 0) {
                        epc = epc + 4 + (insn.i_format.simmediate << 2);
-                       if (insn.i_format.rt == bnel_op)
+                       if (insn.i_format.opcode == bgtzl_op)
                                ret = BRANCH_LIKELY_TAKEN;
                } else
                        epc += 8;
index 730eaf92c0189a8c12f7a91ad58f4a164fea1d77..9a3c3cd9fe1d8407c964824a6f4f2e80dcc71190 100644 (file)
@@ -65,7 +65,7 @@ int __cpuinit gic_clockevent_init(void)
        struct clock_event_device *cd;
        unsigned int irq;
 
-       if (!cpu_has_counter || !gic_frequency)
+       if (!gic_frequency)
                return -ENXIO;
 
        irq = MIPS_GIC_IRQ_BASE;
@@ -81,7 +81,7 @@ int __cpuinit gic_clockevent_init(void)
        cd->max_delta_ns        = clockevent_delta2ns(0x7fffffff, cd);
        cd->min_delta_ns        = clockevent_delta2ns(0x300, cd);
 
-       cd->rating              = 300;
+       cd->rating              = 350;
        cd->irq                 = irq;
        cd->cpumask             = cpumask_of(cpu);
        cd->set_next_event      = gic_next_event;
index 02033eaf8825420eea30105e92edcc795d8b5b70..fb1cb727099aa3abc115e26874b130057a271654 100644 (file)
@@ -16,6 +16,7 @@
 #include <asm/time.h>
 #include <asm/cevt-r4k.h>
 #include <asm/gic.h>
+#include <asm/irq_cpu.h>
 
 /*
  * The SMTC Kernel for the 34K, 1004K, et. al. replaces several
@@ -210,9 +211,6 @@ int __cpuinit r4k_clockevent_init(void)
        cd->set_mode            = mips_set_clock_mode;
        cd->event_handler       = mips_event_handler;
 
-#ifdef CONFIG_CEVT_GIC
-       if (!gic_present)
-#endif
        clockevents_register_device(cd);
 
        if (cp0_timer_irq_installed)
@@ -222,6 +220,12 @@ int __cpuinit r4k_clockevent_init(void)
 
        setup_irq(irq, &c0_compare_irqaction);
 
+#ifdef CONFIG_IRQ_CPU
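+       /*
+        * Record the compare/perfcount IM bits in mips_smp_c0_status_mask
+        * so the SMP slave c0_status setup can unmask them.
+        */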
+       mips_smp_c0_status_mask |= (0x100 << cp0_compare_irq);
+       if (cp0_perfcount_irq >= 0)
+               mips_smp_c0_status_mask |= (0x100 << cp0_perfcount_irq);
+#endif
+
        return 0;
 }
 
diff --git a/arch/mips/kernel/cpc.c b/arch/mips/kernel/cpc.c
new file mode 100644 (file)
index 0000000..4993d66
--- /dev/null
@@ -0,0 +1,204 @@
+/*
+ *  This program is free software; you can distribute it and/or modify it
+ *  under the terms of the GNU General Public License (Version 2) as
+ *  published by the Free Software Foundation.
+ *
+ *  This program is distributed in the hope it will be useful, but WITHOUT
+ *  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ *  FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ *  for more details.
+ *
+ *  You should have received a copy of the GNU General Public License along
+ *  with this program; if not, write to the Free Software Foundation, Inc.,
+ *  59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Copyright (C) 2013 Imagination Technologies Ltd
+ *    Leonid Yegoshin (Leonid.Yegoshin@imgtec.com)
+ */
+
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/smp.h>
+#include <linux/cpumask.h>
+#include <linux/interrupt.h>
+#include <linux/compiler.h>
+
+#include <linux/cpu.h>
+
+#include <linux/atomic.h>
+#include <asm/cacheflush.h>
+#include <asm/cpu.h>
+#include <asm/processor.h>
+#include <asm/hardirq.h>
+#include <asm/mmu_context.h>
+#include <asm/smp.h>
+#include <asm/time.h>
+#include <asm/mipsregs.h>
+#include <asm/mipsmtregs.h>
+#include <asm/mips_mt.h>
+#include <asm/amon.h>
+#include <asm/gic.h>
+#include <asm/gcmpregs.h>
+#include <asm/cpcregs.h>
+#include <asm/bootinfo.h>
+#include <asm/irq_cpu.h>
+
+unsigned long _cpc_base;
+int cpc_present = -1;
+
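+/*
+ * Probe for the Cluster Power Controller (CPC). Returns 1 and remaps the
+ * CPC register block if the GCMP advertises an enabled CPC base address
+ * (or accepts the platform default passed in); returns 0 otherwise. The
+ * result is cached in cpc_present.
+ */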
+int __init cpc_probe(unsigned long defaddr, unsigned long defsize)
+{
+       if (cpc_present >= 0)
+               return cpc_present;
+
+       if (gcmp_present <= 0) {
+               cpc_present = 0;
+               return 0;
+       }
+
+       if ((GCMPGCB(CPCST) & GCMP_GCB_CPCST_EN_MASK) == 0) {
+               cpc_present = 0;
+               return 0;
+       }
+
+       _cpc_base = GCMPGCB(CPCBA);
+       if (_cpc_base & GCMP_GCB_CPCBA_EN_MASK)
+               goto success;
+
+       if (!defaddr) {
+               cpc_present = 0;
+               return 0;
+       }
+
+       /* Fall back to the platform-provided default address */
+       GCMPGCB(CPCBA) = defaddr | GCMP_GCB_CPCBA_EN_MASK;
+       _cpc_base = GCMPGCB(CPCBA);
+       if ((_cpc_base & GCMP_GCB_CPCBA_EN_MASK) == 0) {
+               cpc_present = 0;
+               return 0;
+       }
+success:
+       pr_info("CPC available\n");
+       _cpc_base = (unsigned long) ioremap_nocache(_cpc_base & ~GCMP_GCB_CPCBA_EN_MASK, defsize);
+       cpc_present = 1;
+       return 1;
+}
+
+#ifdef CONFIG_SYSFS
+
+static ssize_t show_cpc_global(struct device *dev,
+                              struct device_attribute *attr, char *buf)
+{
+       int n = 0;
+
+       n = snprintf(buf, PAGE_SIZE,
+               "CPC Global CSR Access Privilege Register\t%08x\n"
+               "CPC Global Sequence Delay Counter\t\t%08x\n"
+               "CPC Global Rail Delay Counter Register\t\t%08x\n"
+               "CPC Global Reset Width Counter Register\t\t%08x\n"
+               "CPC Global Revision Register\t\t\t%08x\n"
+               ,
+               CPCGCB(CSRAPR),
+               CPCGCB(SEQDELAY),
+               CPCGCB(RAILDELAY),
+               CPCGCB(RESETWIDTH),
+               CPCGCB(REVID)
+       );
+
+       return n;
+}
+
+static char *cpc_cmd[] = { "0", "ClockOff", "PwrDown", "PwrUp", "Reset",
+           "5", "6", "7", "8", "9", "10", "11", "12", "13", "14", "15" };
+
+static char *cpc_status[] = { "PwrDwn", "VddOK", "UpDelay", "UClkOff", "Reset",
+           "ResetDly", "nonCoherent", "Coherent", "Isolate", "ClrBus",
+           "DClkOff", "11", "12", "13", "14", "15" };
+
+static ssize_t show_cpc_local(struct device *dev,
+                             struct device_attribute *attr, char *buf)
+{
+       int n = 0;
+
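+       /* Select which core the CPC core-other register window targets. */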
+       CPCLCB(OTHER) = (dev->id) << 16;
+
+       n += snprintf(buf+n, PAGE_SIZE-n,
+               "CPC Local Command Register\t\t\t%08x:  CMD=%s\n"
+               "CPC Local Status and Configuration register\t%08x:   Status=%s, LastCMD=%s\n"
+               "CPC Core Other Addressing Register\t\t%08x\n"
+               ,
+               CPCOCB(CMD), cpc_cmd[(CPCOCB(CMD) & CPCL_CMD_MASK) >> CPCL_CMD_SH],
+               CPCOCB(STATUS), cpc_status[(CPCOCB(STATUS) & CPCL_STATUS_MASK) >> CPCL_STATUS_SH],
+                       cpc_cmd[(CPCOCB(STATUS) & CPCL_CMD_MASK) >> CPCL_CMD_SH],
+               CPCOCB(OTHER)
+       );
+
+       return n;
+}
+
+static DEVICE_ATTR(cpc_global, 0444, show_cpc_global, NULL);
+static DEVICE_ATTR(cpc_local, 0444, show_cpc_local, NULL);
+
+static struct bus_type cpc_subsys = {
+       .name = "cpc",
+       .dev_name = "cpc",
+};
+
+static __cpuinit int cpc_add_core(int cpu)
+{
+       struct device *dev;
+       int err;
+       char name[16];
+
+       dev = kzalloc(sizeof *dev, GFP_KERNEL);
+       if (!dev)
+               return -ENOMEM;
+
+       dev->id = cpu;
+       dev->bus = &cpc_subsys;
+       snprintf(name, sizeof name, "core%d",cpu);
+       dev->init_name = name;
+
+       err = device_register(dev);
+       if (err)
+               return err;
+
+       err = device_create_file(dev, &dev_attr_cpc_local);
+       if (err)
+               return err;
+
+       return 0;
+}
+
+static int __init init_cpc_sysfs(void)
+{
+       int rc;
+       int cpuN;
+       int cpu;
+
+       if (cpc_present <= 0)
+               return 0;
+
+       rc = subsys_system_register(&cpc_subsys, NULL);
+       if (rc)
+               return rc;
+
+       rc = device_create_file(cpc_subsys.dev_root, &dev_attr_cpc_global);
+       if (rc)
+               return rc;
+
+       cpuN = ((GCMPGCB(GC) & GCMP_GCB_GC_NUMCORES_MSK) >> GCMP_GCB_GC_NUMCORES_SHF) + 1;
+       for (cpu = 0; cpu < cpuN; cpu++) {
+               rc = cpc_add_core(cpu);
+               if (rc)
+                       return rc;
+       }
+
+       return 0;
+}
+
+device_initcall_sync(init_cpc_sysfs);
+
+#endif /* CONFIG_SYSFS */
index c6568bf4b1b05559b43bc18f65e33c84a85963ba..4e678db70bb69fcad5a4eced9c4d0d31b4e08b61 100644 (file)
@@ -75,6 +75,8 @@ void __init check_bugs32(void)
        check_errata();
 }
 
+#include <asm/pgtable.h>
+#include <asm/bootinfo.h>
 /*
  * Probe whether cpu has config register by trying to play with
  * alternate cache bit and see whether it matters.
@@ -117,6 +119,22 @@ static inline unsigned long cpu_get_fpu_id(void)
        return fpu_id;
 }
 
+/*
+ * Write a candidate FCSR (FPU CSR31) value and read it back to see
+ * which bits the hardware actually implements.
+ */
+static inline unsigned long cpu_test_fpu_csr31(unsigned long fcr31)
+{
+       unsigned long tmp;
+
+       tmp = read_c0_status();
+       __enable_fpu();
+       write_32bit_cp1_register(CP1_STATUS, fcr31);
+       enable_fpu_hazard();
+       fcr31 = read_32bit_cp1_register(CP1_STATUS);
+       write_c0_status(tmp);
+       return fcr31;
+}
+
 /*
  * Check the CPU has an FPU the official way.
  */
@@ -174,6 +192,8 @@ static inline unsigned int decode_config0(struct cpuinfo_mips *c)
 
        if (((config0 & MIPS_CONF_MT) >> 7) == 1)
                c->options |= MIPS_CPU_TLB;
+       if (((config0 & MIPS_CONF_MT) >> 7) == 4)
+               c->options |= MIPS_CPU_TLB;
        isa = (config0 & MIPS_CONF_AT) >> 13;
        switch (isa) {
        case 0:
@@ -196,8 +216,6 @@ static inline unsigned int decode_config0(struct cpuinfo_mips *c)
                case 1:
                        set_isa(c, MIPS_CPU_ISA_M64R2);
                        break;
-               default:
-                       goto unknown;
                }
                break;
        default:
@@ -228,8 +246,11 @@ static inline unsigned int decode_config1(struct cpuinfo_mips *c)
                c->options |= MIPS_CPU_FPU;
                c->options |= MIPS_CPU_32FPR;
        }
-       if (cpu_has_tlb)
+       if (cpu_has_tlb) {
                c->tlbsize = ((config1 & MIPS_CONF1_TLBS) >> 25) + 1;
+               c->tlbsizevtlb = c->tlbsize;
+               c->tlbsizeftlbsets = 0;
+       }
 
        return config1 & MIPS_CONF_M;
 }
@@ -254,10 +275,14 @@ static inline unsigned int decode_config3(struct cpuinfo_mips *c)
 
        if (config3 & MIPS_CONF3_SM) {
                c->ases |= MIPS_ASE_SMARTMIPS;
+#if defined(CONFIG_64BIT) || !defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
                c->options |= MIPS_CPU_RIXI;
+#endif
        }
+#if defined(CONFIG_64BIT) || !defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
        if (config3 & MIPS_CONF3_RXI)
                c->options |= MIPS_CPU_RIXI;
+#endif
        if (config3 & MIPS_CONF3_DSP)
                c->ases |= MIPS_ASE_DSP;
        if (config3 & MIPS_CONF3_DSP2P)
@@ -277,28 +302,125 @@ static inline unsigned int decode_config3(struct cpuinfo_mips *c)
 #endif
        if (config3 & MIPS_CONF3_VZ)
                c->ases |= MIPS_ASE_VZ;
+       if (config3 & MIPS_CONF3_SC)
+               c->options |= MIPS_CPU_SEGMENTS;
 
        return config3 & MIPS_CONF_M;
 }
 
-static inline unsigned int decode_config4(struct cpuinfo_mips *c)
+static unsigned int cpu_capability = 0;
+
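+/*
+ * decode_config4() runs twice: pass 0 only reports the Config4 M bit (and
+ * the kscratch mask) so Config5/Config6 can be decoded first; pass 1,
+ * after any FTLB enable through Config6, sizes the VTLB/FTLB and fixes up
+ * c->tlbsize.
+ */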
+static inline unsigned int decode_config4(struct cpuinfo_mips *c, int pass,
+                                         int conf6available)
 {
        unsigned int config4;
+       unsigned int newcf4;
+       unsigned int config6;
 
        config4 = read_c0_config4();
 
-       if ((config4 & MIPS_CONF4_MMUEXTDEF) == MIPS_CONF4_MMUEXTDEF_MMUSIZEEXT
-           && cpu_has_tlb)
-               c->tlbsize += (config4 & MIPS_CONF4_MMUSIZEEXT) * 0x40;
+       if (pass && cpu_has_tlb) {
+               if (config4 & MIPS_CONF4_IE) {
+                       if (config4 & MIPS_CONF4_TLBINV) {
+                               c->options |= MIPS_CPU_TLBINV;
+                               printk("TLBINV/F supported, config4=0x%0x\n",config4);
+                       }
+                       /* TBW: page walker support starts here */
+               }
+               switch (config4 & MIPS_CONF4_MMUEXTDEF) {
+               case MIPS_CONF4_MMUEXTDEF_MMUSIZEEXT:
+                       c->tlbsize =
+                           ((((config4 & MIPS_CONF4_MMUSIZEEXT) >>
+                              MIPS_CONF4_MMUSIZEEXT_SHIFT) <<
+                             MIPS_CONF1_TLBS_SIZE) |
+                               (c->tlbsize - 1)) + 1;
+                       c->tlbsizevtlb = c->tlbsize;
+                       printk("MMUSizeExt found, total TLB=%d\n",c->tlbsize);
+                       break;
+               case MIPS_CONF4_MMUEXTDEF_VTLBSIZEEXT:
+                       c->tlbsizevtlb = ((c->tlbsizevtlb - 1) |
+                               (((config4 & MIPS_CONF4_VTLBSIZEEXT) >>
+                                 MIPS_CONF4_VTLBSIZEEXT_SHIFT) <<
+                                MIPS_CONF1_TLBS_SIZE)) + 1;
+                       c->tlbsize = c->tlbsizevtlb;
+                       /* fall through */
+               case MIPS_CONF4_MMUEXTDEF_FTLBSIZEEXT:
+                       newcf4 = (config4 & ~MIPS_CONF4_FTLBPAGESIZE) |
+                               ((((fls(PAGE_SIZE >> BASIC_PAGE_SHIFT)-1)/2)+1) <<
+                                MIPS_CONF4_FTLBPAGESIZE_SHIFT);
+                       write_c0_config4(newcf4);
+                       back_to_back_c0_hazard();
+                       config4 = read_c0_config4();
+                       if (config4 != newcf4) {
+                               printk(KERN_ERR "PAGE_SIZE 0x%0lx is not supported by FTLB (config4=0x%0x)\n",
+                                       PAGE_SIZE, config4);
+                               if (conf6available && (cpu_capability & MIPS_FTLB_CAPABLE)) {
+                                       printk("Switching FTLB OFF\n");
+                                       config6 = read_c0_config6();
+                                       write_c0_config6(config6 & ~MIPS_CONF6_FTLBEN);
+                               }
+                               printk("Total TLB(VTLB) inuse: %d\n",c->tlbsizevtlb);
+                               break;
+                       }
+                       c->tlbsizeftlbsets = 1 <<
+                               ((config4 & MIPS_CONF4_FTLBSETS) >>
+                                MIPS_CONF4_FTLBSETS_SHIFT);
+                       c->tlbsizeftlbways = ((config4 & MIPS_CONF4_FTLBWAYS) >>
+                                             MIPS_CONF4_FTLBWAYS_SHIFT) + 2;
+                       c->tlbsize += (c->tlbsizeftlbways *
+                                      c->tlbsizeftlbsets);
+                       printk("V/FTLB found: VTLB=%d, FTLB sets=%d, ways=%d total TLB=%d\n",
+                               c->tlbsizevtlb, c->tlbsizeftlbsets, c->tlbsizeftlbways, c->tlbsize);
+                       break;
+               }
+       }
 
        c->kscratch_mask = (config4 >> 16) & 0xff;
 
        return config4 & MIPS_CONF_M;
 }
 
-static void __cpuinit decode_configs(struct cpuinfo_mips *c)
+static inline unsigned int decode_config5(struct cpuinfo_mips *c)
+{
+       unsigned int config5;
+
+       config5 = read_c0_config5();
+
+       if (config5 & MIPS_CONF5_EVA)
+               c->options |= MIPS_CPU_EVA;
+
+       return config5 & MIPS_CONF_M;
+}
+
+static inline unsigned int decode_config6_ftlb(struct cpuinfo_mips *c)
 {
-       int ok;
+       unsigned int config6;
+
+       if (cpu_capability & MIPS_FTLB_CAPABLE) {
+
+               /*
+                * Can't rely on mips_ftlb_disabled since kernel command line
+                * hasn't been processed yet.  Need to peek at the raw command
+                * line for "noftlb".
+                */
+               if (strstr(arcs_cmdline, "noftlb") == NULL) {
+                       config6 = read_c0_config6();
+
+                       printk("Enable FTLB attempt\n");
+                       write_c0_config6(config6 | MIPS_CONF6_FTLBEN);
+                       back_to_back_c0_hazard();
+
+                       return 1;
+               }
+       }
+
+       return 0;
+}
+
+
+static void decode_configs(struct cpuinfo_mips *c)
+{
+       int ok, ok3 = 0, ok6 = 0;
 
        /* MIPS32 or MIPS64 compliant CPU.  */
        c->options = MIPS_CPU_4KEX | MIPS_CPU_4K_CACHE | MIPS_CPU_COUNTER |
@@ -307,15 +429,22 @@ static void __cpuinit decode_configs(struct cpuinfo_mips *c)
        c->scache.flags = MIPS_CACHE_NOT_PRESENT;
 
        ok = decode_config0(c);                 /* Read Config registers.  */
-       BUG_ON(!ok);                            /* Arch spec violation!  */
+       BUG_ON(!ok);                            /* Arch spec violation!  */
        if (ok)
                ok = decode_config1(c);
        if (ok)
                ok = decode_config2(c);
        if (ok)
-               ok = decode_config3(c);
+               ok = ok3 = decode_config3(c);
        if (ok)
-               ok = decode_config4(c);
+               ok = decode_config4(c, 0, 0);   /* first pass - just return Mbit */
+       if (ok)
+               ok = decode_config5(c);
+       if (cpu_capability & MIPS_FTLB_CAPABLE)
+               ok6 = decode_config6_ftlb(c);
+
+       if (ok3)
+               ok = decode_config4(c, 1, ok6); /* real parse pass, thanks HW team :-/ */
 
        mips_probe_watch_registers(c);
 
@@ -648,8 +777,11 @@ static inline void cpu_probe_legacy(struct cpuinfo_mips *c, unsigned int cpu)
 
 static inline void cpu_probe_mips(struct cpuinfo_mips *c, unsigned int cpu)
 {
-       decode_configs(c);
        switch (c->processor_id & 0xff00) {
+       case PRID_IMP_QEMU:
+               c->cputype = CPU_QEMU;
+               __cpu_name[cpu] = "MIPS GENERIC QEMU";
+               break;
        case PRID_IMP_4KC:
                c->cputype = CPU_4KC;
                __cpu_name[cpu] = "MIPS 4Kc";
@@ -712,7 +844,34 @@ static inline void cpu_probe_mips(struct cpuinfo_mips *c, unsigned int cpu)
                c->cputype = CPU_74K;
                __cpu_name[cpu] = "MIPS 1074Kc";
                break;
+       case PRID_IMP_PROAPTIV_UP:
+               c->cputype = CPU_PROAPTIV;
+               __cpu_name[cpu] = "MIPS proAptiv";
+               cpu_capability = MIPS_FTLB_CAPABLE;
+               break;
+       case PRID_IMP_PROAPTIV_MP:
+               c->cputype = CPU_PROAPTIV;
+               __cpu_name[cpu] = "MIPS proAptiv (multi)";
+               cpu_capability = MIPS_FTLB_CAPABLE;
+               break;
+       case PRID_IMP_INTERAPTIV_UP:
+               c->cputype = CPU_INTERAPTIV;
+               __cpu_name[cpu] = "MIPS interAptiv UP";
+               break;
+       case PRID_IMP_INTERAPTIV_MP:
+               c->cputype = CPU_INTERAPTIV;
+               __cpu_name[cpu] = "MIPS interAptiv";
+       case PRID_IMP_VIRTUOSO:
+               c->cputype = CPU_VIRTUOSO;
+               __cpu_name[cpu] = "MIPS Virtuoso";
+               break;
+       case PRID_IMP_P5600:
+               c->cputype = CPU_P5600;
+               __cpu_name[cpu] = "MIPS P5600";
+               cpu_capability = MIPS_FTLB_CAPABLE;
+               break;
        }
+       decode_configs(c);
 
        spram_config();
 }
@@ -969,6 +1128,8 @@ EXPORT_SYMBOL(__ua_limit);
 
 const char *__cpu_name[NR_CPUS];
 const char *__elf_platform;
+unsigned int fpu_fcr31 __read_mostly = 0;
+unsigned int system_has_fpu __read_mostly = 0;
 
 __cpuinit void cpu_probe(void)
 {
@@ -1030,12 +1191,17 @@ __cpuinit void cpu_probe(void)
                c->ases &= ~(MIPS_ASE_DSP | MIPS_ASE_DSP2P);
 
        if (c->options & MIPS_CPU_FPU) {
+               system_has_fpu = 1;
+               fpu_fcr31 = cpu_test_fpu_csr31(FPU_CSR_DEFAULT);
+
                c->fpu_id = cpu_get_fpu_id();
 
                if (c->isa_level & (MIPS_CPU_ISA_M32R1 | MIPS_CPU_ISA_M32R2 |
                                    MIPS_CPU_ISA_M64R1 | MIPS_CPU_ISA_M64R2)) {
                        if (c->fpu_id & MIPS_FPIR_3D)
                                c->ases |= MIPS_ASE_MIPS3D;
+                       if (c->fpu_id & MIPS_FPIR_HAS2008)
+                               fpu_fcr31 = cpu_test_fpu_csr31(FPU_CSR_DEFAULT | FPU_CSR_MAC2008 | FPU_CSR_ABS2008 | FPU_CSR_NAN2008);
                }
        }
 
@@ -1059,8 +1225,8 @@ __cpuinit void cpu_report(void)
 {
        struct cpuinfo_mips *c = &current_cpu_data;
 
-       printk(KERN_INFO "CPU revision is: %08x (%s)\n",
-              c->processor_id, cpu_name_string());
+       printk(KERN_INFO "CPU%d revision is: %08x (%s)\n",
+              smp_processor_id(), c->processor_id, cpu_name_string());
        if (c->options & MIPS_CPU_FPU)
                printk(KERN_INFO "FPU revision is: %08x\n", c->fpu_id);
 }
index e5786858cdb6808f24993930c899d89f3ab3bfa5..8bf27492a2385d7262df84b7f5d36a43e1d35bfe 100644 (file)
@@ -203,10 +203,16 @@ syscall_exit_work:
  *
  * For C code use the inline version named instruction_hazard().
  */
+#ifdef CONFIG_EVA
+       .align  8
+#endif
 LEAF(mips_ihb)
        .set    mips32r2
        jr.hb   ra
        nop
+#ifdef CONFIG_EVA
+       .align  8
+#endif
        END(mips_ihb)
 
 #endif /* CONFIG_CPU_MIPSR2 or CONFIG_MIPS_MT */
index dba90ec0dc385ffcad5cc09eda51031d4e9a0fcc..d635ba236cb829a9904177b6eeaad6b74b9b0c5e 100644 (file)
@@ -87,6 +87,7 @@ static inline void ftrace_dyn_arch_init_insns(void)
 static int ftrace_modify_code(unsigned long ip, unsigned int new_code)
 {
        int faulted;
+       mm_segment_t old_fs;
 
        /* *(unsigned int *)ip = new_code; */
        safe_store_code(new_code, ip, faulted);
@@ -94,7 +95,10 @@ static int ftrace_modify_code(unsigned long ip, unsigned int new_code)
        if (unlikely(faulted))
                return -EFAULT;
 
+       old_fs = get_fs();
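+       /*
+        * Under EVA the cache flush helpers choose user or kernel cache
+        * ops based on get_fs(); switch to KERNEL_DS so the just-patched
+        * kernel text is flushed with kernel cacheops.
+        */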
+       set_fs(KERNEL_DS);
        flush_icache_range(ip, ip + 8);
+       set_fs(old_fs);
 
        return 0;
 }
index 31fa856829cbf2620521317e5247d42b9e3fb087..ece887634c66dfe8b6f1961cfcbbdcd50c2823cb 100644 (file)
@@ -374,10 +374,26 @@ NESTED(except_vec_nmi, 0, sp)
 NESTED(nmi_handler, PT_SIZE, sp)
        .set    push
        .set    noat
+       /* Set EXL and clear BEV/ERL to restore the normal segment mapping */
+       mfc0    k0, CP0_STATUS
+       ori     k0, k0, ST0_EXL
+       lui     k1, 0xffff & ~(ST0_BEV>>16)
+       ori     k1, k1, 0xffff & ~(ST0_ERL)
+       and     k0, k0, k1
+       mtc0    k0, CP0_STATUS
+       ehb
        SAVE_ALL
        move    a0, sp
        jal     nmi_exception_handler
        RESTORE_ALL
+       /* Set ERL and clear EXL|NMI */
+       mfc0    k0, CP0_STATUS
+       ori     k0, k0, ST0_ERL
+       lui     k1, 0xffff & ~(ST0_NMI>>16)
+       ori     k1, k1, 0xffff & ~(ST0_EXL)
+       and     k0, k0, k1
+       mtc0    k0, CP0_STATUS
+       ehb
        .set    mips3
        eret
        .set    pop
@@ -468,6 +484,7 @@ NESTED(nmi_handler, PT_SIZE, sp)
        BUILD_HANDLER ov ov sti silent                  /* #12 */
        BUILD_HANDLER tr tr sti silent                  /* #13 */
        BUILD_HANDLER fpe fpe fpe silent                /* #15 */
+       BUILD_HANDLER ftlb ftlb none silent             /* #16 */
        BUILD_HANDLER mdmx mdmx sti silent              /* #22 */
 #ifdef CONFIG_HARDWARE_WATCHPOINTS
        /*
index c61cdaed2b1d998f2614be9f9a7451837f59ebaf..4a4c4f174143b96dacb71a26ee3a9773c14ed329 100644 (file)
@@ -142,9 +142,13 @@ FEXPORT(__kernel_entry)
 
        __REF
 
+#ifdef CONFIG_EVA
+       .align  8
+#endif
+
 NESTED(kernel_entry, 16, sp)                   # kernel entry point
 
-       kernel_entry_setup                      # cpu specific setup
+       kernel_entry_setup                      # cpu specific setup
 
        setup_c0_status_pri
 
@@ -152,6 +156,9 @@ NESTED(kernel_entry, 16, sp)                        # kernel entry point
           so we jump there.  */
        PTR_LA  t0, 0f
        jr      t0
+#ifdef CONFIG_EVA
+       .align  8
+#endif
 0:
 
 #ifdef CONFIG_MIPS_MT_SMTC
@@ -204,7 +211,13 @@ NESTED(kernel_entry, 16, sp)                       # kernel entry point
  * SMP slave cpus entry point. Board specific code for bootstrap calls this
  * function after setting up the stack and gp registers.
  */
+
+#ifdef CONFIG_EVA
+       .align  8
+#endif
+
 NESTED(smp_bootstrap, 16, sp)
+
 #ifdef CONFIG_MIPS_MT_SMTC
        /*
         * Read-modify-writes of Status must be atomic, and this
@@ -216,8 +229,10 @@ NESTED(smp_bootstrap, 16, sp)
        DMT     10      # dmt t2 /* t0, t1 are used by CLI and setup_c0_status() */
        jal     mips_ihb
 #endif /* CONFIG_MIPS_MT_SMTC */
-       setup_c0_status_sec
+
        smp_slave_setup
+       setup_c0_status_sec
+
 #ifdef CONFIG_MIPS_MT_SMTC
        andi    t2, t2, VPECONTROL_TE
        beqz    t2, 2f
@@ -225,6 +240,9 @@ NESTED(smp_bootstrap, 16, sp)
 2:
 #endif /* CONFIG_MIPS_MT_SMTC */
        j       start_secondary
+#ifdef CONFIG_EVA
+       .align  8
+#endif
        END(smp_bootstrap)
 #endif /* CONFIG_SMP */
 
index 0c655deeea4adff8fd575cd5d09b372cd3f820e5..9190b33a0f94ded49b8a47479440f0be717578d3 100644 (file)
@@ -182,6 +182,10 @@ void __init check_wait(void)
        case CPU_24K:
        case CPU_34K:
        case CPU_1004K:
+       case CPU_PROAPTIV:
+       case CPU_INTERAPTIV:
+       case CPU_VIRTUOSO:
+       case CPU_P5600:
                cpu_wait = r4k_wait;
                if (read_c0_config7() & MIPS_CONF7_WII)
                        cpu_wait = r4k_wait_irqoff;
index c01b307317a9635b88394d18164710262e206234..451082c7e6979ba5e2534f5a5afe5d899b03104a 100644 (file)
@@ -12,6 +12,9 @@
 #include <linux/irq.h>
 #include <linux/clocksource.h>
 
+#include <linux/cpu.h>
+#include <linux/slab.h>
+
 #include <asm/io.h>
 #include <asm/gic.h>
 #include <asm/setup.h>
@@ -19,6 +22,7 @@
 #include <asm/gcmpregs.h>
 #include <linux/hardirq.h>
 #include <asm-generic/bitops/find.h>
+#include <asm/irq_cpu.h>
 
 unsigned int gic_frequency;
 unsigned int gic_present;
@@ -133,18 +137,26 @@ static void __init vpe_local_setup(unsigned int numvpes)
 
                /* Are Interrupts locally routable? */
                GICREAD(GIC_REG(VPE_OTHER, GIC_VPE_CTL), vpe_ctl);
-               if (vpe_ctl & GIC_VPE_CTL_TIMER_RTBL_MSK)
+               if (vpe_ctl & GIC_VPE_CTL_TIMER_RTBL_MSK) {
+                       if (cp0_compare_irq >= 2)
+                               timer_intr = cp0_compare_irq - 2;
                        GICWRITE(GIC_REG(VPE_OTHER, GIC_VPE_TIMER_MAP),
                                 GIC_MAP_TO_PIN_MSK | timer_intr);
+                       mips_smp_c0_status_mask |= (0x400 << timer_intr);
+               }
                if (cpu_has_veic) {
                        set_vi_handler(timer_intr + GIC_PIN_TO_VEC_OFFSET,
                                gic_eic_irq_dispatch);
                        gic_shared_intr_map[timer_intr + GIC_PIN_TO_VEC_OFFSET].local_intr_mask |= GIC_VPE_RMASK_TIMER_MSK;
                }
 
-               if (vpe_ctl & GIC_VPE_CTL_PERFCNT_RTBL_MSK)
+               if (vpe_ctl & GIC_VPE_CTL_PERFCNT_RTBL_MSK) {
+                       if (cp0_perfcount_irq >= 2)
+                               perf_intr = cp0_perfcount_irq - 2;
                        GICWRITE(GIC_REG(VPE_OTHER, GIC_VPE_PERFCTR_MAP),
                                 GIC_MAP_TO_PIN_MSK | perf_intr);
+                       mips_smp_c0_status_mask |= (0x400 << perf_intr);
+               }
                if (cpu_has_veic) {
                        set_vi_handler(perf_intr + GIC_PIN_TO_VEC_OFFSET, gic_eic_irq_dispatch);
                        gic_shared_intr_map[perf_intr + GIC_PIN_TO_VEC_OFFSET].local_intr_mask |= GIC_VPE_RMASK_PERFCNT_MSK;
@@ -219,16 +231,15 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *cpumask,
 
        /* Assumption : cpumask refers to a single CPU */
        spin_lock_irqsave(&gic_lock, flags);
-       for (;;) {
-               /* Re-route this IRQ */
-               GIC_SH_MAP_TO_VPE_SMASK(irq, first_cpu(tmp));
 
-               /* Update the pcpu_masks */
-               for (i = 0; i < NR_CPUS; i++)
-                       clear_bit(irq, pcpu_masks[i].pcpu_mask);
-               set_bit(irq, pcpu_masks[first_cpu(tmp)].pcpu_mask);
+       /* Re-route this IRQ */
+       GIC_SH_MAP_TO_VPE_SMASK(irq, first_cpu(tmp));
+
+       /* Update the pcpu_masks */
+       for (i = 0; i < NR_CPUS; i++)
+               clear_bit(irq, pcpu_masks[i].pcpu_mask);
+       set_bit(irq, pcpu_masks[first_cpu(tmp)].pcpu_mask);
 
-       }
        cpumask_copy(d->affinity, cpumask);
        spin_unlock_irqrestore(&gic_lock, flags);
 
@@ -365,3 +376,241 @@ void __init gic_init(unsigned long gic_base_addr,
 
        gic_platform_init(numintrs, &gic_irq_controller);
 }
+
+
+#ifdef CONFIG_SYSFS
+static ssize_t show_gic_global(struct device *dev,
+                              struct device_attribute *attr, char *buf)
+{
+       int n = 0;
+       int i, j;
+       int numints;
+
+       n = snprintf(buf, PAGE_SIZE,
+               "GIC Config Register\t\t%08x\n"
+               "GIC CounterLo\t\t\t%08x\n"
+               "GIC CounterHi\t\t\t%08x\n"
+               "GIC Revision\t\t\t%08x\n"
+               "Global Interrupt Polarity Registers:\t\t%08x %08x %08x %08x\n"
+               "\t\t\t\t\t\t%08x %08x %08x %08x\n"
+               "Global Interrupt Trigger Type Registers:\t%08x %08x %08x %08x\n"
+               "\t\t\t\t\t\t%08x %08x %08x %08x\n"
+               "Global Interrupt Dual Edge Registers:\t\t%08x %08x %08x %08x\n"
+               "\t\t\t\t\t\t%08x %08x %08x %08x\n"
+               "Global Interrupt Write Edge Register:\t\t%08x\n"
+               "Global Interrupt Reset Mask Registers:\t\t%08x %08x %08x %08x\n"
+               "\t\t\t\t\t\t%08x %08x %08x %08x\n"
+               "Global Interrupt Set Mask Registers:\t\t%08x %08x %08x %08x\n"
+               "\t\t\t\t\t\t%08x %08x %08x %08x\n"
+               "Global Interrupt Mask Registers:\t\t%08x %08x %08x %08x\n"
+               "\t\t\t\t\t\t%08x %08x %08x %08x\n"
+               "Global Interrupt Pending Registers:\t\t%08x %08x %08x %08x\n"
+               "\t\t\t\t\t\t%08x %08x %08x %08x\n"
+               ,
+               GIC_REG(SHARED,GIC_SH_CONFIG),
+               GIC_REG(SHARED,GIC_SH_COUNTER_31_00),
+               GIC_REG(SHARED,GIC_SH_COUNTER_63_32),
+               GIC_REG(SHARED,GIC_SH_REVISIONID),
+               GIC_REG(SHARED,GIC_SH_POL_31_0),        GIC_REG(SHARED,GIC_SH_POL_63_32),
+               GIC_REG(SHARED,GIC_SH_POL_95_64),       GIC_REG(SHARED,GIC_SH_POL_127_96),
+               GIC_REG(SHARED,GIC_SH_POL_159_128),     GIC_REG(SHARED,GIC_SH_POL_191_160),
+               GIC_REG(SHARED,GIC_SH_POL_223_192),     GIC_REG(SHARED,GIC_SH_POL_255_224),
+               GIC_REG(SHARED,GIC_SH_TRIG_31_0),       GIC_REG(SHARED,GIC_SH_TRIG_63_32),
+               GIC_REG(SHARED,GIC_SH_TRIG_95_64),      GIC_REG(SHARED,GIC_SH_TRIG_127_96),
+               GIC_REG(SHARED,GIC_SH_TRIG_159_128),    GIC_REG(SHARED,GIC_SH_TRIG_191_160),
+               GIC_REG(SHARED,GIC_SH_TRIG_223_192),    GIC_REG(SHARED,GIC_SH_TRIG_255_224),
+               GIC_REG(SHARED,GIC_SH_DUAL_31_0),       GIC_REG(SHARED,GIC_SH_DUAL_63_32),
+               GIC_REG(SHARED,GIC_SH_DUAL_95_64),      GIC_REG(SHARED,GIC_SH_DUAL_127_96),
+               GIC_REG(SHARED,GIC_SH_DUAL_159_128),    GIC_REG(SHARED,GIC_SH_DUAL_191_160),
+               GIC_REG(SHARED,GIC_SH_DUAL_223_192),    GIC_REG(SHARED,GIC_SH_DUAL_255_224),
+               GIC_REG(SHARED,GIC_SH_WEDGE),
+               GIC_REG(SHARED,GIC_SH_RMASK_31_0),      GIC_REG(SHARED,GIC_SH_RMASK_63_32),
+               GIC_REG(SHARED,GIC_SH_RMASK_95_64),     GIC_REG(SHARED,GIC_SH_RMASK_127_96),
+               GIC_REG(SHARED,GIC_SH_RMASK_159_128),   GIC_REG(SHARED,GIC_SH_RMASK_191_160),
+               GIC_REG(SHARED,GIC_SH_RMASK_223_192),   GIC_REG(SHARED,GIC_SH_RMASK_255_224),
+               GIC_REG(SHARED,GIC_SH_SMASK_31_0),      GIC_REG(SHARED,GIC_SH_SMASK_63_32),
+               GIC_REG(SHARED,GIC_SH_SMASK_95_64),     GIC_REG(SHARED,GIC_SH_SMASK_127_96),
+               GIC_REG(SHARED,GIC_SH_SMASK_159_128),   GIC_REG(SHARED,GIC_SH_SMASK_191_160),
+               GIC_REG(SHARED,GIC_SH_SMASK_223_192),   GIC_REG(SHARED,GIC_SH_SMASK_255_224),
+               GIC_REG(SHARED,GIC_SH_MASK_31_0),       GIC_REG(SHARED,GIC_SH_MASK_63_32),
+               GIC_REG(SHARED,GIC_SH_MASK_95_64),      GIC_REG(SHARED,GIC_SH_MASK_127_96),
+               GIC_REG(SHARED,GIC_SH_MASK_159_128),    GIC_REG(SHARED,GIC_SH_MASK_191_160),
+               GIC_REG(SHARED,GIC_SH_MASK_223_192),    GIC_REG(SHARED,GIC_SH_MASK_255_224),
+               GIC_REG(SHARED,GIC_SH_PEND_31_0),       GIC_REG(SHARED,GIC_SH_PEND_63_32),
+               GIC_REG(SHARED,GIC_SH_PEND_95_64),      GIC_REG(SHARED,GIC_SH_PEND_127_96),
+               GIC_REG(SHARED,GIC_SH_PEND_159_128),    GIC_REG(SHARED,GIC_SH_PEND_191_160),
+               GIC_REG(SHARED,GIC_SH_PEND_223_192),    GIC_REG(SHARED,GIC_SH_PEND_255_224)
+       );
+
+       numints = (GIC_REG(SHARED,GIC_SH_CONFIG) & GIC_SH_CONFIG_NUMINTRS_MSK) >> GIC_SH_CONFIG_NUMINTRS_SHF;
+       numints = (numints + 1) * 8;
+
+       n += snprintf(buf+n, PAGE_SIZE-n,
+               "\nGlobal Interrupt Map SrcX to Pin:\n");
+       for (i = 0; i < numints; i++) {
+               if ((i % 8) == 0)
+                       n += snprintf(buf+n, PAGE_SIZE-n, "%02x:\t", i);
+               n += snprintf(buf+n, PAGE_SIZE-n,
+                       "%08x ", GIC_REG_ADDR(SHARED,GIC_SH_MAP_TO_PIN(i)));
+               if ((i % 8) == 7)
+                       n += snprintf(buf+n, PAGE_SIZE-n, "\n");
+       }
+
+       n += snprintf(buf+n, PAGE_SIZE-n,
+               "\nGlobal Interrupt Map SrcX to VPE:\n");
+       for (i = 0; i < numints; i++) {
+               if ((i % 4) == 0)
+                       n += snprintf(buf+n, PAGE_SIZE-n, "%02x:\t", i);
+               for (j = 0; j < 2; j++) {
+                       n += snprintf(buf+n, PAGE_SIZE-n,
+                           "%08x ", GIC_REG_ADDR(SHARED,GIC_SH_INTR_MAP_TO_VPE_BASE_OFS + ((j * 4) + (i * 32))));
+               }
+               n += snprintf(buf+n, PAGE_SIZE-n, "\t");
+               if ((i % 4) == 3)
+                       n += snprintf(buf+n, PAGE_SIZE-n, "\n");
+       }
+       n += snprintf(buf+n, PAGE_SIZE-n,
+               "\nDINT Send to Group Register\t\t%08x\n",
+               GIC_REG(SHARED,GIC_DINT));
+
+       return n;
+}
+
+static ssize_t show_gic_local(struct device *dev,
+                             struct device_attribute *attr, char *buf)
+{
+       int n = 0;
+       int i;
+
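+       /*
+        * Writing a VPE number into the VPE-Other Addressing register
+        * redirects the VPE_OTHER register block to that VPE, letting any
+        * CPU dump another VPE's local GIC state.
+        */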
+       GIC_REG(VPE_LOCAL,GIC_VPE_OTHER_ADDR) = (dev->id);
+
+       n += snprintf(buf+n, PAGE_SIZE-n,
+               "Local Interrupt Control Register:\t\t%08x\n"
+               "Local Interrupt Pending Register:\t\t%08x\n"
+               "Local Mask Register:\t\t\t\t%08x\n"
+               "Local Reset Mask Register:\t\t\t%08x\n"
+               "Local Set Mask Register:\t\t\t%08x\n"
+               "Local WatchDog Map-to-Pin Register:\t\t%08x\n"
+               "Local GIC Counter/Compare Map-to-Pin Register:\t%08x\n"
+               "Local CPU Timer Map-to-Pin Register:\t\t%08x\n"
+               "Local CPU Fast Debug Channel Map-to-Pin:\t%08x\n"
+               "Local Perf Counter Map-to-Pin Register:\t\t%08x\n"
+               "Local SWInt0 Map-to-Pin Register:\t\t%08x\n"
+               "Local SWInt1 Map-to-Pin Register:\t\t%08x\n"
+               "VPE-Other Addressing Register:\t\t\t%08x\n"
+               "VPE-Local Identification Register:\t\t%08x\n"
+               "Programmable/Watchdog Timer0 Config Register:\t\t%08x\n"
+               "Programmable/Watchdog Timer0 Count Register:\t\t%08x\n"
+               "Programmable/Watchdog Timer0 Initial Count Register:\t%08x\n"
+               "CompareLo Register:\t\t\t\t%08x\n"
+               "CompareHi Register:\t\t\t\t%08x\n"
+               ,
+               GIC_REG(VPE_OTHER,GIC_VPE_CTL),
+               GIC_REG(VPE_OTHER,GIC_VPE_PEND),
+               GIC_REG(VPE_OTHER,GIC_VPE_MASK),
+               GIC_REG(VPE_OTHER,GIC_VPE_RMASK),
+               GIC_REG(VPE_OTHER,GIC_VPE_SMASK),
+               GIC_REG(VPE_OTHER,GIC_VPE_WD_MAP),
+               GIC_REG(VPE_OTHER,GIC_VPE_COMPARE_MAP),
+               GIC_REG(VPE_OTHER,GIC_VPE_TIMER_MAP),
+               GIC_REG(VPE_OTHER,GIC_VPE_FDEBUG_MAP),
+               GIC_REG(VPE_OTHER,GIC_VPE_PERFCTR_MAP),
+               GIC_REG(VPE_OTHER,GIC_VPE_SWINT0_MAP),
+               GIC_REG(VPE_OTHER,GIC_VPE_SWINT1_MAP),
+               GIC_REG(VPE_OTHER,GIC_VPE_OTHER_ADDR),
+               GIC_REG(VPE_OTHER,GIC_VPE_ID),
+               GIC_REG(VPE_OTHER,GIC_VPE_WD_CONFIG0),
+               GIC_REG(VPE_OTHER,GIC_VPE_WD_COUNT0),
+               GIC_REG(VPE_OTHER,GIC_VPE_WD_INITIAL0),
+               GIC_REG(VPE_OTHER,GIC_VPE_COMPARE_LO),
+               GIC_REG(VPE_OTHER,GIC_VPE_COMPARE_HI)
+       );
+
+       n += snprintf(buf+n, PAGE_SIZE-n,
+               "\nEIC Shadow Set for Interrupt SrcX:\n");
+       for (i = 0; i < 64; i++) {
+               if ((i % 8) == 0)
+                       n += snprintf(buf+n, PAGE_SIZE-n, "%02x:\t", i);
+               n += snprintf(buf+n, PAGE_SIZE-n,
+                       "%08x ", GIC_REG_ADDR(VPE_OTHER,GIC_VPE_EIC_SS(i)));
+               if ((i % 8) == 7)
+                       n += snprintf(buf+n, PAGE_SIZE-n, "\n");
+       }
+
+       n += snprintf(buf+n, PAGE_SIZE-n,
+               "\nVPE Local DINT Group Participate Register:\t%08x\n"
+               "VPE Local DebugBreak Group Register:\t\t%08x\n"
+               ,
+               GIC_REG(VPE_OTHER,GIC_VPE_DINT),
+               GIC_REG(VPE_OTHER,GIC_VPE_DEBUG_BREAK));
+
+       return n;
+}
+
+static DEVICE_ATTR(gic_global, 0444, show_gic_global, NULL);
+static DEVICE_ATTR(gic_local, 0444, show_gic_local, NULL);
+
+static struct bus_type gic_subsys = {
+       .name = "gic",
+       .dev_name = "gic",
+};
+
+static __cpuinit int gic_add_vpe(int cpu)
+{
+       struct device *dev;
+       int err;
+       char name[16];
+
+       dev = kzalloc(sizeof *dev, GFP_KERNEL);
+       if (!dev)
+               return -ENOMEM;
+
+       dev->id = cpu;
+       dev->bus = &gic_subsys;
+       snprintf(name, sizeof name, "vpe%d", cpu);
+       dev->init_name = name;
+
+       err = device_register(dev);
+       if (err) {
+               put_device(dev);
+               return err;
+       }
+
+       err = device_create_file(dev, &dev_attr_gic_local);
+       if (err)
+               return err;
+
+       return 0;
+}
+
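+/*
+ * The dump files appear under /sys/devices/system/gic/: gic_global on
+ * the subsystem root and one vpe<N>/gic_local per VPE, e.g.
+ * "cat /sys/devices/system/gic/vpe0/gic_local".
+ */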
+static int __init init_gic_sysfs(void)
+{
+       int rc;
+       int vpeN;
+       int vpe;
+
+       if (!gic_present)
+               return 0;
+
+       rc = subsys_system_register(&gic_subsys, NULL);
+       if (rc)
+               return rc;
+
+       rc = device_create_file(gic_subsys.dev_root, &dev_attr_gic_global);
+       if (rc)
+               return rc;
+
+       vpeN = ((GIC_REG(SHARED,GIC_SH_CONFIG) & GIC_SH_CONFIG_NUMVPES_MSK)
+                       >> GIC_SH_CONFIG_NUMVPES_SHF) + 1;
+       for (vpe = 0; vpe < vpeN; vpe++) {
+               rc = gic_add_vpe(vpe);
+               if (rc)
+                       return rc;
+       }
+
+       return 0;
+}
+
+device_initcall_sync(init_gic_sysfs);
+
+#endif /* CONFIG_SYSFS */
index fab40f7d2e0331713883e6b63ce05600b6ff738b..ac9facc086947df303222babf911b812b188b39d 100644
@@ -131,7 +131,7 @@ void __init init_msc_irqs(unsigned long icubase, unsigned int irqbase, msc_irqma
 
        board_bind_eic_interrupt = &msc_bind_eic_interrupt;
 
-       for (; nirq >= 0; nirq--, imp++) {
+       for (; nirq > 0; nirq--, imp++) {
                int n = imp->im_irq;
 
                switch (imp->im_type) {
index 72ef2d25cbf21ab5bebb1a93b0c0bf6e4e4fdff7..9457bb615bd88e825c83ab2be1f4a6f173114065 100644
@@ -26,6 +26,8 @@
  *
  * This file exports one global function:
  *     void mips_cpu_irq_init(void);
+ * and one variable:
+ *      unsigned mips_smp_c0_status_mask;
+ * which platform interrupt setup fills with the Status.IM bits that
+ * secondary CPUs enable at bringup (see cmp_init_secondary()).
  */
 #include <linux/init.h>
 #include <linux/interrupt.h>
@@ -37,6 +39,8 @@
 #include <asm/mipsregs.h>
 #include <asm/mipsmtregs.h>
 
+unsigned mips_smp_c0_status_mask;
+
 static inline void unmask_mips_irq(struct irq_data *d)
 {
        set_c0_status(0x100 << (d->irq - MIPS_CPU_IRQ_BASE));
index fcaac2f132f08e850e1a4088788e6327c2ce4f64..284f84bd2ec46bb6a0f68b5222d3f6644b3b3041 100644
@@ -32,6 +32,7 @@
 #include <asm/cacheflush.h>
 #include <asm/processor.h>
 #include <asm/sigcontext.h>
+#include <asm/uaccess.h>
 
 static struct hard_trap_info {
        unsigned char tt;       /* Trap type code for MIPS R3xxx and R4xxx */
@@ -208,7 +209,14 @@ void arch_kgdb_breakpoint(void)
 
 static void kgdb_call_nmi_hook(void *ignored)
 {
+       mm_segment_t old_fs;
+
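+       /*
+        * kgdb accesses memory through the user-access helpers, so widen
+        * the address limit to KERNEL_DS while servicing the debugger.
+        */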
+       old_fs = get_fs();
+       set_fs(KERNEL_DS);
+
        kgdb_nmicallback(raw_smp_processor_id(), NULL);
+
+       set_fs(old_fs);
 }
 
 void kgdb_roundup_cpus(unsigned long flags)
@@ -282,6 +290,7 @@ static int kgdb_mips_notify(struct notifier_block *self, unsigned long cmd,
        struct die_args *args = (struct die_args *)ptr;
        struct pt_regs *regs = args->regs;
        int trap = (regs->cp0_cause & 0x7c) >> 2;
+       mm_segment_t old_fs;
 
 #ifdef CONFIG_KPROBES
        /*
@@ -296,11 +305,16 @@ static int kgdb_mips_notify(struct notifier_block *self, unsigned long cmd,
        if (user_mode(regs))
                return NOTIFY_DONE;
 
+       old_fs = get_fs();
+       set_fs(KERNEL_DS);
+
        if (atomic_read(&kgdb_active) != -1)
                kgdb_nmicallback(smp_processor_id(), regs);
 
-       if (kgdb_handle_exception(trap, compute_signal(trap), cmd, regs))
+       if (kgdb_handle_exception(trap, compute_signal(trap), cmd, regs)) {
+               set_fs(old_fs);
                return NOTIFY_DONE;
+       }
 
        if (atomic_read(&kgdb_setting_breakpoint))
                if ((trap == 9) && (regs->cp0_epc == (unsigned long)breakinst))
@@ -310,6 +324,7 @@ static int kgdb_mips_notify(struct notifier_block *self, unsigned long cmd,
        local_irq_enable();
        __flush_cache_all();
 
+       set_fs(old_fs);
        return NOTIFY_STOP;
 }
 
index 33d067148e61ba6d1526529ef735b24f2c3017c3..a03e93c4a94634b786c873f41e8b2f56846bad26 100644
@@ -168,14 +168,10 @@ NESTED(ftrace_graph_caller, PT_SIZE, ra)
 #endif
 
        /* arg3: Get frame pointer of current stack */
-#ifdef CONFIG_FRAME_POINTER
-       move    a2, fp
-#else /* ! CONFIG_FRAME_POINTER */
 #ifdef CONFIG_64BIT
        PTR_LA  a2, PT_SIZE(sp)
 #else
        PTR_LA  a2, (PT_SIZE+8)(sp)
-#endif
 #endif
 
        jal     prepare_ftrace_return
index 6e58e97fcd39bb09581d39030a38940cc9a4f848..59d45f9826b0b46ced6f6a40a355d36e84cb29ba 100644
 #include <asm/ftrace.h>
 
 extern void *__bzero(void *__s, size_t __count);
-extern long __strncpy_from_user_nocheck_asm(char *__to,
+extern long __strncpy_from_kernel_nocheck_asm(char *__to,
                                            const char *__from, long __len);
-extern long __strncpy_from_user_asm(char *__to, const char *__from,
+extern long __strncpy_from_kernel_asm(char *__to, const char *__from,
                                    long __len);
-extern long __strlen_user_nocheck_asm(const char *s);
-extern long __strlen_user_asm(const char *s);
-extern long __strnlen_user_nocheck_asm(const char *s);
-extern long __strnlen_user_asm(const char *s);
+extern long __strlen_kernel_nocheck_asm(const char *s);
+extern long __strlen_kernel_asm(const char *s);
+extern long __strnlen_kernel_nocheck_asm(const char *s);
+extern long __strnlen_kernel_asm(const char *s);
 
 /*
  * String functions
@@ -44,16 +44,44 @@ EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(__copy_user);
 EXPORT_SYMBOL(__copy_user_inatomic);
 EXPORT_SYMBOL(__bzero);
-EXPORT_SYMBOL(__strncpy_from_user_nocheck_asm);
-EXPORT_SYMBOL(__strncpy_from_user_asm);
+EXPORT_SYMBOL(__strlen_kernel_nocheck_asm);
+EXPORT_SYMBOL(__strlen_kernel_asm);
+EXPORT_SYMBOL(__strnlen_kernel_nocheck_asm);
+EXPORT_SYMBOL(__strnlen_kernel_asm);
+EXPORT_SYMBOL(__strncpy_from_kernel_nocheck_asm);
+EXPORT_SYMBOL(__strncpy_from_kernel_asm);
+
+#ifdef CONFIG_EVA
+extern void *__bzero_user(void *__s, size_t __count);
+extern long __strncpy_from_user_nocheck_asm(char *__to,
+                                           const char *__from, long __len);
+extern long __strncpy_from_user_asm(char *__to, const char *__from,
+                                   long __len);
+extern long __strlen_user_nocheck_asm(const char *s);
+extern long __strlen_user_asm(const char *s);
+extern long __strnlen_user_nocheck_asm(const char *s);
+extern long __strnlen_user_asm(const char *s);
+
+EXPORT_SYMBOL(__copy_touser);
+EXPORT_SYMBOL(__copy_fromuser);
+EXPORT_SYMBOL(__copy_fromuser_inatomic);
+EXPORT_SYMBOL(__copy_inuser);
+EXPORT_SYMBOL(__bzero_user);
 EXPORT_SYMBOL(__strlen_user_nocheck_asm);
 EXPORT_SYMBOL(__strlen_user_asm);
 EXPORT_SYMBOL(__strnlen_user_nocheck_asm);
 EXPORT_SYMBOL(__strnlen_user_asm);
+EXPORT_SYMBOL(__strncpy_from_user_nocheck_asm);
+EXPORT_SYMBOL(__strncpy_from_user_asm);
+#endif
 
 EXPORT_SYMBOL(csum_partial);
 EXPORT_SYMBOL(csum_partial_copy_nocheck);
 EXPORT_SYMBOL(__csum_partial_copy_user);
+#ifdef CONFIG_EVA
+EXPORT_SYMBOL(__csum_partial_copy_fromuser);
+EXPORT_SYMBOL(__csum_partial_copy_touser);
+#endif
 
 EXPORT_SYMBOL(invalid_pte_table);
 #ifdef CONFIG_FUNCTION_TRACER
index 45f1ffcf1a4b6299b181cf99e9d08cac59da6cfb..b94274f2c662789644dda7960915d8993d0895b7 100644
@@ -805,7 +805,7 @@ static void reset_counters(void *arg)
        }
 }
 
-/* 24K/34K/1004K cores can share the same event map. */
+/* 24K/34K/1004K/interAptiv/loongson1 cores share the same event map. */
 static const struct mips_perf_event mipsxxcore_event_map
                                [PERF_COUNT_HW_MAX] = {
        [PERF_COUNT_HW_CPU_CYCLES] = { 0x00, CNTR_EVEN | CNTR_ODD, P },
@@ -814,8 +814,8 @@ static const struct mips_perf_event mipsxxcore_event_map
        [PERF_COUNT_HW_BRANCH_MISSES] = { 0x02, CNTR_ODD, T },
 };
 
-/* 74K core has different branch event code. */
-static const struct mips_perf_event mipsxx74Kcore_event_map
+/* 74K/proAptiv core has different branch event code. */
+static const struct mips_perf_event mipsxxcore_event_map2
                                [PERF_COUNT_HW_MAX] = {
        [PERF_COUNT_HW_CPU_CYCLES] = { 0x00, CNTR_EVEN | CNTR_ODD, P },
        [PERF_COUNT_HW_INSTRUCTIONS] = { 0x01, CNTR_EVEN | CNTR_ODD, T },
@@ -849,7 +849,7 @@ static const struct mips_perf_event xlp_event_map[PERF_COUNT_HW_MAX] = {
        [PERF_COUNT_HW_BRANCH_MISSES] = { 0x1c, CNTR_ALL }, /* PAPI_BR_MSP */
 };
 
-/* 24K/34K/1004K cores can share the same cache event map. */
+/* 24K/34K/1004K/interAptiv/loongson1 cores share the same cache event map. */
 static const struct mips_perf_event mipsxxcore_cache_map
                                [PERF_COUNT_HW_CACHE_MAX]
                                [PERF_COUNT_HW_CACHE_OP_MAX]
@@ -930,8 +930,8 @@ static const struct mips_perf_event mipsxxcore_cache_map
 },
 };
 
-/* 74K core has completely different cache event map. */
-static const struct mips_perf_event mipsxx74Kcore_cache_map
+/* 74K/proAptiv core has completely different cache event map. */
+static const struct mips_perf_event mipsxxcore_cache_map2
                                [PERF_COUNT_HW_CACHE_MAX]
                                [PERF_COUNT_HW_CACHE_OP_MAX]
                                [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
@@ -971,13 +971,18 @@ static const struct mips_perf_event mipsxx74Kcore_cache_map
 [C(LL)] = {
        [C(OP_READ)] = {
                [C(RESULT_ACCESS)]      = { 0x1c, CNTR_ODD, P },
-               [C(RESULT_MISS)]        = { 0x1d, CNTR_EVEN | CNTR_ODD, P },
+               [C(RESULT_MISS)]        = { 0x1d, CNTR_EVEN, P },
        },
        [C(OP_WRITE)] = {
                [C(RESULT_ACCESS)]      = { 0x1c, CNTR_ODD, P },
-               [C(RESULT_MISS)]        = { 0x1d, CNTR_EVEN | CNTR_ODD, P },
+               [C(RESULT_MISS)]        = { 0x1d, CNTR_EVEN, P },
        },
 },
+/*
+ * 74K core does not have specific DTLB events. proAptiv core has
+ * "speculative" DTLB events which are numbered 0x63 (even/odd) and
+ * not included here. One can use raw events if really needed.
+ */
 [C(ITLB)] = {
        [C(OP_READ)] = {
                [C(RESULT_ACCESS)]      = { 0x04, CNTR_EVEN, T },
@@ -1378,6 +1383,10 @@ static irqreturn_t mipsxx_pmu_handle_irq(int irq, void *dev)
 #define IS_BOTH_COUNTERS_74K_EVENT(b)                                  \
        ((b) == 0 || (b) == 1)
 
+/* proAptiv */
+#define IS_BOTH_COUNTERS_PROAPTIV_EVENT(b)                             \
+       ((b) == 0 || (b) == 1)
+
 /* 1004K */
 #define IS_BOTH_COUNTERS_1004K_EVENT(b)                                        \
        ((b) == 0 || (b) == 1 || (b) == 11)
@@ -1391,6 +1400,20 @@ static irqreturn_t mipsxx_pmu_handle_irq(int irq, void *dev)
 #define IS_RANGE_V_1004K_EVENT(r)      ((r) == 47)
 #endif
 
+/* interAptiv */
+#define IS_BOTH_COUNTERS_INTERAPTIV_EVENT(b)                           \
+       ((b) == 0 || (b) == 1 || (b) == 11)
+#ifdef CONFIG_MIPS_MT_SMP
+/* The P/V/T info is not provided for "(b) == 38" in SUM, assume P. */
+#define IS_RANGE_P_INTERAPTIV_EVENT(r, b)                              \
+       ((b) == 0 || (r) == 18 || (b) == 21 || (b) == 22 ||             \
+        (b) == 25 || (b) == 36 || (b) == 38 || (b) == 39 ||            \
+        (r) == 44 || (r) == 174 || (r) == 176 || ((b) >= 50 &&         \
+        (b) <= 59) || (r) == 188 || (b) == 61 || (b) == 62 ||          \
+        ((b) >= 64 && (b) <= 67))
+#define IS_RANGE_V_INTERAPTIV_EVENT(r) ((r) == 47 || (r) == 175)
+#endif
+
 /* BMIPS5000 */
 #define IS_BOTH_COUNTERS_BMIPS5000_EVENT(b)                            \
        ((b) == 0 || (b) == 1)
@@ -1449,6 +1472,16 @@ static const struct mips_perf_event *mipsxx_pmu_map_raw_event(u64 config)
                                raw_id > 127 ? CNTR_ODD : CNTR_EVEN;
 #ifdef CONFIG_MIPS_MT_SMP
                raw_event.range = P;
+#endif
+               break;
+       case CPU_PROAPTIV:
+               if (IS_BOTH_COUNTERS_PROAPTIV_EVENT(base_id))
+                       raw_event.cntr_mask = CNTR_EVEN | CNTR_ODD;
+               else
+                       raw_event.cntr_mask =
+                               raw_id > 127 ? CNTR_ODD : CNTR_EVEN;
+#ifdef CONFIG_MIPS_MT_SMP
+               raw_event.range = P;
 #endif
                break;
        case CPU_1004K:
@@ -1464,6 +1497,21 @@ static const struct mips_perf_event *mipsxx_pmu_map_raw_event(u64 config)
                        raw_event.range = V;
                else
                        raw_event.range = T;
+#endif
+               break;
+       case CPU_INTERAPTIV:
+               if (IS_BOTH_COUNTERS_INTERAPTIV_EVENT(base_id))
+                       raw_event.cntr_mask = CNTR_EVEN | CNTR_ODD;
+               else
+                       raw_event.cntr_mask =
+                               raw_id > 127 ? CNTR_ODD : CNTR_EVEN;
+#ifdef CONFIG_MIPS_MT_SMP
+               if (IS_RANGE_P_INTERAPTIV_EVENT(raw_id, base_id))
+                       raw_event.range = P;
+               else if (unlikely(IS_RANGE_V_INTERAPTIV_EVENT(raw_id)))
+                       raw_event.range = V;
+               else
+                       raw_event.range = T;
 #endif
                break;
        case CPU_BMIPS5000:
@@ -1576,14 +1624,24 @@ init_hw_perf_events(void)
                break;
        case CPU_74K:
                mipspmu.name = "mips/74K";
-               mipspmu.general_event_map = &mipsxx74Kcore_event_map;
-               mipspmu.cache_event_map = &mipsxx74Kcore_cache_map;
+               mipspmu.general_event_map = &mipsxxcore_event_map2;
+               mipspmu.cache_event_map = &mipsxxcore_cache_map2;
                break;
        case CPU_1004K:
                mipspmu.name = "mips/1004K";
                mipspmu.general_event_map = &mipsxxcore_event_map;
                mipspmu.cache_event_map = &mipsxxcore_cache_map;
                break;
+       case CPU_INTERAPTIV:
+               mipspmu.name = "mips/interAptiv";
+               mipspmu.general_event_map = &mipsxxcore_event_map;
+               mipspmu.cache_event_map = &mipsxxcore_cache_map;
+               break;
+       case CPU_PROAPTIV:
+               mipspmu.name = "mips/proAptiv";
+               mipspmu.general_event_map = &mipsxxcore_event_map2;
+               mipspmu.cache_event_map = &mipsxxcore_cache_map2;
+               break;
        case CPU_LOONGSON1:
                mipspmu.name = "mips/loongson1";
                mipspmu.general_event_map = &mipsxxcore_event_map;
index acb34373679e21f9940f62f251e64dc1e32e5f17..ef6fc20cbb0784f3bd5dfb66550e131d91f2564d 100644
@@ -98,22 +98,51 @@ static int show_cpuinfo(struct seq_file *m, void *v)
        if (cpu_has_mipsmt)     seq_printf(m, "%s", " mt");
        if (cpu_has_mmips)      seq_printf(m, "%s", " micromips");
        if (cpu_has_vz)         seq_printf(m, "%s", " vz");
+       if (cpu_has_eva)        seq_printf(m, "%s", " eva");
        seq_printf(m, "\n");
 
-       if (cpu_has_mmips) {
-               seq_printf(m, "micromips kernel\t: %s\n",
-                     (read_c0_config3() & MIPS_CONF3_ISA_OE) ?  "yes" : "no");
-       }
        seq_printf(m, "shadow register sets\t: %d\n",
                      cpu_data[n].srsets);
        seq_printf(m, "kscratch registers\t: %d\n",
                      hweight8(cpu_data[n].kscratch_mask));
        seq_printf(m, "core\t\t\t: %d\n", cpu_data[n].core);
+#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_MIPS_MT_SMTC)
+       if (cpu_has_mipsmt) {
+               seq_printf(m, "VPE\t\t\t: %d\n", cpu_data[n].vpe_id);
+#if defined(CONFIG_MIPS_MT_SMTC)
+               seq_printf(m, "TC\t\t\t: %d\n", cpu_data[n].tc_id);
+#endif
+       }
+#endif
 
        sprintf(fmt, "VCE%%c exceptions\t\t: %s\n",
                      cpu_has_vce ? "%u" : "not available");
        seq_printf(m, fmt, 'D', vced_count);
        seq_printf(m, fmt, 'I', vcei_count);
+
+       seq_printf(m, "kernel modes\t\t:");
+#ifdef CONFIG_64BIT
+       seq_printf(m, " 64bit");
+#else
+       seq_printf(m, " 32bit");
+#endif
+#ifdef CONFIG_64BIT_PHYS_ADDR
+       seq_printf(m, " 64bit-address");
+#endif
+#ifdef CONFIG_EVA
+       seq_printf(m, " eva");
+#endif
+#ifdef CONFIG_HIGHMEM
+       seq_printf(m, " highmem");
+#endif
+       if (cpu_has_mmips && (read_c0_config3() & MIPS_CONF3_ISA_OE))
+               seq_printf(m, " micromips");
+#ifdef CONFIG_SMP
+       seq_printf(m, " smp");
+#endif
+       seq_printf(m, "\n");
+
        seq_printf(m, "\n");
 
        return 0;
index c6a041d9d05d57fcd71f91a228c86524e00efd08..44f15d4d7fbde5ccd76fc3302512fa9783b2338a 100644
@@ -60,9 +60,6 @@ void start_thread(struct pt_regs * regs, unsigned long pc, unsigned long sp)
 
        /* New thread loses kernel privileges. */
        status = regs->cp0_status & ~(ST0_CU0|ST0_CU1|ST0_FR|KU_MASK);
-#ifdef CONFIG_64BIT
-       status |= test_thread_flag(TIF_32BIT_REGS) ? 0 : ST0_FR;
-#endif
        status |= KU_USER;
        regs->cp0_status = status;
        clear_used_math();
index 9c6299c733a317ce5c975ce91b054297502ba960..413f0d0d3efd0b2c7c37a60863a066655e62ca5c 100644
@@ -222,14 +222,14 @@ int ptrace_set_watch_regs(struct task_struct *child,
        for (i = 0; i < current_cpu_data.watch_reg_use_cnt; i++) {
                __get_user(lt[i], &addr->WATCH_STYLE.watchlo[i]);
 #ifdef CONFIG_32BIT
-               if (lt[i] & __UA_LIMIT)
+               if (lt[i] & USER_DS.seg)
                        return -EINVAL;
 #else
                if (test_tsk_thread_flag(child, TIF_32BIT_ADDR)) {
                        if (lt[i] & 0xffffffff80000000UL)
                                return -EINVAL;
                } else {
-                       if (lt[i] & __UA_LIMIT)
+                       if (lt[i] & USER_DS.seg)
                                return -EINVAL;
                }
 #endif
index 55ffe149dae90582bd2be2ec1d828639eae79b8d..be6d9815bcd6ffdaff6cfc0e2d50c853958aa3d1 100644
@@ -53,6 +53,36 @@ LEAF(_save_fp_context)
        EX      sdc1 $f27, SC_FPREGS+216(a0)
        EX      sdc1 $f29, SC_FPREGS+232(a0)
        EX      sdc1 $f31, SC_FPREGS+248(a0)
+#else
+#ifdef CONFIG_MIPS32_R2
+       .set    push
+       .set    mips64r2
+       .set    noreorder
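+       /*
+        * Shift Status.FR into the sign bit: FR clear means the task only
+        * uses the 16 even-numbered double registers, so the odd-register
+        * saves below are skipped.
+        */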
+       mfc0    t0, CP0_STATUS
+       sll     t0, t0, 31 - _ST0_FR
+       bgez    t0, 1f              # 16 / 32 register mode?
+        nop
+
+       /* Store the 16 odd double precision registers */
+       EX      sdc1 $f1, SC_FPREGS+8(a0)
+       EX      sdc1 $f3, SC_FPREGS+24(a0)
+       EX      sdc1 $f5, SC_FPREGS+40(a0)
+       EX      sdc1 $f7, SC_FPREGS+56(a0)
+       EX      sdc1 $f9, SC_FPREGS+72(a0)
+       EX      sdc1 $f11, SC_FPREGS+88(a0)
+       EX      sdc1 $f13, SC_FPREGS+104(a0)
+       EX      sdc1 $f15, SC_FPREGS+120(a0)
+       EX      sdc1 $f17, SC_FPREGS+136(a0)
+       EX      sdc1 $f19, SC_FPREGS+152(a0)
+       EX      sdc1 $f21, SC_FPREGS+168(a0)
+       EX      sdc1 $f23, SC_FPREGS+184(a0)
+       EX      sdc1 $f25, SC_FPREGS+200(a0)
+       EX      sdc1 $f27, SC_FPREGS+216(a0)
+       EX      sdc1 $f29, SC_FPREGS+232(a0)
+       EX      sdc1 $f31, SC_FPREGS+248(a0)
+       .set    pop
+1:
+#endif
 #endif
 
        /* Store the 16 even double precision registers */
@@ -82,6 +112,30 @@ LEAF(_save_fp_context)
 LEAF(_save_fp_context32)
        cfc1    t1, fcr31
 
+       mfc0    t0, CP0_STATUS
+       sll     t0, t0, 31 - _ST0_FR
+       bgez    t0, 1f              # 16 / 32 register mode?
+        nop
+
+       /* Store the 16 odd double precision registers */
+       EX      sdc1 $f1, SC32_FPREGS+8(a0)
+       EX      sdc1 $f3, SC32_FPREGS+24(a0)
+       EX      sdc1 $f5, SC32_FPREGS+40(a0)
+       EX      sdc1 $f7, SC32_FPREGS+56(a0)
+       EX      sdc1 $f9, SC32_FPREGS+72(a0)
+       EX      sdc1 $f11, SC32_FPREGS+88(a0)
+       EX      sdc1 $f13, SC32_FPREGS+104(a0)
+       EX      sdc1 $f15, SC32_FPREGS+120(a0)
+       EX      sdc1 $f17, SC32_FPREGS+136(a0)
+       EX      sdc1 $f19, SC32_FPREGS+152(a0)
+       EX      sdc1 $f21, SC32_FPREGS+168(a0)
+       EX      sdc1 $f23, SC32_FPREGS+184(a0)
+       EX      sdc1 $f25, SC32_FPREGS+200(a0)
+       EX      sdc1 $f27, SC32_FPREGS+216(a0)
+       EX      sdc1 $f29, SC32_FPREGS+232(a0)
+       EX      sdc1 $f31, SC32_FPREGS+248(a0)
+1:
+
        EX      sdc1 $f0, SC32_FPREGS+0(a0)
        EX      sdc1 $f2, SC32_FPREGS+16(a0)
        EX      sdc1 $f4, SC32_FPREGS+32(a0)
@@ -131,6 +185,36 @@ LEAF(_restore_fp_context)
        EX      ldc1 $f27, SC_FPREGS+216(a0)
        EX      ldc1 $f29, SC_FPREGS+232(a0)
        EX      ldc1 $f31, SC_FPREGS+248(a0)
+
+#else
+#ifdef CONFIG_MIPS32_R2
+       .set    push
+       .set    mips64r2
+       .set    noreorder
+       mfc0    t1, CP0_STATUS
+       sll     t1, t1, 31 - _ST0_FR
+       bgez    t1, 1f                          # 16 / 32 register mode?
+        nop
+
+       EX      ldc1 $f1, SC_FPREGS+8(a0)
+       EX      ldc1 $f3, SC_FPREGS+24(a0)
+       EX      ldc1 $f5, SC_FPREGS+40(a0)
+       EX      ldc1 $f7, SC_FPREGS+56(a0)
+       EX      ldc1 $f9, SC_FPREGS+72(a0)
+       EX      ldc1 $f11, SC_FPREGS+88(a0)
+       EX      ldc1 $f13, SC_FPREGS+104(a0)
+       EX      ldc1 $f15, SC_FPREGS+120(a0)
+       EX      ldc1 $f17, SC_FPREGS+136(a0)
+       EX      ldc1 $f19, SC_FPREGS+152(a0)
+       EX      ldc1 $f21, SC_FPREGS+168(a0)
+       EX      ldc1 $f23, SC_FPREGS+184(a0)
+       EX      ldc1 $f25, SC_FPREGS+200(a0)
+       EX      ldc1 $f27, SC_FPREGS+216(a0)
+       EX      ldc1 $f29, SC_FPREGS+232(a0)
+       EX      ldc1 $f31, SC_FPREGS+248(a0)
+       .set    pop
+1:
+#endif
 #endif
        EX      ldc1 $f0, SC_FPREGS+0(a0)
        EX      ldc1 $f2, SC_FPREGS+16(a0)
@@ -155,9 +239,37 @@ LEAF(_restore_fp_context)
 
 #ifdef CONFIG_MIPS32_COMPAT
 LEAF(_restore_fp_context32)
+       .set    push
+       .set    mips64r2
+       .set    noreorder
+
        /* Restore an o32 sigcontext.  */
        EX      lw t0, SC32_FPC_CSR(a0)
-       EX      ldc1 $f0, SC32_FPREGS+0(a0)
+
+       mfc0    t1, CP0_STATUS
+       sll     t1, t1, 31 - _ST0_FR
+       bgez    t1, 1f                          # 16 / 32 register mode?
+        nop
+
+       EX      ldc1 $f1, SC32_FPREGS+8(a0)
+       EX      ldc1 $f3, SC32_FPREGS+24(a0)
+       EX      ldc1 $f5, SC32_FPREGS+40(a0)
+       EX      ldc1 $f7, SC32_FPREGS+56(a0)
+       EX      ldc1 $f9, SC32_FPREGS+72(a0)
+       EX      ldc1 $f11, SC32_FPREGS+88(a0)
+       EX      ldc1 $f13, SC32_FPREGS+104(a0)
+       EX      ldc1 $f15, SC32_FPREGS+120(a0)
+       EX      ldc1 $f17, SC32_FPREGS+136(a0)
+       EX      ldc1 $f19, SC32_FPREGS+152(a0)
+       EX      ldc1 $f21, SC32_FPREGS+168(a0)
+       EX      ldc1 $f23, SC32_FPREGS+184(a0)
+       EX      ldc1 $f25, SC32_FPREGS+200(a0)
+       EX      ldc1 $f27, SC32_FPREGS+216(a0)
+       EX      ldc1 $f29, SC32_FPREGS+232(a0)
+       EX      ldc1 $f31, SC32_FPREGS+248(a0)
+1:
+
+       EX      ldc1 $f0, SC32_FPREGS+0(a0)
        EX      ldc1 $f2, SC32_FPREGS+16(a0)
        EX      ldc1 $f4, SC32_FPREGS+32(a0)
        EX      ldc1 $f6, SC32_FPREGS+48(a0)
@@ -177,6 +289,7 @@ LEAF(_restore_fp_context32)
        jr      ra
         li     v0, 0                                   # success
        END(_restore_fp_context32)
+       .set    pop
 #endif
 
        .set    reorder
index 5e51219990aa03d5617459c818b96748b787a510..f4abdb653aa31095568144dff905ead9f2dd5d0f 100644
        and     t0, t0, t1
        LONG_S  t0, ST_OFF(t3)
 
+       /*
+        * Copy the FR bit of the thread Status just saved above into the
+        * live Status/TCStatus, so fpu_save_double runs with that
+        * thread's FP register model.
+        */
+
+#if defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_64BIT)
+#ifdef CONFIG_MIPS_MT_SMTC
+
+       li      t3, ST0_FR
+       mfc0    t2, CP0_TCSTATUS
+       nor     t1, $0, t3
+       and     t0, t0, t3                      # extract FR from prev
+       and     t3, t2, t1
+       or      t0, t0, t3
+       mtc0    t0, CP0_TCSTATUS
+       enable_fpu_hazard
+
+       fpu_save_double a0 t0 t1                # c0_status passed in t0
+                                               # clobbers t1
+       mtc0    t2, CP0_TCSTATUS
+#else
+       li      t3, ST0_FR
+       mfc0    t2, CP0_STATUS
+       nor     t1, $0, t3
+       and     t0, t0, t3                      # extract FR from prev
+       and     t3, t2, t1
+       or      t0, t0, t3
+       mtc0    t0, CP0_STATUS
+       enable_fpu_hazard
+
+       fpu_save_double a0 t0 t1                # c0_status passed in t0
+                                               # clobbers t1
+       mtc0    t2, CP0_STATUS
+
+#endif /* CONFIG_MIPS_MT_SMTC */
+#else
+
        fpu_save_double a0 t0 t1                # c0_status passed in t0
                                                # clobbers t1
-1:
+#endif
 
+1:
        /*
         * The order of restoring the registers takes care of the race
         * updating $28, $29 and kernelsp without disabling ints.
        xori    t1, t1, TCSTATUS_IXMT
        or      t1, t1, t2
        mtc0    t1, CP0_TCSTATUS
-       _ehb
 #endif /* CONFIG_MIPS_MT_SMTC */
        move    v0, a0
-       jr      ra
+#ifdef CONFIG_CPU_MIPSR2
+       jr.hb   ra
+#else
+       _ehb
+       jr      ra
+#endif
        END(resume)
 
 /*
  * Save a thread's fp context.
  */
 LEAF(_save_fp)
-#ifdef CONFIG_64BIT
+#if defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_64BIT)
        mfc0    t0, CP0_STATUS
 #endif
        fpu_save_double a0 t0 t1                # clobbers t1
@@ -128,7 +167,7 @@ LEAF(_save_fp)
  * Restore a thread's fp context.
  */
 LEAF(_restore_fp)
-#ifdef CONFIG_64BIT
+#if defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_64BIT)
        mfc0    t0, CP0_STATUS
 #endif
        fpu_restore_double a0 t0 t1             # clobbers t1
@@ -143,8 +182,6 @@ LEAF(_restore_fp)
  * We initialize fcr31 to rounding to nearest, no exceptions.
  */
 
-#define FPU_DEFAULT  0x00000000
-
 LEAF(_init_fpu)
 #ifdef CONFIG_MIPS_MT_SMTC
        /* Rather than manipulate per-VPE Status, set per-TC bit in TCStatus */
@@ -161,15 +198,32 @@ LEAF(_init_fpu)
 #endif /* CONFIG_MIPS_MT_SMTC */
        enable_fpu_hazard
 
-       li      t1, FPU_DEFAULT
-       ctc1    t1, fcr31
+#if defined(CONFIG_CPU_MIPS32) || defined(CONFIG_CPU_MIPS64)
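+       /*
+        * If FIR.Has2008 is set the FPU implements the IEEE 754-2008
+        * behaviours, so enable the 2008-style MAC/ABS/NaN bits in the
+        * default FCSR value.
+        */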
+       li      t2, MIPS_FPIR_HAS2008
+       cfc1    t1, CP1_REVISION
+       and     t2, t2, t1
+       li      t1, FPU_CSR_DEFAULT
+       beq     t2, $0, 3f
+       li      t1, FPU_CSR_DEFAULT|FPU_CSR_MAC2008|FPU_CSR_ABS2008|FPU_CSR_NAN2008
+3:
+#endif
+       ctc1    t1, fcr31
 
-       li      t1, -1                          # SNaN
+       li      t1, -1                          # SNaN MIPS, DP or SP or DP+SP
 
 #ifdef CONFIG_64BIT
-       sll     t0, t0, 5
+       sll     t0, t0, 31 - _ST0_FR
        bgez    t0, 1f                          # 16 / 32 register mode?
 
+#ifdef CONFIG_CPU_MIPSR2
+       enable_fpu_hazard
+       li      t2, FPU_CSR_NAN2008
+       cfc1    t3, fcr31
+       and     t2, t2, t3
+       beq     t2, $0, 2f
+       dli     t1, 0x7ff000007fa00000          # SNaN 2008, DP + SP
+2:
+#endif
        dmtc1   t1, $f1
        dmtc1   t1, $f3
        dmtc1   t1, $f5
@@ -187,9 +241,23 @@ LEAF(_init_fpu)
        dmtc1   t1, $f29
        dmtc1   t1, $f31
 1:
-#endif
+#endif /* CONFIG_64BIT */
 
 #ifdef CONFIG_CPU_MIPS32
+#ifdef CONFIG_CPU_MIPS32_R2
+       sll     t0, t0, 31 - _ST0_FR
+       bgez    t0, 2f                          # 16 / 32 register mode?
+
+       enable_fpu_hazard
+       li      t2, FPU_CSR_NAN2008
+       cfc1    t3, fcr31
+       and     t2, t2, t3
+       move    t3, t1                          # SNaN MIPS, DP high word
+       beq     t2, $0, 2f
+       li      t1, 0x7fa00000                  # SNaN 2008, SP
+       li      t3, 0x7ff00000                  # SNaN 2008, DP high word
+2:
+#endif
        mtc1    t1, $f0
        mtc1    t1, $f1
        mtc1    t1, $f2
@@ -222,7 +290,49 @@ LEAF(_init_fpu)
        mtc1    t1, $f29
        mtc1    t1, $f30
        mtc1    t1, $f31
-#else
+
+#ifdef CONFIG_CPU_MIPS32_R2
+       bgez    t0, 1f                          # 16 / 32 register mode?
+
+       move    t1, t3                          # move SNaN, DP high word
+       .set    push
+       .set    mips64r2
+       mthc1   t1, $f0
+       mthc1   t1, $f1
+       mthc1   t1, $f2
+       mthc1   t1, $f3
+       mthc1   t1, $f4
+       mthc1   t1, $f5
+       mthc1   t1, $f6
+       mthc1   t1, $f7
+       mthc1   t1, $f8
+       mthc1   t1, $f9
+       mthc1   t1, $f10
+       mthc1   t1, $f11
+       mthc1   t1, $f12
+       mthc1   t1, $f13
+       mthc1   t1, $f14
+       mthc1   t1, $f15
+       mthc1   t1, $f16
+       mthc1   t1, $f17
+       mthc1   t1, $f18
+       mthc1   t1, $f19
+       mthc1   t1, $f20
+       mthc1   t1, $f21
+       mthc1   t1, $f22
+       mthc1   t1, $f23
+       mthc1   t1, $f24
+       mthc1   t1, $f25
+       mthc1   t1, $f26
+       mthc1   t1, $f27
+       mthc1   t1, $f28
+       mthc1   t1, $f29
+       mthc1   t1, $f30
+       mthc1   t1, $f31
+       .set    pop
+1:
+#endif /* CONFIG_CPU_MIPS32_R2 */
+#else  /* CONFIG_CPU_MIPS32 */
        .set    mips3
        dmtc1   t1, $f0
        dmtc1   t1, $f2
@@ -240,6 +350,6 @@ LEAF(_init_fpu)
        dmtc1   t1, $f26
        dmtc1   t1, $f28
        dmtc1   t1, $f30
-#endif
+#endif /* CONFIG_CPU_MIPS32 */
        jr      ra
        END(_init_fpu)
index 43d2d78d3287dfbfa52ad7fb1a52f6571d9deaa3..74bab9ddd0e1984c9d4e4e95c038bb1d269b60dd 100644
@@ -26,6 +26,12 @@ process_entry:
        PTR_L           s2, (s0)
        PTR_ADD         s0, s0, SZREG
 
+       /*
+        * In case of a kdump/crash kernel, the indirection page is not
+        * populated as the kernel is directly copied to a reserved
+        * location; stop walking the list at the first empty entry.
+        */
+       beqz            s2, done
+
        /* destination page */
        and             s3, s2, 0x1
        beq             s3, zero, 1f
index 9b36424b03c5f41aa48312c770f90e69a43f6fae..c28b40bf509f3cbf6d9fd835fd8809a666fcb831 100644
@@ -125,17 +125,39 @@ stackargs:
 
        la      t1, 5f                  # load up to 3 arguments
        subu    t1, t3
-1:     lw      t5, 16(t0)              # argument #5 from usp
-       .set    push
-       .set    noreorder
+#ifndef CONFIG_EVA
+1:      lw      t5, 16(t0)              # argument #5 from usp
+       .set    push
+       .set    noreorder
        .set    nomacro
        jr      t1
         addiu  t1, 6f - 5f
 
-2:     lw      t8, 28(t0)              # argument #8 from usp
-3:     lw      t7, 24(t0)              # argument #7 from usp
-4:     lw      t6, 20(t0)              # argument #6 from usp
-5:     jr      t1
+2:     .insn
+       lw      t8, 28(t0)              # argument #8 from usp
+3:     .insn
+       lw      t7, 24(t0)              # argument #7 from usp
+4:     .insn
+       lw      t6, 20(t0)              # argument #6 from usp
+5:     .insn
+#else
+       .set    eva
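+       /* lwe loads the word through the EVA user address view */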
+1:     lwe     t5, 16(t0)              # argument #5 from usp
+       .set    push
+       .set    noreorder
+       .set    nomacro
+       jr      t1
+        addiu  t1, 6f - 5f
+
+2:     .insn
+       lwe     t8, 28(t0)              # argument #8 from usp
+3:     .insn
+       lwe     t7, 24(t0)              # argument #7 from usp
+4:     .insn
+       lwe     t6, 20(t0)              # argument #6 from usp
+5:     .insn
+#endif /* CONFIG_EVA */
+       jr      t1
         sw     t5, 16(sp)              # argument #5 to ksp
 
 #ifdef CONFIG_CPU_MICROMIPS
@@ -150,7 +172,7 @@ stackargs:
        sw      t7, 24(sp)              # argument #7 to ksp
        sw      t6, 20(sp)              # argument #6 to ksp
 #endif
-6:     j       stack_done              # go back
+6:      j       stack_done              # go back
         nop
        .set    pop
 
diff --git a/arch/mips/kernel/segment.c b/arch/mips/kernel/segment.c
new file mode 100644
index 0000000..c3ceb77
--- /dev/null
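+++ b/arch/mips/kernel/segment.c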
@@ -0,0 +1,103 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2011 MIPS Technologies, Inc.
+ */
+#include <linux/kernel.h>
+#include <linux/proc_fs.h>
+#include <asm/mipsregs.h>
+
+#ifdef CONFIG_PROC_FS
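+/*
+ * Expose the six programmable segment configurations held in the
+ * SegCtl0..SegCtl2 registers as a read-only /proc/segments dump.
+ */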
+static struct proc_dir_entry *segments;
+
+static void proc_build_segment_config(char *str, unsigned int cfg)
+{
+       unsigned int am;
+       int len = 0;
+       static const char * const am_str[] = {
+               "UK  ", "MK  ", "MSK  ", "MUSK  ", "MUSUK  ", "USK  ",
+               "*Reserved*  ", "UUSK  "};
+
+       /* Segment access mode. */
+       am = (cfg & MIPS_SEGCFG_AM) >> MIPS_SEGCFG_AM_SHIFT;
+       len += sprintf(str + len, "%s", am_str[am]);
+
+       /*
+        * Access modes MK, MSK and MUSK are mapped segments. Therefore
+        * there is no direct physical address mapping.
+        */
+       if ((am == 0) || (am > 3))
+               len += sprintf(str + len, "          %03lx",
+                       ((cfg & MIPS_SEGCFG_PA) >> MIPS_SEGCFG_PA_SHIFT));
+       else
+               len += sprintf(str + len, "          UND");
+
+       /*
+        * Access modes MK, MSK and MUSK are mapped segments. Therefore
+        * there is no defined cache mode setting.
+        */
+       if ((am == 0) || (am > 3))
+               len += sprintf(str + len, "         %01ld",
+                       ((cfg & MIPS_SEGCFG_C) >> MIPS_SEGCFG_C_SHIFT));
+       else
+               len += sprintf(str + len, "         U");
+
+       /* Exception configuration. */
+       len += sprintf(str + len, "         %01ld\n",
+               ((cfg & MIPS_SEGCFG_EU) >> MIPS_SEGCFG_EU_SHIFT));
+}
+
+static int proc_read_segments(char *page, char **start, off_t off,
+                         int count, int *eof, void *data)
+{
+       int len = 0;
+       unsigned int segcfg;
+       char str[64];
+
+       len += sprintf(page + len, "\nSegment   Virtual    Size   Access Mode    Physical    Caching     EU\n");
+
+       len += sprintf(page + len, "-------   --------   ----   -----------   ----------   -------   ------\n");
+
+       segcfg = read_c0_segctl0();
+       proc_build_segment_config(str, segcfg);
+       len += sprintf(page + len, "   0      e0000000   512M      %s", str);
+
+       segcfg >>= 16;
+       proc_build_segment_config(str, segcfg);
+       len += sprintf(page + len, "   1      c0000000   512M      %s", str);
+
+       segcfg = read_c0_segctl1();
+       proc_build_segment_config(str, segcfg);
+       len += sprintf(page + len, "   2      a0000000   512M      %s", str);
+
+       segcfg >>= 16;
+       proc_build_segment_config(str, segcfg);
+       len += sprintf(page + len, "   3      80000000   512M      %s", str);
+
+       segcfg = read_c0_segctl2();
+       proc_build_segment_config(str, segcfg);
+       len += sprintf(page + len, "   4      40000000    1G       %s", str);
+
+       segcfg >>= 16;
+       proc_build_segment_config(str, segcfg);
+       len += sprintf(page + len, "   5      00000000    1G       %s\n", str);
+
+       *eof = 1;
+       return len;
+}
+
+static int __init segments_info(void)
+{
+       if (cpu_has_segments) {
+               segments = create_proc_read_entry("segments", 0444, NULL,
+                               proc_read_segments, NULL);
+               if (!segments)
+                       return -ENOMEM;
+       }
+       return 0;
+}
+
+__initcall(segments_info);
+#endif /* CONFIG_PROC_FS */
index c7f90519e58ce0ade98dd826752fbaa1d2287da0..c538d6e01b7b744cb4af330a5de15b78963cfa7f 100644
@@ -552,6 +552,52 @@ static void __init arch_mem_addpart(phys_t mem, phys_t end, int type)
        add_memory_region(mem, size, type);
 }
 
+#ifdef CONFIG_KEXEC
+static inline unsigned long long get_total_mem(void)
+{
+       unsigned long long total;
+
+       total = max_pfn - min_low_pfn;
+       return total << PAGE_SHIFT;
+}
+
+static void __init mips_parse_crashkernel(void)
+{
+       unsigned long long total_mem;
+       unsigned long long crash_size, crash_base;
+       int ret;
+
+       total_mem = get_total_mem();
+       ret = parse_crashkernel(boot_command_line, total_mem,
+                               &crash_size, &crash_base);
+       if (ret != 0 || crash_size <= 0)
+               return;
+
+       crashk_res.start = crash_base;
+       crashk_res.end   = crash_base + crash_size - 1;
+}
+
+static void __init request_crashkernel(struct resource *res)
+{
+       int ret;
+
+       ret = request_resource(res, &crashk_res);
+       if (!ret)
+               pr_info("Reserving %ldMB of memory at %ldMB for crashkernel\n",
+                       (unsigned long)((crashk_res.end -
+                                        crashk_res.start + 1) >> 20),
+                       (unsigned long)(crashk_res.start  >> 20));
+}
+#else /* !defined(CONFIG_KEXEC)                */
+static void __init mips_parse_crashkernel(void)
+{
+}
+
+static void __init request_crashkernel(struct resource *res)
+{
+}
+#endif /* !defined(CONFIG_KEXEC)  */
+
 static void __init arch_mem_init(char **cmdline_p)
 {
        extern void plat_mem_setup(void);
@@ -608,6 +654,8 @@ static void __init arch_mem_init(char **cmdline_p)
                                BOOTMEM_DEFAULT);
        }
 #endif
+
+       mips_parse_crashkernel();
 #ifdef CONFIG_KEXEC
        if (crashk_res.start != crashk_res.end)
                reserve_bootmem(crashk_res.start,
@@ -620,52 +668,6 @@ static void __init arch_mem_init(char **cmdline_p)
        paging_init();
 }
 
-#ifdef CONFIG_KEXEC
-static inline unsigned long long get_total_mem(void)
-{
-       unsigned long long total;
-
-       total = max_pfn - min_low_pfn;
-       return total << PAGE_SHIFT;
-}
-
-static void __init mips_parse_crashkernel(void)
-{
-       unsigned long long total_mem;
-       unsigned long long crash_size, crash_base;
-       int ret;
-
-       total_mem = get_total_mem();
-       ret = parse_crashkernel(boot_command_line, total_mem,
-                               &crash_size, &crash_base);
-       if (ret != 0 || crash_size <= 0)
-               return;
-
-       crashk_res.start = crash_base;
-       crashk_res.end   = crash_base + crash_size - 1;
-}
-
-static void __init request_crashkernel(struct resource *res)
-{
-       int ret;
-
-       ret = request_resource(res, &crashk_res);
-       if (!ret)
-               pr_info("Reserving %ldMB of memory at %ldMB for crashkernel\n",
-                       (unsigned long)((crashk_res.end -
-                               crashk_res.start + 1) >> 20),
-                       (unsigned long)(crashk_res.start  >> 20));
-}
-#else /* !defined(CONFIG_KEXEC)         */
-static void __init mips_parse_crashkernel(void)
-{
-}
-
-static void __init request_crashkernel(struct resource *res)
-{
-}
-#endif /* !defined(CONFIG_KEXEC)  */
-
 static void __init resource_init(void)
 {
        int i;
@@ -678,11 +680,6 @@ static void __init resource_init(void)
        data_resource.start = __pa_symbol(&_etext);
        data_resource.end = __pa_symbol(&_edata) - 1;
 
-       /*
-        * Request address space for all standard RAM.
-        */
-       mips_parse_crashkernel();
-
        for (i = 0; i < boot_mem_map.nr_map; i++) {
                struct resource *res;
                unsigned long start, end;
index fd3ef2c2afbc37732d9bedcb9fff59d1dda935ea..498723fde67d2c6b85b9dc96c937e6f90dfbd15e 100644
@@ -68,11 +68,17 @@ struct rt_sigframe {
 static int protected_save_fp_context(struct sigcontext __user *sc)
 {
        int err;
+#ifndef CONFIG_EVA
+       int err2;
+
        while (1) {
                lock_fpu_owner();
-               own_fpu_inatomic(1);
-               err = save_fp_context(sc); /* this might fail */
+               err2 = own_fpu_inatomic(1);
+               if (!err2)
+                       err = save_fp_context(sc); /* this might fail */
                unlock_fpu_owner();
+               if (err2)
+                       err = fpu_emulator_save_context(sc);
                if (likely(!err))
                        break;
                /* touch the sigcontext and try again */
@@ -82,17 +88,27 @@ static int protected_save_fp_context(struct sigcontext __user *sc)
                if (err)
                        break;  /* really bad sigcontext */
        }
+#else
+       lose_fpu(1);
+       err = save_fp_context(sc); /* this might fail */
+#endif  /* CONFIG_EVA */
        return err;
 }
 
 static int protected_restore_fp_context(struct sigcontext __user *sc)
 {
        int err, tmp __maybe_unused;
+#ifndef CONFIG_EVA
+       int err2;
+
        while (1) {
                lock_fpu_owner();
-               own_fpu_inatomic(0);
-               err = restore_fp_context(sc); /* this might fail */
+               err2 = own_fpu_inatomic(0);
+               if (!err2)
+                       err = restore_fp_context(sc); /* this might fail */
                unlock_fpu_owner();
+               if (err2)
+                       err = fpu_emulator_restore_context(sc);
                if (likely(!err))
                        break;
                /* touch the sigcontext and try again */
@@ -102,6 +118,10 @@ static int protected_restore_fp_context(struct sigcontext __user *sc)
                if (err)
                        break;  /* really bad sigcontext */
        }
+#else
+       lose_fpu(0);
+       err = restore_fp_context(sc); /* this might fail */
+#endif  /* CONFIG_EVA */
        return err;
 }
 
@@ -584,6 +604,7 @@ asmlinkage void do_notify_resume(struct pt_regs *regs, void *unused,
 }
 
 #ifdef CONFIG_SMP
+#ifndef CONFIG_EVA
 static int smp_save_fp_context(struct sigcontext __user *sc)
 {
        return raw_cpu_has_fpu
@@ -598,9 +619,11 @@ static int smp_restore_fp_context(struct sigcontext __user *sc)
               : fpu_emulator_restore_context(sc);
 }
 #endif
+#endif
 
 static int signal_setup(void)
 {
+#ifndef CONFIG_EVA
 #ifdef CONFIG_SMP
        /* For now just do the cpu_has_fpu check when the functions are invoked */
        save_fp_context = smp_save_fp_context;
@@ -613,7 +636,11 @@ static int signal_setup(void)
                save_fp_context = fpu_emulator_save_context;
                restore_fp_context = fpu_emulator_restore_context;
        }
-#endif
+#endif /* CONFIG_SMP */
+#else
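+       /*
+        * Under EVA the assembler save/restore paths cannot reach user
+        * memory with ordinary loads/stores, so the sigcontext is always
+        * copied via the emulator helpers, which use EVA-aware accessors.
+        */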
+       save_fp_context = fpu_emulator_save_context;
+       restore_fp_context = fpu_emulator_restore_context;
+#endif /* CONFIG_EVA */
 
        return 0;
 }
index 57de8b751627be4eb57b21f124525a8521e1f5ea..a1169200c8fdfae673be59a33334da197f0ea106 100644
@@ -83,11 +83,17 @@ struct rt_sigframe32 {
 static int protected_save_fp_context32(struct sigcontext32 __user *sc)
 {
        int err;
+#ifndef CONFIG_EVA
+       int err2;
+
        while (1) {
                lock_fpu_owner();
-               own_fpu_inatomic(1);
-               err = save_fp_context32(sc); /* this might fail */
+               err2 = own_fpu_inatomic(1);
+               if (!err2)
+                       err = save_fp_context32(sc); /* this might fail */
                unlock_fpu_owner();
+               if (err2)
+                       err = fpu_emulator_save_context32(sc);
                if (likely(!err))
                        break;
                /* touch the sigcontext and try again */
@@ -97,17 +103,27 @@ static int protected_save_fp_context32(struct sigcontext32 __user *sc)
                if (err)
                        break;  /* really bad sigcontext */
        }
+#else
+       lose_fpu(1);
+       err = save_fp_context32(sc); /* this might fail */
+#endif
        return err;
 }
 
 static int protected_restore_fp_context32(struct sigcontext32 __user *sc)
 {
        int err, tmp __maybe_unused;
+#ifndef CONFIG_EVA
+       int err2;
+
        while (1) {
                lock_fpu_owner();
-               own_fpu_inatomic(0);
-               err = restore_fp_context32(sc); /* this might fail */
+               err2 = own_fpu_inatomic(0);
+               if (!err2)
+                       err = restore_fp_context32(sc); /* this might fail */
                unlock_fpu_owner();
+               if (err2)
+                       err = fpu_emulator_restore_context32(sc);
                if (likely(!err))
                        break;
                /* touch the sigcontext and try again */
@@ -117,6 +133,10 @@ static int protected_restore_fp_context32(struct sigcontext32 __user *sc)
                if (err)
                        break;  /* really bad sigcontext */
        }
+#else
+       lose_fpu(0);
+       err = restore_fp_context32(sc); /* this might fail */
+#endif /* CONFIG_EVA */
        return err;
 }
 
@@ -558,8 +578,30 @@ struct mips_abi mips_abi_32 = {
        .restart        = __NR_O32_restart_syscall
 };
 
+#ifdef CONFIG_SMP
+static int smp_save_fp_context32(struct sigcontext32 __user *sc)
+{
+       return raw_cpu_has_fpu
+              ? _save_fp_context32(sc)
+              : fpu_emulator_save_context32(sc);
+}
+
+static int smp_restore_fp_context32(struct sigcontext32 __user *sc)
+{
+       return raw_cpu_has_fpu
+              ? _restore_fp_context32(sc)
+              : fpu_emulator_restore_context32(sc);
+}
+#endif
+
 static int signal32_init(void)
 {
+#ifndef CONFIG_EVA
+#ifdef CONFIG_SMP
+       /* For now just do the cpu_has_fpu check when the functions are invoked */
+       save_fp_context32 = smp_save_fp_context32;
+       restore_fp_context32 = smp_restore_fp_context32;
+#else
        if (cpu_has_fpu) {
                save_fp_context32 = _save_fp_context32;
                restore_fp_context32 = _restore_fp_context32;
@@ -567,6 +609,11 @@ static int signal32_init(void)
                save_fp_context32 = fpu_emulator_save_context32;
                restore_fp_context32 = fpu_emulator_restore_context32;
        }
+#endif /* CONFIG_SMP */
+#else
+       save_fp_context32 = fpu_emulator_save_context32;
+       restore_fp_context32 = fpu_emulator_restore_context32;
+#endif /* CONFIG_EVA */
 
        return 0;
 }
index c2e5d74739b49fa6dcd8be06dd6e51b7b71864e4..ab7f0acdf3faf460637b26daa91188549a820027 100644
@@ -24,6 +24,7 @@
 #include <linux/cpumask.h>
 #include <linux/interrupt.h>
 #include <linux/compiler.h>
+#include <linux/cpu.h>
 
 #include <linux/atomic.h>
 #include <asm/cacheflush.h>
 #include <asm/mips_mt.h>
 #include <asm/amon.h>
 #include <asm/gic.h>
+#include <asm/gcmpregs.h>
+#include <asm/bootinfo.h>
+#include <asm/irq_cpu.h>
+
+/*
+ * Keep _gcmp_base in its own cacheline: secondary CPUs may read it
+ * before they are cache-coherent, so it must not share a line with
+ * data that is being written.
+ */
+unsigned long _gcmp_base __cacheline_aligned;
+int gcmp_present __cacheline_aligned = -1;
 
 static void ipi_call_function(unsigned int cpu)
 {
@@ -87,23 +95,127 @@ static void cmp_send_ipi_mask(const struct cpumask *mask, unsigned int action)
                cmp_send_ipi_single(i, action);
 }
 
-static void cmp_init_secondary(void)
+#ifdef CONFIG_EVA
+static unsigned long bev_location = -1;
+
+static int rd_bev_location(char *p)
 {
-       struct cpuinfo_mips *c = &current_cpu_data;
+       if (p && strlen(p))
+               bev_location = memparse(p, &p);
+       else
+               bev_location = 0xbfc00000;
+       return 0;
+}
+early_param("force-bev-location", rd_bev_location);
+
+static void BEV_overlay_segment_map_check(unsigned long excBase,
+       unsigned long excMask, unsigned long excSize)
+{
+       unsigned long addr;
+
+       if ((excBase == (IO_BASE + IO_SHIFT)) && (excSize == IO_SIZE))
+               return;
+
+       printk("WARNING: BEV overlay segment doesn't fit whole I/O reg space, NMI/EJTAG/sRESET may not work\n");
+
+       if ((MAP_BASE < (excBase + excSize)) && (excBase < VMALLOC_END))
+               panic("BEV Overlay segment overlaps VMALLOC area\n");
+#ifdef CONFIG_HIGHMEM
+       if ((PKMAP_BASE < (excBase + excSize)) &&
+           (excBase < (PKMAP_BASE + (PAGE_SIZE*(LAST_PKMAP-1)))))
+               panic("BEV Overlay segment overlaps HIGHMEM/PKMAP area\n");
+#endif
+       for (addr = excBase; addr < (excBase + excSize); addr += PAGE_SIZE) {
+               if (page_is_ram(__pa(addr) >> PAGE_SHIFT))
+                       panic("BEV Overlay segment overlaps memory at %lx\n", addr);
+       }
+}
+
+void BEV_overlay_segment(void)
+{
+       unsigned long RExcBase;
+       unsigned long RExcExtBase;
+       unsigned long excBase;
+       unsigned long excMask;
+       unsigned long excSize;
+       unsigned long addr;
+       char *p;
+
+       printk("IO: BASE = 0x%lx, SHIFT = 0x%lx, SIZE = 0x%lx\n",IO_BASE, IO_SHIFT, IO_SIZE);
+       RExcBase = GCMPCLCB(RESETBASE);
+       RExcExtBase = GCMPCLCB(RESETBASEEXT);
+       printk("GCMP base addr = 0x%lx, CLB: ResetExcBase = 0x%lx, ResetExcExtBase = 0x%lx\n",
+               _gcmp_base,RExcBase,RExcExtBase);
+       if ( !(RExcExtBase & 0x1) )
+               return;
+
+       if (bev_location == -1) {
+               if ((p = strstr(arcs_cmdline, "force-bev-location")))
+                       rd_bev_location(p);
+       }
+       if (bev_location != -1) {
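+               /*
+                * Find the smallest naturally aligned power-of-two segment
+                * that covers both the requested BEV location and the whole
+                * I/O register space, growing the mask until both fit.
+                */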
+               addr = fls((IO_BASE + IO_SHIFT) ^ bev_location);
+nextSize:
+               if (addr > 28)
+                       panic("enforced BEV location is too far from I/O reg space\n");
+
+               excMask = (0xffffffffUL >> (32 - addr));
+               excBase = bev_location & ~excMask;
+               if (((IO_BASE + IO_SHIFT + IO_SIZE - 1) & ~excMask) != excBase) {
+                       addr++;
+                       goto nextSize;
+               }
+               excSize = ((excBase | excMask) + 1) - excBase;
+               printk("Setting BEV = 0x%lx, Overlay segment = 0x%lx, size = 0x%lx\n",
+                       bev_location, excBase, excSize);
+
+               BEV_overlay_segment_map_check(excBase, excMask, excSize);
+
+               GCMPCLCB(RESETBASEEXT) = (GCMPCLCB(RESETBASEEXT) &
+                       ~GCMP_CCB_RESETEXTBASE_BEV_MASK_MSK) |
+                       (excMask & GCMP_CCB_RESETEXTBASE_BEV_MASK_MSK);
+               GCMPCLCB(RESETBASE) = (GCMPCLCB(RESETBASE) & ~GCMP_CCB_RESETBASE_BEV_MSK) |
+                       bev_location;
+               RExcBase = GCMPCLCB(RESETBASE);
+               RExcExtBase = GCMPCLCB(RESETBASEEXT);
+
+               return;
+       }
 
-       /* Assume GIC is present */
-       change_c0_status(ST0_IM, STATUSF_IP3 | STATUSF_IP4 | STATUSF_IP6 |
-                                STATUSF_IP7);
+       excBase = RExcBase & GCMP_CCB_RESETBASE_BEV_MSK;
+       excMask = (RExcExtBase & GCMP_CCB_RESETEXTBASE_BEV_MASK_MSK) |
+                   GCMP_CCB_RESETEXTBASE_BEV_MASK_LOWBITS;
+       excBase &= ~excMask;
+       excSize = ((excBase | excMask) + 1) - excBase;
+       printk("BEV Overlay segment = 0x%lx, size = 0x%lx\n",excBase, excSize);
 
-       /* Enable per-cpu interrupts: platform specific */
+       BEV_overlay_segment_map_check(excBase, excMask, excSize);
+}
+#endif
+
+static void cmp_init_secondary(void)
+{
+       struct cpuinfo_mips *c = &current_cpu_data;
 
-       c->core = (read_c0_ebase() >> 1) & 0x1ff;
+       if (!cpu_has_veic) {
+               set_c0_status(mips_smp_c0_status_mask);
+               back_to_back_c0_hazard();
+               printk("CPU%d: status register %08x\n", smp_processor_id(), read_c0_status());
+       }
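+       /*
+        * EBase.CPUNum numbers VPEs, not cores; shift out the
+        * VPEs-per-core (smp_num_siblings) bits to get the core number.
+        */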
+       c->core = (read_c0_ebase() & 0x3ff) >> (fls(smp_num_siblings)-1);
 #if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_MIPS_MT_SMTC)
-       c->vpe_id = (read_c0_tcbind() >> TCBIND_CURVPE_SHIFT) & TCBIND_CURVPE;
+       if (cpu_has_mipsmt)
+               c->vpe_id = (read_c0_tcbind() >> TCBIND_CURVPE_SHIFT) &
+                       TCBIND_CURVPE;
 #endif
 #ifdef CONFIG_MIPS_MT_SMTC
        c->tc_id  = (read_c0_tcbind() & TCBIND_CURTC) >> TCBIND_CURTC_SHIFT;
 #endif
+
+#ifdef CONFIG_EVA
+       if (gcmp_present)
+               BEV_overlay_segment();
+#endif
 }
 
 static void cmp_smp_finish(void)
@@ -145,7 +257,7 @@ static void cmp_boot_secondary(int cpu, struct task_struct *idle)
 
 #if 0
        /* Needed? */
-       flush_icache_range((unsigned long)gp,
+       local_flush_icache_range((unsigned long)gp,
                           (unsigned long)(gp + sizeof(struct thread_info)));
 #endif
 
@@ -177,9 +289,16 @@ void __init cmp_smp_setup(void)
        }
 
        if (cpu_has_mipsmt) {
-               unsigned int nvpe, mvpconf0 = read_c0_mvpconf0();
+               unsigned int nvpe = 1;
+#ifdef CONFIG_MIPS_MT_SMP
+               unsigned int mvpconf0 = read_c0_mvpconf0();
+
+               nvpe = ((mvpconf0 & MVPCONF0_PVPE) >> MVPCONF0_PVPE_SHIFT) + 1;
+#elif defined(CONFIG_MIPS_MT_SMTC)
+               unsigned int mvpconf0 = read_c0_mvpconf0();
 
                nvpe = ((mvpconf0 & MVPCONF0_PTC) >> MVPCONF0_PTC_SHIFT) + 1;
+#endif
                smp_num_siblings = nvpe;
        }
        pr_info("Detected %i available secondary CPU(s)\n", ncpu);
@@ -207,3 +326,280 @@ struct plat_smp_ops cmp_smp_ops = {
        .smp_setup              = cmp_smp_setup,
        .prepare_cpus           = cmp_prepare_cpus,
 };
+
+/*
+ * GCMP needs to be detected before any SMP initialisation
+ */
+int __init gcmp_probe(unsigned long addr, unsigned long size)
+{
+       unsigned long confaddr = 0;
+
+       if (gcmp_present >= 0)
+               return gcmp_present;
+
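+       /*
+        * Probe order: trust the CP0 CMGCRBase register first; if it
+        * points at a live GCR block, try to move the block to the
+        * requested 'addr' and fall back to the CMGCRBase mapping when
+        * the relocation does not stick.  Otherwise probe 'addr' as is.
+        */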
+       if (cpu_has_mips_r2 && (read_c0_config3() & MIPS_CONF3_CMGCR)) {
+               /* try CMGCRBase */
+               confaddr = read_c0_cmgcrbase() << 4;
+               _gcmp_base = (unsigned long) ioremap_nocache(confaddr, size);
+               gcmp_present = ((GCMPGCB(GCMPB) & GCMP_GCB_GCMPB_GCMPBASE_MSK) == confaddr) ? 1 : 0;
+               if (gcmp_present) {
+                       /* reassign it to 'addr' */
+                       if (addr != confaddr)
+                               GCMPGCB(GCMPB) = (GCMPGCB(GCMPB) & ~GCMP_GCB_GCMPB_GCMPBASE_MSK) | addr;
+                       _gcmp_base = (unsigned long) ioremap_nocache(addr, size);
+                       gcmp_present = ((GCMPGCB(GCMPB) & GCMP_GCB_GCMPB_GCMPBASE_MSK) == addr) ? 1 : 0;
+                       confaddr = addr;
+                       if (!gcmp_present) {
+                               /* reassignment failed, try CMGCRBase again */
+                               confaddr = read_c0_cmgcrbase() << 4;
+                               _gcmp_base = (unsigned long) ioremap_nocache(confaddr, size);
+                               gcmp_present =
+                                       ((GCMPGCB(GCMPB) & GCMP_GCB_GCMPB_GCMPBASE_MSK) == confaddr) ? 1 : 0;
+                       }
+               }
+       }
+       if (addr && (gcmp_present <= 0)) {
+               /* try addr */
+               _gcmp_base = (unsigned long) ioremap_nocache(addr, size);
+               gcmp_present = ((GCMPGCB(GCMPB) & GCMP_GCB_GCMPB_GCMPBASE_MSK) == addr) ? 1 : 0;
+               confaddr = addr;
+       }
+
+       if (gcmp_present > 0) {
+               printk("GCMP available\n");
+               if (((GCMPGCB(GCMPREV) & GCMP_GCB_GCMPREV_MAJOR_MSK) >>
+                     GCMP_GCB_GCMPREV_MAJOR_SHF) >= 6)
+                       cpu_data[0].options |= MIPS_CPU_CM2;
+               if (cpu_has_cm2 && (size > 0x8000)) {
+                       GCMPGCB(GCML2S) = (confaddr + 0x8000) | GCMP_GCB_GCML2S_EN_MSK;
+                       cpu_data[0].options |= MIPS_CPU_CM2_L2SYNC;
+                       printk("L2-only SYNC available\n");
+               }
+               return gcmp_present;
+       }
+
+       gcmp_present = 0;
+       return gcmp_present;
+}
+
+/* Return the number of IOCU's present */
+int __init gcmp_niocu(void)
+{
+       return (gcmp_present > 0) ?
+               (GCMPGCB(GC) & GCMP_GCB_GC_NUMIOCU_MSK) >> GCMP_GCB_GC_NUMIOCU_SHF :
+               0;
+}
+
+/* Set GCMP region attributes */
+void __init gcmp_setregion(int region, unsigned long base,
+                         unsigned long mask, int type)
+{
+       GCMPGCBn(CMxBASE, region) = base;
+       GCMPGCBn(CMxMASK, region) = mask | type;
+}
+
+#ifdef CONFIG_SYSFS
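+/*
+ * Expose the CM GCR state through sysfs: a 'gcr_global' attribute on
+ * the "gcmp" system subsystem plus one 'gcr_local' attribute per core
+ * and per IOCU, i.e. under /sys/devices/system/gcmp/.
+ */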
+static ssize_t show_gcr_global(struct device *dev,
+                              struct device_attribute *attr, char *buf)
+{
+       int n = 0;
+
+       n = snprintf(buf, PAGE_SIZE,
+               "Global Config Register\t\t\t%08x\n"
+               "GCR Base Register\t\t\t%08x\n"
+               "Global CM Control Register\t\t%08x\n"
+               "Global CM Control2 Register\t\t%08x\n"
+               "Global CSR Access Privilege Register\t%08x\n"
+               "GCR Revision Register\t\t\t%08x\n"
+               "Global CM Error Mask Register\t\t%08x\n"
+               "Global CM Error Cause Register\t\t%08x\n"
+               "Global CM Error Address Register\t%08x\n"
+               "Global CM Error Multiple Register\t%08x\n"
+               "GCR Custom Base Register\t\t%08x\n"
+               "GCR Custom Status Register\t\t%x\n"
+               "Global L2 only Sync Register\t\t%08x\n"
+               "GIC Base Address Register\t\t%08x\n"
+               "CPC Base Address Register\t\t%08x\n"
+               "Region0 Base Address Register\t\t%08x,\tMask\t%08x\n"
+               "Region1 Base Address Register\t\t%08x,\tMask\t%08x\n"
+               "Region2 Base Address Register\t\t%08x,\tMask\t%08x\n"
+               "Region3 Base Address Register\t\t%08x,\tMask\t%08x\n"
+               "GIC Status Register\t\t\t%x\n"
+               "Cache Revision Register\t\t\t%08x\n"
+               "CPC Status Register\t\t\t%x\n"
+               "Attribute Region0 Base Address Register\t%08x,\tMask\t%08x\n"
+               "Attribute Region0 Base Address Register\t%08x,\tMask\t%08x\n"
+               "IOCU Revision Register\t\t\t%08x\n"
+               "Attribute Region0 Base Address Register\t%08x,\tMask\t%08x\n"
+               "Attribute Region0 Base Address Register\t%08x,\tMask\t%08x\n"
+               ,
+               GCMPGCB(GC),
+               GCMPGCB(GCMPB),
+               GCMPGCB(GCMC),
+               GCMPGCB(GCMC2),
+               GCMPGCB(GCSRAP),
+               GCMPGCB(GCMPREV),
+               GCMPGCB(GCMEM),
+               GCMPGCB(GCMEC),
+               GCMPGCB(GCMEA),
+               GCMPGCB(GCMEO),
+               GCMPGCB(GCMCUS),
+               GCMPGCB(GCMCST),
+               GCMPGCB(GCML2S),
+               GCMPGCB(GICBA),
+               GCMPGCB(CPCBA),
+               GCMPGCBn(CMxBASE, 0),   GCMPGCBn(CMxMASK, 0),
+               GCMPGCBn(CMxBASE, 1),   GCMPGCBn(CMxMASK, 1),
+               GCMPGCBn(CMxBASE, 2),   GCMPGCBn(CMxMASK, 2),
+               GCMPGCBn(CMxBASE, 3),   GCMPGCBn(CMxMASK, 3),
+               GCMPGCB(GICST),
+               GCMPGCB(GCSHREV),
+               GCMPGCB(CPCST),
+               GCMPGCB(GAOR0BA),       GCMPGCB(GAOR0MASK),
+               GCMPGCB(GAOR1BA),       GCMPGCB(GAOR1MASK),
+               GCMPGCB(IOCUREV),
+               GCMPGCB(GAOR2BA),       GCMPGCB(GAOR2MASK),
+               GCMPGCB(GAOR3BA),       GCMPGCB(GAOR3MASK)
+       );
+
+       return n;
+}
+
+static ssize_t show_gcr_local(struct device *dev,
+                             struct device_attribute *attr, char *buf)
+{
+       int n = 0;
+
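+       /* Point the core-other window at the core/IOCU this device represents. */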
+       GCMPCLCB(OTHER) = (dev->id) << 16;
+
+       n += snprintf(buf+n, PAGE_SIZE-n,
+               "Local Reset Release Register\t\t%08x\n"
+               "Local Coherence Control Register\t%08x\n"
+               "Local Config Register\t\t\t%08x\n"
+               "Other Addressing Register\t\t%08x\n"
+               "Local Reset Exception Base Register\t%08x\n"
+               "Local Identification Register\t\t%08x\n"
+               "Local Reset Exception Extended Base\t%08x\n"
+               "Local TCID PRIORITYs\t\t\t%08x %08x %08x %08x\n"
+               "\t\t\t\t\t%08x %08x %08x %08x %08x\n"
+               ,
+               GCMPCOCB(RESETR),
+               GCMPCOCB(COHCTL),
+               GCMPCOCB(CFG),
+               GCMPCOCB(OTHER),
+               GCMPCOCB(RESETBASE),
+               GCMPCOCB(ID),
+               GCMPCOCB(RESETBASEEXT),
+               GCMPCOCBn(TCIDxPRI, 0), GCMPCOCBn(TCIDxPRI, 1),
+               GCMPCOCBn(TCIDxPRI, 2), GCMPCOCBn(TCIDxPRI, 3),
+               GCMPCOCBn(TCIDxPRI, 4), GCMPCOCBn(TCIDxPRI, 5),
+               GCMPCOCBn(TCIDxPRI, 6), GCMPCOCBn(TCIDxPRI, 7),
+               GCMPCOCBn(TCIDxPRI, 8)
+       );
+
+       return n;
+}
+
+static DEVICE_ATTR(gcr_global, 0444, show_gcr_global, NULL);
+static DEVICE_ATTR(gcr_local, 0444, show_gcr_local, NULL);
+
+static struct bus_type gcmp_subsys = {
+       .name = "gcmp",
+       .dev_name = "gcmp",
+};
+
+static int __cpuinit gcmp_add_core(int cpu)
+{
+       struct device *dev;
+       int err;
+       char name[16];
+
+       dev = kzalloc(sizeof *dev, GFP_KERNEL);
+       if (!dev)
+               return -ENOMEM;
+
+       dev->id = cpu;
+       dev->bus = &gcmp_subsys;
+       snprintf(name, sizeof name, "core%d",cpu);
+       dev->init_name = name;
+
+       err = device_register(dev);
+       if (err) {
+               put_device(dev);
+               return err;
+       }
+
+       err = device_create_file(dev, &dev_attr_gcr_local);
+       if (err)
+               return err;
+
+       return 0;
+}
+
+static int __cpuinit gcmp_add_iocu(int cpu, int totcpu)
+{
+       struct device *dev;
+       int err;
+       char name[16];
+
+       dev = kzalloc(sizeof *dev, GFP_KERNEL);
+       if (!dev)
+               return -ENOMEM;
+
+       /* Ask Tom Berg @ IMGTec about more generic formula. LY22 */
+       if (totcpu <= 4)
+               totcpu = 4;
+       else
+               totcpu = 6;
+
+       dev->id = cpu + totcpu;
+       dev->bus = &gcmp_subsys;
+       snprintf(name, sizeof name, "iocu%d",cpu);
+       dev->init_name = name;
+
+       err = device_register(dev);
+       if (err) {
+               put_device(dev);
+               return err;
+       }
+
+       err = device_create_file(dev, &dev_attr_gcr_local);
+       if (err)
+               return err;
+
+       return 0;
+}
+
+static int __init init_gcmp_sysfs(void)
+{
+       int rc;
+       int cpuN, iocuN;
+       int cpu;
+
+       if (gcmp_present <= 0)
+               return 0;
+
+       rc = subsys_system_register(&gcmp_subsys, NULL);
+       if (rc)
+               return rc;
+
+       rc = device_create_file(gcmp_subsys.dev_root, &dev_attr_gcr_global);
+       if (rc)
+               return rc;
+
+       cpuN = ((GCMPGCB(GC) & GCMP_GCB_GC_NUMCORES_MSK) >> GCMP_GCB_GC_NUMCORES_SHF) + 1;
+       for (cpu = 0; cpu < cpuN; cpu++) {
+               rc = gcmp_add_core(cpu);
+               if (rc)
+                       return rc;
+       }
+
+       iocuN = ((GCMPGCB(GC) & GCMP_GCB_GC_NUMIOCU_MSK) >> GCMP_GCB_GC_NUMIOCU_SHF);
+       for (cpu = 0; cpu < iocuN; cpu++) {
+               rc = gcmp_add_iocu(cpu, cpuN);
+               if (rc)
+                       return rc;
+       }
+
+       return 0;
+}
+
+device_initcall_sync(init_gcmp_sysfs);
+
+#endif /* CONFIG_SYSFS */
index 3e5164c11cacabe66ea021cb29c979b93d920e00..985e2634bf75c01054ac7728a9b63c0d28926786 100644
@@ -35,6 +35,7 @@
 #include <asm/mipsmtregs.h>
 #include <asm/mips_mt.h>
 #include <asm/gic.h>
+#include <asm/irq_cpu.h>
 
 static void __init smvp_copy_vpe_config(void)
 {
@@ -71,6 +72,7 @@ static unsigned int __init smvp_vpe_init(unsigned int tc, unsigned int mvpconf0,
 
                /* Record this as available CPU */
                set_cpu_possible(tc, true);
+               set_cpu_present(tc, true);
                __cpu_number_map[tc]    = ++ncpu;
                __cpu_logical_map[ncpu] = tc;
        }
@@ -112,12 +114,35 @@ static void __init smvp_tc_init(unsigned int tc, unsigned int mvpconf0)
        write_tc_c0_tchalt(TCHALT_H);
 }
 
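+/* Deliver an IPI via the GIC, translating the CPU to its GIC interrupt. */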
+static void mp_send_ipi_single(int cpu, unsigned int action)
+{
+       unsigned long flags;
+
+       local_irq_save(flags);
+
+       switch (action) {
+       case SMP_CALL_FUNCTION:
+               gic_send_ipi(plat_ipi_call_int_xlate(cpu));
+               break;
+
+       case SMP_RESCHEDULE_YOURSELF:
+               gic_send_ipi(plat_ipi_resched_int_xlate(cpu));
+               break;
+       }
+
+       local_irq_restore(flags);
+}
+
 static void vsmp_send_ipi_single(int cpu, unsigned int action)
 {
        int i;
        unsigned long flags;
        int vpflags;
 
+       if (gic_present) {
+               mp_send_ipi_single(cpu, action);
+               return;
+       }
        local_irq_save(flags);
 
        vpflags = dvpe();       /* can't access the other CPU's registers whilst MVPE enabled */
@@ -151,19 +176,24 @@ static void vsmp_send_ipi_mask(const struct cpumask *mask, unsigned int action)
 
 static void __cpuinit vsmp_init_secondary(void)
 {
-#ifdef CONFIG_IRQ_GIC
-       /* This is Malta specific: IPI,performance and timer interrupts */
-       if (gic_present)
-               change_c0_status(ST0_IM, STATUSF_IP3 | STATUSF_IP4 |
-                                        STATUSF_IP6 | STATUSF_IP7);
-       else
+       struct cpuinfo_mips *c = &current_cpu_data;
+
+       if (!cpu_has_veic) {
+               set_c0_status(mips_smp_c0_status_mask);
+               back_to_back_c0_hazard();
+               printk("CPU%d: status register %08x\n", smp_processor_id(), read_c0_status());
+       }
+       c->core = (read_c0_ebase() & 0x3ff) >> (fls(smp_num_siblings) - 1);
+#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_MIPS_MT_SMTC)
+       if (cpu_has_mipsmt)
+               c->vpe_id = (read_c0_tcbind() >> TCBIND_CURVPE_SHIFT) & TCBIND_CURVPE;
 #endif
-               change_c0_status(ST0_IM, STATUSF_IP0 | STATUSF_IP1 |
-                                        STATUSF_IP6 | STATUSF_IP7);
 }
 
 static void __cpuinit vsmp_smp_finish(void)
 {
+       pr_debug("SMPMT: CPU%d: vsmp_smp_finish\n", smp_processor_id());
+
        /* CDFIXME: remove this? */
        write_c0_compare(read_c0_count() + (8* mips_hpt_frequency/HZ));
 
@@ -178,6 +208,7 @@ static void __cpuinit vsmp_smp_finish(void)
 
 static void vsmp_cpus_done(void)
 {
+       pr_debug("SMPMT: CPU%d: vsmp_cpus_done\n", smp_processor_id());
 }
 
 /*
@@ -191,6 +222,8 @@ static void vsmp_cpus_done(void)
 static void __cpuinit vsmp_boot_secondary(int cpu, struct task_struct *idle)
 {
        struct thread_info *gp = task_thread_info(idle);
+       pr_debug("SMPMT: CPU%d: vsmp_boot_secondary cpu %d\n",
+               smp_processor_id(), cpu);
        dvpe();
        set_c0_mvpcontrol(MVPCONTROL_VPC);
 
@@ -213,8 +246,8 @@ static void __cpuinit vsmp_boot_secondary(int cpu, struct task_struct *idle)
        /* global pointer */
        write_tc_gpr_gp((unsigned long)gp);
 
-       flush_icache_range((unsigned long)gp,
-                          (unsigned long)(gp + sizeof(struct thread_info)));
+       local_flush_icache_range((unsigned long)gp,
+                       (unsigned long)(gp + sizeof(struct thread_info)));
 
        /* finally out of configuration and into chaos */
        clear_c0_mvpcontrol(MVPCONTROL_VPC);
@@ -232,6 +265,7 @@ static void __init vsmp_smp_setup(void)
        unsigned int mvpconf0, ntc, tc, ncpu = 0;
        unsigned int nvpe;
 
+       pr_debug("SMPMT: CPU%d: vsmp_smp_setup\n", smp_processor_id());
 #ifdef CONFIG_MIPS_MT_FPAFF
        /* If we have an FPU, enroll ourselves in the FPU-full mask */
        if (cpu_has_fpu)
@@ -272,6 +306,8 @@ static void __init vsmp_smp_setup(void)
 
 static void __init vsmp_prepare_cpus(unsigned int max_cpus)
 {
+       pr_debug("SMPMT: CPU%d: vsmp_prepare_cpus %d\n",
+               smp_processor_id(), max_cpus);
        mips_mt_set_cpuoptions();
 }
 
index 6e7862ab46cc4a6fef3c31e1ade85e04b357824f..54c2046e11740c8516e2bcaed763aaa4988de92e 100644
@@ -60,12 +60,17 @@ int smp_num_siblings = 1;
 EXPORT_SYMBOL(smp_num_siblings);
 
 /* representing the TCs (or siblings in Intel speak) of each logical CPU */
-cpumask_t cpu_sibling_map[NR_CPUS] __read_mostly;
-EXPORT_SYMBOL(cpu_sibling_map);
+DEFINE_PER_CPU_SHARED_ALIGNED(struct cpumask, cpu_sibling_map);
+EXPORT_PER_CPU_SYMBOL(cpu_sibling_map);
 
 /* representing cpus for which sibling maps can be computed */
 static cpumask_t cpu_sibling_setup_map;
 
+/*
+ * CPU siblings in MIPS:
+ *
+ *     SMVP kernel - VPEs on a common core are siblings
+ *     SMTC kernel - TCs on a common core are siblings
+ */
 static inline void set_cpu_sibling_map(int cpu)
 {
        int i;
@@ -75,12 +80,12 @@ static inline void set_cpu_sibling_map(int cpu)
        if (smp_num_siblings > 1) {
                for_each_cpu_mask(i, cpu_sibling_setup_map) {
                        if (cpu_data[cpu].core == cpu_data[i].core) {
-                               cpu_set(i, cpu_sibling_map[cpu]);
-                               cpu_set(cpu, cpu_sibling_map[i]);
+                               cpu_set(i, per_cpu(cpu_sibling_map, cpu));
+                               cpu_set(cpu, per_cpu(cpu_sibling_map, i));
                        }
                }
        } else
-               cpu_set(cpu, cpu_sibling_map[cpu]);
+               cpu_set(cpu, per_cpu(cpu_sibling_map, cpu));
 }
 
 struct plat_smp_ops *mp_ops;
index 6af08d896e20bdfd3ffc03bd7b03c99d6e7246d1..39091a62e1d17119c2ae17ff8c8130ca1b54c3e8 100644
@@ -204,8 +204,10 @@ void __cpuinit spram_config(void)
        switch (c->cputype) {
        case CPU_24K:
        case CPU_34K:
-       case CPU_74K:
        case CPU_1004K:
+       case CPU_74K:
+       case CPU_PROAPTIV:
+       case CPU_INTERAPTIV:
                config0 = read_c0_config();
                /* FIXME: addresses are Malta specific */
                if (config0 & (1<<24)) {
index 9d686bf97b0e35a598dfe3c5711553b5148d8329..364d26ae42152ea89f60e5fa18b653daed803e17 100644
@@ -121,6 +121,14 @@ void __init time_init(void)
 {
        plat_time_init();
 
-       if (!mips_clockevent_init() || !cpu_has_mfc0_count_bug())
+       /*
+        * The use of the R4k timer as a clock event takes precedence;
+        * if reading the Count register might interfere with the timer
+        * interrupt, then we don't use the timer as a clock source.
+        * We may still use the timer as a clock source though if the
+        * timer interrupt isn't reliable; the interference doesn't
+        * matter then, because we don't use the interrupt.
+        */
+       if (mips_clockevent_init() != 0 || !cpu_has_mfc0_count_bug())
                init_mips_clocksource();
 }
index a75ae40184aa3a5d35e08668e71022230ee087bc..83a4882141a7c5f71373b843beb745add47c853a 100644
@@ -76,6 +76,7 @@ extern asmlinkage void handle_cpu(void);
 extern asmlinkage void handle_ov(void);
 extern asmlinkage void handle_tr(void);
 extern asmlinkage void handle_fpe(void);
+extern asmlinkage void handle_ftlb(void);
 extern asmlinkage void handle_mdmx(void);
 extern asmlinkage void handle_watch(void);
 extern asmlinkage void handle_mt(void);
@@ -328,6 +329,7 @@ void show_regs(struct pt_regs *regs)
 void show_registers(struct pt_regs *regs)
 {
        const int field = 2 * sizeof(unsigned long);
+       mm_segment_t old_fs = get_fs();
 
        __show_regs(regs);
        print_modules();
@@ -342,9 +344,12 @@ void show_registers(struct pt_regs *regs)
                        printk("*HwTLS: %0*lx\n", field, tls);
        }
 
+       if (!user_mode(regs))
+               set_fs(KERNEL_DS);
        show_stacktrace(current, regs);
        show_code((unsigned int __user *) regs->cp0_epc);
        printk("\n");
+       set_fs(old_fs);
 }
 
 static int regs_to_trapnr(struct pt_regs *regs)
@@ -837,6 +842,13 @@ asmlinkage void do_bp(struct pt_regs *regs)
        unsigned int opcode, bcode;
        unsigned long epc;
        u16 instr[2];
+#ifdef CONFIG_EVA
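+       /*
+        * Under EVA the kernel no longer shares the user address view, so
+        * fetching the breakpoint instruction from a kernel-mode EPC needs
+        * KERNEL_DS for the __get_user()-based accessors below.
+        */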
+       mm_segment_t seg;
+
+       seg = get_fs();
+       if (!user_mode(regs))
+               set_fs(KERNEL_DS);
+#endif
 
        if (get_isa16_mode(regs->cp0_epc)) {
                /* Calculate EPC. */
@@ -852,6 +864,9 @@ asmlinkage void do_bp(struct pt_regs *regs)
                                goto out_sigsegv;
                    bcode = (instr[0] >> 6) & 0x3f;
                    do_trap_or_bp(regs, bcode, "Break");
+#ifdef CONFIG_EVA
+                   set_fs(seg);
+#endif
                    return;
                }
        } else {
@@ -875,23 +890,35 @@ asmlinkage void do_bp(struct pt_regs *regs)
         */
        switch (bcode) {
        case BRK_KPROBE_BP:
-               if (notify_die(DIE_BREAK, "debug", regs, bcode, regs_to_trapnr(regs), SIGTRAP) == NOTIFY_STOP)
+               if (notify_die(DIE_BREAK, "debug", regs, bcode, regs_to_trapnr(regs), SIGTRAP) == NOTIFY_STOP) {
+#ifdef CONFIG_EVA
+                       set_fs(seg);
+#endif
                        return;
-               else
+               } else
                        break;
        case BRK_KPROBE_SSTEPBP:
-               if (notify_die(DIE_SSTEPBP, "single_step", regs, bcode, regs_to_trapnr(regs), SIGTRAP) == NOTIFY_STOP)
+               if (notify_die(DIE_SSTEPBP, "single_step", regs, bcode, regs_to_trapnr(regs), SIGTRAP) == NOTIFY_STOP) {
+#ifdef CONFIG_EVA
+                       set_fs(seg);
+#endif
                        return;
-               else
+               } else
                        break;
        default:
                break;
        }
 
        do_trap_or_bp(regs, bcode, "Break");
+#ifdef CONFIG_EVA
+       set_fs(seg);
+#endif
        return;
 
 out_sigsegv:
+#ifdef CONFIG_EVA
+       set_fs(seg);
+#endif
        force_sig(SIGSEGV, current);
 }
 
@@ -900,6 +927,13 @@ asmlinkage void do_tr(struct pt_regs *regs)
        u32 opcode, tcode = 0;
        u16 instr[2];
        unsigned long epc = msk_isa16_mode(exception_epc(regs));
+#ifdef CONFIG_EVA
+       mm_segment_t seg;
+
+       seg = get_fs();
+       if (!user_mode(regs))
+               set_fs(KERNEL_DS);
+#endif
 
        if (get_isa16_mode(regs->cp0_epc)) {
                if (__get_user(instr[0], (u16 __user *)(epc + 0)) ||
@@ -1114,20 +1148,28 @@ asmlinkage void do_cpu(struct pt_regs *regs)
                /* Fall through.  */
 
        case 1:
-               if (used_math())        /* Using the FPU again.  */
-                       own_fpu(1);
-               else {                  /* First time FPU user.  */
-                       init_fpu();
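+               /*
+                * own_fpu()/init_fpu() may now fail when the hardware FPU
+                * cannot provide the FP mode the task requires (e.g. FR=1
+                * for -mfp64 binaries); a non-zero status routes the
+                * instruction to the FPU emulator instead.
+                */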
+               status = 0;
+               if (used_math())        /* Using the FPU again.  */
+                       status = own_fpu(1);
+               else {                  /* First time FPU user.  */
+                       status = init_fpu();
+#ifndef CONFIG_MIPS_INCOMPATIBLE_FPU_EMULATION
+                       if (status) {
+                               force_sig(SIGFPE, current);
+                               return;
+                       }
+#endif
+
                        set_used_math();
                }
 
-               if (!raw_cpu_has_fpu) {
+               if ((!raw_cpu_has_fpu) || status) {
                        int sig;
                        void __user *fault_addr = NULL;
                        sig = fpu_emulator_cop1Handler(regs,
                                                       &current->thread.fpu,
                                                       0, &fault_addr);
-                       if (!process_fpemu_return(sig, fault_addr))
+                       if ((!process_fpemu_return(sig, fault_addr)) && !status)
                                mt_ase_fp_affinity();
                }
 
@@ -1286,6 +1328,8 @@ static inline void parity_protection_init(void)
        case CPU_34K:
        case CPU_74K:
        case CPU_1004K:
+       case CPU_PROAPTIV:
+       case CPU_INTERAPTIV:
                {
 #define ERRCTL_PE      0x80000000
 #define ERRCTL_L2P     0x00800000
@@ -1367,22 +1411,26 @@ asmlinkage void cache_parity_error(void)
        unsigned int reg_val;
 
        /* For the moment, report the problem and hang. */
-       printk("Cache error exception:\n");
+       printk("Cache error exception, cp0_ecc=0x%08x:\n",read_c0_ecc());
        printk("cp0_errorepc == %0*lx\n", field, read_c0_errorepc());
        reg_val = read_c0_cacheerr();
        printk("c0_cacheerr == %08x\n", reg_val);
 
-       printk("Decoded c0_cacheerr: %s cache fault in %s reference.\n",
-              reg_val & (1<<30) ? "secondary" : "primary",
-              reg_val & (1<<31) ? "data" : "insn");
-       printk("Error bits: %s%s%s%s%s%s%s\n",
+       if ((reg_val & 0xc0000000) == 0xc0000000)
+               printk("Decoded c0_cacheerr: FTLB parity error\n");
+       else
+               printk("Decoded c0_cacheerr: %s cache fault in %s reference.\n",
+                       reg_val & (1<<30) ? "secondary" : "primary",
+                       reg_val & (1<<31) ? "data" : "insn");
+       printk("Error bits: %s%s%s%s%s%s%s%s\n",
               reg_val & (1<<29) ? "ED " : "",
               reg_val & (1<<28) ? "ET " : "",
+              reg_val & (1<<27) ? "ES " : "",
               reg_val & (1<<26) ? "EE " : "",
               reg_val & (1<<25) ? "EB " : "",
-              reg_val & (1<<24) ? "EI " : "",
-              reg_val & (1<<23) ? "E1 " : "",
-              reg_val & (1<<22) ? "E0 " : "");
+              reg_val & (1<<24) ? "EI/EF " : "",
+              reg_val & (1<<23) ? "E1/SP " : "",
+              reg_val & (1<<22) ? "E0/EW " : "");
        printk("IDX: 0x%08x\n", reg_val & ((1<<22)-1));
 
 #if defined(CONFIG_CPU_MIPS32) || defined(CONFIG_CPU_MIPS64)
@@ -1396,6 +1444,45 @@ asmlinkage void cache_parity_error(void)
        panic("Can't handle the cache error!");
 }
 
+asmlinkage void do_ftlb(void)
+{
+       const int field = 2 * sizeof(unsigned long);
+       unsigned int reg_val;
+
+       /* For the moment, report the problem and hang. */
+       printk("FTLB error exception, cp0_ecc=0x%08x:\n",read_c0_ecc());
+       printk("cp0_errorepc == %0*lx\n", field, read_c0_errorepc());
+       reg_val = read_c0_cacheerr();
+       printk("c0_cacheerr == %08x\n", reg_val);
+
+       if ((reg_val & 0xc0000000) == 0xc0000000)
+               printk("Decoded c0_cacheerr: FTLB parity error\n");
+       else
+               printk("Decoded c0_cacheerr: %s cache fault in %s reference.\n",
+                      reg_val & (1<<30) ? "secondary" : "primary",
+                      reg_val & (1<<31) ? "data" : "insn");
+       printk("Error bits: %s%s%s%s%s%s%s%s\n",
+              reg_val & (1<<29) ? "ED " : "",
+              reg_val & (1<<28) ? "ET " : "",
+              reg_val & (1<<27) ? "ES " : "",
+              reg_val & (1<<26) ? "EE " : "",
+              reg_val & (1<<25) ? "EB " : "",
+              reg_val & (1<<24) ? "EI/EF " : "",
+              reg_val & (1<<23) ? "E1/SP " : "",
+              reg_val & (1<<22) ? "E0/EW " : "");
+       printk("IDX: 0x%08x\n", reg_val & ((1<<22)-1));
+
+#if defined(CONFIG_CPU_MIPS32) || defined(CONFIG_CPU_MIPS64)
+       if (reg_val & (1<<22))
+               printk("DErrAddr0: 0x%0*lx\n", field, read_c0_derraddr0());
+
+       if (reg_val & (1<<23))
+               printk("DErrAddr1: 0x%0*lx\n", field, read_c0_derraddr1());
+#endif
+
+       panic("Can't handle the FTLB parity error!");
+}
+
 /*
  * SDBBP EJTAG debug exception handler.
  * We skip the instruction and return to the next instruction.
@@ -1447,10 +1534,16 @@ int register_nmi_notifier(struct notifier_block *nb)
 
 void __noreturn nmi_exception_handler(struct pt_regs *regs)
 {
+       unsigned long epc;
+       char str[100];
+
        raw_notifier_call_chain(&nmi_chain, 0, regs);
        bust_spinlocks(1);
-       printk("NMI taken!!!!\n");
-       die("NMI", regs);
+       epc = regs->cp0_epc;
+       snprintf(str, sizeof(str),
+                "CPU%d NMI taken, CP0_EPC=%lx (before replacement by CP0_ERROREPC)\n",
+                smp_processor_id(), regs->cp0_epc);
+       regs->cp0_epc = read_c0_errorepc();
+       die(str, regs);
+       regs->cp0_epc = epc;
 }
 
 #define VECTORSPACING 0x100    /* for EI/VI mode */
@@ -1513,7 +1606,6 @@ static void *set_vi_srs_handler(int n, vi_handler_t addr, int srs)
        unsigned char *b;
 
        BUG_ON(!cpu_has_veic && !cpu_has_vint);
-       BUG_ON((n < 0) && (n > 9));
 
        if (addr == NULL) {
                handler = (unsigned long) do_default_vi;
@@ -1687,7 +1779,7 @@ void __cpuinit per_cpu_trap_init(bool is_boot_cpu)
        if (cpu_has_dsp)
                status_set |= ST0_MX;
 
-       change_c0_status(ST0_CU|ST0_MX|ST0_RE|ST0_FR|ST0_BEV|ST0_TS|ST0_KX|ST0_SX|ST0_UX,
+       change_c0_status(ST0_CU|ST0_MX|ST0_RE|ST0_BEV|ST0_TS|ST0_KX|ST0_SX|ST0_UX,
                         status_set);
 
        if (cpu_has_mips_r2)
@@ -1705,6 +1797,10 @@ void __cpuinit per_cpu_trap_init(bool is_boot_cpu)
 
        if (cpu_has_veic || cpu_has_vint) {
                unsigned long sr = set_c0_status(ST0_BEV);
+#ifdef CONFIG_EVA
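+               /* Set EBase.WG first so the upper base bits accept the write. */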
+               write_c0_ebase(ebase | MIPS_EBASE_WG);
+               back_to_back_c0_hazard();
+#endif
                write_c0_ebase(ebase);
                write_c0_status(sr);
                /* Setting vector spacing enables EI/VI mode  */
@@ -1770,7 +1866,7 @@ void __cpuinit per_cpu_trap_init(bool is_boot_cpu)
 }
 
 /* Install CPU exception handler */
-void __cpuinit set_handler(unsigned long offset, void *addr, unsigned long size)
+void set_handler(unsigned long offset, void *addr, unsigned long size)
 {
 #ifdef CONFIG_CPU_MICROMIPS
        memcpy((void *)(ebase + offset), ((unsigned char *)addr - 1), size);
@@ -1934,6 +2030,7 @@ void __init trap_init(void)
        if (cpu_has_fpu && !cpu_has_nofpuex)
                set_except_vector(15, handle_fpe);
 
+       set_except_vector(16, handle_ftlb);
        set_except_vector(22, handle_mdmx);
 
        if (cpu_has_mcheck)
index 203d8857070dd225f2d8fdea7bf986ad7a6560cc..3b946e28e84e62b4ba7a07a6c90527b1b4bf90ac 100644
@@ -105,6 +105,255 @@ static u32 unaligned_action;
 #define unaligned_action UNALIGNED_ACTION_QUIET
 #endif
 extern void show_registers(struct pt_regs *regs);
+asmlinkage void do_cpu(struct pt_regs *regs);
+
+#ifdef CONFIG_EVA
+/* EVA variant */
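+/*
+ * These variants use the EVA load/store instructions (lbe/lbue,
+ * lwle/lwre, sbe, swle/swre) so that unaligned accesses performed on
+ * behalf of user space are translated through the user address view.
+ */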
+
+#ifdef __BIG_ENDIAN
+#define     LoadHW(addr, value, res)  \
+               __asm__ __volatile__ (".set\tnoat\n"        \
+                       "1:\tlbe\t%0, 0(%2)\n"               \
+                       "2:\tlbue\t$1, 1(%2)\n\t"            \
+                       "sll\t%0, 0x8\n\t"                  \
+                       "or\t%0, $1\n\t"                    \
+                       "li\t%1, 0\n"                       \
+                       "3:\t.set\tat\n\t"                  \
+                       ".insn\n\t"                         \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%1, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=&r" (value), "=r" (res)         \
+                       : "r" (addr), "i" (-EFAULT));
+
+#define     LoadW(addr, value, res)   \
+               __asm__ __volatile__ (                      \
+                       "1:\tlwle\t%0, (%2)\n"               \
+                       "2:\tlwre\t%0, 3(%2)\n\t"            \
+                       "li\t%1, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%1, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=&r" (value), "=r" (res)         \
+                       : "r" (addr), "i" (-EFAULT));
+
+#define     LoadHWU(addr, value, res) \
+               __asm__ __volatile__ (                      \
+                       ".set\tnoat\n"                      \
+                       "1:\tlbue\t%0, 0(%2)\n"              \
+                       "2:\tlbue\t$1, 1(%2)\n\t"            \
+                       "sll\t%0, 0x8\n\t"                  \
+                       "or\t%0, $1\n\t"                    \
+                       "li\t%1, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       ".set\tat\n\t"                      \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%1, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=&r" (value), "=r" (res)         \
+                       : "r" (addr), "i" (-EFAULT));
+
+#define     LoadWU(addr, value, res)  \
+               __asm__ __volatile__ (                      \
+                       "1:\tlwle\t%0, (%2)\n"               \
+                       "2:\tlwre\t%0, 3(%2)\n\t"            \
+                       "dsll\t%0, %0, 32\n\t"              \
+                       "dsrl\t%0, %0, 32\n\t"              \
+                       "li\t%1, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       "\t.section\t.fixup,\"ax\"\n\t"     \
+                       "4:\tli\t%1, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=&r" (value), "=r" (res)         \
+                       : "r" (addr), "i" (-EFAULT));
+
+#define     StoreHW(addr, value, res) \
+               __asm__ __volatile__ (                      \
+                       ".set\tnoat\n"                      \
+                       "1:\tsbe\t%1, 1(%2)\n\t"             \
+                       "srl\t$1, %1, 0x8\n"                \
+                       "2:\tsbe\t$1, 0(%2)\n\t"             \
+                       ".set\tat\n\t"                      \
+                       "li\t%0, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%0, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=r" (res)                        \
+                       : "r" (value), "r" (addr), "i" (-EFAULT));
+
+#define     StoreW(addr, value, res)  \
+               __asm__ __volatile__ (                      \
+                       "1:\tswle\t%1,(%2)\n"                \
+                       "2:\tswre\t%1, 3(%2)\n\t"            \
+                       "li\t%0, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%0, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+               : "=r" (res)                                \
+               : "r" (value), "r" (addr), "i" (-EFAULT));
+#endif
+
+#ifdef __LITTLE_ENDIAN
+#define     LoadHW(addr, value, res)  \
+               __asm__ __volatile__ (".set\tnoat\n"        \
+                       "1:\tlbe\t%0, 1(%2)\n"               \
+                       "2:\tlbue\t$1, 0(%2)\n\t"            \
+                       "sll\t%0, 0x8\n\t"                  \
+                       "or\t%0, $1\n\t"                    \
+                       "li\t%1, 0\n"                       \
+                       "3:\t.set\tat\n\t"                  \
+                       ".insn\n\t"                         \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%1, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=&r" (value), "=r" (res)         \
+                       : "r" (addr), "i" (-EFAULT));
+
+#define     LoadW(addr, value, res)   \
+               __asm__ __volatile__ (                      \
+                       "1:\tlwle\t%0, 3(%2)\n"              \
+                       "2:\tlwre\t%0, (%2)\n\t"             \
+                       "li\t%1, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%1, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=&r" (value), "=r" (res)         \
+                       : "r" (addr), "i" (-EFAULT));
+
+#define     LoadHWU(addr, value, res) \
+               __asm__ __volatile__ (                      \
+                       ".set\tnoat\n"                      \
+                       "1:\tlbue\t%0, 1(%2)\n"              \
+                       "2:\tlbue\t$1, 0(%2)\n\t"            \
+                       "sll\t%0, 0x8\n\t"                  \
+                       "or\t%0, $1\n\t"                    \
+                       "li\t%1, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       ".set\tat\n\t"                      \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%1, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=&r" (value), "=r" (res)         \
+                       : "r" (addr), "i" (-EFAULT));
+
+#define     LoadWU(addr, value, res)  \
+               __asm__ __volatile__ (                      \
+                       "1:\tlwle\t%0, 3(%2)\n"              \
+                       "2:\tlwre\t%0, (%2)\n\t"             \
+                       "dsll\t%0, %0, 32\n\t"              \
+                       "dsrl\t%0, %0, 32\n\t"              \
+                       "li\t%1, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       "\t.section\t.fixup,\"ax\"\n\t"     \
+                       "4:\tli\t%1, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=&r" (value), "=r" (res)         \
+                       : "r" (addr), "i" (-EFAULT));
+
+#define     StoreHW(addr, value, res) \
+               __asm__ __volatile__ (                      \
+                       ".set\tnoat\n"                      \
+                       "1:\tsbe\t%1, 0(%2)\n\t"             \
+                       "srl\t$1,%1, 0x8\n"                 \
+                       "2:\tsbe\t$1, 1(%2)\n\t"             \
+                       ".set\tat\n\t"                      \
+                       "li\t%0, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%0, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+                       : "=r" (res)                        \
+                       : "r" (value), "r" (addr), "i" (-EFAULT));
+
+#define     StoreW(addr, value, res)  \
+               __asm__ __volatile__ (                      \
+                       "1:\tswle\t%1, 3(%2)\n"              \
+                       "2:\tswre\t%1, (%2)\n\t"             \
+                       "li\t%0, 0\n"                       \
+                       "3:\n\t"                            \
+                       ".insn\n\t"                         \
+                       ".section\t.fixup,\"ax\"\n\t"       \
+                       "4:\tli\t%0, %3\n\t"                \
+                       "j\t3b\n\t"                         \
+                       ".previous\n\t"                     \
+                       ".section\t__ex_table,\"a\"\n\t"    \
+                       STR(PTR)"\t1b, 4b\n\t"              \
+                       STR(PTR)"\t2b, 4b\n\t"              \
+                       ".previous"                         \
+               : "=r" (res)                                \
+               : "r" (value), "r" (addr), "i" (-EFAULT));
+#endif
+
+#else
+/* non-EVA variant */
 
 #ifdef __BIG_ENDIAN
 #define     LoadHW(addr, value, res)  \
@@ -420,6 +669,333 @@ extern void show_registers(struct pt_regs *regs);
                : "r" (value), "r" (addr), "i" (-EFAULT));
 #endif
 
+#endif
+
+#ifdef CONFIG_64BIT
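+/*
+ * The FPR number is an immediate field in the dmtc1/dmfc1 encoding and
+ * cannot be supplied at run time, so dispatch over all 32 registers
+ * with one hard-coded instruction per case.
+ */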
+static inline void dmtc1(unsigned long val, unsigned reg)
+{
+       switch (reg) {
+       case 0: __asm__ __volatile__ ("dmtc1\t%0,$0"::"r"(val)); break;
+       case 1: __asm__ __volatile__ ("dmtc1\t%0,$1"::"r"(val)); break;
+       case 2: __asm__ __volatile__ ("dmtc1\t%0,$2"::"r"(val)); break;
+       case 3: __asm__ __volatile__ ("dmtc1\t%0,$3"::"r"(val)); break;
+       case 4: __asm__ __volatile__ ("dmtc1\t%0,$4"::"r"(val)); break;
+       case 5: __asm__ __volatile__ ("dmtc1\t%0,$5"::"r"(val)); break;
+       case 6: __asm__ __volatile__ ("dmtc1\t%0,$6"::"r"(val)); break;
+       case 7: __asm__ __volatile__ ("dmtc1\t%0,$7"::"r"(val)); break;
+       case 8: __asm__ __volatile__ ("dmtc1\t%0,$8"::"r"(val)); break;
+       case 9: __asm__ __volatile__ ("dmtc1\t%0,$9"::"r"(val)); break;
+       case 10: __asm__ __volatile__ ("dmtc1\t%0,$10"::"r"(val)); break;
+       case 11: __asm__ __volatile__ ("dmtc1\t%0,$11"::"r"(val)); break;
+       case 12: __asm__ __volatile__ ("dmtc1\t%0,$12"::"r"(val)); break;
+       case 13: __asm__ __volatile__ ("dmtc1\t%0,$13"::"r"(val)); break;
+       case 14: __asm__ __volatile__ ("dmtc1\t%0,$14"::"r"(val)); break;
+       case 15: __asm__ __volatile__ ("dmtc1\t%0,$15"::"r"(val)); break;
+       case 16: __asm__ __volatile__ ("dmtc1\t%0,$16"::"r"(val)); break;
+       case 17: __asm__ __volatile__ ("dmtc1\t%0,$17"::"r"(val)); break;
+       case 18: __asm__ __volatile__ ("dmtc1\t%0,$18"::"r"(val)); break;
+       case 19: __asm__ __volatile__ ("dmtc1\t%0,$19"::"r"(val)); break;
+       case 20: __asm__ __volatile__ ("dmtc1\t%0,$20"::"r"(val)); break;
+       case 21: __asm__ __volatile__ ("dmtc1\t%0,$21"::"r"(val)); break;
+       case 22: __asm__ __volatile__ ("dmtc1\t%0,$22"::"r"(val)); break;
+       case 23: __asm__ __volatile__ ("dmtc1\t%0,$23"::"r"(val)); break;
+       case 24: __asm__ __volatile__ ("dmtc1\t%0,$24"::"r"(val)); break;
+       case 25: __asm__ __volatile__ ("dmtc1\t%0,$25"::"r"(val)); break;
+       case 26: __asm__ __volatile__ ("dmtc1\t%0,$26"::"r"(val)); break;
+       case 27: __asm__ __volatile__ ("dmtc1\t%0,$27"::"r"(val)); break;
+       case 28: __asm__ __volatile__ ("dmtc1\t%0,$28"::"r"(val)); break;
+       case 29: __asm__ __volatile__ ("dmtc1\t%0,$29"::"r"(val)); break;
+       case 30: __asm__ __volatile__ ("dmtc1\t%0,$30"::"r"(val)); break;
+       case 31: __asm__ __volatile__ ("dmtc1\t%0,$31"::"r"(val)); break;
+       }
+}
+
+static inline unsigned long dmfc1(unsigned reg)
+{
+       unsigned long uninitialized_var(val);
+
+       switch (reg) {
+       case 0: __asm__ __volatile__ ("dmfc1\t%0,$0":"=r"(val)); break;
+       case 1: __asm__ __volatile__ ("dmfc1\t%0,$1":"=r"(val)); break;
+       case 2: __asm__ __volatile__ ("dmfc1\t%0,$2":"=r"(val)); break;
+       case 3: __asm__ __volatile__ ("dmfc1\t%0,$3":"=r"(val)); break;
+       case 4: __asm__ __volatile__ ("dmfc1\t%0,$4":"=r"(val)); break;
+       case 5: __asm__ __volatile__ ("dmfc1\t%0,$5":"=r"(val)); break;
+       case 6: __asm__ __volatile__ ("dmfc1\t%0,$6":"=r"(val)); break;
+       case 7: __asm__ __volatile__ ("dmfc1\t%0,$7":"=r"(val)); break;
+       case 8: __asm__ __volatile__ ("dmfc1\t%0,$8":"=r"(val)); break;
+       case 9: __asm__ __volatile__ ("dmfc1\t%0,$9":"=r"(val)); break;
+       case 10: __asm__ __volatile__ ("dmfc1\t%0,$10":"=r"(val)); break;
+       case 11: __asm__ __volatile__ ("dmfc1\t%0,$11":"=r"(val)); break;
+       case 12: __asm__ __volatile__ ("dmfc1\t%0,$12":"=r"(val)); break;
+       case 13: __asm__ __volatile__ ("dmfc1\t%0,$13":"=r"(val)); break;
+       case 14: __asm__ __volatile__ ("dmfc1\t%0,$14":"=r"(val)); break;
+       case 15: __asm__ __volatile__ ("dmfc1\t%0,$15":"=r"(val)); break;
+       case 16: __asm__ __volatile__ ("dmfc1\t%0,$16":"=r"(val)); break;
+       case 17: __asm__ __volatile__ ("dmfc1\t%0,$17":"=r"(val)); break;
+       case 18: __asm__ __volatile__ ("dmfc1\t%0,$18":"=r"(val)); break;
+       case 19: __asm__ __volatile__ ("dmfc1\t%0,$19":"=r"(val)); break;
+       case 20: __asm__ __volatile__ ("dmfc1\t%0,$20":"=r"(val)); break;
+       case 21: __asm__ __volatile__ ("dmfc1\t%0,$21":"=r"(val)); break;
+       case 22: __asm__ __volatile__ ("dmfc1\t%0,$22":"=r"(val)); break;
+       case 23: __asm__ __volatile__ ("dmfc1\t%0,$23":"=r"(val)); break;
+       case 24: __asm__ __volatile__ ("dmfc1\t%0,$24":"=r"(val)); break;
+       case 25: __asm__ __volatile__ ("dmfc1\t%0,$25":"=r"(val)); break;
+       case 26: __asm__ __volatile__ ("dmfc1\t%0,$26":"=r"(val)); break;
+       case 27: __asm__ __volatile__ ("dmfc1\t%0,$27":"=r"(val)); break;
+       case 28: __asm__ __volatile__ ("dmfc1\t%0,$28":"=r"(val)); break;
+       case 29: __asm__ __volatile__ ("dmfc1\t%0,$29":"=r"(val)); break;
+       case 30: __asm__ __volatile__ ("dmfc1\t%0,$30":"=r"(val)); break;
+       case 31: __asm__ __volatile__ ("dmfc1\t%0,$31":"=r"(val)); break;
+       }
+
+       return val;
+}
+#else /* !CONFIG_64BIT */
+
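+/*
+ * On 32-bit kernels a 64-bit FPR is accessed as a pair: mtc1 moves the
+ * low 32 bits and mthc1 the high 32 bits; the operand order below
+ * matches the endianness of the in-memory double.
+ */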
+static inline void mtc1_mthc1(unsigned long val, unsigned long val2, unsigned reg)
+{
+       switch (reg) {
+#ifdef __BIG_ENDIAN
+       case 0:  __asm__ __volatile__ ("mtc1\t%0,$0\n\tmthc1\t%1,$0"::"r"(val2),"r"(val)); break;
+       case 1:  __asm__ __volatile__ ("mtc1\t%0,$1\n\tmthc1\t%1,$1"::"r"(val2),"r"(val)); break;
+       case 2:  __asm__ __volatile__ ("mtc1\t%0,$2\n\tmthc1\t%1,$2"::"r"(val2),"r"(val)); break;
+       case 3:  __asm__ __volatile__ ("mtc1\t%0,$3\n\tmthc1\t%1,$3"::"r"(val2),"r"(val)); break;
+       case 4:  __asm__ __volatile__ ("mtc1\t%0,$4\n\tmthc1\t%1,$4"::"r"(val2),"r"(val)); break;
+       case 5:  __asm__ __volatile__ ("mtc1\t%0,$5\n\tmthc1\t%1,$5"::"r"(val2),"r"(val)); break;
+       case 6:  __asm__ __volatile__ ("mtc1\t%0,$6\n\tmthc1\t%1,$6"::"r"(val2),"r"(val)); break;
+       case 7:  __asm__ __volatile__ ("mtc1\t%0,$7\n\tmthc1\t%1,$7"::"r"(val2),"r"(val)); break;
+       case 8:  __asm__ __volatile__ ("mtc1\t%0,$8\n\tmthc1\t%1,$8"::"r"(val2),"r"(val)); break;
+       case 9:  __asm__ __volatile__ ("mtc1\t%0,$9\n\tmthc1\t%1,$9"::"r"(val2),"r"(val)); break;
+       case 10: __asm__ __volatile__ ("mtc1\t%0,$10\n\tmthc1\t%1,$10"::"r"(val2),"r"(val)); break;
+       case 11: __asm__ __volatile__ ("mtc1\t%0,$11\n\tmthc1\t%1,$11"::"r"(val2),"r"(val)); break;
+       case 12: __asm__ __volatile__ ("mtc1\t%0,$12\n\tmthc1\t%1,$12"::"r"(val2),"r"(val)); break;
+       case 13: __asm__ __volatile__ ("mtc1\t%0,$13\n\tmthc1\t%1,$13"::"r"(val2),"r"(val)); break;
+       case 14: __asm__ __volatile__ ("mtc1\t%0,$14\n\tmthc1\t%1,$14"::"r"(val2),"r"(val)); break;
+       case 15: __asm__ __volatile__ ("mtc1\t%0,$15\n\tmthc1\t%1,$15"::"r"(val2),"r"(val)); break;
+       case 16: __asm__ __volatile__ ("mtc1\t%0,$16\n\tmthc1\t%1,$16"::"r"(val2),"r"(val)); break;
+       case 17: __asm__ __volatile__ ("mtc1\t%0,$17\n\tmthc1\t%1,$17"::"r"(val2),"r"(val)); break;
+       case 18: __asm__ __volatile__ ("mtc1\t%0,$18\n\tmthc1\t%1,$18"::"r"(val2),"r"(val)); break;
+       case 19: __asm__ __volatile__ ("mtc1\t%0,$19\n\tmthc1\t%1,$19"::"r"(val2),"r"(val)); break;
+       case 20: __asm__ __volatile__ ("mtc1\t%0,$20\n\tmthc1\t%1,$20"::"r"(val2),"r"(val)); break;
+       case 21: __asm__ __volatile__ ("mtc1\t%0,$21\n\tmthc1\t%1,$21"::"r"(val2),"r"(val)); break;
+       case 22: __asm__ __volatile__ ("mtc1\t%0,$22\n\tmthc1\t%1,$22"::"r"(val2),"r"(val)); break;
+       case 23: __asm__ __volatile__ ("mtc1\t%0,$23\n\tmthc1\t%1,$23"::"r"(val2),"r"(val)); break;
+       case 24: __asm__ __volatile__ ("mtc1\t%0,$24\n\tmthc1\t%1,$24"::"r"(val2),"r"(val)); break;
+       case 25: __asm__ __volatile__ ("mtc1\t%0,$25\n\tmthc1\t%1,$25"::"r"(val2),"r"(val)); break;
+       case 26: __asm__ __volatile__ ("mtc1\t%0,$26\n\tmthc1\t%1,$26"::"r"(val2),"r"(val)); break;
+       case 27: __asm__ __volatile__ ("mtc1\t%0,$27\n\tmthc1\t%1,$27"::"r"(val2),"r"(val)); break;
+       case 28: __asm__ __volatile__ ("mtc1\t%0,$28\n\tmthc1\t%1,$28"::"r"(val2),"r"(val)); break;
+       case 29: __asm__ __volatile__ ("mtc1\t%0,$29\n\tmthc1\t%1,$29"::"r"(val2),"r"(val)); break;
+       case 30: __asm__ __volatile__ ("mtc1\t%0,$30\n\tmthc1\t%1,$30"::"r"(val2),"r"(val)); break;
+       case 31: __asm__ __volatile__ ("mtc1\t%0,$31\n\tmthc1\t%1,$31"::"r"(val2),"r"(val)); break;
+#endif
+#ifdef __LITTLE_ENDIAN
+       case 0:  __asm__ __volatile__ ("mtc1\t%0,$0\n\tmthc1\t%1,$0"::"r"(val),"r"(val2)); break;
+       case 1:  __asm__ __volatile__ ("mtc1\t%0,$1\n\tmthc1\t%1,$1"::"r"(val),"r"(val2)); break;
+       case 2:  __asm__ __volatile__ ("mtc1\t%0,$2\n\tmthc1\t%1,$2"::"r"(val),"r"(val2)); break;
+       case 3:  __asm__ __volatile__ ("mtc1\t%0,$3\n\tmthc1\t%1,$3"::"r"(val),"r"(val2)); break;
+       case 4:  __asm__ __volatile__ ("mtc1\t%0,$4\n\tmthc1\t%1,$4"::"r"(val),"r"(val2)); break;
+       case 5:  __asm__ __volatile__ ("mtc1\t%0,$5\n\tmthc1\t%1,$5"::"r"(val),"r"(val2)); break;
+       case 6:  __asm__ __volatile__ ("mtc1\t%0,$6\n\tmthc1\t%1,$6"::"r"(val),"r"(val2)); break;
+       case 7:  __asm__ __volatile__ ("mtc1\t%0,$7\n\tmthc1\t%1,$7"::"r"(val),"r"(val2)); break;
+       case 8:  __asm__ __volatile__ ("mtc1\t%0,$8\n\tmthc1\t%1,$8"::"r"(val),"r"(val2)); break;
+       case 9:  __asm__ __volatile__ ("mtc1\t%0,$9\n\tmthc1\t%1,$9"::"r"(val),"r"(val2)); break;
+       case 10: __asm__ __volatile__ ("mtc1\t%0,$10\n\tmthc1\t%1,$10"::"r"(val),"r"(val2)); break;
+       case 11: __asm__ __volatile__ ("mtc1\t%0,$11\n\tmthc1\t%1,$11"::"r"(val),"r"(val2)); break;
+       case 12: __asm__ __volatile__ ("mtc1\t%0,$12\n\tmthc1\t%1,$12"::"r"(val),"r"(val2)); break;
+       case 13: __asm__ __volatile__ ("mtc1\t%0,$13\n\tmthc1\t%1,$13"::"r"(val),"r"(val2)); break;
+       case 14: __asm__ __volatile__ ("mtc1\t%0,$14\n\tmthc1\t%1,$14"::"r"(val),"r"(val2)); break;
+       case 15: __asm__ __volatile__ ("mtc1\t%0,$15\n\tmthc1\t%1,$15"::"r"(val),"r"(val2)); break;
+       case 16: __asm__ __volatile__ ("mtc1\t%0,$16\n\tmthc1\t%1,$16"::"r"(val),"r"(val2)); break;
+       case 17: __asm__ __volatile__ ("mtc1\t%0,$17\n\tmthc1\t%1,$17"::"r"(val),"r"(val2)); break;
+       case 18: __asm__ __volatile__ ("mtc1\t%0,$18\n\tmthc1\t%1,$18"::"r"(val),"r"(val2)); break;
+       case 19: __asm__ __volatile__ ("mtc1\t%0,$19\n\tmthc1\t%1,$19"::"r"(val),"r"(val2)); break;
+       case 20: __asm__ __volatile__ ("mtc1\t%0,$20\n\tmthc1\t%1,$20"::"r"(val),"r"(val2)); break;
+       case 21: __asm__ __volatile__ ("mtc1\t%0,$21\n\tmthc1\t%1,$21"::"r"(val),"r"(val2)); break;
+       case 22: __asm__ __volatile__ ("mtc1\t%0,$22\n\tmthc1\t%1,$22"::"r"(val),"r"(val2)); break;
+       case 23: __asm__ __volatile__ ("mtc1\t%0,$23\n\tmthc1\t%1,$23"::"r"(val),"r"(val2)); break;
+       case 24: __asm__ __volatile__ ("mtc1\t%0,$24\n\tmthc1\t%1,$24"::"r"(val),"r"(val2)); break;
+       case 25: __asm__ __volatile__ ("mtc1\t%0,$25\n\tmthc1\t%1,$25"::"r"(val),"r"(val2)); break;
+       case 26: __asm__ __volatile__ ("mtc1\t%0,$26\n\tmthc1\t%1,$26"::"r"(val),"r"(val2)); break;
+       case 27: __asm__ __volatile__ ("mtc1\t%0,$27\n\tmthc1\t%1,$27"::"r"(val),"r"(val2)); break;
+       case 28: __asm__ __volatile__ ("mtc1\t%0,$28\n\tmthc1\t%1,$28"::"r"(val),"r"(val2)); break;
+       case 29: __asm__ __volatile__ ("mtc1\t%0,$29\n\tmthc1\t%1,$29"::"r"(val),"r"(val2)); break;
+       case 30: __asm__ __volatile__ ("mtc1\t%0,$30\n\tmthc1\t%1,$30"::"r"(val),"r"(val2)); break;
+       case 31: __asm__ __volatile__ ("mtc1\t%0,$31\n\tmthc1\t%1,$31"::"r"(val),"r"(val2)); break;
+#endif
+       }
+}
+
+static inline void mfc1_mfhc1(unsigned long *val, unsigned long *val2, unsigned reg)
+{
+       unsigned long uninitialized_var(lval), uninitialized_var(lval2);
+
+       switch (reg) {
+#ifdef __BIG_ENDIAN
+       case 0:  __asm__ __volatile__ ("mfc1\t%0,$0\n\tmfhc1\t%1,$0":"=r"(lval2),"=r"(lval)); break;
+       case 1:  __asm__ __volatile__ ("mfc1\t%0,$1\n\tmfhc1\t%1,$1":"=r"(lval2),"=r"(lval)); break;
+       case 2:  __asm__ __volatile__ ("mfc1\t%0,$2\n\tmfhc1\t%1,$2":"=r"(lval2),"=r"(lval)); break;
+       case 3:  __asm__ __volatile__ ("mfc1\t%0,$3\n\tmfhc1\t%1,$3":"=r"(lval2),"=r"(lval)); break;
+       case 4:  __asm__ __volatile__ ("mfc1\t%0,$4\n\tmfhc1\t%1,$4":"=r"(lval2),"=r"(lval)); break;
+       case 5:  __asm__ __volatile__ ("mfc1\t%0,$5\n\tmfhc1\t%1,$5":"=r"(lval2),"=r"(lval)); break;
+       case 6:  __asm__ __volatile__ ("mfc1\t%0,$6\n\tmfhc1\t%1,$6":"=r"(lval2),"=r"(lval)); break;
+       case 7:  __asm__ __volatile__ ("mfc1\t%0,$7\n\tmfhc1\t%1,$7":"=r"(lval2),"=r"(lval)); break;
+       case 8:  __asm__ __volatile__ ("mfc1\t%0,$8\n\tmfhc1\t%1,$8":"=r"(lval2),"=r"(lval)); break;
+       case 9:  __asm__ __volatile__ ("mfc1\t%0,$9\n\tmfhc1\t%1,$9":"=r"(lval2),"=r"(lval)); break;
+       case 10: __asm__ __volatile__ ("mfc1\t%0,$10\n\tmfhc1\t%1,$10":"=r"(lval2),"=r"(lval)); break;
+       case 11: __asm__ __volatile__ ("mfc1\t%0,$11\n\tmfhc1\t%1,$11":"=r"(lval2),"=r"(lval)); break;
+       case 12: __asm__ __volatile__ ("mfc1\t%0,$12\n\tmfhc1\t%1,$12":"=r"(lval2),"=r"(lval)); break;
+       case 13: __asm__ __volatile__ ("mfc1\t%0,$13\n\tmfhc1\t%1,$13":"=r"(lval2),"=r"(lval)); break;
+       case 14: __asm__ __volatile__ ("mfc1\t%0,$14\n\tmfhc1\t%1,$14":"=r"(lval2),"=r"(lval)); break;
+       case 15: __asm__ __volatile__ ("mfc1\t%0,$15\n\tmfhc1\t%1,$15":"=r"(lval2),"=r"(lval)); break;
+       case 16: __asm__ __volatile__ ("mfc1\t%0,$16\n\tmfhc1\t%1,$16":"=r"(lval2),"=r"(lval)); break;
+       case 17: __asm__ __volatile__ ("mfc1\t%0,$17\n\tmfhc1\t%1,$17":"=r"(lval2),"=r"(lval)); break;
+       case 18: __asm__ __volatile__ ("mfc1\t%0,$18\n\tmfhc1\t%1,$18":"=r"(lval2),"=r"(lval)); break;
+       case 19: __asm__ __volatile__ ("mfc1\t%0,$19\n\tmfhc1\t%1,$19":"=r"(lval2),"=r"(lval)); break;
+       case 20: __asm__ __volatile__ ("mfc1\t%0,$20\n\tmfhc1\t%1,$20":"=r"(lval2),"=r"(lval)); break;
+       case 21: __asm__ __volatile__ ("mfc1\t%0,$21\n\tmfhc1\t%1,$21":"=r"(lval2),"=r"(lval)); break;
+       case 22: __asm__ __volatile__ ("mfc1\t%0,$22\n\tmfhc1\t%1,$22":"=r"(lval2),"=r"(lval)); break;
+       case 23: __asm__ __volatile__ ("mfc1\t%0,$23\n\tmfhc1\t%1,$23":"=r"(lval2),"=r"(lval)); break;
+       case 24: __asm__ __volatile__ ("mfc1\t%0,$24\n\tmfhc1\t%1,$24":"=r"(lval2),"=r"(lval)); break;
+       case 25: __asm__ __volatile__ ("mfc1\t%0,$25\n\tmfhc1\t%1,$25":"=r"(lval2),"=r"(lval)); break;
+       case 26: __asm__ __volatile__ ("mfc1\t%0,$26\n\tmfhc1\t%1,$26":"=r"(lval2),"=r"(lval)); break;
+       case 27: __asm__ __volatile__ ("mfc1\t%0,$27\n\tmfhc1\t%1,$27":"=r"(lval2),"=r"(lval)); break;
+       case 28: __asm__ __volatile__ ("mfc1\t%0,$28\n\tmfhc1\t%1,$28":"=r"(lval2),"=r"(lval)); break;
+       case 29: __asm__ __volatile__ ("mfc1\t%0,$29\n\tmfhc1\t%1,$29":"=r"(lval2),"=r"(lval)); break;
+       case 30: __asm__ __volatile__ ("mfc1\t%0,$30\n\tmfhc1\t%1,$30":"=r"(lval2),"=r"(lval)); break;
+       case 31: __asm__ __volatile__ ("mfc1\t%0,$31\n\tmfhc1\t%1,$31":"=r"(lval2),"=r"(lval)); break;
+#endif
+#ifdef __LITTLE_ENDIAN
+       case 0:  __asm__ __volatile__ ("mfc1\t%0,$0\n\tmfhc1\t%1,$0":"=r"(lval),"=r"(lval2)); break;
+       case 1:  __asm__ __volatile__ ("mfc1\t%0,$1\n\tmfhc1\t%1,$1":"=r"(lval),"=r"(lval2)); break;
+       case 2:  __asm__ __volatile__ ("mfc1\t%0,$2\n\tmfhc1\t%1,$2":"=r"(lval),"=r"(lval2)); break;
+       case 3:  __asm__ __volatile__ ("mfc1\t%0,$3\n\tmfhc1\t%1,$3":"=r"(lval),"=r"(lval2)); break;
+       case 4:  __asm__ __volatile__ ("mfc1\t%0,$4\n\tmfhc1\t%1,$4":"=r"(lval),"=r"(lval2)); break;
+       case 5:  __asm__ __volatile__ ("mfc1\t%0,$5\n\tmfhc1\t%1,$5":"=r"(lval),"=r"(lval2)); break;
+       case 6:  __asm__ __volatile__ ("mfc1\t%0,$6\n\tmfhc1\t%1,$6":"=r"(lval),"=r"(lval2)); break;
+       case 7:  __asm__ __volatile__ ("mfc1\t%0,$7\n\tmfhc1\t%1,$7":"=r"(lval),"=r"(lval2)); break;
+       case 8:  __asm__ __volatile__ ("mfc1\t%0,$8\n\tmfhc1\t%1,$8":"=r"(lval),"=r"(lval2)); break;
+       case 9:  __asm__ __volatile__ ("mfc1\t%0,$9\n\tmfhc1\t%1,$9":"=r"(lval),"=r"(lval2)); break;
+       case 10: __asm__ __volatile__ ("mfc1\t%0,$10\n\tmfhc1\t%1,$10":"=r"(lval),"=r"(lval2)); break;
+       case 11: __asm__ __volatile__ ("mfc1\t%0,$11\n\tmfhc1\t%1,$11":"=r"(lval),"=r"(lval2)); break;
+       case 12: __asm__ __volatile__ ("mfc1\t%0,$12\n\tmfhc1\t%1,$12":"=r"(lval),"=r"(lval2)); break;
+       case 13: __asm__ __volatile__ ("mfc1\t%0,$13\n\tmfhc1\t%1,$13":"=r"(lval),"=r"(lval2)); break;
+       case 14: __asm__ __volatile__ ("mfc1\t%0,$14\n\tmfhc1\t%1,$14":"=r"(lval),"=r"(lval2)); break;
+       case 15: __asm__ __volatile__ ("mfc1\t%0,$15\n\tmfhc1\t%1,$15":"=r"(lval),"=r"(lval2)); break;
+       case 16: __asm__ __volatile__ ("mfc1\t%0,$16\n\tmfhc1\t%1,$16":"=r"(lval),"=r"(lval2)); break;
+       case 17: __asm__ __volatile__ ("mfc1\t%0,$17\n\tmfhc1\t%1,$17":"=r"(lval),"=r"(lval2)); break;
+       case 18: __asm__ __volatile__ ("mfc1\t%0,$18\n\tmfhc1\t%1,$18":"=r"(lval),"=r"(lval2)); break;
+       case 19: __asm__ __volatile__ ("mfc1\t%0,$19\n\tmfhc1\t%1,$19":"=r"(lval),"=r"(lval2)); break;
+       case 20: __asm__ __volatile__ ("mfc1\t%0,$20\n\tmfhc1\t%1,$20":"=r"(lval),"=r"(lval2)); break;
+       case 21: __asm__ __volatile__ ("mfc1\t%0,$21\n\tmfhc1\t%1,$21":"=r"(lval),"=r"(lval2)); break;
+       case 22: __asm__ __volatile__ ("mfc1\t%0,$22\n\tmfhc1\t%1,$22":"=r"(lval),"=r"(lval2)); break;
+       case 23: __asm__ __volatile__ ("mfc1\t%0,$23\n\tmfhc1\t%1,$23":"=r"(lval),"=r"(lval2)); break;
+       case 24: __asm__ __volatile__ ("mfc1\t%0,$24\n\tmfhc1\t%1,$24":"=r"(lval),"=r"(lval2)); break;
+       case 25: __asm__ __volatile__ ("mfc1\t%0,$25\n\tmfhc1\t%1,$25":"=r"(lval),"=r"(lval2)); break;
+       case 26: __asm__ __volatile__ ("mfc1\t%0,$26\n\tmfhc1\t%1,$26":"=r"(lval),"=r"(lval2)); break;
+       case 27: __asm__ __volatile__ ("mfc1\t%0,$27\n\tmfhc1\t%1,$27":"=r"(lval),"=r"(lval2)); break;
+       case 28: __asm__ __volatile__ ("mfc1\t%0,$28\n\tmfhc1\t%1,$28":"=r"(lval),"=r"(lval2)); break;
+       case 29: __asm__ __volatile__ ("mfc1\t%0,$29\n\tmfhc1\t%1,$29":"=r"(lval),"=r"(lval2)); break;
+       case 30: __asm__ __volatile__ ("mfc1\t%0,$30\n\tmfhc1\t%1,$30":"=r"(lval),"=r"(lval2)); break;
+       case 31: __asm__ __volatile__ ("mfc1\t%0,$31\n\tmfhc1\t%1,$31":"=r"(lval),"=r"(lval2)); break;
+#endif
+       }
+       *val = lval;
+       *val2 = lval2;
+}
+#endif /* CONFIG_64BIT */
+
+static inline void mtc1_pair(unsigned long val, unsigned long val2, unsigned reg)
+{
+       switch (reg & ~0x1) {
+#ifdef __BIG_ENDIAN
+       case 0:  __asm__ __volatile__ ("mtc1\t%0,$0\n\tmtc1\t%1,$1"::"r"(val2),"r"(val)); break;
+       case 2:  __asm__ __volatile__ ("mtc1\t%0,$2\n\tmtc1\t%1,$3"::"r"(val2),"r"(val)); break;
+       case 4:  __asm__ __volatile__ ("mtc1\t%0,$4\n\tmtc1\t%1,$5"::"r"(val2),"r"(val)); break;
+       case 6:  __asm__ __volatile__ ("mtc1\t%0,$6\n\tmtc1\t%1,$7"::"r"(val2),"r"(val)); break;
+       case 8:  __asm__ __volatile__ ("mtc1\t%0,$8\n\tmtc1\t%1,$9"::"r"(val2),"r"(val)); break;
+       case 10: __asm__ __volatile__ ("mtc1\t%0,$10\n\tmtc1\t%1,$11"::"r"(val2),"r"(val)); break;
+       case 12: __asm__ __volatile__ ("mtc1\t%0,$12\n\tmtc1\t%1,$13"::"r"(val2),"r"(val)); break;
+       case 14: __asm__ __volatile__ ("mtc1\t%0,$14\n\tmtc1\t%1,$15"::"r"(val2),"r"(val)); break;
+       case 16: __asm__ __volatile__ ("mtc1\t%0,$16\n\tmtc1\t%1,$17"::"r"(val2),"r"(val)); break;
+       case 18: __asm__ __volatile__ ("mtc1\t%0,$18\n\tmtc1\t%1,$19"::"r"(val2),"r"(val)); break;
+       case 20: __asm__ __volatile__ ("mtc1\t%0,$20\n\tmtc1\t%1,$21"::"r"(val2),"r"(val)); break;
+       case 22: __asm__ __volatile__ ("mtc1\t%0,$22\n\tmtc1\t%1,$23"::"r"(val2),"r"(val)); break;
+       case 24: __asm__ __volatile__ ("mtc1\t%0,$24\n\tmtc1\t%1,$25"::"r"(val2),"r"(val)); break;
+       case 26: __asm__ __volatile__ ("mtc1\t%0,$26\n\tmtc1\t%1,$27"::"r"(val2),"r"(val)); break;
+       case 28: __asm__ __volatile__ ("mtc1\t%0,$28\n\tmtc1\t%1,$29"::"r"(val2),"r"(val)); break;
+       case 30: __asm__ __volatile__ ("mtc1\t%0,$30\n\tmtc1\t%1,$31"::"r"(val2),"r"(val)); break;
+#endif
+#ifdef __LITTLE_ENDIAN
+       case 0:  __asm__ __volatile__ ("mtc1\t%0,$0\n\tmtc1\t%1,$1"::"r"(val),"r"(val2)); break;
+       case 2:  __asm__ __volatile__ ("mtc1\t%0,$2\n\tmtc1\t%1,$3"::"r"(val),"r"(val2)); break;
+       case 4:  __asm__ __volatile__ ("mtc1\t%0,$4\n\tmtc1\t%1,$5"::"r"(val),"r"(val2)); break;
+       case 6:  __asm__ __volatile__ ("mtc1\t%0,$6\n\tmtc1\t%1,$7"::"r"(val),"r"(val2)); break;
+       case 8:  __asm__ __volatile__ ("mtc1\t%0,$8\n\tmtc1\t%1,$9"::"r"(val),"r"(val2)); break;
+       case 10: __asm__ __volatile__ ("mtc1\t%0,$10\n\tmtc1\t%1,$11"::"r"(val),"r"(val2)); break;
+       case 12: __asm__ __volatile__ ("mtc1\t%0,$12\n\tmtc1\t%1,$13"::"r"(val),"r"(val2)); break;
+       case 14: __asm__ __volatile__ ("mtc1\t%0,$14\n\tmtc1\t%1,$15"::"r"(val),"r"(val2)); break;
+       case 16: __asm__ __volatile__ ("mtc1\t%0,$16\n\tmtc1\t%1,$17"::"r"(val),"r"(val2)); break;
+       case 18: __asm__ __volatile__ ("mtc1\t%0,$18\n\tmtc1\t%1,$19"::"r"(val),"r"(val2)); break;
+       case 20: __asm__ __volatile__ ("mtc1\t%0,$20\n\tmtc1\t%1,$21"::"r"(val),"r"(val2)); break;
+       case 22: __asm__ __volatile__ ("mtc1\t%0,$22\n\tmtc1\t%1,$23"::"r"(val),"r"(val2)); break;
+       case 24: __asm__ __volatile__ ("mtc1\t%0,$24\n\tmtc1\t%1,$25"::"r"(val),"r"(val2)); break;
+       case 26: __asm__ __volatile__ ("mtc1\t%0,$26\n\tmtc1\t%1,$27"::"r"(val),"r"(val2)); break;
+       case 28: __asm__ __volatile__ ("mtc1\t%0,$28\n\tmtc1\t%1,$29"::"r"(val),"r"(val2)); break;
+       case 30: __asm__ __volatile__ ("mtc1\t%0,$30\n\tmtc1\t%1,$31"::"r"(val),"r"(val2)); break;
+#endif
+       }
+}
+
+static inline void mfc1_pair(unsigned long *val, unsigned long *val2, unsigned reg)
+{
+       unsigned long uninitialized_var(lval), uninitialized_var(lval2);
+
+       switch (reg & ~0x1) {
+#ifdef __BIG_ENDIAN
+       case 0:  __asm__ __volatile__ ("mfc1\t%0,$0\n\tmfc1\t%1,$1":"=r"(lval2),"=r"(lval)); break;
+       case 2:  __asm__ __volatile__ ("mfc1\t%0,$2\n\tmfc1\t%1,$3":"=r"(lval2),"=r"(lval)); break;
+       case 4:  __asm__ __volatile__ ("mfc1\t%0,$4\n\tmfc1\t%1,$5":"=r"(lval2),"=r"(lval)); break;
+       case 6:  __asm__ __volatile__ ("mfc1\t%0,$6\n\tmfc1\t%1,$7":"=r"(lval2),"=r"(lval)); break;
+       case 8:  __asm__ __volatile__ ("mfc1\t%0,$8\n\tmfc1\t%1,$9":"=r"(lval2),"=r"(lval)); break;
+       case 10: __asm__ __volatile__ ("mfc1\t%0,$10\n\tmfc1\t%1,$11":"=r"(lval2),"=r"(lval)); break;
+       case 12: __asm__ __volatile__ ("mfc1\t%0,$12\n\tmfc1\t%1,$13":"=r"(lval2),"=r"(lval)); break;
+       case 14: __asm__ __volatile__ ("mfc1\t%0,$14\n\tmfc1\t%1,$15":"=r"(lval2),"=r"(lval)); break;
+       case 16: __asm__ __volatile__ ("mfc1\t%0,$16\n\tmfc1\t%1,$17":"=r"(lval2),"=r"(lval)); break;
+       case 18: __asm__ __volatile__ ("mfc1\t%0,$18\n\tmfc1\t%1,$19":"=r"(lval2),"=r"(lval)); break;
+       case 20: __asm__ __volatile__ ("mfc1\t%0,$20\n\tmfc1\t%1,$21":"=r"(lval2),"=r"(lval)); break;
+       case 22: __asm__ __volatile__ ("mfc1\t%0,$22\n\tmfc1\t%1,$23":"=r"(lval2),"=r"(lval)); break;
+       case 24: __asm__ __volatile__ ("mfc1\t%0,$24\n\tmfc1\t%1,$25":"=r"(lval2),"=r"(lval)); break;
+       case 26: __asm__ __volatile__ ("mfc1\t%0,$26\n\tmfc1\t%1,$27":"=r"(lval2),"=r"(lval)); break;
+       case 28: __asm__ __volatile__ ("mfc1\t%0,$28\n\tmfc1\t%1,$29":"=r"(lval2),"=r"(lval)); break;
+       case 30: __asm__ __volatile__ ("mfc1\t%0,$30\n\tmfc1\t%1,$31":"=r"(lval2),"=r"(lval)); break;
+#endif
+#ifdef __LITTLE_ENDIAN
+       case 0:  __asm__ __volatile__ ("mfc1\t%0,$0\n\tmfc1\t%1,$1":"=r"(lval),"=r"(lval2)); break;
+       case 2:  __asm__ __volatile__ ("mfc1\t%0,$2\n\tmfc1\t%1,$3":"=r"(lval),"=r"(lval2)); break;
+       case 4:  __asm__ __volatile__ ("mfc1\t%0,$4\n\tmfc1\t%1,$5":"=r"(lval),"=r"(lval2)); break;
+       case 6:  __asm__ __volatile__ ("mfc1\t%0,$6\n\tmfc1\t%1,$7":"=r"(lval),"=r"(lval2)); break;
+       case 8:  __asm__ __volatile__ ("mfc1\t%0,$8\n\tmfc1\t%1,$9":"=r"(lval),"=r"(lval2)); break;
+       case 10: __asm__ __volatile__ ("mfc1\t%0,$10\n\tmfc1\t%1,$11":"=r"(lval),"=r"(lval2)); break;
+       case 12: __asm__ __volatile__ ("mfc1\t%0,$12\n\tmfc1\t%1,$13":"=r"(lval),"=r"(lval2)); break;
+       case 14: __asm__ __volatile__ ("mfc1\t%0,$14\n\tmfc1\t%1,$15":"=r"(lval),"=r"(lval2)); break;
+       case 16: __asm__ __volatile__ ("mfc1\t%0,$16\n\tmfc1\t%1,$17":"=r"(lval),"=r"(lval2)); break;
+       case 18: __asm__ __volatile__ ("mfc1\t%0,$18\n\tmfc1\t%1,$19":"=r"(lval),"=r"(lval2)); break;
+       case 20: __asm__ __volatile__ ("mfc1\t%0,$20\n\tmfc1\t%1,$21":"=r"(lval),"=r"(lval2)); break;
+       case 22: __asm__ __volatile__ ("mfc1\t%0,$22\n\tmfc1\t%1,$23":"=r"(lval),"=r"(lval2)); break;
+       case 24: __asm__ __volatile__ ("mfc1\t%0,$24\n\tmfc1\t%1,$25":"=r"(lval),"=r"(lval2)); break;
+       case 26: __asm__ __volatile__ ("mfc1\t%0,$26\n\tmfc1\t%1,$27":"=r"(lval),"=r"(lval2)); break;
+       case 28: __asm__ __volatile__ ("mfc1\t%0,$28\n\tmfc1\t%1,$29":"=r"(lval),"=r"(lval2)); break;
+       case 30: __asm__ __volatile__ ("mfc1\t%0,$30\n\tmfc1\t%1,$31":"=r"(lval),"=r"(lval2)); break;
+#endif
+       }
+       *val = lval;
+       *val2 = lval2;
+}
+
+
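For reference, a standalone C sketch (illustrative only; the helper name and
the big_endian flag are assumptions, not kernel code) of the word ordering
the helpers above encode.  A 64-bit FP value moves as two 32-bit words:
under FR=1 via mtc1/mthc1 on one register, under FR=0 via an even/odd
register pair, and word 0 in memory is the high half on big-endian but the
low half on little-endian -- hence the swapped operand order between the
two #ifdef branches:

    #include <stdint.h>

    /* Given the two memory words of a doubleword (w0 at addr, w1 at
     * addr + 4), pick which is the low and which the high half of the
     * 64-bit FP value. */
    static uint64_t fp64_from_words(uint32_t w0, uint32_t w1, int big_endian)
    {
            uint32_t lo = big_endian ? w1 : w0;
            uint32_t hi = big_endian ? w0 : w1;

            return ((uint64_t)hi << 32) | lo;
    }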
 static void emulate_load_store_insn(struct pt_regs *regs,
        void __user *addr, unsigned int __user *pc)
 {
@@ -429,6 +1005,9 @@ static void emulate_load_store_insn(struct pt_regs *regs,
        unsigned long origpc;
        unsigned long orig31;
        void __user *fault_addr = NULL;
+#ifdef CONFIG_EVA
+       mm_segment_t seg;
+#endif
 
        origpc = (unsigned long)pc;
        orig31 = regs->regs[31];
@@ -474,6 +1053,98 @@ static void emulate_load_store_insn(struct pt_regs *regs,
                 * The remaining opcodes are the ones that are really of
                 * interest.
                 */
+#ifdef CONFIG_EVA
+       case spec3_op:
+
+               /* We can land here only from a kernel access to user
+                  space, so temporarily switch to the user segment for
+                  the access_ok() verification below. */
+               seg = get_fs();
+               set_fs(USER_DS);
+
+               switch (insn.spec3_format.ls_func) {
+
+               case lhe_op:
+                       if (!access_ok(VERIFY_READ, addr, 2)) {
+                               set_fs(seg);
+                               goto sigbus;
+                       }
+
+                       LoadHW(addr, value, res);
+                       if (res) {
+                               set_fs(seg);
+                               goto fault;
+                       }
+                       compute_return_epc(regs);
+                       regs->regs[insn.spec3_format.rt] = value;
+                       break;
+
+               case lwe_op:
+                       if (!access_ok(VERIFY_READ, addr, 4)) {
+                               set_fs(seg);
+                               goto sigbus;
+                       }
+
+                       LoadW(addr, value, res);
+                       if (res) {
+                               set_fs(seg);
+                               goto fault;
+                       }
+                       compute_return_epc(regs);
+                       regs->regs[insn.spec3_format.rt] = value;
+                       break;
+
+               case lhue_op:
+                       if (!access_ok(VERIFY_READ, addr, 2)) {
+                               set_fs(seg);
+                               goto sigbus;
+                       }
+
+                       LoadHWU(addr, value, res);
+                       if (res) {
+                               set_fs(seg);
+                               goto fault;
+                       }
+                       compute_return_epc(regs);
+                       regs->regs[insn.spec3_format.rt] = value;
+                       break;
+
+               case she_op:
+                       if (!access_ok(VERIFY_WRITE, addr, 2)) {
+                               set_fs(seg);
+                               goto sigbus;
+                       }
+
+                       compute_return_epc(regs);
+                       value = regs->regs[insn.spec3_format.rt];
+                       StoreHW(addr, value, res);
+                       if (res) {
+                               set_fs(seg);
+                               goto fault;
+                       }
+                       break;
+
+               case swe_op:
+                       if (!access_ok(VERIFY_WRITE, addr, 4)) {
+                               set_fs(seg);
+                               goto sigbus;
+                       }
+
+                       compute_return_epc(regs);
+                       value = regs->regs[insn.spec3_format.rt];
+                       StoreW(addr, value, res);
+                       if (res) {
+                               set_fs(seg);
+                               goto fault;
+                       }
+                       break;
+
+               default:
+                       set_fs(seg);
+                       goto sigill;
+               }
+               set_fs(seg);
+               break;
+#endif
        case lh_op:
                if (!access_ok(VERIFY_READ, addr, 2))
                        goto sigbus;
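The spec3_op case above keeps a strict save/modify/restore discipline on the
address limit.  A hedged standalone sketch of the pattern (the seg_t type
and with_user_limit() are stand-ins, not the kernel's get_fs()/set_fs()
implementation):

    /* The unaligned trap for an EVA user-access instruction
     * (lhe/lwe/lhue/she/swe) arrives in kernel mode, so access_ok()
     * must be evaluated against the user limit and the previous limit
     * restored on every exit path. */
    typedef unsigned long seg_t;

    static seg_t addr_limit;                /* stand-in for the task limit */

    static int with_user_limit(int (*op)(void))
    {
            seg_t old = addr_limit;         /* seg = get_fs();          */
            int ret;

            addr_limit = 0x7fffffffUL;      /* set_fs(USER_DS), assumed */
            ret = op();                     /* access_ok() + the access */
            addr_limit = old;               /* set_fs(seg) on all paths */
            return ret;
    }

The patch restores seg individually before each goto; funnelling every error
path through one restore point would be the usual cleanup, but the flow is
kept as the patch has it.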
@@ -598,13 +1269,99 @@ static void emulate_load_store_insn(struct pt_regs *regs,
                /* Cannot handle 64-bit instructions in 32-bit kernel */
                goto sigill;
 
-       case lwc1_op:
        case ldc1_op:
-       case swc1_op:
+               if (!access_ok(VERIFY_READ, addr, 8))
+                       goto sigbus;
+
+               preempt_disable();
+               if (is_fpu_owner()) {
+                       if (read_c0_status() & ST0_FR) {
+#ifdef CONFIG_64BIT
+                               LoadDW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               dmtc1(value, insn.i_format.rt);
+#else /* !CONFIG_64BIT */
+                               unsigned long value2;
+
+                               LoadW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               LoadW((addr + 4), value2, res);
+                               if (res)
+                                       goto preempt_fault;
+                               mtc1_mthc1(value, value2, insn.i_format.rt);
+#endif /* CONFIG_64BIT */
+                       } else {
+                               unsigned long value2;
+
+                               LoadW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               LoadW((addr + 4), value2, res);
+                               if (res)
+                                       goto preempt_fault;
+                               mtc1_pair(value, value2, insn.i_format.rt);
+                       }
+                       preempt_enable();
+                       compute_return_epc(regs);
+                       break;
+               }
+
+               preempt_enable();
+               goto fpu_continue;
+
        case sdc1_op:
+               if (!access_ok(VERIFY_WRITE, addr, 8))
+                       goto sigbus;
+
+               preempt_disable();
+               if (is_fpu_owner()) {
+                       compute_return_epc(regs);
+                       if (read_c0_status() & ST0_FR) {
+#ifdef CONFIG_64BIT
+                               value = dmfc1(insn.i_format.rt);
+                               StoreDW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+#else /* !CONFIG_64BIT */
+                               unsigned long value2;
+
+                               mfc1_mfhc1(&value, &value2, insn.i_format.rt);
+                               StoreW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               StoreW((addr + 4), value2, res);
+                               if (res)
+                                       goto preempt_fault;
+#endif /* CONFIG_64BIT */
+                       } else {
+                               unsigned long value2;
+
+                               mfc1_pair(&value, &value2, insn.i_format.rt);
+                               StoreW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               StoreW((addr + 4), value2, res);
+                               if (res)
+                                       goto preempt_fault;
+                       }
+                       preempt_enable();
+                       break;
+               }
+
+preempt_fault:
+               preempt_enable();
+               goto fpu_continue;
+
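A condensed sketch of the ldc1/sdc1 fast path added above (stub globals
stand in for is_fpu_owner() and the register file; the preempt_disable()/
preempt_enable() bracketing and the FR-bit split are elided):

    #include <stdint.h>
    #include <string.h>

    static int fpu_owner = 1;       /* stand-in for is_fpu_owner()  */
    static uint64_t fpr[32];        /* stand-in FP register file    */

    /* Unaligned ldc1: if the faulting task still owns the FPU, read
     * the doubleword byte-wise (no alignment trap) and write it into
     * the live register file; otherwise report failure so the caller
     * falls through to the FP emulator (fpu_continue above). */
    static int fast_ldc1(const void *addr, unsigned int ft)
    {
            uint64_t v;

            if (!fpu_owner)
                    return -1;      /* emulator path */
            memcpy(&v, addr, sizeof(v));
            fpr[ft & 31] = v;       /* dmtc1 / mtc1_mthc1 / mtc1_pair */
            return 0;
    }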
+       case ldxc1_op:
+       case sdxc1_op:
+       case lwc1_op:
+       case swc1_op:
+fpu_continue:
                die_if_kernel("Unaligned FP access in kernel code", regs);
                BUG_ON(!used_math());
-               BUG_ON(!is_fpu_owner());
+               /* BUG_ON(!is_fpu_owner()); */
 
                lose_fpu(1);    /* Save FPU state for the emulator. */
                res = fpu_emulator_cop1Handler(regs, &current->thread.fpu, 1,
@@ -1025,7 +1782,90 @@ void emulate_load_store_microMIPS(struct pt_regs *regs, void __user * addr)
                goto sigbus;
 
        case mm_ldc132_op:
+               if (!access_ok(VERIFY_READ, addr, 8))
+                       goto sigbus;
+
+               if ((unsigned long)addr & 0x3)
+                       goto fpu_emul;  /* generic case */
+
+               preempt_disable();
+               if (is_fpu_owner()) {
+                       if (read_c0_status() & ST0_FR) {
+#ifdef CONFIG_64BIT
+                               LoadDW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               dmtc1(value, insn.mm_i_format.rt);
+#else /* !CONFIG_64BIT */
+                               unsigned long value2;
+
+                               LoadW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               LoadW((addr + 4), value2, res);
+                               if (res)
+                                       goto preempt_fault;
+                               mtc1_mthc1(value, value2, insn.mm_i_format.rt);
+#endif /* CONFIG_64BIT */
+                       } else {
+                               unsigned long value2;
+
+                               LoadW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               LoadW((addr + 4), value2, res);
+                               if (res)
+                                       goto preempt_fault;
+                               mtc1_pair(value, value2, insn.mm_i_format.rt);
+                       }
+                       preempt_enable();
+                       goto success;
+               }
+
+               preempt_enable();
+               goto fpu_emul;
+
        case mm_sdc132_op:
+               if (!access_ok(VERIFY_WRITE, addr, 8))
+                       goto sigbus;
+
+               preempt_disable();
+               if (is_fpu_owner()) {
+                       if (read_c0_status() & ST0_FR) {
+#ifdef CONFIG_64BIT
+                               value = dmfc1(insn.mm_i_format.rt);
+                               StoreDW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+#else /* !CONFIG_64BIT */
+                               unsigned long value2;
+
+                               mfc1_mfhc1(&value, &value2, insn.mm_i_format.rt);
+                               StoreW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               StoreW((addr + 4), value2, res);
+                               if (res)
+                                       goto preempt_fault;
+#endif /* CONFIG_64BIT */
+                       } else {
+                               unsigned long value2;
+
+                               mfc1_pair(&value, &value2, insn.mm_i_format.rt);
+                               StoreW(addr, value, res);
+                               if (res)
+                                       goto preempt_fault;
+                               StoreW((addr + 4), value2, res);
+                               if (res)
+                                       goto preempt_fault;
+                       }
+                       preempt_enable();
+                       goto success;
+               }
+
+               preempt_enable();
+               goto fpu_emul;
+
        case mm_lwc132_op:
        case mm_swc132_op:
 fpu_emul:
@@ -1279,6 +2119,8 @@ success:
 #endif
        return;
 
+preempt_fault:
+       preempt_enable();
 fault:
        /* roll back jump/branch */
        regs->cp0_epc = origpc;
index 1765bab000a018eb53da0c6986f3e9762e490124..4169541ecd0b18fe53d7eac7ca00ded1b39a2ea5 100644 (file)
@@ -832,6 +832,7 @@ static int vpe_elfload(struct vpe * v)
        char *secstrings, *strtab = NULL;
        unsigned int len, i, symindex = 0, strindex = 0, relocate = 0;
        struct module mod;      // so we can re-use the relocations code
+       mm_segment_t old_fs;
 
        memset(&mod, 0, sizeof(struct module));
        strcpy(mod.name, "VPE loader");
@@ -973,8 +974,12 @@ static int vpe_elfload(struct vpe * v)
        }
 
        /* make sure it's physically written out */
+       /* flush the icache in the correct address-space context */
+       old_fs = get_fs();
+       set_fs(KERNEL_DS);
        flush_icache_range((unsigned long)v->load_addr,
                           (unsigned long)v->load_addr + v->len);
+       set_fs(old_fs);
 
        if ((find_vpe_symbols(v, sechdrs, symindex, strtab, &mod)) < 0) {
                if (v->__start == 0) {
index 51194875f1582ddb0507070c27ebe802b6924821..f983d95b540d12e2c6a937b3f6b255f605983652 100644 (file)
@@ -438,6 +438,7 @@ int __init icu_of_init(struct device_node *node, struct device_node *parent)
        arch_init_ipiirq(MIPS_CPU_IRQ_BASE + MIPS_CPU_IPI_RESCHED_IRQ,
                &irq_resched);
        arch_init_ipiirq(MIPS_CPU_IRQ_BASE + MIPS_CPU_IPI_CALL_IRQ, &irq_call);
+       mips_smp_c0_status_mask |= (IE_SW0 | IE_SW1);
 #endif
 
 #if !defined(CONFIG_MIPS_MT_SMP) && !defined(CONFIG_MIPS_MT_SMTC)
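What the added mask line buys, as a short sketch (the bit positions follow
the usual STATUSF_IP0/IP1 encoding and are an assumption here):

    /* SW0 and SW1 are the two MIPS software interrupts; their enable
     * bits sit in the c0_status interrupt mask.  CPUs OR this mask
     * into c0_status during bring-up so the resched (SW0) and call
     * (SW1) IPIs registered above can actually be taken. */
    #define IE_SW0  (1u << 8)       /* STATUSF_IP0, assumed */
    #define IE_SW1  (1u << 9)       /* STATUSF_IP1, assumed */

    static unsigned int smp_c0_status_mask;

    static void enable_ipi_mask(void)
    {
            smp_c0_status_mask |= IE_SW0 | IE_SW1;
    }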
index dfb509d21d8e544e4c552b7b222f901e90930aaa..fd32075679c66b3fc6cf2ebdff6e3f15fb3a83c6 100644 (file)
@@ -13,13 +13,11 @@ endif
 MKLASATIMG = mklasatimg
 MKLASATIMG_ARCH = mq2,mqpro,sp100,sp200
 KERNEL_IMAGE = vmlinux
-KERNEL_START = $(shell $(NM) $(KERNEL_IMAGE) | grep " _text" | cut -f1 -d\ )
-KERNEL_ENTRY = $(shell $(NM) $(KERNEL_IMAGE) | grep kernel_entry | cut -f1 -d\ )
 
 LDSCRIPT= -L$(srctree)/$(src) -Tromscript.normal
 
-HEAD_DEFINES := -D_kernel_start=0x$(KERNEL_START) \
-               -D_kernel_entry=0x$(KERNEL_ENTRY) \
+HEAD_DEFINES := -D_kernel_start=$(VMLINUX_LOAD_ADDRESS) \
+               -D_kernel_entry=$(VMLINUX_ENTRY_ADDRESS) \
                -D VERSION="\"$(Version)\"" \
                -D TIMESTAMP=$(shell date +%s)
 
index a6adffbb4e5f0a5ccc5ec268c18c718b69c5a8da..cd15f3005c27bdddb7f665b333a961ab3c7726ff 100644 (file)
@@ -9,6 +9,17 @@
  * Copyright (C) 1999 Silicon Graphics, Inc.
  * Copyright (C) 2007  Maciej W. Rozycki
  */
+/*
+ * Hack to resolve longstanding prefetch issue
+ *
+ * Prefetching may be fatal on some systems if we're prefetching beyond the
+ * end of memory.  It's also a seriously bad idea on non-DMA-coherent
+ * systems.
+ */
+#if defined(CONFIG_DMA_NONCOHERENT) || defined(CONFIG_MIPS_MALTA)
+#undef CONFIG_CPU_HAS_PREFETCH
+#endif
+
 #include <linux/errno.h>
 #include <asm/asm.h>
 #include <asm/asm-offsets.h>
@@ -43,6 +54,8 @@
 #define ADD    daddu
 #define NBYTES 8
 
+#define LOADK  ld
+
 #else
 
 #define LOAD   lw
@@ -50,6 +63,8 @@
 #define ADD    addu
 #define NBYTES 4
 
+#define LOADK  lw
+
 #endif /* USE_DOUBLE */
 
 #define UNIT(unit)  ((unit)*NBYTES)
@@ -417,12 +432,18 @@ FEXPORT(csum_partial_copy_nocheck)
         *
         * If len < NBYTES use byte operations.
         */
-       sltu    t2, len, NBYTES
+       PREF(   0, 0(src) )
+       PREF(   1, 0(dst) )
+       sltu    t2, len, NBYTES
        and     t1, dst, ADDRMASK
-       bnez    t2, .Lcopy_bytes_checklen
+       PREF(   0, 1*32(src) )
+       PREF(   1, 1*32(dst) )
+       bnez    t2, .Lcopy_bytes_checklen
         and    t0, src, ADDRMASK
        andi    odd, dst, 0x1                   /* odd buffer? */
-       bnez    t1, .Ldst_unaligned
+       PREF(   0, 2*32(src) )
+       PREF(   1, 2*32(dst) )
+       bnez    t1, .Ldst_unaligned
         nop
        bnez    t0, .Lsrc_unaligned_dst_aligned
        /*
@@ -434,7 +455,9 @@ FEXPORT(csum_partial_copy_nocheck)
        beqz    t0, .Lcleanup_both_aligned # len < 8*NBYTES
         nop
        SUB     len, 8*NBYTES           # subtract here for bgez loop
-       .align  4
+       PREF(   0, 3*32(src) )
+       PREF(   1, 3*32(dst) )
+       .align  4
 1:
 EXC(   LOAD    t0, UNIT(0)(src),       .Ll_exc)
 EXC(   LOAD    t1, UNIT(1)(src),       .Ll_exc_copy)
@@ -462,8 +485,10 @@ EXC(       STORE   t6, UNIT(6)(dst),       .Ls_exc)
        ADDC(sum, t6)
 EXC(   STORE   t7, UNIT(7)(dst),       .Ls_exc)
        ADDC(sum, t7)
-       .set    reorder                         /* DADDI_WAR */
+       .set    reorder                         /* DADDI_WAR */
        ADD     dst, dst, 8*NBYTES
+       PREF(   0, 8*32(src) )
+       PREF(   1, 8*32(dst) )
        bgez    len, 1b
        .set    noreorder
        ADD     len, 8*NBYTES           # revert len (see above)
@@ -568,9 +593,11 @@ EXC(       STFIRST t3, FIRST(0)(dst),      .Ls_exc)
         ADD    src, src, t2
 
 .Lsrc_unaligned_dst_aligned:
-       SRL     t0, len, LOG_NBYTES+2    # +2 for 4 units/iter
+       SRL     t0, len, LOG_NBYTES+2    # +2 for 4 units/iter
+       PREF(   0, 3*32(src) )
        beqz    t0, .Lcleanup_src_unaligned
-        and    rem, len, (4*NBYTES-1)   # rem = len % 4*NBYTES
+        and    rem, len, (4*NBYTES-1)   # rem = len % 4*NBYTES
+       PREF(   1, 3*32(dst) )
 1:
 /*
  * Avoid consecutive LD*'s to the same register since some mips
@@ -587,7 +614,8 @@ EXC(        LDFIRST t2, FIRST(2)(src),      .Ll_exc_copy)
 EXC(   LDFIRST t3, FIRST(3)(src),      .Ll_exc_copy)
 EXC(   LDREST  t2, REST(2)(src),       .Ll_exc_copy)
 EXC(   LDREST  t3, REST(3)(src),       .Ll_exc_copy)
-       ADD     src, src, 4*NBYTES
+       PREF(   0, 9*32(src) )          # 0 is PREF_LOAD  (not streamed)
+       ADD     src, src, 4*NBYTES
 #ifdef CONFIG_CPU_SB1
        nop                             # improves slotting
 #endif
@@ -600,7 +628,8 @@ EXC(        STORE   t2, UNIT(2)(dst),       .Ls_exc)
 EXC(   STORE   t3, UNIT(3)(dst),       .Ls_exc)
        ADDC(sum, t3)
        .set    reorder                         /* DADDI_WAR */
-       ADD     dst, dst, 4*NBYTES
+       PREF(   1, 9*32(dst) )          # 1 is PREF_STORE (not streamed)
+       ADD     dst, dst, 4*NBYTES
        bne     len, rem, 1b
        .set    noreorder
 
@@ -700,9 +729,9 @@ EXC(        sb      t0, NBYTES-2(dst), .Ls_exc)
         *
         * Assumes src < THREAD_BUADDR($28)
         */
-       LOAD    t0, TI_TASK($28)
+       LOADK   t0, TI_TASK($28)
         li     t2, SHIFT_START
-       LOAD    t0, THREAD_BUADDR(t0)
+       LOADK   t0, THREAD_BUADDR(t0)
 1:
 EXC(   lbu     t1, 0(src),     .Ll_exc)
        ADD     src, src, 1
@@ -715,9 +744,9 @@ EXC(        lbu     t1, 0(src),     .Ll_exc)
        bne     src, t0, 1b
        .set    noreorder
 .Ll_exc:
-       LOAD    t0, TI_TASK($28)
+       LOADK   t0, TI_TASK($28)
         nop
-       LOAD    t0, THREAD_BUADDR(t0)   # t0 is just past last good address
+       LOADK   t0, THREAD_BUADDR(t0)   # t0 is just past last good address
         nop
        SUB     len, AT, t0             # len number of uncopied bytes
        /*
@@ -758,3 +787,738 @@ EXC(      lbu     t1, 0(src),     .Ll_exc)
         sw     v1, (errptr)
        .set    pop
        END(__csum_partial_copy_user)
+
+
+#ifdef CONFIG_EVA
+
+       .set    eva
+
+#undef  LOAD
+#undef  LOADL
+#undef  LOADR
+#undef  STORE
+#undef  STOREL
+#undef  STORER
+#undef  LDFIRST
+#undef  LDREST
+#undef  STFIRST
+#undef  STREST
+#undef  COPY_BYTE
+
+#define LOAD   lwe
+#define LOADL  lwle
+#define LOADR  lwre
+#define STOREL swl
+#define STORER swr
+#define STORE  sw
+
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
+#define LDFIRST LOADR
+#define LDREST  LOADL
+#define STFIRST STORER
+#define STREST  STOREL
+#else
+#define LDFIRST LOADL
+#define LDREST  LOADR
+#define STFIRST STOREL
+#define STREST  STORER
+#endif
+
+LEAF(__csum_partial_copy_fromuser)
+       PTR_ADDU        AT, src, len    /* See (1) above. */
+#ifdef CONFIG_64BIT
+       move    errptr, a4
+#else
+       lw      errptr, 16(sp)
+#endif
+       move    sum, zero
+       move    odd, zero
+       /*
+        * Note: dst & src may be unaligned, len may be 0
+        * Temps
+        */
+       /*
+        * The "issue break"s below are very approximate.
+        * Issue delays for dcache fills will perturb the schedule, as will
+        * load queue full replay traps, etc.
+        *
+        * If len < NBYTES use byte operations.
+        */
+       PREFE(  0, 0(src) )
+       PREF(   1, 0(dst) )
+       sltu    t2, len, NBYTES
+       and     t1, dst, ADDRMASK
+       PREFE(  0, 1*32(src) )
+       PREF(   1, 1*32(dst) )
+       bnez    t2, .LFcopy_bytes_checklen
+        and    t0, src, ADDRMASK
+       andi    odd, dst, 0x1                   /* odd buffer? */
+       PREFE(  0, 2*32(src) )
+       PREF(   1, 2*32(dst) )
+       bnez    t1, .LFdst_unaligned
+        nop
+       bnez    t0, .LFsrc_unaligned_dst_aligned
+       /*
+        * use delay slot for fall-through
+        * src and dst are aligned; need to compute rem
+        */
+.LFboth_aligned:
+        SRL    t0, len, LOG_NBYTES+3    # +3 for 8 units/iter
+       beqz    t0, .LFcleanup_both_aligned # len < 8*NBYTES
+        nop
+       SUB     len, 8*NBYTES           # subtract here for bgez loop
+       PREFE(  0, 3*32(src) )
+       PREF(   1, 3*32(dst) )
+       .align  4
+1:
+EXC(    LOAD    t0, UNIT(0)(src),       .LFl_exc)
+EXC(    LOAD    t1, UNIT(1)(src),       .LFl_exc_copy)
+EXC(    LOAD    t2, UNIT(2)(src),       .LFl_exc_copy)
+EXC(    LOAD    t3, UNIT(3)(src),       .LFl_exc_copy)
+EXC(    LOAD    t4, UNIT(4)(src),       .LFl_exc_copy)
+EXC(    LOAD    t5, UNIT(5)(src),       .LFl_exc_copy)
+EXC(    LOAD    t6, UNIT(6)(src),       .LFl_exc_copy)
+EXC(    LOAD    t7, UNIT(7)(src),       .LFl_exc_copy)
+       SUB     len, len, 8*NBYTES
+       ADD     src, src, 8*NBYTES
+       STORE   t0, UNIT(0)(dst)
+       ADDC(sum, t0)
+       STORE   t1, UNIT(1)(dst)
+       ADDC(sum, t1)
+       STORE   t2, UNIT(2)(dst)
+       ADDC(sum, t2)
+       STORE   t3, UNIT(3)(dst)
+       ADDC(sum, t3)
+       STORE   t4, UNIT(4)(dst)
+       ADDC(sum, t4)
+       STORE   t5, UNIT(5)(dst)
+       ADDC(sum, t5)
+       STORE   t6, UNIT(6)(dst)
+       ADDC(sum, t6)
+       STORE   t7, UNIT(7)(dst)
+       ADDC(sum, t7)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 8*NBYTES
+       PREFE(  0, 8*32(src) )
+       PREF(   1, 8*32(dst) )
+       bgez    len, 1b
+       .set    noreorder
+       ADD     len, 8*NBYTES           # revert len (see above)
+
+       /*
+        * len == the number of bytes left to copy < 8*NBYTES
+        */
+.LFcleanup_both_aligned:
+       beqz    len, .LFdone
+        sltu   t0, len, 4*NBYTES
+       bnez    t0, .LFless_than_4units
+        and    rem, len, (NBYTES-1)    # rem = len % NBYTES
+       /*
+        * len >= 4*NBYTES
+        */
+EXC(    LOAD    t0, UNIT(0)(src),       .LFl_exc)
+EXC(    LOAD    t1, UNIT(1)(src),       .LFl_exc_copy)
+EXC(    LOAD    t2, UNIT(2)(src),       .LFl_exc_copy)
+EXC(    LOAD    t3, UNIT(3)(src),       .LFl_exc_copy)
+       SUB     len, len, 4*NBYTES
+       ADD     src, src, 4*NBYTES
+       STORE   t0, UNIT(0)(dst)
+       ADDC(sum, t0)
+       STORE   t1, UNIT(1)(dst)
+       ADDC(sum, t1)
+       STORE   t2, UNIT(2)(dst)
+       ADDC(sum, t2)
+       STORE   t3, UNIT(3)(dst)
+       ADDC(sum, t3)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       beqz    len, .LFdone
+       .set    noreorder
+.LFless_than_4units:
+       /*
+        * rem = len % NBYTES
+        */
+       beq     rem, len, .LFcopy_bytes
+        nop
+1:
+EXC(    LOAD    t0, 0(src),             .LFl_exc)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+       STORE   t0, 0(dst)
+       ADDC(sum, t0)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     rem, len, 1b
+       .set    noreorder
+
+       /*
+        * src and dst are aligned, need to copy rem bytes (rem < NBYTES)
+        * A loop would do only a byte at a time with possible branch
+        * mispredicts.  Can't do an explicit LOAD dst,mask,or,STORE
+        * because can't assume read-access to dst.  Instead, use
+        * STREST dst, which doesn't require read access to dst.
+        *
+        * This code should perform better than a simple loop on modern,
+        * wide-issue mips processors because the code has fewer branches and
+        * more instruction-level parallelism.
+        */
+       beqz    len, .LFdone
+        ADD    t1, dst, len    # t1 is just past last byte of dst
+       li      bits, 8*NBYTES
+       SLL     rem, len, 3     # rem = number of bits to keep
+EXC(    LOAD    t0, 0(src),             .LFl_exc)
+       SUB     bits, bits, rem # bits = number of bits to discard
+       SHIFT_DISCARD t0, t0, bits
+       STREST  t0, -1(t1)
+       SHIFT_DISCARD_REVERT t0, t0, bits
+       .set reorder
+       ADDC(sum, t0)
+       b       .LFdone
+       .set noreorder
+.LFdst_unaligned:
+       /*
+        * dst is unaligned
+        * t0 = src & ADDRMASK
+        * t1 = dst & ADDRMASK; T1 > 0
+        * len >= NBYTES
+        *
+        * Copy enough bytes to align dst
+        * Set match = (src and dst have same alignment)
+        */
+EXC(    LDFIRST t3, FIRST(0)(src),      .LFl_exc)
+       ADD     t2, zero, NBYTES
+EXC(    LDREST  t3, REST(0)(src),       .LFl_exc_copy)
+       SUB     t2, t2, t1      # t2 = number of bytes copied
+       xor     match, t0, t1
+       STFIRST t3, FIRST(0)(dst)
+       SLL     t4, t1, 3               # t4 = number of bits to discard
+       SHIFT_DISCARD t3, t3, t4
+       /* no SHIFT_DISCARD_REVERT to handle odd buffer properly */
+       ADDC(sum, t3)
+       beq     len, t2, .LFdone
+        SUB    len, len, t2
+       ADD     dst, dst, t2
+       beqz    match, .LFboth_aligned
+        ADD    src, src, t2
+
+.LFsrc_unaligned_dst_aligned:
+       SRL     t0, len, LOG_NBYTES+2    # +2 for 4 units/iter
+       PREFE(  0, 3*32(src) )
+       beqz    t0, .LFcleanup_src_unaligned
+        and    rem, len, (4*NBYTES-1)   # rem = len % 4*NBYTES
+       PREF(   1, 3*32(dst) )
+1:
+/*
+ * Avoid consecutive LD*'s to the same register since some mips
+ * implementations can't issue them in the same cycle.
+ * It's OK to load FIRST(N+1) before REST(N) because the two addresses
+ * are to the same unit (unless src is aligned, but it's not).
+ */
+EXC(    LDFIRST t0, FIRST(0)(src),      .LFl_exc)
+EXC(    LDFIRST t1, FIRST(1)(src),      .LFl_exc_copy)
+       SUB     len, len, 4*NBYTES
+EXC(    LDREST  t0, REST(0)(src),       .LFl_exc_copy)
+EXC(    LDREST  t1, REST(1)(src),       .LFl_exc_copy)
+EXC(    LDFIRST t2, FIRST(2)(src),      .LFl_exc_copy)
+EXC(    LDFIRST t3, FIRST(3)(src),      .LFl_exc_copy)
+EXC(    LDREST  t2, REST(2)(src),       .LFl_exc_copy)
+EXC(    LDREST  t3, REST(3)(src),       .LFl_exc_copy)
+       PREFE(  0, 9*32(src) )          # 0 is PREF_LOAD  (not streamed)
+       ADD     src, src, 4*NBYTES
+#ifdef CONFIG_CPU_SB1
+       nop                             # improves slotting
+#endif
+       STORE   t0, UNIT(0)(dst)
+       ADDC(sum, t0)
+       STORE   t1, UNIT(1)(dst)
+       ADDC(sum, t1)
+       STORE   t2, UNIT(2)(dst)
+       ADDC(sum, t2)
+       STORE   t3, UNIT(3)(dst)
+       ADDC(sum, t3)
+       PREF(   1, 9*32(dst) )          # 1 is PREF_STORE (not streamed)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LFcleanup_src_unaligned:
+       beqz    len, .LFdone
+        and    rem, len, NBYTES-1  # rem = len % NBYTES
+       beq     rem, len, .LFcopy_bytes
+        nop
+1:
+EXC(    LDFIRST t0, FIRST(0)(src),      .LFl_exc)
+EXC(    LDREST  t0, REST(0)(src),       .LFl_exc_copy)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+       STORE   t0, 0(dst)
+       ADDC(sum, t0)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LFcopy_bytes_checklen:
+       beqz    len, .LFdone
+        nop
+.LFcopy_bytes:
+       /* 0 < len < NBYTES  */
+       move    t2, zero        # partial word
+       li      t3, SHIFT_START # shift
+/* use .LFl_exc_copy here to return the correct sum on fault */
+#define COPY_BYTE(N)                    \
+EXC(    lbue    t0, N(src), .LFl_exc_copy);      \
+       SUB     len, len, 1;            \
+       sb      t0, N(dst);   \
+       SLLV    t0, t0, t3;             \
+       addu    t3, SHIFT_INC;          \
+       beqz    len, .LFcopy_bytes_done; \
+        or     t2, t0
+
+       COPY_BYTE(0)
+       COPY_BYTE(1)
+#ifdef USE_DOUBLE
+       COPY_BYTE(2)
+       COPY_BYTE(3)
+       COPY_BYTE(4)
+       COPY_BYTE(5)
+#endif
+EXC(    lbue    t0, NBYTES-2(src), .LFl_exc_copy)
+       SUB     len, len, 1
+       sb      t0, NBYTES-2(dst)
+       SLLV    t0, t0, t3
+       or      t2, t0
+.LFcopy_bytes_done:
+       ADDC(sum, t2)
+.LFdone:
+       /* fold checksum */
+#ifdef USE_DOUBLE
+       dsll32  v1, sum, 0
+       daddu   sum, v1
+       sltu    v1, sum, v1
+       dsra32  sum, sum, 0
+       addu    sum, v1
+#endif
+
+#ifdef CONFIG_CPU_MIPSR2
+       wsbh    v1, sum
+       movn    sum, v1, odd
+#else
+       beqz    odd, 1f                 /* odd buffer alignment? */
+        lui    v1, 0x00ff
+       addu    v1, 0x00ff
+       and     t0, sum, v1
+       sll     t0, t0, 8
+       srl     sum, sum, 8
+       and     sum, sum, v1
+       or      sum, sum, t0
+1:
+#endif
+       .set reorder
+       ADDC32(sum, psum)
+       jr      ra
+       .set noreorder
+
+.LFl_exc_copy:
+       /*
+        * Copy bytes from src until faulting load address (or until a
+        * lb faults)
+        *
+        * When reached by a faulting LDFIRST/LDREST, THREAD_BUADDR($28)
+        * may be more than a byte beyond the last address.
+        * Hence, the lb below may get an exception.
+        *
+        * Assumes src < THREAD_BUADDR($28)
+        */
+       LOADK   t0, TI_TASK($28)
+        li     t2, SHIFT_START
+       addi    t0, t0, THREAD_BUADDR
+       LOADK   t0, 0(t0)
+1:
+EXC(    lbue    t1, 0(src),     .LFl_exc)
+       ADD     src, src, 1
+       sb      t1, 0(dst)      # can't fault -- we're copy_from_user
+       SLLV    t1, t1, t2
+       addu    t2, SHIFT_INC
+       ADDC(sum, t1)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 1
+       bne     src, t0, 1b
+       .set    noreorder
+.LFl_exc:
+       LOADK   t0, TI_TASK($28)
+       addi    t0, t0, THREAD_BUADDR
+       LOADK   t0, 0(t0)               # t0 is just past last good address
+       SUB     len, AT, t0             # len number of uncopied bytes
+       /*
+        * Here's where we rely on src and dst being incremented in tandem,
+        *   See (3) above.
+        * dst += (fault addr - src) to put dst at first byte to clear
+        */
+       ADD     dst, t0                 # compute start address in a1
+       SUB     dst, src
+       /*
+        * Clear len bytes starting at dst.  Can't call __bzero because it
+        * might modify len.  An inefficient loop for these rare times...
+        */
+       .set    reorder                         /* DADDI_WAR */
+       SUB     src, len, 1
+       beqz    len, .LFdone
+       .set    noreorder
+1:     sb      zero, 0(dst)
+       ADD     dst, dst, 1
+       .set    push
+       .set    noat
+#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
+       bnez    src, 1b
+        SUB    src, src, 1
+#else
+       li      v1, 1
+       bnez    src, 1b
+        SUB    src, src, v1
+#endif
+       li      v1, -EFAULT
+       b       .LFdone
+        sw     v1, (errptr)
+
+       .set    pop
+       END(__csum_partial_copy_fromuser)
+
+
+
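What the .LFdone epilogue above computes, as a hedged C sketch (standard
ones-complement folding; the byte swap is what wsbh/movn, or the pre-R2
shift-and-mask fallback, perform when dst started on an odd byte):

    #include <stdint.h>

    static uint32_t fold_csum(uint64_t sum, int odd)
    {
            uint32_t s;

            /* fold the wide accumulator to 32 bits, end-around carry */
            sum = (sum & 0xffffffffu) + (sum >> 32);
            sum = (sum & 0xffffffffu) + (sum >> 32);
            s = (uint32_t)sum;
            if (odd)        /* odd destination alignment: swap the
                             * bytes within each halfword */
                    s = ((s & 0x00ff00ffu) << 8) | ((s >> 8) & 0x00ff00ffu);
            return s;
    }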
+#undef  LOAD
+#undef  LOADL
+#undef  LOADR
+#undef  STORE
+#undef  STOREL
+#undef  STORER
+#undef  LDFIRST
+#undef  LDREST
+#undef  STFIRST
+#undef  STREST
+#undef  COPY_BYTE
+
+#define LOAD   lw
+#define LOADL  lwl
+#define LOADR  lwr
+#define STOREL swle
+#define STORER swre
+#define STORE  swe
+
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
+#define LDFIRST LOADR
+#define LDREST  LOADL
+#define STFIRST STORER
+#define STREST  STOREL
+#else
+#define LDFIRST LOADL
+#define LDREST  LOADR
+#define STFIRST STOREL
+#define STREST  STORER
+#endif
+
+LEAF(__csum_partial_copy_touser)
+       PTR_ADDU        AT, src, len    /* See (1) above. */
+#ifdef CONFIG_64BIT
+       move    errptr, a4
+#else
+       lw      errptr, 16(sp)
+#endif
+       move    sum, zero
+       move    odd, zero
+       /*
+        * Note: dst & src may be unaligned, len may be 0
+        * Temps
+        */
+       /*
+        * The "issue break"s below are very approximate.
+        * Issue delays for dcache fills will perturb the schedule, as will
+        * load queue full replay traps, etc.
+        *
+        * If len < NBYTES use byte operations.
+        */
+       PREF(   0, 0(src) )
+       PREFE(  1, 0(dst) )
+       sltu    t2, len, NBYTES
+       and     t1, dst, ADDRMASK
+       PREF(   0, 1*32(src) )
+       PREFE(  1, 1*32(dst) )
+       bnez    t2, .LTcopy_bytes_checklen
+        and    t0, src, ADDRMASK
+       andi    odd, dst, 0x1                   /* odd buffer? */
+       PREF(   0, 2*32(src) )
+       PREFE(  1, 2*32(dst) )
+       bnez    t1, .LTdst_unaligned
+        nop
+       bnez    t0, .LTsrc_unaligned_dst_aligned
+       /*
+        * use delay slot for fall-through
+        * src and dst are aligned; need to compute rem
+        */
+.LTboth_aligned:
+        SRL    t0, len, LOG_NBYTES+3    # +3 for 8 units/iter
+       beqz    t0, .LTcleanup_both_aligned # len < 8*NBYTES
+        nop
+       SUB     len, 8*NBYTES           # subtract here for bgez loop
+       PREF(   0, 3*32(src) )
+       PREFE(  1, 3*32(dst) )
+       .align  4
+1:
+       LOAD    t0, UNIT(0)(src)
+       LOAD    t1, UNIT(1)(src)
+       LOAD    t2, UNIT(2)(src)
+       LOAD    t3, UNIT(3)(src)
+       LOAD    t4, UNIT(4)(src)
+       LOAD    t5, UNIT(5)(src)
+       LOAD    t6, UNIT(6)(src)
+       LOAD    t7, UNIT(7)(src)
+       SUB     len, len, 8*NBYTES
+       ADD     src, src, 8*NBYTES
+EXC(    STORE   t0, UNIT(0)(dst),       .LTs_exc)
+       ADDC(sum, t0)
+EXC(    STORE   t1, UNIT(1)(dst),       .LTs_exc)
+       ADDC(sum, t1)
+EXC(    STORE   t2, UNIT(2)(dst),       .LTs_exc)
+       ADDC(sum, t2)
+EXC(    STORE   t3, UNIT(3)(dst),       .LTs_exc)
+       ADDC(sum, t3)
+EXC(    STORE   t4, UNIT(4)(dst),       .LTs_exc)
+       ADDC(sum, t4)
+EXC(    STORE   t5, UNIT(5)(dst),       .LTs_exc)
+       ADDC(sum, t5)
+EXC(    STORE   t6, UNIT(6)(dst),       .LTs_exc)
+       ADDC(sum, t6)
+EXC(    STORE   t7, UNIT(7)(dst),       .LTs_exc)
+       ADDC(sum, t7)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 8*NBYTES
+       PREF(   0, 8*32(src) )
+       PREFE(  1, 8*32(dst) )
+       bgez    len, 1b
+       .set    noreorder
+       ADD     len, 8*NBYTES           # revert len (see above)
+
+       /*
+        * len == the number of bytes left to copy < 8*NBYTES
+        */
+.LTcleanup_both_aligned:
+       beqz    len, .LTdone
+        sltu   t0, len, 4*NBYTES
+       bnez    t0, .LTless_than_4units
+        and    rem, len, (NBYTES-1)    # rem = len % NBYTES
+       /*
+        * len >= 4*NBYTES
+        */
+       LOAD    t0, UNIT(0)(src)
+       LOAD    t1, UNIT(1)(src)
+       LOAD    t2, UNIT(2)(src)
+       LOAD    t3, UNIT(3)(src)
+       SUB     len, len, 4*NBYTES
+       ADD     src, src, 4*NBYTES
+EXC(    STORE   t0, UNIT(0)(dst),       .LTs_exc)
+       ADDC(sum, t0)
+EXC(    STORE   t1, UNIT(1)(dst),       .LTs_exc)
+       ADDC(sum, t1)
+EXC(    STORE   t2, UNIT(2)(dst),       .LTs_exc)
+       ADDC(sum, t2)
+EXC(    STORE   t3, UNIT(3)(dst),       .LTs_exc)
+       ADDC(sum, t3)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       beqz    len, .LTdone
+       .set    noreorder
+.LTless_than_4units:
+       /*
+        * rem = len % NBYTES
+        */
+       beq     rem, len, .LTcopy_bytes
+        nop
+1:
+       LOAD    t0, 0(src)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+EXC(    STORE   t0, 0(dst),             .LTs_exc)
+       ADDC(sum, t0)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     rem, len, 1b
+       .set    noreorder
+
+       /*
+        * src and dst are aligned, need to copy rem bytes (rem < NBYTES)
+        * A loop would do only a byte at a time with possible branch
+        * mispredicts.  Can't do an explicit LOAD dst,mask,or,STORE
+        * because can't assume read-access to dst.  Instead, use
+        * STREST dst, which doesn't require read access to dst.
+        *
+        * This code should perform better than a simple loop on modern,
+        * wide-issue mips processors because the code has fewer branches and
+        * more instruction-level parallelism.
+        */
+       beqz    len, .LTdone
+        ADD    t1, dst, len    # t1 is just past last byte of dst
+       li      bits, 8*NBYTES
+       SLL     rem, len, 3     # rem = number of bits to keep
+       LOAD    t0, 0(src)
+       SUB     bits, bits, rem # bits = number of bits to discard
+       SHIFT_DISCARD t0, t0, bits
+EXC(    STREST  t0, -1(t1),             .LTs_exc)
+       SHIFT_DISCARD_REVERT t0, t0, bits
+       .set reorder
+       ADDC(sum, t0)
+       b       .LTdone
+       .set noreorder
+.LTdst_unaligned:
+       /*
+        * dst is unaligned
+        * t0 = src & ADDRMASK
+        * t1 = dst & ADDRMASK; T1 > 0
+        * len >= NBYTES
+        *
+        * Copy enough bytes to align dst
+        * Set match = (src and dst have same alignment)
+        */
+       LDFIRST t3, FIRST(0)(src)
+       ADD     t2, zero, NBYTES
+       LDREST  t3, REST(0)(src)
+       SUB     t2, t2, t1      # t2 = number of bytes copied
+       xor     match, t0, t1
+EXC(    STFIRST t3, FIRST(0)(dst),      .LTs_exc)
+       SLL     t4, t1, 3               # t4 = number of bits to discard
+       SHIFT_DISCARD t3, t3, t4
+       /* no SHIFT_DISCARD_REVERT to handle odd buffer properly */
+       ADDC(sum, t3)
+       beq     len, t2, .LTdone
+        SUB    len, len, t2
+       ADD     dst, dst, t2
+       beqz    match, .LTboth_aligned
+        ADD    src, src, t2
+
+.LTsrc_unaligned_dst_aligned:
+       SRL     t0, len, LOG_NBYTES+2    # +2 for 4 units/iter
+       PREF(   0, 3*32(src) )
+       beqz    t0, .LTcleanup_src_unaligned
+        and    rem, len, (4*NBYTES-1)   # rem = len % 4*NBYTES
+       PREFE(  1, 3*32(dst) )
+1:
+/*
+ * Avoid consecutive LD*'s to the same register since some mips
+ * implementations can't issue them in the same cycle.
+ * It's OK to load FIRST(N+1) before REST(N) because the two addresses
+ * are to the same unit (unless src is aligned, but it's not).
+ */
+       LDFIRST t0, FIRST(0)(src)
+       LDFIRST t1, FIRST(1)(src)
+       SUB     len, len, 4*NBYTES
+       LDREST  t0, REST(0)(src)
+       LDREST  t1, REST(1)(src)
+       LDFIRST t2, FIRST(2)(src)
+       LDFIRST t3, FIRST(3)(src)
+       LDREST  t2, REST(2)(src)
+       LDREST  t3, REST(3)(src)
+       PREF(   0, 9*32(src) )          # 0 is PREF_LOAD  (not streamed)
+       ADD     src, src, 4*NBYTES
+#ifdef CONFIG_CPU_SB1
+       nop                             # improves slotting
+#endif
+EXC(    STORE   t0, UNIT(0)(dst),       .LTs_exc)
+       ADDC(sum, t0)
+EXC(    STORE   t1, UNIT(1)(dst),       .LTs_exc)
+       ADDC(sum, t1)
+EXC(    STORE   t2, UNIT(2)(dst),       .LTs_exc)
+       ADDC(sum, t2)
+EXC(    STORE   t3, UNIT(3)(dst),       .LTs_exc)
+       ADDC(sum, t3)
+       PREFE(  1, 9*32(dst) )          # 1 is PREF_STORE (not streamed)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LTcleanup_src_unaligned:
+       beqz    len, .LTdone
+        and    rem, len, NBYTES-1  # rem = len % NBYTES
+       beq     rem, len, .LTcopy_bytes
+        nop
+1:
+       LDFIRST t0, FIRST(0)(src)
+       LDREST  t0, REST(0)(src)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+EXC(    STORE   t0, 0(dst),             .LTs_exc)
+       ADDC(sum, t0)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LTcopy_bytes_checklen:
+       beqz    len, .LTdone
+        nop
+.LTcopy_bytes:
+       /* 0 < len < NBYTES  */
+       move    t2, zero        # partial word
+       li      t3, SHIFT_START # shift
+/* stores fault to .LTs_exc: a failed user store returns -EFAULT */
+#define COPY_BYTE(N)                    \
+       lbu     t0, N(src);     \
+       SUB     len, len, 1;            \
+EXC(    sbe     t0, N(dst), .LTs_exc);   \
+       SLLV    t0, t0, t3;             \
+       addu    t3, SHIFT_INC;          \
+       beqz    len, .LTcopy_bytes_done; \
+        or     t2, t0
+
+       COPY_BYTE(0)
+       COPY_BYTE(1)
+#ifdef USE_DOUBLE
+       COPY_BYTE(2)
+       COPY_BYTE(3)
+       COPY_BYTE(4)
+       COPY_BYTE(5)
+#endif
+       lbu     t0, NBYTES-2(src)
+       SUB     len, len, 1
+EXC(    sbe     t0, NBYTES-2(dst), .LTs_exc)
+       SLLV    t0, t0, t3
+       or      t2, t0
+.LTcopy_bytes_done:
+       ADDC(sum, t2)
+.LTdone:
+       /* fold checksum */
+#ifdef USE_DOUBLE
+       dsll32  v1, sum, 0
+       daddu   sum, v1
+       sltu    v1, sum, v1
+       dsra32  sum, sum, 0
+       addu    sum, v1
+#endif
+
+#ifdef CONFIG_CPU_MIPSR2
+       wsbh    v1, sum
+       movn    sum, v1, odd
+#else
+       beqz    odd, 1f                 /* odd buffer alignment? */
+        lui    v1, 0x00ff
+       addu    v1, 0x00ff
+       and     t0, sum, v1
+       sll     t0, t0, 8
+       srl     sum, sum, 8
+       and     sum, sum, v1
+       or      sum, sum, t0
+1:
+#endif
+       .set reorder
+       ADDC32(sum, psum)
+       jr      ra
+       .set noreorder
+
+.LTs_exc:
+       li      v0, -1 /* invalid checksum */
+       li      v1, -EFAULT
+       jr      ra
+        sw     v1, (errptr)
+       END(__csum_partial_copy_touser)
+
+#endif  /* CONFIG_EVA */
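A hedged sketch of how callers could select among the three entry points
(the macro names are placeholders, not this patch's actual glue code):

    /* Under EVA the kernel and user address views differ, so the
     * from-user copy must use the lwe/lbue loads and the to-user copy
     * the swe/sbe stores; without EVA one routine serves both
     * directions. */
    #ifdef CONFIG_EVA
    # define CSUM_COPY_FROM_USER    __csum_partial_copy_fromuser
    # define CSUM_COPY_TO_USER      __csum_partial_copy_touser
    #else
    # define CSUM_COPY_FROM_USER    __csum_partial_copy_user
    # define CSUM_COPY_TO_USER      __csum_partial_copy_user
    #endif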
index 32b9f21bfd8562f37d8e51e1ad23908c320ad3e8..70519e2e0a43faee9dabce81b458c6083dfd08eb 100644 (file)
@@ -1,8 +1,13 @@
 /*
- * Dump R4x00 TLB for debugging purposes.
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
  *
  * Copyright (C) 1994, 1995 by Waldorf Electronics, written by Ralf Baechle.
  * Copyright (C) 1999 by Silicon Graphics, Inc.
+ * Copyright (C) 2011 MIPS Technologies, Inc.
+ *
+ * Dump R4x00 TLB for debugging purposes.
  */
 #include <linux/kernel.h>
 #include <linux/mm.h>
@@ -59,8 +64,10 @@ static void dump_tlb(int first, int last)
 
        for (i = first; i <= last; i++) {
                write_c0_index(i);
+               back_to_back_c0_hazard();
                BARRIER();
                tlb_read();
+               back_to_back_c0_hazard();
                BARRIER();
                pagemask = read_c0_pagemask();
                entryhi  = read_c0_entryhi();
@@ -68,8 +75,8 @@ static void dump_tlb(int first, int last)
                entrylo1 = read_c0_entrylo1();
 
                /* Unused entries have a virtual address of CKSEG0.  */
-               if ((entryhi & ~0x1ffffUL) != CKSEG0
-                   && (entryhi & 0xff) == asid) {
+               if (((entryhi & ~0x1ffffUL) != CKSEG0) &&
+                   !(cpu_has_tlbinv && (entryhi & MIPS_EHINV))) {
 #ifdef CONFIG_32BIT
                        int width = 8;
 #else
@@ -83,7 +90,7 @@ static void dump_tlb(int first, int last)
                        c0 = (entrylo0 >> 3) & 7;
                        c1 = (entrylo1 >> 3) & 7;
 
-                       printk("va=%0*lx asid=%02lx\n",
+                       printk("va=%0*lx asid=%02lx:",
                               width, (entryhi & ~0x1fffUL),
                               entryhi & 0xff);
                        printk("\t[pa=%0*llx c=%d d=%d v=%d g=%d] ",
@@ -105,6 +112,8 @@ static void dump_tlb(int first, int last)
        write_c0_entryhi(s_entryhi);
        write_c0_index(s_index);
        write_c0_pagemask(s_pagemask);
+       BARRIER();
+       back_to_back_c0_hazard();
 }
 
 void dump_tlb_all(void)
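The new liveness test in C form (MIPS_EHINV as bit 10 of EntryHi is an
assumption of this sketch):

    #define MIPS_EHINV   (1ul << 10)   /* EntryHi.EHINV, assumed */
    #define CKSEG0_BASE  0x80000000ul

    /* Unused entries are parked on a CKSEG0 VPN; on cores with the
     * TLB-invalidate feature an invalidated entry may keep a stale VPN
     * but sets EHINV, so it must be skipped as well. */
    static int tlb_entry_live(unsigned long entryhi, int has_tlbinv)
    {
            if ((entryhi & ~0x1ffffUL) == CKSEG0_BASE)
                    return 0;
            if (has_tlbinv && (entryhi & MIPS_EHINV))
                    return 0;
            return 1;
    }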
index c5c40dad0bbf6791d13da727b7262063b4511de0..835996bc91f0e41eddd959f865a723ae4ba38660 100644 (file)
 #define NBYTES 8
 #define LOG_NBYTES 3
 
+#define LOADK  ld
+
 /*
  * As we are sharing code base with the mips32 tree (which use the o32 ABI
  * register definitions). We need to redefine the register definitions from
 #define NBYTES 4
 #define LOG_NBYTES 2
 
+#define LOADK  lw
+
 #endif /* USE_DOUBLE */
 
 #ifdef CONFIG_CPU_LITTLE_ENDIAN
 #define LDFIRST LOADR
-#define LDREST LOADL
+#define LDREST  LOADL
 #define STFIRST STORER
-#define STREST STOREL
+#define STREST  STOREL
 #define SHIFT_DISCARD SLLV
 #else
 #define LDFIRST LOADL
-#define LDREST LOADR
+#define LDREST  LOADR
 #define STFIRST STOREL
-#define STREST STORER
+#define STREST  STORER
 #define SHIFT_DISCARD SRLV
 #endif
 
@@ -235,7 +239,7 @@ __copy_user_common:
         * src and dst are aligned; need to compute rem
         */
 .Lboth_aligned:
-        SRL    t0, len, LOG_NBYTES+3    # +3 for 8 units/iter
+        SRL    t0, len, LOG_NBYTES+3    # +3 for 8 units/iter
        beqz    t0, .Lcleanup_both_aligned # len < 8*NBYTES
         and    rem, len, (8*NBYTES-1)   # rem = len % (8*NBYTES)
        PREF(   0, 3*32(src) )
@@ -313,7 +317,7 @@ EXC(        STORE   t0, 0(dst),             .Ls_exc_p1u)
        /*
         * src and dst are aligned, need to copy rem bytes (rem < NBYTES)
         * A loop would do only a byte at a time with possible branch
-        * mispredicts.  Can't do an explicit LOAD dst,mask,or,STORE
+        * mispredicts.  Can't do an explicit LOAD dst,mask,or,STORE
         * because can't assume read-access to dst.  Instead, use
         * STREST dst, which doesn't require read access to dst.
         *
@@ -327,7 +331,7 @@ EXC(        STORE   t0, 0(dst),             .Ls_exc_p1u)
        li      bits, 8*NBYTES
        SLL     rem, len, 3     # rem = number of bits to keep
 EXC(   LOAD    t0, 0(src),             .Ll_exc)
-       SUB     bits, bits, rem # bits = number of bits to discard
+       SUB     bits, bits, rem # bits = number of bits to discard
        SHIFT_DISCARD t0, t0, bits
 EXC(   STREST  t0, -1(t1),             .Ls_exc)
        jr      ra
@@ -343,7 +347,7 @@ EXC(        STREST  t0, -1(t1),             .Ls_exc)
         * Set match = (src and dst have same alignment)
         */
 #define match rem
-EXC(   LDFIRST t3, FIRST(0)(src),      .Ll_exc)
+EXC(   LDFIRST t3, FIRST(0)(src),      .Ll_exc)
        ADD     t2, zero, NBYTES
 EXC(   LDREST  t3, REST(0)(src),       .Ll_exc_copy)
        SUB     t2, t2, t1      # t2 = number of bytes copied
@@ -357,10 +361,10 @@ EXC(      STFIRST t3, FIRST(0)(dst),      .Ls_exc)
         ADD    src, src, t2
 
 .Lsrc_unaligned_dst_aligned:
-       SRL     t0, len, LOG_NBYTES+2    # +2 for 4 units/iter
+       SRL     t0, len, LOG_NBYTES+2    # +2 for 4 units/iter
        PREF(   0, 3*32(src) )
        beqz    t0, .Lcleanup_src_unaligned
-        and    rem, len, (4*NBYTES-1)   # rem = len % 4*NBYTES
+        and    rem, len, (4*NBYTES-1)   # rem = len % 4*NBYTES
        PREF(   1, 3*32(dst) )
 1:
 /*
@@ -370,13 +374,13 @@ EXC(      STFIRST t3, FIRST(0)(dst),      .Ls_exc)
  * are to the same unit (unless src is aligned, but it's not).
  */
        R10KCBARRIER(0(ra))
-EXC(   LDFIRST t0, FIRST(0)(src),      .Ll_exc)
-EXC(   LDFIRST t1, FIRST(1)(src),      .Ll_exc_copy)
-       SUB     len, len, 4*NBYTES
+EXC(   LDFIRST t0, FIRST(0)(src),      .Ll_exc)
+EXC(   LDFIRST t1, FIRST(1)(src),      .Ll_exc_copy)
+       SUB     len, len, 4*NBYTES
 EXC(   LDREST  t0, REST(0)(src),       .Ll_exc_copy)
 EXC(   LDREST  t1, REST(1)(src),       .Ll_exc_copy)
-EXC(   LDFIRST t2, FIRST(2)(src),      .Ll_exc_copy)
-EXC(   LDFIRST t3, FIRST(3)(src),      .Ll_exc_copy)
+EXC(   LDFIRST t2, FIRST(2)(src),      .Ll_exc_copy)
+EXC(   LDFIRST t3, FIRST(3)(src),      .Ll_exc_copy)
 EXC(   LDREST  t2, REST(2)(src),       .Ll_exc_copy)
 EXC(   LDREST  t3, REST(3)(src),       .Ll_exc_copy)
        PREF(   0, 9*32(src) )          # 0 is PREF_LOAD  (not streamed)
@@ -388,7 +392,7 @@ EXC(        STORE   t0, UNIT(0)(dst),       .Ls_exc_p4u)
 EXC(   STORE   t1, UNIT(1)(dst),       .Ls_exc_p3u)
 EXC(   STORE   t2, UNIT(2)(dst),       .Ls_exc_p2u)
 EXC(   STORE   t3, UNIT(3)(dst),       .Ls_exc_p1u)
-       PREF(   1, 9*32(dst) )          # 1 is PREF_STORE (not streamed)
+       PREF(   1, 9*32(dst) )          # 1 is PREF_STORE (not streamed)
        .set    reorder                         /* DADDI_WAR */
        ADD     dst, dst, 4*NBYTES
        bne     len, rem, 1b
@@ -451,9 +455,9 @@ EXC(         sb     t0, NBYTES-2(dst), .Ls_exc_p1)
         *
         * Assumes src < THREAD_BUADDR($28)
         */
-       LOAD    t0, TI_TASK($28)
+       LOADK   t0, TI_TASK($28)
         nop
-       LOAD    t0, THREAD_BUADDR(t0)
+       LOADK   t0, THREAD_BUADDR(t0)
 1:
 EXC(   lb      t1, 0(src),     .Ll_exc)
        ADD     src, src, 1
@@ -463,12 +467,12 @@ EXC(      lb      t1, 0(src),     .Ll_exc)
        bne     src, t0, 1b
        .set    noreorder
 .Ll_exc:
-       LOAD    t0, TI_TASK($28)
+       LOADK   t0, TI_TASK($28)
         nop
-       LOAD    t0, THREAD_BUADDR(t0)   # t0 is just past last good address
+       LOADK   t0, THREAD_BUADDR(t0)   # t0 is just past last good address
         nop
-       SUB     len, AT, t0             # len number of uncopied bytes
        bnez    t6, .Ldone      /* Skip the zeroing part if inatomic */
+        SUB     len, AT, t0            # len number of uncopied bytes
        /*
         * Here's where we rely on src and dst being incremented in tandem,
         *   See (3) above.
@@ -502,7 +506,7 @@ EXC(        lb      t1, 0(src),     .Ll_exc)
 
 
 #define SEXC(n)                                                        \
-       .set    reorder;                        /* DADDI_WAR */ \
+       .set    reorder;                        /* DADDI_WAR */ \
 .Ls_exc_p ## n ## u:                                           \
        ADD     len, len, n*NBYTES;                             \
        jr      ra;                                             \
@@ -575,3 +579,940 @@ LEAF(__rmemcpy)                                   /* a0=dst a1=src a2=len */
        jr      ra
         move   a2, zero
        END(__rmemcpy)
+
+#ifdef CONFIG_EVA
+
+       .set    eva
+
+LEAF(__copy_fromuser_inatomic)
+       b       __copy_fromuser_common
+        li     t6, 1
+       END(__copy_fromuser_inatomic)
+
+#undef  LOAD
+#undef  LOADL
+#undef  LOADR
+#undef  STORE
+#undef  STOREL
+#undef  STORER
+#undef  LDFIRST
+#undef  LDREST
+#undef  STFIRST
+#undef  STREST
+#undef  SHIFT_DISCARD
+#undef  COPY_BYTE
+#undef  SEXC
+
+#define LOAD   lwe
+#define LOADL  lwle
+#define LOADR  lwre
+#define STOREL swl
+#define STORER swr
+#define STORE  sw
+
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
+#define LDFIRST LOADR
+#define LDREST  LOADL
+#define STFIRST STORER
+#define STREST  STOREL
+#define SHIFT_DISCARD SLLV
+#else
+#define LDFIRST LOADL
+#define LDREST  LOADR
+#define STFIRST STOREL
+#define STREST  STORER
+#define SHIFT_DISCARD SRLV
+#endif
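
The LDFIRST/LDREST pairs selected above assemble an aligned word from an
unaligned user address with two partial loads (lwre/lwle on little-endian)
instead of taking an address-error exception. A byte-wise C model of the
merged result, for reference only; the hardware does this in two instructions:

```c
#include <stdint.h>

/* What LDFIRST+LDREST leave in the register for an unaligned
 * little-endian load: the four bytes at p, low byte first. */
static uint32_t load_word_unaligned_le(const uint8_t *p)
{
	return (uint32_t)p[0]       | (uint32_t)p[1] <<  8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}
```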
+
+LEAF(__copy_fromuser)
+       li      t6, 0   /* not inatomic */
+__copy_fromuser_common:
+       /*
+        * Note: dst & src may be unaligned, len may be 0
+        * Temps
+        */
+
+       R10KCBARRIER(0(ra))
+       /*
+        * The "issue break"s below are very approximate.
+        * Issue delays for dcache fills will perturb the schedule, as will
+        * load queue full replay traps, etc.
+        *
+        * If len < NBYTES use byte operations.
+        */
+       PREFE(  0, 0(src) )
+       PREF(   1, 0(dst) )
+       sltu    t2, len, NBYTES
+       and     t1, dst, ADDRMASK
+       PREFE(  0, 1*32(src) )
+       PREF(   1, 1*32(dst) )
+       bnez    t2, .LFcopy_bytes_checklen
+        and    t0, src, ADDRMASK
+       PREFE(  0, 2*32(src) )
+       PREF(   1, 2*32(dst) )
+       bnez    t1, .LFdst_unaligned
+        nop
+       bnez    t0, .LFsrc_unaligned_dst_aligned
+       /*
+        * use delay slot for fall-through
+        * src and dst are aligned; need to compute rem
+        */
+.LFboth_aligned:
+        SRL    t0, len, LOG_NBYTES+3    # +3 for 8 units/iter
+       beqz    t0, .LFcleanup_both_aligned # len < 8*NBYTES
+        and    rem, len, (8*NBYTES-1)   # rem = len % (8*NBYTES)
+       PREFE(  0, 3*32(src) )
+       PREF(   1, 3*32(dst) )
+       .align  4
+1:
+       R10KCBARRIER(0(ra))
+EXC(    LOAD    t0, UNIT(0)(src),       .LFl_exc)
+EXC(    LOAD    t1, UNIT(1)(src),       .LFl_exc_copy)
+EXC(    LOAD    t2, UNIT(2)(src),       .LFl_exc_copy)
+EXC(    LOAD    t3, UNIT(3)(src),       .LFl_exc_copy)
+       SUB     len, len, 8*NBYTES
+EXC(    LOAD    t4, UNIT(4)(src),       .LFl_exc_copy)
+EXC(    LOAD    t7, UNIT(5)(src),       .LFl_exc_copy)
+       STORE   t0, UNIT(0)(dst)
+       STORE   t1, UNIT(1)(dst)
+EXC(    LOAD    t0, UNIT(6)(src),       .LFl_exc_copy)
+EXC(    LOAD    t1, UNIT(7)(src),       .LFl_exc_copy)
+       ADD     src, src, 8*NBYTES
+       ADD     dst, dst, 8*NBYTES
+       STORE   t2, UNIT(-6)(dst)
+       STORE   t3, UNIT(-5)(dst)
+       STORE   t4, UNIT(-4)(dst)
+       STORE   t7, UNIT(-3)(dst)
+       STORE   t0, UNIT(-2)(dst)
+       STORE   t1, UNIT(-1)(dst)
+       PREFE(  0, 8*32(src) )
+       PREF(   1, 8*32(dst) )
+       bne     len, rem, 1b
+        nop
+
+       /*
+        * len == rem == the number of bytes left to copy < 8*NBYTES
+        */
+.LFcleanup_both_aligned:
+       beqz    len, .Ldone
+        sltu   t0, len, 4*NBYTES
+       bnez    t0, .LFless_than_4units
+        and    rem, len, (NBYTES-1)    # rem = len % NBYTES
+       /*
+        * len >= 4*NBYTES
+        */
+EXC(    LOAD    t0, UNIT(0)(src),       .LFl_exc)
+EXC(    LOAD    t1, UNIT(1)(src),       .LFl_exc_copy)
+EXC(    LOAD    t2, UNIT(2)(src),       .LFl_exc_copy)
+EXC(    LOAD    t3, UNIT(3)(src),       .LFl_exc_copy)
+       SUB     len, len, 4*NBYTES
+       ADD     src, src, 4*NBYTES
+       R10KCBARRIER(0(ra))
+       STORE   t0, UNIT(0)(dst)
+       STORE   t1, UNIT(1)(dst)
+       STORE   t2, UNIT(2)(dst)
+       STORE   t3, UNIT(3)(dst)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       beqz    len, .Ldone
+       .set    noreorder
+.LFless_than_4units:
+       /*
+        * rem = len % NBYTES
+        */
+       beq     rem, len, .LFcopy_bytes
+        nop
+1:
+       R10KCBARRIER(0(ra))
+EXC(    LOAD    t0, 0(src),             .LFl_exc)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+       STORE   t0, 0(dst)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     rem, len, 1b
+       .set    noreorder
+
+       /*
+        * src and dst are aligned, need to copy rem bytes (rem < NBYTES)
+        * A loop would do only a byte at a time with possible branch
+        * mispredicts.  Can't do an explicit LOAD dst,mask,or,STORE
+        * because can't assume read-access to dst.  Instead, use
+        * STREST dst, which doesn't require read access to dst.
+        *
+        * This code should perform better than a simple loop on modern,
+        * wide-issue mips processors because the code has fewer branches and
+        * more instruction-level parallelism.
+        */
+       beqz    len, .Ldone
+        ADD    t1, dst, len    # t1 is just past last byte of dst
+       li      bits, 8*NBYTES
+       SLL     rem, len, 3     # rem = number of bits to keep
+EXC(    LOAD    t0, 0(src),             .LFl_exc)
+       SUB     bits, bits, rem # bits = number of bits to discard
+       SHIFT_DISCARD t0, t0, bits
+       STREST  t0, -1(t1)
+       jr      ra
+        move   len, zero
+.LFdst_unaligned:
+       /*
+        * dst is unaligned
+        * t0 = src & ADDRMASK
+        * t1 = dst & ADDRMASK; T1 > 0
+        * len >= NBYTES
+        *
+        * Copy enough bytes to align dst
+        * Set match = (src and dst have same alignment)
+        */
+#define match rem
+EXC(    LDFIRST t3, FIRST(0)(src),      .LFl_exc)
+       ADD     t2, zero, NBYTES
+EXC(    LDREST  t3, REST(0)(src),       .LFl_exc_copy)
+       SUB     t2, t2, t1      # t2 = number of bytes copied
+       xor     match, t0, t1
+       R10KCBARRIER(0(ra))
+       STFIRST t3, FIRST(0)(dst)
+       beq     len, t2, .Ldone
+        SUB    len, len, t2
+       ADD     dst, dst, t2
+       beqz    match, .LFboth_aligned
+        ADD    src, src, t2
+
+.LFsrc_unaligned_dst_aligned:
+       SRL     t0, len, LOG_NBYTES+2    # +2 for 4 units/iter
+       PREFE(  0, 3*32(src) )
+       beqz    t0, .LFcleanup_src_unaligned
+        and    rem, len, (4*NBYTES-1)   # rem = len % 4*NBYTES
+       PREF(   1, 3*32(dst) )
+1:
+/*
+ * Avoid consecutive LD*'s to the same register since some mips
+ * implementations can't issue them in the same cycle.
+ * It's OK to load FIRST(N+1) before REST(N) because the two addresses
+ * are to the same unit (unless src is aligned, but it's not).
+ */
+       R10KCBARRIER(0(ra))
+EXC(    LDFIRST t0, FIRST(0)(src),      .LFl_exc)
+EXC(    LDFIRST t1, FIRST(1)(src),      .LFl_exc_copy)
+       SUB     len, len, 4*NBYTES
+EXC(    LDREST  t0, REST(0)(src),       .LFl_exc_copy)
+EXC(    LDREST  t1, REST(1)(src),       .LFl_exc_copy)
+EXC(    LDFIRST t2, FIRST(2)(src),      .LFl_exc_copy)
+EXC(    LDFIRST t3, FIRST(3)(src),      .LFl_exc_copy)
+EXC(    LDREST  t2, REST(2)(src),       .LFl_exc_copy)
+EXC(    LDREST  t3, REST(3)(src),       .LFl_exc_copy)
+       PREFE(  0, 9*32(src) )          # 0 is PREF_LOAD  (not streamed)
+       ADD     src, src, 4*NBYTES
+#ifdef CONFIG_CPU_SB1
+       nop                             # improves slotting
+#endif
+       STORE   t0, UNIT(0)(dst)
+       STORE   t1, UNIT(1)(dst)
+       STORE   t2, UNIT(2)(dst)
+       STORE   t3, UNIT(3)(dst)
+       PREF(   1, 9*32(dst) )          # 1 is PREF_STORE (not streamed)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LFcleanup_src_unaligned:
+       beqz    len, .Ldone
+        and    rem, len, NBYTES-1  # rem = len % NBYTES
+       beq     rem, len, .LFcopy_bytes
+        nop
+1:
+       R10KCBARRIER(0(ra))
+EXC(    LDFIRST t0, FIRST(0)(src),      .LFl_exc)
+EXC(    LDREST  t0, REST(0)(src),       .LFl_exc_copy)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+       STORE   t0, 0(dst)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LFcopy_bytes_checklen:
+       beqz    len, .Ldone
+        nop
+.LFcopy_bytes:
+       /* 0 < len < NBYTES  */
+       R10KCBARRIER(0(ra))
+#define COPY_BYTE(N)                   \
+EXC(    lbe      t0, N(src), .LFl_exc);   \
+       SUB     len, len, 1;            \
+       beqz    len, .Ldone;            \
+        sb     t0, N(dst)
+
+       COPY_BYTE(0)
+       COPY_BYTE(1)
+#ifdef USE_DOUBLE
+       COPY_BYTE(2)
+       COPY_BYTE(3)
+       COPY_BYTE(4)
+       COPY_BYTE(5)
+#endif
+EXC(    lbe     t0, NBYTES-2(src), .LFl_exc)
+       SUB     len, len, 1
+       jr      ra
+        sb     t0, NBYTES-2(dst)
+
+.LFl_exc_copy:
+       /*
+        * Copy bytes from src until faulting load address (or until a
+        * lb faults)
+        *
+        * When reached by a faulting LDFIRST/LDREST, THREAD_BUADDR($28)
+        * may be more than a byte beyond the last address.
+        * Hence, the lb below may get an exception.
+        *
+        * Assumes src < THREAD_BUADDR($28)
+        */
+       LOADK   t0, TI_TASK($28)
+       addi    t0, t0, THREAD_BUADDR
+       LOADK   t0, 0(t0)
+1:
+EXC(    lbe     t1, 0(src),     .LFl_exc)
+       ADD     src, src, 1
+       sb      t1, 0(dst)      # can't fault -- we're copy_from_user
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 1
+       bne     src, t0, 1b
+       .set    noreorder
+.LFl_exc:
+       LOADK   t0, TI_TASK($28)
+       addi    t0, t0, THREAD_BUADDR
+       LOADK   t0, 0(t0)   # t0 is just past last good address
+       bnez    t6, .Ldone      /* Skip the zeroing part if inatomic */
+        SUB    len, AT, t0             # len number of uncopied bytes
+       /*
+        * Here's where we rely on src and dst being incremented in tandem,
+        *   See (3) above.
+        * dst += (fault addr - src) to put dst at first byte to clear
+        */
+       ADD     dst, t0                 # compute start address in a1
+       SUB     dst, src
+       /*
+        * Clear len bytes starting at dst.  Can't call __bzero because it
+        * might modify len.  An inefficient loop for these rare times...
+        */
+       .set    reorder                         /* DADDI_WAR */
+       SUB     src, len, 1
+       beqz    len, .Ldone
+       .set    noreorder
+1:     sb      zero, 0(dst)
+       ADD     dst, dst, 1
+#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
+       bnez    src, 1b
+        SUB    src, src, 1
+#else
+       .set    push
+       .set    noat
+       li      v1, 1
+       bnez    src, 1b
+        SUB    src, src, v1
+       .set    pop
+#endif
+       jr      ra
+        nop
+       END(__copy_fromuser)
+
+
+#undef  LOAD
+#undef  LOADL
+#undef  LOADR
+#undef  STORE
+#undef  STOREL
+#undef  STORER
+#undef  LDFIRST
+#undef  LDREST
+#undef  STFIRST
+#undef  STREST
+#undef  SHIFT_DISCARD
+#undef  COPY_BYTE
+#undef  SEXC
+
+#define LOAD   lw
+#define LOADL  lwl
+#define LOADR  lwr
+#define STOREL swle
+#define STORER swre
+#define STORE  swe
+
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
+#define LDFIRST LOADR
+#define LDREST  LOADL
+#define STFIRST STORER
+#define STREST  STOREL
+#define SHIFT_DISCARD SLLV
+#else
+#define LDFIRST LOADL
+#define LDREST  LOADR
+#define STFIRST STOREL
+#define STREST  STORER
+#define SHIFT_DISCARD SRLV
+#endif
+
+LEAF(__copy_touser)
+       /*
+        * Note: dst & src may be unaligned, len may be 0
+        * Temps
+        */
+
+       R10KCBARRIER(0(ra))
+       /*
+        * The "issue break"s below are very approximate.
+        * Issue delays for dcache fills will perturb the schedule, as will
+        * load queue full replay traps, etc.
+        *
+        * If len < NBYTES use byte operations.
+        */
+       PREF(   0, 0(src) )
+       PREFE(  1, 0(dst) )
+       sltu    t2, len, NBYTES
+       and     t1, dst, ADDRMASK
+       PREF(   0, 1*32(src) )
+       PREFE(  1, 1*32(dst) )
+       bnez    t2, .LTcopy_bytes_checklen
+        and    t0, src, ADDRMASK
+       PREF(   0, 2*32(src) )
+       PREFE(  1, 2*32(dst) )
+       bnez    t1, .LTdst_unaligned
+        nop
+       bnez    t0, .LTsrc_unaligned_dst_aligned
+       /*
+        * use delay slot for fall-through
+        * src and dst are aligned; need to compute rem
+        */
+.LTboth_aligned:
+        SRL    t0, len, LOG_NBYTES+3    # +3 for 8 units/iter
+       beqz    t0, .LTcleanup_both_aligned # len < 8*NBYTES
+        and    rem, len, (8*NBYTES-1)   # rem = len % (8*NBYTES)
+       PREF(   0, 3*32(src) )
+       PREFE(  1, 3*32(dst) )
+       .align  4
+1:
+       R10KCBARRIER(0(ra))
+       LOAD    t0, UNIT(0)(src)
+       LOAD    t1, UNIT(1)(src)
+       LOAD    t2, UNIT(2)(src)
+       LOAD    t3, UNIT(3)(src)
+       SUB     len, len, 8*NBYTES
+       LOAD    t4, UNIT(4)(src)
+       LOAD    t7, UNIT(5)(src)
+EXC(    STORE   t0, UNIT(0)(dst),       .Ls_exc_p8u)
+EXC(    STORE   t1, UNIT(1)(dst),       .Ls_exc_p7u)
+       LOAD    t0, UNIT(6)(src)
+       LOAD    t1, UNIT(7)(src)
+       ADD     src, src, 8*NBYTES
+       ADD     dst, dst, 8*NBYTES
+EXC(    STORE   t2, UNIT(-6)(dst),      .Ls_exc_p6u)
+EXC(    STORE   t3, UNIT(-5)(dst),      .Ls_exc_p5u)
+EXC(    STORE   t4, UNIT(-4)(dst),      .Ls_exc_p4u)
+EXC(    STORE   t7, UNIT(-3)(dst),      .Ls_exc_p3u)
+EXC(    STORE   t0, UNIT(-2)(dst),      .Ls_exc_p2u)
+EXC(    STORE   t1, UNIT(-1)(dst),      .Ls_exc_p1u)
+       PREF(   0, 8*32(src) )
+       PREFE(  1, 8*32(dst) )
+       bne     len, rem, 1b
+        nop
+
+       /*
+        * len == rem == the number of bytes left to copy < 8*NBYTES
+        */
+.LTcleanup_both_aligned:
+       beqz    len, .Ldone
+        sltu   t0, len, 4*NBYTES
+       bnez    t0, .LTless_than_4units
+        and    rem, len, (NBYTES-1)    # rem = len % NBYTES
+       /*
+        * len >= 4*NBYTES
+        */
+       LOAD    t0, UNIT(0)(src)
+       LOAD    t1, UNIT(1)(src)
+       LOAD    t2, UNIT(2)(src)
+       LOAD    t3, UNIT(3)(src)
+       SUB     len, len, 4*NBYTES
+       ADD     src, src, 4*NBYTES
+       R10KCBARRIER(0(ra))
+EXC(    STORE   t0, UNIT(0)(dst),       .Ls_exc_p4u)
+EXC(    STORE   t1, UNIT(1)(dst),       .Ls_exc_p3u)
+EXC(    STORE   t2, UNIT(2)(dst),       .Ls_exc_p2u)
+EXC(    STORE   t3, UNIT(3)(dst),       .Ls_exc_p1u)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       beqz    len, .Ldone
+       .set    noreorder
+.LTless_than_4units:
+       /*
+        * rem = len % NBYTES
+        */
+       beq     rem, len, .LTcopy_bytes
+        nop
+1:
+       R10KCBARRIER(0(ra))
+       LOAD    t0, 0(src)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+EXC(    STORE   t0, 0(dst),             .Ls_exc_p1u)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     rem, len, 1b
+       .set    noreorder
+
+       /*
+        * src and dst are aligned, need to copy rem bytes (rem < NBYTES)
+        * A loop would do only a byte at a time with possible branch
+        * mispredicts.  Can't do an explicit LOAD dst,mask,or,STORE
+        * because can't assume read-access to dst.  Instead, use
+        * STREST dst, which doesn't require read access to dst.
+        *
+        * This code should perform better than a simple loop on modern,
+        * wide-issue mips processors because the code has fewer branches and
+        * more instruction-level parallelism.
+        */
+       beqz    len, .Ldone
+        ADD    t1, dst, len    # t1 is just past last byte of dst
+       li      bits, 8*NBYTES
+       SLL     rem, len, 3     # rem = number of bits to keep
+       LOAD    t0, 0(src)
+       SUB     bits, bits, rem # bits = number of bits to discard
+       SHIFT_DISCARD t0, t0, bits
+EXC(    STREST  t0, -1(t1),             .Ls_exc)
+       jr      ra
+        move   len, zero
+.LTdst_unaligned:
+       /*
+        * dst is unaligned
+        * t0 = src & ADDRMASK
+        * t1 = dst & ADDRMASK; T1 > 0
+        * len >= NBYTES
+        *
+        * Copy enough bytes to align dst
+        * Set match = (src and dst have same alignment)
+        */
+       LDFIRST t3, FIRST(0)(src)
+       ADD     t2, zero, NBYTES
+       LDREST  t3, REST(0)(src)
+       SUB     t2, t2, t1      # t2 = number of bytes copied
+       xor     match, t0, t1
+       R10KCBARRIER(0(ra))
+EXC(    STFIRST t3, FIRST(0)(dst),      .Ls_exc)
+       beq     len, t2, .Ldone
+        SUB    len, len, t2
+       ADD     dst, dst, t2
+       beqz    match, .LTboth_aligned
+        ADD    src, src, t2
+
+.LTsrc_unaligned_dst_aligned:
+       SRL     t0, len, LOG_NBYTES+2    # +2 for 4 units/iter
+       PREF(   0, 3*32(src) )
+       beqz    t0, .LTcleanup_src_unaligned
+        and    rem, len, (4*NBYTES-1)   # rem = len % 4*NBYTES
+       PREFE(  1, 3*32(dst) )
+1:
+/*
+ * Avoid consecutive LD*'s to the same register since some mips
+ * implementations can't issue them in the same cycle.
+ * It's OK to load FIRST(N+1) before REST(N) because the two addresses
+ * are to the same unit (unless src is aligned, but it's not).
+ */
+       R10KCBARRIER(0(ra))
+       LDFIRST t0, FIRST(0)(src)
+       LDFIRST t1, FIRST(1)(src)
+       SUB     len, len, 4*NBYTES
+       LDREST  t0, REST(0)(src)
+       LDREST  t1, REST(1)(src)
+       LDFIRST t2, FIRST(2)(src)
+       LDFIRST t3, FIRST(3)(src)
+       LDREST  t2, REST(2)(src)
+       LDREST  t3, REST(3)(src)
+       PREF(   0, 9*32(src) )          # 0 is PREF_LOAD  (not streamed)
+       ADD     src, src, 4*NBYTES
+#ifdef CONFIG_CPU_SB1
+       nop                             # improves slotting
+#endif
+EXC(    STORE   t0, UNIT(0)(dst),       .Ls_exc_p4u)
+EXC(    STORE   t1, UNIT(1)(dst),       .Ls_exc_p3u)
+EXC(    STORE   t2, UNIT(2)(dst),       .Ls_exc_p2u)
+EXC(    STORE   t3, UNIT(3)(dst),       .Ls_exc_p1u)
+       PREFE(  1, 9*32(dst) )          # 1 is PREF_STORE (not streamed)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LTcleanup_src_unaligned:
+       beqz    len, .Ldone
+        and    rem, len, NBYTES-1  # rem = len % NBYTES
+       beq     rem, len, .LTcopy_bytes
+        nop
+1:
+       R10KCBARRIER(0(ra))
+       LDFIRST t0, FIRST(0)(src)
+       LDREST  t0, REST(0)(src)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+EXC(    STORE   t0, 0(dst),             .Ls_exc_p1u)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LTcopy_bytes_checklen:
+       beqz    len, .Ldone
+        nop
+.LTcopy_bytes:
+       /* 0 < len < NBYTES  */
+       R10KCBARRIER(0(ra))
+#define COPY_BYTE(N)                   \
+       lb      t0, N(src);             \
+       SUB     len, len, 1;            \
+       beqz    len, .Ldone;            \
+EXC(     sbe    t0, N(dst), .Ls_exc_p1)
+
+       COPY_BYTE(0)
+       COPY_BYTE(1)
+#ifdef USE_DOUBLE
+       COPY_BYTE(2)
+       COPY_BYTE(3)
+       COPY_BYTE(4)
+       COPY_BYTE(5)
+#endif
+       lb      t0, NBYTES-2(src)
+       SUB     len, len, 1
+       jr      ra
+EXC(     sbe    t0, NBYTES-2(dst), .Ls_exc_p1)
+       END(__copy_touser)
+
+
+#undef  LOAD
+#undef  LOADL
+#undef  LOADR
+#undef  STORE
+#undef  STOREL
+#undef  STORER
+#undef  LDFIRST
+#undef  LDREST
+#undef  STFIRST
+#undef  STREST
+#undef  SHIFT_DISCARD
+#undef  COPY_BYTE
+#undef  SEXC
+
+#define LOAD   lwe
+#define LOADL  lwle
+#define LOADR  lwre
+#define STOREL swle
+#define STORER swre
+#define STORE  swe
+
+#ifdef CONFIG_CPU_LITTLE_ENDIAN
+#define LDFIRST LOADR
+#define LDREST  LOADL
+#define STFIRST STORER
+#define STREST  STOREL
+#define SHIFT_DISCARD SLLV
+#else
+#define LDFIRST LOADL
+#define LDREST  LOADR
+#define STFIRST STOREL
+#define STREST  STORER
+#define SHIFT_DISCARD SRLV
+#endif
+
+
+LEAF(__copy_inuser)
+       /*
+        * Note: dst & src may be unaligned, len may be 0
+        * Temps
+        */
+
+       R10KCBARRIER(0(ra))
+       /*
+        * The "issue break"s below are very approximate.
+        * Issue delays for dcache fills will perturb the schedule, as will
+        * load queue full replay traps, etc.
+        *
+        * If len < NBYTES use byte operations.
+        */
+       PREFE(  0, 0(src) )
+       PREFE(  1, 0(dst) )
+       sltu    t2, len, NBYTES
+       and     t1, dst, ADDRMASK
+       PREFE(  0, 1*32(src) )
+       PREFE(  1, 1*32(dst) )
+       bnez    t2, .LIcopy_bytes_checklen
+        and    t0, src, ADDRMASK
+       PREFE(  0, 2*32(src) )
+       PREFE(  1, 2*32(dst) )
+       bnez    t1, .LIdst_unaligned
+        nop
+       bnez    t0, .LIsrc_unaligned_dst_aligned
+       /*
+        * use delay slot for fall-through
+        * src and dst are aligned; need to compute rem
+        */
+.LIboth_aligned:
+        SRL    t0, len, LOG_NBYTES+3    # +3 for 8 units/iter
+       beqz    t0, .LIcleanup_both_aligned # len < 8*NBYTES
+        and    rem, len, (8*NBYTES-1)   # rem = len % (8*NBYTES)
+       PREFE(  0, 3*32(src) )
+       PREFE(  1, 3*32(dst) )
+       .align  4
+1:
+       R10KCBARRIER(0(ra))
+EXC(    LOAD    t0, UNIT(0)(src),       .LIl_exc)
+EXC(    LOAD    t1, UNIT(1)(src),       .LIl_exc_copy)
+EXC(    LOAD    t2, UNIT(2)(src),       .LIl_exc_copy)
+EXC(    LOAD    t3, UNIT(3)(src),       .LIl_exc_copy)
+       SUB     len, len, 8*NBYTES
+EXC(    LOAD    t4, UNIT(4)(src),       .LIl_exc_copy)
+EXC(    LOAD    t7, UNIT(5)(src),       .LIl_exc_copy)
+EXC(    STORE   t0, UNIT(0)(dst),       .Ls_exc_p8u)
+EXC(    STORE   t1, UNIT(1)(dst),       .Ls_exc_p7u)
+EXC(    LOAD    t0, UNIT(6)(src),       .LIl_exc_copy)
+EXC(    LOAD    t1, UNIT(7)(src),       .LIl_exc_copy)
+       ADD     src, src, 8*NBYTES
+       ADD     dst, dst, 8*NBYTES
+EXC(    STORE   t2, UNIT(-6)(dst),      .Ls_exc_p6u)
+EXC(    STORE   t3, UNIT(-5)(dst),      .Ls_exc_p5u)
+EXC(    STORE   t4, UNIT(-4)(dst),      .Ls_exc_p4u)
+EXC(    STORE   t7, UNIT(-3)(dst),      .Ls_exc_p3u)
+EXC(    STORE   t0, UNIT(-2)(dst),      .Ls_exc_p2u)
+EXC(    STORE   t1, UNIT(-1)(dst),      .Ls_exc_p1u)
+       PREFE(  0, 8*32(src) )
+       PREFE(  1, 8*32(dst) )
+       bne     len, rem, 1b
+        nop
+
+       /*
+        * len == rem == the number of bytes left to copy < 8*NBYTES
+        */
+.LIcleanup_both_aligned:
+       beqz    len, .Ldone
+        sltu   t0, len, 4*NBYTES
+       bnez    t0, .LIless_than_4units
+        and    rem, len, (NBYTES-1)    # rem = len % NBYTES
+       /*
+        * len >= 4*NBYTES
+        */
+EXC(    LOAD    t0, UNIT(0)(src),       .LIl_exc)
+EXC(    LOAD    t1, UNIT(1)(src),       .LIl_exc_copy)
+EXC(    LOAD    t2, UNIT(2)(src),       .LIl_exc_copy)
+EXC(    LOAD    t3, UNIT(3)(src),       .LIl_exc_copy)
+       SUB     len, len, 4*NBYTES
+       ADD     src, src, 4*NBYTES
+       R10KCBARRIER(0(ra))
+EXC(    STORE   t0, UNIT(0)(dst),       .Ls_exc_p4u)
+EXC(    STORE   t1, UNIT(1)(dst),       .Ls_exc_p3u)
+EXC(    STORE   t2, UNIT(2)(dst),       .Ls_exc_p2u)
+EXC(    STORE   t3, UNIT(3)(dst),       .Ls_exc_p1u)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       beqz    len, .Ldone
+       .set    noreorder
+.LIless_than_4units:
+       /*
+        * rem = len % NBYTES
+        */
+       beq     rem, len, .LIcopy_bytes
+        nop
+1:
+       R10KCBARRIER(0(ra))
+EXC(    LOAD    t0, 0(src),             .LIl_exc)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+EXC(    STORE   t0, 0(dst),             .Ls_exc_p1u)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     rem, len, 1b
+       .set    noreorder
+
+       /*
+        * src and dst are aligned, need to copy rem bytes (rem < NBYTES)
+        * A loop would do only a byte at a time with possible branch
+        * mispredicts.  Can't do an explicit LOAD dst,mask,or,STORE
+        * because can't assume read-access to dst.  Instead, use
+        * STREST dst, which doesn't require read access to dst.
+        *
+        * This code should perform better than a simple loop on modern,
+        * wide-issue mips processors because the code has fewer branches and
+        * more instruction-level parallelism.
+        */
+       beqz    len, .Ldone
+        ADD    t1, dst, len    # t1 is just past last byte of dst
+       li      bits, 8*NBYTES
+       SLL     rem, len, 3     # rem = number of bits to keep
+EXC(    LOAD    t0, 0(src),             .LIl_exc)
+       SUB     bits, bits, rem # bits = number of bits to discard
+       SHIFT_DISCARD t0, t0, bits
+EXC(    STREST  t0, -1(t1),             .Ls_exc)
+       jr      ra
+        move   len, zero
+.LIdst_unaligned:
+       /*
+        * dst is unaligned
+        * t0 = src & ADDRMASK
+        * t1 = dst & ADDRMASK; T1 > 0
+        * len >= NBYTES
+        *
+        * Copy enough bytes to align dst
+        * Set match = (src and dst have same alignment)
+        */
+EXC(    LDFIRST t3, FIRST(0)(src),      .LIl_exc)
+       ADD     t2, zero, NBYTES
+EXC(    LDREST  t3, REST(0)(src),       .LIl_exc_copy)
+       SUB     t2, t2, t1      # t2 = number of bytes copied
+       xor     match, t0, t1
+       R10KCBARRIER(0(ra))
+EXC(    STFIRST t3, FIRST(0)(dst),      .Ls_exc)
+       beq     len, t2, .Ldone
+        SUB    len, len, t2
+       ADD     dst, dst, t2
+       beqz    match, .LIboth_aligned
+        ADD    src, src, t2
+
+.LIsrc_unaligned_dst_aligned:
+       SRL     t0, len, LOG_NBYTES+2    # +2 for 4 units/iter
+       PREFE(  0, 3*32(src) )
+       beqz    t0, .LIcleanup_src_unaligned
+        and    rem, len, (4*NBYTES-1)   # rem = len % 4*NBYTES
+       PREFE(  1, 3*32(dst) )
+1:
+/*
+ * Avoid consecutive LD*'s to the same register since some mips
+ * implementations can't issue them in the same cycle.
+ * It's OK to load FIRST(N+1) before REST(N) because the two addresses
+ * are to the same unit (unless src is aligned, but it's not).
+ */
+       R10KCBARRIER(0(ra))
+EXC(    LDFIRST t0, FIRST(0)(src),      .LIl_exc)
+EXC(    LDFIRST t1, FIRST(1)(src),      .LIl_exc_copy)
+       SUB     len, len, 4*NBYTES
+EXC(    LDREST  t0, REST(0)(src),       .LIl_exc_copy)
+EXC(    LDREST  t1, REST(1)(src),       .LIl_exc_copy)
+EXC(    LDFIRST t2, FIRST(2)(src),      .LIl_exc_copy)
+EXC(    LDFIRST t3, FIRST(3)(src),      .LIl_exc_copy)
+EXC(    LDREST  t2, REST(2)(src),       .LIl_exc_copy)
+EXC(    LDREST  t3, REST(3)(src),       .LIl_exc_copy)
+       PREFE(  0, 9*32(src) )          # 0 is PREF_LOAD  (not streamed)
+       ADD     src, src, 4*NBYTES
+#ifdef CONFIG_CPU_SB1
+       nop                             # improves slotting
+#endif
+EXC(    STORE   t0, UNIT(0)(dst),       .Ls_exc_p4u)
+EXC(    STORE   t1, UNIT(1)(dst),       .Ls_exc_p3u)
+EXC(    STORE   t2, UNIT(2)(dst),       .Ls_exc_p2u)
+EXC(    STORE   t3, UNIT(3)(dst),       .Ls_exc_p1u)
+       PREFE(  1, 9*32(dst) )          # 1 is PREF_STORE (not streamed)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 4*NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LIcleanup_src_unaligned:
+       beqz    len, .Ldone
+        and    rem, len, NBYTES-1  # rem = len % NBYTES
+       beq     rem, len, .LIcopy_bytes
+        nop
+1:
+       R10KCBARRIER(0(ra))
+EXC(    LDFIRST t0, FIRST(0)(src),      .LIl_exc)
+EXC(    LDREST  t0, REST(0)(src),       .LIl_exc_copy)
+       ADD     src, src, NBYTES
+       SUB     len, len, NBYTES
+EXC(    STORE   t0, 0(dst),             .Ls_exc_p1u)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, NBYTES
+       bne     len, rem, 1b
+       .set    noreorder
+
+.LIcopy_bytes_checklen:
+       beqz    len, .Ldone
+        nop
+.LIcopy_bytes:
+       /* 0 < len < NBYTES  */
+       R10KCBARRIER(0(ra))
+#define COPY_BYTE(N)                    \
+EXC(    lbe     t0, N(src), .LIl_exc);  \
+       SUB     len, len, 1;            \
+       beqz    len, .Ldone;            \
+EXC(     sbe    t0, N(dst), .Ls_exc_p1)
+
+       COPY_BYTE(0)
+       COPY_BYTE(1)
+#ifdef USE_DOUBLE
+       COPY_BYTE(2)
+       COPY_BYTE(3)
+       COPY_BYTE(4)
+       COPY_BYTE(5)
+#endif
+EXC(    lbe     t0, NBYTES-2(src), .LIl_exc)
+       SUB     len, len, 1
+       jr      ra
+EXC(     sbe    t0, NBYTES-2(dst), .Ls_exc_p1)
+
+.LIl_exc_copy:
+       /*
+        * Copy bytes from src until faulting load address (or until a
+        * lb faults)
+        *
+        * When reached by a faulting LDFIRST/LDREST, THREAD_BUADDR($28)
+        * may be more than a byte beyond the last address.
+        * Hence, the lb below may get an exception.
+        *
+        * Assumes src < THREAD_BUADDR($28)
+        */
+       LOADK   t0, TI_TASK($28)
+       addi    t0, t0, THREAD_BUADDR
+       LOADK   t0, 0(t0)
+1:
+EXC(    lbe     t1, 0(src),     .LIl_exc)
+       ADD     src, src, 1
+EXC(    sbe     t1, 0(dst),     .Ls_exc)
+       .set    reorder                         /* DADDI_WAR */
+       ADD     dst, dst, 1
+       SUB     len, len, 1             # need this because of sbe above
+       bne     src, t0, 1b
+       .set    noreorder
+.LIl_exc:
+       LOADK   t0, TI_TASK($28)
+       addi    t0, t0, THREAD_BUADDR
+       LOADK   t0, 0(t0)               # t0 is just past last good address
+       SUB     len, AT, t0             # len number of uncopied bytes
+       /*
+        * Here's where we rely on src and dst being incremented in tandem,
+        *   See (3) above.
+        * dst += (fault addr - src) to put dst at first byte to clear
+        */
+       ADD     dst, t0                 # compute start address in a1
+       SUB     dst, src
+       /*
+        * Clear len bytes starting at dst.  Can't call __bzero because it
+        * might modify len.  An inefficient loop for these rare times...
+        */
+       .set    reorder                         /* DADDI_WAR */
+       SUB     src, len, 1
+       beqz    len, .Ldone
+       .set    noreorder
+1:
+EXC(    sbe     zero, 0(dst),   .Ls_exc)
+       ADD     dst, dst, 1
+#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
+       bnez    src, 1b
+        SUB    src, src, 1
+#else
+       .set    push
+       .set    noat
+       li      v1, 1
+       bnez    src, 1b
+        SUB    src, src, v1
+       .set    pop
+#endif
+       jr      ra
+        nop
+       END(__copy_inuser)
+
+#endif
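
All three EVA variants share the fixup contract implemented at the .L*l_exc
labels: return the number of bytes left uncopied and, for copy-from-user
outside an atomic context, zero the destination tail so no stale kernel data
leaks through. A behavioural model, not the kernel implementation; fault_off
marks where the first bad access would land:

```c
#include <stddef.h>
#include <string.h>

static size_t copy_from_user_model(void *dst, const void *src,
				   size_t len, size_t fault_off, int inatomic)
{
	if (fault_off >= len) {			/* no fault in range */
		memcpy(dst, src, len);
		return 0;
	}
	memcpy(dst, src, fault_off);		/* bytes before the fault */
	if (!inatomic)				/* t6 == 0 path */
		memset((char *)dst + fault_off, 0, len - fault_off);
	return len - fault_off;			/* uncopied count */
}
```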
index 0580194e7402aa6508c579e2ff925833fd421948..e781b008ff20098ebc0d0009f0d5591b92f64a93 100644 (file)
        PTR     9b, handler;                            \
        .previous
 
+#define EXE(insn,handler)                               \
+9:      .word   insn;                                   \
+       .section __ex_table,"a";                        \
+       PTR     9b, handler;                            \
+       .previous
+
        .macro  f_fill64 dst, offset, val, fixup
        EX(LONG_S, \val, (\offset +  0 * STORSIZE)(\dst), \fixup)
        EX(LONG_S, \val, (\offset +  1 * STORSIZE)(\dst), \fixup)
 #endif
        .endm
 
+       .macro  f_fill64eva dst, offset, val, fixup
+       .set        eva
+       EX(swe, \val, (\offset +  0 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset +  1 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset +  2 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset +  3 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset +  4 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset +  5 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset +  6 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset +  7 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset +  8 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset +  9 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset + 10 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset + 11 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset + 12 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset + 13 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset + 14 * STORSIZE)(\dst), \fixup)
+       EX(swe, \val, (\offset + 15 * STORSIZE)(\dst), \fixup)
+       .endm
+
 /*
  * memset(void *s, int c, size_t n)
  *
@@ -202,3 +228,142 @@ FEXPORT(__bzero)
 .Llast_fixup:
        jr              ra
         andi           v1, a2, STORMASK
+
+#ifdef CONFIG_EVA
+/*  ++++++++  */
+/*  EVA stuff */
+/*  ++++++++  */
+
+       .set            eva
+
+#undef  LONG_S_L
+#undef  LONG_S_R
+
+#define LONG_S_L swle
+#define LONG_S_R swre
+
+LEAF(__bzero_user)
+       sltiu           t0, a2, STORSIZE        /* very small region? */
+       bnez            t0, .LEsmall_memset
+        andi           t0, a0, STORMASK        /* aligned? */
+
+#ifdef CONFIG_CPU_MICROMIPS
+       move            t8, a1
+       move            t9, a1
+#endif
+#ifndef CONFIG_CPU_DADDI_WORKAROUNDS
+       beqz            t0, 1f
+        PTR_SUBU       t0, STORSIZE            /* alignment in bytes */
+#else
+       .set            noat
+       li              AT, STORSIZE
+       beqz            t0, 1f
+        PTR_SUBU       t0, AT                  /* alignment in bytes */
+       .set            at
+#endif
+
+       R10KCBARRIER(0(ra))
+#ifdef __MIPSEB__
+       EX(LONG_S_L, a1, (a0), .LEfirst_fixup)  /* make word/dword aligned */
+#endif
+#ifdef __MIPSEL__
+       EX(LONG_S_R, a1, (a0), .LEfirst_fixup)  /* make word/dword aligned */
+#endif
+       PTR_SUBU        a0, t0                  /* long align ptr */
+       PTR_ADDU        a2, t0                  /* correct size */
+
+1:     ori             t1, a2, 0x3f            /* # of full blocks */
+       xori            t1, 0x3f
+       beqz            t1, .LEmemset_partial    /* no block to fill */
+        andi           t0, a2, 0x40-STORSIZE
+
+       PTR_ADDU        t1, a0                  /* end address */
+       .set            reorder
+1:     PTR_ADDIU       a0, 64
+       R10KCBARRIER(0(ra))
+       f_fill64eva a0, -64, a1, .LEfwd_fixup
+       bne             t1, a0, 1b
+       .set            noreorder
+
+.LEmemset_partial:
+       R10KCBARRIER(0(ra))
+       PTR_LA          t1, 2f                  /* where to start */
+#ifdef CONFIG_CPU_MICROMIPS
+       LONG_SRL        t7, t0, 1
+#if LONGSIZE == 4
+       PTR_SUBU        t1, t7
+#else
+       .set            noat
+       LONG_SRL        AT, t7, 1
+       PTR_SUBU        t1, AT
+       .set            at
+#endif
+#else
+#if LONGSIZE == 4
+       PTR_SUBU        t1, t0
+#else
+       .set            noat
+       LONG_SRL        AT, t0, 1
+       PTR_SUBU        t1, AT
+       .set            at
+#endif
+#endif
+       jr              t1
+        PTR_ADDU       a0, t0                  /* dest ptr */
+
+       .set            push
+       .set            noreorder
+       .set            nomacro
+       f_fill64eva a0, -64, a1, .LEpartial_fixup   /* ... but first do longs ... */
+2:     .set            pop
+       andi            a2, STORMASK            /* At most one long to go */
+
+       beqz            a2, 1f
+        PTR_ADDU       a0, a2                  /* What's left */
+       R10KCBARRIER(0(ra))
+#ifdef __MIPSEB__
+       EX(LONG_S_R, a1, -1(a0), .LElast_fixup)
+#endif
+#ifdef __MIPSEL__
+       EX(LONG_S_L, a1, -1(a0), .LElast_fixup)
+#endif
+1:     jr              ra
+        move           a2, zero
+
+.LEsmall_memset:
+       beqz            a2, 2f
+        PTR_ADDU       t1, a0, a2
+
+1:     PTR_ADDIU       a0, 1                   /* fill bytewise */
+       R10KCBARRIER(0(ra))
+       bne             t1, a0, 1b
+        sb             a1, -1(a0)
+
+2:     jr              ra                      /* done */
+        move           a2, zero
+
+.LEfirst_fixup:
+       jr      ra
+        nop
+
+.LEfwd_fixup:
+       PTR_L           t0, TI_TASK($28)
+       andi            a2, 0x3f
+       LONG_L          t0, THREAD_BUADDR(t0)
+       LONG_ADDU       a2, t1
+       jr              ra
+        LONG_SUBU      a2, t0
+
+.LEpartial_fixup:
+       PTR_L           t0, TI_TASK($28)
+       andi            a2, STORMASK
+       LONG_L          t0, THREAD_BUADDR(t0)
+       LONG_ADDU       a2, t1
+       jr              ra
+        LONG_SUBU      a2, t0
+
+.LElast_fixup:
+       jr              ra
+        andi           v1, a2, STORMASK
+       END(__bzero_user)
+#endif
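
The .LEmemset_partial sequence computes an entry point into the middle of an
f_fill64eva expansion, so only the trailing stores needed for the partial
64-byte block execute. Duff's device is the portable C analogue of that
computed jump; this is a sketch with 32-bit stores standing in for swe:

```c
#include <stdint.h>

/* Store the last `words` (< 16) words of a 64-byte block ending at
 * block_end, the way the jr into the unrolled run does. */
static void fill_tail(uint32_t *block_end, uint32_t val, unsigned words)
{
	uint32_t *p = block_end - words;

	switch (words) {
	case 15: *p++ = val; /* fall through */
	case 14: *p++ = val; /* fall through */
	case 13: *p++ = val; /* fall through */
	case 12: *p++ = val; /* fall through */
	case 11: *p++ = val; /* fall through */
	case 10: *p++ = val; /* fall through */
	case  9: *p++ = val; /* fall through */
	case  8: *p++ = val; /* fall through */
	case  7: *p++ = val; /* fall through */
	case  6: *p++ = val; /* fall through */
	case  5: *p++ = val; /* fall through */
	case  4: *p++ = val; /* fall through */
	case  3: *p++ = val; /* fall through */
	case  2: *p++ = val; /* fall through */
	case  1: *p++ = val;
	case  0: break;
	}
}
```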
index 91615c2ef0cf969baeff215ca3d8a627e3851d2f..ed9bd4db8c7f21e56c9505d8db9cb8a282411fd6 100644 (file)
@@ -1,9 +1,14 @@
 /*
- * Dump R3000 TLB for debugging purposes.
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
  *
  * Copyright (C) 1994, 1995 by Waldorf Electronics, written by Ralf Baechle.
  * Copyright (C) 1999 by Silicon Graphics, Inc.
  * Copyright (C) 1999 by Harald Koerfgen
+ * Copyright (C) 2011 MIPS Technologies, Inc.
+ *
+ * Dump R3000 TLB for debugging purposes.
  */
 #include <linux/kernel.h>
 #include <linux/mm.h>
index e362dcdc69d1617486ee5627f055ffc3075b5548..b3e070f3010e5e43989077fa9432230f23e5c875 100644 (file)
@@ -11,7 +11,7 @@
 #include <asm/asm-offsets.h>
 #include <asm/regdef.h>
 
-#define EX(insn,reg,addr,handler)                      \
+#define EX(insn,reg,addr,handler)                       \
 9:     insn    reg, addr;                              \
        .section __ex_table,"a";                        \
        PTR     9b, handler;                            \
  *
  * Return 0 for error
  */
+LEAF(__strlen_kernel_asm)
+       LONG_L          v0, TI_ADDR_LIMIT($28)  # pointer ok?
+       and             v0, a0
+       bnez            v0, .Lfault
+
+FEXPORT(__strlen_kernel_nocheck_asm)
+       move            v0, a0
+1:      EX(lbu, v1, (v0), .Lfault)
+       PTR_ADDIU       v0, 1
+       bnez            v1, 1b
+       PTR_SUBU        v0, a0
+       jr              ra
+       END(__strlen_kernel_asm)
+
+.Lfault:        move            v0, zero
+       jr              ra
+
+#ifdef CONFIG_EVA
+
 LEAF(__strlen_user_asm)
-       LONG_L          v0, TI_ADDR_LIMIT($28)  # pointer ok?
-       and             v0, a0
-       bnez            v0, .Lfault
+       LONG_L          v0, TI_ADDR_LIMIT($28)  # pointer ok?
+       and             v0, a0
+       bnez            v0, .Lfault
 
 FEXPORT(__strlen_user_nocheck_asm)
-       move            v0, a0
-1:     EX(lbu, v1, (v0), .Lfault)
-       PTR_ADDIU       v0, 1
-       bnez            v1, 1b
-       PTR_SUBU        v0, a0
-       jr              ra
+       move            v0, a0
+       .set            eva
+1:      EX(lbue, v1, (v0), .Lfault)
+       PTR_ADDIU       v0, 1
+       bnez            v1, 1b
+       PTR_SUBU        v0, a0
+       jr              ra
        END(__strlen_user_asm)
 
-.Lfault:       move            v0, zero
-       jr              ra
+#endif
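
Worth noting when reading callers: both variants return the string length
plus one, because the pointer increment runs before the terminator test, and
0 on a fault. A C model of the non-faulting path:

```c
#include <stddef.h>

static size_t strlen_asm_model(const char *s)
{
	const char *p = s;
	char c;

	do
		c = *p++;	/* lbu, then PTR_ADDIU, then the test */
	while (c != '\0');
	return (size_t)(p - s);	/* strlen(s) + 1, counting the NUL */
}
```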
index 92870b6b53eaeee044a0424ef1cdd16870ec445a..a613bb454e213e50230043ae2d1d0fc5e57f149f 100644 (file)
  * it happens at most some bytes of the exceptions handlers will be copied.
  */
 
+LEAF(__strncpy_from_kernel_asm)
+       LONG_L          v0, TI_ADDR_LIMIT($28)  # pointer ok?
+       and             v0, a1
+       bnez            v0, .Lfault
+
+FEXPORT(__strncpy_from_kernel_nocheck_asm)
+       .set            noreorder
+       move            t0, zero
+       move            v1, a1
+1:      EX(lbu, v0, (v1), .Lfault)
+       PTR_ADDIU       v1, 1
+       R10KCBARRIER(0(ra))
+       beqz            v0, 2f
+        sb             v0, (a0)
+       PTR_ADDIU       t0, 1
+       bne             t0, a2, 1b
+        PTR_ADDIU      a0, 1
+2:     PTR_ADDU        v0, a1, t0
+       xor             v0, a1
+       bltz            v0, .Lfault
+        nop
+       jr              ra                      # return n
+        move           v0, t0
+       END(__strncpy_from_kernel_asm)
+
+.Lfault:
+       jr              ra
+        li             v0, -EFAULT
+
+#ifdef CONFIG_EVA
+
 LEAF(__strncpy_from_user_asm)
        LONG_L          v0, TI_ADDR_LIMIT($28)  # pointer ok?
        and             v0, a1
@@ -37,8 +68,10 @@ FEXPORT(__strncpy_from_user_nocheck_asm)
        .set            noreorder
        move            t0, zero
        move            v1, a1
-1:     EX(lbu, v0, (v1), .Lfault)
-       PTR_ADDIU       v1, 1
+1:
+       .set            eva
+       EX(lbue, v0, (v1), .Lfault)
+       PTR_ADDIU       v1, 1
        R10KCBARRIER(0(ra))
        beqz            v0, 2f
         sb             v0, (a0)
@@ -50,12 +83,7 @@ FEXPORT(__strncpy_from_user_nocheck_asm)
        bltz            v0, .Lfault
         nop
        jr              ra                      # return n
-        move           v0, t0
+        move           v0, t0
        END(__strncpy_from_user_asm)
 
-.Lfault: jr            ra
-         li            v0, -EFAULT
-
-       .section        __ex_table,"a"
-       PTR             1b, .Lfault
-       .previous
+#endif
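
Both variants implement the usual strncpy_from_user contract; the EVA flavour
only swaps lbu for lbue so the source bytes come through the user address
space. A behavioural model of the copy loop; the real code additionally
rejects an address-range wrap with the xor/bltz test and returns -EFAULT on a
faulting load:

```c
static long strncpy_model(char *dst, const char *src, long n)
{
	long copied = 0;

	while (copied != n) {
		char c = *src++;	/* lbu/lbue: may fault for real */
		*dst++ = c;		/* NUL is stored but not counted */
		if (c == '\0')
			break;
		copied++;
	}
	return copied;			/* bytes copied, excluding the NUL */
}
```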
index fcacea5e61f1e685891e8e4af7ec8f701c596fd2..75ea45dcb7452f646a2ff6d311a9aff5eba1da34 100644 (file)
  *      bytes.  There's nothing secret there.  On 64-bit accessing beyond
  *      the maximum is a tad hairier ...
  */
-LEAF(__strnlen_user_asm)
+LEAF(__strnlen_kernel_asm)
        LONG_L          v0, TI_ADDR_LIMIT($28)  # pointer ok?
        and             v0, a0
        bnez            v0, .Lfault
 
-FEXPORT(__strnlen_user_nocheck_asm)
-       move            v0, a0
-       PTR_ADDU        a1, a0                  # stop pointer
-1:     beq             v0, a1, 1f              # limit reached?
+FEXPORT(__strnlen_kernel_nocheck_asm)
+       move            v0, a0
+       PTR_ADDU        a1, a0                  # stop pointer
+1:      beq             v0, a1, 2f              # limit reached?
        EX(lb, t0, (v0), .Lfault)
-       PTR_ADDIU       v0, 1
-       bnez            t0, 1b
-1:     PTR_SUBU        v0, a0
-       jr              ra
-       END(__strnlen_user_asm)
+       PTR_ADDIU       v0, 1
+       bnez            t0, 1b
+2:      PTR_SUBU        v0, a0
+       jr              ra
+       END(__strnlen_kernel_asm)
 
 .Lfault:
        move            v0, zero
        jr              ra
+
+#ifdef CONFIG_EVA
+
+LEAF(__strnlen_user_asm)
+       LONG_L          v0, TI_ADDR_LIMIT($28)  # pointer ok?
+       and             v0, a0
+       bnez            v0, .Lfault
+
+FEXPORT(__strnlen_user_nocheck_asm)
+       move            v0, a0
+       PTR_ADDU        a1, a0                  # stop pointer
+1:      beq             v0, a1, 2f              # limit reached?
+       .set            eva
+       EX(lbe, t0, (v0), .Lfault)
+       PTR_ADDIU       v0, 1
+       bnez            t0, 1b
+2:      PTR_SUBU        v0, a0
+       jr              ra
+       END(__strnlen_user_asm)
+
+#endif
index f03771900813cb69c9ddd5f579841350a8c48786..969160f4bc755e7b5ff4613dda71387c720351e9 100644 (file)
@@ -471,6 +471,9 @@ int mm_isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
        unsigned int fcr31;
        unsigned int bit;
 
+       if (!cpu_has_mmips)
+               return 0;
+
        switch (insn.mm_i_format.opcode) {
        case mm_pool32a_op:
                if ((insn.mm_i_format.simmediate & MM_POOL32A_MINOR_MASK) ==
@@ -859,13 +862,7 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
  */
 static inline int cop1_64bit(struct pt_regs *xcp)
 {
-#if defined(CONFIG_64BIT) && !defined(CONFIG_MIPS32_O32)
-       return 1;
-#elif defined(CONFIG_64BIT) && defined(CONFIG_MIPS32_O32)
        return !test_thread_flag(TIF_32BIT_REGS);
-#else
-       return 0;
-#endif
 }
 
 #define SIFROMREG(si, x) ((si) = cop1_64bit(xcp) || !(x & 1) ? \
@@ -884,6 +881,10 @@ static inline int cop1_64bit(struct pt_regs *xcp)
 #define DPFROMREG(dp, x)       DIFROMREG((dp).bits, x)
 #define DPTOREG(dp, x) DITOREG((dp).bits, x)
 
+#define SIFROMHREG(si, x)      ((si) = (int)(ctx->fpr[x] >> 32))
+#define SITOHREG(si, x)                (ctx->fpr[x] = \
+                               ctx->fpr[x] << 32 >> 32 | (u64)(si) << 32)
+
 /*
  * Emulate the single floating point instruction pointed at by EPC.
  * Two instructions if the instruction is in a branch delay slot.
@@ -1053,6 +1054,21 @@ static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
                        break;
 #endif
 
+#ifdef CONFIG_CPU_MIPSR2
+               case mfhc_op:
+                       /* copregister rd -> gpr[rt] */
+                       if (MIPSInst_RT(ir) != 0) {
+                               SIFROMHREG(xcp->regs[MIPSInst_RT(ir)],
+                                       MIPSInst_RD(ir));
+                       }
+                       break;
+
+               case mthc_op:
+                       /* copregister rd <- gpr[rt] */
+                       SITOHREG(xcp->regs[MIPSInst_RT(ir)], MIPSInst_RD(ir));
+                       break;
+#endif
+
                case mfc_op:
                        /* copregister rd -> gpr[rt] */
                        if (MIPSInst_RT(ir) != 0) {
@@ -1506,10 +1522,10 @@ static int fpux_emu(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
                break;
        }
 
-       case 0x7:               /* 7 */
-               if (MIPSInst_FUNC(ir) != pfetch_op) {
+       case 0x3:
+               if (MIPSInst_FUNC(ir) != pfetch_op)
                        return SIGILL;
-               }
+
                /* ignore prefx operation */
                break;
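
The mfhc_op/mthc_op cases added above give the emulator the MIPS32R2
MFHC1/MTHC1 semantics that -mfp64 o32 code needs: move the upper half of a
64-bit FPR without touching the lower half. The two macros reduce to this
plain-C sketch:

```c
#include <stdint.h>

static int32_t sifromhreg(uint64_t fpr)			/* MFHC1 */
{
	return (int32_t)(fpr >> 32);
}

static uint64_t sitohreg(uint64_t fpr, int32_t si)	/* MTHC1 */
{
	/* keep bits 31..0, replace bits 63..32 */
	return (fpr & 0xffffffffULL) | ((uint64_t)(uint32_t)si << 32);
}
```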
 
index 22796e0120607157abb40b91d5467f6761735c9d..39f21706fe46b11d4a6c2f6ec8aa2806f5d2c72e 100644 (file)
@@ -311,7 +311,10 @@ struct _ieee754_csr {
        unsigned pad0:7;
        unsigned nod:1;         /* set 1 for no denormalised numbers */
        unsigned c:1;           /* condition */
-       unsigned pad1:5;
+       unsigned pad1a:2;
+       unsigned mac2008:1;
+       unsigned abs2008:1;
+       unsigned nan2008:1;
        unsigned cx:6;          /* exceptions this operation */
        unsigned mx:5;          /* exception enable  mask */
        unsigned sx:5;          /* exceptions total */
@@ -322,7 +325,10 @@ struct _ieee754_csr {
        unsigned sx:5;          /* exceptions total */
        unsigned mx:5;          /* exception enable  mask */
        unsigned cx:6;          /* exceptions this operation */
-       unsigned pad1:5;
+       unsigned nan2008:1;
+       unsigned abs2008:1;
+       unsigned mac2008:1;
+       unsigned pad1a:2;
        unsigned c:1;           /* condition */
        unsigned nod:1;         /* set 1 for no denormalised numbers */
        unsigned pad0:7;
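
Counting the bitfields from either end, the three new flags land on FCSR bits
18-20, which is where the FPU_CSR_* masks used later in
fpu_emulator_init_fpu() point (positions shown for reference; the
authoritative definitions live in the asm headers):

```c
#define FPU_CSR_NAN2008	(1 << 18)	/* 2008 NaN encoding in effect */
#define FPU_CSR_ABS2008	(1 << 19)	/* non-arithmetic ABS.fmt/NEG.fmt */
#define FPU_CSR_MAC2008	(1 << 20)	/* 2008 fused multiply-add rules */
```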
index 068e56be8de91227c3ec8c65ca2d5c785e3c2360..5afad3db3133114fc9ad57fc618bd0ef2d88e0c5 100644 (file)
@@ -41,7 +41,9 @@ int ieee754dp_isnan(ieee754dp x)
 int ieee754dp_issnan(ieee754dp x)
 {
        assert(ieee754dp_isnan(x));
-       return ((DPMANT(x) & DP_MBIT(DP_MBITS-1)) == DP_MBIT(DP_MBITS-1));
+       if (ieee754_csr.nan2008)
+               return !(DPMANT(x) & DP_MBIT(DP_MBITS-1));
+       return (DPMANT(x) & DP_MBIT(DP_MBITS-1));
 }
 
 
index 4b6c6fb353047e1b9b7185a0cda2e7e95b866d51..1abe83a95c5078214111c6be5f251169325ab2e4 100644 (file)
     if(ve == SP_EMAX+1+SP_EBIAS){\
        if(vm == 0)\
          vc = IEEE754_CLASS_INF;\
-       else if(vm & SP_MBIT(SP_MBITS-1)) \
-         vc = IEEE754_CLASS_SNAN;\
-       else \
-         vc = IEEE754_CLASS_QNAN;\
+       else if (ieee754_csr.nan2008) { \
+         if(vm & SP_MBIT(SP_MBITS-1)) \
+           vc = IEEE754_CLASS_QNAN;\
+         else \
+           vc = IEEE754_CLASS_SNAN;\
+       } else { \
+         if(vm & SP_MBIT(SP_MBITS-1)) \
+           vc = IEEE754_CLASS_SNAN;\
+         else \
+           vc = IEEE754_CLASS_QNAN;\
+       } \
     } else if(ve == SP_EMIN-1+SP_EBIAS) {\
        if(vm) {\
            ve = SP_EMIN;\
@@ -117,10 +124,17 @@ u64 ym; int ye; int ys; int yc
     if(ve == DP_EMAX+1+DP_EBIAS){\
        if(vm == 0)\
          vc = IEEE754_CLASS_INF;\
-       else if(vm & DP_MBIT(DP_MBITS-1)) \
-         vc = IEEE754_CLASS_SNAN;\
-       else \
-         vc = IEEE754_CLASS_QNAN;\
+       else if (ieee754_csr.nan2008) { \
+         if(vm & DP_MBIT(DP_MBITS-1)) \
+           vc = IEEE754_CLASS_QNAN;\
+         else \
+           vc = IEEE754_CLASS_SNAN;\
+       } else { \
+         if(vm & DP_MBIT(DP_MBITS-1)) \
+           vc = IEEE754_CLASS_SNAN;\
+         else \
+           vc = IEEE754_CLASS_QNAN;\
+       } \
     } else if(ve == DP_EMIN-1+DP_EBIAS) {\
        if(vm) {\
            ve = DP_EMIN;\
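
The EXPLODE changes invert the quiet/signalling test when nan2008 is set:
IEEE 754-2008 defines a set mantissa MSB as a quiet NaN, while the legacy
MIPS encoding uses the same bit to mark a signalling NaN. A single-precision
sketch of the resulting classification:

```c
#include <stdbool.h>
#include <stdint.h>

static bool sp_is_snan(uint32_t bits, bool nan2008)
{
	uint32_t exp  = (bits >> 23) & 0xff;
	uint32_t mant = bits & 0x007fffff;
	bool msb = mant & (1u << 22);	/* SP_MBIT(SP_MBITS-1) */

	if (exp != 0xff || mant == 0)
		return false;		/* Inf or a finite number */
	return nan2008 ? !msb : msb;
}
```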
index 15d1e36cfe64a26c62b10a78133cd9a8415ff9ae..18ba8356982f6bb38956cfef85fc23aaf4d76f09 100644 (file)
@@ -41,6 +41,8 @@ int ieee754sp_isnan(ieee754sp x)
 int ieee754sp_issnan(ieee754sp x)
 {
        assert(ieee754sp_isnan(x));
+       if (ieee754_csr.nan2008)
+               return !(SPMANT(x) & SP_MBIT(SP_MBITS-1));
        return (SPMANT(x) & SP_MBIT(SP_MBITS-1));
 }
 
index 1c586575fe172c2283846da91dc5d9910b52f0c2..92bc4a529b6cffe15298c3c7a6cb3f83f1abf7a4 100644 (file)
 #include <asm/fpu.h>
 #include <asm/fpu_emulator.h>
 
-#define SIGNALLING_NAN 0x7ff800007ff80000LL
+#define SIGNALLING_NAN      0x7ff800007ff80000LL
+#define SIGNALLING_NAN2008  0x7ff000007fa00000LL
+
+extern unsigned int fpu_fcr31 __read_mostly;
+extern unsigned int system_has_fpu __read_mostly;
+static int nan2008 __read_mostly = -1;
+
+static int __init setup_nan2008(char *str)
+{
+       get_option(&str, &nan2008);
+
+       return 1;
+}
+
+__setup("nan2008=", setup_nan2008);
 
 void fpu_emulator_init_fpu(void)
 {
@@ -39,10 +53,28 @@ void fpu_emulator_init_fpu(void)
                printk("Algorithmics/MIPS FPU Emulator v1.5\n");
        }
 
-       current->thread.fpu.fcr31 = 0;
-       for (i = 0; i < 32; i++) {
-               current->thread.fpu.fpr[i] = SIGNALLING_NAN;
+       if (system_has_fpu)
+               current->thread.fpu.fcr31 = fpu_fcr31;
+       else if (nan2008 < 0) {
+               if (!test_thread_flag(TIF_32BIT_REGS))
+                       current->thread.fpu.fcr31 = FPU_CSR_DEFAULT|FPU_CSR_MAC2008|FPU_CSR_ABS2008|FPU_CSR_NAN2008;
+               else
+                       current->thread.fpu.fcr31 = FPU_CSR_DEFAULT;
+       } else {
+               if (nan2008)
+                       current->thread.fpu.fcr31 = FPU_CSR_DEFAULT|FPU_CSR_MAC2008|FPU_CSR_ABS2008|FPU_CSR_NAN2008;
+               else
+                       current->thread.fpu.fcr31 = FPU_CSR_DEFAULT;
        }
+
+       if (current->thread.fpu.fcr31 & FPU_CSR_NAN2008)
+               for (i = 0; i < 32; i++) {
+                       current->thread.fpu.fpr[i] = SIGNALLING_NAN2008;
+               }
+       else
+               for (i = 0; i < 32; i++) {
+                       current->thread.fpu.fpr[i] = SIGNALLING_NAN;
+               }
 }
 
 
@@ -52,7 +84,7 @@ void fpu_emulator_init_fpu(void)
  * with appropriate macros from uaccess.h
  */
 
-int fpu_emulator_save_context(struct sigcontext __user *sc)
+inline int fpu_emulator_save_context(struct sigcontext __user *sc)
 {
        int i;
        int err = 0;
@@ -66,7 +98,7 @@ int fpu_emulator_save_context(struct sigcontext __user *sc)
        return err;
 }
 
-int fpu_emulator_restore_context(struct sigcontext __user *sc)
+inline int fpu_emulator_restore_context(struct sigcontext __user *sc)
 {
        int i;
        int err = 0;
@@ -90,6 +122,17 @@ int fpu_emulator_save_context32(struct sigcontext32 __user *sc)
        int i;
        int err = 0;
 
+       if (!test_thread_flag(TIF_32BIT_REGS)) {
+               for (i = 0; i < 32; i++) {
+                       err |=
+                           __put_user(current->thread.fpu.fpr[i], &sc->sc_fpregs[i]);
+               }
+               err |= __put_user(current->thread.fpu.fcr31, &sc->sc_fpc_csr);
+
+               return err;
+
+       }
+
        for (i = 0; i < 32; i+=2) {
                err |=
                    __put_user(current->thread.fpu.fpr[i], &sc->sc_fpregs[i]);
@@ -104,6 +147,16 @@ int fpu_emulator_restore_context32(struct sigcontext32 __user *sc)
        int i;
        int err = 0;
 
+       if (!test_thread_flag(TIF_32BIT_REGS)) {
+               for (i = 0; i < 32; i++) {
+                       err |=
+                           __get_user(current->thread.fpu.fpr[i], &sc->sc_fpregs[i]);
+               }
+               err |= __get_user(current->thread.fpu.fcr31, &sc->sc_fpc_csr);
+
+               return err;
+       }
+
        for (i = 0; i < 32; i+=2) {
                err |=
                    __get_user(current->thread.fpu.fpr[i], &sc->sc_fpregs[i]);
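Note on the two sc_fpregs[] layouts above: with TIF_32BIT_REGS clear the
thread uses the 64-bit register file (Status.FR=1), so all 32 FPRs are
independent and every slot is saved/restored; with the flag set, the 32-bit
view pairs even/odd registers and the original stride-2 loops move only the
even-indexed pairs. A sketch of the difference (values illustrative):

	/*
	 * FR=1: sc_fpregs[0..31] each hold one full 64-bit FPR.
	 * FR=0: sc_fpregs[0], [2], [4], ... hold the even/odd register
	 *       pairs as single 64-bit values; the odd slots stay unused.
	 */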
index 8557fb55286321fe0af21e67ea212212d4c294da..4e082e11ea1412ed33b47cc5ab49cc7fce8a0c6c 100644 (file)
@@ -176,6 +176,14 @@ static void octeon_flush_kernel_vmap_range(unsigned long vaddr, int size)
        BUG();
 }
 
+static void octeon_flush_data_cache_range(struct vm_area_struct *vma,
+       unsigned long vaddr, struct page *page, unsigned long addr,
+       unsigned long size)
+{
+       octeon_flush_cache_page(vma, addr, page_to_pfn(page));
+}
+
+
 /**
  * Probe Octeon's caches
  *
@@ -275,6 +283,7 @@ void __cpuinit octeon_cache_init(void)
        flush_cache_sigtramp            = octeon_flush_cache_sigtramp;
        flush_icache_all                = octeon_flush_icache_all;
        flush_data_cache_page           = octeon_flush_data_cache_page;
+       mips_flush_data_cache_range     = octeon_flush_data_cache_range;
        flush_icache_range              = octeon_flush_icache_range;
        local_flush_icache_range        = local_octeon_flush_icache_range;
 
index 704dc735a59dfcd1cc368197730f6f279c2ea845..cad7d0f9f0b39e5b668a1deb63408743788e2273 100644 (file)
@@ -274,6 +274,13 @@ static void r3k_flush_data_cache_page(unsigned long addr)
 {
 }
 
+static void r3k_mips_flush_data_cache_range(struct vm_area_struct *vma,
+       unsigned long vaddr, struct page *page, unsigned long addr,
+       unsigned long size)
+{
+       r3k_flush_cache_page(vma, addr, page_to_pfn(page));
+}
+
 static void r3k_flush_cache_sigtramp(unsigned long addr)
 {
        unsigned long flags;
@@ -322,7 +329,8 @@ void __cpuinit r3k_cache_init(void)
        flush_cache_all = r3k_flush_cache_all;
        __flush_cache_all = r3k___flush_cache_all;
        flush_cache_mm = r3k_flush_cache_mm;
        flush_cache_range = r3k_flush_cache_range;
+       mips_flush_data_cache_range = r3k_mips_flush_data_cache_range;
        flush_cache_page = r3k_flush_cache_page;
        flush_icache_range = r3k_flush_icache_range;
        local_flush_icache_range = r3k_flush_icache_range;
index 940f938cf082a0cc1cf11f10106c827058fdf752..8d48f9fb96ed87159f1cdfa311412b0aea339112 100644 (file)
@@ -6,12 +6,14 @@
  * Copyright (C) 1996 David S. Miller (davem@davemloft.net)
  * Copyright (C) 1997, 1998, 1999, 2000, 2001, 2002 Ralf Baechle (ralf@gnu.org)
  * Copyright (C) 1999, 2000 Silicon Graphics, Inc.
+ * Copyright (C) 2012, MIPS Technology, Leonid Yegoshin (yegoshin@mips.com)
  */
 #include <linux/hardirq.h>
 #include <linux/init.h>
 #include <linux/highmem.h>
 #include <linux/kernel.h>
 #include <linux/linkage.h>
+#include <linux/preempt.h>
 #include <linux/sched.h>
 #include <linux/smp.h>
 #include <linux/mm.h>
  *  o collapses to normal function call on systems with a single shared
  *    primary cache.
  *  o doesn't disable interrupts on the local CPU
+ *
+ *  Note: this function is now used for address cacheops only
+ *
+ *  Note2: It is unsafe to issue address cacheops via an SMP call: the other
+ *         CPU may not have this process's address map (ASID) loaded into
+ *         EntryHi, and working around that requires tricks absent from this
+ *         file. Cross-CPU address cacheops are neither easy nor safe.
  */
 static inline void r4k_on_each_cpu(void (*func) (void *info), void *info)
 {
@@ -55,12 +64,66 @@ static inline void r4k_on_each_cpu(void (*func) (void *info), void *info)
        preempt_enable();
 }
 
-#if defined(CONFIG_MIPS_CMP)
+#if defined(CONFIG_MIPS_CMP) && defined(CONFIG_SMP)
 #define cpu_has_safe_index_cacheops 0
 #else
 #define cpu_has_safe_index_cacheops 1
 #endif
 
+/*
+ * This variant of smp_call_function is used for index cacheops only.
+ */
+static inline void r4k_indexop_on_each_cpu(void (*func) (void *info), void *info)
+{
+       preempt_disable();
+
+#ifdef CONFIG_SMP
+       if (!cpu_has_safe_index_cacheops) {
+
+               if (smp_num_siblings > 1) {
+                       cpumask_t tmp_mask = INIT_CPUMASK;
+                       int cpu, this_cpu, n = 0;
+
+                       /* If the processor doesn't have safe index cacheops
+                          (the likely case), run the cache flush on the other
+                          CPUs too. Siblings are assumed to share an L1 cache,
+                          so run the flush only once per sibling group. LY22 */
+
+                       this_cpu = smp_processor_id();
+                       for_each_online_cpu(cpu) {
+
+                               if (cpumask_test_cpu(cpu, (&per_cpu(cpu_sibling_map, this_cpu))))
+                                       continue;
+
+                               if (cpumask_intersects(&tmp_mask, (&per_cpu(cpu_sibling_map, cpu))))
+                                       continue;
+                               cpu_set(cpu, tmp_mask);
+                               n++;
+                       }
+                       if (n)
+                               smp_call_function_many(&tmp_mask, func, info, 1);
+               } else
+                       smp_call_function(func, info, 1);
+       }
+#endif
+       func(info);
+       preempt_enable();
+}
+
+/*  Define a rough size where address cacheops are still more optimal than
+ *  index cacheops on whole cache (in D/I-cache size terms).
+ *  Value "2" reflects an expense of smp_call_function() on top of
+ *  whole cache flush via index cacheops.
+ */
+#ifndef CACHE_CPU_LATENCY
+#ifdef CONFIG_SMP
+#define CACHE_CPU_LATENCY   (2)
+#else
+#define CACHE_CPU_LATENCY   (1)
+#endif
+#endif
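Note: a sketch of the policy CACHE_CPU_LATENCY encodes (hypothetical helper,
not in the patch). Below the threshold a ranged address cacheop is cheaper;
at or above it a whole-cache index flush wins, the factor of 2 covering the
smp_call_function() cost on SMP:

	static inline int prefer_index_flush(unsigned long size)
	{
		/* dcache_size and CACHE_CPU_LATENCY as defined above */
		return size >= dcache_size * CACHE_CPU_LATENCY;
	}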
+
+
 /*
  * Must die.
  */
@@ -121,6 +184,28 @@ static void __cpuinit r4k_blast_dcache_page_setup(void)
                r4k_blast_dcache_page = r4k_blast_dcache_page_dc64;
 }
 
+#ifndef CONFIG_EVA
+#define r4k_blast_dcache_user_page  r4k_blast_dcache_page
+#else
+
+static void (*r4k_blast_dcache_user_page)(unsigned long addr);
+
+static void __cpuinit r4k_blast_dcache_user_page_setup(void)
+{
+       unsigned long  dc_lsize = cpu_dcache_line_size();
+
+       if (dc_lsize == 0)
+               r4k_blast_dcache_user_page = (void *)cache_noop;
+       else if (dc_lsize == 16)
+               r4k_blast_dcache_user_page = blast_dcache16_user_page;
+       else if (dc_lsize == 32)
+               r4k_blast_dcache_user_page = blast_dcache32_user_page;
+       else if (dc_lsize == 64)
+               r4k_blast_dcache_user_page = blast_dcache64_user_page;
+}
+
+#endif
+
 static void (* r4k_blast_dcache_page_indexed)(unsigned long addr);
 
 static void __cpuinit r4k_blast_dcache_page_indexed_setup(void)
@@ -241,6 +326,27 @@ static void __cpuinit r4k_blast_icache_page_setup(void)
                r4k_blast_icache_page = blast_icache64_page;
 }
 
+#ifndef CONFIG_EVA
+#define r4k_blast_icache_user_page  r4k_blast_icache_page
+#else
+
+static void (* r4k_blast_icache_user_page)(unsigned long addr);
+
+static void __cpuinit r4k_blast_icache_user_page_setup(void)
+{
+       unsigned long ic_lsize = cpu_icache_line_size();
+
+       if (ic_lsize == 0)
+               r4k_blast_icache_user_page = (void *)cache_noop;
+       else if (ic_lsize == 16)
+               r4k_blast_icache_user_page = blast_icache16_user_page;
+       else if (ic_lsize == 32)
+               r4k_blast_icache_user_page = blast_icache32_user_page;
+       else if (ic_lsize == 64)
+               r4k_blast_icache_user_page = blast_icache64_user_page;
+}
+
+#endif
 
 static void (* r4k_blast_icache_page_indexed)(unsigned long addr);
 
@@ -365,7 +471,7 @@ static inline void local_r4k___flush_cache_all(void * args)
 
 static void r4k___flush_cache_all(void)
 {
-       r4k_on_each_cpu(local_r4k___flush_cache_all, NULL);
+       r4k_indexop_on_each_cpu(local_r4k___flush_cache_all, NULL);
 }
 
 static inline int has_valid_asid(const struct mm_struct *mm)
@@ -383,16 +489,73 @@ static inline int has_valid_asid(const struct mm_struct *mm)
 #endif
 }
 
-static void r4k__flush_cache_vmap(void)
+
+static inline void local_r4__flush_dcache(void *args)
 {
        r4k_blast_dcache();
 }
 
-static void r4k__flush_cache_vunmap(void)
+struct vmap_args {
+       unsigned long start;
+       unsigned long end;
+};
+
+static inline void local_r4__flush_cache_vmap(void *args)
+{
+       blast_dcache_range(((struct vmap_args *)args)->start, ((struct vmap_args *)args)->end);
+}
+
+static void r4k__flush_cache_vmap(unsigned long start, unsigned long end)
 {
-       r4k_blast_dcache();
+       unsigned long size = end - start;
+
+       if (cpu_has_safe_index_cacheops && size >= dcache_size) {
+               r4k_blast_dcache();
+       } else {
+/* Commented out until the bug in free_unmap_vmap_area() is fixed - it may be
+   called with the page unmapped; an address cacheop then takes a TLB refill exception
+               if (size >= (dcache_size * CACHE_CPU_LATENCY))
+ */
+                       r4k_indexop_on_each_cpu(local_r4__flush_dcache, NULL);
+/* Commented out until the bug in free_unmap_vmap_area() is fixed - it may be
+   called with the page unmapped; an address cacheop then takes a TLB refill exception
+               else {
+                       struct vmap_args args;
+
+                       args.start = start;
+                       args.end = end;
+                       r4k_on_each_cpu(local_r4__flush_cache_vmap, (void *)&args);
+               }
+ */
+       }
+}
+
+static void r4k__flush_cache_vunmap(unsigned long start, unsigned long end)
+{
+       unsigned long size = end - start;
+
+       if (cpu_has_safe_index_cacheops && size >= dcache_size)
+               r4k_blast_dcache();
+       else {
+/* Commented out until the bug in free_unmap_vmap_area() is fixed - it may be
+   called with the page unmapped; an address cacheop then takes a TLB refill exception
+               if (size >= (dcache_size * CACHE_CPU_LATENCY))
+ */
+                       r4k_indexop_on_each_cpu(local_r4__flush_dcache, NULL);
+/* Commented out until the bug in free_unmap_vmap_area() is fixed - it may be
+   called with the page unmapped; an address cacheop then takes a TLB refill exception
+               else {
+                       struct vmap_args args;
+
+                       args.start = start;
+                       args.end = end;
+                       r4k_on_each_cpu(local_r4__flush_cache_vmap, (void *)&args);
+               }
+ */
+       }
 }
 
+
 static inline void local_r4k_flush_cache_range(void * args)
 {
        struct vm_area_struct *vma = args;
@@ -415,7 +578,7 @@ static void r4k_flush_cache_range(struct vm_area_struct *vma,
        int exec = vma->vm_flags & VM_EXEC;
 
        if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc))
-               r4k_on_each_cpu(local_r4k_flush_cache_range, vma);
+               r4k_indexop_on_each_cpu(local_r4k_flush_cache_range, vma);
 }
 
 static inline void local_r4k_flush_cache_mm(void * args)
@@ -447,7 +610,7 @@ static void r4k_flush_cache_mm(struct mm_struct *mm)
        if (!cpu_has_dc_aliases)
                return;
 
-       r4k_on_each_cpu(local_r4k_flush_cache_mm, mm);
+       r4k_indexop_on_each_cpu(local_r4k_flush_cache_mm, mm);
 }
 
 struct flush_cache_page_args {
@@ -496,9 +659,17 @@ static inline void local_r4k_flush_cache_page(void *args)
        if ((!exec) && !cpu_has_dc_aliases)
                return;
 
-       if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID))
-               vaddr = NULL;
-       else {
+       if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID)) {
+               if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc)) {
+                       r4k_blast_dcache_user_page(addr);
+                       if (exec && (!cpu_has_cm2) && !cpu_has_ic_fills_f_dc)
+                               wmb();
+                       if (exec && !cpu_icache_snoops_remote_store)
+                               r4k_blast_scache_page(addr);
+               }
+               if (exec)
+                       r4k_blast_icache_user_page(addr);
+       } else {
                /*
                 * Use kmap_coherent or kmap_atomic to do flushes for
                 * another ASID than the current one.
@@ -510,39 +681,37 @@ static inline void local_r4k_flush_cache_page(void *args)
                else
                        vaddr = kmap_atomic(page);
                addr = (unsigned long)vaddr;
-       }
-
-       if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc)) {
-               r4k_blast_dcache_page(addr);
-               if (exec && !cpu_has_ic_fills_f_dc)
-                       wmb();
-               if (exec && !cpu_icache_snoops_remote_store)
-                       r4k_blast_scache_page(addr);
-       }
-       if (exec) {
-               if (vaddr && cpu_has_vtag_icache && mm == current->active_mm) {
-                       int cpu = smp_processor_id();
 
-                       if (cpu_context(cpu, mm) != 0)
-                               drop_mmu_context(mm, cpu);
-                       dontflash = 1;
-               } else
-                       if (map_coherent || !cpu_has_ic_aliases)
-                               r4k_blast_icache_page(addr);
-       }
+               if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc)) {
+                       r4k_blast_dcache_page(addr);
+                       if (exec && (!cpu_has_cm2) && !cpu_has_ic_fills_f_dc)
+                               wmb();
+                       if (exec && !cpu_icache_snoops_remote_store)
+                               r4k_blast_scache_page(addr);
+               }
+               if (exec) {
+                       if (cpu_has_vtag_icache && mm == current->active_mm) {
+                               int cpu = smp_processor_id();
+
+                               if (cpu_context(cpu, mm) != 0)
+                                       drop_mmu_context(mm, cpu);
+                               dontflash = 1;
+                       } else
+                               if (map_coherent || !cpu_has_ic_aliases)
+                                       r4k_blast_icache_page(addr);
+               }
 
-       if (vaddr) {
                if (map_coherent)
                        kunmap_coherent();
                else
                        kunmap_atomic(vaddr);
-       }
 
-       /*  in case of I-cache aliasing - blast it via coherent page */
-       if (exec && cpu_has_ic_aliases && (!dontflash) && !map_coherent) {
-               vaddr = kmap_coherent(page, addr);
-               r4k_blast_icache_page((unsigned long)vaddr);
-               kunmap_coherent();
+               /*  in case of I-cache aliasing - blast it via coherent page */
+               if (exec && cpu_has_ic_aliases && (!dontflash) && !map_coherent) {
+                       vaddr = kmap_coherent(page, addr);
+                       r4k_blast_icache_page((unsigned long)vaddr);
+                       kunmap_coherent();
+               }
        }
 }
 
@@ -573,11 +742,86 @@ static void r4k_flush_data_cache_page(unsigned long addr)
                r4k_on_each_cpu(local_r4k_flush_data_cache_page, (void *) addr);
 }
 
+
+struct mips_flush_data_cache_range_args {
+       struct vm_area_struct *vma;
+       unsigned long vaddr;
+       unsigned long start;
+       unsigned long len;
+};
+
+static inline void local_r4k_mips_flush_data_cache_range(void *args)
+{
+       struct mips_flush_data_cache_range_args *f_args = args;
+       unsigned long vaddr = f_args->vaddr;
+       unsigned long start = f_args->start;
+       unsigned long len = f_args->len;
+       struct vm_area_struct * vma = f_args->vma;
+
+       blast_dcache_range(start, start + len);
+
+       if ((vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc) {
+               if (!cpu_has_cm2)
+                       wmb();
+
+               /* vma is given for the exec check only; the mm is current's,
+                  so no non-current vma pages are flushed - just user or kernel */
+               protected_blast_icache_range(vaddr, vaddr + len);
+       }
+}
+
+/* Flush dirty kernel data and the corresponding user instructions
+   (if needed). Used in copy_to_user_page() */
+static void r4k_mips_flush_data_cache_range(struct vm_area_struct *vma,
+       unsigned long vaddr, struct page *page, unsigned long start,
+       unsigned long len)
+{
+       struct mips_flush_data_cache_range_args args;
+
+       args.vma = vma;
+       args.vaddr = vaddr;
+       args.start = start;
+       args.len = len;
+
+       r4k_on_each_cpu(local_r4k_mips_flush_data_cache_range, (void *)&args);
+}
+
+
 struct flush_icache_range_args {
        unsigned long start;
        unsigned long end;
 };
 
+static inline void local_r4k_flush_icache(void *args)
+{
+       if (!cpu_has_ic_fills_f_dc) {
+               r4k_blast_dcache();
+
+               wmb();
+       }
+
+       r4k_blast_icache();
+}
+
+static inline void local_r4k_flush_icache_range_ipi(void *args)
+{
+       struct flush_icache_range_args *fir_args = args;
+       unsigned long start = fir_args->start;
+       unsigned long end = fir_args->end;
+
+       if (!cpu_has_ic_fills_f_dc) {
+               R4600_HIT_CACHEOP_WAR_IMPL;
+               protected_blast_dcache_range(start, end);
+
+               if (!cpu_has_cm2)
+                       wmb();
+       }
+
+       protected_blast_icache_range(start, end);
+
+}
+
+/* This function is used on the local CPU only, e.g. during boot */
 static inline void local_r4k_flush_icache_range(unsigned long start, unsigned long end)
 {
        if (!cpu_has_ic_fills_f_dc) {
@@ -585,38 +829,49 @@ static inline void local_r4k_flush_icache_range(unsigned long start, unsigned lo
                        r4k_blast_dcache();
                } else {
                        R4600_HIT_CACHEOP_WAR_IMPL;
-                       protected_blast_dcache_range(start, end);
+                       blast_dcache_range(start, end);
                }
-       }
 
-       wmb();
+               wmb();
+       }
 
        if (end - start > icache_size)
                r4k_blast_icache();
        else
-               protected_blast_icache_range(start, end);
-}
-
-static inline void local_r4k_flush_icache_range_ipi(void *args)
-{
-       struct flush_icache_range_args *fir_args = args;
-       unsigned long start = fir_args->start;
-       unsigned long end = fir_args->end;
-
-       local_r4k_flush_icache_range(start, end);
+               blast_icache_range(start, end);
+#ifdef CONFIG_EVA
+       /* This is here to smooth over any kind of address aliasing. It is
+          used only during boot, so it has no impact on performance.
+          LY22 */
+       bc_wback_inv(start, (end - start));
+       __sync();
+#endif
 }
 
+/* This function can be called for kernel OR user addresses:
+ * kernel addresses come from modules and gdb, user ones from binfmt_aout/flat.
+ * So take care - check get_fs() */
 static void r4k_flush_icache_range(unsigned long start, unsigned long end)
 {
        struct flush_icache_range_args args;
+       unsigned long size = end - start;
 
        args.start = start;
        args.end = end;
 
-       r4k_on_each_cpu(local_r4k_flush_icache_range_ipi, &args);
+       if (cpu_has_safe_index_cacheops &&
+           (((size >= icache_size) && !cpu_has_ic_fills_f_dc) ||
+            (size >= dcache_size)))
+               local_r4k_flush_icache((void *)&args);
+       else if (((size < (icache_size * CACHE_CPU_LATENCY)) && !cpu_has_ic_fills_f_dc) ||
+                (size < (dcache_size * CACHE_CPU_LATENCY)))
+               r4k_on_each_cpu(local_r4k_flush_icache_range_ipi, (void *)&args);
+       else
+               r4k_indexop_on_each_cpu(local_r4k_flush_icache, NULL);
        instruction_hazard();
 }
 
+
 #ifdef CONFIG_DMA_NONCOHERENT
 
 static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
@@ -624,11 +883,13 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
        /* Catch bad driver code */
        BUG_ON(size == 0);
 
+       preempt_disable();
        if (cpu_has_inclusive_pcaches) {
                if (size >= scache_size)
                        r4k_blast_scache();
                else
                        blast_scache_range(addr, addr + size);
+               preempt_enable();
                __sync();
                return;
        }
@@ -644,9 +905,11 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
                R4600_HIT_CACHEOP_WAR_IMPL;
                blast_dcache_range(addr, addr + size);
        }
+       preempt_enable();
 
        bc_wback_inv(addr, size);
-       __sync();
+       if (!cpu_has_cm2_l2sync)
+               __sync();
 }
 
 static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
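Note on the preempt_disable()/preempt_enable() pairs added to the DMA paths:
they pin the task to one CPU so the primary-cache blasts run on the cache the
data actually sits in, while bc_wback_inv()/bc_inv() target the shared L2/bus
cache and may run preemptible again. A minimal sketch of the pattern:

	preempt_disable();                      /* stay on this CPU's caches */
	blast_dcache_range(addr, addr + size);  /* per-CPU primary cache op */
	preempt_enable();
	bc_wback_inv(addr, size);               /* L2 is shared; no pinning */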
@@ -654,6 +917,7 @@ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
        /* Catch bad driver code */
        BUG_ON(size == 0);
 
+       preempt_disable();
        if (cpu_has_inclusive_pcaches) {
                if (size >= scache_size)
                        r4k_blast_scache();
@@ -668,6 +932,7 @@ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
                         */
                        blast_inv_scache_range(addr, addr + size);
                }
+               preempt_enable();
                __sync();
                return;
        }
@@ -678,6 +943,7 @@ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
                R4600_HIT_CACHEOP_WAR_IMPL;
                blast_inv_dcache_range(addr, addr + size);
        }
+       preempt_enable();
 
        bc_inv(addr, size);
        __sync();
@@ -766,7 +1032,10 @@ static void r4k_flush_kernel_vmap_range(unsigned long vaddr, int size)
        args.vaddr = (unsigned long) vaddr;
        args.size = size;
 
-       r4k_on_each_cpu(local_r4k_flush_kernel_vmap_range, &args);
+       if (cpu_has_safe_index_cacheops && size >= dcache_size)
+               r4k_indexop_on_each_cpu(local_r4k_flush_kernel_vmap_range, &args);
+       else
+               r4k_on_each_cpu(local_r4k_flush_kernel_vmap_range, &args);
 }
 
 static inline void rm7k_erratum31(void)
@@ -803,20 +1072,30 @@ static inline void rm7k_erratum31(void)
 
 static inline void alias_74k_erratum(struct cpuinfo_mips *c)
 {
+       unsigned int imp = c->processor_id & 0xff00;
+       unsigned int rev = c->processor_id & PRID_REV_MASK;
+
        /*
         * Early versions of the 74K do not update the cache tags on a
         * vtag miss/ptag hit which can occur in the case of KSEG0/KUSEG
         * aliases. In this case it is better to treat the cache as always
         * having aliases.
         */
-       if ((c->processor_id & 0xff) <= PRID_REV_ENCODE_332(2, 4, 0))
-               c->dcache.flags |= MIPS_CACHE_VTAG;
-       if ((c->processor_id & 0xff) == PRID_REV_ENCODE_332(2, 4, 0))
-               write_c0_config6(read_c0_config6() | MIPS_CONF6_SYND);
-       if (((c->processor_id & 0xff00) == PRID_IMP_1074K) &&
-           ((c->processor_id & 0xff) <= PRID_REV_ENCODE_332(1, 1, 0))) {
-               c->dcache.flags |= MIPS_CACHE_VTAG;
-               write_c0_config6(read_c0_config6() | MIPS_CONF6_SYND);
+       switch (imp) {
+       case PRID_IMP_74K:
+               if (rev <= PRID_REV_ENCODE_332(2, 4, 0))
+                       c->dcache.flags |= MIPS_CACHE_VTAG;
+               if (rev == PRID_REV_ENCODE_332(2, 4, 0))
+                       write_c0_config6(read_c0_config6() | MIPS_CONF6_SYND);
+               break;
+       case PRID_IMP_1074K:
+               if (rev <= PRID_REV_ENCODE_332(1, 1, 0)) {
+                       c->dcache.flags |= MIPS_CACHE_VTAG;
+                       write_c0_config6(read_c0_config6() | MIPS_CONF6_SYND);
+               }
+               break;
+       default:
+               BUG();
        }
 }
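A worked example of the PRID decoding above, with a hypothetical 74K
processor_id of 0x00019750:

	/*
	 * imp = 0x9750 & 0xff00        = 0x9700 (PRID_IMP_74K)
	 * rev = 0x9750 & PRID_REV_MASK = 0x50
	 *     = PRID_REV_ENCODE_332(2, 4, 0) = (2 << 5) | (4 << 2) | 0
	 * => the dcache gets MIPS_CACHE_VTAG and Config6.SYND is set.
	 */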
 
@@ -1088,6 +1367,10 @@ static void __cpuinit probe_pcache(void)
        case CPU_34K:
        case CPU_74K:
        case CPU_1004K:
+       case CPU_PROAPTIV:
+       case CPU_INTERAPTIV:
+       case CPU_VIRTUOSO:
+       case CPU_P5600:
                if (c->cputype == CPU_74K)
                        alias_74k_erratum(c);
                if (!(read_c0_config7() & MIPS_CONF7_IAR)) {
@@ -1366,11 +1649,11 @@ static void nxp_pr4450_fixup_config(void)
        NXP_BARRIER();
 }
 
-static int __cpuinitdata cca = -1;
+int mips_cca = -1;
 
 static int __init cca_setup(char *str)
 {
-       get_option(&str, &cca);
+       get_option(&str, &mips_cca);
 
        return 0;
 }
@@ -1379,12 +1662,12 @@ early_param("cca", cca_setup);
 
 static void __cpuinit coherency_setup(void)
 {
-       if (cca < 0 || cca > 7)
-               cca = read_c0_config() & CONF_CM_CMASK;
-       _page_cachable_default = cca << _CACHE_SHIFT;
+       if (mips_cca < 0 || mips_cca > 7)
+               mips_cca = read_c0_config() & CONF_CM_CMASK;
+       _page_cachable_default = mips_cca << _CACHE_SHIFT;
 
-       pr_debug("Using cache attribute %d\n", cca);
-       change_c0_config(CONF_CM_CMASK, cca);
+       pr_debug("Using cache attribute %d\n", mips_cca);
+       change_c0_config(CONF_CM_CMASK, mips_cca);
 
        /*
         * c0_status.cu=0 specifies that updates by the sc instruction use
@@ -1453,6 +1736,10 @@ void __cpuinit r4k_cache_init(void)
        r4k_blast_scache_page_setup();
        r4k_blast_scache_page_indexed_setup();
        r4k_blast_scache_setup();
+#ifdef CONFIG_EVA
+       r4k_blast_dcache_user_page_setup();
+       r4k_blast_icache_user_page_setup();
+#endif
 
        /*
         * Some MIPS32 and MIPS64 processors have physically indexed caches.
@@ -1481,11 +1768,12 @@ void __cpuinit r4k_cache_init(void)
        flush_icache_all        = r4k_flush_icache_all;
        local_flush_data_cache_page     = local_r4k_flush_data_cache_page;
        flush_data_cache_page   = r4k_flush_data_cache_page;
+       mips_flush_data_cache_range = r4k_mips_flush_data_cache_range;
        flush_icache_range      = r4k_flush_icache_range;
        local_flush_icache_range        = local_r4k_flush_icache_range;
 
 #if defined(CONFIG_DMA_NONCOHERENT)
-       if (coherentio) {
+       if (coherentio > 0) {
                _dma_cache_wback_inv    = (void *)cache_noop;
                _dma_cache_wback        = (void *)cache_noop;
                _dma_cache_inv          = (void *)cache_noop;
@@ -1505,6 +1793,13 @@ void __cpuinit r4k_cache_init(void)
         * or not to flush caches.
         */
        local_r4k___flush_cache_all(NULL);
+#ifdef CONFIG_EVA
+       /* this is done just in case if some address aliasing does exist in
+          board like old Malta memory map. Doesn't hurt anyway. LY22 */
+       smp_wmb();
+       r4k_blast_scache();
+       smp_wmb();
+#endif
 
        coherency_setup();
        board_cache_error_setup = r4k_cache_error_setup;
index ba9da270289fa9924a01df6a819989a6a23a19e6..99d497aa1d20016a85210258fab66a85d7ea2066 100644 (file)
@@ -122,12 +122,12 @@ static inline void tx39_blast_icache(void)
        local_irq_restore(flags);
 }
 
-static void tx39__flush_cache_vmap(void)
+static void tx39__flush_cache_vmap(unsigned long start, unsigned long end)
 {
        tx39_blast_dcache();
 }
 
-static void tx39__flush_cache_vunmap(void)
+static void tx39__flush_cache_vunmap(unsigned long start, unsigned long end)
 {
        tx39_blast_dcache();
 }
@@ -230,6 +230,13 @@ static void tx39_flush_data_cache_page(unsigned long addr)
        tx39_blast_dcache_page(addr);
 }
 
+static void local_flush_data_cache_range(struct vm_area_struct *vma,
+       unsigned long vaddr, struct page *page, unsigned long addr,
+       unsigned long size)
+{
+       flush_cache_page(vma, addr, page_to_pfn(page));
+}
+
 static void tx39_flush_icache_range(unsigned long start, unsigned long end)
 {
        if (end - start > dcache_size)
@@ -371,6 +378,7 @@ void __cpuinit tx39_cache_init(void)
 
                flush_cache_sigtramp    = (void *) tx39h_flush_icache_all;
                local_flush_data_cache_page     = (void *) tx39h_flush_icache_all;
+               mips_flush_data_cache_range     = (void *) local_flush_data_cache_range;
                flush_data_cache_page   = (void *) tx39h_flush_icache_all;
 
                _dma_cache_wback_inv    = tx39h_dma_cache_wback_inv;
@@ -402,6 +410,7 @@ void __cpuinit tx39_cache_init(void)
 
                flush_cache_sigtramp = tx39_flush_cache_sigtramp;
                local_flush_data_cache_page = local_tx39_flush_data_cache_page;
+               mips_flush_data_cache_range     = (void *) local_flush_data_cache_range;
                flush_data_cache_page = tx39_flush_data_cache_page;
 
                _dma_cache_wback_inv = tx39_dma_cache_wback_inv;
index e4b1ae169c22724cef1da5e87a427ff30e43f326..71a60ffa0a33432885866c9374f3e8b17ca52b78 100644 (file)
@@ -33,8 +33,8 @@ void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page,
 void (*flush_icache_range)(unsigned long start, unsigned long end);
 void (*local_flush_icache_range)(unsigned long start, unsigned long end);
 
-void (*__flush_cache_vmap)(void);
-void (*__flush_cache_vunmap)(void);
+void (*__flush_cache_vmap)(unsigned long start, unsigned long end);
+void (*__flush_cache_vunmap)(unsigned long start, unsigned long end);
 
 void (*__flush_kernel_vmap_range)(unsigned long vaddr, int size);
 void (*__invalidate_kernel_vmap_range)(unsigned long vaddr, int size);
@@ -45,10 +45,14 @@ EXPORT_SYMBOL_GPL(__flush_kernel_vmap_range);
 void (*flush_cache_sigtramp)(unsigned long addr);
 void (*local_flush_data_cache_page)(void * addr);
 void (*flush_data_cache_page)(unsigned long addr);
+void (*mips_flush_data_cache_range)(struct vm_area_struct *vma,
+      unsigned long vaddr, struct page *page, unsigned long addr,
+      unsigned long size);
 void (*flush_icache_all)(void);
 
 EXPORT_SYMBOL_GPL(local_flush_data_cache_page);
 EXPORT_SYMBOL(flush_data_cache_page);
+EXPORT_SYMBOL(mips_flush_data_cache_range);
 EXPORT_SYMBOL(flush_icache_all);
 
 #ifdef CONFIG_DMA_NONCOHERENT
index caf92ecb37d6966639c47e6eb277d74cb42f2012..4ab1843f82b47a3d3d8dd5374eea400941131683 100644 (file)
@@ -22,7 +22,7 @@
 
 #include <dma-coherence.h>
 
-int coherentio = 0;    /* User defined DMA coherency from command line. */
+int coherentio = -1;    /* User-defined DMA coherency; not set yet. */
 EXPORT_SYMBOL_GPL(coherentio);
 int hw_coherentio = 0; /* Actual hardware supported DMA coherency setting. */
 
index b57022bde9ae79be32e8e4d5733bf5f21d31bacc..356e095aeac2563f6f53b85ee5e29761e2301526 100644 (file)
@@ -116,6 +116,11 @@ static inline void kmap_coherent_init(void) {}
 
 void *kmap_coherent(struct page *page, unsigned long addr)
 {
+#ifdef CONFIG_EVA
+       dump_stack();
+       panic("kmap_coherent");
+#else
+
        enum fixed_addresses idx;
        unsigned long vaddr, flags, entrylo;
        unsigned long old_ctx;
@@ -123,7 +128,6 @@ void *kmap_coherent(struct page *page, unsigned long addr)
        int tlbidx;
 
        /* BUG_ON(Page_dcache_dirty(page)); - removed for I-cache flush */
-
        inc_preempt_count();
        idx = (addr >> PAGE_SHIFT) & (FIX_N_COLOURS - 1);
 #ifdef CONFIG_MIPS_MT_SMTC
@@ -169,9 +173,11 @@ void *kmap_coherent(struct page *page, unsigned long addr)
        EXIT_CRITICAL(flags);
 
        return (void*) vaddr;
+#endif /* CONFIG_EVA */
 }
 
-#define UNIQUE_ENTRYHI(idx) (CKSEG0 + ((idx) << (PAGE_SHIFT + 1)))
+#define UNIQUE_ENTRYHI(idx) (cpu_has_tlbinv ? ((CKSEG0 + ((idx) << (PAGE_SHIFT + 1))) | MIPS_EHINV) : \
+                            (CKSEG0 + ((idx) << (PAGE_SHIFT + 1))))
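Note on the reworked UNIQUE_ENTRYHI (here and again in tlb-r4k.c below): on
cores with the TLB-invalidate feature, setting EHINV in EntryHi marks the
entry invalid regardless of its VPN2, so uniqueness of the dummy address is
no longer what keeps entries from matching. A worked example under stated
assumptions (32-bit kernel, PAGE_SHIFT = 12, idx = 3):

	/*
	 * legacy arm: CKSEG0 + (3 << 13) = 0x80000000 + 0x6000 = 0x80006000
	 * tlbinv arm: the same address with MIPS_EHINV or'ed in; the TLB
	 *             treats the entry as invalid whatever VPN2 holds.
	 */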
 
 void kunmap_coherent(void)
 {
@@ -231,11 +237,12 @@ void copy_to_user_page(struct vm_area_struct *vma,
        struct page *page, unsigned long vaddr, void *dst, const void *src,
        unsigned long len)
 {
+       void *vto = NULL;
+
        if (cpu_has_dc_aliases &&
            page_mapped(page) && !Page_dcache_dirty(page)) {
-               void *vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
+               vto = kmap_coherent(page, vaddr) + (vaddr & ~PAGE_MASK);
                memcpy(vto, src, len);
-               kunmap_coherent();
        } else {
                memcpy(dst, src, len);
                if (cpu_has_dc_aliases)
@@ -245,10 +252,14 @@ void copy_to_user_page(struct vm_area_struct *vma,
            (Page_dcache_dirty(page) &&
             pages_do_alias((unsigned long)dst & PAGE_MASK,
                            vaddr & PAGE_MASK))) {
-               flush_cache_page(vma, vaddr, page_to_pfn(page));
+               mips_flush_data_cache_range(vma, vaddr, page,
+                       vto ? (unsigned long)vto : (unsigned long)dst, len);
+
                if (cpu_has_dc_aliases)
                        ClearPageDcacheDirty(page);
        }
+       if (vto)
+               kunmap_coherent();
 }
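Note on the reordering above: the kmap_coherent() mapping must now outlive
the flush, because mips_flush_data_cache_range() is handed the alias that was
actually written. A sketch of the resulting ordering (names from the patch):

	/*
	 * 1. vto = kmap_coherent(page, vaddr) + offset; memcpy(vto, src, len);
	 * 2. mips_flush_data_cache_range(vma, vaddr, page,
	 *                                vto ? vto : dst, len);
	 * 3. if (vto) kunmap_coherent();  - unmap only after the cacheop
	 */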
 
 void copy_from_user_page(struct vm_area_struct *vma,
@@ -263,6 +274,7 @@ void copy_from_user_page(struct vm_area_struct *vma,
        } else
                memcpy(dst, src, len);
 }
+EXPORT_SYMBOL_GPL(copy_from_user_page);
 
 void __init fixrange_init(unsigned long start, unsigned long end,
        pgd_t *pgd_base)
@@ -281,11 +293,11 @@ void __init fixrange_init(unsigned long start, unsigned long end,
        k = __pmd_offset(vaddr);
        pgd = pgd_base + i;
 
-       for ( ; (i < PTRS_PER_PGD) && (vaddr < end); pgd++, i++) {
+       for ( ; (i < PTRS_PER_PGD) && (vaddr != end); pgd++, i++) {
                pud = (pud_t *)pgd;
-               for ( ; (j < PTRS_PER_PUD) && (vaddr < end); pud++, j++) {
+               for ( ; (j < PTRS_PER_PUD) && (vaddr != end); pud++, j++) {
                        pmd = (pmd_t *)pud;
-                       for (; (k < PTRS_PER_PMD) && (vaddr < end); pmd++, k++) {
+                       for (; (k < PTRS_PER_PMD) && (vaddr != end); pmd++, k++) {
                                if (pmd_none(*pmd)) {
                                        pte = (pte_t *) alloc_bootmem_low_pages(PAGE_SIZE);
                                        set_pmd(pmd, __pmd((unsigned long)pte));
@@ -448,7 +460,12 @@ void free_initrd_mem(unsigned long start, unsigned long end)
 void __init_refok free_initmem(void)
 {
        prom_free_prom_memory();
+#ifdef CONFIG_EVA
+       free_init_pages("unused memory", __pa_symbol(&__init_begin),
+               __pa_symbol(&__init_end));
+#else
        free_initmem_default(POISON_FREE_INITMEM);
+#endif
 }
 
 #ifndef CONFIG_MIPS_PGD_C0_CONTEXT
index adc6911ba748915bda5b2575fc03b76891f1fb21..a58300432a61a7892ad3da8d11ac847b1a966eab 100644 (file)
@@ -32,8 +32,11 @@ void pgd_init(unsigned long page)
 
 void __init pagetable_init(void)
 {
+#if defined(CONFIG_HIGHMEM) || defined(FIXADDR_START)
        unsigned long vaddr;
+       unsigned long vend;
        pgd_t *pgd_base;
+#endif
 #ifdef CONFIG_HIGHMEM
        pgd_t *pgd;
        pud_t *pud;
@@ -46,13 +49,18 @@ void __init pagetable_init(void)
        pgd_init((unsigned long)swapper_pg_dir
                 + sizeof(pgd_t) * USER_PTRS_PER_PGD);
 
+#ifdef FIXADDR_START
        pgd_base = swapper_pg_dir;
 
        /*
         * Fixed mappings:
         */
-       vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
-       fixrange_init(vaddr, vaddr + FIXADDR_SIZE, pgd_base);
+       vaddr = __fix_to_virt(__end_of_fixed_addresses - 1);
+       /* Calculate real end before alignment. */
+       vend = vaddr + FIXADDR_SIZE;
+       vaddr = vaddr & PMD_MASK;
+       fixrange_init(vaddr, vend, pgd_base);
+#endif
 
 #ifdef CONFIG_HIGHMEM
        /*
index e8adc0069d66f17fcc6e27915fa1d5eb8e8258a6..a6ae0f12902c30fbbe57ea3a7c9ea22346a8fd9b 100644 (file)
@@ -107,5 +107,5 @@ void __init pagetable_init(void)
         * Fixed mappings:
         */
        vaddr = __fix_to_virt(__end_of_fixed_addresses - 1) & PMD_MASK;
-       fixrange_init(vaddr, vaddr + FIXADDR_SIZE, pgd_base);
+       fixrange_init(vaddr, 0, pgd_base);
 }
index 3ede85fac088c747bdbd9a89c0907b22f3413e03..a29a0d961eae4528250a44e8cf4de32d661d7507 100644 (file)
@@ -7,6 +7,7 @@
 #include <linux/mm.h>
 
 #include <asm/mipsregs.h>
+#include <asm/gcmpregs.h>
 #include <asm/bcache.h>
 #include <asm/cacheops.h>
 #include <asm/page.h>
  */
 static void mips_sc_wback_inv(unsigned long addr, unsigned long size)
 {
-       __sync();
+       if (!cpu_has_cm2)
+               __sync();
        blast_scache_range(addr, addr + size);
+       if (cpu_has_cm2_l2sync)
+               *(unsigned long *)(_gcmp_base + GCMP_L2SYNC_OFFSET) = 0;
 }
 
 /*
@@ -74,8 +78,10 @@ static inline int mips_sc_is_activated(struct cpuinfo_mips *c)
        /* Check the bypass bit (L2B) */
        switch (c->cputype) {
        case CPU_34K:
-       case CPU_74K:
        case CPU_1004K:
+       case CPU_74K:
+       case CPU_PROAPTIV:      /* proAptiv has no L2B capability, but ... */
+       case CPU_INTERAPTIV:
        case CPU_BMIPS5000:
                if (config2 & (1 << 12))
                        return 0;
@@ -139,6 +145,7 @@ int __cpuinit mips_sc_init(void)
        if (found) {
                mips_sc_enable();
                bcops = &mips_sc_ops;
-       }
+       } else
+               cpu_data[0].options &= ~MIPS_CPU_CM2_L2SYNC;
        return found;
 }
index c643de4c473a8d67115c7f0d304ebe1dc1e8c4ce..4ffebac04f359b58fed9173a06765b29e370d331 100644 (file)
@@ -27,7 +27,8 @@ extern void build_tlb_refill_handler(void);
  * Make sure all entries differ.  If they're not different
  * MIPS32 will take revenge ...
  */
-#define UNIQUE_ENTRYHI(idx) (CKSEG0 + ((idx) << (PAGE_SHIFT + 1)))
+#define UNIQUE_ENTRYHI(idx) (cpu_has_tlbinv ? ((CKSEG0 + ((idx) << (PAGE_SHIFT + 1))) | MIPS_EHINV) : \
+                            (CKSEG0 + ((idx) << (PAGE_SHIFT + 1))))
 
 /* Atomicity and interruptability */
 #ifdef CONFIG_MIPS_MT_SMTC
@@ -72,6 +73,7 @@ void local_flush_tlb_all(void)
        unsigned long flags;
        unsigned long old_ctx;
        int entry;
+       int ftlbhighset;
 
        ENTER_CRITICAL(flags);
        /* Save old context and create impossible VPN2 value */
@@ -82,14 +84,29 @@ void local_flush_tlb_all(void)
        entry = read_c0_wired();
 
        /* Blast 'em all away. */
-       while (entry < current_cpu_data.tlbsize) {
-               /* Make sure all entries differ. */
-               write_c0_entryhi(UNIQUE_ENTRYHI(entry));
-               write_c0_index(entry);
-               mtc0_tlbw_hazard();
-               tlb_write_indexed();
-               entry++;
-       }
+       if (cpu_has_tlbinv) {
+               if (current_cpu_data.tlbsizevtlb) {
+                       write_c0_index(0);
+                       mtc0_tlbw_hazard();
+                       tlbinvf();  /* invalidate VTLB */
+               }
+               ftlbhighset = current_cpu_data.tlbsizevtlb + current_cpu_data.tlbsizeftlbsets;
+               for (entry = current_cpu_data.tlbsizevtlb;
+                    entry < ftlbhighset;
+                    entry++) {
+                       write_c0_index(entry);
+                       mtc0_tlbw_hazard();
+                       tlbinvf();  /* invalidate one FTLB set */
+               }
+       } else
+               while (entry < current_cpu_data.tlbsize) {
+                       /* Make sure all entries differ. */
+                       write_c0_entryhi(UNIQUE_ENTRYHI(entry));
+                       write_c0_index(entry);
+                       mtc0_tlbw_hazard();
+                       tlb_write_indexed();
+                       entry++;
+               }
        tlbw_use_hazard();
        write_c0_entryhi(old_ctx);
        FLUSH_ITLB;
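A worked example of the tlbinv path above, with hypothetical sizes (say a
core with tlbsizevtlb = 64, tlbsizeftlbsets = 128 and 4 ways per set):

	/*
	 * one tlbinvf at index 0     -> whole VTLB, 64 entries
	 * tlbinvf at indices 64..191 -> 128 FTLB sets x 4 ways = 512 entries
	 * total: 129 tlbinvf ops cover 576 entries, versus 576 individual
	 * tlb_write_indexed() operations on the legacy path.
	 */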
@@ -127,7 +144,8 @@ void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
                start = round_down(start, PAGE_SIZE << 1);
                end = round_up(end, PAGE_SIZE << 1);
                size = (end - start) >> (PAGE_SHIFT + 1);
-               if (size <= current_cpu_data.tlbsize/2) {
+               if ((current_cpu_data.tlbsizeftlbsets && (size <= current_cpu_data.tlbsize/8)) ||
+                   ((!current_cpu_data.tlbsizeftlbsets) && (size <= current_cpu_data.tlbsize/2))) {
                        int oldpid = read_c0_entryhi();
                        int newpid = cpu_asid(cpu, mm);
 
@@ -166,7 +184,8 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
        ENTER_CRITICAL(flags);
        size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
        size = (size + 1) >> 1;
-       if (size <= current_cpu_data.tlbsize / 2) {
+       if ((current_cpu_data.tlbsizeftlbsets && (size <= current_cpu_data.tlbsize/8)) ||
+           ((!current_cpu_data.tlbsizeftlbsets) && (size <= current_cpu_data.tlbsize/2))) {
                int pid = read_c0_entryhi();
 
                start &= (PAGE_MASK << 1);
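Note: both hunks above apply the same heuristic; a sketch as a hypothetical
helper (not in the patch):

	/* Flush page-by-page only while that beats a full flush. Cores with
	 * an FTLB use tlbsize/8, since tlbinvf makes the full flush much
	 * cheaper; VTLB-only cores keep the old tlbsize/2 bound.
	 */
	static inline int flush_tlb_by_page(unsigned long pages)
	{
		if (current_cpu_data.tlbsizeftlbsets)
			return pages <= current_cpu_data.tlbsize / 8;
		return pages <= current_cpu_data.tlbsize / 2;
	}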
index afeef93f81a79829ec564eaa8ddafebd4ed7e377..a26891cccbc67c27aa79272ffeabe9908f38cefa 100644 (file)
@@ -440,6 +440,7 @@ static void __cpuinit build_r3000_tlb_refill_handler(void)
                 (unsigned int)(p - tlb_handler));
 
        memcpy((void *)ebase, tlb_handler, 0x80);
+       local_flush_icache_range(ebase, ebase + 0x80);
 
        dump_handler("r3000_tlb_refill", (u32 *)ebase, 32);
 }
@@ -517,14 +518,21 @@ static void __cpuinit build_tlb_write_entry(u32 **p, struct uasm_label **l,
                 * but a number of cores do not have the hazard and
                 * using an ehb causes an expensive pipeline stall.
                 */
-               switch (current_cpu_type()) {
-               case CPU_M14KC:
-               case CPU_74K:
-                       break;
-
-               default:
-                       uasm_i_ehb(p);
-                       break;
+               if (cpu_has_mips_r2_exec_hazard) {
+                       switch (current_cpu_type()) {
+                       case CPU_M14KC:
+                       case CPU_M14KEC:
+                       case CPU_74K:
+                       case CPU_PROAPTIV:
+                       case CPU_INTERAPTIV:
+                       case CPU_VIRTUOSO:
+                       case CPU_P5600:
+                               break;
+
+                       default:
+                               uasm_i_ehb(p);
+                               break;
+                       }
                }
                tlbw(p);
                return;
@@ -973,9 +981,17 @@ build_get_pgde32(u32 **p, unsigned int tmp, unsigned int ptr)
 #endif
        uasm_i_mfc0(p, tmp, C0_BADVADDR); /* get faulting address */
        uasm_i_lw(p, ptr, uasm_rel_lo(pgdc), ptr);
+
+       if (cpu_has_mips32r2) {
+               uasm_i_ext(p, tmp, tmp, PGDIR_SHIFT, (32 - PGDIR_SHIFT));
+               uasm_i_ins(p, ptr, tmp, PGD_T_LOG2, (32 - PGDIR_SHIFT));
+               return;
+       }
+
        uasm_i_srl(p, tmp, tmp, PGDIR_SHIFT); /* get pgd only bits */
        uasm_i_sll(p, tmp, tmp, PGD_T_LOG2);
        uasm_i_addu(p, ptr, ptr, tmp); /* add in pgd offset */
+
 }
 
 #endif /* !CONFIG_64BIT */
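Note on the R2 fast path above: the ext/ins pair replaces the srl/sll/addu
sequence. Equivalent C, as a sketch with illustrative constants
(PGDIR_SHIFT = 22 and PGD_T_LOG2 = 2, so the 10-bit PGD index lands in bits
2..11, which the page-aligned PGD base leaves as zero):

	unsigned int index = badvaddr >> PGDIR_SHIFT;   /* ext tmp */
	ptr = (ptr & ~0xffcU) | (index << PGD_T_LOG2);  /* ins ptr */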
@@ -1008,6 +1024,17 @@ static void __cpuinit build_adjust_context(u32 **p, unsigned int ctx)
 
 static void __cpuinit build_get_ptep(u32 **p, unsigned int tmp, unsigned int ptr)
 {
+#ifndef CONFIG_64BIT
+       if (cpu_has_mips_r2) {
+               /* For MIPS32R2, PTE ptr offset is obtained from BadVAddr */
+               UASM_i_MFC0(p, tmp, C0_BADVADDR);
+               UASM_i_LW(p, ptr, 0, ptr);
+               uasm_i_ext(p, tmp, tmp, PAGE_SHIFT+1, PGDIR_SHIFT-PAGE_SHIFT-1);
+               uasm_i_ins(p, ptr, tmp, PTE_T_LOG2+1, PGDIR_SHIFT-PAGE_SHIFT-1);
+               return;
+       }
+#endif /* !CONFIG_64BIT */
+
        /*
         * Bug workaround for the Nevada. It seems as if under certain
         * circumstances the move from cp0_context might produce a
@@ -1440,6 +1467,7 @@ static void __cpuinit build_r4000_tlb_refill_handler(void)
                 final_len);
 
        memcpy((void *)ebase, final_handler, 0x100);
+       local_flush_icache_range(ebase, ebase + 0x100);
 
        dump_handler("r4000_tlb_refill", (u32 *)ebase, 64);
 }
index ff8caffd3266ea0a1deaaa1ee1093eac50b600b9..845d92fdecee64d049a5ec73215a588fe17af90c 100644 (file)
@@ -20,6 +20,7 @@
 #include <asm/traps.h>
 #include <asm/fw/fw.h>
 #include <asm/gcmpregs.h>
+#include <asm/cpcregs.h>
 #include <asm/mips-boards/generic.h>
 #include <asm/mips-boards/malta.h>
 
@@ -84,10 +85,15 @@ static void __init mips_nmi_setup(void)
        extern char except_vec_nmi;
 
        base = cpu_has_veic ?
+#ifndef CONFIG_EVA
                (void *)(CAC_BASE + 0xa80) :
                (void *)(CAC_BASE + 0x380);
+#else
+               (void *)(YAMON_BASE + 0xa80) :
+               (void *)(YAMON_BASE + 0x380);
+#endif
        memcpy(base, &except_vec_nmi, 0x80);
-       flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
+       local_flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
 }
 
 static void __init mips_ejtag_setup(void)
@@ -96,12 +102,18 @@ static void __init mips_ejtag_setup(void)
        extern char except_vec_ejtag_debug;
 
        base = cpu_has_veic ?
+#ifndef CONFIG_EVA
                (void *)(CAC_BASE + 0xa00) :
                (void *)(CAC_BASE + 0x300);
+#else
+               (void *)(YAMON_BASE + 0xa00) :
+               (void *)(YAMON_BASE + 0x300);
+#endif
        memcpy(base, &except_vec_ejtag_debug, 0x80);
-       flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
+       local_flush_icache_range((unsigned long)base, (unsigned long)base + 0x80);
 }
 
+void __init prom_mem_check(int niocu);
 extern struct plat_smp_ops msmtc_smp_ops;
 
 void __init prom_init(void)
@@ -230,9 +242,39 @@ mips_pci_controller:
                          MSC01_PCI_SWAP_BYTESWAP << MSC01_PCI_SWAP_MEM_SHF |
                          MSC01_PCI_SWAP_BYTESWAP << MSC01_PCI_SWAP_BAR0_SHF);
 #endif
-               /* Fix up target memory mapping.  */
+               /* Fix up target memory mapping. */
+#ifndef CONFIG_EVA
                MSC_READ(MSC01_PCI_BAR0, mask);
                MSC_WRITE(MSC01_PCI_P2SCMSKL, mask & MSC01_PCI_BAR0_SIZE_MSK);
+#else
+#ifdef CONFIG_EVA_OLD_MALTA_MAP
+               /* Classic (old) Malta memory map:
+                  Set up the Malta maximum (2GB) memory for PCI DMA in the
+                  host bridge in transparent addressing mode, starting at
+                  0x80000000. Don't trust the current register contents. */
+               mask = 0x80000008;
+               MSC_WRITE(MSC01_PCI_BAR0, mask);
+
+               mask = 0x80000000;
+               MSC_WRITE(MSC01_PCI_HEAD4, mask);
+               MSC_WRITE(MSC01_PCI_P2SCMSKL, mask);
+               MSC_WRITE(MSC01_PCI_P2SCMAPL, mask);
+#else
+               /* New Malta memory map:
+                  Set up the Malta maximum memory (2GB) for PCI DMA in the
+                  host bridge in transparent addressing mode, starting at
+                  0x00000000. Don't trust the current register contents. */
+               mask = 0x80000008;
+               MSC_WRITE(MSC01_PCI_BAR0, mask);
+
+               mask = 0x00000000;
+               MSC_WRITE(MSC01_PCI_HEAD4, mask);
+               mask = 0x80000000;
+               MSC_WRITE(MSC01_PCI_P2SCMSKL, mask);
+               mask = 0x00000000;
+               MSC_WRITE(MSC01_PCI_P2SCMAPL, mask);
+#endif
+#endif
 
                /* Don't handle target retries indefinitely.  */
                if ((data & MSC01_PCI_CFG_MAXRTRY_MSK) ==
@@ -267,10 +309,21 @@ mips_pci_controller:
 #ifdef CONFIG_SERIAL_8250_CONSOLE
        console_config();
 #endif
+#ifdef CONFIG_MIPS_CMP
        /* Early detection of CMP support */
-       if (gcmp_probe(GCMP_BASE_ADDR, GCMP_ADDRSPACE_SZ))
+       if ((mips_revision_sconid != MIPS_REVISION_SCON_ROCIT) &&
+           (mips_revision_sconid != MIPS_REVISION_SCON_GT64120)) {
+               gcmp_present = 0;
+               printk("GCMP NOT present\n");
+       } else if (gcmp_probe(GCMP_BASE_ADDR_MALTA, GCMP_ADDRSPACE_SZ_MALTA)) {
+               cpc_probe(CPC_BASE_ADDR_MALTA, CPC_ADDRSPACE_SZ_MALTA);
+#if defined(CONFIG_EVA) && !defined(CONFIG_EVA_OLD_MALTA_MAP)
+               prom_mem_check(gcmp_niocu());
+#endif
                if (!register_cmp_smp_ops())
                        return;
+       }
+#endif
 
        if (!register_vsmp_smp_ops())
                return;
index 0a1339ac3ec8f8e6c3d9be6c4dc3b272d7044f98..2ac4f5c7df78bbd875b881cdf917749b5c8fa352 100644 (file)
@@ -46,9 +46,7 @@
 #include <asm/gcmpregs.h>
 #include <asm/setup.h>
 
-int gcmp_present = -1;
 static unsigned long _msc01_biu_base;
-static unsigned long _gcmp_base;
 static unsigned int ipi_map[NR_CPUS];
 
 static DEFINE_RAW_SPINLOCK(mips_irq_lock);
@@ -417,44 +415,6 @@ static struct gic_intr_map gic_intr_map[GIC_NUM_INTRS] = {
 };
 #undef X
 
-/*
- * GCMP needs to be detected before any SMP initialisation
- */
-int __init gcmp_probe(unsigned long addr, unsigned long size)
-{
-       if (mips_revision_sconid != MIPS_REVISION_SCON_ROCIT) {
-               gcmp_present = 0;
-               return gcmp_present;
-       }
-
-       if (gcmp_present >= 0)
-               return gcmp_present;
-
-       _gcmp_base = (unsigned long) ioremap_nocache(GCMP_BASE_ADDR, GCMP_ADDRSPACE_SZ);
-       _msc01_biu_base = (unsigned long) ioremap_nocache(MSC01_BIU_REG_BASE, MSC01_BIU_ADDRSPACE_SZ);
-       gcmp_present = (GCMPGCB(GCMPB) & GCMP_GCB_GCMPB_GCMPBASE_MSK) == GCMP_BASE_ADDR;
-
-       if (gcmp_present)
-               pr_debug("GCMP present\n");
-       return gcmp_present;
-}
-
-/* Return the number of IOCU's present */
-int __init gcmp_niocu(void)
-{
-  return gcmp_present ?
-    (GCMPGCB(GC) & GCMP_GCB_GC_NUMIOCU_MSK) >> GCMP_GCB_GC_NUMIOCU_SHF :
-    0;
-}
-
-/* Set GCMP region attributes */
-void __init gcmp_setregion(int region, unsigned long base,
-                          unsigned long mask, int type)
-{
-       GCMPGCBn(CMxBASE, region) = base;
-       GCMPGCBn(CMxMASK, region) = mask | type;
-}
-
 #if defined(CONFIG_MIPS_MT_SMP)
 static void __init fill_ipi_map1(int baseintr, int cpu, int cpupin)
 {
@@ -471,7 +431,7 @@ static void __init fill_ipi_map(void)
 {
        int cpu;
 
-       for (cpu = 0; cpu < NR_CPUS; cpu++) {
+       for (cpu = 0; cpu < nr_cpu_ids; cpu++) {
                fill_ipi_map1(gic_resched_int_base, cpu, GIC_CPU_INT1);
                fill_ipi_map1(gic_call_int_base, cpu, GIC_CPU_INT2);
        }
@@ -493,7 +453,8 @@ void __init arch_init_irq(void)
 
        if (gcmp_present)  {
                GCMPGCB(GICBA) = GIC_BASE_ADDR | GCMP_GCB_GICBA_EN_MSK;
-               gic_present = 1;
+               if (GCMPGCB(GICBA) & GCMP_GCB_GICBA_EN_MSK)
+                       gic_present = 1;
        } else {
                if (mips_revision_sconid == MIPS_REVISION_SCON_ROCIT) {
                        _msc01_biu_base = (unsigned long)
@@ -572,12 +533,11 @@ void __init arch_init_irq(void)
                /* FIXME */
                int i;
 #if defined(CONFIG_MIPS_MT_SMP)
-               gic_call_int_base = GIC_NUM_INTRS - NR_CPUS;
-               gic_resched_int_base = gic_call_int_base - NR_CPUS;
+               gic_call_int_base = GIC_NUM_INTRS -
+                       (NR_CPUS - nr_cpu_ids) * 2 - nr_cpu_ids;
+               gic_resched_int_base = gic_call_int_base - nr_cpu_ids;
                fill_ipi_map();
 #endif
-               gic_init(GIC_BASE_ADDR, GIC_ADDRSPACE_SZ, gic_intr_map,
-                               ARRAY_SIZE(gic_intr_map), MIPS_GIC_IRQ_BASE);
                if (!gcmp_present) {
                        /* Enable the GIC */
                        i = REG(_msc01_biu_base, MSC01_SC_CFG);
@@ -585,24 +545,27 @@ void __init arch_init_irq(void)
                                (i | (0x1 << MSC01_SC_CFG_GICENA_SHF));
                        pr_debug("GIC Enabled\n");
                }
+               gic_init(GIC_BASE_ADDR, GIC_ADDRSPACE_SZ, gic_intr_map,
+                               ARRAY_SIZE(gic_intr_map), MIPS_GIC_IRQ_BASE);
 #if defined(CONFIG_MIPS_MT_SMP)
                /* set up ipi interrupts */
                if (cpu_has_vint) {
                        set_vi_handler(MIPSCPU_INT_IPI0, malta_ipi_irqdispatch);
                        set_vi_handler(MIPSCPU_INT_IPI1, malta_ipi_irqdispatch);
                }
-               /* Argh.. this really needs sorting out.. */
-               printk("CPU%d: status register was %08x\n", smp_processor_id(), read_c0_status());
-               write_c0_status(read_c0_status() | STATUSF_IP3 | STATUSF_IP4);
-               printk("CPU%d: status register now %08x\n", smp_processor_id(), read_c0_status());
-               write_c0_status(0x1100dc00);
-               printk("CPU%d: status register frc %08x\n", smp_processor_id(), read_c0_status());
-               for (i = 0; i < NR_CPUS; i++) {
+               for (i = 0; i < nr_cpu_ids; i++) {
                        arch_init_ipiirq(MIPS_GIC_IRQ_BASE +
                                         GIC_RESCHED_INT(i), &irq_resched);
                        arch_init_ipiirq(MIPS_GIC_IRQ_BASE +
                                         GIC_CALL_INT(i), &irq_call);
                }
+               set_c0_status(mips_smp_c0_status_mask |
+                             (0x100 << GIC_MIPS_CPU_IPI_RESCHED_IRQ) |
+                             (0x100 << GIC_MIPS_CPU_IPI_CALL_IRQ));
+               back_to_back_c0_hazard();
+               printk("CPU%d: status register %08x\n", smp_processor_id(), read_c0_status());
+               mips_smp_c0_status_mask |= ((0x100 << GIC_MIPS_CPU_IPI_RESCHED_IRQ) |
+                                           (0x100 << GIC_MIPS_CPU_IPI_CALL_IRQ));
 #endif
        } else {
 #if defined(CONFIG_MIPS_MT_SMP)
@@ -619,6 +582,8 @@ void __init arch_init_irq(void)
                        }
                        cpu_ipi_resched_irq = MIPS_CPU_IRQ_BASE + MIPS_CPU_IPI_RESCHED_IRQ;
                        cpu_ipi_call_irq = MIPS_CPU_IRQ_BASE + MIPS_CPU_IPI_CALL_IRQ;
+                       mips_smp_c0_status_mask |= ((0x100 << MIPS_CPU_IPI_RESCHED_IRQ) |
+                                                   (0x100 << MIPS_CPU_IPI_CALL_IRQ));
                }
                arch_init_ipiirq(cpu_ipi_resched_irq, &irq_resched);
                arch_init_ipiirq(cpu_ipi_call_irq, &irq_call);
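A worked example of the GIC IPI vector arithmetic above, with hypothetical
values (GIC_NUM_INTRS = 256, NR_CPUS = 8, nr_cpu_ids = 4):

	/*
	 * gic_call_int_base    = 256 - (8 - 4) * 2 - 4 = 244 (ints 244..247)
	 * gic_resched_int_base = 244 - 4               = 240 (ints 240..243)
	 * Only nr_cpu_ids call plus nr_cpu_ids resched vectors are wired up,
	 * instead of NR_CPUS of each as before.
	 */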
index 1f73d63e92a765d3ab1d829244a19e57dab8bd8e..a3503b654f94c46ab0a6cef386f7f7c07698974f 100644 (file)
 
 static fw_memblock_t mdesc[FW_MAX_MEMBLOCKS];
 
-/* determined physical memory size, not overridden by command line args         */
+#ifdef DEBUG
+static char *mtypes[3] = {
+       "Dont use memory",
+       "YAMON PROM memory",
+       "Free memmory",
+};
+#endif
+
+/* determined physical memory size, not overridden by command line args  */
 unsigned long physical_memsize = 0L;
+static unsigned int newMapType;
 
-fw_memblock_t * __init fw_getmdesc(void)
+static inline fw_memblock_t * __init prom_getmdesc(void)
 {
-       char *memsize_str, *ptr;
-       unsigned int memsize;
+       char *memsize_str;
+       char *ememsize_str;
+       unsigned long memsize = 0;
+       unsigned long ememsize = 0;
+       char *ptr;
        static char cmdline[COMMAND_LINE_SIZE] __initdata;
-       long val;
-       int tmp;
 
        /* otherwise look in the environment */
        memsize_str = fw_getenv("memsize");
-       if (!memsize_str) {
-               pr_warn("memsize not set in YAMON, set to default (32Mb)\n");
+       pr_debug("prom_memsize = %s\n", memsize_str);
+       if (memsize_str)
+               memsize = simple_strtol(memsize_str, NULL, 0);
+       ememsize_str = fw_getenv("ememsize");
+	pr_debug("YAMON ememsize = %s\n", ememsize_str);
+       if (ememsize_str)
+               ememsize = simple_strtol(ememsize_str, NULL, 0);
+
+	if (!memsize && !ememsize) {
+		pr_warn("memsize not set in boot PROM, using default (32MB)\n");
                physical_memsize = 0x02000000;
        } else {
-               tmp = kstrtol(memsize_str, 0, &val);
-               physical_memsize = (unsigned long)val;
+               physical_memsize = ememsize;
+               if (!physical_memsize)
+                       physical_memsize = memsize;
        }
 
 #ifdef CONFIG_CPU_BIG_ENDIAN
@@ -48,26 +72,54 @@ fw_memblock_t * __init fw_getmdesc(void)
        physical_memsize -= PAGE_SIZE;
 #endif
 
+       memsize = 0;
        /* Check the command line for a memsize directive that overrides
           the physical/default amount */
        strcpy(cmdline, arcs_cmdline);
-       ptr = strstr(cmdline, "memsize=");
-       if (ptr && (ptr != cmdline) && (*(ptr - 1) != ' '))
-               ptr = strstr(ptr, " memsize=");
-
-       if (ptr)
-               memsize = memparse(ptr + 8, &ptr);
-       else
+       ptr = strstr(cmdline, " memsize=");
+       if (ptr && (ptr != cmdline))
+               memsize = memparse(ptr + 9, &ptr);
+       ptr = strstr(cmdline, " ememsize=");
+       if (ptr && (ptr != cmdline))
+               memsize = memparse(ptr + 10, &ptr);
+       if (!memsize) {
+               ptr = strstr(cmdline, "memsize=");
+               if (ptr && (ptr != cmdline))
+                       memsize = memparse(ptr + 8, &ptr);
+               ptr = strstr(cmdline, "ememsize=");
+               if (ptr && (ptr != cmdline))
+                       memsize = memparse(ptr + 9, &ptr);
+       }
+       if (!memsize)
                memsize = physical_memsize;
 
+	if (memsize == 0x10000000 && !ememsize && (!ptr || ptr == cmdline)) {
+		pr_warn("YAMON reports memsize=256M but does not report the ememsize option\n");
+		pr_warn("If more than 256MB of memory is installed, upgrade YAMON or pass memsize=XXXM on the command line\n");
+	}
+	new_map_type = *((unsigned int *)CKSEG1ADDR(0xbf403004));
+	pr_info("System Controller register = %08x\n", new_map_type);
+	new_map_type &= 0x100;	/* extract the memory map type bit */
+#ifdef CONFIG_EVA_OLD_MALTA_MAP
+	if (new_map_type)
+		panic("Malta board has the new memory map layout but the kernel is configured for the legacy map");
+#endif
+#if !defined(CONFIG_PHYS_ADDR_T_64BIT) && !defined(CONFIG_HIGHMEM)
+	/*
+	 * Don't use the last 64KB - it is reserved so that address
+	 * arithmetic in macros cannot overflow.  The HIGHMEM lowmem
+	 * map is assumed to follow this rule too.
+	 */
+       if (((unsigned long long)PHYS_OFFSET + (unsigned long long)memsize) > 0xffff0000ULL)
+               memsize = 0xffff0000ULL - (unsigned long long)PHYS_OFFSET;
+#endif
+
        memset(mdesc, 0, sizeof(mdesc));
 
        mdesc[0].type = fw_dontuse;
-       mdesc[0].base = 0x00000000;
+       mdesc[0].base = PHYS_OFFSET;
        mdesc[0].size = 0x00001000;
 
        mdesc[1].type = fw_code;
-       mdesc[1].base = 0x00001000;
+       mdesc[1].base = mdesc[0].base + 0x00001000UL;
        mdesc[1].size = 0x000ef000;
 
        /*
@@ -78,21 +130,53 @@ fw_memblock_t * __init fw_getmdesc(void)
         * devices.
         */
        mdesc[2].type = fw_dontuse;
-       mdesc[2].base = 0x000f0000;
+       mdesc[2].base = mdesc[0].base + 0x000f0000UL;
        mdesc[2].size = 0x00010000;
 
        mdesc[3].type = fw_dontuse;
-       mdesc[3].base = 0x00100000;
-       mdesc[3].size = CPHYSADDR(PFN_ALIGN((unsigned long)&_end)) -
-               mdesc[3].base;
+       mdesc[3].base = mdesc[0].base + 0x00100000UL;
+       mdesc[3].size = CPHYSADDR(PFN_ALIGN((unsigned long)&_end)) - 0x00100000UL;
+
+       /* this code assumes that PAGE_OFFSET == 0 and PHYS_OFFSET == n*512MB */
+       if ((memsize > 0x20000000) && !PHYS_OFFSET) {
+               /* first 256MB */
+               mdesc[4].type = fw_free;
+               mdesc[4].base = mdesc[0].base + CPHYSADDR(PFN_ALIGN(&_end));
+               mdesc[4].size = mdesc[0].base + 0x10000000 - CPHYSADDR(mdesc[4].base);
+
+               /* I/O hole ... */
+
+               /* the rest of memory (256MB behind hole is lost) */
+               mdesc[5].type = fw_free;
+               mdesc[5].base = mdesc[0].base + 0x20000000;
+               mdesc[5].size = memsize - 0x20000000;
+       } else {
+               /* limit to 256MB, exclude I/O hole */
+               if (!PHYS_OFFSET)
+			memsize = (memsize > 0x10000000) ? 0x10000000 : memsize;
 
-       mdesc[4].type = fw_free;
-       mdesc[4].base = CPHYSADDR(PFN_ALIGN(&_end));
-       mdesc[4].size = memsize - mdesc[4].base;
+               mdesc[4].type = fw_free;
+               mdesc[4].base = mdesc[0].base + CPHYSADDR(PFN_ALIGN(&_end));
+               mdesc[4].size = memsize - CPHYSADDR(mdesc[4].base);
+       }
 
        return &mdesc[0];
 }
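The size selection above follows a fixed precedence: a memsize=/ememsize= option on the kernel command line wins, then the YAMON "ememsize" environment variable, then "memsize", and finally the 32MB default. A condensed sketch of that precedence under the same assumptions; pick_memsize() is illustrative only, not code from the patch:

/* Illustrative only: the precedence implemented by prom_getmdesc(). */
static unsigned long pick_memsize(unsigned long cmdline_size,
				  unsigned long env_ememsize,
				  unsigned long env_memsize)
{
	if (cmdline_size)	/* memsize=/ememsize= boot option */
		return cmdline_size;
	if (env_ememsize)	/* extended size reported by YAMON */
		return env_ememsize;
	if (env_memsize)
		return env_memsize;
	return 0x02000000;	/* default: 32MB */
}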
 
+#ifdef CONFIG_EVA
+#ifndef CONFIG_EVA_OLD_MALTA_MAP
+void __init prom_mem_check(int niocu)
+{
+	if (!new_map_type && niocu && mdesc[5].size) {
+		pr_warn("Malta board has the legacy memory map + IOCU, but the kernel is configured for the new map layout; restricting memsize to 256MB\n");
+		boot_mem_map.nr_map--;
+	}
+}
+#endif /* !CONFIG_EVA_OLD_MALTA_MAP */
+#endif /* CONFIG_EVA */
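For memsize > 512MB with PHYS_OFFSET == 0, the descriptors above split free RAM around the Malta I/O hole, and prom_mem_check() drops the high block again when a legacy-map board carries an IOCU. A sketch of that split under the same assumptions; the struct and helper are illustrative, not part of the patch:

struct mem_range { unsigned long base, size; };

/* Illustrative only: the >512MB split built into mdesc[4]/mdesc[5]. */
static void malta_split_memory(unsigned long memsize,
			       unsigned long kernel_end,
			       struct mem_range *low,
			       struct mem_range *high)
{
	low->base = kernel_end;			/* free RAM after the kernel image */
	low->size = 0x10000000 - kernel_end;	/* up to the 256MB I/O hole */
	high->base = 0x20000000;		/* RAM resumes above 512MB... */
	high->size = memsize - 0x20000000;	/* ...256MB behind the hole is lost */
}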
+
 static int __init fw_memtype_classify(unsigned int type)
 {
        switch (type) {
@@ -105,10 +189,30 @@ static int __init fw_memtype_classify(unsigned int type)
        }
 }
 
+fw_memblock_t * __init fw_getmdesc(void)
+{
+	return prom_getmdesc();
+}
+
 void __init fw_meminit(void)
 {
        fw_memblock_t *p;
 
+#ifdef DEBUG
+	{
+		int i = 0;
+
+		pr_debug("YAMON MEMORY DESCRIPTOR dump:\n");
+		p = fw_getmdesc();
+		while (p->size) {
+			pr_debug("[%d,%p]: base<%08lx> size<%x> type<%s>\n",
+				 i, p, p->base, p->size, mtypes[p->type]);
+			p++;
+			i++;
+		}
+	}
+#endif
        p = fw_getmdesc();
 
        while (p->size) {
index 37134ddfeaa5d43b26ac482e251523a42e404741..da8271cfa7e2c0667ed7ee8045964353192c3be6 100644 (file)
@@ -198,6 +198,12 @@ void __init mips_pcibios_init(void)
                MSC_READ(MSC01_PCI_SC2PMBASL, start);
                MSC_READ(MSC01_PCI_SC2PMMSKL, mask);
                MSC_READ(MSC01_PCI_SC2PMMAPL, map);
+#if defined(CONFIG_EVA) && !defined(CONFIG_EVA_OLD_MALTA_MAP)
+		/* Shift PCI devices to the upper 2GB to avoid a PCI bridge loop */
+               map |= 0xa0000000;
+               MSC_WRITE(MSC01_PCI_SC2PMMAPL, map);
+               MSC_READ(MSC01_PCI_SC2PMMAPL, map);
+#endif
                msc_mem_resource.start = start & mask;
                msc_mem_resource.end = (start & mask) | ~mask;
                msc_controller.mem_offset = (start & mask) - (map & mask);
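The remap above changes only the bus-side base of the MSC01 window; the CPU-side window (start & mask) is untouched, so CPU physical and PCI bus addresses differ by the fixed mem_offset computed on the last line. A sketch of that translation; msc_phys_to_bus() is a hypothetical name, not part of the patch:

/*
 * Illustrative only: with mem_offset = (start & mask) - (map & mask),
 * a CPU physical address inside the window corresponds to the PCI bus
 * address (phys - mem_offset).
 */
static unsigned long msc_phys_to_bus(unsigned long phys, unsigned long start,
				     unsigned long mask, unsigned long map)
{
	return phys - ((start & mask) - (map & mask));
}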
index 132f8663825e4b29da00e44b48b32407cfba871b..e1dd1c1d3fdeed9f5214dc18ec4cb7f2c5593279 100644 (file)
@@ -47,6 +47,7 @@
 static struct plat_serial8250_port uart8250_data[] = {
        SMC_PORT(0x3F8, 4),
        SMC_PORT(0x2F8, 3),
+#ifndef CONFIG_MIPS_CMP
        {
                .mapbase        = 0x1f000900,   /* The CBUS UART */
                .irq            = MIPS_CPU_IRQ_BASE + MIPSCPU_INT_MB2,
@@ -55,6 +56,7 @@ static struct plat_serial8250_port uart8250_data[] = {
                .flags          = CBUS_UART_FLAGS,
                .regshift       = 3,
        },
+#endif
        { },
 };
 
index 329420536241b5b70743d5ef7ce56c02a4087aa3..d627d4b2b47f4f456addef452e77c16513238bd7 100644 (file)
@@ -1,33 +1,18 @@
 /*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License.  See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
  * Carsten Langgaard, carstenl@mips.com
  * Copyright (C) 1999,2000 MIPS Technologies, Inc.  All rights reserved.
- *
- * ########################################################################
- *
- *  This program is free software; you can distribute it and/or modify it
- *  under the terms of the GNU General Public License (Version 2) as
- *  published by the Free Software Foundation.
- *
- *  This program is distributed in the hope it will be useful, but WITHOUT
- *  ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- *  FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
- *  for more details.
- *
- *  You should have received a copy of the GNU General Public License along
- *  with this program; if not, write to the Free Software Foundation, Inc.,
- *  59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
- *
- * ########################################################################
- *
- * Reset the MIPS boards.
- *
  */
-#include <linux/init.h>
+#include <linux/io.h>
 #include <linux/pm.h>
 
-#include <asm/io.h>
 #include <asm/reboot.h>
-#include <asm/mips-boards/generic.h>
+
+#define SOFTRES_REG    0x1f000500
+#define GORESET                0x42
 
 static void mips_machine_restart(char *command)
 {
@@ -45,7 +30,6 @@ static void mips_machine_halt(void)
        __raw_writel(GORESET, softres_reg);
 }
 
-
 static int __init mips_reboot_setup(void)
 {
        _machine_restart = mips_machine_restart;
@@ -54,5 +38,4 @@ static int __init mips_reboot_setup(void)
 
        return 0;
 }
-
 arch_initcall(mips_reboot_setup);
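The reset path boils down to a single register write: GORESET (0x42) into the board's software-reset register at physical address SOFTRES_REG. A sketch of the sequence, assuming softres_reg is obtained via ioremap() as in the surrounding code; malta_soft_reset() is illustrative only:

/* Illustrative only: map the reset register and trigger a reset. */
static void malta_soft_reset(void)
{
	void __iomem *softres_reg = ioremap(SOFTRES_REG, sizeof(unsigned int));

	__raw_writel(GORESET, softres_reg);	/* the board resets here */
}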
index c72a069367819d1ca8c91532862dbfbd890de1b1..08ae6b2773c3b104826d69e29d9d029a80866f56 100644 (file)
@@ -130,8 +130,9 @@ static int __init plat_enable_iocoherency(void)
        } else if (gcmp_niocu() != 0) {
                /* Nothing special needs to be done to enable coherency */
                pr_info("CMP IOCU detected\n");
-               if ((*(unsigned int *)0xbf403000 & 0x81) != 0x81) {
-                       pr_crit("IOCU OPERATION DISABLED BY SWITCH - DEFAULTING TO SW IO COHERENCY\n");
+               if ((*(unsigned int *)CKSEG1ADDR(0xbf403000) & 0x81) != 0x81) {
+                       pr_crit("IOCU OPERATION DISABLED BY SWITCH"
+                               " - DEFAULTING TO SW IO COHERENCY\n");
                        return 0;
                }
                supported = 1;
@@ -151,13 +152,17 @@ static void __init plat_setup_iocoherency(void)
        if (plat_enable_iocoherency()) {
                if (coherentio == 0)
                        pr_info("Hardware DMA cache coherency disabled\n");
-               else
+               else {
+                       coherentio = 1;
                        pr_info("Hardware DMA cache coherency enabled\n");
+               }
        } else {
                if (coherentio == 1)
                        pr_info("Hardware DMA cache coherency unsupported, but enabled from command line!\n");
-               else
+               else {
+                       coherentio = 0;
                        pr_info("Software DMA cache coherency enabled\n");
+               }
        }
 #else
        if (!plat_enable_iocoherency())
@@ -243,10 +248,143 @@ static void __init bonito_quirks_setup(void)
 #endif
 }
 
+#ifdef CONFIG_EVA
+extern unsigned int mips_cca;
+
+void __init plat_eva_setup(void)
+{
+       unsigned int val;
+
+#ifdef CONFIG_EVA_OLD_MALTA_MAP
+
+#ifdef CONFIG_EVA_3GB
+       val = ((MIPS_SEGCFG_UK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (2 << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+       val |= (((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+       write_c0_segctl0(val);
+
+       val = ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+#else /* !CONFIG_EVA_3GB */
+       val = ((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+       val |= (((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+       write_c0_segctl0(val);
+
+       val = ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (2 << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+#endif /* CONFIG_EVA_3GB */
+#ifdef CONFIG_SMP
+       val |= (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+#else
+       val |= (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (4 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+#endif
+       write_c0_segctl1(val);
+
+       val = ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (6 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+       val |= (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (4 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+
+#else /* !CONFIG_EVA_OLD_MALTA_MAP */
+
+#ifdef CONFIG_EVA_3GB
+       val = ((MIPS_SEGCFG_UK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (2 << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+       val |= (((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+       write_c0_segctl0(val);
+
+       val = ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (6 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+       val |= (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (5 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+       write_c0_segctl1(val);
+
+       val = ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (3 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+       val |= (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (1 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+#else /* !CONFIG_EVA_3GB */
+       val = ((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+       val |= (((MIPS_SEGCFG_MK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+       write_c0_segctl0(val);
+
+       val = ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (2 << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+       val |= (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+       write_c0_segctl1(val);
+
+       val = ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (2 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT));
+       val |= (((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) |
+               (0 << MIPS_SEGCFG_PA_SHIFT) | (mips_cca << MIPS_SEGCFG_C_SHIFT) |
+               (1 << MIPS_SEGCFG_EU_SHIFT)) << 16);
+#endif /* CONFIG_EVA_3GB */
+
+#endif /* CONFIG_EVA_OLD_MALTA_MAP */
+
+       write_c0_segctl2(val);
+       back_to_back_c0_hazard();
+
+       val = read_c0_config5();
+	write_c0_config5(val | MIPS_CONF5_K | MIPS_CONF5_CV | MIPS_CONF5_EVA);
+       back_to_back_c0_hazard();
+
+       printk("Enhanced Virtual Addressing (EVA) active\n");
+}
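Each SegCtl register written above packs two 16-bit segment configurations, one per half-word, each combining the access mode (AM), physical base (PA), cacheability (C) and EU fields. A sketch of the packing using the MIPS_SEGCFG_* shifts from this patch; segcfg() itself is a hypothetical helper:

/* Illustrative only: pack one segment configuration half-word. */
static inline unsigned int segcfg(unsigned int am, unsigned int pa,
				  unsigned int c, unsigned int eu)
{
	return (am << MIPS_SEGCFG_AM_SHIFT) |
	       (pa << MIPS_SEGCFG_PA_SHIFT) |
	       (c << MIPS_SEGCFG_C_SHIFT) |
	       (eu << MIPS_SEGCFG_EU_SHIFT);
}

A 32-bit SegCtl register then holds two such configurations, e.g. write_c0_segctl0(segcfg(...) | (segcfg(...) << 16));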
+
+extern int gcmp_present;
+void BEV_overlay_segment(void);
+#endif
+
 void __init plat_mem_setup(void)
 {
        unsigned int i;
 
+#ifdef CONFIG_EVA
+#ifdef CONFIG_MIPS_CMP
+       if (gcmp_present)
+               BEV_overlay_segment();
+#endif
+
+	if (cpu_has_segments && cpu_has_eva) {
+		plat_eva_setup();
+	} else {
+		pr_err("cpu_has_segments=%ld cpu_has_eva=%ld\n",
+		       cpu_has_segments, cpu_has_eva);
+		panic("Kernel is built for EVA support but EVA or segment control registers were not found");
+	}
+#endif
+
        mips_pcibios_init();
 
        /* Request I/O space for devices used on the Malta board. */
index 658f437870562924bf0ad9d93b3aba2873bf9e5f..e4b317d414f112576bbd04012381ab6804dabf86 100644 (file)
                };
        };
 
-       chosen {
-               bootargs = "console=ttyS1,38400 rootdelay=10 root=/dev/sda3";
-       };
-
        memory {
                device_type = "memory";
                reg = <0x0 0x08000000>;
index af763e838fdde8f7b22e63e19f268889db2f025d..aa14b66c72c098b2aaf4850c117180fcf347f3fa 100644 (file)
@@ -86,12 +86,15 @@ int __init oprofile_arch_init(struct oprofile_operations *ops)
        case CPU_1004K:
        case CPU_74K:
        case CPU_LOONGSON1:
+       case CPU_INTERAPTIV:
        case CPU_SB1:
        case CPU_SB1A:
        case CPU_R10000:
        case CPU_R12000:
        case CPU_R14000:
        case CPU_XLR:
+       case CPU_VIRTUOSO:
+       case CPU_P5600:
                lmodel = &op_model_mipsxx_ops;
                break;
 
index e4b1140cdae060dca0de8bfdd6a5985b1429de58..811564aab262b6382402b1c2a89d5aa711bf33d9 100644 (file)
@@ -376,6 +376,22 @@ static int __init mipsxx_init(void)
                op_model_mipsxx_ops.cpu_type = "mips/74K";
                break;
 
+       case CPU_PROAPTIV:
+               op_model_mipsxx_ops.cpu_type = "mips/proAptiv";
+               break;
+
+       case CPU_INTERAPTIV:
+               op_model_mipsxx_ops.cpu_type = "mips/interAptiv";
+               break;
+
+       case CPU_P5600:
+               op_model_mipsxx_ops.cpu_type = "mips/P5600";
+               break;
+
+       case CPU_VIRTUOSO:
+               op_model_mipsxx_ops.cpu_type = "mips/Virtuoso";
+               break;
+
        case CPU_5KC:
                op_model_mipsxx_ops.cpu_type = "mips/5K";
                break;