Merge 'akpm' patch series
author Linus Torvalds <torvalds@linux-foundation.org>
Tue, 26 Jul 2011 04:00:19 +0000 (21:00 -0700)
committer Linus Torvalds <torvalds@linux-foundation.org>
Tue, 26 Jul 2011 04:00:19 +0000 (21:00 -0700)
* Merge akpm patch series: (122 commits)
  drivers/connector/cn_proc.c: remove unused local
  Documentation/SubmitChecklist: add RCU debug config options
  reiserfs: use hweight_long()
  reiserfs: use proper little-endian bitops
  pnpacpi: register disabled resources
  drivers/rtc/rtc-tegra.c: properly initialize spinlock
  drivers/rtc/rtc-twl.c: check return value of twl_rtc_write_u8() in twl_rtc_set_time()
  drivers/rtc: add support for Qualcomm PMIC8xxx RTC
  drivers/rtc/rtc-s3c.c: support clock gating
  drivers/rtc/rtc-mpc5121.c: add support for RTC on MPC5200
  init: skip calibration delay if previously done
  misc/eeprom: add eeprom access driver for digsy_mtc board
  misc/eeprom: add driver for microwire 93xx46 EEPROMs
  checkpatch.pl: update $logFunctions
  checkpatch: make utf-8 test --strict
  checkpatch.pl: add ability to ignore various messages
  checkpatch: add a "prefer __aligned" check
  checkpatch: validate signature styles and To: and Cc: lines
  checkpatch: add __rcu as a sparse modifier
  checkpatch: suggest using min_t or max_t
  ...

Did this as a merge because of (trivial) conflicts in
 - Documentation/feature-removal-schedule.txt
 - arch/xtensa/include/asm/uaccess.h
that were just easier to fix up in the merge than in the patch series.

Documentation/feature-removal-schedule.txt
MAINTAINERS
arch/xtensa/include/asm/uaccess.h
drivers/pnp/pnpacpi/rsparser.c
drivers/rtc/rtc-s3c.c
fs/hugetlbfs/inode.c
lib/vsprintf.c
mm/backing-dev.c
mm/shmem.c

diff --combined Documentation/feature-removal-schedule.txt
@@@ -184,7 -184,7 +184,7 @@@ Why:       /proc/<pid>/oom_adj allows userspa
  
        A much more powerful interface, /proc/<pid>/oom_score_adj, was
        introduced with the oom killer rewrite that allows users to increase or
-       decrease the badness() score linearly.  This interface will replace
+       decrease the badness score linearly.  This interface will replace
        /proc/<pid>/oom_adj.
  
        A warning will be emitted to the kernel log if an application uses this
@@@ -199,7 -199,7 +199,7 @@@ Files:     drivers/staging/cs5535_gpio/
  Check:        drivers/staging/cs5535_gpio/cs5535_gpio.c
  Why:  A newer driver replaces this; it is drivers/gpio/cs5535-gpio.c, and
        integrates with the Linux GPIO subsystem.  The old driver has been
 -      moved to staging, and will be removed altogether around 2.6.40.
 +      moved to staging, and will be removed altogether around 3.0.
        Please test the new driver, and ensure that the functionality you
        need and any bugfixes from the old driver are available in the new
        one.
@@@ -294,7 -294,7 +294,7 @@@ When:      The schedule was July 2008, but i
  Why:  The support code for the old firmware hurts code readability/maintainability
        and slightly hurts runtime performance. Bugfixes for the old firmware
        are not provided by Broadcom anymore.
 -Who:  Michael Buesch <mb@bu3sch.de>
 +Who:  Michael Buesch <m@bues.ch>
  
  ---------------------------
  
@@@ -430,7 -430,7 +430,7 @@@ Who:       Avi Kivity <avi@redhat.com
  ----------------------------
  
  What: iwlwifi 50XX module parameters
 -When: 2.6.40
 +When: 3.0
  Why:  The "..50" module parameters were used to configure 5000 series and
        up devices; a different set of module parameters with the same
        functionality is also available for 4965. Consolidate both sets in a single place.
@@@ -441,7 -441,7 +441,7 @@@ Who:       Wey-Yi Guy <wey-yi.w.guy@intel.com
  ----------------------------
  
  What: iwl4965 alias support
 -When: 2.6.40
 +When: 3.0
  Why:  Internal alias support has been present in module-init-tools for some
        time, the MODULE_ALIAS("iwl4965") boilerplate aliases can be removed
        with no impact.
@@@ -482,7 -482,7 +482,7 @@@ Who:       FUJITA Tomonori <fujita.tomonori@l
  ----------------------------
  
  What: iwlwifi disable_hw_scan module parameters
 -When: 2.6.40
 +When: 3.0
  Why:  Hardware scan is the preferred scanning method for iwlwifi
        devices. Remove software scan support for all the
        iwlwifi devices.
@@@ -493,7 -493,7 +493,7 @@@ Who:       Wey-Yi Guy <wey-yi.w.guy@intel.com
  
  What:   access to nfsd auth cache through sys_nfsservctl or '.' files
          in the 'nfsd' filesystem.
 -When:   2.6.40
 +When:   3.0
  Why:    This is a legacy interface which has been replaced by a more
          dynamic cache.  Continuing to maintain this interface is an
          unnecessary burden.
@@@ -518,22 -518,6 +518,6 @@@ Files:    net/netfilter/xt_connlimit.
  
  ----------------------------
  
- What: noswapaccount kernel command line parameter
- When: 3.0
- Why:  The original implementation of memsw feature enabled by
-       CONFIG_CGROUP_MEM_RES_CTLR_SWAP could be disabled by the noswapaccount
-       kernel parameter (introduced in 2.6.29-rc1). Later on, this decision
-       turned out to be not ideal because we cannot have the feature compiled
-       in and disabled by default and let only interested to enable it
-       (e.g. general distribution kernels might need it). Therefore we have
-       added swapaccount[=0|1] parameter (introduced in 2.6.37) which provides
-       the both possibilities. If we remove noswapaccount we will have
-       less command line parameters with the same functionality and we
-       can also cleanup the parameter handling a bit ().
- Who:  Michal Hocko <mhocko@suse.cz>
- ----------------------------
  What: ipt_addrtype match include file
  When: 2012
  Why:  superseded by xt_addrtype
@@@ -552,7 -536,7 +536,7 @@@ Who:       Jean Delvare <khali@linux-fr.org
  ----------------------------
  
  What: Support for UVCIOC_CTRL_ADD in the uvcvideo driver
 -When: 2.6.42
 +When: 3.2
  Why:  The information passed to the driver by this ioctl is now queried
        dynamically from the device.
  Who:  Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  ----------------------------
  
  What: Support for UVCIOC_CTRL_MAP_OLD in the uvcvideo driver
 -When: 2.6.42
 +When: 3.2
  Why:  Used only by applications compiled against older driver versions.
        Superseded by UVCIOC_CTRL_MAP which supports V4L2 menu controls.
  Who:  Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  ----------------------------
  
  What: Support for UVCIOC_CTRL_GET and UVCIOC_CTRL_SET in the uvcvideo driver
 -When: 2.6.42
 +When: 3.2
  Why:  Superseded by the UVCIOC_CTRL_QUERY ioctl.
  Who:  Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  
diff --combined MAINTAINERS
@@@ -696,7 -696,7 +696,7 @@@ T: git git://git.infradead.org/users/cb
  
  ARM/CIRRUS LOGIC EP93XX ARM ARCHITECTURE
  M:    Hartley Sweeten <hsweeten@visionengravers.com>
 -M:    Ryan Mallon <ryan@bluewatersys.com>
 +M:    Ryan Mallon <rmallon@gmail.com>
  L:    linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
  S:    Maintained
  F:    arch/arm/mach-ep93xx/
@@@ -1588,7 -1588,7 +1588,7 @@@ F:      Documentation/sound/alsa/Bt87x.tx
  F:    sound/pci/bt87x.c
  
  BT8XXGPIO DRIVER
 -M:    Michael Buesch <mb@bu3sch.de>
 +M:    Michael Buesch <m@bues.ch>
  W:    http://bu3sch.de/btgpio.php
  S:    Maintained
  F:    drivers/gpio/bt8xxgpio.c
@@@ -3012,7 -3012,7 +3012,7 @@@ F:      kernel/hrtimer.
  F:    kernel/time/clockevents.c
  F:    kernel/time/tick*.*
  F:    kernel/time/timer_*.c
- F:    include/linux/clockevents.h
+ F:    include/linux/clockchips.h
  F:    include/linux/hrtimer.h
  
  HIGH-SPEED SCC DRIVER FOR AX.25
@@@ -3960,13 -3960,6 +3960,13 @@@ L:    lm-sensors@lm-sensors.or
  S:    Maintained
  F:    drivers/hwmon/lm73.c
  
 +LM78 HARDWARE MONITOR DRIVER
 +M:    Jean Delvare <khali@linux-fr.org>
 +L:    lm-sensors@lm-sensors.org
 +S:    Maintained
 +F:    Documentation/hwmon/lm78
 +F:    drivers/hwmon/lm78.c
 +
  LM83 HARDWARE MONITOR DRIVER
  M:    Jean Delvare <khali@linux-fr.org>
  L:    lm-sensors@lm-sensors.org
@@@ -5917,7 -5910,7 +5917,7 @@@ S:      Maintaine
  F:    drivers/net/sonic.*
  
  SONICS SILICON BACKPLANE DRIVER (SSB)
 -M:    Michael Buesch <mb@bu3sch.de>
 +M:    Michael Buesch <m@bues.ch>
  L:    netdev@vger.kernel.org
  S:    Maintained
  F:    drivers/ssb/
diff --combined arch/xtensa/include/asm/uaccess.h
@@@ -17,7 -17,7 +17,8 @@@
  #define _XTENSA_UACCESS_H
  
  #include <linux/errno.h>
+ #include <linux/prefetch.h>
 +#include <asm/types.h>
  
  #define VERIFY_READ    0
  #define VERIFY_WRITE   1
@@@ -27,6 -27,7 +28,6 @@@
  #include <asm/current.h>
  #include <asm/asm-offsets.h>
  #include <asm/processor.h>
 -#include <asm/types.h>
  
  /*
   * These assembly macros mirror the C macros that follow below.  They
  #else /* __ASSEMBLY__ not defined */
  
  #include <linux/sched.h>
 -#include <asm/types.h>
  
  /*
   * The fs value determines whether argument validity checking should
diff --combined drivers/pnp/pnpacpi/rsparser.c
@@@ -509,15 -509,15 +509,15 @@@ static __init void pnpacpi_parse_dma_op
                                            struct acpi_resource_dma *p)
  {
        int i;
-       unsigned char map = 0, flags;
+       unsigned char map = 0, flags = 0;
  
        if (p->channel_count == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        for (i = 0; i < p->channel_count; i++)
                map |= 1 << p->channels[i];
  
-       flags = dma_flags(dev, p->type, p->bus_master, p->transfer);
+       flags |= dma_flags(dev, p->type, p->bus_master, p->transfer);
        pnp_register_dma_resource(dev, option_flags, map, flags);
  }
  
@@@ -527,17 -527,17 +527,17 @@@ static __init void pnpacpi_parse_irq_op
  {
        int i;
        pnp_irq_mask_t map;
-       unsigned char flags;
+       unsigned char flags = 0;
  
        if (p->interrupt_count == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        bitmap_zero(map.bits, PNP_IRQ_NR);
        for (i = 0; i < p->interrupt_count; i++)
                if (p->interrupts[i])
                        __set_bit(p->interrupts[i], map.bits);
  
-       flags = irq_flags(p->triggering, p->polarity, p->sharable);
+       flags |= irq_flags(p->triggering, p->polarity, p->sharable);
        pnp_register_irq_resource(dev, option_flags, &map, flags);
  }
  
@@@ -547,10 -547,10 +547,10 @@@ static __init void pnpacpi_parse_ext_ir
  {
        int i;
        pnp_irq_mask_t map;
-       unsigned char flags;
+       unsigned char flags = 0;
  
        if (p->interrupt_count == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        bitmap_zero(map.bits, PNP_IRQ_NR);
        for (i = 0; i < p->interrupt_count; i++) {
                }
        }
  
-       flags = irq_flags(p->triggering, p->polarity, p->sharable);
+       flags |= irq_flags(p->triggering, p->polarity, p->sharable);
        pnp_register_irq_resource(dev, option_flags, &map, flags);
  }
  
@@@ -575,10 -575,10 +575,10 @@@ static __init void pnpacpi_parse_port_o
        unsigned char flags = 0;
  
        if (io->address_length == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        if (io->io_decode == ACPI_DECODE_16)
-               flags = IORESOURCE_IO_16BIT_ADDR;
+               flags |= IORESOURCE_IO_16BIT_ADDR;
        pnp_register_port_resource(dev, option_flags, io->minimum, io->maximum,
                                   io->alignment, io->address_length, flags);
  }
@@@ -587,11 -587,13 +587,13 @@@ static __init void pnpacpi_parse_fixed_
                                        unsigned int option_flags,
                                        struct acpi_resource_fixed_io *io)
  {
+       unsigned char flags = 0;
        if (io->address_length == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        pnp_register_port_resource(dev, option_flags, io->address, io->address,
-                                  0, io->address_length, IORESOURCE_IO_FIXED);
+                                  0, io->address_length, flags | IORESOURCE_IO_FIXED);
  }
  
  static __init void pnpacpi_parse_mem24_option(struct pnp_dev *dev,
        unsigned char flags = 0;
  
        if (p->address_length == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        if (p->write_protect == ACPI_READ_WRITE_MEMORY)
-               flags = IORESOURCE_MEM_WRITEABLE;
+               flags |= IORESOURCE_MEM_WRITEABLE;
        pnp_register_mem_resource(dev, option_flags, p->minimum, p->maximum,
                                  p->alignment, p->address_length, flags);
  }
@@@ -616,10 -618,10 +618,10 @@@ static __init void pnpacpi_parse_mem32_
        unsigned char flags = 0;
  
        if (p->address_length == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        if (p->write_protect == ACPI_READ_WRITE_MEMORY)
-               flags = IORESOURCE_MEM_WRITEABLE;
+               flags |= IORESOURCE_MEM_WRITEABLE;
        pnp_register_mem_resource(dev, option_flags, p->minimum, p->maximum,
                                  p->alignment, p->address_length, flags);
  }
@@@ -631,10 -633,10 +633,10 @@@ static __init void pnpacpi_parse_fixed_
        unsigned char flags = 0;
  
        if (p->address_length == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        if (p->write_protect == ACPI_READ_WRITE_MEMORY)
-               flags = IORESOURCE_MEM_WRITEABLE;
+               flags |= IORESOURCE_MEM_WRITEABLE;
        pnp_register_mem_resource(dev, option_flags, p->address, p->address,
                                  0, p->address_length, flags);
  }
@@@ -655,18 -657,18 +657,18 @@@ static __init void pnpacpi_parse_addres
        }
  
        if (p->address_length == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        if (p->resource_type == ACPI_MEMORY_RANGE) {
                if (p->info.mem.write_protect == ACPI_READ_WRITE_MEMORY)
-                       flags = IORESOURCE_MEM_WRITEABLE;
+                       flags |= IORESOURCE_MEM_WRITEABLE;
                pnp_register_mem_resource(dev, option_flags, p->minimum,
                                          p->minimum, 0, p->address_length,
                                          flags);
        } else if (p->resource_type == ACPI_IO_RANGE)
                pnp_register_port_resource(dev, option_flags, p->minimum,
                                           p->minimum, 0, p->address_length,
-                                          IORESOURCE_IO_FIXED);
+                                          flags | IORESOURCE_IO_FIXED);
  }
  
  static __init void pnpacpi_parse_ext_address_option(struct pnp_dev *dev,
        unsigned char flags = 0;
  
        if (p->address_length == 0)
-               return;
+               flags |= IORESOURCE_DISABLED;
  
        if (p->resource_type == ACPI_MEMORY_RANGE) {
                if (p->info.mem.write_protect == ACPI_READ_WRITE_MEMORY)
-                       flags = IORESOURCE_MEM_WRITEABLE;
+                       flags |= IORESOURCE_MEM_WRITEABLE;
                pnp_register_mem_resource(dev, option_flags, p->minimum,
                                          p->minimum, 0, p->address_length,
                                          flags);
        } else if (p->resource_type == ACPI_IO_RANGE)
                pnp_register_port_resource(dev, option_flags, p->minimum,
                                           p->minimum, 0, p->address_length,
-                                          IORESOURCE_IO_FIXED);
+                                          flags | IORESOURCE_IO_FIXED);
  }
  
  struct acpipnp_parse_option_s {
@@@ -1018,7 -1020,7 +1020,7 @@@ static void pnpacpi_encode_io(struct pn
                io->minimum = p->start;
                io->maximum = p->end;
                io->alignment = 0;      /* Correct? */
 -              io->address_length = p->end - p->start + 1;
 +              io->address_length = resource_size(p);
        } else {
                io->minimum = 0;
                io->address_length = 0;
@@@ -1036,7 -1038,7 +1038,7 @@@ static void pnpacpi_encode_fixed_io(str
  
        if (pnp_resource_enabled(p)) {
                fixed_io->address = p->start;
 -              fixed_io->address_length = p->end - p->start + 1;
 +              fixed_io->address_length = resource_size(p);
        } else {
                fixed_io->address = 0;
                fixed_io->address_length = 0;
@@@ -1059,7 -1061,7 +1061,7 @@@ static void pnpacpi_encode_mem24(struc
                memory24->minimum = p->start;
                memory24->maximum = p->end;
                memory24->alignment = 0;
 -              memory24->address_length = p->end - p->start + 1;
 +              memory24->address_length = resource_size(p);
        } else {
                memory24->minimum = 0;
                memory24->address_length = 0;
@@@ -1083,7 -1085,7 +1085,7 @@@ static void pnpacpi_encode_mem32(struc
                memory32->minimum = p->start;
                memory32->maximum = p->end;
                memory32->alignment = 0;
 -              memory32->address_length = p->end - p->start + 1;
 +              memory32->address_length = resource_size(p);
        } else {
                memory32->minimum = 0;
                memory32->alignment = 0;
@@@ -1106,7 -1108,7 +1108,7 @@@ static void pnpacpi_encode_fixed_mem32(
                    p->flags & IORESOURCE_MEM_WRITEABLE ?
                    ACPI_READ_WRITE_MEMORY : ACPI_READ_ONLY_MEMORY;
                fixed_memory32->address = p->start;
 -              fixed_memory32->address_length = p->end - p->start + 1;
 +              fixed_memory32->address_length = resource_size(p);
        } else {
                fixed_memory32->address = 0;
                fixed_memory32->address_length = 0;
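The rsparser.c hunks above all follow one pattern, matching the "pnpacpi: register disabled resources" commit in the shortlog: instead of an early `return` that silently dropped a zero-length ACPI resource, the option is now registered anyway with `IORESOURCE_DISABLED` set. A minimal standalone sketch of the before/after control flow — the flag values are hypothetical stand-ins (the kernel defines the real ones in `<linux/ioport.h>`), and the return value stands in for what would be passed to `pnp_register_mem_resource()`:

```c
#include <assert.h>

/* Hypothetical flag values for illustration only. */
#define IORESOURCE_DISABLED      0x10
#define IORESOURCE_MEM_WRITEABLE 0x01

/* Old behaviour: a zero-length resource was never registered at all. */
static int parse_mem_old(unsigned int length, int writable)
{
	unsigned char flags = 0;

	if (length == 0)
		return -1;		/* early return: resource dropped */
	if (writable)
		flags = IORESOURCE_MEM_WRITEABLE;
	return flags;			/* stands in for pnp_register_mem_resource() */
}

/* New behaviour: register it anyway, marked disabled. */
static int parse_mem_new(unsigned int length, int writable)
{
	unsigned char flags = 0;

	if (length == 0)
		flags |= IORESOURCE_DISABLED;
	if (writable)
		flags |= IORESOURCE_MEM_WRITEABLE;	/* OR, not assign */
	return flags;
}
```

Note the companion change from `flags = ...` to `flags |= ...` throughout the diff: once `flags` may already carry `IORESOURCE_DISABLED`, a plain assignment would silently clear it.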
diff --combined drivers/rtc/rtc-s3c.c
@@@ -57,11 -57,13 +57,13 @@@ static irqreturn_t s3c_rtc_alarmirq(in
  {
        struct rtc_device *rdev = id;
  
+       clk_enable(rtc_clk);
        rtc_update_irq(rdev, 1, RTC_AF | RTC_IRQF);
  
        if (s3c_rtc_cpu_type == TYPE_S3C64XX)
                writeb(S3C2410_INTP_ALM, s3c_rtc_base + S3C2410_INTP);
  
+       clk_disable(rtc_clk);
        return IRQ_HANDLED;
  }
  
@@@ -69,11 -71,13 +71,13 @@@ static irqreturn_t s3c_rtc_tickirq(int 
  {
        struct rtc_device *rdev = id;
  
+       clk_enable(rtc_clk);
        rtc_update_irq(rdev, 1, RTC_PF | RTC_IRQF);
  
        if (s3c_rtc_cpu_type == TYPE_S3C64XX)
                writeb(S3C2410_INTP_TIC, s3c_rtc_base + S3C2410_INTP);
  
+       clk_disable(rtc_clk);
        return IRQ_HANDLED;
  }
  
@@@ -84,12 -88,14 +88,14 @@@ static int s3c_rtc_setaie(struct devic
  
        pr_debug("%s: aie=%d\n", __func__, enabled);
  
+       clk_enable(rtc_clk);
        tmp = readb(s3c_rtc_base + S3C2410_RTCALM) & ~S3C2410_RTCALM_ALMEN;
  
        if (enabled)
                tmp |= S3C2410_RTCALM_ALMEN;
  
        writeb(tmp, s3c_rtc_base + S3C2410_RTCALM);
+       clk_disable(rtc_clk);
  
        return 0;
  }
@@@ -103,6 -109,7 +109,7 @@@ static int s3c_rtc_setfreq(struct devic
        if (!is_power_of_2(freq))
                return -EINVAL;
  
+       clk_enable(rtc_clk);
        spin_lock_irq(&s3c_rtc_pie_lock);
  
        if (s3c_rtc_cpu_type == TYPE_S3C2410) {
  
        writel(tmp, s3c_rtc_base + S3C2410_TICNT);
        spin_unlock_irq(&s3c_rtc_pie_lock);
+       clk_disable(rtc_clk);
  
        return 0;
  }
@@@ -125,6 -133,7 +133,7 @@@ static int s3c_rtc_gettime(struct devic
        unsigned int have_retried = 0;
        void __iomem *base = s3c_rtc_base;
  
+       clk_enable(rtc_clk);
   retry_get_time:
        rtc_tm->tm_min  = readb(base + S3C2410_RTCMIN);
        rtc_tm->tm_hour = readb(base + S3C2410_RTCHOUR);
        rtc_tm->tm_year += 100;
        rtc_tm->tm_mon -= 1;
  
+       clk_disable(rtc_clk);
        return rtc_valid_tm(rtc_tm);
  }
  
@@@ -165,6 -175,7 +175,7 @@@ static int s3c_rtc_settime(struct devic
        void __iomem *base = s3c_rtc_base;
        int year = tm->tm_year - 100;
  
+       clk_enable(rtc_clk);
        pr_debug("set time %04d.%02d.%02d %02d:%02d:%02d\n",
                 1900 + tm->tm_year, tm->tm_mon, tm->tm_mday,
                 tm->tm_hour, tm->tm_min, tm->tm_sec);
        writeb(bin2bcd(tm->tm_mday), base + S3C2410_RTCDATE);
        writeb(bin2bcd(tm->tm_mon + 1), base + S3C2410_RTCMON);
        writeb(bin2bcd(year), base + S3C2410_RTCYEAR);
+       clk_disable(rtc_clk);
  
        return 0;
  }
@@@ -192,6 -204,7 +204,7 @@@ static int s3c_rtc_getalarm(struct devi
        void __iomem *base = s3c_rtc_base;
        unsigned int alm_en;
  
+       clk_enable(rtc_clk);
        alm_tm->tm_sec  = readb(base + S3C2410_ALMSEC);
        alm_tm->tm_min  = readb(base + S3C2410_ALMMIN);
        alm_tm->tm_hour = readb(base + S3C2410_ALMHOUR);
        else
                alm_tm->tm_year = -1;
  
+       clk_disable(rtc_clk);
        return 0;
  }
  
@@@ -252,6 -266,7 +266,7 @@@ static int s3c_rtc_setalarm(struct devi
        void __iomem *base = s3c_rtc_base;
        unsigned int alrm_en;
  
+       clk_enable(rtc_clk);
        pr_debug("s3c_rtc_setalarm: %d, %04d.%02d.%02d %02d:%02d:%02d\n",
                 alrm->enabled,
                 1900 + tm->tm_year, tm->tm_mon, tm->tm_mday,
  
        s3c_rtc_setaie(dev, alrm->enabled);
  
+       clk_disable(rtc_clk);
        return 0;
  }
  
@@@ -289,6 -305,7 +305,7 @@@ static int s3c_rtc_proc(struct device *
  {
        unsigned int ticnt;
  
+       clk_enable(rtc_clk);
        if (s3c_rtc_cpu_type == TYPE_S3C64XX) {
                ticnt = readw(s3c_rtc_base + S3C2410_RTCCON);
                ticnt &= S3C64XX_RTCCON_TICEN;
        }
  
        seq_printf(seq, "periodic_IRQ\t: %s\n", ticnt  ? "yes" : "no");
+       clk_disable(rtc_clk);
        return 0;
  }
  
@@@ -360,6 -378,7 +378,7 @@@ static void s3c_rtc_enable(struct platf
        if (s3c_rtc_base == NULL)
                return;
  
+       clk_enable(rtc_clk);
        if (!en) {
                tmp = readw(base + S3C2410_RTCCON);
                if (s3c_rtc_cpu_type == TYPE_S3C64XX)
                                base + S3C2410_RTCCON);
                }
        }
+       clk_disable(rtc_clk);
  }
  
  static int __devexit s3c_rtc_remove(struct platform_device *dev)
  
        s3c_rtc_setaie(&dev->dev, 0);
  
-       clk_disable(rtc_clk);
        clk_put(rtc_clk);
        rtc_clk = NULL;
  
@@@ -455,7 -474,8 +474,7 @@@ static int __devinit s3c_rtc_probe(stru
                return -ENOENT;
        }
  
 -      s3c_rtc_mem = request_mem_region(res->start,
 -                                       res->end-res->start+1,
 +      s3c_rtc_mem = request_mem_region(res->start, resource_size(res),
                                         pdev->name);
  
        if (s3c_rtc_mem == NULL) {
                goto err_nores;
        }
  
 -      s3c_rtc_base = ioremap(res->start, res->end - res->start + 1);
 +      s3c_rtc_base = ioremap(res->start, resource_size(res));
        if (s3c_rtc_base == NULL) {
                dev_err(&pdev->dev, "failed ioremap()\n");
                ret = -EINVAL;
  
        s3c_rtc_setfreq(&pdev->dev, 1);
  
+       clk_disable(rtc_clk);
        return 0;
  
   err_nortc:
@@@ -554,6 -576,7 +575,7 @@@ static int ticnt_save, ticnt_en_save
  
  static int s3c_rtc_suspend(struct platform_device *pdev, pm_message_t state)
  {
+       clk_enable(rtc_clk);
        /* save TICNT for anyone using periodic interrupts */
        ticnt_save = readb(s3c_rtc_base + S3C2410_TICNT);
        if (s3c_rtc_cpu_type == TYPE_S3C64XX) {
                else
                        dev_err(&pdev->dev, "enable_irq_wake failed\n");
        }
+       clk_disable(rtc_clk);
  
        return 0;
  }
@@@ -576,6 -600,7 +599,7 @@@ static int s3c_rtc_resume(struct platfo
  {
        unsigned int tmp;
  
+       clk_enable(rtc_clk);
        s3c_rtc_enable(pdev, 1);
        writeb(ticnt_save, s3c_rtc_base + S3C2410_TICNT);
        if (s3c_rtc_cpu_type == TYPE_S3C64XX && ticnt_en_save) {
                disable_irq_wake(s3c_rtc_alarmno);
                wake_en = false;
        }
+       clk_disable(rtc_clk);
  
        return 0;
  }
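The rtc-s3c.c diff implements the "support clock gating" item from the shortlog: rather than holding the RTC clock enabled from probe to remove, every register access is now bracketed by `clk_enable()`/`clk_disable()`, so the clock is gated off whenever the hardware is idle. The bracketing pattern can be sketched in isolation — the toy `struct clk` and stub enable/disable functions below are illustrative stand-ins for the kernel clk API, not its real implementation:

```c
#include <assert.h>

/* Toy clock with a kernel-style enable count. */
struct clk {
	int enable_count;
};

static void clk_enable(struct clk *c)
{
	c->enable_count++;
}

static void clk_disable(struct clk *c)
{
	assert(c->enable_count > 0);	/* unbalanced disable is a bug */
	c->enable_count--;
}

static struct clk rtc_clk;
static unsigned char fake_reg;		/* stands in for an RTC register */

/* Each hardware access is bracketed, so the clock is off between calls. */
static unsigned char rtc_read(void)
{
	unsigned char v;

	clk_enable(&rtc_clk);
	v = fake_reg;			/* readb(s3c_rtc_base + ...) in the driver */
	clk_disable(&rtc_clk);
	return v;
}
```

Because the enable/disable calls are counted rather than boolean, nesting is safe — which matters above, where `s3c_rtc_setalarm()` calls `s3c_rtc_setaie()` while it already holds the clock.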
diff --combined fs/hugetlbfs/inode.c
@@@ -94,7 -94,7 +94,7 @@@ static int hugetlbfs_file_mmap(struct f
        vma->vm_flags |= VM_HUGETLB | VM_RESERVED;
        vma->vm_ops = &hugetlb_vm_ops;
  
-       if (vma->vm_pgoff & ~(huge_page_mask(h) >> PAGE_SHIFT))
+       if (vma->vm_pgoff & (~huge_page_mask(h) >> PAGE_SHIFT))
                return -EINVAL;
  
        vma_len = (loff_t)(vma->vm_end - vma->vm_start);
@@@ -1030,7 -1030,6 +1030,7 @@@ static int __init init_hugetlbfs_fs(voi
  static void __exit exit_hugetlbfs_fs(void)
  {
        kmem_cache_destroy(hugetlbfs_inode_cachep);
 +      kern_unmount(hugetlbfs_vfsmount);
        unregister_filesystem(&hugetlbfs_fs_type);
        bdi_destroy(&hugetlbfs_backing_dev_info);
  }
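The one-line hugetlbfs change swaps `~(huge_page_mask(h) >> PAGE_SHIFT)` for `(~huge_page_mask(h) >> PAGE_SHIFT)`, and the order of the shift and the inversion matters: shifting the mask right first brings zeros into the top bits, so inverting afterwards sets the *top* `PAGE_SHIFT` bits of the test mask, and the old check rejected perfectly huge-page-aligned mappings at large file offsets. A worked example with 32-bit values, 4 MiB huge pages and `PAGE_SHIFT` = 12 (illustrative parameters, not taken from the diff):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* huge_page_mask() for a hypothetical 4 MiB huge page, 32-bit arch: 0xFFC00000 */
static const uint32_t huge_page_mask = ~(uint32_t)(4u * 1024 * 1024 - 1);

/* Old test mask: ~(0xFFC00000 >> 12) = 0xFFF003FF -- top bits wrongly included. */
static uint32_t old_check(uint32_t pgoff)
{
	return pgoff & ~(huge_page_mask >> PAGE_SHIFT);
}

/* Fixed test mask: ~0xFFC00000 >> 12 = 0x3FF -- alignment bits only. */
static uint32_t new_check(uint32_t pgoff)
{
	return pgoff & (~huge_page_mask >> PAGE_SHIFT);
}
```

With these parameters, `vm_pgoff` = 0x100000 corresponds to file offset 4 GiB, which is 4 MiB-aligned and therefore valid; the old expression nevertheless flags it (bit 20 falls inside 0xFFF003FF), while both expressions still reject genuinely misaligned offsets.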
diff --combined lib/vsprintf.c
  #include <asm/div64.h>
  #include <asm/sections.h>     /* for dereference_function_descriptor() */
  
- /* Works only for digits and letters, but small and fast */
- #define TOLOWER(x) ((x) | 0x20)
  static unsigned int simple_guess_base(const char *cp)
  {
        if (cp[0] == '0') {
-               if (TOLOWER(cp[1]) == 'x' && isxdigit(cp[2]))
+               if (_tolower(cp[1]) == 'x' && isxdigit(cp[2]))
                        return 16;
                else
                        return 8;
@@@ -59,13 -56,13 +56,13 @@@ unsigned long long simple_strtoull(cons
        if (!base)
                base = simple_guess_base(cp);
  
-       if (base == 16 && cp[0] == '0' && TOLOWER(cp[1]) == 'x')
+       if (base == 16 && cp[0] == '0' && _tolower(cp[1]) == 'x')
                cp += 2;
  
        while (isxdigit(*cp)) {
                unsigned int value;
  
-               value = isdigit(*cp) ? *cp - '0' : TOLOWER(*cp) - 'a' + 10;
+               value = isdigit(*cp) ? *cp - '0' : _tolower(*cp) - 'a' + 10;
                if (value >= base)
                        break;
                result = result * base + value;
@@@ -1036,8 -1033,8 +1033,8 @@@ precision
  qualifier:
        /* get the conversion qualifier */
        spec->qualifier = -1;
-       if (*fmt == 'h' || TOLOWER(*fmt) == 'l' ||
-           TOLOWER(*fmt) == 'z' || *fmt == 't') {
+       if (*fmt == 'h' || _tolower(*fmt) == 'l' ||
+           _tolower(*fmt) == 'z' || *fmt == 't') {
                spec->qualifier = *fmt++;
                if (unlikely(spec->qualifier == *fmt)) {
                        if (spec->qualifier == 'l') {
                        spec->type = FORMAT_TYPE_LONG;
                else
                        spec->type = FORMAT_TYPE_ULONG;
-       } else if (TOLOWER(spec->qualifier) == 'z') {
+       } else if (_tolower(spec->qualifier) == 'z') {
                spec->type = FORMAT_TYPE_SIZE_T;
        } else if (spec->qualifier == 't') {
                spec->type = FORMAT_TYPE_PTRDIFF;
   * %pi4 print an IPv4 address with leading zeros
   * %pI6 print an IPv6 address with colons
   * %pi6 print an IPv6 address without colons
 - * %pI6c print an IPv6 address as specified by
 - *   http://tools.ietf.org/html/draft-ietf-6man-text-addr-representation-00
 + * %pI6c print an IPv6 address as specified by RFC 5952
   * %pU[bBlL] print a UUID/GUID in big or little endian using lower or upper
   *   case.
   * %n is ignored
@@@ -1262,7 -1260,7 +1259,7 @@@ int vsnprintf(char *buf, size_t size, c
                        if (qualifier == 'l') {
                                long *ip = va_arg(args, long *);
                                *ip = (str - buf);
-                       } else if (TOLOWER(qualifier) == 'z') {
+                       } else if (_tolower(qualifier) == 'z') {
                                size_t *ip = va_arg(args, size_t *);
                                *ip = (str - buf);
                        } else {
@@@ -1549,7 -1547,7 +1546,7 @@@ do {                                                                    
                        void *skip_arg;
                        if (qualifier == 'l')
                                skip_arg = va_arg(args, long *);
-                       else if (TOLOWER(qualifier) == 'z')
+                       else if (_tolower(qualifier) == 'z')
                                skip_arg = va_arg(args, size_t *);
                        else
                                skip_arg = va_arg(args, int *);
@@@ -1855,8 -1853,8 +1852,8 @@@ int vsscanf(const char *buf, const cha
  
                /* get conversion qualifier */
                qualifier = -1;
-               if (*fmt == 'h' || TOLOWER(*fmt) == 'l' ||
-                   TOLOWER(*fmt) == 'z') {
+               if (*fmt == 'h' || _tolower(*fmt) == 'l' ||
+                   _tolower(*fmt) == 'z') {
                        qualifier = *fmt++;
                        if (unlikely(qualifier == *fmt)) {
                                if (qualifier == 'h') {
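The vsprintf.c change drops the file-local `TOLOWER()` macro in favour of the shared kernel `_tolower()` helper. Both rely on the same ASCII property: uppercase and lowercase letters differ only in bit 5, so OR-ing in 0x20 lowercases an uppercase letter and leaves a lowercase one unchanged — but it is only safe when the input is already known to be a letter or digit, which is what the deleted comment ("Works only for digits and letters, but small and fast") warned about. A sketch of the assumed semantics (not the actual kernel ctype.h source):

```c
#include <assert.h>

/* Same trick the deleted TOLOWER() macro used: set bit 5 (0x20). */
static char ascii_tolower(char c)
{
	return c | 0x20;
}
```

In `simple_strtoull()` above, this lets one comparison path handle both the `0x`/`0X` prefix and the hex digits `a`-`f`/`A`-`F`; digits pass through unchanged because 0x30-0x39 already have bit 5 set.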
diff --combined mm/backing-dev.c
@@@ -505,7 -505,7 +505,7 @@@ static void bdi_remove_from_list(struc
        list_del_rcu(&bdi->bdi_list);
        spin_unlock_bh(&bdi_lock);
  
 -      synchronize_rcu();
 +      synchronize_rcu_expedited();
  }
  
  int bdi_register(struct backing_dev_info *bdi, struct device *parent,
@@@ -606,6 -606,7 +606,7 @@@ static void bdi_prune_sb(struct backing
  void bdi_unregister(struct backing_dev_info *bdi)
  {
        if (bdi->dev) {
+               bdi_set_min_ratio(bdi, 0);
                trace_writeback_bdi_unregister(bdi);
                bdi_prune_sb(bdi);
                del_timer_sync(&bdi->wb.wakeup_timer);
diff --combined mm/shmem.c
@@@ -51,6 -51,7 +51,7 @@@ static struct vfsmount *shm_mnt
  #include <linux/shmem_fs.h>
  #include <linux/writeback.h>
  #include <linux/blkdev.h>
+ #include <linux/splice.h>
  #include <linux/security.h>
  #include <linux/swapops.h>
  #include <linux/mempolicy.h>
@@@ -126,8 -127,15 +127,15 @@@ static unsigned long shmem_default_max_
  }
  #endif
  
- static int shmem_getpage(struct inode *inode, unsigned long idx,
-                        struct page **pagep, enum sgp_type sgp, int *type);
+ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
+       struct page **pagep, enum sgp_type sgp, gfp_t gfp, int *fault_type);
+ static inline int shmem_getpage(struct inode *inode, pgoff_t index,
+       struct page **pagep, enum sgp_type sgp, int *fault_type)
+ {
+       return shmem_getpage_gfp(inode, index, pagep, sgp,
+                       mapping_gfp_mask(inode->i_mapping), fault_type);
+ }
  
  static inline struct page *shmem_dir_alloc(gfp_t gfp_mask)
  {
@@@ -241,9 -249,7 +249,7 @@@ static void shmem_free_blocks(struct in
        struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
        if (sbinfo->max_blocks) {
                percpu_counter_add(&sbinfo->used_blocks, -pages);
-               spin_lock(&inode->i_lock);
                inode->i_blocks -= pages*BLOCKS_PER_PAGE;
-               spin_unlock(&inode->i_lock);
        }
  }
  
@@@ -405,10 -411,12 +411,12 @@@ static void shmem_swp_set(struct shmem_
   * @info:     info structure for the inode
   * @index:    index of the page to find
   * @sgp:      check and recheck i_size? skip allocation?
+  * @gfp:      gfp mask to use for any page allocation
   *
   * If the entry does not exist, allocate it.
   */
- static swp_entry_t *shmem_swp_alloc(struct shmem_inode_info *info, unsigned long index, enum sgp_type sgp)
+ static swp_entry_t *shmem_swp_alloc(struct shmem_inode_info *info,
+                       unsigned long index, enum sgp_type sgp, gfp_t gfp)
  {
        struct inode *inode = &info->vfs_inode;
        struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
                                                sbinfo->max_blocks - 1) >= 0)
                                return ERR_PTR(-ENOSPC);
                        percpu_counter_inc(&sbinfo->used_blocks);
-                       spin_lock(&inode->i_lock);
                        inode->i_blocks += BLOCKS_PER_PAGE;
-                       spin_unlock(&inode->i_lock);
                }
  
                spin_unlock(&info->lock);
-               page = shmem_dir_alloc(mapping_gfp_mask(inode->i_mapping));
+               page = shmem_dir_alloc(gfp);
                spin_lock(&info->lock);
  
                if (!page) {
@@@ -966,20 -972,7 +972,7 @@@ found
        error = add_to_page_cache_locked(page, mapping, idx, GFP_NOWAIT);
        /* which does mem_cgroup_uncharge_cache_page on error */
  
-       if (error == -EEXIST) {
-               struct page *filepage = find_get_page(mapping, idx);
-               error = 1;
-               if (filepage) {
-                       /*
-                        * There might be a more uptodate page coming down
-                        * from a stacked writepage: forget our swappage if so.
-                        */
-                       if (PageUptodate(filepage))
-                               error = 0;
-                       page_cache_release(filepage);
-               }
-       }
-       if (!error) {
+       if (error != -ENOMEM) {
                delete_from_swap_cache(page);
                set_page_dirty(page);
                info->flags |= SHMEM_PAGEIN;
@@@ -1066,16 -1059,17 +1059,17 @@@ static int shmem_writepage(struct page 
        /*
         * shmem_backing_dev_info's capabilities prevent regular writeback or
         * sync from ever calling shmem_writepage; but a stacking filesystem
-        * may use the ->writepage of its underlying filesystem, in which case
+        * might use ->writepage of its underlying filesystem, in which case
         * tmpfs should write out to swap only in response to memory pressure,
-        * and not for the writeback threads or sync.  However, in those cases,
-        * we do still want to check if there's a redundant swappage to be
-        * discarded.
+        * and not for the writeback threads or sync.
         */
-       if (wbc->for_reclaim)
-               swap = get_swap_page();
-       else
-               swap.val = 0;
+       if (!wbc->for_reclaim) {
+               WARN_ON_ONCE(1);        /* Still happens? Tell us about it! */
+               goto redirty;
+       }
+       swap = get_swap_page();
+       if (!swap.val)
+               goto redirty;
  
        /*
         * Add inode to shmem_unuse()'s list of swapped-out inodes,
         * we've taken the spinlock, because shmem_unuse_inode() will
         * prune a !swapped inode from the swaplist under both locks.
         */
-       if (swap.val) {
-               mutex_lock(&shmem_swaplist_mutex);
-               if (list_empty(&info->swaplist))
-                       list_add_tail(&info->swaplist, &shmem_swaplist);
-       }
+       mutex_lock(&shmem_swaplist_mutex);
+       if (list_empty(&info->swaplist))
+               list_add_tail(&info->swaplist, &shmem_swaplist);
  
        spin_lock(&info->lock);
-       if (swap.val)
-               mutex_unlock(&shmem_swaplist_mutex);
+       mutex_unlock(&shmem_swaplist_mutex);
  
        if (index >= info->next_index) {
                BUG_ON(!(info->flags & SHMEM_TRUNCATE));
        }
        entry = shmem_swp_entry(info, index, NULL);
        if (entry->val) {
-               /*
-                * The more uptodate page coming down from a stacked
-                * writepage should replace our old swappage.
-                */
+               WARN_ON_ONCE(1);        /* Still happens? Tell us about it! */
                free_swap_and_cache(*entry);
                shmem_swp_set(info, entry, 0);
        }
        shmem_recalc_inode(inode);
  
-       if (swap.val && add_to_swap_cache(page, swap, GFP_ATOMIC) == 0) {
+       if (add_to_swap_cache(page, swap, GFP_ATOMIC) == 0) {
                delete_from_page_cache(page);
                shmem_swp_set(info, entry, swap.val);
                shmem_swp_unmap(entry);
@@@ -1228,92 -1216,83 +1216,83 @@@ static inline struct mempolicy *shmem_g
  #endif
  
  /*
-  * shmem_getpage - either get the page from swap or allocate a new one
+  * shmem_getpage_gfp - find page in cache, or get from swap, or allocate
   *
   * If we allocate a new one we do not mark it dirty: that's up to the
   * vm.  If we swap it in we mark it dirty, since we also free the swap
   * entry: a page cannot live in both the swap cache and page cache.
   */
- static int shmem_getpage(struct inode *inode, unsigned long idx,
-                       struct page **pagep, enum sgp_type sgp, int *type)
+ static int shmem_getpage_gfp(struct inode *inode, pgoff_t idx,
+       struct page **pagep, enum sgp_type sgp, gfp_t gfp, int *fault_type)
  {
        struct address_space *mapping = inode->i_mapping;
        struct shmem_inode_info *info = SHMEM_I(inode);
        struct shmem_sb_info *sbinfo;
-       struct page *filepage = *pagep;
-       struct page *swappage;
+       struct page *page;
        struct page *prealloc_page = NULL;
        swp_entry_t *entry;
        swp_entry_t swap;
-       gfp_t gfp;
        int error;
+       int ret;
  
        if (idx >= SHMEM_MAX_INDEX)
                return -EFBIG;
-       if (type)
-               *type = 0;
-       /*
-        * Normally, filepage is NULL on entry, and either found
-        * uptodate immediately, or allocated and zeroed, or read
-        * in under swappage, which is then assigned to filepage.
-        * But shmem_readpage (required for splice) passes in a locked
-        * filepage, which may be found not uptodate by other callers
-        * too, and may need to be copied from the swappage read in.
-        */
  repeat:
-       if (!filepage)
-               filepage = find_lock_page(mapping, idx);
-       if (filepage && PageUptodate(filepage))
-               goto done;
-       gfp = mapping_gfp_mask(mapping);
-       if (!filepage) {
+       page = find_lock_page(mapping, idx);
+       if (page) {
                /*
-                * Try to preload while we can wait, to not make a habit of
-                * draining atomic reserves; but don't latch on to this cpu.
+                * Once we can get the page lock, it must be uptodate:
+                * if there were an error in reading back from swap,
+                * the page would not be inserted into the filecache.
                 */
-               error = radix_tree_preload(gfp & ~__GFP_HIGHMEM);
-               if (error)
-                       goto failed;
-               radix_tree_preload_end();
-               if (sgp != SGP_READ && !prealloc_page) {
-                       /* We don't care if this fails */
-                       prealloc_page = shmem_alloc_page(gfp, info, idx);
-                       if (prealloc_page) {
-                               if (mem_cgroup_cache_charge(prealloc_page,
-                                               current->mm, GFP_KERNEL)) {
-                                       page_cache_release(prealloc_page);
-                                       prealloc_page = NULL;
-                               }
+               BUG_ON(!PageUptodate(page));
+               goto done;
+       }
+       /*
+        * Try to preload while we can wait, to not make a habit of
+        * draining atomic reserves; but don't latch on to this cpu.
+        */
+       error = radix_tree_preload(gfp & GFP_RECLAIM_MASK);
+       if (error)
+               goto out;
+       radix_tree_preload_end();
+       if (sgp != SGP_READ && !prealloc_page) {
+               prealloc_page = shmem_alloc_page(gfp, info, idx);
+               if (prealloc_page) {
+                       SetPageSwapBacked(prealloc_page);
+                       if (mem_cgroup_cache_charge(prealloc_page,
+                                       current->mm, GFP_KERNEL)) {
+                               page_cache_release(prealloc_page);
+                               prealloc_page = NULL;
                        }
                }
        }
-       error = 0;
  
        spin_lock(&info->lock);
        shmem_recalc_inode(inode);
-       entry = shmem_swp_alloc(info, idx, sgp);
+       entry = shmem_swp_alloc(info, idx, sgp, gfp);
        if (IS_ERR(entry)) {
                spin_unlock(&info->lock);
                error = PTR_ERR(entry);
-               goto failed;
+               goto out;
        }
        swap = *entry;
  
        if (swap.val) {
                /* Look it up and read it in.. */
-               swappage = lookup_swap_cache(swap);
-               if (!swappage) {
+               page = lookup_swap_cache(swap);
+               if (!page) {
                        shmem_swp_unmap(entry);
                        spin_unlock(&info->lock);
                        /* here we actually do the io */
-                       if (type)
-                               *type |= VM_FAULT_MAJOR;
-                       swappage = shmem_swapin(swap, gfp, info, idx);
-                       if (!swappage) {
+                       if (fault_type)
+                               *fault_type |= VM_FAULT_MAJOR;
+                       page = shmem_swapin(swap, gfp, info, idx);
+                       if (!page) {
                                spin_lock(&info->lock);
-                               entry = shmem_swp_alloc(info, idx, sgp);
+                               entry = shmem_swp_alloc(info, idx, sgp, gfp);
                                if (IS_ERR(entry))
                                        error = PTR_ERR(entry);
                                else {
                                }
                                spin_unlock(&info->lock);
                                if (error)
-                                       goto failed;
+                                       goto out;
                                goto repeat;
                        }
-                       wait_on_page_locked(swappage);
-                       page_cache_release(swappage);
+                       wait_on_page_locked(page);
+                       page_cache_release(page);
                        goto repeat;
                }
  
                /* We have to do this with page locked to prevent races */
-               if (!trylock_page(swappage)) {
+               if (!trylock_page(page)) {
                        shmem_swp_unmap(entry);
                        spin_unlock(&info->lock);
-                       wait_on_page_locked(swappage);
-                       page_cache_release(swappage);
+                       wait_on_page_locked(page);
+                       page_cache_release(page);
                        goto repeat;
                }
-               if (PageWriteback(swappage)) {
+               if (PageWriteback(page)) {
                        shmem_swp_unmap(entry);
                        spin_unlock(&info->lock);
-                       wait_on_page_writeback(swappage);
-                       unlock_page(swappage);
-                       page_cache_release(swappage);
+                       wait_on_page_writeback(page);
+                       unlock_page(page);
+                       page_cache_release(page);
                        goto repeat;
                }
-               if (!PageUptodate(swappage)) {
+               if (!PageUptodate(page)) {
                        shmem_swp_unmap(entry);
                        spin_unlock(&info->lock);
-                       unlock_page(swappage);
-                       page_cache_release(swappage);
+                       unlock_page(page);
+                       page_cache_release(page);
                        error = -EIO;
-                       goto failed;
+                       goto out;
                }
  
-               if (filepage) {
-                       shmem_swp_set(info, entry, 0);
-                       shmem_swp_unmap(entry);
-                       delete_from_swap_cache(swappage);
-                       spin_unlock(&info->lock);
-                       copy_highpage(filepage, swappage);
-                       unlock_page(swappage);
-                       page_cache_release(swappage);
-                       flush_dcache_page(filepage);
-                       SetPageUptodate(filepage);
-                       set_page_dirty(filepage);
-                       swap_free(swap);
-               } else if (!(error = add_to_page_cache_locked(swappage, mapping,
-                                       idx, GFP_NOWAIT))) {
-                       info->flags |= SHMEM_PAGEIN;
-                       shmem_swp_set(info, entry, 0);
-                       shmem_swp_unmap(entry);
-                       delete_from_swap_cache(swappage);
-                       spin_unlock(&info->lock);
-                       filepage = swappage;
-                       set_page_dirty(filepage);
-                       swap_free(swap);
-               } else {
+               error = add_to_page_cache_locked(page, mapping,
+                                                idx, GFP_NOWAIT);
+               if (error) {
                        shmem_swp_unmap(entry);
                        spin_unlock(&info->lock);
                        if (error == -ENOMEM) {
                                 * call memcg's OOM if needed.
                                 */
                                error = mem_cgroup_shmem_charge_fallback(
-                                                               swappage,
-                                                               current->mm,
-                                                               gfp);
+                                               page, current->mm, gfp);
                                if (error) {
-                                       unlock_page(swappage);
-                                       page_cache_release(swappage);
-                                       goto failed;
+                                       unlock_page(page);
+                                       page_cache_release(page);
+                                       goto out;
                                }
                        }
-                       unlock_page(swappage);
-                       page_cache_release(swappage);
+                       unlock_page(page);
+                       page_cache_release(page);
                        goto repeat;
                }
-       } else if (sgp == SGP_READ && !filepage) {
+               info->flags |= SHMEM_PAGEIN;
+               shmem_swp_set(info, entry, 0);
                shmem_swp_unmap(entry);
-               filepage = find_get_page(mapping, idx);
-               if (filepage &&
-                   (!PageUptodate(filepage) || !trylock_page(filepage))) {
+               delete_from_swap_cache(page);
+               spin_unlock(&info->lock);
+               set_page_dirty(page);
+               swap_free(swap);
+       } else if (sgp == SGP_READ) {
+               shmem_swp_unmap(entry);
+               page = find_get_page(mapping, idx);
+               if (page && !trylock_page(page)) {
                        spin_unlock(&info->lock);
-                       wait_on_page_locked(filepage);
-                       page_cache_release(filepage);
-                       filepage = NULL;
+                       wait_on_page_locked(page);
+                       page_cache_release(page);
                        goto repeat;
                }
                spin_unlock(&info->lock);
-       } else {
+       } else if (prealloc_page) {
                shmem_swp_unmap(entry);
                sbinfo = SHMEM_SB(inode->i_sb);
                if (sbinfo->max_blocks) {
                            shmem_acct_block(info->flags))
                                goto nospace;
                        percpu_counter_inc(&sbinfo->used_blocks);
-                       spin_lock(&inode->i_lock);
                        inode->i_blocks += BLOCKS_PER_PAGE;
-                       spin_unlock(&inode->i_lock);
                } else if (shmem_acct_block(info->flags))
                        goto nospace;
  
-               if (!filepage) {
-                       int ret;
-                       if (!prealloc_page) {
-                               spin_unlock(&info->lock);
-                               filepage = shmem_alloc_page(gfp, info, idx);
-                               if (!filepage) {
-                                       shmem_unacct_blocks(info->flags, 1);
-                                       shmem_free_blocks(inode, 1);
-                                       error = -ENOMEM;
-                                       goto failed;
-                               }
-                               SetPageSwapBacked(filepage);
+               page = prealloc_page;
+               prealloc_page = NULL;
  
-                               /*
-                                * Precharge page while we can wait, compensate
-                                * after
-                                */
-                               error = mem_cgroup_cache_charge(filepage,
-                                       current->mm, GFP_KERNEL);
-                               if (error) {
-                                       page_cache_release(filepage);
-                                       shmem_unacct_blocks(info->flags, 1);
-                                       shmem_free_blocks(inode, 1);
-                                       filepage = NULL;
-                                       goto failed;
-                               }
-                               spin_lock(&info->lock);
-                       } else {
-                               filepage = prealloc_page;
-                               prealloc_page = NULL;
-                               SetPageSwapBacked(filepage);
-                       }
-                       entry = shmem_swp_alloc(info, idx, sgp);
-                       if (IS_ERR(entry))
-                               error = PTR_ERR(entry);
-                       else {
-                               swap = *entry;
-                               shmem_swp_unmap(entry);
-                       }
-                       ret = error || swap.val;
-                       if (ret)
-                               mem_cgroup_uncharge_cache_page(filepage);
-                       else
-                               ret = add_to_page_cache_lru(filepage, mapping,
+               entry = shmem_swp_alloc(info, idx, sgp, gfp);
+               if (IS_ERR(entry))
+                       error = PTR_ERR(entry);
+               else {
+                       swap = *entry;
+                       shmem_swp_unmap(entry);
+               }
+               ret = error || swap.val;
+               if (ret)
+                       mem_cgroup_uncharge_cache_page(page);
+               else
+                       ret = add_to_page_cache_lru(page, mapping,
                                                idx, GFP_NOWAIT);
-                       /*
-                        * At add_to_page_cache_lru() failure, uncharge will
-                        * be done automatically.
-                        */
-                       if (ret) {
-                               spin_unlock(&info->lock);
-                               page_cache_release(filepage);
-                               shmem_unacct_blocks(info->flags, 1);
-                               shmem_free_blocks(inode, 1);
-                               filepage = NULL;
-                               if (error)
-                                       goto failed;
-                               goto repeat;
-                       }
-                       info->flags |= SHMEM_PAGEIN;
+               /*
+                * At add_to_page_cache_lru() failure,
+                * uncharge will be done automatically.
+                */
+               if (ret) {
+                       shmem_unacct_blocks(info->flags, 1);
+                       shmem_free_blocks(inode, 1);
+                       spin_unlock(&info->lock);
+                       page_cache_release(page);
+                       if (error)
+                               goto out;
+                       goto repeat;
                }
  
+               info->flags |= SHMEM_PAGEIN;
                info->alloced++;
                spin_unlock(&info->lock);
-               clear_highpage(filepage);
-               flush_dcache_page(filepage);
-               SetPageUptodate(filepage);
+               clear_highpage(page);
+               flush_dcache_page(page);
+               SetPageUptodate(page);
                if (sgp == SGP_DIRTY)
-                       set_page_dirty(filepage);
+                       set_page_dirty(page);
+       } else {
+               spin_unlock(&info->lock);
+               error = -ENOMEM;
+               goto out;
        }
  done:
-       *pagep = filepage;
+       *pagep = page;
        error = 0;
-       goto out;
+ out:
+       if (prealloc_page) {
+               mem_cgroup_uncharge_cache_page(prealloc_page);
+               page_cache_release(prealloc_page);
+       }
+       return error;
  
  nospace:
        /*
         * Perhaps the page was brought in from swap between find_lock_page
         * and taking info->lock?  We allow for that at add_to_page_cache_lru,
         * but must also avoid reporting a spurious ENOSPC while working on a
-        * full tmpfs.  (When filepage has been passed in to shmem_getpage, it
-        * is already in page cache, which prevents this race from occurring.)
+        * full tmpfs.
         */
-       if (!filepage) {
-               struct page *page = find_get_page(mapping, idx);
-               if (page) {
-                       spin_unlock(&info->lock);
-                       page_cache_release(page);
-                       goto repeat;
-               }
-       }
+       page = find_get_page(mapping, idx);
        spin_unlock(&info->lock);
-       error = -ENOSPC;
- failed:
-       if (*pagep != filepage) {
-               unlock_page(filepage);
-               page_cache_release(filepage);
-       }
- out:
-       if (prealloc_page) {
-               mem_cgroup_uncharge_cache_page(prealloc_page);
-               page_cache_release(prealloc_page);
+       if (page) {
+               page_cache_release(page);
+               goto repeat;
        }
-       return error;
+       error = -ENOSPC;
+       goto out;
  }
  
  static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
  {
        struct inode *inode = vma->vm_file->f_path.dentry->d_inode;
        int error;
-       int ret;
+       int ret = VM_FAULT_LOCKED;
  
        if (((loff_t)vmf->pgoff << PAGE_CACHE_SHIFT) >= i_size_read(inode))
                return VM_FAULT_SIGBUS;
        error = shmem_getpage(inode, vmf->pgoff, &vmf->page, SGP_CACHE, &ret);
        if (error)
                return ((error == -ENOMEM) ? VM_FAULT_OOM : VM_FAULT_SIGBUS);
        if (ret & VM_FAULT_MAJOR) {
                count_vm_event(PGMAJFAULT);
                mem_cgroup_count_vm_event(vma->vm_mm, PGMAJFAULT);
        }
-       return ret | VM_FAULT_LOCKED;
+       return ret;
  }
  
  #ifdef CONFIG_NUMA
@@@ -1669,19 -1595,6 +1595,6 @@@ static struct inode *shmem_get_inode(st
  static const struct inode_operations shmem_symlink_inode_operations;
  static const struct inode_operations shmem_symlink_inline_operations;
  
- /*
-  * Normally tmpfs avoids the use of shmem_readpage and shmem_write_begin;
-  * but providing them allows a tmpfs file to be used for splice, sendfile, and
-  * below the loop driver, in the generic fashion that many filesystems support.
-  */
- static int shmem_readpage(struct file *file, struct page *page)
- {
-       struct inode *inode = page->mapping->host;
-       int error = shmem_getpage(inode, page->index, &page, SGP_CACHE, NULL);
-       unlock_page(page);
-       return error;
- }
  static int
  shmem_write_begin(struct file *file, struct address_space *mapping,
                        loff_t pos, unsigned len, unsigned flags,
  {
        struct inode *inode = mapping->host;
        pgoff_t index = pos >> PAGE_CACHE_SHIFT;
-       *pagep = NULL;
        return shmem_getpage(inode, index, pagep, SGP_WRITE, NULL);
  }
  
@@@ -1846,6 -1758,119 +1758,119 @@@ static ssize_t shmem_file_aio_read(stru
        return retval;
  }
  
+ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
+                               struct pipe_inode_info *pipe, size_t len,
+                               unsigned int flags)
+ {
+       struct address_space *mapping = in->f_mapping;
+       struct inode *inode = mapping->host;
+       unsigned int loff, nr_pages, req_pages;
+       struct page *pages[PIPE_DEF_BUFFERS];
+       struct partial_page partial[PIPE_DEF_BUFFERS];
+       struct page *page;
+       pgoff_t index, end_index;
+       loff_t isize, left;
+       int error, page_nr;
+       struct splice_pipe_desc spd = {
+               .pages = pages,
+               .partial = partial,
+               .flags = flags,
+               .ops = &page_cache_pipe_buf_ops,
+               .spd_release = spd_release_page,
+       };
+       isize = i_size_read(inode);
+       if (unlikely(*ppos >= isize))
+               return 0;
+       left = isize - *ppos;
+       if (unlikely(left < len))
+               len = left;
+       if (splice_grow_spd(pipe, &spd))
+               return -ENOMEM;
+       index = *ppos >> PAGE_CACHE_SHIFT;
+       loff = *ppos & ~PAGE_CACHE_MASK;
+       req_pages = (len + loff + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+       nr_pages = min(req_pages, pipe->buffers);
+       spd.nr_pages = find_get_pages_contig(mapping, index,
+                                               nr_pages, spd.pages);
+       index += spd.nr_pages;
+       error = 0;
+       while (spd.nr_pages < nr_pages) {
+               error = shmem_getpage(inode, index, &page, SGP_CACHE, NULL);
+               if (error)
+                       break;
+               unlock_page(page);
+               spd.pages[spd.nr_pages++] = page;
+               index++;
+       }
+       index = *ppos >> PAGE_CACHE_SHIFT;
+       nr_pages = spd.nr_pages;
+       spd.nr_pages = 0;
+       for (page_nr = 0; page_nr < nr_pages; page_nr++) {
+               unsigned int this_len;
+               if (!len)
+                       break;
+               this_len = min_t(unsigned long, len, PAGE_CACHE_SIZE - loff);
+               page = spd.pages[page_nr];
+               if (!PageUptodate(page) || page->mapping != mapping) {
+                       error = shmem_getpage(inode, index, &page,
+                                                       SGP_CACHE, NULL);
+                       if (error)
+                               break;
+                       unlock_page(page);
+                       page_cache_release(spd.pages[page_nr]);
+                       spd.pages[page_nr] = page;
+               }
+               isize = i_size_read(inode);
+               end_index = (isize - 1) >> PAGE_CACHE_SHIFT;
+               if (unlikely(!isize || index > end_index))
+                       break;
+               if (end_index == index) {
+                       unsigned int plen;
+                       plen = ((isize - 1) & ~PAGE_CACHE_MASK) + 1;
+                       if (plen <= loff)
+                               break;
+                       this_len = min(this_len, plen - loff);
+                       len = this_len;
+               }
+               spd.partial[page_nr].offset = loff;
+               spd.partial[page_nr].len = this_len;
+               len -= this_len;
+               loff = 0;
+               spd.nr_pages++;
+               index++;
+       }
+       while (page_nr < nr_pages)
+               page_cache_release(spd.pages[page_nr++]);
+       if (spd.nr_pages)
+               error = splice_to_pipe(pipe, &spd);
+       splice_shrink_spd(pipe, &spd);
+       if (error > 0) {
+               *ppos += error;
+               file_accessed(in);
+       }
+       return error;
+ }
  static int shmem_statfs(struct dentry *dentry, struct kstatfs *buf)
  {
        struct shmem_sb_info *sbinfo = SHMEM_SB(dentry->d_sb);
@@@ -2006,7 -2031,7 +2031,7 @@@ static int shmem_symlink(struct inode *
        int error;
        int len;
        struct inode *inode;
-       struct page *page = NULL;
+       struct page *page;
        char *kaddr;
        struct shmem_inode_info *info;
  
@@@ -2684,7 -2709,6 +2709,6 @@@ static const struct address_space_opera
        .writepage      = shmem_writepage,
        .set_page_dirty = __set_page_dirty_no_writeback,
  #ifdef CONFIG_TMPFS
-       .readpage       = shmem_readpage,
        .write_begin    = shmem_write_begin,
        .write_end      = shmem_write_end,
  #endif
@@@ -2701,7 -2725,7 +2725,7 @@@ static const struct file_operations shm
        .aio_read       = shmem_file_aio_read,
        .aio_write      = generic_file_aio_write,
        .fsync          = noop_fsync,
-       .splice_read    = generic_file_splice_read,
+       .splice_read    = shmem_file_splice_read,
        .splice_write   = generic_file_splice_write,
  #endif
  };
@@@ -2715,6 -2739,10 +2739,6 @@@ static const struct inode_operations sh
        .listxattr      = shmem_listxattr,
        .removexattr    = shmem_removexattr,
  #endif
 -#ifdef CONFIG_TMPFS_POSIX_ACL
 -      .check_acl      = generic_check_acl,
 -#endif
 -
  };
  
  static const struct inode_operations shmem_dir_inode_operations = {
  #endif
  #ifdef CONFIG_TMPFS_POSIX_ACL
        .setattr        = shmem_setattr,
 -      .check_acl      = generic_check_acl,
  #endif
  };
  
@@@ -2749,6 -2778,7 +2773,6 @@@ static const struct inode_operations sh
  #endif
  #ifdef CONFIG_TMPFS_POSIX_ACL
        .setattr        = shmem_setattr,
 -      .check_acl      = generic_check_acl,
  #endif
  };
  
@@@ -3042,13 -3072,29 +3066,29 @@@ int shmem_zero_setup(struct vm_area_str
   * suit tmpfs, since it may have pages in swapcache, and needs to find those
   * for itself; although drivers/gpu/drm i915 and ttm rely upon this support.
   *
-  * Provide a stub for those callers to start using now, then later
-  * flesh it out to call shmem_getpage() with additional gfp mask, when
-  * shmem_file_splice_read() is added and shmem_readpage() is removed.
+  * i915_gem_object_get_pages_gtt() mixes __GFP_NORETRY | __GFP_NOWARN in
+  * with the mapping_gfp_mask(), to avoid OOMing the machine unnecessarily.
   */
  struct page *shmem_read_mapping_page_gfp(struct address_space *mapping,
                                         pgoff_t index, gfp_t gfp)
  {
+ #ifdef CONFIG_SHMEM
+       struct inode *inode = mapping->host;
+       struct page *page;
+       int error;
+       BUG_ON(mapping->a_ops != &shmem_aops);
+       error = shmem_getpage_gfp(inode, index, &page, SGP_CACHE, gfp, NULL);
+       if (error)
+               page = ERR_PTR(error);
+       else
+               unlock_page(page);
+       return page;
+ #else
+       /*
+        * The tiny !SHMEM case uses ramfs without swap
+        */
        return read_cache_page_gfp(mapping, index, gfp);
+ #endif
  }
  EXPORT_SYMBOL_GPL(shmem_read_mapping_page_gfp);